# language-model-architectures

Here are 2 public repositories matching this topic...


This work provides extensive empirical results on training LMs to count. We find that while traditional RNNs trivially achieve inductive counting, Transformers have to rely on positional embeddings to count out-of-domain. Modern RNNs (e.g., RWKV, Mamba) also largely underperform traditional RNNs in generalizing counting inductively.

  • Updated Oct 6, 2024
  • Jupyter Notebook
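To illustrate the claim above, here is a minimal, hand-constructed sketch (not code from the repository) of why a traditional RNN can count inductively: a single linear recurrent unit acts as an accumulator, so the same weights count correctly at any sequence length, including lengths far beyond anything seen in training.

```python
# Illustrative sketch only (an assumption, not the repository's code): a
# single-unit "traditional" RNN with identity recurrence counts occurrences
# of a target token exactly, so it generalizes to out-of-domain lengths.
import numpy as np

def rnn_count(tokens, target):
    """Counting as a hand-constructed linear RNN: h_t = 1*h_{t-1} + [x_t == target]."""
    h = 0.0                        # hidden state acts as an accumulator
    for tok in tokens:
        x = 1.0 if tok == target else 0.0
        h = 1.0 * h + 1.0 * x      # recurrent weight 1, input weight 1, no saturation
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # "Training"-scale lengths vs. much longer sequences: the same weights
    # count correctly at every length, which is the sense in which counting
    # is inductive for this architecture.
    for length in (10, 100, 10_000):
        seq = rng.integers(0, 5, size=length)
        assert rnn_count(seq, target=3) == np.sum(seq == 3)
    print("exact counts at all tested lengths")
```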
