The Memory Capacity of Attention

How much information can attention mechanisms store? Using a relative-error analysis, we show that linear attention's capacity scales linearly with dimension while softmax attention's capacity scales exponentially.
📝 Guangxuan Xiao Sep 1, 2025 ⏱️ 15 min read

How much information can a Transformer's attention mechanism store? This question connects modern deep learning to classical associative memory theory. In this post, I present a mathematical analysis using relative error bounds showing that linear attention has a capacity of \(O(d)\) while softmax attention achieves \(O(e^{d/2})\)—an exponential difference that helps explain their relative performance characteristics.

Note: This analysis follows the excellent framework presented in "Understanding Transformer from the Perspective of Associative Memory" by Zhong et al. (2025). Their paper provides the complete derivation, and this post presents my own working through of these ideas using a similar approach to build intuition.


Attention as Associative Memory

The key insight is to view attention mechanisms as associative memory systems. We want to store \(N\) key-value pairs \(\{(k_1, v_1), (k_2, v_2), \ldots, (k_N, v_N)\}\) and reliably retrieve value \(v_i\) when queried with key \(k_i\).

This framing immediately raises the central question: what is the maximum number of pairs \(N\) that can be stored as a function of the hidden dimension \(d\), while maintaining reliable retrieval?

Model Assumptions

To make the analysis tractable, we assume:

- Keys \(k_i\) and values \(v_i\) are random, independent unit vectors in \(\mathbb{R}^d\).
- Retrieval queries use a stored key \(k_i\) exactly (an exact-match query model).
- The dimension \(d\) is large, so high-dimensional concentration arguments apply.

These assumptions model the "typical case" behavior and allow us to derive capacity limits.


Linear Attention: Superposition and Its Limits

Linear attention stores information in an explicit state matrix:

$$S = \sum_{i=1}^{N} v_i k_i^T$$

When we query with \(k_i\), the retrieval process gives:

$$\tilde{v}_i = S k_i = \sum_{j=1}^{N} v_j (k_j^T k_i) = \underbrace{v_i}_{\text{Signal}} + \underbrace{\sum_{j \neq i} (k_j^T k_i) v_j}_{\text{Crosstalk Noise}}$$

Since \(k_i\) are unit vectors, \(k_i^T k_i = 1\), so we perfectly retrieve the signal component. However, we also get crosstalk from all other stored patterns.
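To make this concrete, here is a minimal NumPy sketch of the store-and-retrieve cycle under the unit-norm, random-key assumptions above; the dimensions, seed, and variable names are illustrative.

```python
# Minimal sketch of linear-attention storage and retrieval with random
# unit-norm keys and values (illustrative sizes, not a real implementation).
import numpy as np

rng = np.random.default_rng(0)
d, N = 256, 16

# Random unit keys and values.
K = rng.standard_normal((N, d)); K /= np.linalg.norm(K, axis=1, keepdims=True)
V = rng.standard_normal((N, d)); V /= np.linalg.norm(V, axis=1, keepdims=True)

# State matrix S = sum_i v_i k_i^T  (outer-product superposition).
S = V.T @ K                      # shape (d, d)

# Retrieve pattern i: the signal v_i plus crosstalk from the other N-1 patterns.
i = 0
v_tilde = S @ K[i]
rel_error = np.linalg.norm(v_tilde - V[i]) / np.linalg.norm(V[i])
print(f"relative retrieval error: {rel_error:.3f}")  # typically near sqrt((N-1)/d) ≈ 0.24
```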

Error Analysis and Capacity

To find the capacity, we analyze the magnitude of the error term. We'll make the standard assumption that keys \(k_j\) and values \(v_j\) are random, independent unit vectors in \(\mathbb{R}^d\).

We're interested in the expected squared norm of the error vector:

$$E\left[\|E_i\|^2\right] = E\left[ \left\| \sum_{j \neq i} v_j (k_j^T k_i) \right\|^2 \right]$$

Expanding this expression:

$$= E\left[ \sum_{j \neq i} \sum_{l \neq i} (v_j^T v_l) (k_j^T k_i) (k_l^T k_i) \right]$$

Since all vectors are chosen independently, cross-terms where \(j \neq l\) have zero expectation. We only need to consider terms where \(j = l\):

$$= \sum_{j \neq i} E\left[ \|v_j\|^2 (k_j^T k_i)^2 \right] = \sum_{j \neq i} E\left[ (k_j^T k_i)^2 \right]$$

For two random unit vectors in high-dimensional space, the expected squared dot product is \(E[(k_j^T k_i)^2] = 1/d\). Therefore:

$$E\left[\|E_i\|^2\right] = \sum_{j \neq i} \frac{1}{d} = \frac{N-1}{d}$$
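A quick Monte Carlo check of this formula, sketched under the same random unit-vector assumptions (sample sizes are illustrative):

```python
# Empirically verify E[||E_i||^2] = (N-1)/d for linear attention crosstalk.
import numpy as np

rng = np.random.default_rng(1)
d, N, trials = 128, 32, 2000

sq_errors = []
for _ in range(trials):
    K = rng.standard_normal((N, d)); K /= np.linalg.norm(K, axis=1, keepdims=True)
    V = rng.standard_normal((N, d)); V /= np.linalg.norm(V, axis=1, keepdims=True)
    S = V.T @ K
    err = S @ K[0] - V[0]                 # crosstalk term for pattern 0
    sq_errors.append(err @ err)

print(np.mean(sq_errors), (N - 1) / d)    # both close to 31/128 ≈ 0.24
```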

For successful reconstruction, we want the expected squared error to be small relative to the signal. Setting a tolerance \(\epsilon\):

$$E[\|E_i\|^2] < \epsilon^2 \|v_i\|^2 = \epsilon^2$$

This gives us:

$$\frac{N-1}{d} < \epsilon^2 \implies N - 1 < \epsilon^2 d$$

Therefore, the maximum number of patterns we can store is:

$$N_{\max} \leq \lfloor \epsilon^2 d \rfloor + 1 \implies N = O(d)$$

Linear attention's capacity scales linearly with dimension. For example, with \(\epsilon = 0.1\) (10% relative error), we can store only about \(\epsilon^2 d = 0.01d\) patterns.


Softmax Attention: The Exponential Advantage

Softmax attention uses a different retrieval mechanism:

$$\tilde{v}_i = \sum_{j=1}^{N} \frac{\exp(k_i^T k_j / \tau)}{\sum_{l=1}^{N} \exp(k_i^T k_l / \tau)} v_j$$

where \(\tau\) is a temperature parameter. The softmax creates a "winner-take-all" effect that suppresses crosstalk through its exponential nonlinearity.

Error Analysis and Capacity

The weights in softmax attention are:

$$\alpha_{ij} = \frac{\exp(k_i^T k_j / \tau)}{\sum_{l=1}^N \exp(k_i^T k_l / \tau)}$$

The retrieved value is \(\tilde{v}_i = \sum_{j=1}^N \alpha_{ij} v_j\). To find the error, let's carefully expand:

$$\begin{align} \tilde{v}_i - v_i &= \sum_{j=1}^N \alpha_{ij} v_j - v_i \\ &= \alpha_{ii} v_i + \sum_{j \neq i} \alpha_{ij} v_j - v_i \\ &= (\alpha_{ii} - 1)v_i + \sum_{j \neq i} \alpha_{ij} v_j \\ &= -\sum_{j \neq i} \alpha_{ij} v_i + \sum_{j \neq i} \alpha_{ij} v_j \quad \text{(since } \sum_j \alpha_{ij} = 1) \\ &= \sum_{j \neq i} \alpha_{ij} (v_j - v_i) \end{align}$$

Let's define \(s_i = \sum_{j \neq i} \alpha_{ij} = 1 - \alpha_{ii}\) as the total weight given to incorrect keys. This is the key quantity we need to control.
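Before bounding \(s_i\) analytically, here is a small numerical illustration of the softmax weights and the off-target mass \(s_i\); the temperature and sizes are arbitrary demo choices, not the analysis-optimal values, and the error bound referenced in the comments is derived just below.

```python
# Sketch of softmax retrieval and the off-target weight s_i = 1 - alpha_ii,
# assuming unit-norm keys/values and an exact-match query (demo parameters).
import numpy as np

rng = np.random.default_rng(2)
d, N = 128, 1024
tau = 1.0 / np.sqrt(d)                        # demo temperature, chosen for a visible effect

K = rng.standard_normal((N, d)); K /= np.linalg.norm(K, axis=1, keepdims=True)
V = rng.standard_normal((N, d)); V /= np.linalg.norm(V, axis=1, keepdims=True)

i = 0
scores = K @ K[i] / tau                       # k_i^T k_j / tau for all j
scores -= scores.max()                        # numerical stability
alpha = np.exp(scores); alpha /= alpha.sum()  # softmax weights alpha_ij

v_tilde = alpha @ V
s_i = 1.0 - alpha[i]                          # total weight on the wrong keys
err = np.linalg.norm(v_tilde - V[i])          # bounded by 2*s_i (shown below)
print(f"s_i = {s_i:.2e}, ||error|| = {err:.2e}, 2*s_i = {2 * s_i:.2e}")
```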

To bound \(s_i\), note that we can write:

$$s_i = \frac{\sum_{j \neq i} \exp(k_i^T k_j / \tau)}{\exp(1/\tau) + \sum_{j \neq i} \exp(k_i^T k_j / \tau)} = \frac{B}{A + B}$$

where \(A = \exp(1/\tau)\) (the "signal" term) and \(B = \sum_{j \neq i} \exp(k_i^T k_j / \tau)\) (the "noise" term). Since \(s_i = B/(A+B) \leq B/A\), we have:

$$s_i \leq \frac{\sum_{j \neq i} \exp(k_i^T k_j / \tau)}{\exp(1/\tau)} = \exp(-1/\tau) \sum_{j \neq i} \exp(k_i^T k_j / \tau)$$

This bound holds exactly, with no approximation needed. Taking expectations:

$$E[s_i] \leq \exp(-1/\tau) \sum_{j \neq i} E[\exp(k_i^T k_j / \tau)]$$

Now we can bound the expected squared error. Using the triangle inequality:

$$\|\tilde{v}_i - v_i\| = \left\|\sum_{j \neq i} \alpha_{ij}(v_j - v_i)\right\| \leq \sum_{j \neq i} \alpha_{ij} \|v_j - v_i\| \leq 2s_i$$

where we used that \(\|v_j - v_i\| \leq \|v_j\| + \|v_i\| = 2\) for unit vectors. Therefore:

$$E[\|\tilde{v}_i - v_i\|^2] \leq 4E[s_i^2] \leq 4E[s_i]$$

where the last inequality uses \(s_i \in [0,1]\) so \(s_i^2 \leq s_i\). For relative error less than \(\epsilon\), we need \(E[\|\tilde{v}_i - v_i\|^2] < \epsilon^2\), which requires:

$$E[s_i] < \frac{\epsilon^2}{4}$$

Substituting our bound on \(E[s_i]\):

$$\exp(-1/\tau) \sum_{j \neq i} E[\exp(k_i^T k_j / \tau)] < \frac{\epsilon^2}{4}$$

Now we need to evaluate \(E[\exp(k_i^T k_j / \tau)]\). For random unit vectors, the dot product \(k_i^T k_j\) has mean 0 and variance \(1/d\). In high dimensions, by the Central Limit Theorem, this dot product is approximately Gaussian: \(k_i^T k_j \sim \mathcal{N}(0, 1/d)\).

Gaussian Approximation: While \(k_i^T k_j\) is not exactly Gaussian, this approximation becomes increasingly accurate as \(d\) grows, and gives us the correct scaling behavior. For a Gaussian random variable \(X \sim \mathcal{N}(0, \sigma^2)\), the moment generating function is \(E[e^{\lambda X}] = e^{\lambda^2\sigma^2/2}\).

Using the Gaussian approximation with variance \(\sigma^2 = 1/d\):

$$E[\exp(k_i^T k_j / \tau)] \approx \exp\left(\frac{1}{2d\tau^2}\right)$$
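Before plugging this in, a quick Monte Carlo check of the approximation (a sketch; the dimension, temperature, and sample count are illustrative):

```python
# Compare E[exp(k_i^T k_j / tau)] for random unit vectors against the
# Gaussian/MGF prediction exp(1 / (2 d tau^2)).
import numpy as np

rng = np.random.default_rng(3)
d, tau, samples = 256, 0.25, 20_000

a = rng.standard_normal((samples, d)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.standard_normal((samples, d)); b /= np.linalg.norm(b, axis=1, keepdims=True)
dots = np.einsum("nd,nd->n", a, b)            # independent k_i^T k_j samples

empirical = np.exp(dots / tau).mean()
predicted = np.exp(1.0 / (2 * d * tau**2))
print(f"empirical {empirical:.4f} vs predicted {predicted:.4f}")
```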

Therefore, our condition becomes:

$$\exp(-1/\tau) \cdot (N-1) \cdot \exp\left(\frac{1}{2d\tau^2}\right) < \frac{\epsilon^2}{4}$$

Simplifying:

$$(N-1) \exp\left(\frac{1}{2d\tau^2} - \frac{1}{\tau}\right) < \frac{\epsilon^2}{4}$$

Taking logarithms and rearranging:

$$\log(N-1) < \log(\epsilon^2/4) + \frac{1}{\tau} - \frac{1}{2d\tau^2}$$

The right-hand side is maximized when its derivative with respect to \(\tau\) equals zero:

$$\frac{d}{d\tau}\left(\frac{1}{\tau} - \frac{1}{2d\tau^2}\right) = -\frac{1}{\tau^2} + \frac{1}{d\tau^3} = 0$$

This gives the optimal temperature \(\tau_{opt} = 1/d\). Substituting back:

$$\log(N-1) < \log(\epsilon^2/4) + d - \frac{d}{2} = \log(\epsilon^2/4) + \frac{d}{2}$$

Therefore:

$$N - 1 < \frac{\epsilon^2}{4} \cdot e^{d/2}$$
$$N = O\left(e^{d/2}\right)$$

Softmax attention's capacity scales exponentially with dimension. The exponential nonlinearity creates a "winner-take-all" dynamic where the correct key can be distinguished from a large set of distractors while maintaining relative error below \(\epsilon\).
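As a sanity check on the optimization step, a short grid search confirms that the exponent \(1/\tau - 1/(2d\tau^2)\) peaks at \(\tau = 1/d\) with value \(d/2\) (the grid and dimension below are illustrative):

```python
# Numerically maximize f(tau) = 1/tau - 1/(2 d tau^2) from the capacity bound.
import numpy as np

d = 64
taus = np.linspace(0.5 / d, 2.0 / d, 10_001)
f = 1.0 / taus - 1.0 / (2 * d * taus**2)

best = np.argmax(f)
print(f"argmax tau = {taus[best]:.5f} (1/d = {1 / d:.5f}), "
      f"max f = {f[best]:.2f} (d/2 = {d / 2})")
```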

Note on Temperature: In standard transformer implementations, the attention scores are computed as \(q k^T / \sqrt{d_k}\), which in our notation corresponds to \(\tau = \sqrt{d}\), not \(1/\sqrt{d}\). Our capacity-optimal choice \(\tau_{opt} = 1/d\) arises from the mathematical analysis under the unit-norm, exact-match query model and should not be confused with the numerical stability scaling used in practice.


Understanding the Exponential Gap

The difference in capacity—linear versus exponential—stems from different retrieval mechanisms:

| Feature | Linear Attention | Softmax Attention |
|---|---|---|
| Mechanism | Linear superposition | Non-linear competitive weighting |
| Error Source | Additive crosstalk from all other memories | Residual weights from the tail of the softmax distribution |
| Capacity | \(N = O(d)\) (Linear) | \(N = O(e^{d/2})\) (Exponential) |
| Computational Cost | \(O(Nd)\) | \(O(N^2d)\) |

Intuitive Analogies

To better understand these mechanisms, consider these analogies:

Linear Attention is like a blurry composite photograph. You store memories by overlaying their faint images. When you recall one, you see the correct image but with blurry traces of every other image you've stored. As you add more images (\(N\)), the composite gets blurrier until nothing is recognizable. The capacity is limited by the photograph's resolution (\(d\)) and by how much blur (\(\epsilon\)) you can tolerate.

Softmax Attention is like an indexed digital search. When you store memories, you give each one a unique, high-dimensional tag (the key). When you search with a tag, the system performs a targeted lookup. Because the tags are well-separated in high-dimensional space, the search can distinguish the correct memory while suppressing others. The exponential capacity comes from the properties of high-dimensional "tagging" space.

The Role of Nonlinearity

Linear attention averages all stored patterns weighted by their similarity to the query. This creates unavoidable interference that grows with the number of stored items.

Softmax attention's exponential nonlinearity creates a "winner-take-all" dynamic. The temperature parameter \(\tau\) controls the sharpness of this selection: lower temperatures create sharper attention patterns. Note that the standard transformer choice corresponds to \(\tau = \sqrt{d}\) (dividing the scores by \(\sqrt{d_k}\)), which balances the scale of dot products in high dimensions.

Practical Implications

Using our derived bounds with error tolerance \(\epsilon = 0.1\) (10% relative error), we can compute the theoretical capacity for different dimensions:

| Dimension \(d\) | Linear Attention Capacity | Softmax Attention Capacity | Ratio |
|---|---|---|---|
| 64 | \(N \leq 1\) | \(N < 2 \times 10^{11}\) | \(> 10^{11}\) |
| 128 | \(N \leq 2\) | \(N < 2 \times 10^{25}\) | \(> 10^{25}\) |
| 256 | \(N \leq 3\) | \(N < 1 \times 10^{53}\) | \(> 10^{53}\) |
| 512 | \(N \leq 6\) | \(N < 4 \times 10^{108}\) | \(> 10^{108}\) |
| 1024 | \(N \leq 11\) | \(N < 6 \times 10^{219}\) | \(> 10^{219}\) |
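The table can be reproduced directly from the two bounds derived above, \(N_{\text{linear}} \leq \lfloor \epsilon^2 d \rfloor + 1\) and \(N_{\text{softmax}} - 1 < (\epsilon^2/4)\,e^{d/2}\), with \(\epsilon = 0.1\):

```python
# Reproduce the capacity table from the derived bounds.
import math

eps = 0.1
for d in (64, 128, 256, 512, 1024):
    n_linear = math.floor(eps**2 * d) + 1
    n_softmax = (eps**2 / 4) * math.exp(d / 2)
    print(f"d={d:5d}  linear N <= {n_linear:3d}  softmax N < {n_softmax:.1e}")
```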

Key observations:

- With \(\epsilon = 0.1\), linear attention stores only a handful of patterns even at \(d = 1024\).
- Softmax attention's theoretical capacity at these dimensions far exceeds any practical context length.
- The ratio between the two grows as \(e^{d/2}\), so every increase in \(d\) widens the gap dramatically.

The exponential capacity advantage means that despite the computational cost of \(O(N^2 d)\), softmax attention remains the preferred choice for applications requiring precise retrieval from large contexts.


Conclusion

The mathematical analysis reveals a trade-off in attention design:

- Linear attention is cheap (\(O(Nd)\) per sequence), but its superposition storage limits capacity to \(N = O(d)\).
- Softmax attention pays an \(O(N^2 d)\) cost, but its competitive, winner-take-all weighting lifts capacity to \(N = O(e^{d/2})\).

This exponential gap helps explain why softmax attention has been successful in practice. For tasks requiring precise retrieval from large contexts, the exponential capacity provides a significant advantage.


References

  1. Shu Zhong, Mingyu Xu, Tenglong Ao, and Guang Shi. Understanding Transformer from the Perspective of Associative Memory. arXiv preprint arXiv:2505.19488, 2025.