
The Concentration Barrier: Effective Dimensionality Bounds Domain Selectivity in Neural Network Activations

The mathematical explanation for why linear interventions cannot achieve domain-selective effects

AI-11 · Activation Geometry

Executive Summary

Paper VIII showed that shaped noise injection at terminal transformer layers cannot achieve domain-selective entropy effects. Paper IX mapped the response tensor at each layer. This paper establishes the mathematical explanation: the concentration barrier.

We prove that in an activation space with effective dimensionality d_eff (measured by participation ratio), the maximum achievable selectivity from k domain directions is bounded by k/d_eff. This is not an approximation or an empirical observation — it is a mathematical consequence of how variance distributes in high-dimensional spaces.
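The participation ratio and the resulting bound can be computed directly. The sketch below is illustrative numpy code on synthetic activations (not the paper's data): it measures d_eff as (Σλ_i)² / Σλ_i² over the eigenvalues λ_i of the activation covariance, then evaluates the k/d_eff bound.

```python
import numpy as np

def participation_ratio(acts: np.ndarray) -> float:
    """Effective dimensionality d_eff = (sum of eigenvalues)^2 / (sum of squared
    eigenvalues), computed from the covariance of activations (samples x dims)."""
    cov = np.cov(acts, rowvar=False)
    eig = np.linalg.eigvalsh(cov)
    eig = np.clip(eig, 0.0, None)  # guard tiny negative values from round-off
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
# Toy anisotropic activations: 2000 samples in 64 dims with a decaying spectrum.
scales = 1.0 / np.sqrt(1 + np.arange(64))
acts = rng.standard_normal((2000, 64)) * scales

d_eff = participation_ratio(acts)
k = 8  # number of domain directions, as in the theorem
print(f"d_eff = {d_eff:.1f}, selectivity bound k/d_eff = {k / d_eff:.3f}")
```

The intuition the number carries: at the reported mean d_eff of about 19, even a k-dimensional domain subspace can account for at most roughly k/19 of the layer's computation, however well its directions classify domains.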

Key Findings

  • Concentration barrier theorem: Maximum selectivity from k domain directions bounded by k/d_eff — proved analytically and verified empirically at all 28 layers
  • Effective dimensionality measurements: d_eff ranges from 4.7 to 26.0 (mean 19.2) for last-token extraction across Qwen-2.5 7B layers
  • Pooling collapse: Mean-pooled d_eff collapses to 1.0 at layers 3-25, revealing a previously undocumented representation anisotropy that position-aware (last-token) measurements do not detect
  • INLP variance fraction: Increases from 1.3% at early layers to 12.5% at the terminal layer, yet never exceeds the concentration barrier bound
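One toy model that reproduces the pooling collapse: give every sequence a dominant sequence-level component along a single fixed direction, plus isotropic per-token noise. Averaging over tokens suppresses the noise, leaving a near rank-1 pooled covariance. The simulation below is a hypothetical illustration; the model, dimensions, and scales are all assumptions, not the paper's measurement setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_seq, n_tok, dim = 500, 128, 64

# Hypothetical anisotropy model: one dominant direction v carries a
# per-sequence scalar; everything else is per-token isotropic noise.
v = rng.standard_normal(dim)
v /= np.linalg.norm(v)
seq_scale = rng.standard_normal((n_seq, 1, 1)) * 4.0          # sequence-level signal
tokens = seq_scale * v + rng.standard_normal((n_seq, n_tok, dim))

def participation_ratio(x):
    eig = np.clip(np.linalg.eigvalsh(np.cov(x, rowvar=False)), 0.0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

d_last = participation_ratio(tokens[:, -1, :])     # last-token extraction
d_pool = participation_ratio(tokens.mean(axis=1))  # mean pooling over tokens

print(f"last-token d_eff = {d_last:.1f}, mean-pooled d_eff = {d_pool:.2f}")
```

Averaging over 128 tokens shrinks the per-dimension noise variance by a factor of 128 while leaving the shared sequence component untouched, so the pooled covariance is nearly rank-1 and d_eff falls to about 1 even though the per-position d_eff stays high.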

The Core Result

The concentration barrier constrains the entire class of linear intervention methods. Any technique that operates by projecting perturbations onto directions identified at a single layer faces this bound. The geometry that INLP discovers is real — the domain directions genuinely separate domains in classification — but the fraction of total computation they capture is bounded by the ratio of subspace dimension to effective dimensionality. In high-dimensional spaces, this fraction is necessarily small.
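The mechanics can be sketched in a few lines. In the snippet below, a mean-difference probe stands in for INLP's trained linear classifier (Ravfogel et al. 2020 iterate this step until the probe falls to chance accuracy); the data, planted shift, and spectrum are all synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 4000, 64

# Synthetic activations with a decaying spectrum plus a small domain shift.
scales = 1.0 / np.sqrt(1 + np.arange(dim))
X = rng.standard_normal((n, dim)) * scales
y = rng.integers(0, 2, size=n)
u = rng.standard_normal(dim)
u /= np.linalg.norm(u)
X[y == 1] += 0.5 * u  # planted domain direction

eig = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
d_eff = eig.sum() ** 2 / (eig ** 2).sum()

# One INLP-style step with a mean-difference probe standing in for the
# trained linear classifier: find the domain direction, project it out.
w = X[y == 1].mean(0) - X[y == 0].mean(0)
w /= np.linalg.norm(w)
X_clean = X - np.outer(X @ w, w)  # nullspace projection removes the direction

frac = ((X - X.mean(0)) @ w).var() / X.var(axis=0).sum()
print(f"variance fraction along domain direction: {frac:.3f}")
print(f"bound with k=1 direction: 1/d_eff = {1 / d_eff:.3f}")
```

The domain direction here separates the two classes cleanly, yet it carries only a small fraction of total variance. Repeating the step k times yields k directions, and the paper's theorem caps the captured fraction at k/d_eff.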

Key References

  • McEntire (2026) — Shaped Noise Injection: terminal measurement limit (Paper VIII)
  • McEntire (2026) — Layer-Resolved Response Tensor: selectivity profile (Paper IX)
  • Vershynin (2018) — High-dimensional probability
  • Ravfogel et al. (2020) — Iterative Nullspace Projection

Download Full Paper

Access the complete research paper with detailed methodology, empirical evidence, and formal proofs.

Download PDF