INLP Projection Transmission: Domain-Specific Activation Transfer at the Natural Language Boundary

Can domain-specific activations be transmitted between model instances?

Executive Summary

If the natural language interface is a uniform lossy channel (Paper XIV), perhaps we can bypass it by directly transmitting domain-specific activations between model instances. This paper tests that idea by extracting INLP projections from a sender instance and injecting them into a receiver instance, measuring whether domain-specific information can be transferred at the activation level.
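The extraction step follows Ravfogel et al.'s iterative nullspace projection: repeatedly train a linear domain probe on the sender's activations, record its direction, then project the activations onto the probe's nullspace so the next probe must find a new direction. The sketch below is a minimal illustration with hypothetical names (`inlp_domain_subspace`), using scikit-learn and assuming a binary domain label; the paper's actual probe setup and hyperparameters are not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp_domain_subspace(X, y, n_iters=36):
    """Collect domain-predictive directions via Iterative Nullspace
    Projection (Ravfogel et al., 2020).

    X: (n_samples, d) sender activations; y: binary domain labels.
    Returns an (n_iters, d) matrix whose rows span the domain subspace.
    """
    X = X.copy()
    directions = []
    for _ in range(n_iters):
        # Train a linear probe to predict the domain from activations
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit direction
        directions.append(w)
        # Remove that direction: project X onto the nullspace of w
        X = X - np.outer(X @ w, w)
    return np.stack(directions)
```

Because each projection zeroes the data's variance along the removed direction, successive probe directions come out (approximately) mutually orthogonal, which is what makes a compact 36-dimensional domain subspace possible.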

The results reveal a surprising paradox. The text baseline alone already achieves 95% domain classification accuracy in the receiver, leaving little room for improvement. INLP injection adds just +1.9 percentage points, narrowly outperforming full activation injection (+1.3 points). Notably, the 36-dimensional INLP subspace outperforms the full 3584-dimensional activation vector: a denoising effect, in which the low-dimensional projection strips away irrelevant variance that would otherwise interfere with the receiver's processing.
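The injection step behind that comparison can be pictured as follows: add to the receiver's hidden state only the component of the sender activation that lies in the k-dimensional INLP subspace (k = 36 here), discarding the other ~3548 dimensions as noise. This is an illustrative sketch, not the paper's implementation; the function name, the `alpha` scaling knob, and the assumption of an orthonormal basis are all hypothetical.

```python
import numpy as np

def inject_inlp(h_receiver, h_sender, basis, alpha=1.0):
    """Inject only the domain-specific component of a sender activation.

    h_receiver, h_sender: (d,) hidden-state vectors (e.g. d = 3584).
    basis: (k, d) matrix with orthonormal rows spanning the sender's
    INLP domain subspace (e.g. k = 36).
    """
    # Project the sender activation onto the k-dim domain subspace,
    # then map it back into the full d-dim space
    domain_component = basis.T @ (basis @ h_sender)
    # Everything orthogonal to the subspace is dropped as noise
    return h_receiver + alpha * domain_component
```

Full activation injection corresponds to skipping the projection and adding `h_sender` wholesale, which carries the irrelevant variance along with the domain signal.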

Most critically, sender and receiver instances encode domain information in nearly orthogonal directions (cosine similarity 0.27), and Procrustes alignment between the two representation spaces fails completely (residual > 1). This means that even though both instances classify domains accurately, they do so using fundamentally different geometric strategies. Direct activation transfer between instances faces not just a bandwidth problem but an alignment problem — there is no shared coordinate system for domain-specific information.
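Both diagnostics reported here, the cosine similarity between paired domain directions and the residual of an orthogonal Procrustes fit (Schönemann, 1966), can be computed as sketched below with SciPy. The matrix layout, pairing of directions, and normalization are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def alignment_diagnostics(D_sender, D_receiver):
    """Compare domain representations across two model instances.

    D_sender, D_receiver: (k, d) matrices of paired, unit-norm
    domain directions from each instance.
    Returns (mean cosine similarity, relative Procrustes residual).
    """
    # Mean cosine similarity between corresponding domain directions
    cos = float(np.mean(np.sum(D_sender * D_receiver, axis=1)))
    # Best orthogonal map R minimizing ||D_sender @ R - D_receiver||_F
    R, _ = orthogonal_procrustes(D_sender, D_receiver)
    residual = (np.linalg.norm(D_sender @ R - D_receiver)
                / np.linalg.norm(D_receiver))
    return cos, residual
```

A relative residual near 0 would mean a single rotation maps one instance's domain geometry onto the other's; a residual above 1 means even the best orthogonal map fits worse than predicting zero, which is the failure mode reported here.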

Key Findings

  • Text baseline dominates: Text alone achieves 95% domain classification accuracy in the receiver, leaving minimal headroom for activation-level augmentation
  • INLP outperforms full activation: 36-dim INLP injection (+1.9 points) beats 3584-dim full activation injection (+1.3 points) through a denoising effect
  • Orthogonal encoding: Sender and receiver encode domain in nearly orthogonal directions (cosine similarity 0.27)
  • Procrustes alignment fails: No linear transformation can align sender and receiver domain representations (residual > 1)

Key References

  • McEntire (2026) — The Inter-Instance Compression Barrier: uniform lossy channel (Paper XIV)
  • McEntire (2026) — The Concentration Barrier: dimensionality bounds (Paper X)
  • McEntire (2026) — Full Mind Transfer: bandwidth vs fidelity (Paper XVI)
  • Ravfogel et al. (2020) — Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
  • Schönemann (1966) — A Generalized Solution of the Orthogonal Procrustes Problem

Download Full Paper

Access the complete research paper with detailed methodology, empirical evidence, and formal proofs.