Per Ardua

Research Program

Structural Compression Theory — 40 papers across neural network geometry, multi-agent coordination, organizational physics, and applied analysis

Activation Geometry

Neural network training dynamics and model composition through activation-space analysis

AI-1

Leap+Verify: Regime-Adaptive Speculative Weight Prediction

Applies speculative execution to accelerate neural network training and uncovers a universal momentum catastrophe. 67-90% strict acceptance at 7B scale.

DOI arXiv SSRN
AI-2

Training Once Is Enough: Activation Fingerprint Convergence

Independent training runs converge to near-identical activation fingerprints, with cross-seed cosine similarity >0.999. Saves 60-80% of ensemble compute at 7B.

DOI
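A minimal sketch of the cross-seed check, assuming a toy fingerprint (the mean activation vector over a fixed probe set; AI-2's actual fingerprint construction may differ) and synthetic stand-ins for two independently seeded runs:

```python
import numpy as np

def fingerprint(acts):
    # Toy fingerprint: mean activation over a fixed probe set.
    return acts.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for activations from two independently seeded runs on the
# same probe inputs: shared structure plus small seed-dependent noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(512, 1024))
run_a = shared + 0.01 * rng.normal(size=shared.shape)
run_b = shared + 0.01 * rng.normal(size=shared.shape)
print(cosine(fingerprint(run_a), fingerprint(run_b)))  # ~0.999+
```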
AI-3

Constellation-Indexed Model Composition

Dynamic specialist composition via activation fingerprints. Generalist-space indexing lifts win rate from 20.6% to 98.1%. Stochastic resonance peaks at sigma=0.020.

DOI
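Stochastic resonance itself is easy to demonstrate in a toy threshold system: a subthreshold signal is detectable only at intermediate noise levels, and performance falls again as noise grows. The sketch below is purely illustrative; AI-3's sigma=0.020 refers to its activation-space setup, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
signal = 0.4 * np.sin(2 * np.pi * t)  # subthreshold: amplitude < threshold
threshold = 1.0

for sigma in (0.0, 0.02, 0.2, 0.6, 1.2, 3.0):
    out = (signal + rng.normal(0, sigma, t.size) > threshold).astype(float)
    # Correlation between the hidden signal and the thresholded output:
    # zero without noise, peaks at intermediate noise, decays after.
    corr = 0.0 if out.std() == 0 else np.corrcoef(signal, out)[0, 1]
    print(f"sigma={sigma:4.2f}  signal-output correlation={corr:.3f}")
```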
AI-4

The Shape of the Problem: Domain-Invariant Structural Signatures

INLP erases domain signal to near-chance while shape classification holds at 95.6%+. Clean double dissociation of domain and structure.

DOI
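For reference, a minimal sketch of the INLP procedure AI-4 relies on (Iterative Nullspace Projection, Ravfogel et al. 2020): repeatedly fit a linear probe for the attribute to be erased, here domain, and project features onto the probe's nullspace until the probe falls to chance. The toy data is illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, y, n_iters=8):
    # Compose nullspace projectors of successive linear probes for y.
    d = X.shape[1]
    P = np.eye(d)
    for _ in range(n_iters):
        W = LogisticRegression(max_iter=1000).fit(X @ P, y).coef_
        P = P @ (np.eye(d) - np.linalg.pinv(W) @ W)  # remove the probe's row space
    return P

# Toy demo: "domain" is carried by a single linear direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))
y = (X[:, 0] + 0.3 * rng.normal(size=600) > 0).astype(int)
P = inlp(X, y)
probe = LogisticRegression(max_iter=1000).fit(X @ P, y)
print("post-INLP domain accuracy:", probe.score(X @ P, y))  # near chance
```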
AI-5

Capability Manifold Surveillance: Topological Detection of Model Distillation

Activation fingerprint geometry as a security primitive. AUC 1.000 on systematic attacks at 1.5B+. Bypasses the OMED impossibility result.

DOI
AI-7

GenAI Is Socially Awkward: RLHF Instruction Tuning Damages Social Cognition at Small Scale by Suppressing Pragmatic Inference

RLHF hurts social cognition at 7B (-18 points) but helps at 72B (+6). The deficit is a matter of processing mode: compliance training suppresses the fuzzy pattern matching that social reasoning requires.

DOI
AI-8

Shaped Noise Injection at Inference Time: Domain Precision, Loop Breaking, and the Terminal Measurement Limit

Shaped noise breaks 100% of repetition loops at 3B and 7B. Cross-domain selectivity fails: the terminal measurement problem constrains all direction-space intervention methods.

DOI
AI-9

Layer-Resolved Response Tensor: Where Domain Selectivity Lives in the Forward Pass

Maps selectivity across nine transformer layers: it peaks at layers 7-10 (mean ~0.5) and declines toward input and output. Domain asymmetry: code/science positive, medical/legal negative.

DOI
AI-10

Spectral Geometry of the Forward Pass: How INLP Directions Interact with Layer Jacobians

INLP/random amplification ratio 0.99. The forward pass treats domain directions as generic directions. PCA-INLP alignment increases toward terminal layers where selectivity is lowest.

DOI
AI-11

The Concentration Barrier: Effective Dimensionality Bounds Domain Selectivity

Proves selectivity is bounded by k/d_eff. d_eff ranges 4.7-26.0 across layers. Mean-pooled d_eff collapses to 1.0 at layers 3-25, an anisotropy invisible to position-aware measurement.

DOI
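Effective dimensionality here can be read as the participation ratio of the activation covariance spectrum; a sketch of that computation (the exact estimator in AI-11 may differ):

```python
import numpy as np

def d_eff(acts):
    # Participation ratio of the covariance spectrum:
    #   d_eff = (sum_i lam_i)**2 / sum_i lam_i**2
    # Ranges from 1 (all variance on one direction) to d (isotropic).
    lam = np.clip(np.linalg.eigvalsh(np.cov(acts, rowvar=False)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
print(d_eff(rng.normal(size=(2000, 64))))                            # near 64: isotropic
print(d_eff(rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 64))))  # <= 3: concentrated
# AI-11's bound caps the selectivity of a k-dim intervention subspace at k/d_eff.
```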
AI-12

Channel Capacity of Domain-Specific Stochastic Resonance

KL divergence confirms SR in information space. Total KL peaks at 14-15 bits, but the domain-specific component is bounded at ~2 bits, consistent with the concentration-barrier bound of 2.24 bits.

DOI
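A sketch of the unit bookkeeping for a KL estimate in bits, using a crude histogram estimator with a Gaussian sanity check (AI-12's estimator over model outputs is not specified here):

```python
import numpy as np
from scipy.stats import entropy

def kl_bits(p_samples, q_samples, bins=64):
    # Histogram estimate of KL(P || Q) in bits between two 1-D
    # response distributions (e.g. outputs with vs without noise).
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = (p + 1e-9) / (p + 1e-9).sum()  # smooth to avoid division by zero
    q = (q + 1e-9) / (q + 1e-9).sum()
    return entropy(p, q, base=2)

rng = np.random.default_rng(0)
print(kl_bits(rng.normal(0.0, 1, 100_000), rng.normal(0.5, 1, 100_000)))
# Analytic value for unit-variance Gaussians: (0.5**2 / 2) / ln 2 ~ 0.18 bits.
```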
AI-13

The Activation Geometry Program: Twelve Papers on the Mathematical Structure of Neural Network Representations

Synthesis of the twelve-paper program. Classification and intervention operate under different constraints: the geometry INLP discovers is real, but causal pathways are distributed across dimensions that no fixed subspace can isolate.

DOI
AI-14

Causal Basis Discovery for Domain-Selective Noise Injection

INLP is the only basis with positive selectivity (+0.618). All causal bases (patching, contrastive, gradient) are anti-selective. Classification-intervention dissociation established.

DOI
AI-15

The Inter-Instance Compression Barrier: Domain-Specific Information Loss at the Natural Language Interface

The NL interface is a uniform lossy channel, dropping ~80% of information proportionally across domain-specific and domain-agnostic subspaces. The concentration barrier is the sole constraint.

DOI
AI-16

INLP Projection Transmission: Domain-Specific Activation Transfer at the Natural Language Boundary

The text baseline alone reaches 95% domain classification. 36-dim INLP outperforms the 3584-dim full activation. Sender and receiver encode domain in nearly orthogonal directions (cosine 0.27).

DOI
AI-17

Full Mind Transfer: Bandwidth vs Fidelity in Activation-Level Coordination

Text and activations carry fundamentally different information: RSA 0.11 (text) vs 0.47 (activation). Despite 4x geometric fidelity, KL divergence is similar, with a domain reversal in what each channel preserves.

DOI
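RSA here compares second-order geometry: build a representational dissimilarity matrix per channel over matched items, then correlate the two. A minimal sketch (the metric choices are assumptions, not AI-17's exact pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa(reps_a, reps_b):
    # Spearman correlation between the pairwise-dissimilarity
    # structures of two representations whose rows are matched items.
    rho, _ = spearmanr(pdist(reps_a, "correlation"),
                       pdist(reps_b, "correlation"))
    return rho

rng = np.random.default_rng(0)
base = rng.normal(size=(40, 256))
print(rsa(base, base + 0.3 * rng.normal(size=base.shape)))  # high: shared geometry
print(rsa(base, rng.normal(size=(40, 256))))                # near 0: unrelated
```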
AI-18

Sender Continuation Perplexity: Measuring Reasoning Trajectory Alignment at the Natural Language Boundary

Text preserves 63.7% of the reasoning trajectory. Full activations without a text scaffold carry zero trajectory information (PPL 5.60 vs 5.62 with no context). The remaining 36.3% gap is the coordination ceiling.

DOI
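The measurement pattern, perplexity of the sender's continuation scored only over continuation tokens, can be sketched with any Hugging Face causal LM. The checkpoint name and strings below are placeholders, not AI-18's setup.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B"  # placeholder; any causal LM fits the pattern
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def continuation_ppl(context: str, continuation: str) -> float:
    # Perplexity of `continuation` given `context`, scoring only the
    # continuation tokens (context labels masked with -100).
    ctx = tok(context, return_tensors="pt").input_ids
    cont = tok(continuation, return_tensors="pt", add_special_tokens=False).input_ids
    ids = torch.cat([ctx, cont], dim=1)
    labels = ids.clone()
    labels[:, : ctx.shape[1]] = -100  # do not score the context
    with torch.no_grad():
        loss = model(ids, labels=labels).loss  # mean NLL over scored tokens
    return math.exp(loss.item())

cont = " so the boiling point decreases with altitude."
print(continuation_ppl("At lower pressure, water boils sooner,", cont))
print(continuation_ppl("Unrelated text.", cont))  # no-context-style baseline
```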
AI-19

Ensemble Gravity: Coordination Through Contextual Priming vs. Activation-Level Transmission

Priming selection closes 48.9% of the coordination gap, 5.4x better than activation injection. The mechanism is processing-style alignment, not domain matching (only a 15.6% domain match rate).

DOI
AI-20

Ritual Shape: Structural Features of Effective Coordination Sequences

15 tokens capture 98.8% of benefit. Reset instruction before priming achieves CE 0.733, beating expert baseline by 39%. Forced naming worse than natural vocabulary.

DOI
AI-21

Shepherd Agents: Adaptive Priming Through Directed Intervention

All bare shepherd strategies perform worse than no coordination. With reset, shepherd content contributes nothing, and adaptive priming is actively harmful. Reset accounts for the benefit by a 22:1 margin.

DOI
AI-22

The Coordination Problem Is Interference: Nine Experiments on What Not to Do

Synthesis of AI-16 through AI-24. Every mechanism that adds to the receiver's context adds interference. The two that work do so by removal: the context fence and minimal domain priming.

DOI
AI-23

Context Fences: Two-Phase Coordination Through Mode Switching and Domain Activation

Garbage collector hypothesis falsified. Correct mechanism: mode switching. Mode and domain are orthogonal composable dimensions (rho 0.858). Reset benefit constant ~0.076 nats.

DOI
AI-24

Hop Scaling: Multi-Agent Chain Degradation Saturates

Chain degradation is logarithmic, not exponential. CE plateaus by hop 5. Cross-domain handoffs outperform within-domain. Protocol scales to arbitrary depth without compounding error.

DOI
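Distinguishing logarithmic from exponential degradation is a curve-fitting exercise; a sketch on synthetic hop data (the numbers are invented, not AI-24's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
hops = np.arange(1, 9)
ce = 0.70 + 0.05 * np.log(hops) + 0.002 * rng.normal(size=hops.size)

# Candidate models: CE = a + b*log(h)  vs  CE = exp(c + d*h).
b, a = np.polyfit(np.log(hops), ce, 1)
d, c = np.polyfit(hops, np.log(ce), 1)
log_pred = a + b * np.log(hops)
exp_pred = np.exp(c + d * hops)
print("log-model RMSE:", np.sqrt(np.mean((ce - log_pred) ** 2)))
print("exp-model RMSE:", np.sqrt(np.mean((ce - exp_pred) ** 2)))
# Plateau check: the marginal CE per extra hop shrinks like b/h.
print("marginal CE, hop 5 -> 6:", log_pred[5] - log_pred[4])
```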
AI-25

Entangled Directions: Concept-Pure Discrimination Geometry Masks Universal Activation Entanglement

SVD directions can be concept-pure for classification (V-matrix purity >0.96) while carrying all concepts in activations. Minimum cross-concept damage 38.8%. Discrimination-activation dissociation established.

DOI
AI-26

Structural Entanglement in the Informative Subspace: Eight Experiments on Why Every Direction Carries Every Concept

Random Gaussian projections to d>=448 reproduce learned entanglement intensity (EI=1.50). PCA reverses it. Bootstrap 95% CIs confirm significance across all four models. Superlinear amplification: triple EI exceeds pairwise by 2x. Entanglement is geometric, not learned.

DOI
AI-27

The Entanglement Theorem: Structural Concept Coupling as a Geometric Consequence of High-Dimensional Encoding

Proves structural entanglement is a mathematical theorem via concentration of measure. Specialist bound: compositional architecture is geometric necessity. Alignment implication: surgical concept editing is geometrically limited.

DOI
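The concentration-of-measure backbone is easy to see numerically: a random unit vector's cosine with any fixed direction concentrates around zero at scale 1/sqrt(d), so in high dimension no direction is exactly concept-pure and none is exactly concept-free. A toy illustration, not the paper's proof:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (16, 256, 4096):
    U = rng.normal(size=(1000, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # 1000 random unit vectors
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)                         # one fixed "concept" direction
    cos = U @ v
    print(f"d={d:5d}  mean |cos| = {np.abs(cos).mean():.4f}"
          f"  (theory ~ sqrt(2/(pi*d)) = {np.sqrt(2 / (np.pi * d)):.4f})")
```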
AI-28

Entanglement-Optimal Fine-Tuning: Leveraging Structural Concept Coupling for Parameter-Efficient Adaptation

B3 drives EI to zero in Qwen-32B (8 seeds, all converging by step 3500). Cross-family replication confirms the collapse is Qwen-specific. CSR preserves structure. Block-diagonal LoRA adapters outperform concept-isolating adapters (48.2% vs 36.6% on HumanEval+).

DOI
AI-29

Selective Disentanglement: Natural Language Fine-Tuning as an Architecture-Dependent Decoupler of Cross-Domain Entanglement

NL fine-tuning drives EI to zero in Qwen-32B but not in four other architectures. Scale ladder: -26% at 7B, -76% at 14B, -100% at 32B. The non-degeneracy margin is refuted, pre-training composition is unsupported, and scale dependence is confirmed, with a phase boundary between 14B and 32B.

DOI

Organizational Theory

Structural forces, information dynamics, and governance in organizations

Academic Monograph

The unified theoretical framework

Monograph Published

Structural Compression Theory: A Unified Information-Theoretic Account of Organizational Dysfunction, Creativity, and Substrate-Independent Selection Dynamics

Academic monograph unifying 40 papers into a single formal framework. Proves that compression under selection produces systematic drift from reality toward internal fit — and that the mechanism is substrate-independent across cognition, organizations, and AI. Three theorems, one lemma, five sufficient conditions, and the inseparability corollary: dysfunction and creativity share a single channel, distinguished only by selection regime. 17 chapters across five parts, 17 appendices (including the Entanglement Theorem), validated across 500 computational configurations with zero counterexamples. Derives the Hallucination Corollary: you cannot eliminate hallucination without eliminating generalization.

DOI SSRN

In Progress

Active research awaiting formal publication