Per Ardua
Research Program
Structural Compression Theory — 40 papers across neural network geometry, multi-agent coordination, organizational physics, and applied analysis
Activation Geometry
Neural network training dynamics and model composition through activation-space analysis
Leap+Verify: Regime-Adaptive Speculative Weight Prediction
Applies speculative execution to accelerate neural network training. Discovers universal momentum catastrophe. 67-90% strict acceptance at 7B scale.
Training Once Is Enough: Activation Fingerprint Convergence
Independent training runs converge to identical activation fingerprints. Cross-seed cosine similarity >0.999. Saves 60-80% ensemble compute at 7B.
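The convergence claim can be illustrated with a toy computation (not from the paper): two "runs" that share underlying feature geometry but differ only in seed noise yield near-identical mean-pooled activation vectors under cosine similarity. All names and data here are illustrative.

```python
import numpy as np

def fingerprint(seed, d=512):
    """Toy stand-in for a mean-pooled activation fingerprint:
    shared structure plus small seed-dependent noise."""
    base = np.sin(np.arange(d) / 7.0)                        # shared geometry
    noise = np.random.default_rng(seed).normal(scale=0.01, size=d)
    return base + noise

a, b = fingerprint(1), fingerprint(2)                        # "independent runs"
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos)   # close to 1 when shared structure dominates seed noise
```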
Constellation-Indexed Model Composition
Dynamic specialist composition via activation fingerprints. Generalist-space indexing: 20.6% to 98.1% win rate. Stochastic resonance at sigma=0.020.
The Shape of the Problem: Domain-Invariant Structural Signatures
INLP erases domain signal to near-chance while shape classification holds at 95.6%+. Clean double dissociation of domain and structure.
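For readers unfamiliar with the erasure mechanism: INLP iteratively trains a linear probe for the target attribute and projects activations onto the probe's nullspace. A minimal single-round sketch on synthetic data (a least-squares probe stands in for the usual logistic probe; everything here is illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations with domain signal injected along dimension 0.
n, d = 400, 16
y = rng.integers(0, 2, size=n).astype(float)    # binary domain label
X = rng.normal(size=(n, d))
X[:, 0] += 3.0 * (y - 0.5)

def probe_direction(X, y):
    """Unit weight vector of a least-squares linear probe for y."""
    w, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
    return w / np.linalg.norm(w)

# One INLP round: project activations onto the probe's nullspace.
w = probe_direction(X, y)
P = np.eye(d) - np.outer(w, w)
X_proj = X @ P

# The erased direction now carries no variance at all.
residual = np.abs(X_proj @ w).max()
print(residual)   # floating-point zero
```

Repeating this until probe accuracy falls to chance removes the linearly decodable domain signal while leaving the rest of the space (here, the shape signal) intact.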
Capability Manifold Surveillance: Topological Detection of Model Distillation
Activation fingerprint geometry as security primitive. AUC 1.000 on systematic attacks at 1.5B+. Bypasses OMED impossibility.
GenAI Is Socially Awkward: RLHF Instruction Tuning Damages Social Cognition at Small Scale by Suppressing Pragmatic Inference
RLHF hurts social cognition at 7B (-18 points) but helps at 72B (+6). The deficit is processing mode: compliance training suppresses the fuzzy pattern matching that social reasoning requires.
Shaped Noise Injection at Inference Time: Domain Precision, Loop Breaking, and the Terminal Measurement Limit
Shaped noise breaks 100% of repetition loops at 3B and 7B. Cross-domain selectivity fails — terminal measurement problem constrains all direction-space intervention methods.
Layer-Resolved Response Tensor: Where Domain Selectivity Lives in the Forward Pass
Maps selectivity at nine transformer layers. Peaks at layers 7-10 (mean ~0.5), declines toward input and output. Domain asymmetry: code/science positive, medical/legal negative.
Spectral Geometry of the Forward Pass: How INLP Directions Interact with Layer Jacobians
INLP/random amplification ratio 0.99. The forward pass treats domain directions as generic directions. PCA-INLP alignment increases toward terminal layers where selectivity is lowest.
The Concentration Barrier: Effective Dimensionality Bounds Domain Selectivity
Proves selectivity bounded by k/d_eff. d_eff ranges 4.7-26.0 across layers. Mean-pooled d_eff collapses to 1.0 at layers 3-25 — representation anisotropy invisible to position-aware measurements.
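The paper does not define d_eff here; a common choice for effective dimensionality is the participation ratio of the covariance spectrum, sketched below on toy anisotropic data (illustrative assumption, not the paper's exact estimator). Under the stated bound, a rank-k subspace can then achieve selectivity at most k/d_eff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic toy activations: variance concentrated in a few directions.
d = 64
scales = np.exp(-np.arange(d) / 4.0)        # fast-decaying variance spectrum
A = rng.normal(size=(5000, d)) * scales

def effective_dim(A):
    """Participation ratio of the covariance eigenvalues:
    d_eff = (sum lambda_i)^2 / sum(lambda_i^2)."""
    lam = np.linalg.eigvalsh(np.cov(A, rowvar=False))
    return lam.sum() ** 2 / np.sum(lam ** 2)

d_eff = effective_dim(A)
print(round(d_eff, 1))   # far below the ambient dimension d = 64
```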
Channel Capacity of Domain-Specific Stochastic Resonance
KL divergence confirms SR in information space. Total KL peaks at 14-15 bits but domain-specific component bounded at ~2 bits, consistent with concentration barrier bound of 2.24 bits.
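As a reminder of the units involved, a minimal sketch of KL divergence measured in bits (log base 2); the two distributions are illustrative, not data from the paper.

```python
import numpy as np

def kl_bits(p, q):
    """KL divergence D(p || q) in bits (log base 2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Illustrative next-token distributions: noise shifts mass toward one token.
q = np.full(4, 0.25)                  # baseline
p = np.array([0.7, 0.1, 0.1, 0.1])    # after shaped noise
print(round(kl_bits(p, q), 3))        # 0.643 bits
```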
The Activation Geometry Program: Twelve Papers on the Mathematical Structure of Neural Network Representations
Synthesis of the twelve-paper program. Classification and intervention operate under different constraints. The geometry INLP discovers is real, but causal pathways are distributed across dimensions no fixed subspace can isolate.
Causal Basis Discovery for Domain-Selective Noise Injection
INLP is the only basis with positive selectivity (+0.618). All causal bases (patching, contrastive, gradient) are anti-selective. Classification-intervention dissociation established.
The Inter-Instance Compression Barrier: Domain-Specific Information Loss at the Natural Language Interface
NL interface is a uniform lossy channel dropping ~80% proportionally across domain-specific and domain-agnostic subspaces. The concentration barrier is the sole constraint.
INLP Projection Transmission: Domain-Specific Activation Transfer at the Natural Language Boundary
Text baseline already 95% domain classification. 36-dim INLP outperforms 3584-dim full activation. Sender/receiver encode domain in orthogonal directions (cosine 0.27).
Full Mind Transfer: Bandwidth vs Fidelity in Activation-Level Coordination
Text and activations carry fundamentally different information. RSA 0.11 (text) vs 0.47 (activation). Despite 4x geometric fidelity, KL divergence similar. Domain reversal in preservation.
Sender Continuation Perplexity: Measuring Reasoning Trajectory Alignment at the Natural Language Boundary
Text preserves 63.7% of reasoning trajectory. Full activation without text scaffold = zero trajectory information (PPL 5.60 vs no-context 5.62). The 36.3% gap is the coordination ceiling.
Ensemble Gravity: Coordination Through Contextual Priming vs. Activation-Level Transmission
Priming selection closes 48.9% of coordination gap, 5.4x better than activation injection. Mechanism is processing style alignment (15.6% domain match rate), not domain matching.
Ritual Shape: Structural Features of Effective Coordination Sequences
15 tokens capture 98.8% of benefit. Reset instruction before priming achieves CE 0.733, beating expert baseline by 39%. Forced naming worse than natural vocabulary.
Shepherd Agents: Adaptive Priming Through Directed Intervention
All bare shepherd strategies worse than no coordination. With reset, shepherd content contributes nothing. Adaptive priming is actively harmful. Reset outweighs shepherd content 22:1 in benefit.
The Coordination Problem Is Interference: Nine Experiments on What Not to Do
Synthesis of AI-16 through AI-24. Every mechanism that adds to receiver context adds interference. Two mechanisms work by removing: context fence and minimal domain priming.
Context Fences: Two-Phase Coordination Through Mode Switching and Domain Activation
Garbage collector hypothesis falsified. Correct mechanism: mode switching. Mode and domain are orthogonal composable dimensions (rho 0.858). Reset benefit constant ~0.076 nats.
Hop Scaling: Multi-Agent Chain Degradation Saturates
Chain degradation is logarithmic, not exponential. CE plateaus by hop 5. Cross-domain handoffs outperform within-domain. Protocol scales to arbitrary depth without compounding error.
Entangled Directions: Concept-Pure Discrimination Geometry Masks Universal Activation Entanglement
SVD directions can be concept-pure for classification (V-matrix purity >0.96) while carrying all concepts in activations. Minimum cross-concept damage 38.8%. Discrimination-activation dissociation established.
Structural Entanglement in the Informative Subspace: Eight Experiments on Why Every Direction Carries Every Concept
Random Gaussian projections to d>=448 reproduce learned entanglement intensity (EI=1.50). PCA reverses it. Bootstrap 95% CIs confirm significance across all four models. Superlinear amplification: triple EI exceeds pairwise by 2x. Entanglement is geometric, not learned.
The Entanglement Theorem: Structural Concept Coupling as a Geometric Consequence of High-Dimensional Encoding
Proves structural entanglement is a mathematical theorem via concentration of measure. Specialist bound: compositional architecture is geometric necessity. Alignment implication: surgical concept editing is geometrically limited.
Entanglement-Optimal Fine-Tuning: Leveraging Structural Concept Coupling for Parameter-Efficient Adaptation
B3 drives EI to zero in Qwen-32B (8 seeds, all converge by step 3500). Cross-family replication confirms collapse is Qwen-specific. CSR preserves structure. Block-diagonal LoRA adapters outperform concept-isolating adapters (48.2% vs 36.6% HumanEval+).
Selective Disentanglement: Natural Language Fine-Tuning as an Architecture-Dependent Decoupler of Cross-Domain Entanglement
NL fine-tuning drives EI to zero in Qwen-32B but not in four other architectures. Scale ladder: -26% at 7B, -76% at 14B, -100% at 32B. Non-degeneracy margin refuted, pre-training composition unsupported, scale dependence confirmed with phase boundary between 14B and 32B.
Bridge Theory
Unifying frameworks connecting neural and organizational phenomena
Organizational Theory
Structural forces, information dynamics, and governance in organizations
The Cage: How Fiduciary Duty Creates Organizational Incompleteness
SEC filing regression: lexical diversity declines by 0.012/year post-IPO (R²=0.798). Legal Amplifier adds 85% for high-liability firms.
Dysmemic Pressure: Selection Dynamics in Organizational Information Environments
Crawford-Sobel partition coarsening with agent-based simulation (MI 0.88). DP index. Cases: Boeing 737 MAX, Theranos, Wirecard.
The Mirror: Navigating Incompleteness Through Meta-Compliance
Game-theoretic Mode A/B governance. Turbulence threshold τ*=0.176-0.294. Cases: NASA, Toyota, Bridgewater.
Multi-Agent AI: Substrate-Independent Dysfunction
Token governance overhead scales with links (25.5% at n=3 to 69.4% at n=15), not capability (CV=0.0000).
Ambient Structure Discovery via Stigmergic Mesh
22,500 Monte Carlo runs. Coordination failure TPR 0.82 at FPR<0.15. Patent filed USPTO #63/981,369.
Emergence: A Programming Paradigm for Constraint-Shaped Agency
WF1-WF4 well-formedness definitions. Q-dynamics contraction. Tessera as existence proof.
Contraction Dynamics: Constraint Typing and Gaussian Fixed-Point Convergence
Pure mathematics. Gaussian Contraction Theorem. Constraint type system. Five open problems.
The Strategic Rate-Distortion-Perception Tradeoff
Info-theoretic bridge. RLHF mapping: sycophancy as generative distortion. Conformity pressure = 27.4% of distortion at bias=0.1.
Applied Analysis
Empirical investigations of organizational mechanisms
Tournament-Based Performance Evaluation and Systematic Misallocation
54,000 MC trials. Error rate 0.494. $10M pool → $11.76M total cost. BTL supplement: 22% reduction.
Structural Immunity: Platform Dominance and Corporate Impunity
Six-filter multiplicative pipeline. Consumer finance: 0.00075% survival rate.
The Inversion of Doubt: Skepticism Inverts Epistemic Warrant
Cross-domain metascience synthesis. Three structural mechanisms. Composite threshold from RR divergence, calibration gap, and falsifiability timescale.
Academic Monograph
The unified theoretical framework
Structural Compression Theory: A Unified Information-Theoretic Account of Organizational Dysfunction, Creativity, and Substrate-Independent Selection Dynamics
Academic monograph unifying 40 papers into a single formal framework. Proves that compression under selection produces systematic drift from reality toward internal fit — and that the mechanism is substrate-independent across cognition, organizations, and AI. Three theorems, one lemma, five sufficient conditions, and the inseparability corollary: dysfunction and creativity share a single channel, distinguished only by selection regime. 17 chapters across five parts, 17 appendices (including the Entanglement Theorem), validated across 500 computational configurations with zero counterexamples. Derives the Hallucination Corollary: you cannot eliminate hallucination without eliminating generalization.
In Progress
Active research awaiting formal publication