Per Ardua

Research Program

Structural Compression Theory — 25 papers across neural network geometry, multi-agent coordination, organizational physics, market mechanisms, and applied analysis

Activation Geometry

Neural network training dynamics and model composition through activation-space analysis

AI-1/2

Training Trajectory Structure: Regime Detection, Cross-Seed Convergence, and Speculative Weight Prediction

Consolidated paper. Three-regime taxonomy via activation fingerprints generalizes across 124M–7B. Cross-seed convergence: CV 0.41–2.43%, phase boundaries synchronized to ±50 steps. Linear prediction achieves 60–90% strict acceptance at 7B in stable regimes. Universal momentum catastrophe: 100–10,000× loss inflation at all scales. Supersedes Leap+Verify and Ensemble Collapse.

DOI
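A minimal sketch of the speculative-prediction idea in AI-1/2, assuming plain per-parameter linear extrapolation from two checkpoints; the toy trajectory (a perfectly linear weight drift) is illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_predict(w_prev, w_curr, k=1):
    """Extrapolate weights k steps ahead from the last two checkpoints."""
    return w_curr + k * (w_curr - w_prev)

# Toy "training trajectory": weights drift smoothly, as in a stable regime.
w0 = rng.normal(size=1000)
trajectory = [w0 + 0.01 * t * np.ones_like(w0) for t in range(4)]

pred = linear_predict(trajectory[1], trajectory[2])
err = np.abs(pred - trajectory[3]).max()

# In an exactly linear regime the extrapolation matches the true checkpoint.
print(f"max abs prediction error: {err:.2e}")
```

In the paper's setting the predicted checkpoint would then be verified against the true optimizer step (accept/reject), which is where the 60–90% strict-acceptance figure lives.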
AI-3

Constellation-Indexed Model Composition

Dynamic specialist composition via activation fingerprints. Generalist-space indexing: 20.6% to 98.1% win rate. Stochastic resonance at σ=0.020.

DOI
AI-4

The Shape of the Problem: Domain-Invariant Structural Signatures

INLP erases domain signal to near-chance while shape classification holds at 95.6%+. Clean double dissociation of domain and structure.

DOI
AI-5

Capability Manifold Surveillance: Topological Detection of Model Distillation

Activation fingerprint geometry as security primitive. AUC 1.000 on systematic attacks at 1.5B+. Economic deterrence via temporal accumulation and coverage-clustering tradeoff.

DOI
AI-7

GenAI Is Socially Awkward: RLHF Instruction Tuning Damages Social Cognition at Small Scale by Suppressing Pragmatic Inference

RLHF hurts social cognition at 7B (-18 points) but helps at 72B (+6). The deficit is one of processing mode: compliance training suppresses the fuzzy pattern matching that social reasoning requires.

DOI
AI-8/9/10/11/12/13

The Geometry of Failure: Terminal Measurement, Concentration Barriers, and the Limits of Linear Intervention in Neural Networks

Consolidated paper. Shaped noise breaks 100% of repetition loops but cross-domain selectivity fails. Layer-resolved mapping, spectral analysis, concentration barrier theorem (selectivity bounded by k/d_eff), and SR channel capacity (~2 bits) converge on one result: classification and intervention operate under different constraints. The forward pass is an isotropic amplifier; no fixed subspace can isolate domain-specific computation.

DOI
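The concentration barrier (selectivity bounded by k/d_eff) has a simple geometric core that can be checked numerically: against isotropic activations, any fixed k-dimensional subspace captures only a k/d fraction of the energy, so no fixed projection can concentrate domain-specific signal. A sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 512, 16, 2000

# Isotropic "activations": no fixed low-dimensional subspace is privileged.
X = rng.normal(size=(n, d))

# Random orthonormal k-dimensional subspace (QR of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))

# Fraction of activation energy the subspace captures: concentrates at k/d.
captured = np.sum((X @ Q) ** 2) / np.sum(X ** 2)
print(f"captured energy: {captured:.4f}  vs  k/d = {k / d:.4f}")
```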
AI-14

Causal Basis Discovery for Domain-Selective Noise Injection

INLP is the only basis with positive selectivity (+0.618). All causal bases (patching, contrastive, gradient) are anti-selective. Classification-intervention dissociation established.

DOI
AI-15

The Inter-Instance Compression Barrier: Domain-Specific Information Loss at the Natural Language Interface

NL interface is a uniform lossy channel dropping ~80% proportionally across domain-specific and domain-agnostic subspaces. The concentration barrier is the sole constraint.

DOI
AI-16/17/18/19

The Dissociation of Geometry and Function: Why Activation-Level State Transfer Cannot Compete with Text

Consolidated paper. Four experiments on activation-level state transfer converge: geometric similarity does not imply functional equivalence. Text preserves 63.7% of reasoning trajectory; activation injection adds 1.2%. Priming selection (activation-guided input) closes 48.9% of the expert gap, outperforming injection by 5.4×. Domain reversal: text preserves legal best, activations preserve science best. Supersedes Projection Transmission, Bandwidth-Fidelity, Continuation Perplexity, and Ensemble Gravity.

DOI
AI-20/21/23/24

Context Management in Autoregressive Language Models

Consolidated paper. Ritual shape, shepherd agents, context fences, and hop scaling unified. 15 tokens capture 98.8% of benefit. Adaptive priming is harmful. Mode switching, not garbage collection. Chain degradation saturates logarithmically by hop 5. Cross-domain handoffs outperform within-domain.

DOI
AI-25/26

Universal Entanglement in Transformer Activation Space

Consolidated paper. SVD directions can be concept-pure for classification yet carry all concepts in activations. Random Gaussian projections reproduce learned entanglement intensity (EI=1.50). Superlinear amplification confirmed. Entanglement is geometric, not learned.

DOI
AI-27

The Entanglement Theorem: Structural Concept Coupling as a Geometric Consequence of High-Dimensional Encoding

Proves structural entanglement is a mathematical theorem via concentration of measure. Specialist bound: compositional architecture is geometric necessity. Alignment implication: surgical concept editing is geometrically limited.

DOI
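The concentration-of-measure mechanism behind the Entanglement Theorem can be illustrated in a few lines: random directions in high-dimensional space are nearly orthogonal, with typical cosine shrinking like 1/√d, which is why independently encoded concepts unavoidably share every direction a little. Dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(d, n_pairs=2000):
    """Average |cos| between random direction pairs in dimension d."""
    u = rng.normal(size=(n_pairs, d))
    v = rng.normal(size=(n_pairs, d))
    cos = np.sum(u * v, axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.abs(cos).mean()

# Typical overlap decays like 1/sqrt(d): small but never zero.
results = {d: mean_abs_cosine(d) for d in (16, 256, 4096)}
for d, m in results.items():
    print(f"d={d:5d}  mean |cos| = {m:.4f}")
```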
AI-28/29

Entanglement Under Fine-Tuning: Architecture-Dependent Collapse, Scale Thresholds, and Entanglement-Optimal Adaptation

Consolidated paper. B3 drives EI to zero in Qwen-32B (8 seeds, phase transition by step 3500) but not in four other architectures. Scale ladder: -32% at 7B, -93% at 14B, -100% at 32B. Block-diagonal LoRA adapters that respect entanglement geometry outperform concept-isolating adapters (48.2% vs 36.6% HumanEval+). CSR preserves structure.

DOI

Applied Analysis

Empirical investigations of organizational mechanisms

AP-1

Tournament-Based Performance Evaluation and Systematic Misallocation

54,000 MC trials. Error rate 0.494. $10M pool → $11.76M total cost. BTL supplement: 22% reduction.

DOI arXiv
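A toy version of AP-1's Monte Carlo setup, assuming nothing beyond the abstract: latent skill plus noisy pairwise comparisons, with the round-robin winner compared against the truly best player. Player counts, noise levels, and trial counts are illustrative, not the paper's 54,000-trial configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def tournament_error_rate(n_players=8, noise=1.0, trials=2000):
    """Fraction of trials where the noisy round-robin winner
    is not the truly best player."""
    errors = 0
    for _ in range(trials):
        skill = rng.normal(size=n_players)
        # Noisy pairwise outcomes: i beats j when the noisy gap is positive.
        gaps = (skill[:, None] - skill[None, :]
                + noise * rng.normal(size=(n_players, n_players)))
        np.fill_diagonal(gaps, -np.inf)   # no self-comparisons
        wins = (gaps > 0).sum(axis=1)
        if wins.argmax() != skill.argmax():
            errors += 1
    return errors / trials

e_hi = tournament_error_rate(noise=2.0)
e_lo = tournament_error_rate(noise=0.05)
print(f"error rate, high noise: {e_hi:.3f}")
print(f"error rate, low noise:  {e_lo:.3f}")
```

The misallocation cost in the paper follows from attaching a reward pool to these noisy rankings; a Bradley–Terry–Luce fit over the same comparisons is the supplement that yields the reported 22% reduction.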
AP-2

Structural Immunity: Platform Dominance and Corporate Impunity

Six-filter multiplicative pipeline. Consumer finance: 0.00075% survival rate.

DOI SSRN
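The six-filter pipeline is multiplicative: end-to-end accountability is the product of per-stage pass rates, so modest attrition at each stage compounds to near-zero survival. A sketch with hypothetical stage names and rates (not the paper's estimates, which yield the 0.00075% consumer-finance figure):

```python
# Illustrative six-filter multiplicative pipeline; pass rates are
# hypothetical placeholders, not the paper's estimated values.
filters = {
    "detection": 0.30,
    "reporting": 0.25,
    "investigation": 0.20,
    "charging": 0.15,
    "conviction": 0.40,
    "meaningful_penalty": 0.25,
}

survival = 1.0
for stage, p in filters.items():
    survival *= p
    print(f"after {stage:20s} cumulative survival = {survival:.6f}")

print(f"end-to-end survival rate: {survival:.4%}")
```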
AP-3

The Inversion of Doubt: Skepticism Inverts Epistemic Warrant

Cross-domain metascience synthesis. Three structural mechanisms. Composite threshold from RR divergence, calibration gap, and falsifiability timescale.

DOI
AP-4

The Topology of Influence: Latent Structure and Resilience in Online Populations

1.4B Reddit comments, 44,837 subreddits. 12-dimensional latent space. Influence wave propagation (ρ=0.27). Simpson's paradox in leadership (18.5× per-dimension, 1.0× global). Topology predicts resilience (ρ=0.56).

DOI
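The Simpson's-paradox finding (strong per-dimension effect, flat global effect) has the standard aggregation structure, which a two-group toy reproduces: within each latent dimension the relationship is strongly positive, but pooling across dimensions with different base rates washes it out or reverses it. Group parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent dimensions (e.g. subcommunities). Within each, influence
# strongly predicts leadership; the high-influence dimension has a
# lower leadership base rate, so the pooled effect inverts.
x1 = rng.normal(0, 1, 500)
y1 = 2.0 * x1 + 5 + rng.normal(0, 1, 500)
x2 = rng.normal(4, 1, 500)
y2 = 2.0 * x2 - 8 + rng.normal(0, 1, 500)

within = [np.corrcoef(x1, y1)[0, 1], np.corrcoef(x2, y2)[0, 1]]
pooled = np.corrcoef(np.concatenate([x1, x2]),
                     np.concatenate([y1, y2]))[0, 1]
print(f"within-group r: {within[0]:.2f}, {within[1]:.2f}")
print(f"pooled r:       {pooled:.2f}")
```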
AP-5

Entropy-Bounded Decomposition for Time Series Forecasting

1,200 lines of NumPy vs 200M-param transformer. Fourier + wavelet + AR with max-entropy stopping. ETTh1: 0.131 vs TimesFM 0.375 (+65%). ETTm1: 0.088 vs 0.320 (+73%). Residual AC(1) predicts which basis to use.

DOI spectral-forecast →
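A compressed sketch of the AP-5 pipeline shape, assuming only what the abstract states: peel off spectral structure, then let the residual's lag-1 autocorrelation pick the next basis (here, high AC(1) selects an AR stage). The toy series, single-component Fourier stage, and AR(1) fit are illustrative stand-ins for the full Fourier + wavelet + AR decomposition with max-entropy stopping:

```python
import numpy as np

rng = np.random.default_rng(0)

def ac1(x):
    """Lag-1 autocorrelation: the basis-selection diagnostic."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Toy series: one seasonal component plus an AR(1) residual.
n = 2048
t = np.arange(n)
seasonal = np.sin(2 * np.pi * t / 32)
resid = np.zeros(n)
for i in range(1, n):
    resid[i] = 0.7 * resid[i - 1] + 0.2 * rng.normal()
series = seasonal + resid

# Stage 1: peel off the dominant Fourier component.
F = np.fft.rfft(series)
k = np.abs(F[1:]).argmax() + 1          # skip DC
Fk = np.zeros_like(F)
Fk[k] = F[k]
fourier_part = np.fft.irfft(Fk, n=n)
residual = series - fourier_part

# Stage 2: high residual AC(1) selects an AR stage; use it as the
# AR(1) coefficient for one-step-ahead prediction.
phi = ac1(residual)
one_step = fourier_part[1:] + phi * residual[:-1]
rmse_full = np.sqrt(np.mean((series[1:] - one_step) ** 2))
rmse_fourier = np.sqrt(np.mean((series[1:] - fourier_part[1:]) ** 2))
print(f"residual AC(1): {phi:.2f}")
print(f"1-step RMSE, Fourier only: {rmse_fourier:.3f}, Fourier+AR: {rmse_full:.3f}")
```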
AP-6

Decentralized Drone Swarm Coordination via Broadcast-as-Shared-State and Hierarchical PCA-Tree Bisection

One primitive (broadcast-as-shared-state with PCA-tree-keyed local computation), four provable layers (assignment, recovery, priority, localization). 1.43% gap from Hungarian at N=10,000. 14× fewer messages than CBBA at 3.0% gap. Slot-disjointness composition theorem. Witness-alarm Byzantine defense. v1.1 adds comms-layer empirical validation: inverted EN_ROUTE+ETA quiescence holds a 0% missed-deadline rate through p=0.7 packet loss; ETA-spoofing attack closed by a per-drone sanity bound. Bootstrap CIs throughout.

DOI drone-swarm-coordination →
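The PCA-tree bisection that keys local computation in AP-6 can be sketched as a recursive split along the top principal component at the median, assigning each agent a binary cell key; this particular recursion and its depth are an illustrative reading of the abstract, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_tree_keys(points, depth):
    """Assign each point a binary cell key by recursive PCA bisection:
    split along the top principal component at the median, then recurse."""
    keys = np.zeros(len(points), dtype=int)

    def split(idx, d):
        if d == 0 or len(idx) < 2:
            return
        X = points[idx] - points[idx].mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        proj = X @ Vt[0]                 # coordinates along top PC
        med = np.median(proj)
        left, right = idx[proj <= med], idx[proj > med]
        keys[right] |= 1 << (d - 1)      # set this level's key bit
        split(left, d - 1)
        split(right, d - 1)

    split(np.arange(len(points)), depth)
    return keys

# 1024 drone positions partitioned into 16 balanced cells (depth 4).
pts = rng.uniform(size=(1024, 2))
keys = pca_tree_keys(pts, depth=4)
sizes = np.bincount(keys, minlength=16)
print("cell sizes:", sizes)
```

Median splits keep cells balanced, which is what makes the keyed local computation (and the layered assignment/recovery logic on top of it) scale to N=10,000.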
MM-1

Welfare-Optimal Matching in Hiring Markets via Min-Cost Flow

MCF-Tiered: 95.3% vs greedy 24.5% company satisfaction. Adaptive β=0.7σ(S). 3 datasets, scorer-independent. p<10⁻²⁰.

DOI TalentSync →
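The greedy-versus-optimal gap in MM-1 is easy to reproduce on toy data: with unit capacities, min-cost flow on the bipartite hiring graph reduces to an optimal assignment, which SciPy can solve directly. The score matrix and sizes are hypothetical; this shows the mechanism, not the paper's MCF-Tiered system or its β parameter:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical candidate-to-role fit scores (higher is better).
n = 200
S = rng.uniform(size=(n, n))

# Greedy: each role grabs the best remaining candidate in turn.
taken = np.zeros(n, dtype=bool)
greedy_total = 0.0
for j in range(n):
    i = np.where(~taken, S[:, j], -np.inf).argmax()
    taken[i] = True
    greedy_total += S[i, j]

# Welfare-optimal assignment (the unit-capacity min-cost-flow solution).
rows, cols = linear_sum_assignment(-S)   # negate to maximize total score
optimal_total = S[rows, cols].sum()

print(f"greedy welfare:  {greedy_total:.1f}")
print(f"optimal welfare: {optimal_total:.1f}")
```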

Academic Monograph

The unified theoretical framework

Monograph Published

Structural Compression Theory: A Unified Information-Theoretic Account of Organizational Dysfunction, Creativity, and Substrate-Independent Selection Dynamics

Academic monograph unifying 24 papers into a single formal framework. Proves that compression under selection produces systematic drift from reality toward internal fit — and that the mechanism is substrate-independent across cognition, organizations, and AI. Three theorems, one lemma, five sufficient conditions, and the inseparability corollary: dysfunction and creativity share a single channel, distinguished only by selection regime. 17 chapters across five parts, 17 appendices (including the Entanglement Theorem), validated across 500 computational configurations with zero counterexamples. Derives the Hallucination Corollary: you cannot eliminate hallucination without eliminating generalization.

DOI SSRN

In Progress

Active research awaiting formal publication