Research · Per Ardua

The Inversion of Doubt: Public Skepticism Is Concentrated Where Epistemic Warrant Is Highest and Absent Where It Is Lowest

A cross-domain metascience synthesis


Executive Summary

Public skepticism is not randomly distributed across scientific domains. The claims attracting the most sustained public doubt — vaccine safety, anthropogenic climate change, biological evolution — are among the most empirically robust in the history of inquiry. Meanwhile, macroeconomic projections, demographic forecasts, and clinical practice guidelines with documented failure rates of 40 to 97 percent circulate through prestige journalism and policy discourse with near-zero friction.

This inversion is not the product of individual irrationality. It is structurally produced by three forces: the asymmetry between falsifiability timescales in hard versus soft science, identity-protective cognition that indexes skepticism to perceived threat rather than accuracy, and a selection environment that rewards transmissibility over calibration. Where ground truth is unavailable, the correction machinery never activates, and claims circulate without measurement or challenge.

A provisional threshold operationalization is offered: domains meeting two of three measurable criteria — registered report divergence exceeding 20 percentage points, expert calibration gap exceeding 20 percentage points, and falsifiability timescale exceeding a professional generation (roughly 30 years) — warrant presumptive skepticism of public acceptance rates.

Key Contributions and Methodology

The paper's primary contribution is synthesizing five independent lines of evidence into a single cross-domain argument: that, above a measurable threshold, a high public acceptance rate can itself serve as evidence of low accuracy. The component mechanisms have been documented in isolation — Ioannidis on false discovery rates, Fanelli on positive result gradients, Tetlock on forecasting failure, Kahan on identity-protective cognition — but have not previously been integrated into a unified framework with threshold operationalization.

The methodology is synthetic rather than experimental. Each section draws on the strongest available evidence from its domain: NISTEP Delphi technology foresight retrospectives for hard science reliability, Scheel et al.'s registered reports natural experiment for metascientific measurement, the Good Judgment Project and Survey of Professional Forecasters for calibration data, Herrera-Perez et al.'s systematic review for clinical reversal rates, and Gallup/Pew longitudinal data for public opinion trends.

Key Findings

  • Epistemic warrant hierarchy: Hard sciences produce predictions that replicate at rates exceeding 90%, while social science and macroeconomic forecasting show calibration gaps of 20-30 percentage points and positive result rates near 97%
  • Registered report divergence: Scheel et al. found 96% positive results in standard publications vs. 44% in registered reports — a 52-point gap revealing the selection filter's magnitude
  • Expert calibration gap: the Survey of Professional Forecasters shows 53% confidence against 23% accuracy, a 30-point gap; Tetlock's tournament data show that media-prominent experts perform worst (r = -0.12 for media contact frequency)
  • Clinical reversal: Herrera-Perez et al. found that 13% of the RCTs they examined showed an established medical practice to be ineffective (396 medical reversals); NCCN Category 1 evidence covers only 6-8% of oncology guidelines
  • Identity-protective cognition: Kahan's motivated numeracy experiments show that higher numeracy increases polarization on identity-relevant topics while improving accuracy on neutral ones
  • Composite threshold: Domains meeting 2 of 3 criteria (RR divergence >20 pts, calibration gap >20 pts, falsifiability timescale >30 yr) are structurally positioned for the inversion to operate
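The composite threshold above is a simple 2-of-3 rule, which can be sketched directly. This is a minimal illustration, not the paper's implementation: the function name and the example domain values are assumptions, though the 52-point registered-report gap (Scheel et al.) and the 30-point SPF confidence-accuracy gap (53% vs. 23%) echo figures cited in the findings.

```python
def warrants_presumptive_skepticism(rr_divergence_pts: float,
                                    calibration_gap_pts: float,
                                    falsifiability_yrs: float) -> bool:
    """Apply the paper's provisional 2-of-3 composite threshold.

    Criteria: registered-report divergence > 20 percentage points,
    expert calibration gap > 20 percentage points, and falsifiability
    timescale > 30 years (one professional generation).
    """
    criteria_met = sum([
        rr_divergence_pts > 20,
        calibration_gap_pts > 20,
        falsifiability_yrs > 30,
    ])
    return criteria_met >= 2


# Illustrative inputs (the timescale values are placeholder assumptions):
# a soft-science forecasting domain with a 52-pt RR gap and 30-pt
# calibration gap meets the threshold; a rapidly falsified hard-science
# domain with small gaps does not.
print(warrants_presumptive_skepticism(52, 30, 40))  # True
print(warrants_presumptive_skepticism(5, 5, 2))     # False
```

Booleans in Python count as 0 or 1 under `sum`, so the 2-of-3 rule reduces to a single comparison; any alternative weighting of the three criteria would replace that `sum` with a weighted score.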

Key References

Fanelli, D. (2010)

"Positive" Results Increase Down the Hierarchy of the Sciences. PLoS ONE, 5(4), e10068.

Scheel, A. M., Schijen, M., & Lakens, D. (2021)

An Excess of Positive Results: Comparing the Standard Psychology Literature With Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2).

Tetlock, P. E. (2005)

Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.

Kahan, D. M., Peters, E., Dawson, E., & Slovic, P. (2017)

Motivated Numeracy and Enlightened Self-Government. Behavioural Public Policy, 1(1), 54-86.

Herrera-Perez, D., Haslam, A., Crain, T., et al. (2019)

A Comprehensive Review of Randomized Clinical Trials in Three Medical Journals Reveals 396 Medical Reversals. eLife, 8, e45183.

Download Full Paper

Access the complete research paper with detailed methodology, empirical evidence, and formal proofs.
