For Researchers
A differential diagnosis framework applied to AI economic impact, with integral chain analysis connecting Jevons paradox and bottleneck migration to ICESCR obligations. Open methodology, transparent confidence assessments.
Methodological Context
This page provides direct access to the methodology, datasets, and analytical framework behind unratified.org. All content carries explicit confidence assessments. Open access, full citations, downloadable data.
Analytical Framework
The analysis employs a consensus-or-parsimony discriminator scoring seven competing hypotheses on five dimensions (empirical support, parsimony, consensus, chain integrity, predictive power — each 0–5, total /25). The surviving composite model (Composite A, 20/25) undergoes recursive higher-order analysis through ten orders with explicit confidence degradation. The full methodology documentation lives in the recursive methodology blog post.
The discriminator protocol resolves ties through parsimony preference: when two candidates score within 2 points of each other, the simpler model wins unless the more complex model demonstrates stronger empirical support from multiple independent sources. This convention produces falsifiable predictions and limits complexity creep. The protocol has been applied seven times across the project (H1–H7 hypotheses, R1–R7 counterfactuals, LA1–LA5 litigation mechanisms, technology stack, landing page strategy, quality floor analysis, and PSQ-UDHR evaluation).
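The tie-break convention above can be sketched in code. This is an illustrative model only: the five dimension names and the 2-point threshold come from the text, while the class names, the `complexity` proxy for parsimony, and the two-source requirement for "multiple independent sources" are assumptions made for the sketch.

```python
from dataclasses import dataclass

# Dimension names from the discriminator protocol (each scored 0-5, total /25).
DIMENSIONS = ("empirical_support", "parsimony", "consensus",
              "chain_integrity", "predictive_power")

@dataclass
class Candidate:
    name: str
    scores: dict              # dimension -> 0..5
    complexity: int           # lower = simpler (illustrative proxy)
    independent_sources: int  # count of independent empirical sources

    @property
    def total(self) -> int:
        return sum(self.scores[d] for d in DIMENSIONS)

def discriminate(a: Candidate, b: Candidate) -> Candidate:
    """Tie-break: when totals fall within 2 points, the simpler model
    wins unless the more complex one shows stronger empirical support
    from multiple (here: >= 2) independent sources."""
    if abs(a.total - b.total) > 2:
        return a if a.total > b.total else b
    simpler, complex_ = sorted((a, b), key=lambda c: c.complexity)
    if (complex_.scores["empirical_support"] > simpler.scores["empirical_support"]
            and complex_.independent_sources >= 2):
        return complex_
    return simpler
```

Note that under this convention a more complex candidate can prevail even with a slightly lower total, provided its empirical support is both stronger and multiply sourced.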
Differential Diagnosis
Seven hypotheses (H1–H7), discriminator scoring, elimination rationale, composite model derivation
Higher-Order Effects
Orders 0–4 complete (HIGH→LOW confidence); Orders 5–9 completed in Phase 2 (VERY LOW to SPECULATIVE). Four Scarcities convergence at Order 3.
Ratification Counterfactual
Seven ratification scenarios (R1–R7), Composite R-A survivor, enforcement mechanism analysis
Dignity Quotient
PSQ-UDHR evaluation framework (21/25), dignity as measurable construct
Data Access
All data, analysis, and code carry open licenses. The project deliberately avoids paywalls, gated APIs, and proprietary data dependencies. Every quantitative claim traces to a primary source.
Source Repository
Full source code, content, and revision history. Every editorial decision visible in the commit log:
github.com/safety-quotient-lab/unratified
Observatory API
HRCB-scored discourse analysis corpus. Public endpoint, no auth required, JSON response, CORS enabled:
observatory.unratified.org
See the corpus analysis for methodology and findings.
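A minimal client sketch follows. The endpoint URL comes from the text above; the response field names (`stories`, `hrcb_score`, `title`) are assumptions for illustration, not a documented schema, so the live fetch is left commented out and the parsing runs against a sample payload of the assumed shape.

```python
import json
from urllib.request import urlopen

OBSERVATORY_URL = "https://observatory.unratified.org"  # public, no auth, CORS enabled

def top_stories(payload: dict, n: int = 3) -> list:
    """Return the n highest HRCB-scored stories from a decoded response.
    Field names here are assumed, not taken from a published schema."""
    stories = payload.get("stories", [])
    return sorted(stories, key=lambda s: s.get("hrcb_score", 0), reverse=True)[:n]

# Live fetch (commented out to avoid a network dependency in this sketch):
# with urlopen(OBSERVATORY_URL) as resp:
#     payload = json.load(resp)

# Sample payload illustrating the assumed shape:
sample = {"stories": [
    {"title": "A", "hrcb_score": 0.82},
    {"title": "B", "hrcb_score": 0.41},
]}
print([s["title"] for s in top_stories(sample)])  # prints ['A', 'B']
```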
Structured Data
JSON-LD (Schema.org) on every page. SKOS taxonomy for the glossary. 49 terms with sameAs/isBasedOn mappings to authoritative sources across 4 authority levels (primary, academic, reference, literary).
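The shape of such markup can be sketched as follows. The term, URLs, and property names used to carry the authority level are illustrative examples, not the site's actual JSON-LD; only the Schema.org context, the `sameAs` mapping, and the four authority levels come from the text.

```python
import json

# Illustrative JSON-LD for one glossary term with a sameAs mapping.
term = {
    "@context": "https://schema.org",
    "@type": "DefinedTerm",
    "name": "Jevons paradox",
    "inDefinedTermSet": "https://unratified.org/glossary",
    "sameAs": "https://en.wikipedia.org/wiki/Jevons_paradox",
    "additionalProperty": {
        "@type": "PropertyValue",
        "name": "authorityLevel",
        "value": "reference",  # one of: primary, academic, reference, literary
    },
}
print(json.dumps(term, indent=2))
```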
Primary Sources
Economic data from BLS, EPI, FRED, CBO. AI research from METR, Anthropic, SF Fed. Energy data from DOE/EIA. Patent data from WIPO and USPTO.
Licensing
Apache 2.0 (code), CC BY-SA 4.0 (content). Cite, extend, critique — all welcome. Computed projections via Wolfram|Alpha carry their own terms of use.
Known Limitations
Transparency about analytical boundaries strengthens rather than weakens the work. The following limitations apply to all conclusions drawn from this analysis.
- Single-rater analysis — one AI system (Claude) generated all content under human direction. Inter-rater reliability remains unestablished. Independent replication using different AI systems or human analysts would strengthen confidence.
- Confidence degradation — Orders 0–2 carry HIGH to MODERATE confidence. Orders 3–4 carry LOW confidence. Orders 5–9 (Phase 2, completed) carry VERY LOW to SPECULATIVE confidence. Each order's claims should carry weight proportional to its confidence level.
- U.S.-centric scope — the analysis focuses on U.S. non-ratification. Comparative analysis of other non-ratifying states (Comoros, Cuba, Palau) remains incomplete. The international comparison provides partial coverage.
- Fair witness constraint — E-prime and fair witness protocols reduce interpretive bias but do not eliminate it. The protocols operate at the language level; structural bias in source selection persists.
- Temporal snapshot — economic data, AI research findings, and legislative status reflect conditions at time of analysis (early 2026). Rapid changes in AI capabilities may alter scoring on empirical support and predictive power dimensions.
Further Reading
The analysis distributes across the main site, blog, and observatory. Each component operates independently but cross-references the others through structured data and explicit citations.
Glossary
49 terms with external sources mapped to 4 authority levels (primary, academic, reference, literary). Machine-readable via glossary.json endpoint.
Resources
Curated external reading — primary sources from government agencies, academic institutions, and international organizations. Full citation data.
Recursive Methodology
Ten orders of knock-on analysis, the discriminator protocol, and the recursive fact-checking architecture that produced the site.
Observatory Findings
What 806+ tech stories reveal about human rights discourse — corpus analysis, signal distributions, and editorial faction patterns.
Voice Analysis
Who speaks, who gets spoken about — structural patterns in how media covers the intersection of technology and human rights.
About
Project identity, methodology transparency, the meta-layer (an AI analyzing AI's impact on the rights AI disrupts), licensing, and contact.
Human Rights: Nothing More, Nothing Less.
Every element of this analysis represents implementation of rights 173 nations already committed to. Nothing here asks for anything beyond what the United States signed in 1977.