Convergent Verification Model

No single AI, no single database, no single human has complete or unbiased knowledge. The CVM queries the same claim through multiple AI systems with deliberately opposing training biases and compares their answers.

Why Disagreement Is Valuable

When Sources Agree
The claim is likely accurate regardless of ideological framing. Mainstream and alternative sources reaching the same conclusion independently is a strong signal.
When Sources Partially Agree
The core claim may be true but context, framing, or specific details are disputed. This is where nuance lives — and where most interesting claims actually fall.
When Sources Diverge
This is where the real information is. Divergence reveals exactly where mainstream and alternative narratives split — often the most important territory to examine.

Verification Sources

🔥ENOCH
Alternative Knowledge

Brighteon.ai model trained on alternative research, suppressed history, and censored information

🌐ASI:Cloud (asi1-mini)
Decentralized General

SingularityNET decentralized inference — general knowledge with balanced training

🦙ASI:Cloud (LLaMA 70B)
Open-Source Reasoning

Meta's open-weights model — strong factual recall, mainstream but open

🇪🇺ASI:Cloud (Mistral NeMo)
European Perspective

European-trained model — different regulatory and cultural lens than US-trained models

⛓️ChainGPT
Web3 / Crypto Native

Specialized in blockchain, DeFi, and on-chain intelligence — crypto-native worldview

💎Google Gemini
Institutional / Academic

Google's model — trained on Google Scholar, Books, and Patents data. Strong academic bias; tends toward institutional consensus

Groq (LLaMA 3.3 70B)
Open-Source Fast

Meta's open-weights model on Groq hardware — fastest inference, mainstream but open-source values

Live Verification

Type any claim and verify it across all AI sources in real time
Methodology

1. Parallel Fan-Out: Each claim is sent simultaneously to all available AI sources. No source sees another's answer — they respond independently.

2. Structured Response: Each AI rates the claim as TRUE, PARTIALLY TRUE, DEBATED, or FALSE, with a confidence score (0-100) and brief explanation.

3. Consensus Calculation: Verdicts are grouped (TRUE/PARTIALLY TRUE = affirmative, FALSE = negative, DEBATED = its own category). If all sources agree: high consensus. If they split: low consensus with each perspective preserved.

4. Bias Transparency: Every source's training bias is labeled. An "alternative knowledge" model and a "mainstream institutional" model disagreeing tells you something about where the narrative fault line is.
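The four steps above can be sketched in code. This is a minimal illustration, not the CVM implementation: the source names are taken from the list above, but `query_source` is a hypothetical stub standing in for each provider's real API call, and the consensus thresholds are illustrative assumptions.

```python
# Sketch of the CVM pipeline: parallel fan-out, structured responses,
# and consensus grouping. query_source is a placeholder, not a real API.
import asyncio
from dataclasses import dataclass

@dataclass
class Verdict:
    source: str
    rating: str       # "TRUE", "PARTIALLY TRUE", "DEBATED", or "FALSE"
    confidence: int   # 0-100
    explanation: str

async def query_source(source: str, claim: str) -> Verdict:
    # Placeholder: a real implementation would call the provider's API.
    # Each source answers independently; none sees another's response.
    await asyncio.sleep(0)  # stands in for network I/O
    return Verdict(source, "TRUE", 80, "stub answer")

async def fan_out(claim: str, sources: list[str]) -> list[Verdict]:
    # Step 1: send the claim to every source simultaneously.
    return list(await asyncio.gather(
        *(query_source(s, claim) for s in sources)))

def consensus(verdicts: list[Verdict]) -> str:
    # Step 3: TRUE / PARTIALLY TRUE group as affirmative, FALSE as
    # negative, DEBATED stays its own category.
    groups = {"affirmative": 0, "negative": 0, "debated": 0}
    for v in verdicts:
        if v.rating in ("TRUE", "PARTIALLY TRUE"):
            groups["affirmative"] += 1
        elif v.rating == "FALSE":
            groups["negative"] += 1
        else:
            groups["debated"] += 1
    held = [name for name, n in groups.items() if n > 0]
    if len(held) == 1:
        return "high consensus"
    # Assumed rule: a strict majority in one group = partial agreement.
    if max(groups.values()) > sum(groups.values()) / 2:
        return "partial agreement"
    return "divergent"

verdicts = asyncio.run(
    fan_out("example claim", ["ENOCH", "Gemini", "ChainGPT"]))
print(consensus(verdicts))  # all stub verdicts agree -> "high consensus"
```

Because every stubbed source answers TRUE, the sketch reports high consensus; swapping in dissenting verdicts drops it to partial agreement or divergent, with each perspective preserved in the `Verdict` records.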

Consensus Scoring

Sources Agree
All AI sources with opposing biases reached the same conclusion
Partial Agreement
Majority of sources agree but some dissent — worth examining the disagreement
Sources Diverge
No consensus — this is where the interesting information lives. Different training biases produce different conclusions.
UET-NET | Unified Earth Theory Network