Deliberative AI — Category Definition
Deliberative AI is a new category
Generative AI answers questions. Deliberative AI makes decisions.
The difference is not a feature. It is a fundamentally different architecture — built for a fundamentally different use.
MyCorum.ai
March 2026
9 min read
What deliberative AI means — exactly
Almost every AI tool you use today is a generative AI system. You write a prompt. One model produces one response. The model synthesizes from its training, applies whatever reasoning it has, and returns an answer. That answer is one perspective, from one system, at one moment. It does not know what it does not know.

Deliberative AI is architecturally different. It does not ask one model what it thinks. It assembles multiple independent models — each with a distinct mandate — and forces them to disagree before it allows them to converge. The output is not an answer. It is a calibrated recommendation.
Definition — Deliberative AI
A deliberative AI system structures adversarial disagreement between multiple independent AI models to produce calibrated recommendations on complex decisions — complete with a confidence score, preserved minority position, and explicit conditions under which the recommendation should be revised.
The word "deliberative" is chosen precisely. In political philosophy, deliberative democracy holds that legitimate decisions emerge from structured public reasoning among disagreeing parties — not from the preferences of the majority, and not from a single authority. Deliberative AI applies the same principle to machine reasoning: the quality of a decision is a function of the quality of the disagreement that preceded it.
Why the difference matters
For most uses of AI — summarizing a document, drafting an email, answering a factual question, generating an image — generative AI is exactly right. One model, one answer, immediately useful.

But decisions are different. A decision involves uncertainty, competing priorities, unknown information, and consequences that extend forward in time. When you ask a single AI model to help you decide, you are asking one perspective — trained by one team, on one corpus, with one set of blind spots — to resolve a question that was designed to resist resolution.
The fundamental problem is not that AI models are wrong. It is that a single model cannot know what it does not know. It has no adversary. It has no dissent. It produces confidence without calibration.
Deliberative AI introduces what a single model cannot provide: structured opposition. Not through a voting mechanism, not by averaging responses, and not by asking models whether they agree with each other. Through adversarial pressure — independent analysis, forced cross-critique, and a synthesis that must account for the positions that did not converge.
Generative AI — single model
- One perspective, one answer
- No mechanism for structured disagreement
- Confidence is self-reported, uncalibrated
- No preserved minority position
- Hallucination without internal check
- Trained to converge, not to challenge
- Optimal for: answers, content, retrieval
Deliberative AI — Le Corum
- Five independent minds, distinct mandates
- Adversarial pressure across multiple rounds
- Confidence measured algorithmically via Kernel Language Entropy (KLE)
- Minority Report preserved and mandatory
- Cross-model critique surfaces blind spots
- Trained to disagree before converging
- Optimal for: decisions, strategy, high stakes
How deliberative AI works — the architecture
MyCorum.ai's deliberative engine, Le Corum,
is built around four architectural properties that distinguish
deliberative AI from any multi-model or agentic AI system:
R1 Isolation
All five minds analyze independently via parallel async calls. No model sees another's response until all five have completed. This is enforced in code — not through prompting. Independence in Round 1 is a hard guarantee, verified by SHA-256 proof in the telemetry.
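The isolation guarantee can be sketched in a few lines of Python. This is a minimal illustration, not MyCorum.ai's code: `query_model` is a hypothetical stand-in for the real per-persona API calls, and only the response hashing mirrors the SHA-256 telemetry described above.

```python
import asyncio
import hashlib

PERSONAS = ["Architect", "Strategist", "Engineer", "Counsel", "Contrarian"]

async def query_model(persona: str, question: str) -> str:
    # Hypothetical stand-in for a real model API call; each persona
    # would hit a different underlying model with its own mandate.
    await asyncio.sleep(0)
    return f"{persona} analysis of: {question}"

async def round_one(question: str) -> dict:
    # All five calls run concurrently, and no coroutine is ever handed
    # another's output — Round 1 independence holds by construction.
    texts = await asyncio.gather(*(query_model(p, question) for p in PERSONAS))
    # Hash each response so telemetry can later attest to what each
    # mind said before any cross-exposure.
    return {
        p: {"text": t, "sha256": hashlib.sha256(t.encode()).hexdigest()}
        for p, t in zip(PERSONAS, texts)
    }

results = asyncio.run(round_one("Should we enter the EU market?"))
```

Because independence is a property of the call graph rather than of any prompt, no later round can retroactively contaminate Round 1.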
Epistemic divergence
Disagreement is measured algorithmically using Kernel Language Entropy on text embeddings — not self-evaluated. The confidence score reflects actual semantic divergence across the panel, not how confident each model says it is.
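The intuition behind Kernel Language Entropy can be sketched with numpy. This is not Le Corum's implementation — the exact kernel and estimator are not public — but it shows how a von Neumann entropy over a similarity kernel of response embeddings yields a divergence score that no model can self-report its way around:

```python
import numpy as np

def kernel_language_entropy(embeddings: np.ndarray) -> float:
    """Von Neumann entropy of a unit-trace similarity kernel over
    response embeddings: ~0 when the panel collapses to one position,
    up to log(n) when all n positions are semantically distinct."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kernel = normed @ normed.T            # cosine-similarity Gram matrix
    rho = kernel / np.trace(kernel)       # unit trace -> eigenvalues sum to 1
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]    # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

rng = np.random.default_rng(0)
# Five near-identical responses -> entropy near 0 (easy consensus).
consensus = np.ones((5, 8)) + 0.01 * rng.normal(size=(5, 8))
# Five mutually orthogonal responses -> entropy near log(5).
divergent = np.eye(5, 8)
```

The key property is that the score is computed from the geometry of what the models actually said, not from any model's own confidence statement.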
Anti-convergence
If agreement exceeds 90% or the biodiversity index drops below 0.25, The Contrarian is triggered — not by a prompt, but by a conditional in the orchestrator Python code. Consensus that is too easy is treated as a failure mode, not a success.
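As a sketch, the trigger is nothing more exotic than a guard clause. The two thresholds are the ones stated above; how the agreement and biodiversity metrics are computed is assumed, not documented:

```python
AGREEMENT_CEILING = 0.90    # from the text: trigger above 90% agreement
BIODIVERSITY_FLOOR = 0.25   # from the text: trigger below 0.25 diversity

def should_trigger_contrarian(agreement: float, biodiversity: float) -> bool:
    """Treat too-easy consensus as a failure mode: escalate to The
    Contrarian rather than accept it. The metric definitions are
    assumptions; only the conditional is illustrated here."""
    return agreement > AGREEMENT_CEILING or biodiversity < BIODIVERSITY_FLOOR
```

Because the check lives in orchestrator code rather than in a prompt, a persuasive majority cannot talk its way past it.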
Mandatory dissent
The Minority Report is a required field in the output schema — strict: true. Le Corum cannot produce a synthesis that omits the dissenting position. The model that did not converge is heard, always.
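In schema terms, mandatory dissent is simple to enforce. The sketch below uses the JSON-Schema-style structured-output format popularized by LLM APIs; the field names are illustrative, not MyCorum.ai's actual schema:

```python
# Illustrative structured-output schema: field names are hypothetical,
# but the mechanism is the one described — "minority_report" sits in
# "required", so a synthesis that omits the dissent fails validation.
SYNTHESIS_SCHEMA = {
    "name": "corum_synthesis",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "recommendation": {"type": "string", "enum": ["GO", "PIVOT", "STOP"]},
            "confidence": {"type": "number"},
            "minority_report": {
                "type": "object",
                "properties": {
                    "dissenting_mind": {"type": "string"},
                    "position": {"type": "string"},
                },
                "required": ["dissenting_mind", "position"],
                "additionalProperties": False,
            },
        },
        "required": ["recommendation", "confidence", "minority_report"],
        "additionalProperties": False,
    },
}
```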
The five minds — and their mandates
Each of Le Corum's five personas has a distinct analytical mandate. They are not five instances of the same model. They are five architecturally different systems, assigned different roles:
⚖
The Architect
Structure, financial rigor, process. Never lets emotion obscure the numbers.
🌐
The Strategist
Macro forces, competitive positioning, long-context synthesis. Sees around corners.
🔬
The Engineer
Technical feasibility, operational accuracy. Exposes what sounds good but breaks in practice.
🛡️
The Counsel
Risk, ethics, regulatory exposure. The voice that asks "but what if."
🧭
The Contrarian
Adversarial challenge, blind spots. Programmed to find why you are wrong.
A deliberation — from question to recommendation
1
MyPilot frames the question
Your question is analyzed for domain, complexity, and stakes. MyPilot selects the service (The Expert, The A-Team, or The Dream Team) and prepares each persona's mandate.
2
Round 1 — Independent analysis
Five minds analyze in parallel. No cross-contamination. Each produces a structured response without seeing any other mind's output. R1 isolation is enforced in code.
3
Adversarial rounds — Challenge and critique
The orchestrator measures semantic divergence. If agreement is premature, additional rounds are triggered: cross-critique, devil's advocate, assumption surfacing. Up to 4 rounds for The A-Team. The Dream Team adds a fundamentally different element: the human Decision-Maker in the room — pausing between rounds, steering the deliberation, and holding final authority. Le Corum deliberates with you, not for you.
4
Corum Synthesis — The recommendation
GO / PIVOT / STOP recommendation with confidence score, decision matrix, action plan, information gaps, falsification conditions — and the Minority Report from the mind that did not converge.
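Put together, the four steps above reduce to a short control loop. Everything here is a hypothetical sketch — `analyze`, `critique`, `measure_divergence`, and `synthesize` are stand-ins for the real per-round machinery, and the 0.25 floor echoes the biodiversity threshold mentioned earlier:

```python
MAX_ROUNDS = 4  # the article cites up to four rounds for The A-Team

def deliberate(question, panel, analyze, critique, measure_divergence, synthesize):
    """Hypothetical top-level flow: an isolated Round 1, adversarial
    rounds for as long as consensus looks premature, then a synthesis
    that carries the Minority Report. All callables are stand-ins."""
    responses = {mind: analyze(mind, question) for mind in panel}   # Round 1
    for _ in range(MAX_ROUNDS - 1):
        if measure_divergence(responses) >= 0.25:
            break  # healthy disagreement reached -> stop escalating
        # Premature agreement: each mind must critique the others.
        responses = {mind: critique(mind, responses) for mind in panel}
    return synthesize(responses)  # must include the Minority Report
```

Note the asymmetry: the loop escalates when divergence is too low, never when it is too high — disagreement is the raw material, not the problem.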
Where deliberative AI fits — the category map
Deliberative AI is not a subset of existing AI categories. It is a distinct category with a distinct use case:
Category — Generative AI
Produces content, answers, summaries, and code from a single model. Designed for fluency and speed.
Best for: answers, content, research, code
Category — AI Search
Retrieves and synthesizes information from the web with citations. Designed for current information access.
Best for: research, fact-checking, news
Category — Agentic AI
Autonomous systems that take sequences of actions — browsing, coding, task execution — on a user's behalf.
Best for: automation, workflows, task execution
New Category — Deliberative AI (MyCorum.ai)
Structures adversarial disagreement between independent AI models to produce calibrated recommendations on complex decisions. Designed for decisions where being wrong is expensive.
Best for: strategy, investment, M&A, hiring, high-stakes choices
The clearest way to distinguish deliberative AI from every other category: deliberative AI is the only category designed to preserve the position of the mind that disagreed. Every other AI system optimizes for a single best answer. Le Corum treats the dissenting view as data — sometimes the most important data.
Who deliberative AI is for
Not every question needs deliberation. If you need an answer, use generative AI — it is faster, cheaper, and excellent at exactly that. If you need to find information, use an AI search tool.

Deliberative AI is for the decisions where the cost of being wrong exceeds the cost of being slow. Market entry. Capital allocation. Strategic pivots. Key hires. Regulatory compliance with ambiguity. Acquisitions. Situations where a confident single answer is the most dangerous outcome, because it forecloses the questions you did not ask.
The test: if you would convene a board, a committee, or a panel of advisors before acting — that is a deliberative AI use case. Le Corum is the committee that is available at 2am, costs $3, and has read everything relevant to your sector.
MyCorum.ai — the first deliberative AI platform
MyCorum.ai was founded on a single conviction: the most consequential decisions of the next decade will be made with AI assistance.

The question is not whether AI will be in the room. The question is whether the AI in the room will challenge you — or simply confirm what you already believed.

Le Corum was built to challenge. Five independent minds. Enforced independence. Algorithmic measurement of disagreement. A recommendation that is only as confident as the deliberation that produced it.

MyCorum.ai is the first deliberative AI platform — the first system built not to answer questions, but to make decisions better.
The deliberative AI category is new.
In two years, it will not be.
The question has been waiting.
Le Corum is ready.
Start with a decision that matters. Five independent minds, structured disagreement, one calibrated recommendation.