What is automatic fact-check?
Automatic fact-check is the programmatic verification of a story's claims against verifiable sources. When integrated into an AI editorial pipeline, it happens during drafting rather than as a separate post-production step, reducing hallucination risk before publication.
In short
- Verification happens during drafting, not after: the fact-check is inline.
- Each claim is checked against the evidence dossier sources.
- The editor receives the story with per-claim verification status ready for review.
Full definition
Automatic fact-check in the editorial AI context is programmatic verification done during story generation, rather than as a separate step before publication. It becomes viable when research precedes drafting and an evidence dossier already maps each claim to its sources.
The term is distinct from post-publication fact-check (responding to reader complaints) and traditional newsroom fact-check (one journalist checking another's work). In editorial AI, automatic fact-check is a layer of the editorial pipeline: it doesn't replace human review, but gets ahead of the problem.
How it works
- During drafting, the platform checks each claim in the text against the dossier's sources.
- Claims without supporting sources are flagged for review (never published as fact).
- Claims with divergent sources are surfaced; the editor decides which version prevails.
- The consolidated output reaches the editor with each claim tagged confirmed, partial, divergent, or unsupported (see the sketch after this list).
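As a minimal sketch of how that per-claim tagging might work, the snippet below assumes a simplified dossier in which each claim already carries its supporting, partial, and conflicting sources. The names (Claim, ClaimStatus, check_story) and the decision rules are illustrative assumptions, not Typedit's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class ClaimStatus(str, Enum):
    CONFIRMED = "confirmed"      # all mapped sources support the claim
    PARTIAL = "partial"          # sources support only part of the claim
    DIVERGENT = "divergent"      # sources disagree; the editor decides
    UNSUPPORTED = "unsupported"  # no source found; never published as fact


@dataclass
class Claim:
    text: str
    supporting: list[str] = field(default_factory=list)   # sources that confirm the claim
    partial: list[str] = field(default_factory=list)      # sources that confirm only part of it
    conflicting: list[str] = field(default_factory=list)  # sources that contradict it


def check_claim(claim: Claim) -> ClaimStatus:
    """Map one claim from the evidence dossier to a verification status."""
    if claim.conflicting:
        return ClaimStatus.DIVERGENT
    if claim.supporting and not claim.partial:
        return ClaimStatus.CONFIRMED
    if claim.supporting or claim.partial:
        return ClaimStatus.PARTIAL
    return ClaimStatus.UNSUPPORTED


def check_story(claims: list[Claim]) -> list[tuple[Claim, ClaimStatus]]:
    """Tag every claim so the editor receives the story already mapped to sources."""
    return [(claim, check_claim(claim)) for claim in claims]
```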
Practical example
In a story about a drug launch, the AI confirms regulator approval via the official source (status: confirmed), flags the cited indication as partial (the label covers adult use, but the draft says pediatric), and surfaces a divergence on efficacy between the manufacturer and an independent study. The editor receives each claim with its status and decides.
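Expressed with the sketch above, those three claims might look like this (the URLs are placeholders, not real sources):

```python
# Hypothetical claims from the drug-launch example.
claims = [
    Claim(
        text="The regulator approved the drug.",
        supporting=["https://regulator.example/approval"],
    ),
    Claim(
        text="The drug is approved for pediatric use.",
        partial=["https://regulator.example/label"],  # label covers adult use only
    ),
    Claim(
        text="The trial showed high efficacy.",
        supporting=["https://manufacturer.example/press"],
        conflicting=["https://journal.example/independent-study"],
    ),
]

for claim, status in check_story(claims):
    print(f"[{status.value}] {claim.text}")
# [confirmed] The regulator approved the drug.
# [partial] The drug is approved for pediatric use.
# [divergent] The trial showed high efficacy.
```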
Automatic fact-check vs. traditional post-production fact-check
A traditional fact-check is a human or semi-automated step done after drafting, or worse, after publication in response to a reader complaint. Automatic fact-check in editorial AI happens inline, before the editor opens the story: the editor reviews text already mapped to sources instead of redoing the verification from scratch.
Frequently asked questions
Does automatic fact-check eliminate human review?
No. It reduces basic-checking labor, but editorial decisions (prioritization, angle, ethics) stay with the editor. Automatic fact-check anticipates the problem; human review validates and contextualizes.
Does it work for any beat?
It works best in beats with online sources verifiable in real time (sports, tech, politics, finance). In investigative journalism with confidential human sources, automatic fact-check covers the public facts that underpin the story; the confidential source stays with the newsroom.
See how Typedit uses automatic fact-check
The verifiable editorial AI platform applies this concept in production at Brazilian newsrooms with 10M+ monthly readers.
Related terms
Evidence dossier
An evidence dossier is the set of verified sources mapped to each claim of a story — confirmed sources, AI-suggested sources, divergent ones, per-claim verification status, and editorial revision history — accessible for pre-publication review and post-publication audit.
Verifiable editorial AI
Verifiable editorial AI is the category of AI platforms for journalism whose core differentiator is showing the provenance of every claim — research first, write second, with an evidence dossier per story and the editor in command.
Real-time verified sources
Real-time verified sources are the references an editorial AI platform consults during research — instead of relying solely on the model's frozen training knowledge, the platform fetches current content and checks source authority before drafting.
AI hallucination (in journalism)
AI hallucination in journalism is the generation of invented facts by an AI tool — a person who doesn't exist, a statement that wasn't made, a number out of context. It's unacceptable in professional journalism and the core problem that verifiable editorial AI solves.