What is an evidence dossier?
An evidence dossier is the set of verified sources mapped to each claim of a story — confirmed sources, AI-suggested sources, divergent ones, per-claim verification status, and editorial revision history — accessible for pre-publication review and post-publication audit.
In short
- Every claim in the story is traced back to the source supporting it.
- Sources are classified as confirmed, AI-suggested, or divergent.
- The dossier is archived and auditable after publication, per story.
Full definition
In a verifiable editorial AI platform, the evidence dossier is the central artifact — the thing that distinguishes this kind of tool from a generic AI writer. Every story produced ships with a structured set of sources consulted during research, mapped against each claim in the resulting text.
The dossier isn't a post-hoc report — it's the product of research done before drafting. When an editor opens a story for review, they see not only the text but the sources backing each paragraph, with status labels (confirmed, suggested, divergent) and the editorial revisions applied so far.
For a newsroom, the strategic value of the dossier shows up in three moments: editorial review (faster, safer decisions), an internal investigation or reader complaint (the full trail is accessible), and regulatory compliance (per-story audit expected under emerging AI-in-journalism regulations).
How it works
- During research, AI searches sources in real time and classifies each by reliability and relevance to the pitch.
- During drafting, each claim in the text is linked to the source(s) supporting it — that mapping is what populates the dossier.
- Sources the AI proposes that still need editorial validation are flagged as 'suggested' until an editor confirms them.
- Sources that contradict each other are flagged as 'divergent' — the editor picks the prevailing version and the rationale is logged.
- After publication, the dossier is versioned: subsequent edits produce new versions while preserving history.
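The per-claim mapping, status labels, and versioning described above can be sketched as a minimal data model. This is a hypothetical illustration only: the class names (`Dossier`, `Claim`, `Source`) and fields are assumptions for clarity, not Typedit's actual schema or API; the status values mirror the labels defined in this glossary.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SourceStatus(Enum):
    CONFIRMED = "confirmed"  # verified by an editor or a primary source
    SUGGESTED = "suggested"  # proposed by AI, awaiting editorial validation
    DIVERGENT = "divergent"  # contradicts another source for the same claim

@dataclass
class Source:
    url: str
    status: SourceStatus
    rationale: str = ""      # editor's note, e.g. why one divergent version prevailed

@dataclass
class Claim:
    text: str
    sources: List[Source] = field(default_factory=list)

@dataclass
class Dossier:
    story_id: str
    version: int             # bumped on each post-publication edit, history preserved
    claims: List[Claim] = field(default_factory=list)

    def pending_review(self) -> List[Claim]:
        """Claims with at least one source still awaiting editor action."""
        return [c for c in self.claims
                if any(s.status is not SourceStatus.CONFIRMED for s in c.sources)]
```

In a model like this, an editor's review queue falls out naturally: anything returned by `pending_review()` still carries a suggested or divergent source and blocks a clean sign-off.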
Practical example
In a sports-coverage story, AI confirms the team's official starting lineup via the club's primary source (confirmed), suggests a tactical analysis from a specialist blog (suggested — pending validation), and detects a discrepancy between two outlets on attendance numbers (divergent — the editor picks the version to publish and the rationale stays in the history).
Evidence dossier vs. traditional post-production fact-check
A traditional fact-check happens after the story is written, usually as a separate step before publication or in response to a reader complaint. An evidence dossier is built during research — it guides the writing rather than correcting it afterward. The difference is structural: one defers the problem, the other prevents it.
Frequently asked questions
Does the dossier include confidential sources?
No. Confidential sources (in investigative journalism, for example) are not entered into the dossier accessible to the platform — that workflow stays with the human newsroom under standard editorial practice.
Who has access to a story's dossier?
Editors and editors-in-chief of the publication, per configured permissions. In media groups, directors may have consolidated visibility; editors of each title see only what's theirs.
Can the dossier be exported for external audit?
Yes. The dossier can be exported in auditable formats (PDF and structured JSON) for internal investigations, regulatory requirements, or reader responses. The content respects data protection law and the publication's editorial AI policy.
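A structured JSON export might look like the sketch below. The field names and example URLs are illustrative assumptions, not Typedit's actual export schema; the statuses and the divergent-source rationale follow the practical example earlier in this entry.

```python
import json

# Hypothetical dossier export; field names are illustrative, not Typedit's schema.
dossier = {
    "story_id": "match-coverage-001",
    "version": 2,  # post-publication edits create new versions, history preserved
    "claims": [
        {
            "text": "The club confirmed its starting lineup.",
            "sources": [
                {"url": "https://club.example/official", "status": "confirmed"},
            ],
        },
        {
            "text": "Attendance figure for the match.",
            "sources": [
                {"url": "https://outlet-a.example/report", "status": "divergent",
                 "rationale": "Editor chose outlet A's figure; decision logged in history."},
            ],
        },
    ],
}

# Serialize for external audit; the same structure could feed a PDF renderer.
print(json.dumps(dossier, indent=2))
```

A machine-readable export like this is what makes the "per-story audit" practical: an auditor can filter claims by status or diff two versions without reading the whole story.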
See how Typedit uses the evidence dossier
The verifiable editorial AI platform applies this concept in production — in Brazilian newsrooms with 10M+ monthly readers.
Related terms
Verifiable editorial AI
Verifiable editorial AI is the category of AI platforms for journalism whose core differentiator is showing the provenance of every claim — research first, write second, with an evidence dossier per story and the editor in command.
Real-time verified sources
Real-time verified sources are the references an editorial AI platform consults during research — instead of relying solely on the model's frozen training knowledge, the platform fetches current content and checks source authority before drafting.
Automatic fact-check
Automatic fact-check is the programmatic verification of a story's claims against verifiable sources — when integrated into an AI editorial pipeline, it happens during drafting (not as a separate post-production step), reducing hallucination risk before publication.
Editorial audit trail
An editorial audit trail is the complete, versioned, exportable history of decisions made on each AI-assisted story — sources consulted, claims modified, editorial overrides, and timestamps — ready for internal investigations, reader inquiries, and regulatory compliance.