Our Commitment to Responsible AI in Local Journalism
How Mueller Today uses artificial intelligence to strengthen — not replace — human reporting, and what that means for the stories you read.
Last updated March 26, 2026 · Version 1.0
Why This Policy Exists
Mueller Today uses AI tools to help cover the Mueller neighborhood and greater Austin area. AI lets a small local team publish more frequently, research more deeply, and serve readers in ways that would otherwise require a much larger staff.
But AI introduces real risks — to accuracy, to trust, and to the community relationships that make local journalism matter. This policy exists because our readers deserve to know exactly how we use these tools, where humans remain in control, and what we will never automate away.
We believe AI should make local journalism more thorough and more accessible — never less honest. Every policy decision below is guided by that standard.
Five Founding Principles
These principles govern every decision we make about AI at Mueller Today. When we evaluate new tools, expand into new formats, or face an edge case this document doesn't cover, we return to these five commitments.
1. Transparency First
Readers will always know when AI contributed to a story. We disclose what AI did, what humans did, and why. No story is published without a visible transparency record.
2. Human Authority
A named human editor reviews and approves every piece of content before publication. AI proposes; humans decide. Editorial judgment is never delegated to a machine.
3. Accuracy Over Speed
We treat all AI output as unvetted source material. Every factual claim is independently verified by a human before publication — the same standard we hold for any source.
4. Community Accountability
We serve Mueller and Austin. If our AI-assisted reporting causes harm, we correct it publicly, learn from it openly, and adapt this policy accordingly. Readers can contact us directly with concerns.
5. Fairness and Inclusion
AI systems carry the biases of their training data. We actively review AI output for cultural sensitivity, representational fairness, and potential harm — particularly in coverage of our diverse local community. When AI-generated imagery depicts people or places, a human reviewer evaluates it for accuracy and cultural appropriateness before publication.
How AI Fits Into Our Reporting
Our editorial process is designed so that AI handles mechanical work — scanning sources, structuring data, generating drafts — while humans handle everything that requires judgment, verification, and voice. Here is the pipeline every AI-assisted story follows:
1. Source Discovery (AI · Automated): Automated monitoring identifies potential stories from public sources (government filings, event listings, local outlet reporting).
2. Source Approval (Human · Editorial Gate): A human editor reviews the flagged source, confirms it is credible and newsworthy, and greenlights it for development.
3. Research & Preparation (AI · Automated): AI extracts key facts, gathers supplementary context from public records and prior coverage, and organizes everything into a structured briefing.
4. Fact Verification (Human · Required): Every factual claim, quote, and data point is independently verified by a named human fact-checker against original sources. Each check is logged with the reviewer's initials and timestamp.
5. Draft Generation (AI · Automated): AI produces an initial draft from verified material using an editorially selected writing style appropriate to the story type.
6. Editorial Review & Rewrite (Human · Editorial Gate): A human editor rewrites for clarity, tone, and AP style, checking for accuracy, fairness, balance, and adherence to our editorial standards. The editor's name is recorded.
7. Image Review (Human · Required): If AI-generated illustrations are used, a human reviewer evaluates them for accuracy, cultural sensitivity, and appropriateness. All AI-generated visuals are labeled as illustrations, never presented as photographs.
8. Publication (Human · Final Approval): The final article is published with a complete transparency record, named byline, and the "AI-Assisted Reporting" label visible to readers.
Every article's complete editorial trail — including timestamps, reviewer names, and the specific AI and human steps taken — is available to readers through the "See how this story was built" panel on each story.
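To make the idea of an editorial trail concrete, here is a rough sketch of how one story's record could be structured. All field names, initials, and timestamps below are hypothetical illustrations, not our production schema:

```python
# Hypothetical sketch of one article's editorial trail, as surfaced in the
# "See how this story was built" panel. Every value here is illustrative.
editorial_trail = [
    {"step": "Source Discovery",  "actor": "AI",
     "timestamp": "2026-03-20T09:14:00-05:00"},
    {"step": "Source Approval",   "actor": "human", "reviewer": "J.D.",
     "timestamp": "2026-03-20T10:02:00-05:00"},
    {"step": "Fact Verification", "actor": "human", "reviewer": "M.R.",
     "timestamp": "2026-03-20T13:45:00-05:00", "claims_checked": 7},
    {"step": "Publication",       "actor": "human", "reviewer": "J.D.",
     "timestamp": "2026-03-21T08:00:00-05:00"},
]

# The policy's core rule, expressed as a check: every human gate must
# carry a named reviewer before the story can publish.
human_steps = [s for s in editorial_trail if s["actor"] == "human"]
assert all("reviewer" in s for s in human_steps)
```

The point of the sketch is simply that each step records who (or what) acted and when, so the panel can show readers the full chain rather than a vague "AI was used" notice.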
What AI May and May Not Do
Approved
Source monitoring & discovery: Flagged sources must be reviewed and approved by a human editor before development begins.
Transcription: AI transcriptions must be checked against the original audio or video by a human before any quotes are used.
Research & background gathering: Supplementary sources must be cited. AI-gathered facts require the same verification as any other source.
Data parsing & structuring: Limited to public records, event listings, and meeting agendas; output is reviewed by a human before use.
Draft generation: Only from verified material. Drafts must be rewritten and approved by a named human editor.
Copyediting & style checks: Suggestions only; a human editor makes all final decisions.
Translation: AI translations must be reviewed by a fluent human speaker before publication.
Social media summaries: Generated from published articles only, and reviewed before posting.
SEO optimization: Limited to headline and metadata suggestions; a human editor approves the final text.
AI-generated imagery: AI-generated visuals are clearly labeled as illustrations in their captions (e.g., "Illustration generated with AI"). Human review for accuracy, cultural sensitivity, and appropriateness is required before publication.

Restricted
Content personalization: If implemented, must expose readers to a broad range of stories, must not create filter bubbles or suppress viewpoints, and requires regular bias audits.

Prohibited
Publishing without human review: No AI-generated or AI-assisted content may be published without review and approval by a named human editor.
Fabricated bylines or sources: We will never attribute AI-generated content to a fictitious person. Every byline represents a real human who stands behind the work.
Altering photographs, video, or audio: AI may not be used to alter photographs, video, or audio without clearly disclosing the modification to readers. Unmodified photographs are never presented as AI-generated, and AI-altered media is never presented as original.
Generating deepfakes or synthetic media of real people: No synthetic imagery, audio, or video that depicts identifiable real individuals.
Replacing staff with AI: AI augments our team's capabilities; it is not used to eliminate editorial positions or reduce human oversight.
Visual Journalism Standards
Visual content carries the highest risk to audience trust. Readers need to know whether what they're seeing is real or generated, and we treat that distinction as non-negotiable.
Clear labeling, always. Every AI-generated image is labeled in its caption — not buried in metadata. Example: "Illustration generated with AI." If a photo has been altered with AI tools, the alteration is disclosed in the caption.
Photographs stay real. We do not present AI-altered images as unmodified photographs. If AI is used to enhance or modify a photo, that modification is clearly disclosed to readers.
No real people in AI imagery. AI-generated visuals will not depict identifiable real individuals. When a story involves real people, we use actual photographs or no imagery at all.
Human image review. Every AI-generated visual is evaluated by a named human reviewer for accuracy, cultural sensitivity, and tone before selection.
Editorial justification required. We use AI-generated imagery only when it genuinely serves the story — not as decoration or to fill space. Human-captured photography is always preferred when available.
Our Transparency Record
Every AI-assisted article on Mueller Today includes:
The "AI-Assisted Reporting" label — visible at the top of the article, before the reader begins.
The "See how this story was built" panel — an expandable editorial trail showing every step of the article's creation, including:
Which steps were performed by AI and which by humans
Timestamps for each step
The names or initials of human reviewers
The specific claims that were fact-checked, the sources they were checked against, and who verified each one
The original source(s) that informed the article
Image labeling — AI-generated visuals are labeled in their captions, not hidden.
Source attribution — The original reporting that informed an AI-assisted article is credited and linked.
We don't just tell readers that AI was used. We show them exactly what it did, step by step, with receipts. We believe this level of transparency sets the standard for AI-assisted local journalism.
Data Privacy & Source Protection
No confidential material enters AI systems. We do not input unpublished source identities, whistleblower communications, or other sensitive information into any AI tool.
Public sources only. Our AI pipeline processes publicly available information — published articles, government records, public event listings, and press releases.
Vendor diligence. We evaluate AI tool providers for their data handling practices. We do not use tools that retain our input data for model training without explicit opt-out.
Reader data. If we implement AI-driven features like personalization or recommendations, we will clearly disclose what data is used and provide readers with controls to opt out.
Guarding Against Bias
AI systems reflect the biases in their training data. In local journalism, this can manifest as underrepresentation of communities, culturally insensitive language, or skewed framing. We address this through:
Human review for cultural sensitivity. Every AI-assisted article and AI-generated image is reviewed by a human editor with specific attention to representation, framing, and potential harm to local communities.
Diverse sourcing. We actively monitor whether our AI-assisted coverage represents the full diversity of the Mueller neighborhood and Austin area. If coverage patterns show gaps, we address them editorially.
Regular audits. We periodically review our published AI-assisted content for patterns of bias in topic selection, source selection, imagery, and language.
Reader feedback. We welcome community input on how our coverage represents (or fails to represent) the neighborhood. Contact us at the address below.
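An audit of the kind described above can start very simply: tally published AI-assisted stories by topic and flag anything falling below a chosen share of coverage. The sketch below uses made-up topics and an arbitrary threshold purely for illustration:

```python
from collections import Counter

# Hypothetical audit sketch: count AI-assisted stories by topic and flag
# topics below a chosen share of total coverage. Data and threshold are
# illustrative only, not a real editorial rule.
published_topics = [
    "development", "events", "development", "schools",
    "development", "events", "small-business",
]
counts = Counter(published_topics)
total = sum(counts.values())

MIN_SHARE = 0.20  # illustrative threshold
underrepresented = [t for t, n in counts.items() if n / total < MIN_SHARE]
print(underrepresented)  # → ['schools', 'small-business']
```

A real audit would also look at source selection, imagery, and language, but even a simple tally like this makes coverage gaps visible enough to act on editorially.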
Accountability & Governance
Corrections
If an error appears in an AI-assisted article, we follow the same corrections policy as any other story: we correct it promptly, note the correction visibly, and update the transparency record to reflect what happened.
Oversight
An internal editorial committee is responsible for:
Evaluating new AI tools before adoption
Monitoring AI output quality and accuracy
Reviewing this policy at least twice per year
Investigating any reader complaints about AI use
Keeping current with evolving industry standards and best practices in AI-assisted journalism
Staff Training
All editorial staff receive training on our AI tools, this policy, and the ethical considerations specific to AI-assisted journalism. Training is updated as tools and standards evolve.
Commitments We Make to Readers
We will always:
Tell you when AI was involved in creating a story
Show you the full editorial trail — AI steps, human steps, fact-checks, timestamps
Have a named human editor approve every story before publication
Verify every factual claim independently, regardless of its source
Label AI-generated images as illustrations, never as photographs
Credit the original reporting that informed our coverage
Correct mistakes publicly and promptly
Update this policy as the technology and our understanding evolve
We will never:
Publish AI-generated content without human editorial review
Use fake bylines or attribute AI work to fictitious people
Alter photographs, video, or audio with AI without clearly disclosing it
Create synthetic depictions of real individuals
Feed confidential sources or unpublished sensitive information into AI tools
Use AI to replace editorial staff
Hide the role AI played in any piece of content
Questions, Concerns, or Feedback
Feedback button on every article
See something that doesn't look right? Have a question about how AI contributed to a story? Use the feedback button on any article to tell us directly, right where you're reading. Your message is tied to that specific story so our editors can review it in context.
"See how this story was built"
Every article also includes a transparency panel showing the complete editorial trail — every AI step, every human review, every fact-check with timestamps and reviewer names.
For broader questions about this policy or our AI practices in general:
Email: editor@muellertoday.com
This policy is a living document. As AI tools evolve and as we learn from our readers, we will update it. We welcome your input.
Standards We Follow
Our policy draws on guidance from leading journalism ethics organizations: