Technical documentation of our AI models, limitations, and methodology
Last updated: February 3, 2026 | Version 1.0.0
GPT-4: Used for comprehensive article analysis, credibility assessment, and bias detection.
Claude 3.5: Used for enhanced analysis, including behavioral influence detection and psychological framing.
We use multiple models to cross-validate analyses and reduce single-model biases. GPT-4 provides broad coverage and strong reasoning capabilities, while Claude 3.5 offers detailed contextual analysis and nuanced understanding of persuasion techniques.
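The cross-validation described above can be sketched as follows. This is a minimal illustration, not our production pipeline: the per-dimension score dictionaries, the averaging step, and the 0.2 disagreement threshold are all hypothetical.

```python
def cross_validate(scores_a: dict, scores_b: dict, threshold: float = 0.2) -> dict:
    """Average two models' per-dimension scores (0-1) and flag large disagreements."""
    combined = {}
    for dimension in scores_a.keys() & scores_b.keys():
        a, b = scores_a[dimension], scores_b[dimension]
        combined[dimension] = {
            "score": (a + b) / 2,                       # simple two-model average
            "flag_for_review": abs(a - b) > threshold,  # large gaps suggest single-model bias
        }
    return combined

# Example: the models disagree on credibility but agree on bias.
result = cross_validate(
    {"credibility": 0.8, "bias": 0.3},   # hypothetical GPT-4 output
    {"credibility": 0.5, "bias": 0.35},  # hypothetical Claude 3.5 output
)
```

Averaging is the simplest combination rule; the point is that dimensions where the models diverge sharply are surfaced rather than silently merged.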
Limitation: AI models may misinterpret satirical content as sincere reporting or fail to detect subtle sarcasm in opinion pieces.
Impact: Satirical articles may receive incorrect credibility scores, and sarcastic loaded language may be flagged as if it were meant literally.
Mitigation: We flag entertainment and satire sources where possible. Users should verify article type.
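The source-level flagging mentioned above can be sketched as a simple domain lookup. The domain set here is a tiny illustrative sample, not our actual curated list:

```python
from urllib.parse import urlparse

# Hypothetical sample; a production list would be far larger and actively curated.
KNOWN_SATIRE_DOMAINS = {"theonion.com", "babylonbee.com"}

def satire_flag(article_url: str) -> bool:
    """Flag articles published by known satire/entertainment outlets."""
    host = urlparse(article_url).netloc.lower()
    return host.removeprefix("www.") in KNOWN_SATIRE_DOMAINS
```

A lookup like this catches outlet-level satire only; it cannot detect a sarcastic piece on an otherwise sincere site, which is why users should still verify article type.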
Limitation: Analysis is limited to English-language content; non-English articles cannot be accurately assessed.
Impact: No coverage of foreign-language news sources or of international perspectives not published in English.
Mitigation: We clearly indicate language limitations in our documentation.
Limitation: Very long articles (>15,000 words) may be truncated, causing analysis to miss key information in the latter portions.
Impact: Investigative long-form journalism may receive incomplete analysis.
Mitigation: We prioritize analysis of article openings, where key claims typically appear.
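The truncation behavior described above can be illustrated with a word-count cutoff. The 15,000-word limit matches the figure stated in this limitation; the helper function itself is a sketch, not our actual preprocessing code:

```python
MAX_WORDS = 15_000  # articles longer than this are truncated before analysis

def prepare_for_analysis(article_text: str) -> tuple[str, bool]:
    """Return the analyzable portion of an article and whether it was truncated."""
    words = article_text.split()
    if len(words) <= MAX_WORDS:
        return article_text, False
    # Keep the opening, where key claims typically appear; the tail is dropped.
    return " ".join(words[:MAX_WORDS]), True

text, truncated = prepare_for_analysis("word " * 20_000)
```

A consequence worth noting: any correction or counter-evidence placed in the final sections of a very long piece falls outside the analyzed window.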
Limitation: Scores are comparative within our dataset, not absolute truth measures. A "high credibility" score means the article shows many credibility signals relative to others analyzed, not that all claims are verified.
Impact: Users may incorrectly interpret scores as definitive truth ratings.
Mitigation: We include disclaimers on all analysis pages and clearly define what scores represent.
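Because scores are comparative, a "high credibility" label can be thought of as a percentile within the analyzed dataset. The sketch below makes that explicit; the percentile thresholds and the sample dataset are hypothetical, not our real cutoffs:

```python
from bisect import bisect_left

def relative_label(raw_score: float, dataset_scores: list[float]) -> str:
    """Label a score by its percentile rank within the analyzed dataset."""
    ranked = sorted(dataset_scores)
    percentile = bisect_left(ranked, raw_score) / len(ranked)
    if percentile >= 0.75:
        return "high credibility"    # many credibility signals relative to the dataset
    if percentile >= 0.25:
        return "medium credibility"
    return "low credibility"         # not a verdict on the truth of specific claims

dataset = [0.2, 0.4, 0.5, 0.6, 0.7, 0.75, 0.8, 0.9]
```

The same raw score can therefore earn a different label if the comparison dataset shifts, which is exactly why these labels are not absolute truth ratings.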
Limitation: For breaking news, AI models cannot assess factual accuracy since facts are still being established. Early reporting may lack context.
Impact: Analysis of breaking news focuses more on tone and framing than credibility of specific claims.
Mitigation: We note article publication date and recommend checking for updates.
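The publication-date check described above can be sketched as an age-based switch between analysis modes. The 48-hour window is a hypothetical threshold chosen for illustration:

```python
from datetime import datetime, timedelta, timezone

BREAKING_NEWS_WINDOW = timedelta(hours=48)  # hypothetical cutoff for "breaking"

def analysis_mode(published_at: datetime, now: datetime) -> str:
    """Very recent articles get tone/framing analysis; older ones get full claim checks."""
    if now - published_at < BREAKING_NEWS_WINDOW:
        return "tone-and-framing"   # facts may still be unestablished
    return "full-credibility"

now = datetime(2026, 2, 3, tzinfo=timezone.utc)
```

Passing `now` explicitly (rather than reading the clock inside the function) keeps the mode decision deterministic and easy to test.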
Limitation: Highly technical articles in specialized fields such as advanced physics or medicine may exceed the models' domain expertise, leading to incomplete assessments.
Impact: Credibility assessments of technical claims may be less reliable than for general news.
Mitigation: We recommend consulting domain experts for technical topics.
Limitation: AI models may miss cultural nuances, local context, or region-specific political dynamics that affect how news should be interpreted.
Impact: Bias detection may not account for all cultural framing conventions.
Mitigation: We acknowledge cultural limitations in our methodology documentation.
We evaluate our system on:
We review and update our AI models quarterly, or sooner when significant improvements become available from model providers. Major methodology changes are announced and documented in our methodology changelog.
This System Card is reviewed and updated when we identify new limitations, change models, or modify our evaluation approach. All updates are date-stamped at the top of this page.
We use semantic versioning (MAJOR.MINOR.PATCH) for our methodology.
Current Version: 1.0.0
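Under standard semantic versioning, a version string splits into MAJOR.MINOR.PATCH integer components. A minimal parser, purely illustrative and not part of our tooling:

```python
def parse_version(version: str) -> tuple[int, int, int]:
    """Split a MAJOR.MINOR.PATCH string into integer components."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

parse_version("1.0.0")  # the current methodology version
```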
If you have questions about our AI models, have identified a limitation not documented here, or want to report a systematic error, please contact us:
General inquiries: support@auren.news
Dispute an analysis: disputes@auren.news