Full Fact is a British fact-checking organization with a worldwide footprint, processing a third of a million sentences every weekday. It monitors newspapers, TV broadcasts, radio shows, social media, and parliamentary records across multiple languages. Without AI, that scale would crush any human team.

"We are detecting maybe half a million things we can fact check each day," Andy Dudfield, Full Fact's head of AI, told audiences at the International Journalism Festival in April 2025. The organization has spent nearly a decade developing AI tools that help human journalists find the claims worth checking. Full Fact employs 15 full-time journalists who write detailed fact-check articles (typically 1,000 words each) and eight software engineers and data scientists who build AI tools to support their work. 

Finding the Signal in the Noise

Full Fact's core challenge mirrors what many newsrooms face: too much content, too little time. The organization processes over 300,000 sentences daily in English alone, monitoring everything from parliamentary debates to podcasts.

Full Fact launched a two-stage generative AI system in May 2025. First, keyword lists identify relevant topics like energy policy, asylum seekers, or health claims. Then an AI ranking system scores claims on their "checkworthiness" - how much they deserve a fact-checker's attention.

The system evaluates whether claims affect many people versus just one person, whether they're specific and measurable, and whether they're personal experiences (which can't be verified) or future predictions (which can't be fact-checked). Each morning, journalists receive ranked lists of the day's top claims organized by topic.
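Full Fact has not published the implementation, but the two-stage shape of the pipeline can be sketched in Python. Everything below - the topic keyword lists, the heuristic scorer standing in for the generative ranking model, and the briefing function - is a hypothetical illustration, not Full Fact's code.

```python
import re
from dataclasses import dataclass

# Hypothetical topic keyword lists standing in for Full Fact's curated ones.
TOPIC_KEYWORDS = {
    "health": ["vaccine", "cancer", "nhs"],
    "asylum": ["asylum seekers", "migrants", "small boats"],
    "energy": ["energy bills", "net zero", "wind power"],
}

@dataclass
class ScoredClaim:
    topic: str
    sentence: str
    score: float

def match_topics(sentence: str) -> list[str]:
    """Stage 1: cheap keyword filter that routes sentences to topics."""
    lowered = sentence.lower()
    return [topic for topic, keywords in TOPIC_KEYWORDS.items()
            if any(kw in lowered for kw in keywords)]

def score_checkworthiness(sentence: str) -> float:
    """Stage 2: toy heuristic standing in for the generative ranker.
    Rewards specific, measurable, broad-impact claims; penalises personal
    anecdotes and future predictions, which can't be fact-checked."""
    score = 0.0
    if re.search(r"\d", sentence):                          # specific and measurable
        score += 0.4
    if re.search(r"\b(every|all|most|per cent|percent)\b", sentence, re.I):
        score += 0.3                                        # affects many people
    if re.search(r"\b(I|my|me)\b", sentence):               # personal experience
        score -= 0.5
    if re.search(r"\b(will|going to|by 20\d\d)\b", sentence, re.I):
        score -= 0.3                                        # future prediction
    return score

def morning_briefing(sentences: list[str], top_n: int = 10) -> list[ScoredClaim]:
    """Rank the day's monitored sentences by topic and checkworthiness."""
    claims = [ScoredClaim(topic, s, score_checkworthiness(s))
              for s in sentences for topic in match_topics(s)]
    return sorted(claims, key=lambda c: c.score, reverse=True)[:top_n]
```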

The checkworthiness tool caught a misreported prostate cancer study, prompting Full Fact to contact the study's author. The author corrected their report and contacted the newspaper, which then issued a correction.

David Corney, Full Fact's senior data scientist, explained during a June 2025 meeting in London with German news media executives (Chefrunde Study Tour): "We don't waste time or resources looking at the wrong thing. We also don't miss things which are important, even if they weren't necessarily front page news."

Tracking Claim Repeats Across Media

False claims don't disappear after one fact-check. They resurface in different outlets, spoken by different people, often with slight variations. Full Fact built a claim-matching system that compares all monitored media against all recently fact-checked claims.

It’s an enormous technical challenge: 10 million daily sentence comparisons yielding only dozens of genuine matches. The system combines traditional machine learning with generative AI, handling paraphrasing and variations while accounting for how truth can change over time.
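Full Fact does not say which models power the matcher, but the core step - deciding whether a new sentence restates an already-checked claim - can be illustrated with an off-the-shelf sentence-embedding library. The model name, the example sentences, and the 0.8 threshold below are all illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; Full Fact's actual models aren't public.
model = SentenceTransformer("all-MiniLM-L6-v2")

fact_checked = [  # hypothetical examples of claims already checked
    "Net migration to the UK reached 1.2 million last year.",
    "The NHS has 40 new hospitals under construction.",
]
incoming = [  # sentences arriving from the day's media monitoring
    "Last year net migration hit 1.2 million people.",
    "The weather was lovely in Cornwall this weekend.",
]

# Encode both sets once, then compare everything against everything.
checked_emb = model.encode(fact_checked, convert_to_tensor=True)
incoming_emb = model.encode(incoming, convert_to_tensor=True)
similarity = util.cos_sim(incoming_emb, checked_emb)  # shape: (incoming, checked)

THRESHOLD = 0.8  # illustrative; tuning it trades missed matches for noise
for i, sentence in enumerate(incoming):
    for j, claim in enumerate(fact_checked):
        score = float(similarity[i][j])
        if score >= THRESHOLD:
            print(f"Possible repeat ({score:.2f}): {sentence!r} ~ {claim!r}")
```

At the scale Full Fact describes - 10 million comparisons a day - the fact-check embeddings would normally be precomputed and indexed, with any more expensive generative check reserved for borderline candidates rather than run on every pair.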

"Language is so ambiguous," Corney noted. "There are so many ways of expressing the same idea that deciding whether two sentences mean the same thing or not is quite hard even for humans to do."

When the system flagged MPs repeating the same misleading claim about migrant numbers, Full Fact contacted them directly to request corrections. The tool enables rapid response to repeated falsehoods.

New Tools: Health Claims and Election Promises

Full Fact's newest prototype, codenamed "Raphael," integrates Google's Gemini model to identify harmful health claims in videos. When a two-hour podcast shared vaccine misinformation, Raphael pinpointed the problematic sections, saving fact-checkers from reviewing the entire recording.

The system links back to exact video timestamps, allowing fact-checkers to verify claims without hunting through hours of content. Full Fact plans to incorporate Raphael into their main tool suite for client organizations later in 2025.
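Full Fact has not described Raphael's internals beyond its use of Gemini, but the timestamp idea can be sketched with Google's public generative AI Python SDK: pass a timestamped transcript and ask only for the segments containing harmful health claims. The model name, prompt, and transcript format here are assumptions for illustration.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")              # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")    # illustrative model choice

# A timestamped transcript, e.g. from an earlier speech-to-text step (assumed format).
transcript = """
[00:14:32] ... honestly you don't need the jab, vitamin C does the same job ...
[01:02:10] ... my grandmother swore by garlic for her blood pressure ...
"""

prompt = (
    "You are helping fact-checkers triage a podcast transcript.\n"
    "List every segment that makes a verifiable health claim that could cause "
    "harm if false. For each, return the timestamp and a one-line summary of "
    "the claim. Ignore personal anecdotes and opinions.\n\n"
    f"Transcript:\n{transcript}"
)

response = model.generate_content(prompt)
print(response.text)  # e.g. "[00:14:32] Claims vitamin C replaces vaccination"
```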

The organization also built an AI-powered government tracker that monitors approximately 300 election promises made by the UK's current government. The system uses Google searches to find relevant documents, generates questions to locate related content, extracts information, and builds timelines that surface progress updates which might otherwise go unnoticed.
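The tracker's internals are not public either; the sketch below only shows the shape of such a pipeline - search queries generated from a promise, dated evidence collected, a chronological timeline assembled - with every name and data point hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Promise:
    text: str          # e.g. "Recruit 6,500 new teachers"
    department: str

@dataclass
class Update:
    when: date
    summary: str
    source_url: str

@dataclass
class Timeline:
    promise: Promise
    updates: list[Update] = field(default_factory=list)

    def add(self, update: Update) -> None:
        self.updates.append(update)
        self.updates.sort(key=lambda u: u.when)  # keep chronological order

def search_queries(promise: Promise) -> list[str]:
    """Generate the kind of queries that would feed the Google-search step."""
    return [
        f"{promise.department} progress on {promise.text}",
        f"{promise.text} announcement site:gov.uk",
    ]

# Hypothetical usage
promise = Promise("Recruit 6,500 new teachers", "Department for Education")
timeline = Timeline(promise)
timeline.add(Update(date(2025, 3, 4), "Recruitment funding confirmed", "https://www.gov.uk/example"))
for update in timeline.updates:
    print(update.when, update.summary)
```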

Pre-bunking: Getting Ahead of Misinformation

Full Fact's latest experiment involves "pre-bunking" - identifying and countering misinformation before it spreads. Working with Spanish fact-checker Maldita and the European Fact-checking Standards Network, they focus on short videos across platforms like TikTok, YouTube Shorts, and Instagram Reels.

The process identifies harmful narratives in one country, abstracts underlying patterns (blaming government, foreign conspiracy theories, claiming "everything is fake"), then predicts similar claims in other countries during comparable events. The goal: publish preventive fact-checks before misinformation takes hold.

"Imagine perhaps next month, there's a forecast of immediate rainfall, say in France or Germany," Corney said, describing their flood misinformation example. "We can anticipate the claims you're going to see.” The project started with five languages and aims to expand to 20 by the end of 2025.

Going Global: 40 Organizations in 30 Countries

Full Fact's tools now support over 40 fact-checking organizations across three languages and 30 countries. Through 2024, these tools helped fact-checkers monitor 12 national elections.

Partners include Africa Check (operating in South Africa, Nigeria, Kenya, and Senegal), while French-language versions of the tools serve West Africa and Arabic versions cover the Middle East and North Africa, including Syria, Libya, and Gaza.

When Full Fact introduced their tools to 25 Arabic-speaking fact-checking organizations, users reported that media monitoring became faster and simpler. Many started live-monitoring events for the first time, resulting in over 200 fact-check articles published using claims identified through Full Fact's systems.

What AI Can and Cannot Do

Full Fact maintains strict boundaries around AI capabilities. The technology excels at pattern recognition, text analysis, and multilingual processing without additional training. It effectively identifies interesting versus mundane content and extracts information from documents.

But Full Fact avoids asking AI to determine truth or falsehood. "No model, generative or conventional, can reliably answer that as it depends on real-world knowledge and reasoning," the organization explains.

Nor does Full Fact use AI to write publishable content, which would itself require fact-checking. Human journalists write every article, and colleagues then carefully review and edit each one.

Revenue and Sustainability Challenges

Full Fact operates as a charity with diverse funding: public donations, philanthropic organizations, Meta payments for platform fact-checking, and software licensing to international organizations. Recent changes in tech company funding have created uncertainty across the fact-checking sector.

At the GlobalFact 2025 conference, CEO Chris Morris emphasized that fact-checkers' data has value for AI companies training models to recognize facts and deception. This creates new revenue opportunities as traditional funding streams shift.

Technical Architecture and Real-Time Capabilities

Full Fact's system combines multiple technologies: Elasticsearch for text search, SQL databases for structured data, WikiData linking for entity identification, and semantic similarity matching for claim comparison. The hybrid approach balances the pattern recognition strengths of traditional AI with the multilingual capabilities of generative models.
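As a concrete but entirely hypothetical illustration of that hybrid pattern, a claim lookup might use Elasticsearch for fast lexical recall and an embedding model to rerank the survivors. The index name, field name, endpoint, and model choice below are assumptions.

```python
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer, util

es = Elasticsearch("http://localhost:9200")         # placeholder endpoint
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model

def find_related_factchecks(claim: str, index: str = "factchecks", k: int = 50):
    """Hybrid lookup: lexical recall from Elasticsearch, semantic precision on top."""
    # Stage 1: full-text search narrows millions of documents to k candidates.
    hits = es.search(index=index,
                     query={"match": {"claim_text": claim}},
                     size=k)["hits"]["hits"]
    candidates = [hit["_source"]["claim_text"] for hit in hits]
    if not candidates:
        return []

    # Stage 2: semantic similarity reranks the candidates.
    scores = util.cos_sim(encoder.encode(claim, convert_to_tensor=True),
                          encoder.encode(candidates, convert_to_tensor=True))[0]
    return sorted(zip(candidates, scores.tolist()),
                  key=lambda pair: pair[1], reverse=True)
```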

The organization developed real-time transcription capabilities for live events. During a Liberian presidential debate streamed live on Facebook, Full Fact's tools monitored the five-hour event and provided real-time claim identification to local journalists, enabling live fact-checking while voting was still taking place.

Focus on Harmful Information

Full Fact's philosophy centers on AI as assistant, not replacement. "We're not trying to replace journalists or fact checkers with algorithms," Corney stated. "Journalists are really good at writing stories, really good at finding stories, far better than AI will ever be. We're trying to help fact checkers basically be more effective."

"We will always keep human experts in the loop," Full Fact states, "partly to minimize the risks around bias and hallucinations that come with fully-automated systems but mostly because even the most advanced AI does not come close to human reasoning."

The organization maintains focus on harmful rather than merely wrong information. "In healthcare, if someone says, 'you've got a lump in your breast, don't see a doctor, just use this castor oil,' we can say, here's the science. It's not only wrong, it'll kill you if you believe that," Corney explained.

Five Lessons for News Publishers

1. Build AI workflows around editorial priorities, not technical capabilities: Full Fact's "checkworthiness" scoring demonstrates how AI should serve editorial judgment rather than replace it. The system evaluates claims based on journalistic criteria: impact scale, specificity, verifiability. News publishers can apply similar thinking to content curation, prioritizing stories that match their editorial mission rather than chasing whatever AI can technically produce.

2. Hybrid AI approaches outperform single solutions: Full Fact combines traditional machine learning with generative AI, using each for its strengths. Traditional models excel at classification tasks, while generative AI handles multilingual content and semantic understanding. Publishers wrestling with AI tool selection should consider mixing approaches rather than betting everything on one technology.

3. Real-time monitoring creates competitive advantage: Full Fact's ability to flag repeated claims across media outlets gives them speed and authority in corrections. News publishers can build similar monitoring systems for their beats, tracking how stories evolve across competitors and identifying follow-up opportunities. The technical infrastructure (Elasticsearch, semantic matching) is accessible to most newsrooms.

4. Data licensing offers new revenue streams: Fact-checkers’ curated data has value for AI training. News publishers sitting on archives of verified, high-quality content should explore licensing opportunities with AI companies. Clean, factual data commands premium prices in training markets.

5. Strict boundaries prevent AI failures: Full Fact never asks AI to determine truth or write publishable content. This disciplined approach avoids the hallucination and accuracy problems plaguing other AI implementations. News publishers should establish similar guardrails, defining clearly what AI can and cannot do in their workflows before deployment rather than discovering limits through public failures.
