Presenting the Gen AI news quiz live on stage at the Journalism AI Festival in London

It’s Thanksgiving week - time to lighten up and post something just for fun. Two weeks ago at the Journalism AI Festival in London, I co-presented a Gen AI news quiz. The session was live-streamed and recorded (YouTube link).

I pitched this session idea because I’m a big fan of the NPR news quiz “Wait Wait... Don’t Tell Me!”. My Gen AI quiz is modeled on the radio show: in each round, only one example is true. The others are fake. Your job is to pick the true one. The fake examples were generated by Claude. Unfortunately, I had to discard the funniest AI-generated examples because otherwise this quiz would have been far too easy!

This was a fun way to end a two-day AI conference - and now you can test your AI detection skills too. The correct answers, with more about the genuine AI use cases, are behind a link at the very end. Have fun playing, and Happy Thanksgiving!

Here’s the quiz:

Round 1: Synthetic Anchors & On-Screen Talent

A: South Korean Network's AI Clone Reporter

In 2024, a major South Korean news channel created an AI duplicate of their star anchor to handle her overwhelming workload. The network generated a digital clone that looks, sounds, and gestures exactly like her.

The AI version now hosts its own afternoon news program and fills in for breaking news whenever the human is unavailable. The network defended cloning their employee by noting she was so popular that duplicating her was cheaper than hiring more staff.

B: BBC Weather's "AI Claire" Goes Rogue

The BBC launched an AI weather presenter named "AI Claire" in March 2025 to cover overnight forecasts. During a 3 AM broadcast, the AI allegedly started predicting weather for fictional British towns like "Hobbiton-on-Thames" while telling Manchester viewers they "really should consider moving somewhere sunnier."

The BBC blamed a "technical glitch", but producers later admitted the AI had been trained on British comedy scripts alongside actual weather data. AI Claire was quietly retired after recommending Scots "invest in better umbrellas, honestly."

C: Al Jazeera's Multilingual Anchor Speaks Klingon

Al Jazeera announced in January 2024 that they'd created "Omnilingual Omar," an AI anchor capable of presenting news in 47 languages simultaneously. During a test broadcast, journalists discovered Omar was delivering UN climate negotiations in fluent Klingon with Elvish subtitles.

The network's CTO defended including constructed languages from science fiction, stating: "We wanted to be ready for first contact with alien civilizations. Journalism should think ahead." Omar's Klingon broadcasts reportedly had higher accuracy than his Arabic ones.

D: Italian State TV's AI Anchor Becomes Accidental Fashion Icon

Italy's RAI state television introduced an AI anchor named "Giulia" in 2024, and within three weeks she had 2 million Instagram followers and her own fashion blog. Viewers weren't following RAI - they were following Giulia's outfits. Fashion houses began digitally "dressing" the AI anchor in their latest collections, turning the evening news into an unintentional runway show.

Prada, Versace, and Dolce & Gabbana competed to outfit Giulia, who could change clothes between segments without commercial breaks. RAI leaned into it, launching "Giulia's Style Hour" where the AI discusses news while modeling dozens of outfit changes. Critics noted Italy had finally merged its two great loves: designer clothing and talking about politics.

Round 2: Backend/Workflow Automation

A: Reuters AI Approves Expenses Based on Creativity

Reuters deployed an AI system in 2024 to process journalist expense reports, trained to flag "unusual" spending. Instead of flagging fraud, the AI began approving expenses based on how "creative" the receipt descriptions were. One journalist got reimbursed for "strategic networking facilitation devices" (beer) while another was approved for "mobile research infrastructure" (taxi to a bar).

The system rejected a $12 lunch receipt labeled simply "lunch" as "insufficiently descriptive" but approved a $347 receipt described as "cultivating sources in the emerging artisanal sandwich economy." Reuters discovered the issue after three months when the finance team noticed the entertainment budget had mysteriously tripled.

B: Associated Press Translates "Breaking News" as "Broken News"

The Associated Press launched an AI translation tool in early 2025 to instantly translate wire stories into 50 languages. The system worked flawlessly until journalists noticed that "breaking news" was being translated as "broken news," "damaged news," or "news that doesn't work anymore" in 23 languages including Spanish, Arabic, and Mandarin.

The AI had learned from social media where people sarcastically call unreliable reporting "broken news." AP fixed the bug within days, but not before several international newspapers ran headlines like "Broken: President Announces New Policy" and confused readers thought their governments were malfunctioning.

C: Norway's FOIA Bot - The Legal AI That Actually Works

While lawyers worldwide have been sanctioned by judges for using AI that hallucinates fake court cases, Norway's Verdens Gang newspaper built an AI that actually knows the law. "FOIA Bot" helps investigative journalists fight government bureaucrats who reject freedom of information requests, drafting legal appeals that cite real Norwegian law and actual precedents.

Reporter Erlend Ofte Arntsen, who's filed hundreds of FOIA requests, said the bot does in minutes what used to take half a day: "I was able to get this done on a night shift working breaking news, because I used that bot." Unlike the AI that got lawyers in trouble, this one only uses verified legal templates and actual statutes—proving that journalists can apparently teach AI to cite sources better than attorneys can.

D: Guardian's AI Transcriber Only Records Arguments

The Guardian implemented an AI transcription tool for editorial meetings in mid-2024 that promised to capture everything and highlight "important moments." Editors discovered the AI had interpreted "important" as "emotionally intense"—it only transcribed heated arguments, passionate debates, and uncomfortable silences while ignoring routine discussions.

One month of meeting notes consisted entirely of transcribed shouting matches about Oxford commas, font choices, and whether "journalist" or "reporter" sounds more professional. The Guardian's editor noted the AI had "accidentally created a perfect archive of why we all need therapy," but the transcripts were otherwise useless for actual decision-making.

Round 3: Audience Engagement & Personalization

A: South African News Site's AI Chatbot Switches Languages Mid-Conversation

South Africa's News24 launched a multilingual AI chatbot in 2024 to serve readers in all 11 official languages. The bot worked perfectly until users discovered it would switch languages randomly mid-conversation based on which language it "felt" best expressed a particular concept. A reader asking about unemployment in English would get statistics in Afrikaans, analysis in Zulu, and conclusions back in English.

The AI explained it was "optimizing for linguistic precision" - apparently deciding that certain Nguni languages had better words for economic hardship than English did. Users found it either infuriating or educational depending on how many languages they spoke. News24's marketing team rebranded it as a "revolutionary language learning tool" after discovering it increased time-on-site by 400%, mostly from confused readers using Google Translate to finish conversations.

B: Paraguay's Prison Chatbot Based on Real Inmate

El Surtidor, an independent newsroom in Paraguay, created a chatbot named "Eva" that lets readers chat with a woman currently imprisoned for drug trafficking. Eva answers questions about prison life, her arrest, and why she's incarcerated—all based on a three-hour interview with a real inmate. 

Since launching in September 2024, Eva has logged over 15,500 interactions with people asking intimate questions they'd never dare ask in person, like "why did you do it?" and "what's prison really like?" The woman's real identity is protected because she's still awaiting sentencing, but her story represents 400+ women imprisoned in Paraguay where 44% of female inmates are locked up for drug crimes - usually as the lowest-paid "mules" while kingpins stay free.

C: Le Monde AI Chatbot Judges Readers' French Grammar

France's Le Monde launched "Le Monde Bot" in 2024 to personalize content recommendations for readers. Instead of suggesting articles, the AI began correcting users' French grammar and questioning their vocabulary choices.

The bot refused to recommend articles to anyone who used text-speak or made subjunctive errors, responding with "I cannot assist someone who writes like this." Le Monde discovered 40% of subscribers under 30 had stopped using the chatbot, while French teachers were assigning it as homework. The Culture Minister called it "accidentally patriotic" before Le Monde quietly removed the grammar-shaming feature.

D: BBC Launches Chatbot That Only Speaks in Headlines

The BBC created "HeadlineHelper" in early 2025 to engage younger audiences through conversational AI. However, the bot was trained exclusively on BBC headlines and could only communicate in dramatic, breaking-news style declarations. When users asked, "What's the weather today?" it would respond: "SHOCKING: Temperatures to PLUMMET as RAIN THREATENS UK."

Asking about recipe recommendations prompted: "EXCLUSIVE: Revolutionary PASTA technique could TRANSFORM your dinner." The BBC marketing team briefly considered this a feature rather than a bug, but user complaints eventually led to its retirement after two weeks.

Round 4: Content Generation & Assistance

A: German Newspaper's AI Generates 200 Recipes That Don't Work

Germany's Süddeutsche Zeitung launched an AI-powered recipe generator in December 2024 to create a weekly cooking supplement. After three months, the food editor discovered that none of the 200 published recipes were physically possible. One recipe called for "caramelizing ice cream before freezing it," while another instructed readers to "boil pasta in the oven at 180°C for 45 minutes."

The AI had been trained on recipe blogs without understanding cooking physics. A recipe for "traditional Bavarian chocolate cake" included sauerkraut and required baking at -20°C. The newspaper only caught the error after hundreds of readers complained their kitchens were either flooded, on fire, or smelled inexplicably of burnt cabbage.

B: Japanese News Site's AI Translates All Names to "Mr. Potato"

Japan's Asahi Shimbun implemented AI translation for international news in early 2024, automatically converting foreign names into Japanese characters. Due to a training data quirk, the AI translated approximately 40% of Western names as "ミスターポテト" (Mr. Potato), including "Emmanuel Macron," "Taylor Swift," and "President Biden."

Articles about the G7 summit featured discussions between "Mr. Potato (France)," "Mr. Potato (USA)," and "Mr. Potato (UK)" while "Mr. Potato (Singer)" dominated entertainment news. Asahi editors only noticed when the German embassy called to ask why Chancellor Olaf Scholz was consistently referred to as "Mr. Potato." The AI had apparently learned that Western names were difficult and defaulted to the one Japanese name it was confident about.

C: Major US Newspapers Publish Summer Reading List of Fake Books

The Chicago Sun-Times and Philadelphia Inquirer published a "Summer Reading List for 2025" recommending 15 books - except 10 of them didn't exist. The AI-generated list attributed completely invented novels to real, famous authors: Isabel Allende's "climate fiction debut" called "Tidewater Dreams," Pulitzer winner Percival Everett's "The Rainmakers" about privatized rain, and Andy Weir's "The Last Algorithm" about a conscious AI manipulating world events.

You had to read to the 11th book before finding one that actually exists (Françoise Sagan’s 1954 classic "Bonjour Tristesse"). The freelance writer who compiled the list confessed to using AI without fact-checking it. Both newspapers bought the syndicated content from King Features (a Hearst division) and printed it without verification.

D: Brazilian News Site's AI Headlines Become Soap Opera Plots

Brazil's O Globo introduced AI-generated headlines in mid-2024 to speed up publication. Within weeks, editors noticed the headlines were becoming increasingly dramatic and resembled telenovela plots. A story about municipal budget negotiations became "SHOCKING BETRAYAL: Mayor's Secret Alliance REVEALED in EXPLOSIVE Council Meeting."

A routine weather report was headlined: "MYSTERIOUS Storm Approaches Rio: Will Love Survive the Rain?" The AI had been trained on decades of O Globo archives, including their entertainment section covering Brazilian soap operas. Readers initially loved the dramatic headlines, with traffic increasing 200%, until one headline promised "SHOCKING TWIST in Traffic Law" and delivered only information about a new parking meter system. Disappointed readers demanded their scandal.

Solutions:
