Hi, I am media innovation journalist Ulrike Langer and you are reading the one-year anniversary edition of News Machines. As always: If you like what you see here, please subscribe, upgrade, share with your friends, or support this newsletter by clicking on the ad.

This is how Nano Banana 2 imagines my collaboration with Claude. My two cats Kenny and Dimitry have no opinion about AI but they do enhance the workflow.
There is a confession in this anniversary edition, and I might as well put it at the top: I write with AI. I don't just use it for research assistance and faster document parsing. The drafting, the arguing, the revising - all of it happens in a constant loop between me and a machine. That extensive collaboration with AI has made me a more efficient and productive journalist than before. But beyond that, I believe this process has made me a better journalist than at any point in my more than three decades in this profession.
I am aware this is not what you are supposed to say. This is heresy.
The current professional admissions of AI use in media generally run something like this: I use AI tools for research, for summarizing sources, for productivity - but never for the writing. That's all human. The implication is that the writing is where the craft lives. It is pure, and AI does not get to touch it. Watching this performance for nine months from the inside of a very different experience, I have come to think this mantra protects a narrow definition of the craft more than a principle. It is a definition that isn't very useful in the age of AI. But did it ever capture journalistic reality?
Woodward and Bernstein are still the most famous reporters in journalism history. Their work went through extensive editing loops with Washington Post editors and editor-in-chief Ben Bradlee. Only a fraction of their own words may have made it into the final printed stories. But nobody questions whether they were the actual reporters of the Watergate scandal. Investigative journalists whose findings end up as searchable public databases - the ICIJ's Panama Papers team, ProPublica's data reporters - often produce almost no prose at all. The reporting is the extraction and verification of truth, not the sentences built around it.
At the other end of the spectrum, Hunter Thompson's Gonzo journalism was so fused with his voice, perception and imagination that the prose was inseparable from the act of witnessing. Both ends of that spectrum are reporting. They always were.
You Can't Automate Good Judgement
AI promises speed and efficiency, but it’s leaving many leaders feeling more overwhelmed than ever.
The real problem isn’t technology.
It’s the pressure to do more with less — without losing what makes your leadership effective.
BELAY created the free resource 5 Traits AI Can’t Replace & Why They Matter More Than Ever to help leaders pinpoint where AI can help and where human judgment is still essential.
At BELAY, we help leaders accomplish more by matching them with top-tier, U.S.-based Executive Assistants who bring the discernment, foresight, and relational intelligence that AI can’t replicate.
That way, you can focus on vision. Not systems.
There was a time when writing didn't have to pass a purity test. The rewrite desk was standard practice at wire services for decades. A reporter called in findings and someone else shaped the story. The reporter was still the reporter. What made them one was not the prose - it was the judgment, the sourcing, the verification, the accountability for what was true. AI shifts production toward the research and data end of that historical spectrum. That is not new. It is an acceleration of a tendency that was always there.
When Cleveland Plain Dealer editor Chris Quinn published his February letter describing a journalism school graduate who declined a reporting job because it didn't include original writing, the media industry spent weeks arguing about what that meant. Some very smart people have chimed in on this case from various perspectives - among them, Pete Pachal and Dante Ciampaglia - so I won't repeat all of the arguments about whether AI should be assigned the role of a writer and what this means for the future of journalism.
Very few people have described what it actually means to write with AI - not whether you should, but what happens when you do it every day, over months, to produce work that carries your name. I have been doing it, and I have some opinions about it.
When I say writing with AI, I mean writing with Claude. Claude captures my style and how I structure things well enough that I can trust it with a first draft. The most tedious part of being a journalist is often turning a framework full of reported elements into prose. Claude gives me something to work with instead of making me start from a blank screen. I don't always let Claude go first, but it unlocks writer's block by always giving me that option.
Writing with AI is not an on/off switch. Between producing slop with no human oversight and not touching AI at all there is a huge spectrum. My interaction with Claude has so many iterations that in the final version there is often hardly anything left of Claude's first draft. Is this my work then, or Claude's? It's mine, of course. I have trained Claude specifically for my editorial standards. It is solely my judgement that determines what goes into a published story, and I take full responsibility for it.
The most valuable thing Claude does in our collaboration is not the writing but how it challenges me. If you've built the workflow, set and refined the rules, and trained your AI assistant to challenge your assumptions at every step, it's like having a very smart editor at your fingertips who constantly pushes you to think more deeply and try harder.
I work alone. Most newsletter writers do. Most freelancers do. The editorial infrastructure that institutional journalism takes for granted - an editor who interrogates your premise, a colleague who catches the weak argument, a desk that sees what you cannot in your own copy - does not exist for the solo practitioner. It never has. Most of us accepted that as the condition of independent work.
What I have now is a collaborator who tells me when the argument does not hold, when the lede is burying the real story, when the structure is working against me. Nine months of that feedback loop has sharpened my thinking in ways that working alone did not.
This does not make questions about AI use in journalism less real. AI introduces specific error types like hallucinations. I don’t see these very often with Claude, but every once in a while there is one, and my editorial loop has to be rigorous enough to catch it. Any mistakes that have slipped through since I built my human-machine workflow have been my own, not AI hallucinations. And there would have been more mistakes if Claude hadn’t caught them.
Writing with AI is not for everyone. I get why magazines like The Atlantic or The Onion won’t let AI anywhere near their writing. I covered their arguments in previous posts here in News Machines. But most of us don’t write essays or satire. My writing is analytical and practical. What matters is not whether every word originates with me but whether I do the reporting and my editorial process is rigorous enough that I am proud to put my name on it.
Premium subscribers have access to the master document that I upload to Claude every time I start a new project. It contains all of my research, style and translation rules that I refined in nine months of AI-assisted writing.
Subscribe to the premium tier to read the rest.
A subscription gets you:
- Access to full posts with more metrics and actionable checklists
- Quarterly AI in media trend reports (starting in April 2026)
- Immediate download access to my new 28-page report "The Human-Augmented Newsroom"
- No ads (except for occasional special promotions)


