The Coming Storm: Why Newsrooms Need New Rules for the AI Era

Based on an article originally published in German on journalist.de. This English version has been revised and updated.

The Storm Has Already Made Landfall

In 2023, I published a blog post describing what I called The News Carousel, a near-future scenario where news organizations would deploy AI to read every piece of news content across the web within seconds of publication, extract statements, evaluate trustworthiness, and instantly republish whatever their audiences needed to know. I thought I was being provocative.

Image: The News Carousel, rendered with DALL·E 3

Today, I lead a company that delivers almost that same technology. Over the last 12 months, it has assisted journalists in Germany and other countries in writing tens of thousands of news articles.

We’re not alone. Several companies, from startups to CMS suppliers to large media houses, are changing how news is gathered, analyzed, and distributed. At the same time, industry conferences still host heated panels about whether AI belongs in newsrooms. At this year’s International Journalism Festival in Perugia, AI and journalism expert David Caswell observed a deep ‘fault line’ between those who view AI as an existential threat to journalism and those who have accepted it. Fortunately, the pragmatists are redirecting the debate from whether AI belongs in newsrooms to how to implement it responsibly.

The technology we and others are building does something paradoxical:

  • On one hand, it threatens the traditional business model where original reporting has value because it takes time for competitors to find, read, and rewrite the facts. When AI can process thousands of articles per minute, that time advantage vanishes. This market reality follows its own logic, as predictable as gravity.
  • On the other hand, AI promises to liberate editorial staff from the repetitive tasks that consume much of their day: monitoring what everyone else is writing and repackaging it for their own audiences. Instead, they can focus on what algorithms cannot do: building sources, witnessing events firsthand, and conducting interviews that uncover the original stories no amount of web scraping would reveal. They should always be the ones who provide the angle on the objective facts delivered by the AI. And finally, journalists must remain the ones who exercise the editorial judgment that keeps power answerable to people and that weighs public against private interests or harm. These are decisions that must be made in a decentralized way, and that require human values.

This split, between threat and opportunity, defines where journalism stands today. We’re past the point of asking whether AI will transform news. The question now is whether the industry will shape this transformation through deliberate choices and principles, or simply let competitive forces determine the outcome. The storm isn’t coming; we’re already in it.

The Erosion Is Already Visible

Under the intense market pressure of the last 25 years, news organizations have been making understandable but concerning compromises. These aren’t failures of ethics but rational responses to competitive forces. I won’t focus on the obvious one, the polarizing drive to spread controversy for profit, because it’s a more complex problem and one less accentuated by the introduction of AI than several other challenges:

Consider what AI enables in personalization when controlled by the publishers. Even before AI, major outlets were already (some openly, some not) using readers’ historical preferences as part of their front-page news cycle algorithms, mimicking the success of aggregators. Soon, they will be able to adapt the actual wording and headlines, should they want to. Even seemingly benign adaptations, such as simplifying language for readers who engage with simpler articles, risk creating intellectual echo chambers. When algorithms trap readers in their comfort zones, we lose one of journalism’s essential functions: challenging audiences to grow. The question isn’t whether this drives engagement (it does), but whether we want editorial news to follow Netflix down a path where everyone watches their own version of reality.

Now consider what AI enables in personalization when it is controlled not by the publishers but by the AI labs. The content licensing gold rush indicates the direction this is taking. When Associated Press, The Guardian, and Axel Springer strike deals with OpenAI, and when smaller publishers make similar deals with Tollbit and ProRata, they aren’t just monetizing their content. They’re silently handing value over to the audience gatekeepers of the future: the personal AI assistants. Personalization will always give users what they want at the expense of exposing them to diverse viewpoints. Personal AI assistants may come to represent the ultimate personalization: perfect relevance leading to perfect fragmentation. When everyone has their own AI curator, we lose the shared narrative that makes democratic discourse possible. And for the media companies themselves: well, trust and brand recognition are your long-term value-creating assets. The more they deteriorate, the more you become a content factory. (I hope these deals are short-term.)

Then there’s pressure from programmatic advertising. Editorial teams (particularly in smaller outlets) consider which keywords attract higher ad rates, subtly shifting story selection and phrasing. Companies like ours receive requests to do this, because we could clearly do it ten times more efficiently. We won’t offer this service, but our competitors might.

Another troubling trend involves schemes like those offered by agencies such as Baden Bower, which guarantee placement in major outlets for a fee. When Rolling Stone publishes “Top 10 PR Agencies in 2025,” it’s essentially Baden Bower showcasing its own services. Readers can barely distinguish this content marketing from genuine editorial coverage.

Each of these pressures shows how editorial independence is strained. They represent the digital era’s assault on traditional journalism values, each compromise made in the name of survival. And then comes AI-powered fact extraction, adding a new dimension to these existing challenges. And we can’t expect lawmakers or the courts to solve this problem for the industry: legally, news organizations have almost no protection. Facts cannot be copyrighted. The Feist principle in the US, the Database Directive limitations in the EU, the absence of “hot news” protection in most jurisdictions, all point to one conclusion: any AI system can extract and republish facts from any news source, instantly, legally, at scale.

So, with AI content production in high gear and the business model of creating original content challenged, is it through the Baden Bowers and algorithmic keyword stuffing that the industry will earn its revenues in the future?

When Machines Read Everything

The answer to that dark question lies in recognizing what AI can do for journalism.

Editorial publications face existential challenges. This analysis by Benedict Evans documents a 75-year decline in per capita spending on news, a steady loss of audience attention as entertainment options multiplied. What began as gradual erosion became a crisis after 2000, when revenues collapsed globally. The Reuters Digital News Report from June 17, 2025 confirms the result: audiences now prefer social media, video platforms, and influencers over traditional news sources. The situation worsened rapidly after 2020: social platforms abandoned news distribution, search referrals plummeted, and now AI search summaries threaten to bypass news sites entirely.

Let’s be clear: it wasn’t increased productivity that eliminated journalism jobs, it was vanishing revenues. An increase in productivity is exactly what this business needs to compete. AI can provide it.

Beyond productivity, AI can transform journalism’s verification capabilities. Where humans spend hours tracing one quote, AI can instantly follow chains through dozens of retellings to the original source. It can check today’s statements against years of public records and catch contradictions no reporter would remember. This isn’t just faster fact-checking, it’s a fundamental quality shift. Instead of trusting ‘credible sources,’ journalists can verify every statement against its entire history. Instead of catching errors after publication, they can prevent them. And instead of asking readers to trust their judgment, they can show their work: this claim originated here, contradicted these previous statements, and spread through these channels. This matters, especially now, when the Reuters report identifies influencers as the public’s biggest concern for false information (47% globally), yet these same personalities increasingly shape news consumption.

AI can also discover hidden narratives across disparate sources. Like when newspapers in different regions suddenly start using identical phrases about a controversial policy issue, revealing coordinated PR campaigns or shared ownership structures. Or it can help establish indices tracking political positions across time and geography, potentially building analysis based on vast amounts of credibility-rated information.
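The first example above, identical phrasing surfacing across outlets, can be approximated with a simple n-gram overlap check. This is a minimal sketch of the idea, not a description of any production system; the outlet names and article texts are invented:

```python
from collections import defaultdict

def shared_phrases(articles, n=8):
    """Find word n-grams that appear in articles from more than one outlet.

    `articles` maps an outlet name to its article text. Long identical
    n-grams across supposedly independent outlets can hint at shared
    source material, e.g. a press release reused verbatim.
    """
    ngram_outlets = defaultdict(set)
    for outlet, text in articles.items():
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            ngram_outlets[" ".join(words[i:i + n])].add(outlet)
    return {gram: outlets for gram, outlets in ngram_outlets.items()
            if len(outlets) > 1}

# Invented example data: two outlets reuse the same sentence almost verbatim.
articles = {
    "Daily A": "The council said the new zoning plan will unlock much needed "
               "housing for working families across the region.",
    "Tribune B": "Critics disagree, but the new zoning plan will unlock much "
                 "needed housing for working families across the region, the "
                 "mayor argued.",
    "Gazette C": "Local residents protested the zoning plan at a town hall "
                 "meeting on Tuesday.",
}
matches = shared_phrases(articles, n=8)
for phrase, outlets in sorted(matches.items()):
    print(sorted(outlets), "share:", phrase)
```

A real system would normalize punctuation, tolerate small edits, and weight phrase length, but even this crude check flags the two outlets sharing a fourteen-word run while leaving the genuinely independent article untouched.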

But journalism’s greatest challenge may be the fragmentation of truth itself. It took some 400,000 people working on the Apollo program to put Neil Armstrong on the moon, but only one person to spread the idea that it was a hoax. Social media fragments viewpoints, valuable for democratic diversity but destructive when unchecked. Mass media once provided the consolidation necessary for democratic stability. We’re dangerously out of balance. While AI won’t solve this alone, it can help traditional journalism compete more effectively in this fractured landscape.

From Competition to Collaboration

Technology won’t determine whether AI serves or exploits journalism. The principles governing its use will. And those principles can’t emerge from individual newsrooms racing against each other, they require collective action.

Consider the current landscape through game theory. When every outlet fears being left behind, rational self-interest drives compromises: accepting AI company terms, implementing engagement algorithms, considering keyword optimization, signing deals with OpenAI. Each decision makes sense individually but degrades the ecosystem collectively.

The principles needed aren’t philosophical abstractions but operational specifics. Should AI-assisted content always carry human bylines? Will organizations disclose AI use transparently, or hide it behind traditional mastheads? Who gets access to these powerful tools, only newsrooms with editorial standards, or anyone willing to pay?

At my company, Open Mind, we’re making these choices explicit by publishing our Open Mind Charter later this month: no services to non-editorial entities, mandatory human signatures on AI-assisted work, no keyword stuffing, and a set of other principles. Next, we plan to collaborate with competitors to establish industry-wide standards before destructive competition makes cooperation impossible.

Individual company policies aren’t enough. Journalism has the infrastructure for collective response, even without enforcement mechanisms. Global organizations like the IFJ or UNESCO could define concrete AI principles, not vague aspirations but specific operational standards. Major news brands could lead by adopting these standards publicly. Instead of every outlet posting unread AI policies on their websites, they could simply declare: “We follow the Global Editorial AI Principles v2.0.”

This mirrors successful standardization in other fields. Software developers don’t write individual licenses; they adopt MIT, Apache, or GPL standards that everyone understands. These create common language and clear expectations. Journalism needs similar anchors, recognized standards that make principles actionable and verifiable.

The Reuters report shows that trust in established news brands remains crucial for verification, even as audiences fragment across platforms. This trust is journalism’s competitive advantage, but only if the industry protects it collectively. Individual outlets implementing AI responsibly while competitors cut corners won’t preserve that trust; preserving it requires an industry-wide commitment to shared standards.

The window for establishing these principles is open but narrowing. As AI capabilities advance and competitive pressures intensify, the cost of coordination rises. Soon, divergent practices may become too entrenched to harmonize. The question isn’t whether journalism will have AI principles; market forces guarantee that. The question is whether journalism will write them collectively, or let a patchwork of different rules be written by technology companies, advertisers, and individual outlets.

Will newsrooms recognize their shared interest in time? Can global journalism organizations move quickly enough? These are practical challenges that will determine whether AI amplifies journalism’s essential functions or accelerates its commercial dissolution.

Image source: gemini-2.5-flash-image-preview (aka Nano Banana), September 06, 2025

About the author:

Tor Kielland is the CEO and co-founder of Open Mind, a company specializing in AI-driven solutions for responsible news writing.