How CEE Newsrooms Are Drawing the Lines on AI

In the rush to adopt artificial intelligence, the question is often "What can the technology do?" But for the 18 participants of the CEE.AI Lab, the more critical question during the first phase of our accelerator was: "What should we allow it to do?"

Over the past weeks, newsrooms from Ukraine, Poland, Belarus, Slovakia, Romania, Moldova, Latvia, Lithuania, and Bulgaria have drafted their internal AI Ethical Charters.

While their operational models differ—ranging from the massive public broadcaster LRT to the agile, social-first Darik Digital—a powerful consensus has emerged. These policies reveal that in Central and Eastern Europe, where disinformation is a weapon of war, trust is the only currency that matters.

Here is a case study on how these newsrooms are defining the rules of the road.

1. The "Human in the Loop" is Non-Negotiable

Every single policy submitted shares one foundational commandment: AI does not publish. The human journalist is always the final barrier between the algorithm and the audience.

  • The "Pilot" Metaphor: The Kyiv Independent explicitly frames AI as a "tool" akin to a spell-checker or a transcriber, not a colleague. Their policy states: "AI does not replace journalists... It empowers us to do better work, but the final responsibility always lies with the human author."

  • Total Liability: Stirile Transilvaniei (Romania) codifies this strictly: "The human editor bears full responsibility for any content generated or assisted by AI." There is no "the computer made a mistake" excuse allowed.

  • The 80/20 Rule: Spider’s Web (Poland) suggests a practical ratio: AI can do the heavy lifting of data processing, but the "creative spark, ethical judgment, and final polish" must remain 100% human.

2. The "Red Lines": Where AI is Banned

The most revealing parts of these documents are the prohibitions. These "No-Go Zones" reflect the specific fears and threats of the region.

  • Security as a Life-or-Death Matter: For Hrodna.life, operating in exile from Belarus, data privacy isn't just about GDPR—it's about prison. Their policy explicitly bans entering any sensitive personal data or source information into public AI models like ChatGPT. The risk of a leak identifying a source inside Belarus makes this a hard red line.

  • Opinion & Analysis: Denník Postoj (Slovakia) and The Kyiv Independent draw a line at "thought." AI can summarize a report, but it cannot write an op-ed or analyze a political situation. The "soul" of the publication—its values and worldview—must come from biological neurons.

  • War Reporting: Texty.org.ua, known for its data journalism on the war, prohibits using Generative AI to create factual narratives about the conflict without a "triple-check" protocol. The risk of hallucination in war reporting is too high.

3. The Transparency Spectrum: When to Label?

This was the most debated topic. When does a "tool" become a "co-author"? The cohort adopted a nuanced, tiered approach.

  • The "Invisible" Tier: Most newsrooms (including LRT and TVNET) agree that using AI for spell-checking, translation, or transcription does not require a public label. This is considered standard editorial efficiency.

  • The "Assisted" Tier: Spider’s Web introduced a "Supported by AI" label for articles where AI helped structure the text or gather data, but the writing is human.

  • The "Generated" Tier: If an image is created by Midjourney, or a summary is written by ChatGPT, the disclosure must be prominent. LRT (Lithuania) mandates that audiences must be informed "clearly and immediately" if they are interacting with synthetic content. NewsMaker (Moldova) emphasizes that this transparency is key to distinguishing their brand from the flood of "junk" content sites.

4. The Visual Frontier: Deepfakes and Aesthetics

The newsrooms are particularly cautious about visual AI, fearing a degradation of reality.

  • No Synthetic Events: 24chasa (Bulgaria) and LRT strictly prohibit using AI to generate photorealistic images of events that could have happened but didn't (e.g., a fake photo of a protest). Real news requires real photos.

  • Illustrations vs. Photos: diez.md (Moldova) allows AI for generic illustrations (e.g., "a concept of inflation") but strictly labels them. They refuse to use AI to "improve" the faces of real people in news photos, viewing it as a distortion of truth.

5. Innovation with Integrity

Despite the rules, these policies are not anti-technology; they are pro-journalism.

  • Texty.org.ua encourages the aggressive use of AI for coding and data structuring, recognizing it as a superpower for small teams.

  • Darik Digital sees AI as a key partner in adapting content for social media, allowing them to reach younger audiences without burning out their staff.
