Earlier this month, the Italian newspaper Il Foglio experimented with handing the reins over to AI. In an announcement that was itself written by AI, Il Foglio inaugurated new editions designed and written top to bottom by ChatGPT Pro, claiming that major news outlets have already been implementing the technology "with the aim of improving the production, distribution and accessibility of information." And the AI missed the mark.

After just a week of what was meant to be a month-long experiment, Claudio Cerasa, editor of Il Foglio, revealed that its real purpose was to "sound the alarm about the future of journalism" and illustrate the importance of original reporting by real-life human beings. At Il Foglio AI's launch, The Guardian reported that "The articles were structured, straightforward and clear, with no obvious grammatical errors" (a polish Cerasa later admitted came from two humans manually, and heavily, editing each edition before publication), but that "none of the articles published in the news pages directly quote any human beings."

Further into the experiment, Cerasa found that the AI had hallucinated entire stories and events, and had plagiarized or reported inaccurately on real ones; some of these were publicly fact-checked by Alexios Mantzarlis, director of Cornell Tech's Security, Trust and Safety Initiative and former director of the International Fact-Checking Network. The paper's editors began pulling back, letting stories run riddled with spelling and grammatical errors, but removed fabricated news to preserve the integrity of the newspaper, adding a disclaimer in place of the byline that warned, "Text made with AI."

Bad timing? The controversial and covert experiment launched at a moment when mistrust in official news outlets is skyrocketing and media organizations worldwide are testing the waters of AI integration. Politico writer Giulia Poloni called deliberately exposing readers to low-quality writing and reporting "playing with moral fire at the worst possible moment in history."

What can journalists and readers learn from this? Charlie Beckett, an expert on AI in journalism at the London School of Economics and Political Science, believes that the future of AI in journalism will (and should) be limited to assisting human writers and fielding queries from readers. Anything beyond that, he warns, such as "creating original content […] is really dangerous," and both journalists and readers should approach the technology with a grain of salt.

AI is still far from producing human-quality articles, and ChatGPT concurs. After one week of running AI-generated editions, Cerasa asked ChatGPT to assess its own work. The verdict? "Artificial intelligence can write well," but "writing well is not yet journalism." Cerasa's advice to journalists is to do what they can to be better than the machines, which means holding themselves to a high standard of accuracy and originality. Beckett agrees: "It's easy to mock [AI] for a few inaccuracies, but, you know, I can find distortions and inaccuracies in mainstream media every day."