
The Chicago Sun-Times and the Philadelphia Inquirer find themselves at the center of an AI-related gaffe after they published syndicated content packed with quotes attributed to fake experts and recommendations for imaginary book titles, created using generative artificial intelligence.
The articles were published in the papers' "Heat Index" special sections - a multipage insert filled with tips, advice and articles on summertime activities. The insert, which was published by the Sun-Times on Sunday and by the Inquirer on Thursday, was syndicated by King Features, a service from the Hearst media company that produces comics, puzzles and supplemental material. (King Features did not respond to a request for comment.)
The use of AI-generated content in the insert was first reported by 404 Media, a tech-focused news publication, after it was shared across social media Tuesday by writers and podcasters who discovered the stories in print.
Many pointed out quotes attributed to experts and professors who don't seem to exist, or at least don't have a significant online presence. Other pieces in the package featured quotes that social media sleuths said could not be found online - such as one from Brianna Madia, the author of the van-life book "Nowhere for Very Long," talking about hammock culture to Outside Magazine in 2023. Interviews she did with the magazine in 2017 and 2019 did not feature any discussion of hammocks, and she does not appear in any of the magazine's 2023 stories online.
The section's "Summer reading list for 2025" recommended not only fake books such as "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir, but also imaginary titles from authors Brit Bennett, Taylor Jenkins Reid, Min Jin Lee and Rebecca Makkai. (The list does feature some real books, including Françoise Sagan's "Bonjour Tristesse" and André Aciman's "Call Me by Your Name.")
"It is unacceptable for any content we provide to our readers to be inaccurate. We value our readers' trust in our reporting and take this very seriously," Victor Lim, senior director of audience development for Chicago Public Media, said in a statement.
"We've historically relied on content partners for this information, but given recent developments, it's clear we must actively evaluate new processes and partnerships to ensure we continue meeting the full range of our readers' needs," he added.
Lisa Hughes, the publisher and CEO of the Philadelphia Inquirer, said the special section was removed from the e-edition after the discovery was made. "Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach," she said in a statement to The Washington Post.
Much of the content for the section was written by Marco Buscaglia, a Chicago-based freelance writer who told The Post in an interview Tuesday that he used AI chatbots during the writing process. Buscaglia said the insert, which he began writing in February with a March deadline, wasn't written with any specific cities in mind, and that he didn't know which newspapers would run it.
Buscaglia said there was "no excuse" for not double-checking his work. When he started writing the recommended books list, Buscaglia said, he considered looking at Goodreads or calling local bookstores for recommendations. But instead, he asked AI chatbots for help. (Buscaglia said he was unsure which chatbot he used, though he said it was either ChatGPT or Claude.)
"I'm very responsible about it. I do check things out, but in this case, I mean, I totally missed it," he said about using AI in his reporting. "I feel like, if given the opportunity, I would approach these things differently and have a lot, you know, obviously better set of filters."
"I do feel that it also misrepresents the Sun-Times, the Philadelphia Inquirer," he said, adding: "I feel bad about that, too - that the papers somehow [get] associated with that."
The misstep comes as the media industry wrestles with the advent of AI. Large language models and AI chatbots don't always search the web for information, relying instead on the data they were trained on, which can lead them to produce incorrect or fabricated information. Critics have said that newspapers that use AI tools risk exposing readers to low-quality reporting and misinformation, contributing to a rising mistrust of journalism.