The Los Angeles Times and AI: Can Technology Restore Trust in a Troubled Newsroom?

[Image: A digital representation of AI analyzing news articles with political bias labels]

The Los Angeles Times is undergoing major upheaval, as its billionaire owner, Patrick Soon-Shiong, makes sweeping changes to the nearly 150-year-old publication. Repeated rounds of layoffs and allegations of editorial interference have severely impacted staff morale. Now, Soon-Shiong is turning to artificial intelligence as the supposed solution to restore public trust in the paper and improve its financial outlook.

This week, Soon-Shiong announced the introduction of an AI-powered bias meter designed to assess and label opinion articles based on their perceived political alignment. Developed by the startup Particle.News—founded by ex-Twitter engineers—the tool categorizes articles as “Left, Center Left, Center, Center Right, or Right.” Additionally, a separate AI-powered section called “Viewpoints,” created by Perplexity AI, will present contrasting perspectives alongside opinion pieces.

These features won’t be limited to traditional opinion columns but will extend to any article deemed to express a particular stance, according to a statement given to The Guardian. To distinguish such content from straight news reporting, these articles will now be marked with a new “Voices” label, while standard news pieces will remain unaffected by the AI features.

A Push for Transparency—or Just Another Layer of Editorial Control?

Public trust in the media is hovering near record lows, and experimenting with new methods to regain credibility makes sense. However, critics argue that Soon-Shiong's efforts to promote transparency ring hollow, given his own history of editorial interference.

In 2024, the billionaire was widely criticized for sharing an altered opinion piece that, according to its original author, was modified post-submission to portray Robert F. Kennedy Jr. in a more favorable light. Further controversy erupted when several staff members resigned after Soon-Shiong reportedly meddled in coverage of an altercation involving one of his personal acquaintances.

Such actions undermine the credibility of his AI-driven push for journalistic integrity. If a newspaper’s owner has a track record of overriding editorial decisions, how much trust can readers place in an algorithm that categorizes bias?

The L.A. Times Guild Speaks Out Against AI-Generated Viewpoints

The Los Angeles Times’ editorial union is not necessarily opposed to providing readers with multiple perspectives. However, it strongly objects to using AI to do so. Unlike human-edited copy, AI-generated viewpoints will not be scrutinized for accuracy, and AI models are notorious for fabricating information. Just recently, ChatGPT listed Oscar nominees from the wrong year, raising concerns about its reliability in journalism.

According to The Guardian, the AI tool has already suggested alternative viewpoints that were, in fact, already addressed within the articles themselves—suggesting a lack of real comprehension.

“The money for this initiative could have been better spent on supporting journalists in the field,” argues Matt Hamilton, vice president of the L.A. Times Guild. “We haven’t had a cost-of-living raise since 2021, yet the company is funding AI tools instead of investing in its own newsroom.”

The Risks of an AI-Dominated News Landscape

The rise of AI-generated journalism raises serious concerns about misinformation. As more AI-written content floods the internet, these outputs may eventually be used to train future AI models—creating a cycle of pseudo-factual information with little human oversight.

Perplexity AI, one of the key players in this initiative, has already drawn criticism for its stance on journalism. The company has been accused of scraping news articles without permission and repackaging them in chatbot responses under the guise of fair use. During a labor strike at The New York Times, Perplexity’s CEO, Aravind Srinivas, even suggested that AI could replace striking journalists altogether.

If Soon-Shiong follows the same logic, could the next step be AI-generated news articles at the LA Times? Given his willingness to sideline human journalists, it wouldn’t be surprising.

The Billionaire Influence on Journalism

Other major media organizations are also experimenting with AI, albeit in subtler ways. The Washington Post, for instance, now uses AI to summarize articles and highlight key takeaways. However, the paper is also facing internal turmoil. Billionaire owner Jeff Bezos, once a hands-off investor, is now exerting greater editorial influence, steering the opinion section toward an aggressively pro-capitalist stance. The Post has already lost hundreds of thousands of subscribers since Bezos reportedly blocked an endorsement of Kamala Harris.

Just a few years ago, billionaires like Bezos and Soon-Shiong were hailed as the saviors of traditional journalism, shielding newspapers from the financial devastation caused by the internet. In retrospect, it was naive to believe that these wealthy owners would act as impartial stewards of the free press. Conflicting interests have always loomed large. Now, the priority for Bezos seems to be maintaining good relations with President Trump—potentially securing government contracts for his aerospace venture, Blue Origin.

The Future of AI in Journalism

The integration of AI into newsrooms is inevitable, but whether it enhances journalism or erodes it remains to be seen. AI has the potential to assist journalists, streamline workflows, and offer readers additional context. However, when wielded by owners with political or financial motives, it risks becoming yet another tool for control and obfuscation.

For now, the Los Angeles Times’ AI-driven bias meter is being framed as a step toward greater transparency. But given its owner’s history, many are questioning whether this is truly about building trust—or simply another way for a billionaire to reshape the narrative in his favor.