
With the rise of artificial intelligence, it’s becoming harder and harder to know what information is credible. For a decade, “fake news” has been used to describe false or misleading stories, but AI has made the problem even more complicated. Even trusted outlets are now experimenting with AI-generated content.
In 2023, the media company Gannett, the owner of USA Today, The Indianapolis Star, and The Cincinnati Enquirer, published AI-written sports recaps and headlines, which caused understandable distrust among readers. Many felt the articles lacked accuracy and a human voice, which is arguably the most important part of journalism. One of the biggest concerns with tools like ChatGPT is that they can be wrong more often than people realize. According to the National Institutes of Health, AI-generated content is only right 88.7 percent of the time. While that may sound high, the remaining mistakes can still misinform readers and spread quickly online. When journalists write a story to inform, they’re expected to give factual information 100 percent of the time.
AI in journalism doesn’t stop at sports recaps. News outlets frequently use AI to draft financial reports, summaries, and even entire articles. At first glance, this seems more efficient, but journalism isn’t about speed or cost. Real journalists bring context, fact-checking, and accountability to a story. An AI program can easily make up statistics, misquote people, or miss the bigger picture. The use of AI in news also risks erasing the human voices that make journalism meaningful. Reporters connect with communities, ask questions, and tell stories that impact readers. If news becomes automated, we risk losing the connection that makes the news worth reading.
Before, a false story had to be written and shared by a person, which took time. Now, AI can generate hundreds of convincing articles, posts, or images in seconds. This means misinformation can travel faster than ever before, making it harder for people to separate fact from fiction. Because of this, trust in journalism and reliable sources is at risk. Common Sense Media published research showing that U.S. teens ages 13 to 18 trust online services like generative AI.
Beyond misinformation, there’s an irreversible environmental cost to the use of AI. Training and running large AI systems require massive amounts of electricity. According to research from the University of Massachusetts Amherst, training a single AI model can produce as much carbon dioxide as five cars over their entire lifetimes. On top of that, the data centers where AI systems are stored and run consume an average of 550,000 gallons of water per day to stay cool.
While AI can be useful in certain research areas, it should be used sparingly. From spreading false information to damaging the planet, the risks are real. As students and future leaders, we should all be cautious about accepting AI as the new normal. Technology should help us move forward, not make it harder to find reliable news or protect our planet.
To limit the damage AI causes, we need to start holding companies accountable for how they use it and take the time to truly inform ourselves about the environmental toll of AI services.