What This Teenager Wants to Share About AI’s Harmful Impacts

A new study reveals that many teens find it challenging to distinguish between real and fake online content.

My friend Sammy spun around in his chair, grinning as he waved his phone in front of me—his way of saying, “Check this out!” The screen showed a video of a capybara floating upright in a pool, moving its limbs as if treading water like a human.

A year ago, I would have accepted this as fact—capybaras swim like people.

Now, I wasn’t so sure.

Although teens spend much of our time online, a recent Common Sense Media study reveals that those of us aged 13 to 18 are growing increasingly skeptical of the content we encounter.

With generative AI—technology that creates images, text, and videos—fabricated visuals are now produced effortlessly. My friends and I have noticed these AI-generated images flooding social media.

What’s real and what’s not?

According to the study, many teens find it difficult to distinguish between real and fake online content. About 46% admitted they have either been misled or suspect they have, while 54% have come across visuals that were real but deceptive.

The survey, conducted by Ipsos Public Affairs for Common Sense Media between March and May 2024, gathered data from 1,045 American adults (18 and older) who are parents or guardians of teens aged 13 to 18, along with responses from one teenager in each household. While some teens may genuinely excel at spotting fake images, it’s possible that others simply don’t realize they’ve been deceived by AI-generated content.

“Many kids don’t notice flaws unless they’ve been trained to spot them,” said Robbie Torney, senior director of AI programs at Common Sense Media. “Since media can be altered, edited, or entirely fake, it’s essential to develop critical thinking skills when evaluating information.”

The majority of teens have recognized misleading content online.

I like to think I’m pretty good at identifying AI-generated images, but there was a time when I believed I couldn’t be fooled. Back in the early days of generative AI, it was easy to spot—people in AI-created photos often had 15 fingers. Now, I’m not so confident.

Over 70% of surveyed teens who have encountered misleading visuals say it has changed how they judge online content. I’m one of them. After seeing too many “historical” images that turned out to be fake, I’ve become skeptical of most pictures unless they come from someone I trust. My first instinct now is to check the fingers in human images, knowing AI still struggles to get them right.

I’ve also started scanning the comments on almost every post to see if others question its authenticity. A sense of doubt now influences how I interpret everything I see online.

Since third grade, I’ve been taught to verify sources and not believe everything I read. I’ve always been cautious about online information, so AI hasn’t suddenly shattered my trust. What has changed is my confidence in images. “Seeing is believing” doesn’t apply anymore.

Before AI, while photos could be edited with Photoshop, they were still based on real people and real moments—just altered. But as generative AI advances, spotting entirely fake content is becoming increasingly difficult.

Visuals are a primary form of communication for many teens.

My friends and I often communicate through images and videos. Memes serve as their own language, while sharing an Instagram post with an unusual scientific fact or a historical photo shows an understanding of what a friend finds interesting. Sending funny video clips is a way of showing affection—it’s a reminder that you remember someone and want to share an inside joke.

Although I no longer use social media as much, I still see hundreds of images online daily. Other teens who spend more time on Instagram or TikTok might see thousands. But if many of these “photos” are just AI-generated composites with no connection to reality, what can I trust? I already question everything I read online, and now this skepticism is spreading beyond the internet.

Research shows that teens already have low confidence in institutions like the government and news media. Among my friends, doubt about the world is everywhere. It’s common to hear dismissive remarks—even about our textbooks in class.

Phrases like “Maybe that happened,” “Could be true,” and “I don’t believe it” are thrown around constantly. My classmates and I often question the news, history, and authority figures. If we can’t trust what we read, hear, or see online, why believe anything at all?

For a generation in which skepticism might become the default, what can we rely on? What should we hope for? If everything might be false, what’s the point of caring? One of my teachers believes Generation Z—those born between 1997 and 2012—are unknowing nihilists. My friends and I wrestle with these questions, and I think my teacher might be right.

Generative AI should come with clear labeling.

How can this be addressed? According to the Common Sense study, 74% of teens believe generative AI should carry clear warnings about potential harm, bias, or inaccuracy. Additionally, 73% want AI-generated content to be labeled or watermarked to indicate its source.

This demand stems from a growing uncertainty about what can be trusted. I want to believe what I see, and the idea of AI-generated content being clearly marked as artificial is reassuring.

“The rising distrust in AI reflects historical struggles with media literacy,” said Robbie Torney via email. “Just as we learned to evaluate traditional media by asking, ‘Who created this?’ and ‘Why was it made?’, we now need to apply the same critical thinking to AI-generated content.”

While this approach makes sense, it still means we can’t take anything at face value. Instead, we must analyze, question, and trust our own judgment to reach the right conclusion.

Many argue that social media and excessive screen time contribute to loneliness due to a lack of real-world connection. That may be true, but if trust continues to erode, I fear we’ll become even more isolated—more individualistic, forming weaker and more superficial relationships.

Distrust shouldn’t be our automatic response.

If mistrust becomes the default way of living, what’s the point of doing anything for others we don’t already know? Can we truly form deep connections or communicate effectively?

How can we build a meaningful life when we can’t distinguish between what’s real and what’s fake, when we can’t trust what we see, learn, or how the world operates?

Artificial intelligence is, after all, artificial. When key tools like memes and social media—used by teens to connect with real friends—are tainted by artificiality, how do we forge authentic relationships? And when the internet becomes the main way teens learn about the world beyond their schools and communities, can we ever truly understand it?
