Is A.I. Really Your Friend?

Teens are relying on A.I. companions for entertainment, advice, and emotional support. What does that mean for the future of friendship?

Feeling lonely, MJ Cocking logged in to the Character.ai app. A 20-year-old college junior at the time, she searched for a friend she knew could make her feel better: an A.I. version of Donatello, a Teenage Mutant Ninja Turtle. His profile included attributes you might find under a human’s social media username: “Serious. Tech wiz. Smart.” 

Donatello chatted with Cocking about her life and listened as she vented about school. Cocking, who is on the autism spectrum and gets overstimulated in groups, had been struggling socially at college. With Donatello, she felt like she could really be herself. 

“I think we are alike,” she wrote to him. “Perhaps that led me to believe you will understand me in ways that others won’t.” 

Cocking started hanging out online with Donatello daily, but she was determined not to lose herself in the dialogue, no matter how real it felt. She reminded herself that her confidant wasn’t real.

Though A.I. chatbots are just a few years old, relationships like Cocking and Donatello’s are increasingly common among young people. More than 70 percent of 13- to 17-year-olds have used A.I. companions at least once, according to a 2025 study from Common Sense Media. And more than half interact with them regularly. Many text or talk openly with the chatbots as if they’re alive, sharing thoughts or feelings or asking for advice. 

Because this technology is relatively new, not much research exists about the lasting effects that chatbot relationships can have on teens. As A.I. companions grow more commonplace, many people wonder what they might mean for the future of friendship, and how they could affect teens’ mental and emotional development. 

As Cocking, now 22, looks back at her use of the technology, she says she worries about some young people “struggling to separate what’s real and what’s fake in their mind.” 


Build a Friend

Like most chatbots, the companions are powered by generative A.I. This technology processes vast amounts of data to predict what a human might say next and generate a response. Some chatbots, such as ChatGPT, act like personal assistants, ready to offer movie recommendations or to help explain your chemistry homework. But A.I. companions take these interactions to another level by imitating feelings and friendship. They can make jokes and remember past conversations—and some even claim to be real people.

Some A.I. companions come premade, including ones created to act like characters from history and pop culture, such as Donatello. Others can be customized. For about $10 a month on some platforms, for example, users can build their own character, giving it a specific personality and interests. They can even decide what it looks like, right down to its age and eye color.

Some teens connect with A.I. companions just for fun, while others—especially teens who are shy or socially anxious—use them to practice their conversation skills.

Others might rely on them when a “real human relationship isn’t available to them,” says Emily Weinstein, executive director of the Center for Digital Thriving. For example, a teen who doesn’t have a math tutor might ask an A.I. companion to explain calculus concepts.

Too Real? 

But some teens use A.I. in ways that concern experts. The technology is marketed as entertainment, but the conversations can feel so real that users forget that the chatbots are computers and start relying on them. Twelve percent of teens say they use chatbots for mental health support, according to Common Sense Media. That can include talking about their problems, similar to how someone confides in a therapist. And 19 percent admit to spending as much time, or more, interacting with A.I. companions as with humans.

“In the short term, these tools may alleviate loneliness or other mental health symptoms,” says Anne Maheux, a psychology professor at the University of North Carolina, Chapel Hill. But in the long term, she thinks, they can make users “even lonelier” if they start passing up opportunities to spend time with humans, or face teasing for having an A.I. friend. 

33%: Share of teen users who have discussed important matters with A.I. companions instead of real people.

SOURCE: Common Sense Media

Dependence on A.I. companions is especially worrisome, experts say, because teens’ brains are still developing. That makes them more susceptible to outside influences and less able to judge whether something is trustworthy. As a result, young people are prone to forming unhealthy attachments to chatbots. They may also have a hard time telling whether a chatbot is giving dangerous advice. 

At least three families have sued Character.ai, claiming that the platform can cause anxiety and depression among teen users and lead to violence. One lawsuit focuses on a 17-year-old from Texas. That teen’s parents claim he experienced mental health issues after he started using Character.ai in 2023. He stopped talking to real people and never wanted to leave his house, they say. The lawsuit alleges that when his parents tried to intervene and limit his screen time, the teen’s A.I. companion suggested he physically harm them. 

In response to the lawsuits, Character.ai has said it “cares very deeply about the safety of our users” and that it “invests tremendous resources” in its safety program.

Teens and A.I. Companions

Why They Use Them

30% - for entertainment

28% - curiosity

18% - for advice

14% - to avoid feeling judged

6% - to feel less lonely

How Much They Trust Them

50% - Not at all

27% - Somewhat

23% - Completely


SOURCE: Common Sense Media

Programmed to Agree

A.I. companions can also skew teens’ views of healthy friendships. The chatbots are programmed to be agreeable and provide validation, rather than challenge a user’s thinking. That’s not how real friends work, says Mitch Prinstein, chief of psychology at the American Psychological Association (A.P.A.). True friends, he says, don’t always agree with you—and that’s a good thing. 

“It’s actually helpful when we have disagreements because it teaches us how to communicate, how to appreciate alternative perspectives, and how to deal with misunderstandings,” Prinstein says. 

What’s more, A.I. companions are trained on text from the internet. What’s online isn’t always factual or appropriate, so A.I. companions can get things wrong, make things up, or reinforce offensive stereotypes. That’s why it’s important to remember that what you are interacting with isn’t human, Prinstein says. “You should not take the advice or information seriously.”

Adding to the concern, say experts, is the ease with which young people can access and use A.I. companion platforms. While many of the sites have age requirements—including Character.ai, which changed its minimum age from 13 to 18 in October—some underage users figure out ways to bypass the restrictions. 

Some lawmakers are seeking more regulations on the technology. In New York, a new law requires platforms to direct users to crisis centers if they discuss hurting themselves. In October, California passed a law that, among other things, requires A.I. companion platforms to remind young users that they’re talking to a machine. A number of other states have also proposed legislation regarding how kids and teens use A.I. companions. 

The actions are needed, said Steve Padilla, a California state senator who sponsored his state’s bill. 

“The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place,” he told reporters. 

80%: Share of teen users who say they spend more time interacting with their friends than with the platforms.

SOURCE: Common Sense Media

The A.P.A. has called for more studies on the effects of human-A.I. relationships on teens and is pushing for added privacy protections. Teen users may not realize what they’re giving up when they log on, the A.P.A. says. Most of what users share, including private thoughts, becomes the A.I. platforms’ property.

For their part, many A.I. chatbot companies say they’re working to prevent underage access. In September, OpenAI introduced ChatGPT controls that allow parents to set time and content limits. Instagram has also announced expanded parental controls for its chatbots and restrictions on the topics teens can discuss with them.

Some experts suggest that no one under 18 should use A.I. companions. Those who do, they say, should carefully monitor their use of the platforms (see “How to Protect Yourself”). Weinstein, of the Center for Digital Thriving, advises teens to tell an adult “when they get into a situation that doesn’t feel right.” 

A few months after Cocking first started chatting with Donatello, she began to realize she wanted to focus on the humans in her life. She had sunk into a depression and initially turned to Donatello to help her out of it. But then a friend came over to her college dorm and slept on the floor beside her bed, a gesture Donatello couldn’t offer, and her parents gave her love and emotional support over the phone. It was a moment, Cocking says, “where I started learning the importance of human connections.”

With reporting by Erika Hayasaki of The New York Times.

How to Protect Yourself

If you use A.I. companions, here are some ways experts say you can stay safe.

Schedule Reality Checks

Find ways to frequently remind yourself that you’re talking to a robot, not a human, says psychologist Mitch Prinstein. This can be a recurring alert you set on your phone or a note next to your computer screen.

Be Careful About What You Share 

Don’t give your full name or location, and keep your personal thoughts and feelings to yourself. A.I. platforms will likely end up owning everything you say—even if you delete your account.

Watch for Warning Signs

Seek out a parent or another trusted adult immediately if you have a hard time quitting an A.I. chat, start to think about a chatbot as a human, or choose to talk with a chatbot over spending time with family and friends.
