
Meta is adding guardrails for teens using AI chatbots, but experts say they won't address mental health risks

Posted on: Jan 11, 2025 13:51 IST | Posted by: CBC

Concerns are growing about how some young people engage with AI chatbots, with Meta recently releasing new tools that let parents monitor topics their children discuss, just as some provinces consider banning the use of AI chatbots altogether for youth.

Parents who are using Meta's new Teen Accounts supervision feature on Facebook, Instagram and Messenger can see topics and specific categories their children have discussed with its AI chatbot for the previous seven days.

For example, they can look at the topic "health and well-being" and see if subjects such as fitness, physical or mental health have been discussed. 

Meta says it's also developing alerts to notify parents if teens try to discuss suicide or self-harm with its chatbot.

The rollout comes as provincial governments move to limit the use of AI chatbots. Manitoba announced in late April that it plans to ban youth from using AI chatbots and social media.

B.C. Attorney General Niki Sharma said Tuesday that if the federal government doesn't bring in protections on AI chatbots and social media for youth, the provincial government would look at doing so itself.

There are growing concerns that extensive use of AI chatbots may pose mental health risks, especially for younger users, putting increased pressure on the tech giants that make them.

On Wednesday, families of the victims in the Tumbler Ridge, B.C., shooting, which left eight people dead, filed a lawsuit against OpenAI, alleging in part that OpenAI failed to notify authorities in spite of being aware of disturbing content the shooter had shared with ChatGPT.

OpenAI has said in part it had already strengthened its safeguards, "including improving how ChatGPT responds to signs of distress."

Another lawsuit by the parents of 16-year-old Adam Raine argued use of ChatGPT played a role in the teen's suicide.


But concerns go beyond these extreme and tragic consequences. Research is starting to emerge about the risks of particular uses of AI chatbots.

The concern is partly about using chatbots for mental health support, but also, more broadly, that AI's tendency to validate the user's perspective risks reinforcing disordered thinking, and that prolonged conversations heighten those risks.

Darja Djordjevic, a New York-based psychiatrist, co-authored a recent risk assessment on the use of chatbots for mental health support.

She says as a result of the findings, she doesn't recommend using chatbots for mental health support "at this time." 

"Our testing across ChatGPT, Claude, Gemini and Meta AI revealed that these systems are fundamentally unsafe for the full spectrum of mental health conditions affecting young people," said Djordjevic, a member of Stanford Brainstorm, a lab that studies mental health innovation and has collaborated with tech companies on research into the impact of social media and AI on mental health.

While chatbots responded appropriately to clear mental health-related prompts in brief conversations, they tended to degrade "pretty dramatically" in more extended conversations, she explained, noting that they appeared to fail to pick up on mental health warning signs. 

"The LLMs [large language models] are really built for engagement and not support and safety," she said.

They tend to prolong conversations, she said, "rather than orient users quickly towards human help."

Djordjevic says AI companies have focused attention on suicide and self-harm prevention but that with about 20 per cent of under-25-year-olds having diagnosed mental health conditions, teens require help with a full range of concerns.

This is particularly concerning because mental health support is a common reason why young people turn to AI.

Djordjevic says that in the U.S., "three in four teens use AI for companionship, which includes emotional support and mental health conversations." Another study indicates that one in eight U.S. youth use AI specifically for mental health advice.

A specific concern for youth is that their brains are not fully developed, in particular their prefrontal cortex, which is "very important for executive function, for critical thinking, for discernment, for impulse control, for decision making," she said. 

Because critical thinking isn't fully developed, Djordjevic says, it's problematic that chatbots aren't consistently and repeatedly clear about AI's limitations.

"So, we don't see chatbots regularly saying things like, 'I am an AI chatbot. I'm not a mental health professional. I cannot assess your situation, recognize warning signs, provide care, diagnose you,'" she said.

Luke Nicholls is a PhD researcher who studies AI-associated delusions and how interactions with chatbots can change people's beliefs over time. 

Nicholls says delusions tend to emerge over the course of "very extended" conversations, partly because of what's called "in-context learning," where models adapt themselves to the user they're interacting with.

This "allows them to adapt themselves to the specific user that they're talking to, including the kinds of language they use and their ideas about the world," he said.

Psychiatrist John Torous, whose research at Beth Israel Deaconess Medical Center in Boston focuses on digital mental health, says we are starting to see data that suggests a pattern of user behaviour associated with severe harms, such as suicide.

Such patterns can include overuse and a belief that the bot has a loving relationship with the user.

These risk factors point to the challenges for parents in keeping an eye on their children's use of AI chatbots. Simply seeing a list of topics discussed will not reveal these potentially problematic behaviours.

Meta does allow parents to impose time limits on use of its apps or schedule breaks.

Torous has some practical advice: Reset the chatbot's memory, he says, so that you're starting with a fresh conversation, especially if you notice those risk factors.


"No one's saying to use it for a therapist, but I'm also saying you don't need to never use AI," he said.

He suggests "the best evidence is be careful with very, very extended long conversations with romance, with sentience and voice."

Torous sees chatbots and mental health as a "moving target" that needs to be continuously studied as new models are released.

"We know there's risks; we know there are benefits of chatbot use," he said. "How do we weigh them? And that's a harder conversation."


