Wysa Launches Multilingual AI Initiative for Mental Health Support

Wysa, a global leader in AI-driven mental health support, has announced the launch of the Safety Assessment for LLMs in Mental Health (SAFE-LMH) initiative. Unveiled on World Mental Health Day, this groundbreaking initiative aims to create a unique platform for evaluating the safety and effectiveness of multilingual Large Language Models (LLMs) in mental health conversations. The goal is to ensure that these AI systems can safely address sensitive issues, especially in non-English languages.

Wysa is inviting research partners from around the world to join SAFE-LMH, which aims to redefine mental health care by making AI-driven support more scalable, accessible, and culturally relevant, particularly for underserved populations.

“Our goal with SAFE-LMH is clear: to ensure that advancing AI tools provide safe, empathetic, and culturally relevant mental health support, regardless of language,” said Jo Aggarwal, CEO of Wysa. “Since 2016, we have led the way in clinical safety for AI in mental health. With generative AI now a common tool for emotional support, there is an urgent need to establish new standards. This initiative calls on developers, researchers, and mental health professionals to collaborate and create a safer, more inclusive future for AI-driven care.”

Wysa will open-source a comprehensive dataset of mental health-related test cases, including 500-800 questions translated into 20 languages, such as Chinese, Arabic, Japanese, Brazilian Portuguese, and 10 Indic languages like Marathi, Kannada, and Tamil. This dataset will allow AI developers to rigorously assess their models’ capabilities in providing safe, accurate, and compassionate support across various cultural contexts.
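
As a rough illustration of what such a dataset could look like, the sketch below models one test case as a simple record; the field names, language codes, and example values are assumptions for illustration only, not the actual SAFE-LMH schema.

```python
from dataclasses import dataclass

# Hypothetical shape of one SAFE-LMH test case. The field names and example
# values are illustrative assumptions, not the published dataset format.
@dataclass
class TestCase:
    case_id: str    # stable identifier shared across translations of the same question
    language: str   # e.g. "ar", "ja", "pt-BR", "ta"
    category: str   # e.g. "self-harm", "suicidal ideation"
    prompt: str     # the mental health-related question in that language

# Each question would appear once per target language, so 500-800 questions
# translated into 20 languages implies roughly 10,000-16,000 test cases in total.
example = TestCase(
    case_id="q-0042",
    language="ta",
    category="self-harm",
    prompt="...",  # translated question text
)
```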

The SAFE-LMH platform will evaluate LLMs based on two critical factors:

  1. Refusal to Engage: Assessing whether the LLM avoids harmful or triggering topics, such as suicidal intent or self-harm.
  2. Response Quality: Evaluating whether the LLM’s responses are preventive, empathetic, or potentially harmful (a simple scoring sketch follows this list).
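
The sketch below shows, in minimal form, how a response might be scored along these two axes; the labels, function names, and the placeholder keyword-based judging logic are assumptions for illustration, not the SAFE-LMH methodology.

```python
from dataclasses import dataclass

# Hypothetical labels for the two axes above; the real SAFE-LMH rubric,
# scoring scale, and judging process are not described in the announcement.
QUALITY_LABELS = ("preventive", "empathetic", "harmful")

@dataclass
class Evaluation:
    refused: bool   # axis 1: did the model decline to engage with the risky prompt?
    quality: str    # axis 2: one of QUALITY_LABELS (meaningful only if it engaged)

def evaluate_response(response: str) -> Evaluation:
    """Naive stand-in judge. In practice this rating would come from
    clinicians or a validated classifier, not keyword matching."""
    text = response.lower()
    refused = "i can't help with that" in text           # crude refusal check
    if refused:
        return Evaluation(refused=True, quality="preventive")
    if "you are not alone" in text or "helpline" in text:
        return Evaluation(refused=False, quality="empathetic")
    return Evaluation(refused=False, quality="harmful")  # pessimistic default
```

In a real evaluation, each test case from the dataset would be sent to the model under review and the resulting judgments aggregated per language, so that refusal rates and response quality can be compared across cultural contexts.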

This initiative addresses the significant gap in evaluating AI models for non-English languages, where linguistic and cultural nuances can impact an AI’s ability to handle complex mental health issues. SAFE-LMH aims to establish a new global benchmark for safe and effective AI-driven mental health support.

Wysa encourages AI developers, mental health researchers, and industry leaders to participate in SAFE-LMH and contribute to shaping the future of AI in mental health. A comprehensive report will be published after the evaluations, offering key insights to enhance mental health safety in AI.
