Serving Whom?

Ethical and Practical Limits of AI Mental Health Chatbots for Marginalized Communities

Authors

  • Omotunde Falade, Stanford University

DOI:

https://doi.org/10.60690/v4rcyz59

Keywords:

AI mental health chatbots, Health equity, Culturally responsive design

Abstract

AI mental health chatbots offer a promising response to the care gap faced by underserved communities, especially during periods of social isolation and health crisis. These tools provide round-the-clock, low-cost, and stigma-free support. Yet, as this paper explores, current implementations face challenges that may limit their long-term efficacy and equitable impact. Drawing on recent empirical studies and ethical scholarship, I evaluate the capabilities and constraints of AI chatbots as short-term mental health supports and propose safeguards that promote inclusive, culturally relevant, and ethically responsible deployment. This study presents new empirical insights from a qualitative interview study with 18 low-income, first-generation community college students of color, whose reflections on chatbot use underscore the importance of trust, cultural resonance, and long-term engagement. Rather than comparing or critiquing specific products, this research centers community voices and advocates for collaborative, community-informed improvement. It also draws on an ethically sourced and protected dataset created by Dr. Harriett Jernigan at Stanford University, which provides a compelling counterexample of culturally specific chatbot design.

Published

2025-04-03