Thursday, November 6, 2025 | 5:00 AM - 6:00 AM
As generative AI continues to evolve, its potential to reshape mental health care is becoming increasingly clear.
This two-part webinar series explores how generative AI is transforming the mental health landscape, opening new paths of support for individuals around the world and expanding access to personalized care at a scale not seen before. As AI becomes integrated into therapy, support systems, and the lives of hundreds of millions of people, it’s essential to understand both the exciting opportunities and the challenges that lie ahead.
Part II takes on the tough questions about ethics in AI and mental health. Topics include privacy concerns; the risk of bias, particularly cultural bias and the tendency to favor mainstream perspectives that can exclude neurodiverse and underrepresented groups; and the crucial need for transparency and fairness in how AI is developed and used.
Key Discussion Points:
- Privacy and Data Security in AI Mental Health Tools: How can we protect sensitive user data while leveraging AI for personalized care? What standards should govern storage, sharing, and consent in AI-driven mental health applications?
- Bias, Representation, and Equity: How can we identify and mitigate biases in AI models that may marginalize neurodiverse individuals, underrepresented cultural groups, or non-mainstream perspectives?
- Transparency and Explainability: How important is it for users and clinicians to understand how AI makes decisions, and what strategies can ensure AI recommendations are interpretable and trustworthy?
- Ethical Boundaries of AI in Therapy: Where should we draw the line between AI as a supportive tool and AI making autonomous decisions in mental health care? What are the risks of over-reliance or misapplication?
- Regulation, Accountability, and Oversight: Who is responsible when AI fails or causes harm in a mental health context? How can policy, professional guidelines, and ethical frameworks keep pace with rapid technological development?

Fiona Costello
Fractional Strategic Initiatives Lead and Webinar MC
eMHIC
ENGLAND
Fiona Costello is a seasoned healthcare professional with over 15 years of experience spanning the NHS, private, and non-profit sectors. Currently serving as Strategic Initiatives Lead at eMHIC, International Business Development Director at Aire Innovate, and SVP Partnerships at Brain+, Fiona has consistently demonstrated her commitment to improving health services through technology. Her notable achievements include expanding digital mental health technology within NHS services at SilverCloud Health, contributing to the Canadian Mental Health App Assessment Standard with ORCHA, and enhancing digital health delivery in India. At Aire Innovate, Fiona advocates for the adoption of low-code platforms in healthcare, believing these tools can streamline processes and enhance patient care. With a focus on driving innovation safely and effectively, Fiona remains dedicated to learning and growing in the digital health field, always striving to contribute positively to healthcare outcomes through innovative solutions.

Smriti Joshi
Chief of Clinical Services and Ops
Wysa
INDIA
Smriti Joshi is a Clinical Psychologist with over 22 years of experience in the field of mental health, and a decade-long focus on digital mental health and remote care delivery. She brings both practitioner and lived client perspectives to her work, advocating strongly for ethical, inclusive, and clinically safe innovations in mental health care.
Smriti has been instrumental in shaping India's telemental health landscape. She contributed to the tele-counselling guidelines developed by the Indian Association of Clinical Psychologists (IACP), and trained hundreds of psychologists across the country to transition to teletherapy during the pandemic. Her work reflects a deep commitment to equipping mental health professionals with the skills and ethical grounding required for successful digital service delivery.
Recognized nationally and internationally as a thought leader in telemental health, Smriti has authored several articles, research papers and book chapters and is often invited to speak on the future of mental health in digital ecosystems. She is a passionate advocate for embedding emotional safety, equity, and human connection in digital mental health tools and systems.
Currently, Smriti serves as Chief of Clinical Services and Ops and Board Member at Wysa, a global mental health platform that combines AI-powered emotional support with human-led care. At Wysa, she leads efforts to ensure clinical quality, safety, and user trust while fostering innovation that meets the mental health needs of diverse populations.
She also serves as Vice President (North Zone) of the Clinical Psychology Society of India.

Nicole Martinez-Martin
Assistant Professor
Stanford Center for Biomedical Ethics
USA
Nicole Martinez-Martin received her JD from Harvard Law School and her doctorate in social sciences (comparative development/medical anthropology) from the University of Chicago. Her broader research interests concern the impact of new technologies on the treatment of vulnerable populations. Her graduate research included the study of cross-cultural approaches to mental health services in the Latine community and the use of neuroscience in criminal cases. Her recent work in bioethics and neuroethics has focused on the ethics of AI and digital health technology, such as digital phenotyping and computer vision, for medical and behavioral applications.
She has served as PI for research projects examining ethical issues in machine learning in health care, digital health technology, digital contact tracing, and digital phenotyping. She has examined policy and regulatory issues related to privacy and data governance, as well as bias in and oversight of machine learning and digital health technology. Her K01 career development grant, funded through NIMH, focuses on the ethics of machine learning and digital mental health technology. Her recent research has included examining bias, equity, and inclusion as they pertain to machine learning and digital health, as well as the social implications of privacy and data protections for marginalized groups.

