Artificial Intelligence (AI) is a rapidly evolving technology that can enhance access to information and support certain aspects of daily life. However, when applied to health care and mental health, AI should be approached with caution and through a harm reduction lens. Drawing on recommendations from the American Psychological Association, the American Psychiatric Association, the JED Foundation and other professional organizations, the following guidelines outline safer practices for using AI as a supplement to, not a substitute for, professional mental health care.
AI Harm Reduction Tips
AI is not recommended for providing emotional support, diagnosing mental health conditions, delivering treatment, responding to crises or emergencies, or offering medication advice.
Balance time spent using AI with other activities to reduce the risk of overdependence or addictive use.
AI can be useful in supportive tools such as mindfulness apps, stress-reduction exercises, habit trackers and sleep monitors, or for the kind of general guidance one might seek from a friend. It should never replace sessions with a trained mental health professional. Discussing your AI use with your mental health provider can help ensure it is used safely and appropriately.
AI is not appropriate for discussing self-harm, suicidal thoughts or other serious mental health concerns; its safeguards and confidentiality protections may not be sufficient for these situations.
Research suggests that AI companions can worsen mental health concerns, particularly for individuals experiencing isolation or vulnerability. Overuse may reduce in-person social connections, impede social skill development and lead to emotional distress.
AI is not a person, does not have empathy and is not a therapist, and it does not possess “superpowers.” Because AI generates responses from statistical patterns in data rather than genuine understanding, it can produce errors, misinformation or nonsensical responses. Any information it provides should be fact-checked, and it should never replace professional mental health services.