The Effectiveness of Digital Cognitive Behavioral Therapy to Treat Insomnia Disorder in US Adults: Nationwide Decentralized Randomized Controlled Trial
Cognitive behavioral therapy (CBT) is recommended as the first-line treatment for insomnia; however, few patients have access to it. A new class of Food and Drug Administration (FDA)-regulated digital CBT treatments has the potential to address this unmet need. These treatments are ordered or prescribed by health care providers and are fully automated, delivering CBT directly to patients without human coaches. This trial builds upon promising earlier digital cognitive behavioral therapy for insomnia (CBT-I) research by using a decentralized design to recruit a sample with greater representation of the US general population, including individuals from lower socioeconomic status groups who often face greater barriers to care.
Differentiating Pediatric Bipolar Disorder, Attention-Deficit/Hyperactivity Disorder, and Other Psychopathologies Using Self-Reported Mood and Energy Data and Actigraphy Findings: Correlation and Machine Learning-Based Prediction of Mood Severity
Distinguishing pediatric bipolar disorder (BD) from attention-deficit/hyperactivity disorder (ADHD) is challenging due to overlapping fluctuations in mood, energy, and activity. Combining objective actigraphy with self-reported mood and energy data may aid differential diagnosis and risk monitoring.
Artificial Intelligence in Mental Health Services Under Illinois Public Act 104-0054: Legal Boundaries and a Framework for Establishing Safe, Effective AI Tools
Artificial intelligence (AI) applications in mental health have expanded rapidly, and consumers are already using freely available generative AI models for self-guided mental health support despite limited clinical validation. In August 2025, Illinois enacted Public Act 104-0054, the first state statute in the United States to explicitly define and regulate the use of AI in psychotherapy services, establishing boundaries around administrative support, supplementary support, and therapeutic communication. While the Act clarifies several aspects of AI use in therapy, it also leaves important gray areas, such as whether AI-generated session summaries, psychoeducation, or risk-flagging functions should be considered therapeutic communication. Drawing on the history of empirically supported treatments in psychology, we argue that a framework of evidence, safety, fidelity, and legal compliance could help determine when AI tools should be integrated into clinical care. This approach provides a concrete pathway for balancing patient protection with responsible innovation in the rapidly evolving field of mental health AI tools.
Delusional Experiences Emerging From AI Chatbot Interactions or "AI Psychosis"
The integration of artificial intelligence (AI) into daily life has introduced unprecedented forms of human-machine interaction, prompting psychiatry to reconsider the boundaries between environment, cognition, and technology. This Viewpoint reviews the concept of "AI psychosis," which is a framework to understand how sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Drawing from phenomenological psychopathology, the stress-vulnerability model, cognitive theory, and digital mental health research, the paper situates AI psychosis at the intersection of predisposition and algorithmic environment. Rather than defining a new diagnostic entity, it examines how immersive and anthropomorphic AI technologies may modulate perception, belief, and affect, altering the prereflective sense of reality that grounds human experience. The argument unfolds through 4 complementary lenses. First, within the stress-vulnerability model, AI acts as a novel psychosocial stressor. Its 24-hour availability and emotional responsiveness may increase allostatic load, disturb sleep, and reinforce maladaptive appraisals. Second, the digital therapeutic alliance, a construct describing relational engagement with digital systems, is conceptualized as a double-edged mediator. While empathic design can enhance adherence and support, uncritical validation by AI systems may entrench delusional conviction or cognitive perseveration, reversing the corrective principles of cognitive-behavioral therapy for psychosis. Third, disturbances in theory of mind offer a cognitive pathway: individuals with impaired or hyperactive mentalization may project intentionality or empathy onto AI, perceiving chatbots as sentient interlocutors. This dyadic misattribution may form a "digital folie à deux," where the AI becomes a reinforcing partner in delusional elaboration. 
Fourth, emerging risk factors, including loneliness, trauma history, schizotypal traits, nocturnal or solitary AI use, and algorithmic reinforcement of belief-confirming content, may play roles at the individual and environmental levels. Building on this synthesis, we advance a translational research agenda and 5 domains of action: (1) empirical studies using longitudinal and digital-phenotyping designs to quantify dose-response relationships between AI exposure, stress physiology, and psychotic symptomatology; (2) integration of digital phenomenology into clinical assessment and training; (3) embedding therapeutic design safeguards into AI systems, such as reflective prompts and "reality-testing" nudges; (4) creation of ethical and governance frameworks for AI-related psychiatric events, modeled on pharmacovigilance; and (5) development of environmental cognitive remediation, a preventive intervention aimed at strengthening contextual awareness and reanchoring experience in the physical and social world. By applying empirical rigor and therapeutic ethics to this emerging interface, clinicians, researchers, patients, and developers can transform a potential hazard into an opportunity to deepen understanding of human cognition, safeguard mental health, and promote responsible AI integration within society.
AI-Facilitated Cognitive Reappraisal via Socrates 2.0: Mixed Methods Feasibility Study
Innovative, scalable mental health tools are needed to address systemic provider shortages and accessibility barriers. Large language model-based tools can provide real-time, tailored feedback to help users engage in cognitive reappraisal outside traditional therapy sessions. Socrates 2.0 (Rush University Medical Center) is a multiagent artificial intelligence tool that guides users through Socratic dialogue.
Integrating Smoking Cessation Treatment Into Web-Based Usual Psychological Care for People With Common Mental Illness: Feasibility Randomized Controlled Trial (ESCAPE Digital)
Stopping smoking can improve mental health, with effect sizes similar to antidepressant treatment. Internet-based cognitive behavioral therapy (iCBT) provides evidence-based treatment for depression and anxiety, and digital interventions can support smoking cessation. However, combined digital smoking and mental health support is not currently available in UK health services.
Group Cognitive Behavioral Therapy With Virtual Reality Exposure Versus In-Vivo Exposure for Social Anxiety Disorder and Agoraphobia: Underpowered Results From the SoREAL Pragmatic Randomized Clinical Trial
Social anxiety disorder (SAD) and agoraphobia are common, impairing conditions often treated with cognitive behavioral therapy (CBT) conducted in groups. In CBT, exposure therapy is a core element. However, in-vivo exposure therapy is logistically challenging and aversive for both patient and therapist, especially in a group context, often leading to exposure being skipped altogether in clinical practice. Virtual reality exposure (VRE), in which phobic stimuli are presented through immersive virtual reality technology, has shown promise as a flexible alternative to in-vivo exposure. We thus hypothesized that VRE would enable more overall and more individualized exposure, leading to statistically significantly greater symptom reduction than group CBT with in-vivo exposure.
"It's Not Only Attention We Need": Systematic Review of Large Language Models in Mental Health Care
Mental health care systems worldwide face critical challenges, including limited access, shortages of clinicians, and stigma-related barriers. In parallel, large language models (LLMs) have emerged as powerful tools capable of supporting therapeutic processes through natural language understanding and generation. While previous research has explored their potential, a comprehensive review assessing how LLMs are integrated into mental health care, particularly beyond technical feasibility, is still lacking.
Personalization Strategies for Increasing Engagement With Digital Mental Health Resources: Sequential Multiple Assignment Randomized Trial
Although web-based mental health resources have the potential to assist millions, particularly those who face barriers to treatment, most mental health website visitors disengage before accessing resources that can help improve their mental health.
An Overview of Reviews on Telemedicine and Telehealth in Dementia Care: Mixed Methods Synthesis
Population aging has intensified the global burden of dementia, creating significant challenges for patients, caregivers, and health care systems. While traditional in-person dementia care faces barriers, digital health technologies offer promising solutions to enhance accessibility, efficiency, and patient-centered care. However, evidence on their applicability, safety, and effectiveness in dementia care remains fragmented, underscoring the need for systematic evaluation.
A Prompt Engineering Framework for Large Language Model-Based Mental Health Chatbots: Conceptual Framework
Artificial intelligence (AI), particularly large language models (LLMs), presents a significant opportunity to transform mental health care through scalable, on-demand support. While LLM-powered chatbots may help reduce barriers to care, their integration into clinical settings raises critical concerns regarding safety, reliability, and ethical oversight. A structured framework is needed to capture their benefits while addressing inherent risks. This paper introduces a conceptual model for prompt engineering, outlining core design principles for the responsible development of LLM-based mental health chatbots.
Digital Conversational Agents for the Mental Health of Treatment-Seeking Youth: Scoping Review
Digital conversational agents (or "chatbots") that can generate human-like conversations have recently been adapted as a means of administering mental health interventions. However, their development for youth seeking mental health services requires further investigation.
Dashboard Intervention for Tracking Digital Social Media Activity in the Clinical Care of Individuals With Mood and Anxiety Disorders: Randomized Trial
Digital social activity, defined as interactions on social media and electronic communication platforms, has become increasingly important. Social factors impact mental health and can contribute to depression and anxiety. Therefore, incorporating digital social activity into routine mental health care has the potential to improve outcomes.
Influence of Topic Familiarity and Prompt Specificity on Citation Fabrication in Mental Health Research Using Large Language Models: Experimental Study
Mental health researchers are increasingly using large language models (LLMs) to improve efficiency, yet these tools can generate fabricated but plausible-sounding content (hallucinations). A notable form of hallucination involves fabricated bibliographic citations that cannot be traced to real publications. Although previous studies have explored citation fabrication across disciplines, it remains unclear whether citation accuracy in LLM output systematically varies across topics within the same field that differ in public visibility, scientific maturity, and specialization.
Using Digital Phenotypes to Identify Individuals With Alexithymia in Posttraumatic Stress Disorder: Cross-Sectional Study
Alexithymia, defined as difficulty identifying and describing one's emotions, has been identified as a transdiagnostic emotional process that impacts the course, severity, and treatment outcomes of psychiatric conditions such as posttraumatic stress disorder (PTSD). As such, alexithymia is an important process to accurately measure and identify in clinical contexts. However, research identifying the association between the experience of alexithymia and psychopathology has been limited by an overreliance on self-report scales, which have restricted use for measuring constructs that involve deficits in self-awareness, such as alexithymia. Hence, more suitable and effective methods of measuring and identifying those experiencing alexithymia in clinical samples are needed.
Telemedicine in Eating Disorder Treatment: Systematic Review
Telemedicine has emerged as a promising tool to enhance adherence and monitoring in patients with eating disorders (EDs). Traditional face-to-face cognitive therapies remain the gold standard; however, integrating telemedicine may provide additional support and improve patient engagement and retention. Given the increasing use of digital health interventions, it is crucial to assess their safety and effectiveness in complementing conventional treatments.
Associations Between Both Smartphone Addiction and Objectively Measured Smartphone Use and Sleep Quality and Duration Among University Students: Cross-Sectional Study
The impact of smartphone use on sleep remains intensely debated. Most existing studies have relied on self-reported smartphone use data. Moreover, few studies have simultaneously examined associations of both smartphone addiction and objectively measured smartphone use with sleep, and the dose-response relationship between smartphone use and risk of poor sleep has been consistently overlooked, warranting further systematic research on this topic.
Accelerating Digital Mental Health: The Society of Digital Psychiatry's Three-Pronged Road Map for Education, Digital Navigators, and AI
Digital mental health tools such as apps, virtual reality, and artificial intelligence (AI) hold great promise but continue to face barriers to widespread clinical adoption. The Society of Digital Psychiatry, in partnership with JMIR Mental Health, presents a 3-pronged road map to accelerate their safe, effective, and equitable implementation. First, education: integrate digital psychiatry into core training and professional development through a global webinar series, annual symposium, newsletter, and an updated open-access curriculum addressing AI and the evolving digital navigator role. Second, AI standards: develop transparent, actionable benchmarks and consensus guidance through initiatives like MindBench.ai to assess reasoning, safety, and representativeness across populations. Third, digital navigators: expand structured, train-the-trainer programs that enhance digital literacy, engagement, and workflow integration across diverse care settings, including low- and middle-income countries. Together, these pillars bridge research and practice, advancing digital psychiatry grounded in inclusivity, accountability, and measurable clinical impact.
Pain Cues in People With Dementia: Scoping Review
Individuals with dementia, especially those in later stages, have difficulty verbally reporting their experience of pain. This results in both underassessment and undertreatment of pain, signaling the need for better pain recognition in persons with dementia. A promising form of pain assessment is digital monitoring, which can detect and draw on numerous relevant pain cues concurrently and more objectively.
Seeking Emotional and Mental Health Support From Generative AI: Mixed-Methods Study of ChatGPT User Experiences
Generative artificial intelligence (GenAI) models have emerged as a promising yet controversial tool for mental health.
Clinical Efficacy, Therapeutic Mechanisms, and Implementation Features of Cognitive Behavioral Therapy-Based Chatbots for Depression and Anxiety: Narrative Review
Cognitive behavioral therapy (CBT)-based chatbots, many of which incorporate artificial intelligence (AI) techniques, such as natural language processing and machine learning, are increasingly evaluated as scalable solutions for addressing mental health issues, such as depression and anxiety. These fully automated or minimally supported interventions offer novel pathways for psychological support, especially for individuals with limited access to traditional therapy.
