Guest: Dr. Jane Rosenthal
In several earlier episodes, with Doctors Rohan Shah, Stan Tuhrim, and Julia Mayerle, we explored how AI, writ broadly, has been put to beneficial use in clinical settings, particularly in planning cardiac surgery, diagnosing stroke, and performing colonoscopies. Increasingly, people are asking medical questions of chatbots and acting on the bots' output. As with most technologies, generative AI chatbots can be used to do good or to cause harm.
However, if you are not a medical professional, you might not pose a question to the chatbot with sufficient context. For instance, "I have a pain in my shoulder" might point to cardiac disease, a strained muscle, or a lesion in the arm. And a person may misinterpret the chatbot's output and act inappropriately. Having ChatGPT on your phone does not make you a qualified surgeon, physician assistant, or nurse practitioner. Nor does it put you in touch with one!
We tend to associate human-sounding output in our native language with sentience, consciousness, and even empathy. And we attach to that apparent humanity all sorts of emotional connections that computer systems can imitate but certainly do not have.
In 2023, a New York Times journalist reported that a chatbot had declared its love for him and urged him to divorce his wife. Stories abound of vulnerable people harming themselves after lengthy exchanges with GenAI chatbots. In one recent case, a teenager discussed suicide with a chatbot and asked for feedback on the noose he had fashioned. In another, a clearly delusional man was encouraged to murder his mother and then take his own life.
Medical leaders are concerned that using chatbots to diagnose and recommend treatment without real-time supervision by experienced professionals may lead to harm. Other uses of GenAI in medicine are setting off alarm bells across the community. Physicians are asking whether these GenAI tools can be, and will be, used responsibly. GenAI tools were not designed to fulfill the Hippocratic injunction to do no harm. Regrettably, chatbot systems are classed as self-help or wellness tools, and as such they fall outside existing regulatory regimes.
Today we will discuss the ethical implications of GenAI tools with a seasoned clinician who has extensive experience thinking about medical ethics in the context of a major medical center.
Dr. Jane Rosenthal completed medical school at George Washington University and interned at Beth Israel Medical Center. She completed her residency in psychiatry, a fellowship in consultation-liaison psychiatry, and her psychoanalytic training at Columbia University Irving Medical Center. She later earned certifications in Bioethics and Medical Humanities.
Over the years, Dr. Rosenthal has served as the psychiatric consultant to the Columbia Fertility Service. She has also held leadership roles at the Columbia Psychoanalytic Center and at Tisch Hospital / NYU Langone Medical Center, where she founded and directed its first consultation psychiatry service. She later directed the psychiatry service at the Perlmutter Cancer Center / NYU Langone.
She has been a member of the Tisch Hospital Ethics Committee since 2008 and served as its chair for three years. Throughout her career she has taught and supervised medical students, residents, and fellows.
Recorded: 2026-02-02
To stream this episode, search most podcasting platforms for the ABA podcast "To the extent that ..."
To Read Further
https://www.columbiapsychiatry.org/profile/jane-l-rosenthal-md
https://nyulangone.org/news/nyu-langone-psychiatrist-helps-people-cancer-feel-they-can-live-disease
New York Times, "A Conversation With Bing's Chatbot Left Me Deeply Unsettled," February 16, 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
New York Times, "A Teen Was Suicidal. ChatGPT Was the Friend He Confided In," August 26, 2025, https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
[Documentation of advance directives and assignment of medical proxies: a strategic outcome of ethics work by hospitals.]
Common Sense Media reports that "[t]welve percent of teens can share things they wouldn't tell friends or family" with AI chatbots (https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions).
"FDA Oversight: Understanding the Regulation of Health AI Tools", 2025 November 10, https://bipartisanpolicy.org/issue-brief/fda-oversight-understanding-the-regulation-of-health-ai-tools/
"The Law of Attachment" https://ai.columbia.edu/news/ai-companion-regulation-law-attachment-harm
"Why we can't just supervise AI like we supervise humans," https://stevenadler.substack.com/p/why-we-cant-just-supervise-ai-like
https://www.washingtonpost.com/lifestyle/2025/12/23/children-teens-ai-chatbot-companion/
https://slaterj.substack.com/p/no-one-gets-out-alive-why-the-us