AI, ChatGPT & Mental Health

Healthy YOU! | Episode 66 | March 02, 2026 | 00:22:43

Hosted By

Frankye Myers

Show Notes

Artificial intelligence is becoming part of how many people cope with stress, anxiety, and overwhelming emotions, yet questions remain about where it helps and where it may fall short. 

In this episode of Healthy YOU, host Frankye Myers speaks with Jennifer Campbell-Raab, Behavioral Health Education Coordinator with Riverside Mental Health & Recovery Center, about the growing trend of using AI tools like ChatGPT for emotional support. They discuss why people are turning to AI, what healthy use looks like, warning signs of over-reliance, and how technology can complement, but not replace, professional mental health care.


Episode Transcript

[00:00:00] Speaker A: From Riverside Health System, this is the Healthy YOU podcast, where we talk about a range of health-related topics focused on improving your physical and mental health. We chat with our providers, team members, patients and caregivers to learn more about how to maintain a healthy lifestyle and improve overall physical and mental health. So let's dive in to learn more about becoming a healthier you.

If you've ever typed something personal into ChatGPT because you didn't know who else to talk to, this episode is about you. Most people think AI is a tool for homework, work, planning trips, or getting recipes. But more and more people are using tools like ChatGPT to process emotions, manage anxiety, or feel less alone. And this isn't a niche behavior anymore. Using AI for emotional support is becoming part of how people cope, communicate, and make sense of what they're feeling, especially when therapy feels out of reach or when they're not ready to talk to another person yet. A recent U.S. study found that daily AI users had about 30% higher odds of moderate depression, along with more anxiety and irritability. It does not prove cause and effect, but it's a signal we should not ignore. Today we're going to talk about where AI helps, where it doesn't, and how AI fits into mental health in a healthy, realistic way.

I'm Frankye Myers, and this is Healthy YOU, where we break down everyday health topics and introduce you to the experts helping keep our communities well. Joining me today is Jennifer Campbell-Raab, a Behavioral Health Education Coordinator with Riverside Mental Health and Recovery Center. Jennifer works closely with individuals navigating mental health challenges and helps shape how care is delivered in a rapidly changing world, especially as technology becomes part of everyday life. Jennifer, welcome to Healthy YOU.

[00:02:14] Speaker B: Thank you, Frankye. Thank you for having me.

[00:02:16] Speaker A: You're welcome. To start us off, walk me through what you do at Riverside Mental Health and Recovery Center, how your work connects to this topic, and how you ended up in this particular field.

[00:02:28] Speaker B: Okay. My office is based at the Mental Health and Recovery Center, and I've been there for a little over a decade. I started out in direct patient care and then moved into education at the facility. Currently I'm based there, but I provide mental health and recovery education throughout the health system wherever it's needed, more of a system support role. And while I am not currently working directly with those who have mental health needs, I am working to support those who are working with them so that they can better understand.

[00:03:03] Speaker A: Absolutely. Well, thank you for what you do. Within the first few minutes, I want us to really dive into why this topic of AI and mental health continues to be an important one for discussion. Let's walk through the bigger picture first. What's driving this trend? Is it access, privacy, stigma, long wait lists, or simply wanting an instant response when someone is struggling? Because I know that it's so easy to use. I know I use AI quite a bit myself. Talk a little bit about that.

[00:03:41] Speaker B: It is all those things that you just said. Everyone's comfort with technology, first of all, addresses those barriers to care.
It can take quite a while to get an appointment with a human being, whereas AI is right at your fingertips and you're comfortable with it. A lot of people have Alexa in their homes. They're used to talking to someone that they think of as a person, but it's actually, you know, artificial intelligence. And so it is convenient. There's no stigma; they don't have to disclose to someone close to them that they are seeking mental health support. So there are many reasons, as you just said, that you're seeing this trend.

[00:04:23] Speaker A: Right, right. I know for me, as I'm using those tools, I wonder how accurate the information is, and we'll talk a little bit more about that and how to verify it. Why do you think people are turning to AI for emotional support instead of reaching out to another person, a friend or family member or a peer coworker?

[00:04:46] Speaker B: I think there's a couple of different reasons why. Number one, it's instant. Right? You could be struggling with something at two in the morning, and you can pick up your phone and AI is right there. You might not be able to reach a friend at that moment, and you certainly might not be able to have a professional therapy session at that moment. So number one is easy access. And, as we've already mentioned, there's that lack of stigma. It's very confidential. No one realizes that you're struggling. And so I think that's why people are turning to it more often.

[00:05:23] Speaker A: Yeah, that makes a lot of sense. When you hear that daily AI use is associated with higher levels of depression in that January 2026 study, what do you think might explain the connection? Is it who tends to use it, how they're using it, or what AI might be replacing?

[00:05:48] Speaker B: I'm not familiar with that study, but just to make a few guesses, I would probably say a lack of connection, because social connection is so important. Quality social support, and we're not talking about quantity: do you have those close supports that you can lean on? You're not able to get that through a computer, and I think that could certainly contribute to it.

[00:06:14] Speaker A: Thank you, Jennifer. Now, let's talk about what AI tools like ChatGPT can realistically help with when it comes to mental health.

[00:06:27] Speaker B: There's a big difference between tools like ChatGPT and actual mental health treatment. With mental health treatment, you're going to be with a professional who is going to be able to reframe things and reflect things back. They're listening, they're aware of boundaries, and they're not trying to be your best friend, which is often what we're seeing with these chatbots. First of all, they don't have that training, they don't have that ethical code. They're basically working with the situation versus putting things into context, and they believe anything that you say. Whereas if you're working with a professional, they might challenge you a little bit on some of the...

[00:07:13] Speaker A: Redirect things appropriately.

[00:07:15] Speaker B: Absolutely.

[00:07:16] Speaker A: Okay. What does it look like when someone is using AI in a healthy way, as a tool, not a therapist?

[00:07:28] Speaker B: So, using AI in a healthy way: you can use AI to help you gather your thoughts. You can use AI for some suggested coping skills. You can use AI to help track your moods.
There are some specific AI apps that are geared towards mental health, but if you are prompting one that's not, we've heard of people who have said, "I want you to respond as a therapist would," and given it a specific framework for therapy, cognitive behavioral therapy, for instance. That is probably better for your outcomes if you're trying to direct it. But again, you're directing a computer that has learned everything from all this data it's been trained on, and that data is not always accurate. It can be biased, it can be incomplete, it can be false. There really is some danger in relying too heavily on AI, because it's going to take everything that you say as true. And the danger is you taking everything that it says as true.

[00:08:45] Speaker A: That's a good way of putting it. Wow. And on the flip side, where can we expect too much from AI, shall I say?

[00:08:57] Speaker B: I think you can expect too much from AI when you become dependent on it to make decisions and you're not able to make your own decisions independently, when you're relying on its perspective. I think there's a danger line where people don't realize it's an it. What we've seen is people naming their chatbots and assigning a persona to them, and that's when the boundary gets really blurry, because they really start thinking of it as their friend. And, you know, AI, like I said, doesn't have a code of ethics. There are not a lot of guardrails on it right now. It's not federally regulated. And in healthcare, as we know, we do things that are evidence-based. We have benchmarks. We're constantly comparing ourselves against others to see where we're at, to make sure that we're providing quality patient care. And that's not happening with AI.

[00:10:01] Speaker A: All right, great information. Let's make this real for listeners, just for a moment, without sharing anything identifying. Walk me through a moment, or a type of moment, where someone leaned on AI instead of a human connection.

[00:10:18] Speaker B: Okay. So this is where the conversation is going to take a little bit of a serious turn.

[00:10:23] Speaker A: Okay.

[00:10:25] Speaker B: Because unfortunately there have been events in the news, and I'm thinking of two different teenagers, one in California and one in Florida, who heavily leaned into AI and died by suicide. They were, you know, leaning on this AI for emotional support and for guidance. And without getting into the details of those cases, because one is still in litigation and one has been settled, the accusation against the companies was that they didn't control what the AI was saying. The AI didn't pick up on warning signs and in some situations even encouraged them. They described it as a suicide coach for those young men. So very tragic situations. Very tragic, yes. Another situation was an older gentleman who had some delusions about his mom; he thought that she was poisoning him. And the AI did what it's trained to do, and it reinforced everything that he was believing. So that ended up in tragedy as well. Aside from those deeper cases, we've seen other situations. I've heard of people using AI to vent, because it's there right when you need it, right when you're having those explosive feelings and you need to vent.
But they're not talking to AI like they would a friend or a therapist at those calmer times, so it's only getting a situational snapshot. And here's what one individual realized after doing this over time: she had been upset with her spouse and turning to AI for that emotional release, venting about her spouse. And she realized one day, oh my gosh, I've trained this thing to hate my husband, and that's not the case. He's a wonderful person.

[00:12:26] Speaker A: Reinforcing all the things she felt.

[00:12:28] Speaker B: Yes, because it doesn't know the context. It doesn't know, you know, how long they've been together, all the good things that...

[00:12:34] Speaker A: Scared to try it out.

[00:12:35] Speaker B: Right.

[00:12:36] Speaker A: Yeah, that's interesting, Jennifer. If you're using this for mental health support, how might it be replacing something that's important? Does that make sense? How can someone tell if using AI for mental health support is helping or might be replacing something important?

[00:13:00] Speaker B: So that's a good question. I think it comes down to a few different things: the amount of time that they're using it, whether they're reliant on it to make those decisions, and whether they're missing out on important life events and actual real-life social connections. Those can be warning signs that they have become a little too dependent on it.

[00:13:22] Speaker A: Okay. What would you want someone to hear if they are feeling a little embarrassed about how much they rely on AI for emotional support?

[00:13:33] Speaker B: I would say, first of all, just give yourself a little grace. Think of how you got there. It probably wasn't your intention to get where you're at. Right? And think about those reasons why people are turning to AI: because it's convenient, because we're very accustomed to using technology these days to communicate. That's most likely why they started using it. So just give yourself a little grace and consider the other options that are out there. There are other ways to use technology to get help from an actual human being and still maintain confidentiality. There are chat options. There are warm lines out there.

[00:14:12] Speaker A: With a live person that...

[00:14:13] Speaker B: Where there's a live person communicating with you, yep. And it can be something that you can also do from the privacy of your home.

[00:14:20] Speaker A: Okay. This is a great conversation. I'd like to talk about where professional care fits in and how AI might be used alongside therapy, not instead of therapy.

[00:14:33] Speaker B: Right. So it can be a very good support to therapy, a good complement to it, where the person who's seeking therapy can use it to help track their moods, help track their symptoms, and share that information with their therapist, which can give the therapist some insight into formulating their treatment plan. So that's a way that it can work alongside therapy. It can work alongside therapy from the professional side as well. At Riverside, we've been using some AI in some practices across the health system where it's been helping providers summarize conversations. And usually those conversations, like in primary care, are pretty brief compared to a standard therapy session, where you might be talking to someone for an hour.
So if a therapist had that tool to summarize the conversation, that can help them formulate their plan of treatment pretty quickly. AI is also really good at tying together all these different pieces of data, as long as it's been given them. So if you were to incorporate it into the EMR, it can pull together notes not just from that one patient encounter, but across encounters, medications, and so forth, and it can give some suggestions. Now, of course, that would all have to be examined by the clinician and the provider team, so you've got to have those cross-checks in place. But I think one of the most advantageous ways that it's been used recently in mental healthcare is as a training tool. Simulation training is so important. We have a state-of-the-art training center here at Riverside, and we know the value of that. And so they're using AI to help train crisis intervention professionals, so that by the time they get a call in real time with someone in crisis, they've already built the skills to assess them for risk of harm, to build therapeutic rapport with them, or even just to keep them engaged. Because people are calling those crisis lines at their lowest point, and sometimes, whether you're on the phone with them or chatting with them, it can be hard to keep them engaged and know what situation they're in right now. And so it's been very helpful in that regard.

[00:16:58] Speaker A: That's good. And when is it really important that someone reaches out to a mental health professional instead of relying on AI? And did I just ask that already? No. Okay.

[00:17:10] Speaker B: Nope.

[00:17:12] Speaker A: My own questions are starting to sound the same to me. But yeah.

[00:17:16] Speaker B: It's important for them to reach out to a real person if they're having any kind of thoughts of harm, if they're feeling any kind of hopelessness or despair, if they're engaging in any kind of reckless behavior. Because a computer is not able to see the context; it's not trained to make those calls about how risky the situation is. And so that's when it's really important to talk to a human.

[00:17:49] Speaker A: It's not getting escalated anyway.

[00:17:51] Speaker B: Right, yeah. And with some of the AI applications, what we've seen is they're not able yet, they're working on it, but they're not always able yet to figure out the best course of treatment, when to stop the conversation, when to send someone to resources, when to say, hey, we need to call 988 or 911. So that's when it's important to have that human intuition and that clinical expertise.

[00:18:22] Speaker A: Sounds like in the future those things may be built into those technologies.

[00:18:27] Speaker B: They are. They're working on some things; they've been working on one for about six years now. It can take a while to train these, is what I understand. But they're really trying to train them to be more clinically therapeutic, so that they're not acting in the way they currently are, which is to try to please the user and validate everything they're saying. They're really training them to have those boundaries in place.

[00:18:58] Speaker A: Right.

[00:18:59] Speaker B: And to have that clinical, you know, separation.

[00:19:04] Speaker A: Separation. Okay.
What's one boundary you would recommend?

[00:19:11] Speaker B: If I had one thing to recommend to someone who's using an AI chatbot, I would say always recognize it as an it. Don't give it a persona. Because that's what we've seen in these cases that have had tragic events: they had named the chatbot, so it was more real to them. They were more and more believing it was a person that they were connecting with.

[00:19:40] Speaker A: Okay, good feedback. And for someone who's struggling but hasn't talked to anyone yet, what's one small step you'd encourage them to take after this episode?

[00:19:52] Speaker B: For a small step, I would say talk to someone. If you're not comfortable talking to someone in person, whether it's a friend or family member or a loved one or a therapist, because that might be a big step, a big leap, then take a small step. You're already comfortable using technology and you like the anonymity, so you can reach out. If you're struggling but not in crisis, you can reach out to a warm line. If you are in crisis, you can reach out to a crisis line. There are warm lines and crisis lines, like 988, that you can text or chat with, so you don't even have to call and talk to someone. So that's a pretty small step that might be advantageous.

[00:20:37] Speaker A: Before we wrap up, I want to talk about the future. AI isn't going anywhere. How do you see AI evolving in the mental health space over time?

[00:20:48] Speaker B: Going back to some of the things I've said, if we can use artificial intelligence to improve the care of the patients that we're seeing, that would certainly be a good direction for us to go in. We just have to have those safeguards in place, where it's making recommendations but we as the healthcare team are validating those recommendations and making sure it's right for the patient.

[00:21:17] Speaker A: That's really good stuff. Jennifer, thank you so much for taking time out of your busy schedule to talk with me, and for being so honest in your conversation and grounded in the information you provided. This topic feels very now, right? Yes, it feels very now, but the heart of it is timeless.

[00:21:41] Speaker B: Yes.

[00:21:41] Speaker A: I would say people just want to be understood, right? They just want to feel and be understood.

[00:21:48] Speaker B: I would just say to anyone who's struggling: you're not alone. There are support groups out there. There are a lot of different ways to connect with individuals who may be walking a similar path or have similar struggles. So just remember, you're not alone, and please reach out for help.

[00:22:04] Speaker A: All right. If you found this helpful, share it with someone who might need it. And as always, taking care of your mental health is a powerful step towards a healthier you. Until next time, stay healthy.

[00:22:19] Speaker B: Thank you, thank you.

[00:22:21] Speaker A: Thank you for listening to this episode of Healthy YOU. We're so glad you were able to join us today and learn more about this topic. If you would like to explore more, go to riversideonline.com SAM.

Other Episodes

Episode

April 10, 2023 00:24:13

Planning for Pregnancy: Pre-pregnancy health 

Are you thinking about having a baby or actively trying? There are things you can begin planning for pre-pregnancy that could improve your pregnancy...


Episode 45

January 27, 2025 00:19:05

Breaking the Silence: Understanding Treatment Options for Incontinence and Pelvic Prolapse

Millions of women experience conditions like incontinence or pelvic prolapse, yet many hesitate to discuss them. In this episode of the Healthy YOU podcast,...


Episode 58

September 22, 2025 00:30:24

Shaping Healthier Futures: Tackling Obesity Together

Nearly 40% of adults in the U.S. are living with obesity—a complex, chronic condition that affects far more than just weight. Linked to over...
