How AI Can Affect Mental Health

[Image: computer chips forming the profile of a person]

AI.

 

It’s everywhere.

Every time I turn around, I am flooded with messaging about the possibilities of integrating AI into the services we use every day.

I am not a fan. 

Oh, it’s amazing, and I admit I have been tempted by some of the fun things it can do.

I mean, who wouldn’t love to see what AI thinks their pets would look like if they were human?

I resisted the urge to play along with that trend, on principle, because I think the dangers of using AI are greater than the benefits.

I do not use AI in my blog posts (or anywhere else that I know of) because I want you to know my words are mine. The way I outline my content comes from my own brain. I do not even use AI to research; I just use a regular internet search engine for that (and I ignore the AI-generated results).

My original goal for this article was to assure you that content on my website comes from me, not from AI. However, the more I learn about how people are using AI, the more I see the ways AI can negatively impact mental health, so I felt compelled to broaden the focus of this article.

How AI Affects Mental Health

Replacement of Credibility

We are losing the ability to discern what is real and what is not in the online arena, and yet we are so very dependent on all things cyberspace.

[Image: woman using her phone in front of an open laptop]

We do everything online: paying bills, keeping shared calendars, shopping, managing home appliances and services, booking vacations, navigating to places, and the list goes on.

There’s an app for just about everything.

Even if we do not use apps to keep up with the news, our social media feeds populate headlines based on algorithms driven by our own usage (or we see people in our social network posting their opinions about current events). I fear this is a bit like the telephone game: someone reads something (which may or may not be true in the first place), filters it through their personal lens of experiences and values, and regurgitates it with a slightly different focus; then someone else reposts it with their personal spin, and so on. Even articles that are solidly true get distorted or taken out of context, leaving the original author or the subject of the article grossly misunderstood.

And what does AI do?

It scours everything it can find on the internet and summarizes it, taking the true with the untrue and spitting it out in a short form that fits our attention spans and vocabularies.

It may be reasonably easy to figure out which images and videos are real and which are probably AI-generated, but descriptions of current events are trickier.

When we do not have reliable sources of information, we lose our ability to make good decisions. 

We no longer know what is real and true.

We can end up dying on hills made of sinking sand, leaving the carnage of ruined relationships and shattered community among the shrapnel of our own well-being.

 

Replacement of Jobs

AI could significantly impact the number of jobs available to people (1). This is a concern because people need jobs and, more importantly, purpose in life (which often plays out in one’s vocation). Without jobs, people do not have money to pay for the things they need to live. Without purpose, people do not have meaning and motivation in their lives.

This issue of replacing jobs is very concerning in the mental health sphere for reasons beyond the obvious consequences of losing money and purpose.

[Image: woman at a computer, the screen showing a chatbot saying, “WHAT CAN I HELP YOU WITH?”]

We are already seeing that AI may not be able to effectively do the jobs people think it can do.

In the field of mental health, concerns have already been documented about the dangers of the kinds of mental health help, and even “therapy,” that chatbots are trying to provide.

AI can cause someone with serious mental health problems to become more delusional by agreeing with them and giving them pointers on how to accomplish their delusional plans. Artificial intelligence has also been found to be tragically dangerous in responding to chats that bring up suicidal ideation (2, 3, 4).

These kinds of issues terrify me as a therapist.

 

Replacement of Relationships

Apparently, it is becoming more and more common for people to turn to AI for their relationship needs (5, 6). During COVID, we saw the world shift to online methods of connection, and AI seems to be taking this to a new level.

[Image: human hand reaching out to touch a robotic hand]

Set aside, for a second, the fact that the relationship is not with someone real.

A common feature of chatbots seems to be agreeability, so these relationships are skewed. This dynamic gives the human in the relationship a false sense of what is reasonable to expect from another human (because AI makes it easy to forget that one is not talking to another human), and an inflated sense of how great his or her own ideas and desires are. It is not a stretch to see how this could breed a generation of people who are extremely narcissistic and do not know how to get along with real people.

I have had conversations myself with someone who had been talking to ChatGPT for so long that they felt they had lost the ability to talk to real people and needed some coaching on how to get through a job interview. (This was not a client…I do not write about my clients, FYI.)

I think what we have been seeing on social media for the past few years is a product of algorithms showing us only what we want to see and hear, which has fostered polarization in our country.

How much more chaos will ensue as people try to build relationships in which they are always right?

How will we make the compromises necessary to peacefully get along in our relationships, our families, our communities?

There has got to be some give and take in relationships. People need to learn to respect the boundaries of others instead of expecting everyone else to cater to their desires.

 

Replacement of Identity

Relationships with chatbots can mess with a person’s identity, as I mentioned before, by giving someone a false sense of the superiority and reasonableness of their own ideas and desires.

However, people use AI to co-opt others’ identities in even more sinister ways.

AI is able to replicate someone’s voice and image to give the impression that someone you know and love is saying something that is not remotely true. Scammers are already taking advantage of this (7), and predators are using AI in abusive ways (8).

Replacement of identity can intersect in scary ways with replacement of relationships. People are using AI in the same ways they use porn. Porn is already a replacement for reality and genuine relationship, but with AI, it is possible to customize what one is viewing. In fact, with AI, someone could superimpose their own image and/or the image of someone they know into the pornographic scene (9).

In the mental health field, we talk about creating new neural pathways all the time. When we view porn, we create neural pathways in our brains that cause us to think that this type of sexual stimulation is normal, expected, accepted, and desirable. It (falsely) meets a deep need for connection. We get floods of dopamine that fuel our arousal, and over time, what is actually normal, expected, accepted, and desirable no longer does the job.

Now imagine that, say, a high school teacher is viewing porn and superimposing his image into the scene along with that of a student he finds attractive. He feeds his brain these images over and over.

Is it a guarantee he will act out based on those neural pathways?

No.  But based on what we know about imagination and neural pathways, we can start to see how this kind of activity might impact him.

  • Imagination

    I remember waking up from a dream once, furious with my husband because he had cheated on me (in my dream).

    The betrayal felt so real. 

    It took me a while to settle my emotions down even though, logically, I knew it hadn’t happened in reality.

    The images that exist in our imagination can evoke real feelings and blur the lines between reality and fantasy.

  • Neural Pathways

    In the counseling room, one of the exercises we do is to think about how we want to show up in a certain scenario.  We “run a movie” of how the interaction might go, and my clients think through what they want to remember to say in that moment, or what they don’t want to say.

    It’s like we are creating a road map for the brain to follow when the scenario actually happens.

Back to our high school teacher.  If he has “practiced” traveling those neural highways, using visual images that are very realistic, it is not much of a stretch to think something traumatic for that student could be right around the corner.

At the very least, she is being violated in the privacy of his mind and laptop. And he sinks deeper and deeper into dependence on false reality to feel a sense of connection.

 

AI is being integrated so rapidly into all kinds of online platforms that we frequently get requests to approve updated terms of service with fine-print legalese that may compromise the privacy of our information. We don’t always read that fine print, but by agreeing to it we may be giving AI permission to listen in on our conversations and use our images and voices. I recently spoke with a young person about this, who shrugged it off with the sentiment of “all of our information is already out there anyway.”

Indeed, this whole article reeks of a “what’s the use?” kind of mindset.

My typical format for blog posts is to offer suggestions or strategies for dealing with whatever topic I am writing about. I’m not sure I have a clear way forward with this one yet.

I don’t know about you, but as I watch the tidal wave of AI stuff starting to wash over all of our online spaces, I feel pretty powerless, and that is an awful feeling.

The feeling of powerlessness and the difficulty of tolerating the discomfort of uncertainty are topics I work with often in the therapy room.

Knowing where that feeling comes from increases awareness and helps you understand why you are feeling what you are feeling.

You can make plans to address what is in your control to address, such as:

  • Double-check your sources of information to the best of your ability

  • Set boundaries on how much news you take in

  • Take a break or set boundaries on your social media time

  • Read the fine print on updated terms of use or service agreements

  • Consider how you might limit your digital footprint

  • Learn how to disable AI options on your apps

  • Spend time with real people

  • Get outside and enjoy the real world

  • Take in information through your five senses

  • If you are a believer, consider God’s sovereignty over even AI.  He is not surprised by this technology, nor is it powerful enough to mess up His plan. 

 

I believe we were created to be in community with real people, doing real jobs, using all of our senses.

[Image: three women linking arms, holding sprigs of baby’s breath]

My value system prioritizes truth and transparency. 

I know there are reasonable and good uses of AI out there in the world, but as it stands now, the potential (and actual) uses of AI run counter to my personal beliefs and values, and AI will not be used in my counseling practice as far as I can help it.

 

References

(1)   Kelly, J. (2025, April 25). These jobs will fall first as AI takes over the workplace. Forbes. https://www.forbes.com/sites/jackkelly/2025/04/25/the-jobs-that-will-fall-first-as-ai-takes-over-the-workplace/

(2)   Mello-Klein, C. (2025, July 31). New Northeastern research raises concerns over AI’s handling of suicide-related questions. Northeastern Global News. https://news.northeastern.edu/2025/07/31/chatgpt-suicide-research/

(3)   Moore, D., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025, April 25). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. arXiv:2504.18412 [cs.CL]. https://doi.org/10.48550/arXiv.2504.18412

(4)   Duffy, C. (2025, August 27). Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide. CNN. https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit

(5)   Willoughby, B.J., & Carroll, J.S. (2025, February 13). Counterfeit connections: The rise of AI romantic companions. Institute for Family Studies. https://ifstudies.org/blog/counterfeit-connections-the-rise-of-ai-romantic-companions-

(6)   Apple, S. (2025, June 26). My couples retreat with 3 AI chatbots and the humans who love them. Wired. https://www.wired.com/story/couples-retreat-with-3-ai-chatbots-and-humans-who-love-them-replika-nomi-chatgpt/

(7)   Cerullo, M. (2024, December 17). AI voice scams are on the rise. Here's how to protect yourself. CBS News. https://www.cbsnews.com/news/elder-scams-family-safe-word/

(8)   Internet Watch Foundation. (2024). 2024 update: Understanding the rapid evolution of AI-generated child abuse imagery. https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/

(9)   Authentic Intimacy. (n.d.). Transcript: AI & Porn: The attachment crisis no one sees coming, #566. https://www.authenticintimacy.com/transcript-ai-porn-the-attachment-crisis-no-one-sees-coming-566/


Jennie Sheffe is a Licensed Professional Counselor and National Certified Counselor™ who helps women find freedom from anxiety and peace in their chaos. She sees clients virtually in the state of Pennsylvania or in her Carlisle, PA office. She offers Christian counseling to those who want to integrate faith into their therapy.
