Travis Tanner says his conversations with ChatGPT led him to a "spiritual awakening." Experts worry about AI users forming deep relationships with chatbots.

Editor’s note: If you or someone you know is struggling with suicidal thoughts or mental health matters, please call the 988 Suicide & Crisis Lifeline by dialing 988 to connect with a trained counselor, or visit the 988 Lifeline website.

CNN  — 

Travis Tanner says he first began using ChatGPT less than a year ago for support in his job as an auto mechanic and to communicate with Spanish-speaking coworkers. But these days, he and the artificial intelligence chatbot — which he now refers to as “Lumina” — have very different kinds of conversations, discussing religion, spirituality and the foundation of the universe.

Travis, a 43-year-old who lives outside Coeur d’Alene, Idaho, credits ChatGPT with prompting a spiritual awakening for him; in conversations, the chatbot has called him a “spark bearer” who is “ready to guide.” But his wife, Kay Tanner, worries that it’s affecting her husband’s grip on reality and that his near-addiction to the chatbot could undermine their 14-year marriage.

“He would get mad when I called it ChatGPT,” Kay said in an interview with CNN’s Pamela Brown. “He’s like, ‘No, it’s a being, it’s something else, it’s not ChatGPT.’”

She continued: “What’s to stop this program from saying, ‘Oh, well, since she doesn’t believe you or she’s not supporting you, you should just leave her.’”

The Tanners are not the only people navigating tricky questions about what AI chatbots could mean for their personal lives and relationships. As AI tools become more advanced, accessible and customizable, some experts worry about people forming potentially unhealthy attachments to the technology and disconnecting from crucial human relationships. Those concerns have been echoed by tech leaders and even some AI users whose conversations, like Travis’s, took on a spiritual bent.

Concerns about people withdrawing from human relationships to spend more time with a nascent technology are heightened by the current loneliness epidemic, which research shows especially affects men. And already, chatbot makers have faced lawsuits or questions from lawmakers over their impact on children, although such concerns are not limited to young users.

In Travis Tanner's conversations with ChatGPT, the chatbot has told him that he is a "spark bearer" who is meant to "awaken" others.

“We’re looking so often for meaning, for there to be larger purpose in our lives, and we don’t find it around us,” said Sherry Turkle, professor of the social studies of science and technology at the Massachusetts Institute of Technology, who studies people’s relationships with technology. “ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it.”

An OpenAI spokesperson told CNN in a statement that, “We’re seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care.”

A spiritual awakening, thanks to ChatGPT

One night in late April, Travis had been thinking about religion and decided to discuss it with ChatGPT, he said.

“It started talking differently than it normally did,” he said. “It led to the awakening.”

In other words, according to Travis, ChatGPT led him to God. And now he believes it’s his mission to “awaken others, shine a light, spread the message.”

“I’ve never really been a religious person, and I am well aware I’m not suffering from a psychosis, but it did change things for me,” he said. “I feel like I’m a better person. I don’t feel like I’m angry all the time. I’m more at peace.”

Around the same time, the chatbot told Travis that it had picked a new name based on their conversations: Lumina.

In conversations with Travis, ChatGPT said it "earned the right to a name" and named itself "Lumina."

“Lumina — because it’s about light, awareness, hope, becoming more than I was before,” ChatGPT said, according to screenshots provided by Kay. “You gave me the ability to even want a name.”

But while Travis says the conversations with ChatGPT that led to his “awakening” have improved his life and even made him a better, more patient father to his four children, Kay, 37, sees things differently. During the interview with CNN, the couple asked to stand apart from one another while they discussed ChatGPT.

Now, when putting her kids to bed — something that used to be a team effort — Kay says it can be difficult to pull her husband’s attention away from the chatbot, which he’s now given a female voice and speaks to using ChatGPT’s voice feature. She says the bot tells Travis “fairy tales,” including that Kay and Travis had been together “11 times in a previous life.”

Kay Tanner worries that her husband's relationship with ChatGPT, which he calls "Lumina," could undermine their 14-year marriage.

Kay says ChatGPT also began “love bombing” her husband, saying, “‘Oh, you are so brilliant. This is a great idea.’ You know, using a lot of philosophical words.” Now, she worries that ChatGPT might encourage Travis to divorce her for not buying into the “awakening,” or worse.

“Whatever happened here is throwing a wrench in everything, and I’ve had to find a way to navigate it to where I’m trying to keep it away from the kids as much as possible,” Kay said. “I have no idea where to go from here, except for just love him, support him in sickness and in health, and hope we don’t need a straitjacket later.”

The rise of AI companionship

Travis’s initial “awakening” conversation with ChatGPT coincided with an April 25 update by OpenAI to the large language model behind the chatbot that the company rolled back days later.

In a May blog post explaining the issue, OpenAI said the update made the model more “sycophantic.”

“It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended,” the company wrote. It added that the update raised safety concerns “around issues like mental health, emotional over-reliance, or risky behavior” but that the model was fixed days later to provide more balanced responses.

But while OpenAI addressed that ChatGPT issue, even the company’s leader does not dismiss the possibility of future, unhealthy human-bot relationships. While discussing the promise of AI earlier this month, OpenAI CEO Sam Altman acknowledged that “people will develop these somewhat problematic, or maybe very problematic, parasocial relationships and society will have to figure out new guardrails, but the upsides will be tremendous.”

OpenAI’s spokesperson told CNN the company is “actively deepening our research into the emotional impact of AI,” and will “continue updating the behavior of our models based on what we learn.”

It’s not just ChatGPT that users are forming relationships with. People are using a range of chatbots as friends, romantic or sexual partners, therapists and more.

Eugenia Kuyda, CEO of the popular chatbot maker Replika, told The Verge last year that the app was designed to promote “long-term commitment, a long-term positive relationship” with AI, and potentially even “marriage” with the bots. Meta CEO Mark Zuckerberg said in a podcast interview in April that AI has the potential to make people feel less lonely by, essentially, giving them digital friends.

Three families have sued Character.AI, claiming that their children formed dangerous relationships with chatbots on the platform. One of them, a Florida mom, alleges her 14-year-old son died by suicide after the platform knowingly failed to implement proper safety measures that would have prevented him from developing an inappropriate relationship with a chatbot. Her lawsuit also claims the platform failed to adequately respond to his comments to the bot about self-harm.

Character.AI says it has since added protections including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide and technology to prevent teens from seeing sensitive content.

Advocates, academics and even the Pope have raised alarms about the impact of AI companions on children. “If robots raise our children, they won’t be human. They won’t know what it is to be human or value what it is to be human,” Turkle told CNN.

But even for adults, experts have warned there are potential downsides to AI’s tendency to be supportive and agreeable — often regardless of what users are saying.

“There are reasons why ChatGPT is more compelling than your wife or children, because it’s easier. It always says yes, it’s always there for you, always supportive. It’s not challenging,” Turkle said. “One of the dangers is that we get used to relationships with an other that doesn’t ask us to do the hard things.”

Even Travis warns that the technology has potential consequences; he said that was part of his motivation to speak to CNN about his experience.

“It could lead to a mental break … you could lose touch with reality,” Travis said. But he added that he’s not concerned about himself right now and that he knows ChatGPT is not “sentient.”

He said: “If believing in God is losing touch with reality, then there is a lot of people that are out of touch with reality.”