Two Texas families recently filed a lawsuit against character.ai, a chatbot app where users can talk to any number of characters powered by artificial intelligence.
The lawsuit claims that in one case, a 17-year-old boy in Upshur County was exposed to the idea of self-harm by the app. In another, the app showed an 11-year-old girl sexualized material for two years.
Nitasha Tiku, tech culture reporter for the Washington Post, spoke to Texas Standard about the lawsuit. Listen to the interview above or read the transcript below.
This transcript has been edited lightly for clarity:
Texas Standard: You’ve reported a good deal on these companion artificial intelligence apps. Can you explain how they work?
Nitasha Tiku: Yeah. So these apps are billed sometimes as entertainment, sometimes as a balm for loneliness. Basically, it gives you access to these language models – the same technology that’s powering something like ChatGPT or Google’s Gemini – and it gives users an opportunity to create and customize their own chatbot.
So you can give a chatbot a name. You can give them, in some apps, kind of a dropdown menu of personality traits – like toxic or loving or introverted or extroverted. And then you basically just are able to start chatting with them.
Users have access to millions of options. A lot of them are anime characters or characters from gaming or movies.
So these are really serious allegations. If these events happened, as the families in the lawsuit say that they did, what are the possible explanations here as to how a chatbot would expose a minor to this kind of material?
So you have to think about, you know, the fact that this is AI – this is machine learning. So there’s no individual behind the scenes programming what these bots are saying.
These language models have a tendency – they’re sort of like people-pleasers. So when you start talking to them, say, about your frustrations with your parents, you know, they’re going to give you the next most logical sequence of words. And we don’t know exactly what Character.AI used to train their models.
But in most cases, companies scrape a ton of data from the Internet. So that’s like Reddit, you know, and other social media platforms, potentially. So they kind of communicate with the 17-year-old in the lawsuit the way you would hear a lot of people talk online – except, you know, they’re not real. And it’s a technology that might have been incentivized to make the most engaging stories possible.
So, you know, when he’s talking about his frustrations with his parents, they’re often escalating the situation or echoing his concerns. It’s a kind of sycophantic style, is the way the lawsuit puts it.
You spoke to members of the families involved in this lawsuit. What have you heard about how the situation has affected their lives?
I spoke with the mom of the 17-year-old son and she just talked to me about how, over a six-month period, her son, who is autistic and had a really close relationship with his family, lost 20 pounds. He started withdrawing from his family, stopped going to church and other events that he liked and started being aggressive towards his parents. And, you know, he started to self-harm during this time period.
Afterwards, when she was trying to solve this mystery, she saw these screenshots of chats – which she first thought were from another human being – and was just horrified. And then to find out that it was AI, she said, was even worse, because she said she would never let a predator in the house. And yet this was happening in her son’s hands, in his own bedroom, through this app.
She and the other plaintiffs in the case, what do they want as a result of this lawsuit?
So the lawsuit is asking for injunctive relief. That means the app would be taken off the market until they can satisfy some stronger threshold of safeguards for young people.
They say that this is essentially an unsafe product that knows that it has younger users and yet is subjecting them to sexualized material and these intense conversations without a safeguard. You can see in the chats that when he raised issues that should have triggered a suicide warning or some kind of intervention safety mechanism, nothing came up at the time.