Kids are encountering unwanted content on Instagram despite filtered teen accounts

And increasingly children are interacting with AI chatbots online.

By Sarah Asch | October 16, 2025 2:33 pm

How do we keep young people safe online?

Lawmakers have long debated this issue. Texas legislators considered a law this year that would have severely restricted the use of social media by minors. That was on top of the existing Scope Act, which requires parental consent for someone under 18 to create a social media account.

But is all this working?

Our tech expert Omar Gallaga recently wrote for CNET about a study commissioned by child advocacy groups on teens and unsafe content on Instagram. The study found that 60% of teens ages 13 to 15 who had used Instagram in the last six months encountered either unsafe content or unwanted messages.

“Some of [the messages were] from people they suspect were adults, some of them trying to initiate romantic conversations with them,” Gallaga said. “And then as far as unsafe content, even when they’re trying to avoid it or block it, they’re seeing lots of instances of violence and sexually suggestive content.

“What’s interesting is that Instagram, owned by Meta, which also owns Facebook and WhatsApp, has been taking some measures to try to fix this, but apparently those measures are not doing enough to protect kids.”

One of those measures was the creation of Instagram Teen Accounts, which have more restrictions and content filters.

“But a lot of that content, apparently, according to the study at least, is still getting to [teens],” Gallaga said. “One of the disturbing things that came out of the study was that they’ve gotten so used to it that they’ve just sort of ignored it. That’s not good. They shouldn’t be exposed to it in the first place.”


Gallaga also wrote about a Pew Research poll that found increasingly young children, ages 5 and up, are using AI chatbots. The poll surveyed 3,000 U.S. parents about how they manage screen time for their children. It found that about 15% of 11- to 12-year-olds are being exposed to chatbots, compared with about 7% of kids ages 8 to 10.

“We’ve talked about OpenAI and ChatGPT. There’s all kinds of issues around: Are these leading people down the wrong paths as far as mental health? Do they have misinformation or copyrighted material or things that kids shouldn’t be exposed to?” Gallaga said. “More disturbing is that OpenAI just announced that they’re gonna lift some of the restrictions on ChatGPT, that they’re gonna allow things like adult erotica on the chatbot. So even as they’re loosening restrictions, we’re seeing more and more kids using ChatGPT as a tool.”

Parenting experts say it’s not realistic for parents to have complete control over everything their children do on the internet.

“So it’s important to have those conversations up front to tell kids what kinds of things they might encounter online, whether it’s on Instagram or using chatbots,” Gallaga said. “Certainly if your kids are going to start using ChatGPT … it’s important to understand that not everything that a chatbot tells you is necessarily true. It may have bad information.”

Gallaga said it’s also important for parents to remind young people that chatbots are not substitutes for human interaction.

“If you really want good information, you should ask a parent or teacher or an adult,” he said. “A chatbot is not a substitute for asking a real person some of these questions that kids might have.”
