When it comes to artificial intelligence, we’re still living on the digital frontier. Almost every business seems to be weaving AI into the way they do things, and many of us rely on large language models in our work and personal lives.
But there’s misinformation and fake imagery, too, sometimes threatening harm to those who use AI, or are forced to interact with it.
The Standard’s Shelly Brisbin says a new Texas law regulating AI addresses some potential overreach of the technology, while giving tech companies the flexibility they feel they need to continue innovating.
The Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, was sponsored by Southlake Republican Rep. Giovanni Capriglione. It was signed into law in June and immediately drew attention from privacy and AI regulation advocates for its mix of restrictions on AI use and support for businesses developing the technology.
It takes effect Jan. 1, 2026.
The law requires local and state government agencies to disclose their use of AI, and bars them from using it to assign social scores that rank individuals.
The law also bans capturing biometric identifiers, like facial or retina scans, or fingerprints, without permission.
The law allows the attorney general to fine violators up to $10,000 and sets up a citizen complaint system.
It also includes protections for minors interacting with AI. Deepfake child exploitation content is banned, and companies are not allowed to create AI tools that promote self-harm.
Business groups supported the measure, which includes a 10-member state AI council to monitor the industry and the law's enforcement, and to flag any changes it feels are needed, or any perceived excesses in how the law is enforced.