Companies that produce artificial intelligence models – like Google, Meta, and OpenAI – are feuding among themselves over how the technology should be regulated.
These AI heavyweights also disagree about whether AI technology should be built and owned by a single company, or should rely on so-called open source software, where the underlying code is available for anyone to see and use.
Tech expert Omar Gallaga says companies are jockeying for position when it comes to influencing regulators.
Highlights from this segment include:
– The latest conflict between AI companies began when Meta’s lead AI executive called out Google for what he called “massive corporate lobbying” aimed at influencing regulation in the U.S. and abroad. Meta, perhaps incongruously, supports an open source model of AI development that would allow regulators, and anyone else, to inspect the code that makes up artificial intelligence tools. Google responded that Meta was engaging in fear-mongering, trying to make AI seem scary to regulators and the public.
– Advocates for open source AI say it will allow more companies to be involved in producing AI tools, keeping tech monopolies at bay.
– Government regulators have heard from many companies across the industry, each looking to play a role in shaping guardrails for AI development. The challenge for governments is deciding which approach to accept and apply when making the rules.