With a sweeping executive order released yesterday, President Biden has taken on the growing role that artificial intelligence, or AI, is likely to play in our lives, from military and government uses to how language models can be adapted to be more inclusive when it comes to race and gender.
The order came together with input from multiple government agencies and from industry groups that represent companies developing the technology.
Cason Schmit is an assistant professor in the Department of Health Policy and Management at Texas A&M. He recently co-authored a paper in the journal Science proposing a governance model for ethical AI development. Read the transcript of the conversation below.
This transcript has been edited lightly for clarity:
Texas Standard: This order covers a lot of ground. Do you think the administration’s right that AI’s the kind of technology that needs a quick, wide-ranging response from government?
Cason Schmit: Absolutely. You know, regulating an evolving technology like AI is very difficult. The stated goal of the executive order is to harness AI for good, and it acknowledges that to achieve the benefits of AI, you’ve got to mitigate the risks. And that is a very challenging thing to do early on, when we don’t yet know what those benefits or risks might look like.
What parts of this order stood out to you? What are the most important issues addressed here?
Well, it’s a massively ambitious first step. And what immediately struck me is that it provides an initial important step in the direction of AI governance, but it provides flexibility to adapt as the technology changes. And that’s an incredibly important thing to do when there is so much uncertainty about the benefits and risks.
You don’t want to inhibit AI progress and development and those benefits, but you also want to try to prevent the most substantial risks.
Well, I mean, there are some substantial things that this order does. Particularly what stood out to me was that the Commerce Department’s been directed to issue guidance for labels and watermarking. And this deals with, you know, issues of helping the public differentiate between real interactions and something that might be generated by AI or software. I mean, this goes to disinformation, but it seems to be just sort of the tip of the iceberg.
You also have, for instance, an order to companies developing this technology to share their safety test results and other information with the government before they release whatever AI project they’re working on out to the world. Is that something that the government will be able to enforce?
Without a doubt, the lack of enforcement options is one of the biggest problems with the current executive order.
That said, many who study AI governance think industry will accept voluntary standards and guidance in order to avoid a blunt and inflexible regulatory framework, one that would bring substantial penalties and a lot of uncertainty given how rapidly the technology is changing.
Is this going to be good enough? I mean, where does this kind of leave us?
If this is the last statement from the federal government, it’s not going to be good enough. The important thing is that we begin down a path towards specific and sufficient guidelines for responsible development of AI moving forward.
It’s virtually impossible to integrate cutting-edge discoveries, and the risks that come with them, into the policymaking process in a timely manner. And so having a quickly adapting set of often voluntary rules, guidelines, best practices and standards can allow the industry to move innovation forward and, if they’re adhered to appropriately, manage the emerging risks.