How an AI system beating a video game could lead to better autonomous vehicles

Researchers at UT-Austin and Sony AI America say their software defeating champion players of the Gran Turismo Sport racing game is a milestone accomplishment.

By Laura Rice & Shelly Brisbin | February 14, 2022, 1:37 pm

Computers beating humans at games of skill like chess and poker have made news since the 1990s. Now, artificial intelligence, or AI, is taking a victory lap over the best human drivers of the auto racing game, Gran Turismo. But what researchers from the University of Texas at Austin and Sony AI America learn from how a computer performed against humans in a video game could translate into better autonomous driving technology in the future.

Peter Stone is the executive director of Sony AI America and a professor of computer science at UT-Austin. Listen to the interview above or read the transcript below.

This transcript has been edited lightly for clarity:

Texas Standard: Tell us about this competition between human gamers and your AI system. How is it set up and what were the results?

Peter Stone: It’s set up as a competition between the very best human performers of a game called Gran Turismo Sport. It’s popular in the e-sports community. There are some very expert human drivers in this game who actually train, sometimes using this game, to become physical race car drivers on the Formula One circuit. And we set out to create a computer program that could, using a similar interface to people, outperform them in a head-to-head race.

Anyone who’s done one of these driving simulations knows that there are often cues that wouldn’t exist in the real world. Did you try to filter for those and replicate the real world as much as possible?

Yes, we did. In fact, it would be possible to put the computer AI agent inside of the video game, but we didn’t do that. We had it get the same kind of perceptual signals that people do, and had it send back the same control signals, like when to accelerate, when to turn the steering wheel and when to brake.

And there are things that you could do that would give a computer an unfair advantage, like having it act many, many more times per second than a person could. And we limited it to try to keep a level playing field, so that the computer program was interacting with the computer game in exactly the same way that a person would, or as close as possible, given that the computer doesn’t have eyes and hands and things like that.
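To give a rough sense of what that kind of constraint could look like in practice, here is a minimal, purely illustrative sketch in Python. The class names, numbers and interface are hypothetical and are not taken from the actual research system; the sketch simply caps how often an agent is allowed to choose new control inputs, repeating its previous inputs in between.

```python
import time


class RateLimitedAgent:
    """Hypothetical wrapper that caps how often an agent may pick new controls,
    loosely mirroring the 'level playing field' constraint described above."""

    def __init__(self, policy, max_actions_per_second=10):
        self.policy = policy
        self.min_interval = 1.0 / max_actions_per_second
        self._last_action_time = 0.0
        self._last_action = None

    def act(self, observation):
        now = time.monotonic()
        if self._last_action is not None and now - self._last_action_time < self.min_interval:
            # Too soon to decide again: keep sending the previous control inputs.
            return self._last_action
        # Otherwise, ask the policy for fresh controls, e.g. (throttle, brake, steering).
        self._last_action = self.policy(observation)
        self._last_action_time = now
        return self._last_action


if __name__ == "__main__":
    # Hypothetical policy that always accelerates gently and steers straight.
    agent = RateLimitedAgent(policy=lambda obs: (0.5, 0.0, 0.0), max_actions_per_second=10)
    for frame in range(60):
        controls = agent.act(observation=None)
        time.sleep(1 / 60)  # pretend the game renders at 60 frames per second
```

Even though the game may render many frames per second, the agent in this sketch only gets to change its mind ten times per second, closer to human reaction speed.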

So, once again, robot versus human; robot wins. Give us the takeaway for AI, and what’s going to happen in the real world as you see it.

It’s difficult to predict exactly what the implications will be, but this is very important for a number of reasons. You mentioned in the introduction that there have been similar kinds of landmarks in the world of artificial intelligence – in games like chess, Go and poker. But in all of those situations, there’s time for the computer program to think. It’s a turn-taking game; the human gets a turn, then nothing changes, and the computer gets a turn and nothing changes for a time. The control that’s happening in Gran Turismo is real-time, so the decisions have to be made very, very quickly. And there’s a razor-thin control window where, if you go a little bit too fast, the car spins out, and a little bit too slow, you lose your advantage. So there are a lot more real-world kinds of interactions happening in this setting than in those sorts of board games.

You used a technology called “deep reinforcement learning.” What should we know about that approach?

The most important thing to understand is that it’s learning from experience. So, rather than a person writing a set of rules like, “when there’s a car in front of you by 5 meters, press the accelerator,” or something like that, it allows the computer to practice – to play the game over hours and see for itself, or understand for itself, what happens when it, in a particular situation, accelerates or brakes or turns the wheel.

And so it’s learning from experience, and the reinforcement learning part is that there’s a signal that comes back that says how fast you are going or whether you won the race, and it’s up to the computer, or the program, to learn what actions are going to lead to the most favorable outcome.
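To make that idea concrete in the simplest possible terms, here is a toy, purely illustrative Python sketch of trial-and-error learning with a reward signal. It is not the method used in the research (which relied on far more sophisticated deep reinforcement learning with neural networks); the environment, rewards and numbers below are invented for illustration. The agent is never told the rules. It simply tries accelerating or braking, gets a reward for speed and a penalty for “spinning out,” and gradually learns that a little too fast is a crash while too slow wastes progress.

```python
import random

SPEEDS = range(5)             # discretized speed: 0 (stopped) .. 4 (too fast)
ACTIONS = ["accelerate", "brake"]


def step(speed, action):
    """Toy environment dynamics: returns (next_speed, reward, done)."""
    speed = min(speed + 1, 4) if action == "accelerate" else max(speed - 1, 0)
    if speed == 4:
        return 0, -10.0, True           # spun out: big penalty, episode ends
    return speed, float(speed), False   # reward grows with speed (progress)


# Value estimates for every (situation, action) pair, all starting at zero.
q = {(s, a): 0.0 for s in SPEEDS for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(2000):
    speed, done, steps = 0, False, 0
    while not done and steps < 20:
        # Explore occasionally; otherwise pick the action currently believed best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(speed, a)])
        next_speed, reward, done = step(speed, action)
        best_next = max(q[(next_speed, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(speed, action)] += alpha * (reward + gamma * best_next - q[(speed, action)])
        speed, steps = next_speed, steps + 1

# After practice, the learned policy accelerates at low speeds and brakes near the limit.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in SPEEDS})
```

The real system faced the same basic problem at vastly larger scale: continuous steering, throttle and braking decisions made many times per second, learned from the reward of lap time and race position rather than from hand-written rules.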

But of course, in the virtual world, there are margins for error; someone wins and someone loses. There is no margin of error in the real world where you’re talking about human beings. Could you tell us a little bit about the larger significance of this development for the future of self-driving vehicles?

Gran Turismo, the game, is actually very, very real in its simulation. The game models things like friction and wind resistance and things like that. So a lot of the control that’s happening is, in principle, possible in the real world.

Now, as you said, the goal of this project was to win in the video-game setting. It would be another sort of level of challenge to take it out onto real cars; it would be really exciting to do that. But this reinforcement learning learns from a lot of experience, and when it does, it has to make mistakes. And that’s something that’s fine to do in a video game. You don’t want the car crashing into a wall in the real world while it’s learning.

So there are certainly challenges and barriers to taking it to a real car. But there are other applications. There are all kinds of interactions that computers and robots have with people in real time. Just imagine a robot trying to navigate through a crowd of people on a busy street. Similarly, [a self-driving car] is going to need to account for the emotions and movements of the other people. How can it predict its path forward? Much of the same technology that’s being used at high speeds in Gran Turismo could be used in safer, low-speed environments and on real robots.
