
“AI almost always delivers better performance.”

Ghost Founder and CEO John Hayes's presentation at CAR MBS on making AI robust and reliable for autonomous driving.

By Ghost

August 29, 2023


Ghost founder and CEO John Hayes was onstage at the Center for Automotive Research’s annual seminar in Traverse City, Michigan, giving a presentation on AI development in autonomous driving. His talk featured new techniques for testing and validating neural networks to ensure high reliability out on the road.

He then joined a panel with automotive executives from Aptiv, Nexteer, Aurora, and the 5G Automotive Association (5GAA) to discuss “How to make Connected and Automated Vehicles Robust and Reliable”, expanding on the path forward for new technologies in this safety-critical application.

Video of the presentation and Q&A is available above, with full transcript below.

**

John Hayes

Ghost: Autonomy Software for Consumer Cars

Good morning again. This is my first time at this conference, and so I'm happy to see you all here. I want to begin by just introducing myself. So I'm John Hayes, I work for Ghost. I founded Ghost, I'm the CEO. And we produce software for autonomy, and in particular, autonomy in consumer cars.

So when we started in 2017, the dominant way that people thought about autonomy was people movers. And we started with a vision, saying, "How do you get that to work in every single car?" And I think that's pretty interesting because we are a country of drivers, we drive 90% of our trips, and so bringing those benefits to every car seemed to be an excellent market to go after.

What we focus on is attention-free autonomy, which means that you have redundancy, you have true readings of the environment. We want to go to every car. We build on top of ordinary microcontrollers, and not supercomputers in trunks. And we focus on making software. So we don't do magic sensors. We buy and integrate things that are readily available off the shelf.

Attention-Free Autonomy on the Highway

The end experience that we think everyone is looking for is, you're driving along the highway, and you just let go, and the car is automated. And every car should be automated all the time, at a lot of levels: from this most casual interaction all the way up to an end-to-end, point-to-point navigation system.

A New Recipe for Autonomy, Based on Software

But today, we're gonna talk mainly about software. If you look at the California accident reports, every year in California, every company that tests autonomy, including us, has to report every single accident that occurs. And if you read those with an engineering mindset, one of the things that you come to is that almost all of the problems, almost all the collisions that actually occur, are software failures. It's not that you had a sensor failure. It's not even that the road was unusual. It's not that people did something unpredictable. It's that the software failed to interpret the environment, or had some rigidity that caused it to behave in an unusual way.

Software is becoming more and more important. There's so much great engineering that's been done on the hardware side that we're seeing that hardware become more reliable and more predictable. And on top of that, it's the job of the software to predict the times when the hardware is gonna be unreliable, and still come to a conclusion about what to do.

Why Did We Turn to AI?

The biggest change, and this started in 2013, was that we started turning to AI. And the reason that we turned to AI is that there were a whole bunch of problems that seemed intractable, and a lot of it had to do with image recognition. Could you identify where roads were? Could you identify objects in the world in a reliable way? We have decades and decades of attempts to build algorithms to try and answer these questions. And it turns out that, if you can make the computer design an algorithm in the form of a neural network, you can get superior results.

Now the upside is you get an answer, and not only that, almost any very, very complex computer science problem is being subjected to AI, and you almost always get better performance. And so the first generation of fully autonomous vehicles that you saw really focused on AI for perception. And now what you're seeing is that they continue to encounter problems in planning. And so the solution is to also do AI in planning, and do AI in control systems, and do AI in noise reduction. You're seeing this just penetrate over and over again because the algorithmic attempts to solve these problems turn out to be unreliable. And that's sort of structural.

It's structural because most conventional software was designed for computer-to-computer communication or human-to-computer communication that was very, very constrained, and often had very low bandwidth. So you could design software and prove whether it worked because you could actually test every single input. But now we're taking inputs that are like video. And the problem with video is that you can't test every single video. That's just not possible. And so on one side it giveth, where it says, "Hey, I'm gonna give you an answer to every video you present," but then it takes away, because you have to adopt a different methodology in order to tell whether this works in a generalizable way. And that can be measured as straight-up bandwidth.

Now the other thing that's happened is Moore's law has been very, very kind to us, in that we can put more and more of these models in the car and in the data center, really inexpensively. And so now it's basically the go-to approach for anything related to AV. And I think that almost anything related to computer interaction with the natural world or with humans, we're gonna see more and more of that represented as AI.

The AI Development Process

What this does is create a big change in the software development process. Traditionally, you would come up with a specification of what you wanted the software to do, you'd write the software, you'd write some unit tests, you'd write some regression tests, and then you would ship it. And then you'd keep testing in the environment, but you go into this maintenance cycle. And the first thing that happens in the AI development process is you realize that testing and development are actually the same thing. And so what happens is you create a neural network layout and loss function. This is your mathematician who's doing this. This is fairly complex.
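
As a rough illustration of what "a neural network layout and loss function" means in practice, here is a minimal sketch in PyTorch. The architecture, class count, and names are assumptions for illustration only, not Ghost's actual stack.

```python
# Minimal sketch of a "network layout plus loss function" (illustrative only).
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """A deliberately small image classifier standing in for a perception model."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyPerceptionNet()
loss_fn = nn.CrossEntropyLoss()  # the "loss function" the talk refers to
```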

But then you build your dataset, your dataset of what you want the network to learn. You train it against that dataset. You have a holdout dataset, which is the same kind of data. You test it in simulation and in the real world. And the thing that AI gets accused of is often being a black box. The good news is that, this year and for a few years now, there are ways to examine AI models. You can take them apart, you can see what parts of the models are activating, and this ends up being very good guidance for the datasets that you have to add. So if I'm getting the wrong answer, I can go and say, "Which actual pixels in the scene were giving me that answer?" And then I can find out what I have to add to my dataset.
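
The "which pixels gave me that answer" question can be asked with standard introspection techniques. A common one is gradient-based saliency; the sketch below assumes a PyTorch image model and is illustrative rather than a description of Ghost's tooling.

```python
# Sketch: gradient-based saliency, i.e. "which pixels drove this answer?"
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a per-pixel |d score / d pixel| map for one (C, H, W) image."""
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)   # add batch dim, track gradients
    score = model(x)[0, target_class]             # logit for the answer in question
    score.backward()                              # gradients w.r.t. input pixels
    return x.grad.abs().max(dim=1).values[0]      # collapse channels -> (H, W)
```

Pixels with large values are the ones the model leaned on, which is the kind of guidance described here for deciding what to add to the dataset.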

Now what's missing in here is any sort of regression testing, any sort of testing at all other than end-to-end testing. And that's because the dataset is the test itself. So you're saying, "Hey, make it fit this model," and my dataset is the test, and it is the development at the same time. And so what you've done is you've kicked the problem down to say...

How Do I Know My Dataset is Complete?

"How do I know that my dataset is complete, relative to the real world?" And we'll see is like this is not a possible question to answer ahead of time. And so given that you can't break it down to every pixel, every type of noise, what you start doing is creating categories.

And you say, "Look, I want to make sure that there are a minimum number of salient features in my data. What can I encounter in the world?" And so you begin by choosing some obvious things, like I want to make sure that every output I expect from the model, in terms of things that could occur in the world, is present in the data. I want to make sure that actual interference and actual noise that occurs with the sensors is represented in the data. And often this is produced by a process called augmentation, where what you do is you take real data and then you distort it. Often you add effects like lens flare, and fog, and other things.
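
To make the augmentation idea concrete, here is a small sketch of distorting real frames with synthetic fog and a crude lens flare. The specific math and parameters are assumptions for illustration; production pipelines use far more faithful sensor and weather models.

```python
# Sketch: "augmentation" = take real frames and distort them so weather and
# sensor artifacts are represented in the training data (illustrative math).
import numpy as np

def add_fog(image: np.ndarray, density: float = 0.4) -> np.ndarray:
    """Blend an HxWx3 uint8 frame toward white to imitate fog."""
    fog = np.full_like(image, 255)
    return (image * (1 - density) + fog * density).astype(np.uint8)

def add_lens_flare(image: np.ndarray, center=(0.7, 0.3), radius: float = 0.25) -> np.ndarray:
    """Overlay a bright radial falloff to imitate a simple lens flare."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center[1] * h, center[0] * w
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2) / (radius * max(h, w))
    flare = np.clip(1.0 - dist, 0.0, 1.0)[..., None] * 255.0
    return np.clip(image + 0.6 * flare, 0, 255).astype(np.uint8)
```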

And then I wanna know what the environmental conditions are. I want to know, am I looking at salient features of the environment that are predictive of performance? Now the good news is that extracting these dimensions from images is actually getting easier. And part of the reason it's getting easier is because of, well, more AI. If you take this into a data center, and you can run whatever model you want without the constraints of what's going on in the car, you can start extracting these secondary features that turn out to be important.

A simple example is why trees and other shadows are in there: we found that you would get differential performance depending on the time of day and depending on whether trees were present in the scene. Well, I'm not gonna make a tree detector that runs in the car, but making one of those that works in the data center is actually quite practical. Are there trees? And I can code that.
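
Below is a sketch of what an offline "are there trees?" tagger could look like. A real data-center tagger would likely be a heavyweight model; this green-foliage heuristic is only a stand-in to show where such a tagger fits in the pipeline.

```python
# Sketch: a crude offline tagger that labels frames with "trees present".
# Far too rough for the car, but fine as a data-center labelling pass.
import numpy as np

def has_trees(frame: np.ndarray, min_fraction: float = 0.05) -> bool:
    """frame is HxWx3 RGB uint8; flag frames with enough green foliage pixels."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    foliage = (g > r + 20) & (g > b + 20)  # noticeably green pixels
    return float(foliage.mean()) >= min_fraction
```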

And what I can do is I can make sure that my dataset is balanced for those types of inputs. So this is trying to progress from scenario-based testing, where you say, "Hey, there's a bunch of things that are hard." I mean, we test in Michigan, we test in California. Almost every company starts from testing, hey, I wanna make sure that the road right outside my office works. And then they proceed to, how do I make sure this works everywhere? So now we begin by accumulating these dimensions so that we have a way that we can generalize to anything that might exist. And you're gonna build this up over time. This is a creative process.
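
Checking that a dataset is balanced across the chosen dimensions can be as simple as counting frames per combination of dimension values, as in this sketch (the field names are illustrative assumptions):

```python
# Sketch: count frames per combination of dimension values to spot gaps.
from collections import Counter

def coverage(frames: list[dict], dims: tuple[str, ...]) -> Counter:
    """Count frames for each combination of dimension values, e.g. ('dusk', True)."""
    return Counter(tuple(f[d] for d in dims) for f in frames)

frames = [
    {"time_of_day": "day", "trees": True},
    {"time_of_day": "dusk", "trees": False},
    {"time_of_day": "night", "trees": True},
]
print(coverage(frames, ("time_of_day", "trees")))
# Combinations with few or zero frames tell you what data to collect next.
```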

How Do I Know I Have All the Dimensions? Performance Measurement

But now the problem is, how do we know we have all the dimensions? And this becomes a different process, where you say, "Okay, I want to know what the system performance is." And this is where you have to introduce a mid-range. By mid-range, I mean: on one end, you have what you've designed. You can guess or design a lot of the dimensions up front, as to what may be salient. On the other end, you can wait for collisions.

The problem is you don't want collisions. And the other thing is that they're going to be rare enough that it's going to be very hard to generalize. And so what you want is intermediate performance metrics. Does my vehicle actually behave? And are the metrics internal to the system telling me whether it's performing well or not?

And so what I'm looking for is a few things. I do want end-to-end: I do want to know, is the consumer reacting to and trusting the system? When they don't trust the system, that probably means the behavior was a little bit weird, and that's something we have to address. Do I have weird controls? Are my perception metrics dropping? Are they soft? And then I also want the model to be self-reflecting on what its confidence is. If the model is consistently low confidence, I want to compare that to the scene to see, is the information actually present in the scene?
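
One of these intermediate metrics, persistent low model confidence, can be flagged with a simple pass over per-frame confidence values, as in this sketch (the thresholds and data structure are assumptions):

```python
# Sketch: flag stretches where the model's self-reported confidence stays low,
# so those frames can later be compared against what was actually in the scene.
def low_confidence_events(confidence: list[float],
                          threshold: float = 0.6,
                          min_run: int = 10) -> list[tuple[int, int]]:
    """Return (start, end) frame index ranges where confidence stays below threshold."""
    events, run_start = [], None
    for i, c in enumerate(confidence):
        if c < threshold and run_start is None:
            run_start = i
        elif c >= threshold and run_start is not None:
            if i - run_start >= min_run:
                events.append((run_start, i))
            run_start = None
    if run_start is not None and len(confidence) - run_start >= min_run:
        events.append((run_start, len(confidence)))
    return events
```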

The Extended Development Loop

And so this is now your extended development process, where you test, you collect events, and you find out, do any of the dimensions I have predict my events? When the dimensions I have predict events, I know what to do. When I come up with events where no dimension predicts them, then I probably have to go and find new dimensions. And this is also a creative process, where I have dimensions, and then I look at them, or I have events, and I look at them, and I try and think about what dimensions are present. And you're gonna do this over and over again. And there's a point where the growth rate of new dimensions drops to a point where you can predict how many dimensions are left to find.
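
A simple way to ask "do my dimensions predict my events?" is to fit a basic classifier from dimension tags to event occurrence and look at its held-out accuracy. The sketch below uses scikit-learn and toy data purely for illustration:

```python
# Sketch: do the current dimensions predict events? Fit a simple classifier
# and check held-out accuracy; a low score hints at missing dimensions.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# One row per drive segment, one column per dimension (night? trees? rain?).
X = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 0, 0]]
# 1 if an event (disengagement, hard brake, persistent low confidence) occurred.
y = [1, 1, 0, 0, 1, 0]

score = cross_val_score(LogisticRegression(), X, y, cv=3).mean()
print(f"cross-validated accuracy: {score:.2f}")
```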

Development Never Stops

So to wrap this up in a system, the other thing that's important is that the development is never going to stop. After you ship the product, the behavior of the development team is not materially different from before you ship the product. And what that means is it's continuous. You're doing the same data analysis. You're doing the same telematics. You have to be able to retrieve original sensory data so that you can keep building up your datasets over time as you get more experience on the road.

And the other thing is that, aside from the connectivity, it means that almost every AV product that ships will have a long beta period, probably of years, before you get consumer acceptance, and before you actually develop the internal belief that it works.

So that is the end. Thank you very much.

**

Panel Q&A - “How to make Connected and Automated Vehicles Robust and Reliable”

Doug Patton

So the first thing I wanna start with, Robin and John, you guys talked about data. Robin, you talked about collecting steering data, and road data, and that kind of data. Are you going to collect that or is the OEM gonna collect that? Who's gonna own the data? And will you share the data?

John Hayes

I don't think that there's any reasonable integration path that doesn't include sharing original sensory data. And by original sensory data, I mean actual images and videos, actual time series that come from the various components. And we actually collect any data that you can see on the CAN bus, because we found that that's important. Collecting the data itself is actually a really complex endeavor. And the issue is that the vast majority of driving is really, really uneventful. And so, one, you want to minimize how much data you collect, but even deciding what data to collect is going to be relative to the software, relative to the events that that software is generating. And then you're gonna want to get small segments, so that you're really only collecting a tiny percentage of that data. And I think in terms of transmission and storage, there's a lot of cloud-based solutions that would work. I don't think that that's going to be the sticking point. What's going to be the sticking point is, can you make a development process where it is clear to the OEMs that you must collect data in order to succeed?
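
The "small segments" idea is essentially event-triggered logging: keep a short rolling buffer of raw frames and CAN signals, and persist it only when the software flags an event. A minimal sketch, with assumed buffer sizes and structure:

```python
# Sketch: event-triggered capture of small raw-data segments.
from collections import deque

class EventRecorder:
    def __init__(self, pre_event_frames: int = 300):
        # Rolling buffer of recent context, e.g. roughly 10 s at 30 fps.
        self.buffer = deque(maxlen=pre_event_frames)
        self.segments = []

    def on_frame(self, frame, can_signals: dict, event: bool) -> None:
        self.buffer.append((frame, can_signals))
        if event:
            # Persist only the short pre-event window; post-event frames could
            # be appended in the same way until a timer expires.
            self.segments.append(list(self.buffer))
```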

Doug Patton

Okay, John Hayes, what do you think about those guys corrupting all your software when they put this other stuff in there with it?

John Hayes

I think that everyone agrees, actually, on that change in architecture, that there's a central computer to make that work. And with EVs, it becomes incredibly important, because everything about the behavior of an EV is software defined. There's no direct connection: the pedal doesn't control the throttle, it controls a piece of firmware that's a mapping function that takes a lot of information into account to figure out what's going to happen. And so the question then becomes, can you define reasonable standard interfaces between components in the car? Now my belief is that you absolutely can, that the fundamental functions of the car haven't changed in a long time. And so it should be entirely possible to create an application framework that can get core sensors, which is like, where's my raw video, where's my raw radar, where are my ultrasonics, and get those real data streams instead of pre-processed ones. And then you have a complex bag of software. Then, what is the car going to do, in terms of what dynamics outputs do I want? And as those components get increasingly sophisticated, essentially, you want a self-driving system to be able to just say, "Please generate me this torque vector by whatever means necessary, incorporating the information that you have about the dynamics of the car." Right now, we tune for every car, and we wanna stop doing that. We want a communication of what the capabilities of the car are, of this model of car, in this time and place, according to friction. Just tell us what we can do, and then we can make a plan based on the outcomes that the car can produce. And so I think that there's a back and forth, but I haven't seen anyone actually trying to find that interface. And I think that there's a lot of opportunity to make cars more standard, as well as giving you access to the fundamental data streams and fundamental controls.
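
A sketch of the kind of interface being described, where the self-driving stack requests an outcome and the vehicle reports what it can currently deliver, might look like the following. All type names and fields are illustrative assumptions, not an existing standard:

```python
# Sketch: a standard "ask for an outcome, learn the car's current limits" interface.
from dataclasses import dataclass

@dataclass
class VehicleCapabilities:
    """What this model of car can do right now, given load, friction, etc."""
    max_accel_mps2: float
    max_decel_mps2: float
    max_lateral_accel_mps2: float

@dataclass
class MotionRequest:
    """'Generate me this torque vector by whatever means necessary.'"""
    accel_mps2: float
    yaw_rate_rps: float

def plan_step(caps: VehicleCapabilities) -> MotionRequest:
    # The planner clamps its request to what the vehicle says it can deliver.
    return MotionRequest(
        accel_mps2=min(1.5, caps.max_accel_mps2),
        yaw_rate_rps=0.0,
    )
```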

Doug Patton

Does anybody in the audience have a question? Yes, sir.

Audience Member

Thank you so much for a really powerful presentation. If you look back four or five years, or 10 years, there were two schools of thought, two initiatives: one was the connected vehicle, and another was electrification. And it seems like electrification has been gaining a little bit more steam; there's a lot more dollars being spent in that space. And it seems like connected and automated vehicles have been taking a backseat. And I think that's primarily because of expectations, because we had higher expectations of where we would be today. And there have been some challenges, for example, using brute force, just putting all the technology in the car and not having the infrastructure to balance it. So some of those challenges: the infrastructure not being strong enough, too many potholes on the road, and it takes a lot of dollars to put in that infrastructure, to take the brute force off the vehicle, just in terms of technology and being able to share information from vehicle to vehicle, and then having so many vehicles on the road which are not sophisticated, really old. So how do you see this all coming together, and connected vehicles really getting center stage and delivering on people's expectations? Thank you.

Doug Patton

Anybody want to take a shot at that? Who is your question directed to specifically?


Audience Member

It's an open conversation, in the sense that I think, as a team, we are lagging behind the electrification team, and there are lots of challenges in that space. And I think we are, in a way, failing to deliver on that as a team.

Doug Patton

John?

John Hayes

What we've seen in the industry, including Tesla, is that almost every company has serially done EV before connected and autonomous vehicles. It's across the board. There isn't a single company where their first-generation EV, or even their second-generation EV, had significant AV content in it. And I think it's because there's just so much re-architecture that goes into producing an EV that a company, even a very large company, doesn't have the bandwidth to do AV on top of it. So I think that's just a normal progression of technology. And I think it's also the case that the underlying technology, in terms of connectivity, in terms of sensors, has actually improved over the last five years. If you look at presentations from autonomous vehicle companies, even recently, they will still indicate that cameras are incapable of seeing in the sun, or they're incapable of seeing in the dark, and that's not true anymore. They've been getting better. You get another bit about every five years, on top of the general improvement of 1% a year. And so now we're past the point where we worry so much about sensors being the problem, and Moore's law is a real force. The compute available in cars is increasing at 30% or 40% a year, which opens up those new applications. And so I think you're gonna see more companies not thinking that they have to build a chip to get this capability.

Audience Member

Hi, this is Trip Bonds. I work for Dynamic Map Platform. The Washington Post reported recently that Tesla has had 17 fatalities and 750 accidents attributed to its ADAS or AV program. So my question is to all of you: if you could be CEO of Tesla for a day, what changes would you make to the Autopilot and Full Self-Driving program in order to stop the carnage?

Doug Patton

John, you wanna try that one first, since you're the closest to that?

John Hayes

So I've talked to engineers at Tesla, and they actually have a pretty good observation program. First, what you're seeing is that they are unique among auto manufacturers in that there was a request put out by NHTSA to report any sort of Level 2 accident, and they've actually had the connectivity to report, unlike almost every other auto manufacturer. So one factor is that you're seeing over-reporting relative to the rest of the industry. I think that they've taken an approach where they want to do a little bit of self-driving everywhere. They've added parking lots, they have the Full Self-Driving beta. But their ultimate goal is to get coverage. Our ultimate goal is a bit different: we want to perfect one domain of driving, freeway driving, which has its own complexity, and look at how you build the redundancies in that system, and how you make it really perfect, really eyes off, which is what consumers want. They want to be really eyes off and brains off. And so that would be my strategy: to say, "Let's make that perfect," and slow down on a little bit of driving everywhere.
