New research shows driverless car software is significantly more accurate with adults and light-skinned people than with children and dark-skinned people.
Statistically, there are going to be some things they do worse at, but is the difference significant?
Researchers ran more than 8,000 images through the software and found that the self-driving car systems were nearly 20% better at detecting adult pedestrians than kids, and more than 7.5% better at detecting light-skinned pedestrians than dark-skinned ones. The systems were even worse at spotting dark-skinned people in low-light settings, making the tech even less safe at night.
I’d say a 20% difference is pretty significant.
But the question is, is it a significant difference from human perception?
I would say that’s not actually a relevant question at all, but a form of whataboutism, since this study looks only at driverless programs, compares them against themselves, and asks what problems in the programming and training models could produce that difference.
a form of whataboutism
Agreed. The argument about matching autonomous-vehicle perception to human perception should be completely irrelevant. When an autonomous vehicle has a margin of error that significant, who ends up being responsible for the accident? When humans are involved, the driver is responsible. Is a manufacturer liable in the event of all accidents caused by autonomous vehicles? Guaranteed corporations will rally and lobby to make that not possible. The situations aren’t the same, and a huge selling point of autonomous vehicles has always been that they should be the safest form of piloting a vehicle.
When an autonomous vehicle has that significant of a margin of error, who ends up being responsible for the accident?
There are some details to be sorted out, of course, but this isn’t the major question people make it out to be.
When humans are involved, the driver is responsible.
As is the owner, at least in the US. People will stay responsible for their vehicles (and, more relevantly, for insuring them).
Is a manufacturer liable in the event of all autonomous vehicle caused accidents?
If it turns out to be a defect, of course they are. They are even without the vehicle having autonomy. If they become responsible for more of the vehicle’s performance, of course it stands to reason they’ll be responsible for more of the outcomes as well.
a huge selling point of autonomous vehicles has always been that they should be the safest form of piloting a vehicle.
Which is exactly why it is relevant to compare their safety to that of human drivers.
The crucial point in autonomous car adoption ought to be whether they are better than humans or not. So if they hit fewer children than human drivers do, they’re better, even if they’re another 20% better still at avoiding adults.
I disagree. I’m not at all trying to dismiss the significance of the problem; apologies that I didn’t communicate that effectively. The systems absolutely SHOULD get better and be better than us. However, I think that comparing the difference to human perception helps enable us to discover differences in how these systems view and process the world around them and gives us hints at how to improve them. After all, humans also have these problems to some extent, and knowing how much better or worse AI is at it can either help improve safety for human drivers too or maybe highlight systemic issues that need to be addressed outside of the vehicle (e.g., daylighting, improved crosswalk infrastructure, et cetera).
this is misleading. if you read the study, you’ll see the aggregate miss rate is much worse for dark-skinned people and children, but that figure is weighted by the models that haven’t been trained specifically for pedestrian detection. the pedestrian detection models, i.e. the ones people are actually going to use, were within 1% as accurate across skin tones, and were about 10% worse at detecting kids. since kids are already harder for human drivers to see, that seems much more similar to human performance.
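The weighting effect described above can be sketched with entirely made-up numbers (the model names and miss rates below are hypothetical, chosen only to show how an aggregate over all models can diverge from the rates of the pedestrian-specific ones):

```python
# Hypothetical (model_name, miss_rate_light, miss_rate_dark) tuples.
# The generic detectors were never trained for pedestrian detection,
# so their miss rates are high and skewed; the pedestrian-specific
# models are much closer across skin tones.
models = [
    ("generic_detector_a",  0.20, 0.35),
    ("generic_detector_b",  0.25, 0.40),
    ("pedestrian_model_a",  0.05, 0.055),
    ("pedestrian_model_b",  0.06, 0.065),
]

def mean(xs):
    return sum(xs) / len(xs)

# Aggregate over every model: the untrained generic detectors
# dominate the average and inflate the apparent skin-tone gap.
agg_gap = mean([d for _, _, d in models]) - mean([l for _, l, _ in models])

# Restrict to the pedestrian-specific models: the gap nearly vanishes.
ped = [m for m in models if m[0].startswith("pedestrian")]
ped_gap = mean([d for _, _, d in ped]) - mean([l for _, l, _ in ped])

print(f"aggregate skin-tone gap:        {agg_gap:.3f}")  # ~7.8 points
print(f"pedestrian-model skin-tone gap: {ped_gap:.3f}")  # ~0.5 points
```

With these toy numbers the all-model average shows a gap of several percentage points while the pedestrian-tuned models alone differ by well under 1%, which is the distinction the comment is drawing.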
the other thing it doesn’t mention is how bad they are at detecting people in general: the best one misses people roughly 5% of the time, which seems pretty high for something that’s supposed to be driving a car
Great, self-driving cars can be childfree and racist, just like some human drivers!
This seems like a big deal.