Google has been developing the driverless car for over half a decade, and with each passing year these automated chauffeurs move farther away from science fiction and closer to reality. Five states have passed laws enabling such vehicles, and as of April of this year, over 700,000 miles have been driven without an accident.
While commercialization is still a long way off, last week a debate raged over the ethical dilemmas that may arise when a driverless vehicle faces an imminent crash. As a hypothetical example: if a collision is unavoidable and the car must choose where to steer, should it collide with a school bus or with a Volvo? Physics suggests that striking the more massive vehicle is statistically the safer choice, since a larger vehicle can better absorb the impact, but the bus has kids on board. What is the correct decision? And are the law, and by extension the public, prepared for cars that “choose” the least-bad scenario?