From the Chair

By Philip Krein

There have been dozens of articles (with varying levels of information and opinion) on the recent highly publicized fatal crash involving a Tesla Model S operating in a beta-test Autopilot mode [1].  Given the complexities and imperfections of any transportation device or system, this type of tragic accident must be treated as an essential learning experience.  Long before this news, people had been asking members of the TEC about autonomous transportation and how to prevent accidents.  Here are four comments, offered perhaps to provoke further thought:

1. It is still early in the development of autonomous transportation.  For any such vehicle, massive sensor integration remains essential.  For now, multiple sensor technology types (including radar, lidar, ultrasound, cameras with visual processing, accelerometers, gyros, road contact sensors, environmental condition sensors, driver condition sensors, GPS, and many others) need to be used to provide redundancy and assurance.  These are not necessarily expensive (many are packaged in a typical smartphone), but it is too soon to attempt to optimize sensor configurations.  (A minimal cross-check sketch appears after this list.)

2. Defensive driving should be a primary paradigm for autonomous driving.  Like an expert chess player, a system needs to plan many steps ahead, with a range of contingency paths based on what other vehicles might do or on unexpected obstacles that might arise.  (A toy look-ahead planner in this spirit also appears after this list.)

3. Speaking of defensive driving, computers have already bested human experts in chess, and more recently in Go [2].  Autonomous driving developers should set their sights high:  demonstrate that an autonomous vehicle can outperform the best professional drivers across a full range of extreme situations and hazards.  Show us that the combination of fast computers, quick power electronics, and high-performance electric drives can be better than any human driver – in any situation that can be tested.

4. Several people have asked about the ethical dilemmas:  Given an emergency that imposes only unsafe choices (e.g., protect vehicle occupants vs. pedestrians), how should an autonomous vehicle make the choice?  I believe this is a false dichotomy.  After all, faced with such a situation, a human driver does not have enough time to make a proper choice and cannot model the expected consequences of a particular action.  Instead, an effective defensive driver avoids getting into such situations in the first place.  How can additional information – look-ahead sensing, infrastructure information, pattern recognition – be added to the mix such that an autonomous vehicle always has safe options?  (One simple version of such a guarantee is sketched below.)
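
To make the first comment concrete, here is a minimal sketch of how diverse sensors can cross-check one another.  The sensor names, the two-meter agreement threshold, and the median-voting rule are hypothetical illustrations, not a production fusion algorithm:

    from statistics import median

    def fused_range(readings, agreement_m=2.0):
        # readings: dict of sensor name -> estimated distance (m) to the
        # same obstacle, e.g. {"radar": 41.8, "lidar": 42.1, "camera": 55.0}.
        # The median tolerates a single wild estimate; any sensor that
        # disagrees with it by more than agreement_m is flagged as suspect.
        fused = median(readings.values())
        suspects = [name for name, dist in readings.items()
                    if abs(dist - fused) > agreement_m]
        return fused, suspects

    # A camera blinded by glare is outvoted by radar and lidar:
    estimate, flagged = fused_range({"radar": 41.8, "lidar": 42.1, "camera": 55.0})
    print(estimate, flagged)  # 42.1 ['camera']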
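
In the spirit of the second comment, the toy planner below looks several moves ahead, chess-style, and picks the ego action whose worst-case outcome – over whatever another vehicle might do – is safest.  The action sets, the one-dimensional “gap” state, and the numbers are hypothetical stand-ins for a real vehicle model:

    EGO_ACTIONS = ("keep_lane", "slow_down", "change_lane")
    OTHER_ACTIONS = ("keep_lane", "cut_in", "brake_hard")

    def next_gap(gap, ego, other):
        # Toy dynamics: how each joint choice changes the clear gap (m)
        # ahead of the ego vehicle.
        ego_effect = {"keep_lane": 0.0, "slow_down": 4.0, "change_lane": 8.0}
        other_effect = {"keep_lane": 0.0, "cut_in": -10.0, "brake_hard": -6.0}
        return gap + ego_effect[ego] + other_effect[other]

    def plan(gap, depth=3):
        # Return (worst-case gap after `depth` steps, best first ego action),
        # assuming the other vehicle always does whatever is worst for us.
        if depth == 0 or gap <= 0.0:  # horizon reached, or contact
            return gap, None
        best_value, best_action = float("-inf"), None
        for ego in EGO_ACTIONS:
            worst = min(plan(next_gap(gap, ego, other), depth - 1)[0]
                        for other in OTHER_ACTIONS)
            if worst > best_value:
                best_value, best_action = worst, ego
        return best_value, best_action

    print(plan(12.0))  # (6.0, 'change_lane'): stays safe even in the worst case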
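
Finally, one simple reading of the fourth comment's “always has safe options” is a standing guarantee that a full stop fits within the road the sensors can certify as clear.  The sketch below caps speed so that an emergency stop is always available; the deceleration, latency, and margin figures are hypothetical placeholders:

    def stopping_distance(speed_mps, decel_mps2=6.0, latency_s=0.3):
        # Distance (m) to stop: travel during reaction latency plus braking.
        return speed_mps * latency_s + speed_mps ** 2 / (2.0 * decel_mps2)

    def max_safe_speed(clear_m, decel_mps2=6.0, latency_s=0.3, margin_m=5.0):
        # Largest speed whose stopping distance fits inside the sensed clear
        # road, less a margin.  Solves v*latency + v**2/(2*decel) = clear - margin.
        usable = max(clear_m - margin_m, 0.0)
        a = 1.0 / (2.0 * decel_mps2)
        return (-latency_s + (latency_s ** 2 + 4.0 * a * usable) ** 0.5) / (2.0 * a)

    # If look-ahead sensing certifies only 60 m of clear road, cap the speed:
    v = max_safe_speed(60.0)
    print(round(v, 1), round(stopping_distance(v), 1))  # 24.0 m/s stops in 55.0 m

A defensive planner would then treat any maneuver that violates such a bound as inadmissible long before an ethical dilemma can arise.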

[1] Tesla Motors blog, “A tragic loss,” June 30, 2016.  Available: https://www.teslamotors.com/blog/tragic-loss
[2] C. Koch, “How the computer beat the Go master,” Scientific American (online), March 19, 2016.  Available: http://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/
