Experience Counts, Particularly in Safety-Critical Areas

Now Is the Time for Substantive Conversations about Safety for Autonomous Vehicles

By Prof. Amnon Shashua

Society expects autonomous vehicles to be held to a higher standard than human drivers. Following the tragic death of Elaine Herzberg, who was struck last week by a self-driving Uber car operating in autonomous mode in Arizona, it feels like the right moment to make a few observations about the meaning of safety with respect to sensing and decision-making.

First, the challenge of interpreting sensor information. The video released by the police seems to demonstrate that even the most basic building block of an autonomous vehicle system, the ability to detect and classify objects, is a challenging task. Yet this capability is at the core of today’s advanced driver assistance systems (ADAS), which include features such as automatic emergency braking (AEB) and lane keeping support. It is the high-accuracy sensing systems inside ADAS that are saving lives today, proven over billions of miles driven. It is this same technology that is required, before tackling even tougher challenges, as a foundational element of fully autonomous vehicles of the future.

To demonstrate the power and sophistication of today’s ADAS technology, we ran our software on a video feed coming from a TV monitor running the police video of the incident. Despite the suboptimal conditions, in which much of the high dynamic range data present in the actual scene was likely lost, clear detection was achieved approximately one second before impact. The images below show three snapshots with bounding box detections on the bicycle and Ms. Herzberg. The detections come from two separate sources: a pattern-recognition module, which generates the bounding boxes, and a “free-space” detection module, which generates the horizontal graph in which a red section indicates that a “road user” is present above the line. A third module separates objects from the roadway using structure from motion (in technical terms, “plane + parallax”). This third module validates the 3D presence of the detected object, which in this run had low confidence, as indicated by the “fcvValid: Low” flag displayed in the upper left of the screen. The low confidence resulted from information normally available in a production vehicle being missing, and from the low-quality imaging setup: a video taken of a video from a dash-cam that had been subjected to some unknown downsampling.

Images from a video feed watching a TV monitor showing the clip released by the police. The overlaid graphics show the Mobileye ADAS system response. The green and white bounding boxes are outputs from the bicycle and pedestrian detection modules. The horizontal graph shows the boundary between the roadway and physical obstacles, which we call “free-space”.


The software used in this experiment is the same software included in today’s ADAS-equipped vehicles, which have been proven over billions of miles in the hands of consumers.
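To make the roles of the modules described above concrete, here is a minimal, purely illustrative sketch of how detections from independent cues might be combined into a single confidence flag. All names, data structures and thresholds in it (BoundingBox, free_space_supports, the three-level validity flag) are hypothetical assumptions for illustration and are not Mobileye’s implementation; the sketch only mirrors the idea of cross-checking a pattern-recognition detection against free-space and plane-plus-parallax evidence.

```python
from dataclasses import dataclass


@dataclass
class BoundingBox:
    """Image-plane detection from a pattern-recognition module (hypothetical)."""
    x: int          # left edge, pixels
    y: int          # top edge, pixels
    width: int
    height: int
    label: str      # e.g. "pedestrian", "bicycle"
    score: float    # classifier confidence in [0, 1]


def free_space_supports(box: BoundingBox, free_space_boundary: list) -> bool:
    """Check whether the free-space module flags a road user within the
    horizontal extent of the bounding box. `free_space_boundary` is a
    per-column list of (boundary_row, is_road_user) pairs, mirroring the
    horizontal graph described in the article."""
    columns = free_space_boundary[box.x : box.x + box.width]
    return any(is_road_user for _, is_road_user in columns)


def parallax_validates(residual_parallax: float, threshold: float = 0.5) -> bool:
    """Plane + parallax check: an object standing out of the road plane leaves
    residual parallax after the ground plane has been compensated."""
    return residual_parallax > threshold


def fuse_confidence(box: BoundingBox, free_space_boundary: list,
                    residual_parallax: float) -> str:
    """Combine the three independent cues into a coarse validity flag,
    loosely analogous to the 'fcvValid' indicator mentioned above."""
    votes = sum([
        box.score > 0.6,
        free_space_supports(box, free_space_boundary),
        parallax_validates(residual_parallax),
    ])
    return {3: "High", 2: "Medium"}.get(votes, "Low")


if __name__ == "__main__":
    # Hypothetical detection roughly one second before impact.
    det = BoundingBox(x=300, y=180, width=60, height=120,
                      label="pedestrian", score=0.7)
    # Free-space boundary for a 640-pixel-wide frame; the detected columns
    # are marked as containing a road user.
    boundary = [(240, False)] * 640
    for col in range(300, 360):
        boundary[col] = (200, True)
    print(fuse_confidence(det, boundary, residual_parallax=0.2))  # -> "Medium"
```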

Recent developments in artificial intelligence, such as deep neural networks, have led many to believe that it is now easy to develop a highly accurate object detection system and that the decade-plus experience of incumbent computer vision experts should be discounted. This dynamic has brought many new entrants into the field. While these techniques are helpful, the legacy of identifying and closing hundreds of corner cases, annotating data sets spanning tens of millions of miles, and passing challenging preproduction validation tests on dozens of production ADAS programs cannot be skipped. Experience counts, particularly in safety-critical areas.

The second observation is about transparency. Everyone says that “safety is our most important consideration,” but we believe that to gain public trust, we must be more transparent about what this statement means. As I stated in October, when Mobileye released the formal model of Responsibility-Sensitive Safety (RSS), decision-making must comply with the common sense of human judgment. We laid out a mathematical formalism of common-sense notions such as “dangerous situation” and “proper response” and built a system that mathematically guarantees compliance with these definitions.
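To give a flavor of what such a formalism looks like, the sketch below transcribes the minimum safe longitudinal following distance from the published RSS paper: in the worst case, the rear vehicle keeps accelerating throughout its response time and then brakes only at its minimum braking rate, while the lead vehicle brakes at its maximum rate. The parameter values in the example are illustrative assumptions, not production settings.

```python
def rss_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   response_time: float,
                                   a_max_accel: float,
                                   a_min_brake: float,
                                   a_max_brake: float) -> float:
    """Minimum safe following distance (metres) between a rear vehicle at
    v_rear and a front vehicle at v_front (both in m/s), following the
    published RSS longitudinal-safety definition."""
    # Rear vehicle speed after accelerating through its response time.
    v_rear_after_response = v_rear + response_time * a_max_accel
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + v_rear_after_response ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)


# Example with assumed values: both vehicles at 20 m/s (~72 km/h),
# 0.5 s response time.  Prints roughly 43.2 m.
print(rss_safe_longitudinal_distance(
    v_rear=20.0, v_front=20.0, response_time=0.5,
    a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0))
```

In the RSS framework, the “proper response” for this longitudinal case is then defined relative to this distance: once the actual gap falls below it, the rear vehicle must, after its response time, brake at least as hard as a_min_brake.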

The third observation is about redundancy. True redundancy of the perception system must rely on independent sources of information: camera, radar and LIDAR. Fusing them together is good for driving comfort but bad for safety. At Mobileye, to demonstrate that we obtain true redundancy, we build a separate end-to-end camera-only system and a separate LIDAR-and-radar-only system.
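The statistical case for separate systems can be illustrated with a back-of-the-envelope calculation. The numbers below are hypothetical and chosen only to show how independence compounds reliability; they are not Mobileye figures.

```python
def combined_failure_rate(p_camera_only: float, p_lidar_radar_only: float) -> float:
    """Probability that two *independent* perception systems make a critical
    mistake on the same scene. With genuine independence the rates multiply;
    with a single fused system there is nothing to multiply."""
    return p_camera_only * p_lidar_radar_only


# Purely illustrative: if each standalone system misses a road user once in
# 10,000 relevant scenes, the chance that both miss the same one is on the
# order of one in 100 million.
print(combined_failure_rate(1e-4, 1e-4))  # prints a value on the order of 1e-08
```

The multiplication is only meaningful if the failure modes are genuinely independent, which is exactly why each system is built end-to-end from a different sensor family rather than fused at the input.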

More incidents like the one last week could do further harm to already fragile consumer trust and spur reactive regulation that could stifle this important work. As I stated during the introduction of RSS, I firmly believe the time to have a meaningful discussion on a safety validation framework for fully autonomous vehicles is now. We invite automakers, technology companies in the field, regulators and other interested parties to convene so we can solve these important issues together.

Professor Amnon Shashua is senior vice president at Intel Corporation and the chief executive officer and chief technology officer of Mobileye, an Intel company.

About Intel
Intel (NASDAQ: INTC) expands the boundaries of technology to make the most amazing experiences possible. Information about Intel can be found at newsroom.intel.com and intel.com.

Intel and the Intel logo are trademarks of Intel Corporation in the United States and other countries.

*Other names and brands may be claimed as the property of others.