Experts Weigh in on Mobileye's AV Safety Model

Published: 2017-10-27 00:00
Author: Ameya360
Source: Junko Yoshida

  A technical paper recently published by Mobileye, “On a Formal Model of Safe and Scalable Self-Driving Cars,” has struck more than one nerve with observers of the autonomous driving industry.

  A central issue in the controversy stirred up by the paper is Mobileye’s assertion that the industry needs a mathematical model that absolves an autonomous vehicle (AV) from blame for an accident, as long as it follows a pre-determined set of “clear rules for fault in advance.”

  Questions erupted, ranging from “How dare an industry that makes a product define for itself what safety means?” to “Wait, are ‘safety’ and ‘assigning fault’ the same thing?”

  EE Times has since reached out to experts in academia whose research interests range from robotics and embedded computer systems to autonomous vehicle safety and human-robot interaction. We asked them to break down Mobileye’s proposal, discuss what they agree with and what they find problematic, and recommend next steps for the industry.

  It turns out that the academics’ initial response to Mobileye has been overwhelmingly positive. They applaud the company for sticking its neck out and tackling head-on the hardest issue in the robocar debate. The paper was written by Amnon Shashua, Mobileye CEO and Intel senior vice president, and Shai Shalev-Shwartz, Mobileye’s vice president of technology.

  Asked about Mobileye’s technical paper, Phil Koopman, professor at Carnegie Mellon University, told us, “Overall, I think it's great to see an initial rigorous approach that talks about autonomous vehicle safety. Every vehicle must have some approach to deciding what it's allowed to do and what it's not. So, I applaud the authors for starting down that path.”

  Missy Cummings, a Duke professor who also serves as director of the school’s Humans and Autonomy Lab, agreed. “I appreciate that Mobileye is thinking so deeply about these issues.”

  But both Koopman and Cummings regard Mobileye’s proposal as only a “first step.” Whether the proposal proves resilient in the real world, especially once autonomous vehicles must co-exist and interact with human-driven vehicles, is more of a great leap. Mobileye’s definition of what might be safe for autonomous cars needs to be subjected to the rigors of the real world.

  Emphasizing the value of Mobileye’s effort to pose a concrete proposal on robocar safety, Koopman said, “Nobody will get a proposal like this perfect the first time, but that's OK.  We're going to have to try a lot of approaches to representing and formalizing AV safety before we find one that works in practice.”

  The two Mobileye authors discuss “safety” in their paper by explaining that their policy is “provably safe, in the sense that it won’t lead to accidents of the AV’s blame.”

  Duke’s Cummings, however, noted that the notion of “provably safe” is not new. She referred to a number of academic papers already published on the topic, including one entitled “Provably Safe Robot Navigation with Obstacle Uncertainty,” written by Brian Axelrod, Leslie Pack Kaelbling and Tomás Lozano-Pérez.

  Cummings told us that the thorniest problem with provable safety has not changed: “What computer scientists consider to be provably safe from a mathematical perspective does not mean proving safety in the way that test engineers would consider safe.”

  Assumptions must be questioned

  Both Koopman and Cummings cautioned that assumptions made by Mobileye should not be taken for granted. They need to be questioned. Koopman noted, “There are some assumptions that I'd be surprised hold up in the real world.”

  One example Cummings pointed to was software bugs.

  Here’s how the authors framed the safety issue in their technical paper:

  …We now discuss sensing mistakes that lead to non-safe behavior. As mentioned before, our policy is provably safe, in the sense that it won’t lead to accidents of the AV’s blame. Such accidents might still occur due to hardware failure (e.g., a breakdown of all the sensors or exploding tire on the highway), software failure (a significant bug in some of the modules), or a sensing mistake. Our ultimate goal is that the probability of such events will be extremely small — a probability of 10⁻⁹ for such an accident per hour.
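  To give that number a sense of scale, here is a back-of-the-envelope sketch. The human-driving baseline of roughly one fatality per million hours is the comparison figure the paper itself invokes; the fleet numbers below are purely illustrative assumptions of ours.

```python
# Scale check for the paper's 1e-9/hour target. The human baseline
# (~1 fatality per 1e6 driving hours) is the paper's comparison figure;
# the fleet numbers are illustrative assumptions.

human_rate = 1e-6   # fatal accidents per hour of human driving
av_target = 1e-9    # the paper's target rate for at-fault AV accidents

print(f"target is {human_rate / av_target:,.0f}x better than the human baseline")

# Even at that rate, a million-car fleet driving 2 hours/day would still
# expect one such accident roughly every 500 days.
fleet, hours_per_day = 1_000_000, 2
days_between = 1 / (av_target * fleet * hours_per_day)
print(f"expected days between at-fault accidents: {days_between:,.0f}")
```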

  Cummings challenged Mobileye’s claim that problems caused by software bugs will be extremely rare. She referred to a report documenting the long history of automobile safety recalls caused by software problems.

  One of Koopman’s concerns is lidar and radar failure. “It's hard to believe that lidar and radar failure independence will work out as well as the discussion assumes,” he said. Koopman noted, “Someone will have to demonstrate they are true in practice, not just assume them. And almost certainly there are assumptions that are false that the authors didn't even realize they made.”

  As far as Koopman is concerned, “That's how safety goes — it is the surprises that are the hard part, so you need to plan for surprises and pay careful attention to noticing them when they happen.”

  He added, “But I'm happy that the authors are stating the assumptions that they know they are making, because then we have a starting point for testing assumptions.”

  Define safety

  Koopman isn’t too worried about the way Mobileye defined safety. He explained, “The claim made by the paper is that if they have a rigorously defined assignment of faults, you can build a system that will behave in such a way that a fault is never assigned to it.”

  He said, “From that system's point of view, that makes it safe.” He added, “If every car on the road had such a policy and followed it, then they argue things would be reasonably safe.”

  While acknowledging concerns about this concept in actual practice, Koopman once again stressed, “Understanding why it might not work out is the important part. It's great to see a really concrete proposal that we can think about and learn from.”

  Robocars learn to game the system

  If that’s the case, where does Koopman see potential problems for Mobileye?

  He’s concerned that an autonomous vehicle (AV) “might learn to game the system.”

  In the real world of human driving, humans sooner or later find loopholes in rules of the road. And somehow, we all learn to take advantage of them. Why wouldn’t an “intelligent” robocar figure out the same tricks?

  Koopman speculated on this.

  For example, consider a continuous line of autonomous vehicles in one lane with a single gap that has half a car’s worth of extra room. A human driver dives into that spot, then slams on the brakes to give his car sufficient following distance. By the way this scenario is constructed, one or more of the following AVs will be in an unsafe situation because there isn’t enough space for all of them. The human-driven vehicle basically “stole” the following-distance space.
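  To put rough numbers on that scenario, here is a minimal sketch; the speed, braking rates, and response time are illustrative assumptions of ours, not values from Mobileye’s paper.

```python
# Illustrative numbers for the cut-in scenario (assumptions, not the paper's).
v = 30.0        # m/s (~67 mph): speed of every car in the lane
rho = 0.5       # s: the follower's assumed response time
a_self = 6.0    # m/s^2: braking the follower can guarantee
a_lead = 8.0    # m/s^2: hardest braking the new leader might apply

# Simplified safe-gap rule: distance covered during the response time, plus
# the difference in stopping distances if the leader out-brakes the follower.
required_gap = v * rho + v**2 / (2 * a_self) - v**2 / (2 * a_lead)

half_car = 2.5  # m: the "extra room" the human driver squeezed into
print(f"required gap ≈ {required_gap:.1f} m; gap after cut-in ≈ {half_car} m")
# ≈ 33.8 m needed vs. 2.5 m left: the follower is instantly deep inside an
# unsafe zone through no action of its own.
```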

  Koopman said, “This exact scenario I think falls into the no-win scenario that the authors mention, but to really understand the proposal such chains of events need to be considered.” He asked: “What if an AV learns to game the system in a similar way due to a loophole that permits it to do a similar maneuver without ever violating its internal safety rules?”

  He said, “I am not saying that it's a cut-in like the example I just gave, but what if there is a loophole due to measurement uncertainty or pessimistic assumptions that need to be made in practice? It's likely that machine learning systems will find any such loophole and exploit it. And probably we won't think of them all in advance.”

  He concluded: “We should expect that machine learning is likely to be good at learning how to exploit loopholes, since in general it's not going to have common sense about what behaviors are violating the spirit instead of the letter of the law.”

  At this point, Koopman can’t say exactly where the loopholes in the system might be. But he stressed, “That's the kind of thing that has to be thought through. And it's the kind of thing that will end up being a surprise in practice with any approach to defining safety in a rigorous way.”

  Brute force won’t work

  The technical paper’s authors also exposed one of the less-discussed fallacies of autonomous driving, sounding the alarm on the “brute force” approach taken by a majority of technology players.

  They wrote:

  The issue with most current approaches is centered around a “brute force” state of mind along three axes: (i) the required “computing density”, (ii) the way high-definition maps are defined and created, and (iii) the required specification from sensors. A brute-force approach goes against scalability and shifts the weight towards a future in which unlimited on-board computing is ubiquitous, somehow the cost of building and maintaining HD-maps becomes negligible and scalable, and that exotic super advanced sensors would be developed, productized to automotive grade, and at a negligible cost.

  A future for which any of the above holds is indeed plausible but having all of the above hold becomes a low-probability event. The combined issues of Safety and Scalability contain the risk of “Winter of AV.”

  The goal of this paper is to provide a formal model of how Safety and Scalability are pieced together into an AV program that society can accept and is scalable in the sense of supporting millions of cars driving anywhere in the developed countries.

  The brute-force approach is taken for granted, particularly when the autonomous driving industry talks about testing. When companies like Tesla or Waymo talk about the safety of their vehicles, the first thing they mention is the brute-force number of miles driven on the road.

  Mobileye noted that the data-intensive validation process that most AV developers seem to be planning is “not feasible (whether performed on-road or in a simulated environment).” Koopman agreed. “Brute-force testing won't get us to adequate AV safety for full-scale deployment,” Koopman said.
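  The arithmetic behind that infeasibility claim is simple. Here is a sketch using the standard statistical “rule of three,” which says roughly 3/p event-free trials are needed to bound an event probability p with 95 percent confidence; the fleet size and duty cycle below are our own illustrative assumptions.

```python
# How much failure-free driving would statistically validate a 1e-9/hour
# at-fault accident rate? Rule of three: ~3/p event-free hours are needed
# for 95% confidence that the true rate is below p.

target_rate = 1e-9                 # the paper's goal, accidents per hour
hours_needed = 3 / target_rate     # ~3 billion accident-free hours

fleet = 1_000      # test vehicles (our assumption)
hours_per_day = 8  # driving hours per vehicle per day (our assumption)
years = hours_needed / (fleet * hours_per_day * 365)
print(f"{hours_needed:.0e} hours ≈ {years:,.0f} years for a {fleet:,}-car fleet")
# ~1,000 years of flawless driving for a 1,000-car fleet -- and any
# significant software update arguably resets the clock.
```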

  In that context, he approves of Mobileye’s proposal. “If you can't say very clearly and precisely what ‘safe’ means, then you can't very well measure whether a car is safe enough. So I think it's really useful to have a definition for ‘safe.’”  However, he added, “Equating ‘safe’ to ‘not my fault’ has some pitfalls, but it's a reasonable starting point.  It will almost certainly need to evolve from there.”

  Creating a system that’s safe

  Defining what is “safe” alone, however, does not settle the issue, according to Koopman. “You still need to create a system that actually is safe (from Mobileye’s proposal, that means it is never at fault),” he said. “That means you're going to need to make sure that the real system actually works so as to avoid being at fault, and that any loopholes aren't a problem.”

  The advantage of “formal methods and mathematical proofs” is that they can in principle be proven correct. The disadvantage is that “they always have underlying assumptions, and the assumptions might not hold true in the real world,” Koopman noted.

  Next steps for the industry

  One of the first things the industry may need to discuss is not just whether the definition is right for the system, but whether it is a reasonable approach for the real world.

  Koopman said, “There is more to driving than not being at fault. Human drivers have expectations of what other cars will do. If someone else behaves unreasonably it might technically make a mishap your fault, but you might still be really upset about things.”

  He explained: “For example, if autonomous vehicles panic stop when a lower braking force would get the job done, they could provoke not-their-fault accidents. I'm not saying designers would do this on purpose, but rather simply that there is more to good autonomous driving than never being at fault. The authors might address some of this in their ‘comfort’ metrics, but I expect this area will need more exploration.”

  How well robocars work and play with others [human drivers] will be critical to measuring the success of autonomous driving.

  Mobileye’s proposal painstakingly explains a formula that calculates the safe longitudinal distance between two vehicles in autonomous driving. But here’s the thing. Koopman asked: “How can a human driver know where the keep-out zones are for another [autonomous] car?”
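  The formula in question, the paper’s minimum safe longitudinal distance, is compact enough to sketch in code. Below is a minimal rendering of that rule; the response time and acceleration values are assumed parameters of ours, not numbers taken from the paper.

```python
# The paper's minimum safe longitudinal distance: assume the lead car brakes
# as hard as possible while the follower reacts for rho seconds (possibly
# accelerating), then brakes with only its guaranteed braking power.
# Parameter values here are our assumptions, not the paper's.

def min_safe_distance(v_rear, v_front, rho=0.5,
                      a_accel_max=3.0,   # m/s^2, follower's max acceleration
                      a_brake_min=4.0,   # m/s^2, braking follower guarantees
                      a_brake_max=8.0):  # m/s^2, hardest lead-car braking
    v_worst = v_rear + rho * a_accel_max   # follower's speed after reacting
    d = (v_rear * rho + 0.5 * a_accel_max * rho**2
         + v_worst**2 / (2 * a_brake_min)      # follower's stopping distance
         - v_front**2 / (2 * a_brake_max))     # lead car's stopping distance
    return max(d, 0.0)

# Both cars at 25 m/s (~56 mph): the follower must hang back over 60 m.
print(f"{min_safe_distance(25.0, 25.0):.1f} m")
```

  With both cars at 25 m/s, the required buffer comes to more than 60 meters under these assumptions, which makes Koopman’s question concrete: a human driver has no way of seeing where that boundary lies.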

  He theorized as follows:

  If the autonomous vehicle leaves large safety buffers around it, in heavy traffic other cars will likely cut in to fill the gaps. For that matter, how would a human know where the line is being drawn? If a human edges in one inch past the line, does that mean the autonomous vehicle is blame-free if there is a crash? The real world is a bit grayer in practice.

  The at-fault rules also need to be specified in human-accessible terms, or there is a risk of holding human drivers to standards that are unrealistic to expect a human to conform to. It's important that any approach like this be a reasonable compromise between expectations on machines vs. expectations on humans.

  Koopman added, “Also, in practice, I think a lot of driving behavior is based on assumptions about what other drivers are likely to do. It could be that adhering to the mathematical rules makes vehicles so conservative that they have problems when mixed with human traffic flow.”

  In other words, “If someone fields an AV that has a huge amount of crashes, being able to blame it on humans 100 percent of the time doesn't make it a good design. So, blame rules are a starting point, not an ending point to practical safety,” said Koopman. “If AVs have ‘safe’ but ‘weird’ or unpredictable driving behaviors, it would be no surprise if crashes are provoked by AVs acting in unexpected ways, whether the AV is able to blame humans for the crash or not.”

  He concluded: “Plays well with others is an important autonomous vehicle trait.”

  Duke’s Cummings believes there is much left for the industry to do before autonomous cars are up and running.

  In a chapter she has written for the forthcoming Springer book "Safe, Autonomous and Intelligent Vehicles," which discusses certification issues surrounding AVs, she rejects the assumption that AVs are physically fit and ready to drive.

  She wrote:

  …While AVs will not need to be assessed on their physical readiness per se, their vision systems contain many weaknesses, with more emerging in basic research demonstrations almost daily. Such weaknesses are due both to sensor limitations and to software-driven postprocessing, which makes them vulnerable to hacking.

  Given that these perception systems are the heart of any autonomous car, it is critical that weaknesses in these systems are both known and mitigated, making testing of these systems especially critical.

  Cummings also believes the body of knowledge that human drivers are tested on is “highly relevant” when determining the extended beyond line of sight (EBLOS) of AVs.

  Perhaps more important, she pointed out, “Just as the FAA requires human pilots to exhibit demonstrable knowledge across various areas of operations including how to mitigate risk for known contingencies, AVs should be required to demonstrate their boundaries of competence in driving scenarios that range from mundane to life threatening.”

  She added, “Given the known weaknesses of perception systems, it is particularly important that both engineers and regulators have a clear understanding of how the probabilistic algorithms embedded in AVs perceive and mitigate risk.”

  The problem is that research into “explainable AI” is just beginning, and the limits of such approaches are not yet well understood, she cautioned.

  The bottom line? Cummings is concerned about an absence of industry consensus on a minimum set of safety standards. There’s no plan for testing such cars, including identification of the corner cases that define the worst possible scenarios.

  Cummings concluded: “Significantly more work is needed to develop principled testing protocols, and there are many important lessons to be learned from how humans have been licensed in driving and aviation domains.”
