SDD2019

Is your deep detector safe? Verification and certification of neural networks

CVPR Workshop, 17 June 2019, Long Beach, CA, US

SDD2019 is a half-day, single-track workshop at CVPR 2019. The workshop will be held on 17 June 2019 in Long Beach, CA, US. It is supported by Visteon Corporation, the University of Oxford, the University of Genova, and the BMW Group.

Long Beach Convention Center

Scope

Pulina et al. 2010 was one of the first works to look into verifying a multi-layer neural network. Several other methods have been proposed since then, e.g., linearizing the activation (Bastani et al. 2016), assuming differentiability (Hein et al. 2017), and binary activation (Narodytska et al. 2017), as well as tools such as Reluplex (D. Dill) and Planet (R. Ehlers); yet adoption of these methods in state-of-the-art applications of deep learning has been missing. Much of this is rooted in the fact that verifying even a piecewise-linear network is NP-hard, and scalability is a huge challenge because most state-of-the-art neural networks are fairly deep. Note that this line of work is slightly different from finding effective strategies to counter adversarial attacks, such as recent papers from P. Liang; discussing the connection between the two is also a worthy pursuit.

In contrast with the major focuses of academic research in deep learning, provable safety is the single most important factor in the application of deep networks in any context that involves humans and safety; a prominent example is autonomous vehicles. The ability to certify such deep detectors is not only crucial for obtaining a certified product that can be commercialized, but can also save billions of dollars that would otherwise be spent deploying a redundant, low-performance parallel system based on certifiable classical approaches. There are signs that industrial players of all sizes are deeply interested in exploring these avenues: wide-spectrum partnerships across OEMs and Tier-1 suppliers are being formed (e.g., the industry-wide collaboration in Germany), and Tier-2 suppliers such as Qualcomm, Nvidia and ARM have come up with hardware architectures that ensure redundancy-based safety. We propose this workshop so that cutting-edge vision research can meet verification-enabled safety in the context of autonomous driving and mobile robotics.
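To make the verification question concrete, the toy sketch below (our own illustration, not the algorithm of Reluplex, Planet, or any other tool cited above; the function names and the epsilon parameter are ours) uses simple interval bound propagation through a tiny ReLU network to check whether a two-class output specification provably holds for all inputs in a small L-infinity ball. Exact verifiers answer the same question without the looseness of interval arithmetic, which is where the NP-hardness and scalability challenges mentioned above arise.

# Minimal sketch: interval bound propagation for a tiny ReLU network.
# This is an illustration of the FV-DNN question only, not any cited tool.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Bound W @ x + b element-wise when x lies in the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify(weights, biases, x, eps, margin=0.0):
    """Return True if every input within L-inf distance eps of x provably
    keeps output 0 above output 1 by at least `margin` (a toy 2-class spec)."""
    lo, hi = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, weights[-1], biases[-1])
    return lo[0] - hi[1] >= margin  # worst-case (score_0 - score_1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
    biases = [rng.standard_normal(8), rng.standard_normal(2)]
    x = rng.standard_normal(4)
    print("certified at eps = 0.01:", certify(weights, biases, x, 0.01))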
Hence, we are particularly interested in these topics:
  • Formal verification of deep neural networks (FV-DNN)
  • Branch-and-bound techniques in verifying deep networks
  • Specialized network architectures or methods that enable scalable verification
  • Relation between FV-DNN and adversarial attacks
  • ASIL certifiability of deep networks
  • Classical vision techniques for ASIL certifications
  • Applied and safe networks, e.g., for detecting vehicles and pedestrians