27th SafeTRANS Industrial Day


The symposium of the 27th SafeTRANS Industrial Day took place on 2 December 2020, for the first time in a virtual format, organized in cooperation with partner organizations.


Topic

Guarantees for object detection, classification, and intent detection in dynamic environments


Program for download

Program

09:15 – 09:30 Welcome

Prof. Dr. Werner Damm, SafeTRANS

09:30 – 10:00 Multi-Level Monitoring Towards Safe Use of Modern Neural Networks in Transportation

Dr. Michael Paulitsch, Intel

Presentation slides (password-protected)

10:00 – 10:30 Must AI Have Common Sense in Order to Allow Guarantees?

Dr. Christian Müller, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)

    • Abstract

      Robustness of AI is seen as one of the key requirements for establishing trust. Today’s relatively narrow AI systems based on Deep Learning alone often work well in the domains they were trained on, but they cannot be trusted with anything that has not been precisely anticipated by their designers. That is particularly important when the stakes are high, as in SafeTRANS applications. What is missing from AI today is broader intelligence: AI needs to be able to deal not only with specific situations for which an enormous amount of relevant data is available, but also with problems that are novel and with variations that have not been seen before. Real life is open-ended; no data perfectly reflects the ever-changing world. There are no fixed rules, and the possibilities are unlimited. Highway driving in good weather is relatively amenable to narrow AI, because highways themselves are largely closed systems: pedestrians are not allowed, and even cars have limited access. Urban driving, however, is much more complex; what can appear on a road in a crowded city at any given moment is essentially unbounded. Human drivers routinely cope with circumstances for which they have little or no direct data. In this talk I will open up the question of how much common sense an AI needs in order to allow guaranteed behavior. While the ultimate answer is not yet at hand, I will outline a number of promising research directions that are heavily intertwined with each other: explainable AI, hybrid AI, and innate knowledge. Finally, because of its relevance for SafeTRANS, I will briefly introduce the work of the newly established ethics group at DFKI, to which I belong, with some food-for-thought slides.

Presentation slides (password-protected)

10:30 – 11:00 Break
11:00 – 11:30 Perception Phenomena in Scenario Analysis

Dr. Roman Gansch, Robert Bosch GmbH

  • Abstract

    In the publicly funded project “Verifikation- und Validierungsmethoden automatisierter Fahrzeuge im urbanen Umfeld” (VVM; verification and validation methods for automated vehicles in urban environments), a broad spectrum of industry partners and research institutes is working on methods to enable the release of highly automated vehicles. An important part of this is the systematic analysis of the scenarios in which such a vehicle may operate. The high degree of complexity and the open context of these scenarios in urban environments pose a major challenge, and the perception capability has a decisive influence on whether the vehicle can behave safely in a scenario. The VVM work package presented here investigates the influence of perception on the criticality of scenarios by characterizing perception phenomena such as reflection, occlusion, sensor range, etc. To this end, methods are developed to identify these phenomena at a suitable level of abstraction, to derive quality requirements for perception from them, and to establish a basis for assessing whether perception is sufficient.
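
The abstract above names occlusion as one perception phenomenon. As a minimal sketch of how such a phenomenon could be identified at an abstract scenario level (my own illustration, not the VVM method; all names and the circular-footprint geometry are simplified assumptions), a line-of-sight test against object footprints already yields a binary occlusion flag from which quality requirements could be derived:

from dataclasses import dataclass

@dataclass
class Obj:
    x: float
    y: float
    radius: float  # abstract circular footprint

def segment_hits_circle(p, q, c, r):
    """True if the segment p->q passes within distance r of center c."""
    px, py = p; qx, qy = q; cx, cy = c
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        return (cx - px) ** 2 + (cy - py) ** 2 <= r * r
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    nx, ny = px + t * dx, py + t * dy
    return (cx - nx) ** 2 + (cy - ny) ** 2 <= r * r

def is_occluded(sensor, target, occluders):
    """Occlusion phenomenon: some occluder blocks the sensor-to-target ray."""
    return any(segment_hits_circle(sensor, (target.x, target.y), (o.x, o.y), o.radius)
               for o in occluders)

# Toy scenario: a parked van between the ego sensor and a pedestrian.
ego_sensor = (0.0, 0.0)
pedestrian = Obj(20.0, 0.5, 0.3)
parked_van = Obj(10.0, 0.4, 1.2)
print(is_occluded(ego_sensor, pedestrian, [parked_van]))  # True -> phenomenon present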

Presentation slides (password-protected)

11:30 – 12:00 Assuring safety of autonomous systems in dynamic environments

Dr. Rasmus Adler, Fraunhofer IESE

  • Abstract

    Structured assurance cases based on the Goal Structuring Notation (GSN) or similar notations are a promising means for assuring safety objectives for autonomous system behavior in dynamic environments. In standardization, e.g., UL 4600 or VDE-AR-E 2842-61, and in current research projects like V&V Methoden, they are considered a core concept for assurance and certification. However, it is still unclear how to assure and certify that an autonomous system will always understand the current situation well enough to manage all risks. This talk will discuss promising approaches to dealing with this issue, such as the SINADRA approach for situation-aware dynamic risk assessment, Digital Dependability Identities for assured shared perception, uncertainty monitoring of AI-based perception, and safety performance indicators for gaining confidence in claims and assumptions about the dynamic environment. Further, it will structure these approaches and relate them to advanced systems engineering and continuous safety management of socio-technical systems.
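
One of the listed approaches, uncertainty monitoring of AI-based perception, can be illustrated with a minimal sketch (my own, not the SINADRA or Fraunhofer IESE implementation; the entropy threshold and all names are assumptions): a monitor that rejects low-confidence classifications and signals the planner to degrade to a conservative behavior.

import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def monitored_class(class_probs, max_entropy=0.5):
    """Return the perceived class index if the prediction is confident enough,
    else None to signal the planner to degrade (reduce speed, widen margins)."""
    if entropy(class_probs) > max_entropy:
        return None  # uncertain perception -> trigger safe fallback
    return max(range(len(class_probs)), key=class_probs.__getitem__)

print(monitored_class([0.96, 0.02, 0.02]))  # confident  -> 0
print(monitored_class([0.40, 0.35, 0.25]))  # ambiguous  -> None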

Presentation slides (password-protected)

12:00 – 12:30 Discussion and consolidation of results
12:30 – 13:30 Lunch break
13:30 – 14:00 What constitutes a viable statistical guarantee?

Prof. Dr. Martin Fränzle, Carl von Ossietzky Universität Oldenburg

  • Abstract

    For many of the currently discussed means of object detection, object classification, and intent interpretation, our understanding of the full set of origins of and reasons for failures, as well as of the pertinent modes of error propagation, remains fragmentary. Rigorous modeling of conditions likely to lead to critical errors, as well as formal analysis of potentially hazardous causal chains based on such modeling, thus seem elusive goals, at least for the time being. Statistical verification methods based on exhaustive physical or virtual testing provide a seemingly straightforward alternative. But are they conceptually as simple as they seem? In the talk, we address major issues encountered on the way to viable statistical guarantees for complex, only partially understood functionality, like situation awareness in dynamic environments.
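
To make the notion of a statistical guarantee concrete, here is a classic worked example (my own illustration, not taken from the talk): the number of independent, failure-free tests needed to bound a failure probability p at confidence C follows from (1 - p)^n <= 1 - C, i.e. n >= ln(1 - C) / ln(1 - p).

import math

def required_tests(p, confidence):
    """Failure-free tests needed so that surviving all of them establishes
    'failure probability < p' at the given confidence level:
    from (1 - p)^n <= 1 - confidence follows n >= ln(1-confidence)/ln(1-p)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# Bounding the per-scenario failure probability below 1e-4 at 99% confidence
# already needs about 46,000 independent, failure-free, representative tests;
# and "independent" and "representative" are exactly what is hard to justify
# for only partially understood perception functions.
print(required_tests(1e-4, 0.99))  # 46050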

Presentation slides (password-protected)

14:00 – 14:30 Machine Learning: A Safety Assessor's View

Prof. Dr. Jens Braband, Siemens Mobility GmbH

  • Abstract

    In this presentation we discuss how systems with Machine Learning (ML) algorithms can undergo safety assessment. This is relevant if ML is used in safety-related applications, which also holds for railway systems, where ML is expected to play a role in railway automation. We demonstrate our thoughts with a simple example and propose a research challenge that may be interesting for the use of ML in safety-related systems.

Presentation slides (password-protected)

14:30 – 15:00 Break
15:00 – 15:30 Neural circuit policies

Prof. Dr. Radu Grosu, TU Wien

  • Abstract

    A central goal of artificial intelligence is to design algorithms that are both generalisable and interpretable. We combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We show that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalisability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system.
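
As a rough sketch of the idea behind such compact controllers (my own simplification, not the authors' neural circuit policy code; the dynamics and wiring statistics are assumptions, only the sizes of 32 input features and 19 neurons echo the abstract), consider a small continuous-time recurrent cell with an explicit sparse wiring mask, so that every synapse can be enumerated and inspected:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_neurons = 32, 19  # sizes echo the abstract; the wiring below is made up

# Sparse wiring masks: keep only a fraction of the possible synapses so the
# network stays small enough to enumerate and inspect every connection.
W_in = rng.normal(size=(n_neurons, n_in)) * (rng.random((n_neurons, n_in)) < 0.3)
W_rec = rng.normal(size=(n_neurons, n_neurons)) * (rng.random((n_neurons, n_neurons)) < 0.3)
w_out = rng.normal(size=n_neurons)

def step(state, x, dt=0.05, tau=0.5):
    """One Euler step of a leaky continuous-time RNN:
    dh/dt = (-h + tanh(W_in x + W_rec h)) / tau."""
    return state + dt * (-state + np.tanh(W_in @ x + W_rec @ state)) / tau

state = np.zeros(n_neurons)
for _ in range(10):                       # feed 10 frames of (fake) input features
    state = step(state, rng.normal(size=n_in))
steering = float(np.tanh(w_out @ state))  # one interpretable scalar output
n_syn = int((W_in != 0).sum() + (W_rec != 0).sum())
print(f"{n_syn} synapses -> steering command {steering:+.3f}")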

Presentation slides (password-protected)

15:30 – 16:00 The Curse and Blessing of Dimensionality: Increasing Safety in Autonomous Driving Using 3D Object Detection

Nils Gählert, Daimler AG

  • Abstract

    In autonomous driving, camera-based object detection is one important task to be solved within the environment perception pipeline. Object detection in academia often limits itself to standard object detection, i.e. drawing a 2D bounding box around objects. In practical applications like autonomous driving, this representation is not sufficient for dynamic objects: too much information about important geometric properties of the objects of interest, such as their orientation or their dimensions, is lost. However, this information is required for safe autonomous driving. Using 3D bounding boxes as a more complex representation of objects solves this problem, but at the same time induces higher model complexity and thus an increased runtime. In this talk, I will outline the steps required to evolve the well-studied 2D object detection frameworks into 3D detection frameworks while retaining both the accuracy and the speed of 2D models despite the higher model complexity. I will focus on the required adjustments in the training data, the evaluation measures, and the adjustments within the object detection frameworks needed to guarantee the best possible trade-off between detection accuracy and speed in 3D object detection.
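
To illustrate what evolving 2D detection into 3D detection adds (a hypothetical parameterization of my own, not the method of the talk), the detection head regresses depth, object dimensions, and an observation angle on top of the 2D box, from which a 3D box in camera coordinates can be assembled:

from dataclasses import dataclass
import math

@dataclass
class Box2D:
    cx: float; cy: float; w: float; h: float    # pixels

@dataclass
class Box3D:
    x: float; y: float; z: float                # center in camera coords [m]
    length: float; width: float; height: float  # object dimensions [m]
    yaw: float                                  # heading around the up-axis [rad]

def lift_to_3d(box2d, depth, dims, alpha, fx, fy, u0, v0):
    """Back-project the 2D box center using a regressed depth, and turn the
    regressed observation angle alpha into a yaw in camera coordinates."""
    x = (box2d.cx - u0) / fx * depth
    y = (box2d.cy - v0) / fy * depth
    yaw = alpha + math.atan2(x, depth)  # observation angle -> egocentric yaw
    return Box3D(x, y, depth, *dims, yaw)

# Toy numbers: a car detected at the image center, 20 m ahead.
print(lift_to_3d(Box2D(640, 360, 180, 90), depth=20.0,
                 dims=(4.5, 1.8, 1.5), alpha=0.1,
                 fx=1000.0, fy=1000.0, u0=640.0, v0=360.0))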

Presentation slides (password-protected)

16:00 – 16:30 Guaranteed Integrity by Byzantine Architecture - How the International Space Station (ISS) Keeps Orbit

Götz Anspach von Bröcker, Airbus Defence and Space

16:30 – 17:00 Discussion and consolidation of results
17:00 End of the event
from 17:15 General assembly, SafeTRANS members only