09:30 – 09:45 |
Welcome |
|
Dr. Stefan König, IAV GmbH
Prof. Dr. Werner Damm, SafeTRANS |
Presentation slides (password protected)
09:45 – 10:15 |
Prof. Dr. Philipp Slusallek, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI) |
|
Digital Reality: Using AI to Train, Optimize, and Certify AI
-
The world around us is highly complex, but AI systems must be able to reliably make accurate decisions that in many cases may even affect human lives. With Digital Reality we propose an approach that, instead of relying only on real data, learns models of the real world and uses synthetic sensor data generated via simulations for the training, optimization, and -- most importantly -- the validation and certification of AI systems. This is extended by a continuous process of validating the models against the real world in order to improve them and adapt them to a changing environment. The approach is very generic but applies specifically to the challenges posed by autonomous driving, including the entire set of sensor systems. I will discuss the general approach as well as some of its applications currently under development at DFKI.
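The continuous loop described above (train on synthetic data generated from a learned world model, then validate the result against real data to detect when the model must be improved) can be sketched in a few lines. Everything below, the 1-D "sensor" model, the threshold classifier, and the shift parameter, is a toy assumption for illustration and not part of the DFKI pipeline:

```python
import random

random.seed(0)

def world_model(n, shift=0.0):
    """Parametric model of the 'world': two classes of 1-D sensor readings.
    A real Digital Reality pipeline would use full sensor simulations."""
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        mean = (2.0 if label else 0.0) + shift   # class means of the readings
        data.append((random.gauss(mean, 0.5), label))
    return data

def train_threshold(samples):
    # Trivial stand-in for an 'AI system': midpoint between the class means.
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    return sum((x > threshold) == y for x, y in samples) / len(samples)

# Train on synthetic data, then validate against 'real' data; a large gap
# between the two accuracies signals that the world model needs adapting.
synthetic = world_model(1000)
real = world_model(200, shift=0.1)   # reality deviates slightly from the model
clf = train_threshold(synthetic)
gap = accuracy(clf, synthetic) - accuracy(clf, real)
```

The validation-against-reality step is the `gap` measurement at the end: it is what would trigger re-learning or adapting the world model in a changing environment.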
Presentation slides (password protected)
|
10:15 – 10:45 |
Prof. Dr. Martin Fränzle, Carl von Ossietzky Universität Oldenburg |
|
Trained and Self-Learning Software Components in Safety-Critical Cyber-Physical Systems: Opportunity or Hard-to-Control Risk?
-
This keynote attempts to weigh the opportunities against the risks induced by the use of AI components in safety-critical cyber-physical systems. To this end, it first sketches different variants of embedding training and learning phases into the design process, as well as of integrating AI components into heterogeneous architectures, and then discusses the resulting safety concerns using several flagship applications. The talk closes with a characterization of the state of research and technology regarding (semi-)formal methods for the validation and verification of such systems, and relates the quality assurances currently achievable in this way to society's needs.
Presentation slides (password protected)
|
10:45 – 11:15 |
Coffee break and networking |
11:15 – 11:45 |
Florian Thaler, VIRTUAL VEHICLE Research Center |
|
Reinforcement Learning for Automated Vehicle Control
-
Acknowledgements: In Austria the project was funded by the program "Mobilität der Zukunft" and the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit). The publication was written at the VIRTUAL VEHICLE Research Center in Graz and partially funded by the COMET K2 – Competence Centers for Excellent Technologies Programme of the Federal Ministry for Transport, Innovation and Technology (bmvit), the Federal Ministry for Digital, Business and Enterprise (bmdw), the Austrian Research Promotion Agency (FFG), the Province of Styria and the Styrian Business Promotion Agency (SFG).
The aim of the research project is to use Reinforcement Learning (RL) to train an agent capable of navigating a robot in a complex real-world environment. We use the mobile robotic test platform SPIDER, developed at the VIRTUAL VEHICLE research labs, as a test bed. The agent is trained to follow a predetermined path while avoiding stationary obstacles such as trees or walls and moving obstacles such as other vehicles or pedestrians along or near the path. The agent is trained in a simulated virtual environment by an RL algorithm. The main concept of RL algorithms is inspired by human learning behavior: during training, the agent performs actions within the virtual environment, a driving simulation in our case, and receives feedback from the environment - either a reward or a punishment. The rewarding strategy we use encourages the agent to steer towards waypoints on the path and to keep its distance from any kind of obstacle. Essentially, the agent is rewarded when it moves in the direction of the next waypoint and punished when it does not. The distance to obstacles is measured using detection rings surrounding the agent: the more regions in those rings are blocked by obstacles, the more the agent is punished. This allows it to evolve a strategy for earning as much reward as possible and, as a consequence, to find a policy for following a path without crashing into obstacles. In real-world driving, the SPIDER observes the environment using four LIDAR sensors. The SPIDER's runtime environment, ROS, provides the environment data in the form of 2D cost maps, which encode the objects detected in the SPIDER's surroundings as 2D pixel images. These 2D pixel images, together with vectors connecting the SPIDER's center of mass to waypoints on the given path, are provided to the agent as input.
We have trained the agent with the above methodology in a coarse-grained grid-world environment, neglecting the influence of mass and of acceleration or deceleration forces in the vehicle control. The first experimental results show the feasibility of the approach: the agent is capable of following a given path in a variety of scenarios, driving around stationary obstacles, and evading moving obstacles. We are currently extending the model to include the vehicle dynamics of the SPIDER robot and to use larger, more fine-grained pixel images as the agent's input. Both changes are refinements of the original simplified model, so we are confident that the reinforcement learning approach will yield an agent capable of steering the SPIDER in real-world scenarios.
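The rewarding strategy described above (reward for progress towards the next waypoint, punishment proportional to the blocked regions of the detection rings) might be sketched as follows. The weights, the number of ring regions, and the function names are illustrative assumptions, not taken from the SPIDER implementation:

```python
import math

def reward(agent_pos, prev_pos, waypoint, blocked_ring_regions, total_regions=16):
    """Toy reward combining waypoint progress and obstacle proximity."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Progress term: positive if the agent moved closer to the next waypoint,
    # negative if it moved away.
    progress = dist(prev_pos, waypoint) - dist(agent_pos, waypoint)

    # Obstacle term: punish in proportion to how many regions of the
    # detection ring around the agent are blocked by obstacles.
    obstacle_penalty = blocked_ring_regions / total_regions

    return progress - 2.0 * obstacle_penalty   # weight 2.0 is an assumption
```

With a shaping function of this shape, maximizing the cumulative reward pushes the policy towards following the waypoints while keeping the detection rings clear.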
Presentation slides (password protected)
|
11:45 – 12:15 |
Sara Bertram, IAV GmbH |
|
Let’s do the time warp again – Machine Learning Framework Time Travel
Presentation slides (password protected)
|
12:15 – 12:45 |
Discussion and consolidation of results |
12:45 – 14:15 |
Lunch and automotive demonstration by IAV |
14:15 – 14:45 |
Dr. Daniel Schneider, Fraunhofer IESE |
|
Safety Assurance for Systems with ML Components – Challenges and Solution Approaches
-
Systems are becoming increasingly automated and intelligent. The economic potential is enormous, but at the same time individual companies risk falling behind in international competition. Artificial intelligence, and machine learning in particular, is proving to be a key enabler of automation, while guaranteeing and demonstrating the safety of ML components remains a largely unsolved challenge. The fundamental problem is the lack of human-readable, analyzable specifications, which makes it very difficult to apply established standards and methods for safety assurance. This talk starts with an introduction to machine learning, then describes the challenges in guaranteeing safety, and finally discusses possible solution approaches. These generally fall into two categories: assuring the ML behavior itself, and realizing a "traditional" parallel monitoring channel. Key contents of the talk:
- A short introduction to ML, illustrated with various examples
- Challenges in the safety assurance of ML components
- Possibilities and limitations of "hardening" the ML behavior
- Approaches for establishing a parallel monitoring channel
- A vision of an integrated safety and ML engineering that combines the two aspects above
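As a minimal illustration of the second category, the parallel monitoring channel, a simple and analyzable rule can override the ML output whenever a conservative safety envelope is violated. The deceleration value, the safety margin, and the interface below are hypothetical assumptions, not an actual automotive safety function:

```python
def safe_controller(ml_action, obstacle_distance_m, speed_mps):
    """Hypothetical parallel monitoring channel.

    The ML component proposes an action; a simple, fully analyzable rule
    overrides it whenever the distance to the nearest obstacle falls below
    the braking distance plus a fixed safety margin."""
    braking_distance = speed_mps ** 2 / (2 * 6.0)   # assumed 6 m/s^2 deceleration
    if obstacle_distance_m < braking_distance + 2.0:  # assumed 2 m margin
        return "BRAKE"      # the monitor overrides the learned component
    return ml_action        # the ML output passes through unchanged
```

The point of this architecture is that only the few lines of the monitor, not the ML component, need a human-understandable specification for the safety argument.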
Presentation slides (password protected)
|
14:45 – 15:15 |
Prof. Dr. Christoph Lüth, Universität Bremen |
|
"I am safe" – Self-Verifying Cyber-Physical Systems
-
The safety of cyber-physical systems can be substantially increased by formal verification. To counter the resulting state-space explosion, we have in recent years developed techniques of self-verification, in which the system verifies itself at runtime. Instantiating variables at runtime shrinks the state space drastically and thus makes formal verification feasible again. We can instantiate either variables whose values remain constant over long periods, or variables whose instantiation reduces the state space particularly strongly. A refinement of this technique takes the temporal dimension into account: we verify the system behavior for exactly those states that can be reached within a given time window. In particular, these techniques also make it possible to verify systems about whose structure little is known, especially those whose control algorithm uses AI-based methods. The talk presents the basic techniques of self-verification and illustrates the approach with case studies.
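The effect of runtime instantiation on the state space can be illustrated with a toy bounded reachability check. The transition system, the parameter range, and the exploration depth below are illustrative assumptions, not the actual self-verification framework:

```python
def reachable_states(step, init_states, depth):
    """Bounded exploration of a transition system: all states reachable
    from init_states within the given number of steps."""
    seen = set(init_states)
    frontier = set(init_states)
    for _ in range(depth):
        frontier = {s2 for s in frontier for s2 in step(s)} - seen
        seen |= frontier
    return seen

def step(state):
    # Toy controller: a counter that wraps at a configurable bound.
    counter, bound = state
    return [((counter + 1) % bound, bound)]

# Design-time verification must cover every possible value of the bound ...
offline = reachable_states(step, {(0, b) for b in range(2, 12)}, depth=20)
# ... whereas at runtime the bound is known (say, 3), so the self-verifying
# system only has to explore a far smaller instantiated state space.
runtime = reachable_states(step, {(0, 3)}, depth=20)
```

Even in this tiny example the instantiated state space (3 states) is a fraction of the design-time one (65 states); for realistic systems the reduction is what brings formal verification back into reach.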
Presentation slides (password protected)
|
15:15 – 15:45 |
Coffee break and networking |
15:45 – 16:15 |
Prof. Dr. Achim Rettberg, Hochschule Hamm-Lippstadt |
|
Deep Learning Based Approach for User Monitoring in Autonomous Driving
-
New car technologies and infrastructures influence both driver and passenger behavior. Technologies like autonomous or assisted driving allow the driver to focus on other activities. This means that modern cars should support these activities and react, through actuators, to the user's state, such as his or her mood or health condition. Furthermore, modern cars allow drivers to be observed by monitoring specific health data from several sensors. Additionally, wearables are becoming common in users' lives: they constantly monitor user activities and vital signs and provide immediate feedback to the users. These devices can offer important information about users that, when connected to other systems or devices, can be used for different purposes. A set of physical devices that interact with a virtual cyberspace through communication networks is the basic idea of Cyber-Physical Systems (CPS). CPS are a new generation of systems integrating physical and computational capabilities that interact with humans through human-machine interfaces. The ability to interact, expand functionalities, and even learn new capabilities is one key aspect of this new generation of device integration. These devices form the edges of the Internet of Things, which works as a bridge connecting physical devices to cyberspace and has introduced new kinds of applications in which many data sources, actuators, and people can be integrated. Such applications require cloud infrastructures to store and process huge amounts of data. However, not all applications are connected all the time, or they have time constraints that must be respected, as for example in the automotive area. For this kind of application, the idea of computing on the edge plays an important role: processing power embedded on devices can provide answers that respect deadlines.
By combining and analyzing all this data with deep learning algorithms, a modern car is able to identify the health status and mood of drivers and passengers and thus improve the driver and passenger experience. The main idea of this project is to set up an adaptable service-architecture middleware that allows sensor fusion and the integration of deep learning algorithms, and that is able to support the previously mentioned car services. The proposed architecture must support both edge- and cloud-based approaches, depending on the requirements of this kind of system. With this approach, systems can offer new functionalities for users inside a car as well as new safety mechanisms for autonomous cars that may need the user's help in some situations.
Presentation slides (password protected)
|
16:15 – 17:00 |
Discussion and consolidation of results |
17:00 |
End of the event |