Autonomous systems, such as self-driving cars and collaborative robots, must occasionally ask people around them for help in anomalous situations. They form a new generation of interaction platforms that provide a comprehensive multimodal presentation of the current situation in real time, so that a smooth Transfer of Control (ToC) to human agents can be guaranteed. Several scientific questions are associated with this ToC, including what should cause a ToC, when and how to notify a user, and how to manage many such situations at once. In this seminar, we will investigate several methods of Artificial Intelligence (AI) that may be applied to these challenges.

This seminar covers theoretical and practical aspects of the ToC challenges. Attendees will investigate a topic or particular scenario based on scientific publications, as well as experiment with real data or implement a small demonstration.

Prerequisites:
  • Good English skills (the literature will be in English)
  • Basic programming skills (the practical part will involve some programming)
  • For the Machine Learning topics, basic ML knowledge is recommended
Grading Criteria:
  • Presentation of a topic based on a scientific paper
  • Active participation in the discussion of presented topics and moderation of one session
  • Realization of a practical assignment, e.g. implementation or model creation (in groups of 2-3 people)
  • Registration is now closed. We will provide the final list of topics, presentation dates, references, and supervisors very soon.
  • There are no seminar meetings on 23.04.18 and 30.04.18.
  • Here you can find the slides from the introductory talk, including some tips and guidelines for the presentations.
1) Cause of Transfer

Using autonomous robots as an example, we determine when a transfer of control should occur and to whom control should be transferred. The decision is based on the robot's sensors, its abilities (actuators), and the environment (e.g. human positions and skills).

AI Methods: Robot Programming, Context Modeling
Practical Assignments*: Monitoring and Decision Module
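
As a toy illustration of such a decision, the following sketch (all data structures, skill names, and distances are made up for demonstration) checks whether a task's requirements exceed the robot's capabilities and, if so, picks the nearest human whose skills cover the gap:

```python
# Illustrative sketch: decide whether a robot should transfer control,
# and to whom, based on its capabilities and nearby humans' skills.
# All names, skills, and distances here are hypothetical.

def decide_transfer(task_requirements, robot_capabilities, humans):
    """Return None if the robot can handle the task itself, otherwise
    the nearest human whose skills cover the missing capabilities."""
    missing = set(task_requirements) - set(robot_capabilities)
    if not missing:
        return None  # no transfer of control needed
    candidates = [h for h in humans if missing <= set(h["skills"])]
    if not candidates:
        return None  # nobody suitable; a real system would escalate here
    return min(candidates, key=lambda h: h["distance_m"])

humans = [
    {"name": "Alice", "skills": {"grasp", "unjam"}, "distance_m": 4.0},
    {"name": "Bob",   "skills": {"unjam"},          "distance_m": 1.5},
]
helper = decide_transfer({"move", "unjam"}, {"move"}, humans)  # -> Bob
```

A real monitoring module would of course derive the capability and environment models from sensor data rather than hard-coded dictionaries.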

2) Time of Transfer

When should an autonomous system initiate transfer of control to a human? Using autonomous vehicles as an example, we investigate the best time to transfer control from the system to a human based on personal factors (e.g. reaction time) and situational factors (e.g. driving speed).

AI Methods: Machine Learning, Deep Learning
Practical Assignments*: Prediction Model
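
To make the idea concrete, here is a deliberately simple sketch of such a prediction. The features and coefficients are invented for illustration; a real model would be learned from driving-simulator logs:

```python
# Illustrative sketch of a takeover-time prediction model. The feature
# set and all coefficients are made up; a learned model (see the
# practical assignments) would replace these hand-set values.

def predict_takeover_time(reaction_time_s, speed_kmh, distracted):
    """Predict the lead time (seconds) a driver needs before a ToC."""
    t = 2.0                         # hypothetical baseline budget
    t += 1.5 * reaction_time_s      # slower reactions need earlier warning
    t += 0.02 * speed_kmh           # higher speed, earlier notification
    if distracted:
        t += 1.0                    # penalty for off-driving activity
    return t

lead_time = predict_takeover_time(reaction_time_s=0.8, speed_kmh=100,
                                  distracted=True)  # -> 6.2 seconds
```

The point of the sketch is only the shape of the mapping: personal and situational factors in, a notification lead time out.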

3) Mode of Transfer

The transfer can be communicated using a number of channels and interaction options. We investigate different multimodal concepts, cognitive aspects, and multi-level dialogue.

AI Methods: Multimodal Interaction Design, Dialogue Development
Practical Assignments*: Dialogue System for ToC

4) Management of Transfer

In an environment where multiple human agents regularly receive control from multiple robots, those agents need an overview of the autonomous agents and their situations. Using a retail robot as an example, we create a dashboard that gives human agents an overview, alerts, and a way to return control to the robot.

AI Methods: Situation Summary, Plan Generation, Return of Control
Practical Assignments*: Development of a Management Dashboard

Please note: The current assignment topics are subject to change or extension.


In case you do not have access to your paper, please contact your supervisor.

Date | Name | Paper | Moderator | Supervisor
07.05.18 | Osama Haroo | A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI | Baris Cakar | Florian Daiber
07.05.18 | Baris Sönmez | Supporting Trust in Autonomous Driving | – | Florian Daiber
14.05.18 | Baris Cakar | Takeover Time in Highly Automated Vehicles: Noncritical Transitions to and From Manual Control | – | Guillermo Reyes
14.05.18 | Hamza Anwar | Minimum Time to Situation Awareness in Scenarios Involving Transfer of Control from Automated Driving Suite | – | Rafael Math
28.05.18 | Payman Goodarzi | Emergency, automation off: Unstructured transition timing for distracted drivers of automated vehicles | Ruben Garcia Ucharima | Rafael Math
28.05.18 | Tri Huynh | "Take over!" How long does it take to get the driver back into the loop? | Hassan Kanso | Frederik Wiehr
04.06.18 | Hassan Kanso | Incremental learning algorithms and applications | Shiya Wang | Guillermo Reyes
11.06.18 | Ruben Garcia Ucharima | Guiding attention in controlled real-world environments | Maha Siddiqui, Baris Sönmez | Michael Feld
18.06.18 | Amr Gomaa | Fine-Tuning Deep Neural Networks in Continuous Learning Scenarios | Adina Pohle | Guillermo Reyes
25.06.18 | Shiya Wang | Adaptive probabilistic fission for multimodal systems | Ibrahim Atwi | Yannick Körber
25.06.18 | Maha Siddiqui | Context-based generation of multimodal feedbacks for natural interaction in smart environments | Melvin Chelli | Yannick Körber
02.07.18 | Adina Pohle | An assistive robot to support dressing strategies for planning and error handling | Osama Haroo | Tim Schwartz
02.07.18 | Melvin Chelli | Multimodal execution monitoring for anomaly detection during robot manipulation | Tri Huynh | Tim Schwartz
09.07.18 | Ibrahim Atwi | Improved human–robot team performance through cross-training | Hamza Anwar, Amr Gomaa | Tim Schwartz
Practical Assignments

Cause of Transfer 1: A topic related to programming of industry robots to be detailed later. Some aspects of this project (related to robots) cannot start before June 18th.

Cause of Transfer 2: A topic related to programming of industry robots to be detailed later. Some aspects of this project (related to robots) cannot start before June 18th.

Time of Transfer 1: Driving-simulator simulation and logging of ToC situations. Design, implement, and conduct a small user study using the existing driving simulator OpenDS and other tools. The experiment should log appropriate features (e.g. of the user, the car, and the environment) to measure user reaction times in different ToC situations. Helpful: good programming skills in Java; XML.

Time of Transfer 2: Deep learning task. Design and implement a deep learning model to predict certain traffic scenarios, which could help to estimate the time of transfer. Possible aspects include the detection of traffic signs, critical traffic situations, distracted drivers, etc. Helpful: good programming skills in Python, C++, …; basic knowledge of image processing; Machine Learning / Deep Learning knowledge.
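
To show the kind of classifier this assignment targets in miniature, here is a framework-free sketch: a single logistic unit trained by gradient descent on made-up "critical situation" features. A real solution would use a deep network (e.g. a CNN on camera images) and a proper framework:

```python
# Minimal, framework-free stand-in for the deep learning assignment:
# a single logistic unit trained on fabricated features. The feature
# meanings and data are invented purely for illustration.
import math

def train(samples, labels, epochs=500, lr=0.5):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))   # sigmoid activation
            g = p - y                    # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# features: [normalized speed, normalized distance to obstacle]
# label: 1 = critical situation, 0 = uncritical
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

After training, `predict(w, b, [0.95, 0.05])` should score a fast, close situation as critical and a slow, distant one as uncritical.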

Mode of Transfer 1: Natural Language Generation for ToC in the context of Human-Robot Interaction. Situation: A robot is executing a task defined in a task model. On failure, control has to be transferred to a human. Based on the current task, a textual summary has to be generated such that a human is capable of understanding the situation. Your task is to

  • Generate text from a task model which summarizes the situation using a template-based approach (e.g. ARRIA)
  • Find context-based parameters (e.g. cognitive load, current user activity, robot’s task, user profile, user preferences, …) to improve the template selection / generation
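
The template-based approach above can be sketched as follows. The templates, task model, and the cognitive-load threshold are all invented for illustration (a real system might use a dedicated NLG engine such as the one named above):

```python
# Illustrative sketch of template-based summary generation for a ToC.
# Templates, task-model fields, and the load threshold are hypothetical.

TEMPLATES = {
    "low_load":  "I am {robot}. While {task}, step '{step}' failed: "
                 "{error}. Please take over.",
    "high_load": "{robot} needs help: {step} failed.",  # terse variant
}

def summarize(task_model, context):
    """Pick a template by the user's cognitive load and fill it in."""
    key = "high_load" if context["cognitive_load"] > 0.7 else "low_load"
    return TEMPLATES[key].format(**task_model)

task_model = {"robot": "Arm-1", "task": "stocking shelf 3",
              "step": "grasp item", "error": "gripper jam"}
print(summarize(task_model, {"cognitive_load": 0.9}))
# -> Arm-1 needs help: grasp item failed.
```

The second bullet above then amounts to extending the `context` dictionary (user activity, profile, preferences, …) and the selection logic accordingly.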

Mode of Transfer 2: Multimodal Output Selection and Adaptation for ToC in the context of Autonomous Driving. Situation: A semi-autonomous car is currently steering autonomously while the human is occupied with another task. The human has to take over control because of an unexpected event (e.g. a construction site, an accident, …). The human has to be informed about the imminent transfer via a suitable combination of modalities.

Your task is to

  • Find context-based parameters (e.g. cognitive load, current user activity, user profile, user preferences, urgency, …)
  • Create a procedure for selecting a suitable combination of auditory and visual output based on the context (e.g. using OptaPlanner)
  • Adapt the presentation based on the context
  • Optional: Integration into OpenDS (driving simulator)
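
A minimal score-based selection, standing in for a constraint solver such as OptaPlanner, could look like this. The modalities, scoring rules, and context parameters are illustrative only:

```python
# Sketch of context-based output selection: enumerate small modality
# combinations and score them against the context. All modalities,
# weights, and thresholds are hypothetical.
from itertools import combinations

MODALITIES = ["chime", "dashboard_text", "head_up_display", "speech"]
VISUAL = {"dashboard_text", "head_up_display"}

def score(combo, context):
    s = 0.0
    if context["urgency"] > 0.7 and "chime" in combo:
        s += 2.0                    # urgent events warrant an alert sound
    if context["eyes_on_road"] and combo & VISUAL:
        s += 1.0                    # visual output only helps if seen
    if context["cognitive_load"] > 0.7:
        s -= 0.5 * len(combo)       # avoid overload: fewer channels
    if "speech" in combo:
        s += 1.0                    # speech works eyes-free
    return s

def select_output(context):
    combos = [set(c) for r in (1, 2, 3) for c in combinations(MODALITIES, r)]
    return max(combos, key=lambda c: score(c, context))
```

For an urgent event with a distracted, high-load driver this picks chime plus speech; a solver-based solution would replace the brute-force enumeration with declared constraints.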

Management of Transfer 1: Towards Real-time Estimation of Situation Awareness from Human Gaze in Highly Automated Driving. In an existing driving simulator setup, a standard paper-based test for situation awareness (e.g. SAGAT) will be executed and compared against a model-based estimate derived from the user's gaze behavior.
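
One straightforward way to do that comparison is a per-participant correlation between the questionnaire score and the gaze-based estimate. The scores below are fabricated for illustration:

```python
# Sketch: compare a questionnaire-based situation-awareness score
# (e.g. SAGAT) against a gaze-based estimate via Pearson correlation.
# The participant data is made up for demonstration.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sagat_scores  = [0.9, 0.4, 0.7, 0.2, 0.8]   # per-participant SAGAT results
gaze_estimate = [0.8, 0.5, 0.6, 0.3, 0.9]   # model estimates from gaze
r = pearson(sagat_scores, gaze_estimate)     # close to 1 = good agreement
```

A high correlation would suggest the gaze model tracks the paper test well enough to be used in real time, where SAGAT itself cannot be administered.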

Management of Transfer 2: Towards Human-Robot Transfer of Control in Retail Environments. Based on a 3D scan of a reference shopping shelf, the students will work on recognizing states of action in which a robot has to transfer control to a human due to its inability to resolve the situation. A standard tablet computer mounted on a telepresence robot will be used to capture images of the shelf and recognize, e.g., an out-of-stock or misplaced-product situation. In such a situation, the transfer of control should be designed and implemented in an exemplary manner.