Title: Model-Based AI for Safe and Trusted Human-Autonomy Teaming
Abstract: As autonomous systems are increasingly adopted into application
solutions, the challenge of supporting interaction with humans is becoming more
apparent. In part this is to support integrated working styles, in which humans and
intelligent systems cooperate in problem-solving, but it is also a necessary step
in building trust as humans delegate greater responsibility to such
systems. The challenge is to find effective ways to communicate the foundations of
autonomous-system behaviours, when the algorithms that drive them are far from
transparent to humans. In this talk we consider the opportunities that arise in AI
Planning, exploiting the model-based representations that form a familiar and
common basis for communication with users, particularly in scenarios involving
human-autonomy teaming. The talk will also present the new features
available in the latest version of ROSPlan.
Bio: Daniele Magazzeni is Associate Professor in the Department of Informatics at
King's College London, where he leads the Trusted Autonomous Systems hub and is
Co-Director of the Centre for Doctoral Training on Safe and Trusted AI. Dan's
research interests are in Safe, Trusted and Explainable AI, with a particular focus
on AI planning for robotics and autonomous systems, and human-autonomy teaming. Dan
is the President-Elect of the ICAPS Executive Council. He was Conference Chair of
ICAPS 2016 and Workshop Chair of IJCAI 2017. He is Co-Chair of the IJCAI-19
Workshop on XAI and Co-Chair of the ICAPS-19 Workshop on Explainable Planning.