
Towards ad hoc interactions with robots


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image/voice recorded, please consider this before attending.

A primary motivation for work within my group is the notion of autonomous agents that can interact, robustly over the long term, with an incompletely known environment that continually changes. In this talk I will describe results from a few different projects that attempt to address key aspects of this big question.

I will begin by looking at how task encodings can be made effective using qualitative (geometric) structure in the strategy space. Using examples that may be familiar to many machine learning researchers – such as control of an inverted pendulum and bipedal walkers – we will explore the connection between the geometric structure of solutions and strategies for dealing with a continually changing task context. The key result concerns ways to combine exploitation of ‘natural’ dynamics with the benefits of active planning.
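
As a rough illustration of this combination (not necessarily the approach taken in the talk), one standard recipe on the inverted pendulum is to let an energy-shaping law work with the passive swing dynamics during swing-up and hand over to a linear balancing law near the upright equilibrium. All parameters and gains below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical pendulum parameters (mass, length, gravity); theta is measured
# from the downward-hanging position, so the upright equilibrium is theta = pi.
m, l, g = 1.0, 1.0, 9.81

def wrap_to_pi(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def energy_shaping_torque(theta, theta_dot, k_e=1.0):
    """Swing-up by working with the passive dynamics: push the mechanical
    energy towards the energy of the upright equilibrium, E* = m*g*l."""
    E = 0.5 * m * l**2 * theta_dot**2 - m * g * l * np.cos(theta)
    E_star = m * g * l
    return -k_e * theta_dot * (E - E_star)

def balance_torque(theta, theta_dot, K=(25.0, 7.0)):
    """Linear state-feedback stabiliser, valid only near upright.
    The gains K are placeholders, not tuned values."""
    err = np.array([wrap_to_pi(theta - np.pi), theta_dot])
    return -float(np.dot(K, err))

def controller(theta, theta_dot, switch_angle=0.3):
    """Exploit the natural swing far from upright; switch to deliberate
    stabilisation once the state enters the capture region."""
    if abs(wrap_to_pi(theta - np.pi)) < switch_angle:
        return balance_torque(theta, theta_dot)
    return energy_shaping_torque(theta, theta_dot)
```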

Can there be similarly flexible encodings for more general decision problems, beyond the domain of robot control? I will describe recent results from our work on policy reuse and transfer learning, demonstrating how it is possible to construct agents that learn to adapt, through a process of belief updating based on policy performance, to a changing task context, including the case where the change is induced by other decision-making agents.
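
As a hedged sketch of the general idea (the specific algorithm is not given in the abstract), one way to adapt through belief updating based on policy performance is to maintain a belief over which previously solved task the current one resembles, select the stored policy with the highest belief-weighted expected return, and revise the belief from the return actually observed. The performance models and numbers below are illustrative assumptions.

```python
import numpy as np

class PolicyReuseAgent:
    """Keeps a belief over which known task the current one resembles and
    reuses the stored policy expected to perform best under that belief."""

    def __init__(self, performance_models, prior=None):
        # performance_models[i][j] = assumed (mean, std) of the return of
        # stored policy j on hypothesised task i (learned offline in practice).
        self.models = np.asarray(performance_models, dtype=float)
        n_tasks = self.models.shape[0]
        self.belief = (np.full(n_tasks, 1.0 / n_tasks)
                       if prior is None else np.asarray(prior, dtype=float))

    def select_policy(self):
        """Pick the policy with the highest belief-weighted expected return."""
        expected = self.belief @ self.models[:, :, 0]
        return int(np.argmax(expected))

    def update(self, policy_idx, observed_return):
        """Bayes update of the task belief from the observed return, using the
        Gaussian likelihood implied by the performance models."""
        means = self.models[:, policy_idx, 0]
        stds = self.models[:, policy_idx, 1]
        likelihood = np.exp(-0.5 * ((observed_return - means) / stds) ** 2) / stds
        self.belief *= likelihood
        self.belief /= self.belief.sum()

# Two hypothesised tasks, two stored policies (all numbers illustrative).
agent = PolicyReuseAgent([[(10.0, 2.0), (2.0, 2.0)],
                          [(3.0, 2.0), (8.0, 2.0)]])
j = agent.select_policy()             # run stored policy j for an episode...
agent.update(j, observed_return=4.0)  # ...then revise the belief from its return
```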

Finally, building on this theme of making decisions in the presence of other decision-making agents, I will briefly describe results from our recent experiments in human-robot interaction, where agents must learn to influence the behaviour of other agents in order to achieve their task. This experiment is a step towards general and implementable models of ad hoc interaction, in which agents learn from experience to shape aspects of that interaction without the benefit of prior coordination and related knowledge. I will conclude with some remarks on the potential practical uses of such models and learning methods in applications ranging from personal robotics to intelligent user interfaces.
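
Purely as an illustration of acting without prior coordination (not the experimental setup described in the talk), a minimal ad hoc agent can estimate the other agent's behaviour from its observed actions and best-respond to that running estimate, in the style of fictitious play. The payoff matrix and action sets below are hypothetical.

```python
import numpy as np

# payoff[my_action, their_action]: here, reward 1 for matching the other
# agent's action and 0 otherwise (a toy coordination task).
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

# Pseudo-counts of the other agent's observed actions (uniform prior).
counts = np.ones(payoff.shape[1])

def choose_action():
    """Best response to the empirical distribution of the partner's actions."""
    partner_dist = counts / counts.sum()
    return int(np.argmax(payoff @ partner_dist))

def observe(partner_action):
    """Update the empirical model after seeing the partner act."""
    counts[partner_action] += 1
```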

This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.
