
Price Mechanisms for Distributed Control Synthesis


If you have a question about this talk, please contact Dr Guy-Bart Stan.

Many control applications involve several decentralized control units, each with access to specific information about the system state. Still, the bulk of control theory is developed in a centralized setting, where all measurements are processed together to compute the control signals. This paradigm has conceptual advantages, but also inherent limitations in terms of complexity and robustness. The purpose of this seminar is to sketch how some ideas from economics and game theory may help to go beyond the traditional paradigm and support distributed control synthesis for dynamical systems.

Iterative synthesis procedures with provable convergence to a Nash equilibrium are generally hard to obtain, even for restricted classes of games. Similar difficulties appear in the general equilibrium theory of economics when it comes to price negotiations aiming at a Walrasian equilibrium. However, engineering applications often allow us to make the following two simplifying assumptions:

1. Every agent has a utility function that is "linear in money" (a standard formalisation is sketched just after this list).

2. Agents have no incentive to hide or encode information in their decisions.
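As a gloss on the first assumption, "linear in money" is usually taken to mean quasi-linear utility. A minimal sketch in notation of my own rather than the speaker's: agent i's utility over an allocation x_i and a money holding m_i is

    u_i(x_i, m_i) = v_i(x_i) + m_i,

so that, facing a price p, the agent simply maximises v_i(x_i) - p x_i. Money then enters every objective with the same unit weight, which is what makes a single price signal comparable across agents.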

Under the first assumption, a classical argument shows that price convergence towards an equilibrium is achieved by the gradient algorithm. In this presentation we use the same method to iterate control policies towards a Nash equilibrium. We give discrete-time convergence conditions that can be verified locally, without any agent needing access to a global model. Simple examples are discussed.
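To make the price-adjustment argument concrete, the following is a minimal, self-contained sketch of the classical gradient ("tatonnement") price update for a single divisible good with quasi-linear agents. The quadratic valuations and costs, the step size, and all numerical values are illustrative assumptions of mine, not taken from the talk, and the sketch covers only the static market case rather than the control-policy iteration discussed in the seminar.

# Illustrative sketch (assumptions, not the speaker's model): gradient /
# tatonnement price adjustment for a single divisible good with
# quasi-linear ("linear in money") agents.
import numpy as np

# Consumers: valuation v_i(x) = a_i*x - 0.5*b_i*x^2, utility v_i(x) - p*x.
# Maximising gives the demand function x_i(p) = max(0, (a_i - p) / b_i).
a = np.array([10.0, 8.0, 6.0])
b = np.array([1.0, 2.0, 1.5])

# Producers: cost c_j(y) = 0.5*d_j*y^2, profit p*y - c_j(y),
# giving the supply function y_j(p) = p / d_j.
d = np.array([1.0, 0.5])

def excess_demand(p):
    demand = np.maximum(0.0, (a - p) / b).sum()
    supply = (p / d).sum()
    return demand - supply

# Gradient step on the price: raise it when demand exceeds supply,
# lower it otherwise.  Each agent only reports its own response to p.
p, step = 1.0, 0.05
for _ in range(1000):
    z = excess_demand(p)
    if abs(z) < 1e-9:
        break
    p = max(0.0, p + step * z)

print(f"clearing price ~ {p:.4f}, excess demand ~ {excess_demand(p):.1e}")

The design point the example is meant to show: the price update only uses the agents' responses to the current price, so no participant ever needs a global model of the others.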

This talk is part of the CUED Control Group Seminars series.
