
Identifying tractable decentralized control problems on the basis of information structure

By: Mahajan, A.; Teneketzis, D.; Nayyar, A.

2008 / IEEE / 978-1-4244-2925-7

Description

This item was taken from the IEEE conference paper 'Identifying tractable decentralized control problems on the basis of information structure'. A sequential decomposition of two general models of decentralized systems with non-classical information structures is presented. In model A, all agents have two observations at each step: a common observation seen by all agents and a private observation of their own. The control action of each agent is based on all past common observations, its current private observation, and the contents of its memory. At each step, each agent also updates the contents of its memory. A cost function, which depends on the state of the plant and the control actions of all agents, is given. The objective is to choose control and memory-update functions for all agents either to minimize a total expected cost over a finite horizon or to minimize a discounted cost over an infinite horizon. In model B, the agents do not have any common observation; the rest is the same as in model A.

The key idea of our solution methodology is the following. From the point of view of a fictitious agent that observes all common observations, the system can be viewed as a centralized system with partial observations. This allows us to identify information states and obtain a sequential decomposition. When the system variables take values in finite sets, the optimality equations of the sequential decomposition are similar to those of partially observable Markov decision processes (POMDPs) with finite state and action spaces. For such systems, we can use algorithms for POMDPs to compute optimal designs for models A and B.
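To give a concrete sense of the kind of POMDP machinery the abstract refers to, the sketch below runs finite-horizon value iteration for a small, made-up finite POMDP over a fixed grid of belief points. It is an illustration only, not the paper's construction: the transition matrix P, observation matrix Q, cost matrix c, grid, and horizon T are all hypothetical placeholders.

```python
# Illustrative sketch only: finite-horizon value iteration for a small finite
# POMDP on a fixed grid of belief points. All model data below are invented.
import numpy as np

n_s, n_a, n_o = 3, 2, 2          # sizes of state, action, observation sets (hypothetical)
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a, s'] = transition probability
Q = rng.dirichlet(np.ones(n_o), size=n_s)          # Q[s', o]   = observation probability
c = rng.uniform(0.0, 1.0, size=(n_s, n_a))         # c[s, a]    = per-step cost

def belief_update(b, a, o):
    """Bayes update of the belief after taking action a and observing o."""
    b_next = (b @ P[:, a, :]) * Q[:, o]
    total = b_next.sum()
    return b_next / total if total > 0 else b_next

# Fixed belief grid (corners plus the uniform belief); a real solver would refine this set.
beliefs = [np.eye(n_s)[i] for i in range(n_s)] + [np.ones(n_s) / n_s]

def backup(V_next):
    """One step of the POMDP optimality (Bellman) equation on the belief grid."""
    V = {}
    for i, b in enumerate(beliefs):
        action_values = []
        for a in range(n_a):
            expected_cost = b @ c[:, a]
            future = 0.0
            for o in range(n_o):
                p_o = (b @ P[:, a, :]) @ Q[:, o]       # probability of observing o
                if p_o > 0:
                    b_next = belief_update(b, a, o)
                    # Crude nearest-neighbour lookup on the grid.
                    j = min(range(len(beliefs)),
                            key=lambda k: np.linalg.norm(beliefs[k] - b_next))
                    future += p_o * V_next[j]
            action_values.append(expected_cost + future)
        V[i] = min(action_values)
    return V

T = 5                                        # finite horizon (hypothetical)
V = {i: 0.0 for i in range(len(beliefs))}    # terminal cost assumed to be zero
for _ in range(T):
    V = backup(V)
print({i: round(v, 3) for i, v in V.items()})
```

In the setting the abstract describes, the belief would presumably be replaced by the fictitious agent's information state (a distribution over the plant state and the agents' memories, conditioned on the common observations), and the actions by the functions prescribed to the agents, with the backup structure remaining POMDP-like.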