Multi-agent control with limited interpersonal comparability
The fundamental difficulty of many dynamic resource allocation problems and, more generally, of multi-agent control problems lies in determining a solution that trades off the costs incurred by the different participants.
A critical, yet often underappreciated, dimension of the complexity of these decisions lies in the fundamentally limited comparability, or even non-comparability, of the agents' underlying preferences. For example, when prioritizing one set of drivers over another in a traffic management solution, how do we decide between a small set of drivers with extremely high urgency and a large set of drivers with minimal urgency? A fundamental design assumption is that we can compare the gains and losses of different users and therefore trade them off against one another. This seemingly innocuous assumption plays a crucial role and needs to be carefully examined.
The limited comparability of utilities emerges from the impossibility of eliciting individual preferences truthfully (e.g., when monetary auctions are not allowed) and from fairness principles that require the decision maker to "normalize" the agents' preferences. It also arises from the practical difficulty of measuring the agents' private preferences on a common scale.
This class of problems has been a major topic in economics, particularly in social choice theory (Arrow, Sen, Roberts, etc.). Results in this stream of literature include famous impossibility results (Arrow's theorem) and constructive prescriptions on which social welfare functions are admissible, depending on whether players can quantify and compare the intensity of their preferences. These concepts have received some recent attention in computer science and machine learning, but remain underexplored in many engineering domains, including control and decision systems, where relatively blunt approaches, such as simply summing costs/benefits, are often favored. Distributed optimization algorithms and control strategies typically work with social welfare functions (utilitarian, max-min / egalitarian) that implicitly rely on stringent comparability assumptions.
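A minimal sketch of the comparability assumptions hidden in these welfare functions, using hypothetical utility numbers (all values and function names below are illustrative, not taken from any specific model): without interpersonal comparability, each agent's utility function is only defined up to an individual positive rescaling, and the utilitarian ranking is not invariant to such rescalings.

```python
def utilitarian(profile):
    """Sum of individual utilities (implicitly assumes unit comparability)."""
    return sum(profile)

def egalitarian(profile):
    """Max-min / egalitarian welfare: utility of the worst-off agent."""
    return min(profile)

# Two candidate allocations, with utilities of (agent 1, agent 2):
A = (10.0, 0.0)   # great for agent 1, nothing for agent 2
B = (4.0, 5.0)    # moderate for both

# With the utilities as reported, the two criteria disagree:
assert utilitarian(A) > utilitarian(B)   # 10 > 9: utilitarian picks A
assert egalitarian(B) > egalitarian(A)   # 4 > 0: max-min picks B

# If utilities are not interpersonally comparable, agent 1 could just as
# validly report a rescaled representation of the same preferences:
A2 = (0.1 * A[0], A[1])   # (1.0, 0.0)
B2 = (0.1 * B[0], B[1])   # (0.4, 5.0)

# The utilitarian ranking flips under this admissible transformation,
# revealing the comparability assumption it silently relied on:
assert utilitarian(B2) > utilitarian(A2)  # 5.4 > 1.0: now picks B
```

The point of the sketch is not the specific numbers but the invariance test: a welfare criterion is only meaningful if its ranking is unchanged by all transformations of individual utilities that the available comparability information permits.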
As socio-technical applications continue to mature and fairness becomes increasingly important, the subtleties of social choice theory ought to be folded into control systems and applications.
Saverio Bolognani (ETH Zurich)
Network control theory, distributed control; system theory of algorithms, game theory; online optimization, complex systems

Heinrich Nax (ETH Zurich & University of Zurich)
Behavioral game theory, economics, philosophy; market design and learning in games; markets and collective goods