The Problem of Social Cost in Multi-Agent Universal Reinforcement Learning

While I have worked on aspects of AI safety for quite a few years now, in particular privacy technologies and confidential computing, I am a late convert to the importance of Artificial General Intelligence (AGI) safety research and did not take the problem seriously until about a year ago. My mindset has since changed completely, and I now believe AGI safety research is one of the key problems of our time. In fact, I consider the problem so important that I decided to step off the management track in April this year to devote more time to its technical aspects. On that front, I am happy to report that the first fruit of that labour is now available for sharing, in the form of a paper that a couple of ANU colleagues and I have just put on arXiv.

The paper title is The Problem of Social Cost in Multi-Agent General Reinforcement Learning: Survey and Synthesis.

Here’s the abstract:

The AI safety literature is full of examples of powerful AI agents that, in blindly pursuing a specific and usually narrow objective, end up with unacceptable and even catastrophic collateral damage to others. In this paper, we consider the problem of social harms that can result from actions taken by learning and utility-maximising agents in a multi-agent environment. The problem of measuring social harms or impacts in such multi-agent settings, especially when the agents are artificial general intelligence (AGI) agents, was listed as an open problem in Everitt et al., 2018. We attempt a partial answer to that open problem in the form of market-based mechanisms to quantify and control the cost of such social harms. The proposed setup captures many well-studied special cases and is more general than existing formulations of multi-agent reinforcement learning with mechanism design in two ways: (i) the underlying environment is a history-based general reinforcement learning environment, as in AIXI; (ii) the reinforcement-learning agents participating in the environment can have different learning strategies and planning horizons. To demonstrate the practicality of the proposed setup, we survey some key classes of learning algorithms and present a few applications, including a discussion of the Paperclips problem and pollution control with a cap-and-trade system.
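To give a rough flavour of the cap-and-trade idea mentioned in the abstract, here is a minimal toy sketch in Python. It is not the paper's actual formulation; all names, actions, and numbers (PERMIT_CAP, emission_of, the pricing rule, and so on) are hypothetical choices made purely for illustration. The point is just the mechanism: each agent holds tradeable permits for a socially harmful side effect, and any harm beyond an agent's permit holdings must be paid for at a market price that rises as the overall cap is consumed, so the social cost shows up directly in each agent's reward.

```python
# Toy sketch (not from the paper): a cap-and-trade style charge on the
# social cost of agents' actions in a shared multi-agent environment.
# All names and numbers here are hypothetical, for illustration only.

import random

PERMIT_CAP = 10.0   # total emissions allowed per episode (the "cap")
NUM_AGENTS = 3
EPISODE_LEN = 20

def emission_of(action):
    """Hypothetical social harm (e.g. pollution) caused by an action."""
    return {"clean": 0.0, "normal": 0.5, "dirty": 2.0}[action]

def base_reward_of(action):
    """Private payoff: dirtier production is assumed more profitable."""
    return {"clean": 0.1, "normal": 0.5, "dirty": 1.0}[action]

def run_episode():
    # Permits are split evenly; each agent's holdings shrink as it pollutes.
    permits = {i: PERMIT_CAP / NUM_AGENTS for i in range(NUM_AGENTS)}
    rewards = {i: 0.0 for i in range(NUM_AGENTS)}
    price = 0.3  # initial permit price; rises as the cap is used up

    for _ in range(EPISODE_LEN):
        for i in range(NUM_AGENTS):
            action = random.choice(["clean", "normal", "dirty"])  # stand-in policy
            e = emission_of(action)
            if permits[i] >= e:
                permits[i] -= e           # covered by the agent's own permits
                cost = 0.0
            else:
                shortfall = e - permits[i]
                permits[i] = 0.0
                cost = price * shortfall  # buy extra permits at market price
            rewards[i] += base_reward_of(action) - cost
        # Crude market clearing: price rises as the remaining cap shrinks.
        remaining = sum(permits.values())
        price = 0.3 * (1.0 + (PERMIT_CAP - remaining) / PERMIT_CAP)
    return rewards

if __name__ == "__main__":
    print(run_episode())
```

In the paper's setting, the random stand-in policy above would be replaced by learning, utility-maximising agents with possibly different learning strategies and planning horizons, acting in a history-based general reinforcement learning environment; see the paper for the actual formulation.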

The paper can be accessed here: https://arxiv.org/abs/2412.02091

