About the Tutorial
Many real-world applications of multiagent systems (MAS) are open agent systems (OASYS), where the sets of agents, tasks, and capabilities can change dynamically over time. Often, these changes are unpredictable and unknown in advance to the decision-making agents that utilize their capabilities to accomplish tasks. In contrast, most methods for autonomous decision making (whether planning, reinforcement learning, or game theory) assume that the sets of agents, tasks, and capabilities remain static throughout the lifetime of the system. Mismatches between the assumptions embedded in the agents' reasoning and environment models and the true underlying dynamics of the environment risk critical failures when agents are deployed to real-world applications. In this tutorial, we will (1) introduce OASYS as a challenging complexity of decision making in multiagent systems, illustrating different sources of openness in several real-world applications of MAS, (2) summarize state-of-the-art solutions for decision making in OASYS within both the multiagent planning and multiagent reinforcement learning paradigms, and (3) highlight several promising avenues of future research that would enhance the ability of agents to reason within OASYS.
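To make the contrast concrete, the sketch below uses a standard Markov (stochastic) game formalism; this is a common textbook formulation for illustration, not the specific model covered in the tutorial. One minimal way to express agent openness is to let the active agent set vary with time (task and capability openness can be sketched analogously):

```latex
% A closed Markov game fixes the agent set N for the whole episode:
%   \mathcal{G} = \langle N, S, \{A_i\}_{i \in N}, T, \{R_i\}_{i \in N} \rangle,
% where T(s' \mid s, \vec{a}) is the joint transition function.
%
% Agent openness can be sketched by drawing the active agents at time t
% from a larger universe \mathcal{N}:
%   N_t \subseteq \mathcal{N}, \qquad t = 0, 1, 2, \ldots
% so the joint action space A_t = \prod_{i \in N_t} A_i itself changes as
% agents enter and leave, and policies must condition on (or predict) N_t.
\[
  \mathcal{G}_{\text{open}}
  = \big\langle \mathcal{N},\ \{N_t\}_{t \ge 0},\ S,\
    \{A_i\}_{i \in \mathcal{N}},\ T,\ \{R_i\}_{i \in \mathcal{N}} \big\rangle
\]
```

Under this reading, the "static" assumption criticized above corresponds to fixing $N_t = N$ for all $t$.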
Learning Outcomes
Participants will enhance their understanding of (1) the challenges of multiagent decision-making in open agent systems caused by the sets of agents, tasks, and/or agent capabilities changing over time in dynamic and unpredictable ways, (2) state-of-the-art planning and reinforcement learning solutions for autonomous reasoning in open agent systems, including a comparison of their relative strengths and weaknesses in different scenarios, and (3) active areas of current and future research for improving reasoning in such challenging multiagent environments.
Target Audience
We expect two target audiences for this tutorial. The first is researchers studying multiagent decision making (e.g., planning, reinforcement learning, and game theory practitioners) who are interested in exploring additional challenging environment complexities created by real-world applications of multiagent reasoning. The second is AAMAS attendees who work in applied multiagent systems, developing solutions in the types of applications that involve OASYS, and who are seeking decision-making solutions for such problems. In this way, we hope to provide useful background information for both audiences, as well as bring together researchers across the decision-making and application spaces of AAMAS for cross-pollination of ideas and the development of new collaborations.
Tutorial Outline
- Topic 1: Introduction to Open Agent Systems
  - Types and Sources of Openness
  - Challenges Caused by Openness
  - Example Applications
- Topic 2: Multiagent Reasoning in OASYS
  - Origins of OASYS
  - Background: (Multiagent) Markov Decision Processes
  - Planning and Reinforcement Learning Solutions
- Topic 3: Emerging and Future Research in OASYS
  - Addressing Scalability to Many-Agent Systems
  - Focusing on Task and Type Openness
  - Human-Agent Interactions
- Topic 4: Community Discussion and MOASEI Competition
Presenters
- Adam Eck, Oberlin College
- Prashant Doshi, University of Georgia
- Leen-Kiat Soh, University of Nebraska
Tutorial Slides
Our slides are available here (current version 0.1, posted 03/28/2025).
Contact
For any queries or further information, please reach out to us at:
Email: aeck@oberlin.edu
Acknowledgements
This research was supported by collaborative NSF Grants #IIS-2312657 (to P.D.), #IIS-2312658 (to L.K.S.), and #IIS-2312659 (to A.E.). Additionally, this work was completed utilizing the Holland Computing Center of the University of Nebraska, which receives support from the UNL Office of Research and Economic Development and the Nebraska Research Initiative. Finally, we thank the graduate and undergraduate students who have contributed to the development of OASYS: Muthukumaran Chandrasekaran, Maulik Shah, Anirudh Kakarlapudi, Keyang He, Gayathri Anil, Bala Duggirala, Ceferino Patino, Alireza Saleh Abadi, Tyler Billings, Daniel Firebanks-Quevedo, Quinn Barker-Plummer, Tumas Rackaitis, Gaurab Pokharel, Ran Liu, Tianxing Zhu, Zejian Huang, Dung Do, Kenean Yemane Kejela, Quan Nguyen, Ryn Lazorchak, Menard Simoya, and M. Daud Zarif.