Parameter Tuning Method for Multi-agent Simulation using Reinforcement Learning

Masanori HIRANO, Kiyoshi IZUMI

The 9th International Conference on Behavioral and Social Computing, pp. 1-7, Oct. 29, 2022


Conference

The 9th International Conference on Behavioral and Social Computing (BESC 2022)

Abstract

This study proposes a reinforcement-learning-based method for efficient parameter tuning in multi-agent simulations (MAS). MAS typically incur a high computational burden because of the many interacting agents they contain; efficient parameter tuning is therefore important. Our proposed method is based on the deep deterministic policy gradient (DDPG) and adds three components introduced in this study: an action converter, a redundant full neural network actor, and a seed fixer. As an experiment, we employed a parameter-tuning task in an artificial financial market simulation, with a Bayesian-estimation-based method as the baseline. The results show that our model tends to outperform the baseline in tuning performance and indicate that the three additional components are essential. Moreover, the critic of the DDPG effectively functions as a surrogate model, that is, an approximate function of the simulation, which allows the actor to tune the parameters appropriately.
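
As a rough illustration of how the three components could fit together with DDPG for simulation parameter tuning, the following is a minimal, hypothetical Python sketch. It is not the authors' implementation: the toy simulator, the ActionConverter class, the network sizes, and all hyperparameters are assumptions made for illustration, and standard DDPG machinery such as replay buffers, target networks, and exploration noise is omitted.

# A minimal, hypothetical sketch of the ideas in the abstract (not the
# authors' code). The toy simulator, ActionConverter, and all
# hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def toy_simulation(params: np.ndarray, seed: int) -> float:
    # Stand-in for an expensive multi-agent simulation; returns a scalar
    # fitting error to be minimized. Fixing the seed ("seed fixer") makes
    # each evaluation reproducible, so the critic sees a stable target.
    rng = np.random.default_rng(seed)
    target = np.array([0.3, -0.5, 0.8])
    return float(np.sum((params - target) ** 2) + rng.normal(scale=0.01))

class ActionConverter:
    # Maps the actor's squashed output into valid parameter ranges.
    def __init__(self, low, high):
        self.low, self.high = np.asarray(low), np.asarray(high)
    def __call__(self, raw: np.ndarray) -> np.ndarray:
        squashed = np.tanh(raw)  # values in (-1, 1)
        return self.low + (squashed + 1.0) * 0.5 * (self.high - self.low)

# "Redundant full neural network" actor: a fixed dummy input is mapped to
# the parameter vector, so the network weights effectively store the
# current parameter estimate. The critic learns params -> error.
actor = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))
critic = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
convert = ActionConverter(low=[-1.0, -1.0, -1.0], high=[1.0, 1.0, 1.0])
dummy_state = torch.ones(1, 8)
SEED = 42  # seed fixer: one fixed seed for every simulation run

for step in range(200):
    # The actor proposes parameters; the converter maps them into range.
    raw = actor(dummy_state)
    params = convert(raw.detach().numpy().ravel())
    sim_error = toy_simulation(params, SEED)  # one simulation run

    # Critic update: fit a surrogate of the simulation's error surface.
    pred = critic(torch.tensor(params, dtype=torch.float32).unsqueeze(0))
    critic_loss = (pred.squeeze() - torch.tensor(sim_error)) ** 2
    opt_c.zero_grad()
    critic_loss.backward()
    opt_c.step()

    # Actor update: descend the surrogate (DDPG-style deterministic
    # policy gradient). torch.tanh matches the converter here because
    # the parameter range is the symmetric interval [-1, 1].
    actor_loss = critic(torch.tanh(actor(dummy_state))).mean()
    opt_a.zero_grad()
    actor_loss.backward()
    opt_a.step()

print("tuned parameters:", convert(actor(dummy_state).detach().numpy().ravel()))

In this reading, the critic plays the surrogate-model role the abstract describes: it approximates the simulation's error as a function of the parameters, and the actor is tuned by gradient descent through that approximation rather than through the simulation itself.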

Keywords

multi-agent simulation; parameter tuning; reinforcement learning; deep deterministic policy gradient (DDPG); artificial financial market

doi

10.1109/BESC57393.2022.9995509


bibtex

@inproceedings{Hirano2022-besc,
  title={{Parameter Tuning Method for Multi-agent Simulation using Reinforcement Learning}},
  author={Masanori HIRANO and Kiyoshi IZUMI},
  booktitle={The 9th International Conference on Behavioral and Social Computing},
  pages={1--7},
  doi={10.1109/BESC57393.2022.9995509},
  year={2022}
}