Understanding Human-AI Cooperation Through Game-Theory and Reinforcement Learning Models

This paper describes an empirical study of how modern reinforcement learning algorithms and game-theoretic scenarios shape cooperation levels in human-machine teams. Three reinforcement learning algorithms (Vanilla Policy Gradient, Proximal Policy Optimization, and Deep Q-Network) and two game theory scenarios (Hawk-Dove and Prisoner's Dilemma) were examined in a large-scale experiment. The results indicated that the reinforcement learning models interacted differently with humans, with the Deep Q-Network engendering higher levels of cooperation. The Hawk-Dove scenario elicited significantly higher cooperation in the human-artificial intelligence system than the Prisoner's Dilemma. A multiple regression using these two independent variables also significantly predicted cooperation in the human-artificial intelligence systems. The results highlight the importance of social and task framing in human-artificial intelligence systems and of the choice of reinforcement learning model.
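
To make the contrast between the two scenarios concrete, the sketch below encodes the canonical two-player payoff matrices for the Prisoner's Dilemma and Hawk-Dove games. The specific payoff values (and the `V`, `C` parameters for Hawk-Dove) are illustrative placeholders, not values reported in the study; the abstract does not specify the matrices used.

```python
# Illustrative payoff structures for the two game scenarios, using the
# canonical formulations. Values are placeholders, not the study's payoffs.
import numpy as np

# Row player's payoffs; rows are my action, columns are the co-player's action.
# Actions: 0 = cooperate (Dove / stay silent), 1 = defect (Hawk / betray).
PRISONERS_DILEMMA = np.array([
    [3.0, 0.0],   # cooperate vs. (cooperate, defect): mutual reward, sucker's payoff
    [5.0, 1.0],   # defect vs. (cooperate, defect): temptation, mutual punishment
])

# Hawk-Dove with resource value V and fight cost C, where C > V, so mutual
# escalation is costly and partial cooperation can pay off.
V, C = 4.0, 6.0
HAWK_DOVE = np.array([
    [V / 2, 0.0],          # Dove vs. (Dove, Hawk): share resource, concede
    [V, (V - C) / 2],      # Hawk vs. (Dove, Hawk): take everything, costly fight
])

def payoff(game: np.ndarray, my_action: int, their_action: int) -> float:
    """Return the row player's payoff for one round of the chosen game."""
    return float(game[my_action, their_action])

if __name__ == "__main__":
    print("Prisoner's Dilemma, mutual cooperation:", payoff(PRISONERS_DILEMMA, 0, 0))
    print("Hawk-Dove, mutual escalation:", payoff(HAWK_DOVE, 1, 1))
```

Under these standard formulations, mutual defection is the unique equilibrium of the Prisoner's Dilemma, whereas Hawk-Dove penalizes mutual escalation, which is one common explanation for why a Hawk-Dove framing can support higher observed cooperation.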
