Adaptive Autonomy as a Means for Implementing Shared Ethics in Human-AI Teams

This paper proposes a two-part model for implementing a dynamic ethical code for AI teammates in human-AI teams. In the first part of the model, the ethical code, together with the agent's team role, informs an adaptive AI agent of when and how to adapt its level of autonomy. In the second part, that ethical code is continually updated based on the AI agent's iterative observations of team interactions. This model makes multiple contributions to human-centered computing: a shared ethical code is a form of team cognition, and teams with higher levels of team cognition exhibit higher performance and longevity. More importantly, the model supports more ethical use of AI teammates in a way that applies across a variety of human-AI teaming contexts and leaves room for future innovation.
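
As an illustrative sketch only (not code from the paper), the Python below shows one way the two parts of the model might fit together: an agent whose autonomy level is set from an ethical code plus its team role, and an ethical code updated from observed team interactions. The class names, value weights, roles, and update rule are all hypothetical assumptions introduced for illustration.

from dataclasses import dataclass, field


@dataclass
class EthicalCode:
    # Hypothetical representation of the shared ethical code: weights over
    # team values, e.g. how strongly the team currently prioritizes human
    # oversight versus agent initiative.
    value_weights: dict[str, float] = field(
        default_factory=lambda: {"human_oversight": 0.7, "agent_initiative": 0.3}
    )

    def update(self, interaction: dict[str, float], rate: float = 0.1) -> None:
        # Part two of the model: iteratively nudge the code toward the
        # values the team expresses in its observed interactions.
        for value, observed in interaction.items():
            if value in self.value_weights:
                current = self.value_weights[value]
                self.value_weights[value] = (1 - rate) * current + rate * observed


@dataclass
class AdaptiveAgent:
    role: str
    code: EthicalCode
    autonomy: float = 0.5  # 0 = fully supervised, 1 = fully autonomous

    def adapt_autonomy(self) -> None:
        # Part one of the model: the ethical code, together with the agent's
        # team role, informs when and how its autonomy level shifts.
        target = self.code.value_weights["agent_initiative"]
        if self.role == "support":
            target = min(target, 0.5)  # supporting roles defer more to humans
        self.autonomy = target


# One iteration of the loop: observe the team, update the code, re-adapt.
agent = AdaptiveAgent(role="support", code=EthicalCode())
observed_interaction = {"human_oversight": 0.6, "agent_initiative": 0.4}
agent.code.update(observed_interaction)
agent.adapt_autonomy()
print(f"autonomy level: {agent.autonomy:.2f}")

Running the loop repeatedly would let the agent's autonomy track the team's evolving priorities, which is the dynamic behavior the abstract describes.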
