Addressing the Spread of Trust and Distrust in Distributed Human-AI Teaming Constellations

As autonomous systems mature, industry has begun adopting these technologies in applied settings. Many of these applications involve the creation of human-artificial intelligence (AI) teams, which promise to amplify the known advantages of working in team environments. The efficacy of the AI agents that make up these teams has long been a significant research focus; however, emphasis has recently shifted from technical ability to social ability. This newfound emphasis on trust within human-AI teams has prompted research on supporting trust between humans and AI teammates, on how AI affects trust between the humans within the team, and on how team composition (majority AI versus majority human) influences trust development. Even the efficacy of trust repair strategies, adapted from human-automation interaction, is being explored in human-AI teaming. In the current paper, we examine an essential component of trust that has yet to receive requisite attention: the spread of trust within and across constellations of teams. To this end, we discuss the potential impacts of trust within and across human-AI teams and constellations on team efficacy. From this discussion, we derive several challenges, framed as five major research questions, that should be addressed to enable more effective human-AI teams and human-AI teaming constellations.
