Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust Repair, and Performance in Human-AI Teaming

This manuscript describes an experiment conducted to determine the efficacy of two trust repair strategies (apology and denial) following trust violations of an ethical nature by an autonomous teammate. Specifically, forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate performed an ethical or unethical action during each mission, followed by an apology or a denial. Measures were taken of trust in the team as a whole, trust in the autonomous teammate, trust in the human teammate, perceived autonomous teammate ethicality, and team performance. The results indicated that teams with unethical autonomous teammates reported significantly lower trust in the team and in the autonomous teammate, and perceived those teammates as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation. Autonomous teammate ethicality was not related to team score, although teams with unethical autonomous teammates did have shorter mission completion times. As such, it appears that ethical violations significantly harm trust in the overall team and in the autonomous teammate but do not negatively impact team score. However, current trust repair strategies such as apologies and denials appear ineffective in restoring trust after this type of violation.
