Toward a Unified Theory of Learned Trust in Interpersonal and Human-Machine Interactions

Document Type

Article

Publication Date

12-1-2019

Abstract

A proposal for a unified theory of learned trust, implemented in a cognitive architecture, is presented. The theory is instantiated as a computational cognitive model of learned trust that integrates several seemingly unrelated categories of findings from the literature on interpersonal and human-machine interactions and makes counterintuitive predictions for future studies. The model relies on a combination of learning mechanisms to explain a variety of phenomena, such as trust asymmetry, the greater impact of early trust breaches, the black-hat/white-hat effect, the correlation between trust and cognitive ability, and the higher resilience of interpersonal trust compared to human-machine trust. In addition, the model predicts that trust decays in the absence of evidence of trustworthiness or untrustworthiness. The implications of the model for advancing trust theory are discussed. Specifically, this work suggests two additional trust antecedents on the trustor's side: perceived trust necessity and the cognitive ability to detect cues of trustworthiness.
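The sketch below is an illustrative toy model only, not the authors' cognitive-architecture implementation: it shows, under simple assumptions, how asymmetric learning rates, experience-weighted updates, and decay toward a neutral baseline could jointly produce trust asymmetry, the outsized impact of early breaches, and trust decay in the absence of evidence. All names and parameter values (ToyTrust, gain_rate, loss_rate, decay) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ToyTrust:
    """Hypothetical single-value trust learner (illustration only)."""
    trust: float = 0.5        # current trust estimate in [0, 1]
    baseline: float = 0.5     # neutral prior that trust decays toward
    gain_rate: float = 0.10   # learning rate for trustworthy evidence
    loss_rate: float = 0.30   # larger rate for breaches -> trust asymmetry (assumed)
    decay: float = 0.02       # drift toward baseline per step without evidence
    n_obs: int = 0            # number of interaction outcomes observed

    def observe(self, trustworthy: bool) -> float:
        """Update trust from one interaction outcome."""
        self.n_obs += 1
        # Effective rate shrinks with experience, so early evidence dominates.
        base_rate = self.gain_rate if trustworthy else self.loss_rate
        rate = base_rate / self.n_obs ** 0.5
        target = 1.0 if trustworthy else 0.0
        self.trust += rate * (target - self.trust)
        return self.trust

    def idle(self) -> float:
        """No evidence this step: trust drifts back toward the baseline."""
        self.trust += self.decay * (self.baseline - self.trust)
        return self.trust


if __name__ == "__main__":
    t = ToyTrust()
    print(round(t.observe(trustworthy=False), 3))  # early breach: large drop
    print(round(t.observe(trustworthy=True), 3))   # recovery is slower than the drop
    print(round(t.idle(), 3))                      # no evidence: drift toward 0.5
```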

DOI

10.1145/3230735
