Philip Bobko (Committee Member), Jennie J. Gallimore (Advisor), Subhashini Ganapathy (Committee Member), Michael E. Miller (Committee Member), Pratik J. Parikh (Committee Member)
Doctor of Philosophy (PhD)
This study investigated the extent to which the well-known precursors of interpersonal trust (ability, benevolence, integrity, or ABI) could be exploited, redefined, or added to when considering and developing models of trust between humans and technology. The ABI model explains only about half of the variation in interpersonal trust (Colquitt, Scott, & LePine, 2007), so two additional precursors to trust from the interpersonal and automation trust domains, transparency and humanness, were identified and studied. The experimental task involved users interacting with an automated aid (an image processing and recommender system) through a simulated unmanned ground vehicle (UGV) interface to identify suspected insurgents in a typical Middle-Eastern urban environment. Aid reliability dropped during the middle third of the task, due in part to environmental disturbances affecting the aid's image processing performance. Aid transparency was manipulated by exposing users to analytic processing states, and aid humanness was manipulated through a human voice with high-affect messages versus a machine voice with low-affect messages. Results indicated transparency produced inconsistent effects on trust (assessed through subjective ratings) and reliance behavior (defined as participants changing their initial response in favor of the aid's recommendation). This may have occurred because participants interpreted transparency in a broader context which included intent (Lyons & Havig, 2014), rather than in the narrower, operationalized context of algorithm understanding; participants may also have had preconceived notions of transparency which differed from the experimental manipulation. Humanness, which may have signaled intent, generally improved trust and reliance. This research also examined whether participants applied perceptions of ABI to the interaction with the technology.
Perceived ability and perceived benevolence/integrity were found to be explanatory links in the relationship between humanness and trust, suggesting ability and benevolence/integrity (1) were perceived characteristics in the automation design, and (2) influenced trust. The proposed factors, transparency and humanness, extend the number of precursors to trust in an automated context and manifest primarily as perceived attributes. Finally, trust and reliance were differentially sensitive to a drop, and subsequent recovery, in aid reliability: trust varied with aid reliability, whereas reliance failed to recover.
Department or Program
Ph.D. in Engineering
Year Degree Awarded
Copyright 2016, all rights reserved. My ETD will be available under the "Fair Use" terms of copyright law.