Intelligent and complex systems are becoming common in our workplaces and homes, providing direct assistance in the transport, health and education domains. In many instances these systems are somewhat ubiquitous and influence the manner in which we make decisions. Traditionally we understand the benefits of how humans work within teams, and the associated pitfalls and costs when a team fails to work. However, we can view the autonomous agent as a synthetic partner emerging in roles that have traditionally been the bastion of the human alone. Within these new Human-Autonomy Teams (HATs) we can witness different levels of automation and decision support held within a clear hierarchy of tasks and goals. However, when we start examining the nature of more autonomous systems and software agents, we see a partnership that can suggest different constructs of authority depending on the context of the task: either the human or the agent may lead the team in order to achieve a goal. This paper examines the nature of HAT composition, its application in aviation, and how trust in such systems can be assessed.