For decades, spatial auditory displays have been considered a promising technology for mitigating pilot disorientation and loss of situational awareness (SA). Inherently heads-up, these displays can provide time-critical spatial information to pilots about navigational targets, air and runway traffic, wingman location, and even the attitude of one's own aircraft without placing additional demands on the already over-tasked visual system. Unfortunately, currently-fielded auditory displays often suffer from poor spatial fidelity, particularly in elevation, because they use a one-size-fits-all (i.e., non-personalized) head-related transfer function (HRTF), the set of filters responsible for creating the spatial impression. The current study investigated the utility of combining a spatial cue (non-personalized HRTF) with one of two auditory symbologies: one providing both object and location information, the other providing location information only. In the first condition, ecologically valid sounds were paired with a particular class of visual object, and spatial cues indicated a plausible target elevation (e.g., a squeak indicated that the target was a rat on the floor). In the second condition, the cue was a broadband sound whose repetition rate indicated target elevation (i.e., the cue provided only location information, not object information). Results indicate that target acquisition times were lower when meaningful (i.e., ecologically valid) cues were added to non-personalized spatial cues than when the source-based cues provided no information about the target source. These results suggest that careful construction of auditory symbology could improve the performance of cockpit-based spatial auditory displays when personalized, high-fidelity spatial processing is not practical.