The acts of observation and perception provide the building blocks for all human knowledge (Locke, 1690); they are the processes from which all ideas are born, and the sole bond connecting us to the world around us. Now, with the advent of sensor networks capable of observation, this world may be directly accessible to machines. Missing from this vision, however, is the ability of machines to glean semantics from observation: to apprehend entities from detected qualities; in short, to perceive. The systematic automation of this ability is the focus of machine perception: the capacity of computing machines to sense and interpret the contents of their environment. Despite early successes within narrow domains that analyze data of a single modality (e.g., facial recognition), a general solution to machine perception remains elusive. This state of affairs reflects difficult research challenges, such as modeling the process of perception so that the growing stream of multimodal (and often incomplete) sensor data can be interpreted efficiently and effectively. People, on the other hand, have evolved sophisticated mechanisms to perceive their environment efficiently, including the use of background knowledge to determine which aspects of the environment deserve focused attention. Over the years, many cognitive theories of perception have been proposed, evaluated, revised, and refined within an impressive body of research. These theories present a valuable stepping-stone toward the goal of machine perception: embodying this uniquely human ability within a computational system. This talk describes the information processes involved in perception, which serve as an ontological account of knowledge production.
The ontology of perception, IntellegO (Latin: “to perceive”), derived from cognitive theories of perception, provides a formal semantics of perception by defining the information processes that convert low-level observational data into high-level abstractions. IntellegO is currently being applied within several domain applications, including a weather-alert service, a fire-detecting robot, and an mHealth application that helps lower hospital readmission rates for patients with chronic heart disease. Through these examples, we will demonstrate how massive amounts of multimodal sensory data are converted into contextual knowledge for improved situational awareness.
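The conversion of low-level observations into high-level abstractions can be illustrated with a small sketch. The knowledge base, function names, and weather entities below are illustrative assumptions, not the actual IntellegO formalization; the sketch only shows the general shape of a perception cycle in which background knowledge first explains observed properties and then selects further properties worth observing.

```python
# Hypothetical sketch of a perception cycle over background knowledge.
# KB maps candidate entities (explanations) to the observable properties
# they manifest; all names here are invented for illustration.
KB = {
    "thunderstorm": {"lightning", "heavy_rain", "wind"},
    "blizzard": {"snow", "wind", "low_temperature"},
    "fire": {"smoke", "high_temperature"},
}

def explain(observed):
    """Entities whose known properties account for every observed property."""
    return {entity for entity, props in KB.items() if observed <= props}

def discriminate(candidates):
    """Properties that, if observed next, would narrow the candidate set
    (i.e., properties not shared by all remaining candidates)."""
    all_props = set().union(*(KB[e] for e in candidates))
    return {p for p in all_props
            if sum(p in KB[e] for e in candidates) < len(candidates)}

observed = {"wind"}
candidates = explain(observed)        # both storm types explain wind
next_to_observe = discriminate(candidates)
```

Here a single low-level observation ("wind") is abstracted to a set of candidate entities, and discrimination focuses attention on the observations that distinguish them, mirroring the knowledge-driven attention described above.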
Henson, C. A., & Sheth, A. P. (2012). Semantics of Perception: Towards a Semantic Web Approach to Machine Perception.