Document Type

Article

Publication Date

12-2007

Abstract

Millions of sensors around the globe currently collect avalanches of data about our world. The rapid development and deployment of sensor technology is intensifying the existing problem of too much data and not enough knowledge. With a view to alleviating this glut, we propose that sensor data, especially video sensor data, can be annotated with semantic metadata to provide contextual information about videos on the Web. In particular, we present an approach to annotating video sensor data with spatial, temporal, and thematic semantic metadata. This technique builds on current standardization efforts within the W3C and Open Geospatial Consortium (OGC) and extends them with Semantic Web technologies to provide enhanced descriptions and access to video sensor data.
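As a rough illustration only: the sketch below shows what a spatial, temporal, and thematic annotation of a video clip might look like as RDF, built here with Python's rdflib. The namespace, property names, and values are hypothetical placeholders for this example and do not reflect the specific ontologies, W3C/OGC encodings, or API used in the paper.

    # Hypothetical sketch: annotating a video clip with spatial, temporal,
    # and thematic metadata as RDF using rdflib. All names below are
    # illustrative placeholders, not the paper's actual vocabulary.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/video-annotation#")  # placeholder vocabulary

    g = Graph()
    g.bind("ex", EX)

    clip = EX.clip42  # hypothetical video resource
    g.add((clip, RDF.type, EX.VideoSensorObservation))

    # Spatial context: where the sensor recorded the clip (placeholder WGS84 coordinates)
    g.add((clip, EX.latitude, Literal("37.3382", datatype=XSD.decimal)))
    g.add((clip, EX.longitude, Literal("-121.8863", datatype=XSD.decimal)))

    # Temporal context: when the observation was made
    g.add((clip, EX.observationTime,
           Literal("2007-12-12T10:00:00", datatype=XSD.dateTime)))

    # Thematic context: what the clip is about
    g.add((clip, EX.theme, Literal("traffic congestion")))

    # Serialize the annotation graph as Turtle for publication on the Web
    print(g.serialize(format="turtle"))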

Comments

Presented at the W3C Video on the Web Workshop, San Jose, CA, December 12-13, 2007.

Additional Files

W3C_Video_on_the_Semantic_Sensor_Web.pdf (1160 kB)
Presentation
