Document Type
Article
Publication Date
12-2007
Abstract
Millions of sensors around the globe currently collect avalanches of data about our world. The rapid development and deployment of sensor technology are intensifying the existing problem of too much data and not enough knowledge. With a view to alleviating this glut, we propose that sensor data, especially video sensor data, can be annotated with semantic metadata to provide contextual information about videos on the Web. In particular, we present an approach to annotating video sensor data with spatial, temporal, and thematic semantic metadata. This technique builds on current standardization efforts within the W3C and the Open Geospatial Consortium (OGC) and extends them with Semantic Web technologies to provide enhanced descriptions of, and access to, video sensor data.
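The abstract describes annotating video sensor data with spatial, temporal, and thematic semantic metadata, but this record contains no code or ontology details. As a rough illustration only, the Python sketch below uses rdflib to attach such metadata to a single video observation; the ex: namespace, the property names (latitude, startTime, theme), and the example URIs are all hypothetical placeholders, not the authors' actual vocabulary.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/ssw#")  # hypothetical vocabulary, for illustration only

g = Graph()
g.bind("ex", EX)

video = URIRef("http://example.org/videos/camera-42/clip-001")  # hypothetical video resource
obs = URIRef("http://example.org/observations/1")

g.add((obs, RDF.type, EX.VideoObservation))
g.add((obs, EX.capturedBy, video))

# Spatial metadata: where the footage was captured (WGS84 coordinates).
g.add((obs, EX.latitude, Literal("39.7589", datatype=XSD.decimal)))
g.add((obs, EX.longitude, Literal("-84.1916", datatype=XSD.decimal)))

# Temporal metadata: when the footage was captured.
g.add((obs, EX.startTime, Literal("2007-12-12T09:30:00Z", datatype=XSD.dateTime)))

# Thematic metadata: what the footage depicts.
g.add((obs, EX.theme, Literal("traffic congestion")))

print(g.serialize(format="turtle"))

Serializing the graph as Turtle (or embedding equivalent RDF in sensor descriptions) is one way such spatial, temporal, and thematic annotations could then be queried or linked to other resources on the Web.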
Repository Citation
Henson, C. A., Sheth, A. P., Jain, P., Pschorr, J., & Rapoch, T. (2007). Video on the Semantic Sensor Web. W3C Video on the Web Workshop, San Jose, CA.
https://corescholar.libraries.wright.edu/knoesis/212
Included in
Bioinformatics Commons, Communication Technology and New Media Commons, Databases and Information Systems Commons, OS and Networks Commons, Science and Technology Studies Commons
Comments
Presented at the W3C Video on the Web Workshop, San Jose, CA, December 12-13, 2007.