Bridging the Gap between Atomic and Complex Activities in First Person Video
Document Type
Article
Publication Date
8-5-2021
Identifier/URL
136361436 (ORCID)
Abstract
In this work, we describe a system for classifying activities in first-person video using a fuzzy inference system. Our fuzzy inference system is built on top of traditional object- and motion-based video features and describes activities in terms of multiple fuzzy output variables. We demonstrate the fuzzy system on a well-known dataset of unscripted first-person videos, classifying actions into four categories. Comparing the results to other supervised learning techniques and the state-of-the-art, we find that our fuzzy system outperforms the alternatives. Further, the fuzzy outputs have the potential to be far more descriptive than conventional classifier outputs, owing to their ability to handle uncertainty and produce explainable results.
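To illustrate the kind of rule-based classification the abstract describes, here is a minimal sketch of a Mamdani-style fuzzy inference classifier. The two inputs (hypothetical "object" and "motion" feature scores in [0, 1]), the membership functions, the rules, and the four category names are all illustrative assumptions, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def memberships(score):
    """Fuzzify a normalized feature score into 'low' and 'high' degrees."""
    return {
        "low": tri(score, -0.5, 0.0, 0.7),
        "high": tri(score, 0.3, 1.0, 1.5),
    }

def classify(object_score, motion_score):
    """Return a crisp label plus the fuzzy degree of each (hypothetical) category."""
    obj = memberships(object_score)
    mot = memberships(motion_score)
    # One rule per category; firing strength = min of antecedents (fuzzy AND).
    degrees = {
        "static":     min(obj["low"],  mot["low"]),
        "manipulate": min(obj["high"], mot["low"]),
        "locomotion": min(obj["low"],  mot["high"]),
        "interact":   min(obj["high"], mot["high"]),
    }
    # The per-category degrees are the descriptive fuzzy output;
    # the argmax gives a crisp class label for comparison with classifiers.
    return max(degrees, key=degrees.get), degrees
```

The per-category degrees, rather than a single hard label, are what makes such a system more descriptive and explainable: each rule's firing strength can be traced back to the input memberships.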
Repository Citation
Schneider, B., & Banerjee, T. (2021). Bridging the Gap between Atomic and Complex Activities in First Person Video. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
https://corescholar.libraries.wright.edu/cse/632
DOI
10.1109/FUZZ45933.2021.9494553