Management and Processing of Continuous Streams from Moving Sensors

IIS-0307908


Principal Investigator

Shahram Ghandeharizadeh
University of Southern California

Computer Science Department SAL 300

Los Angeles, CA 90089-0781

Tel: (213) 740-4781

Fax: (213) 740-7285

Email: shahram@usc.edu

URL: http://dblab.usc.edu/


Co-PI

Cyrus Shahabi
University of Southern California

Computer Science Department SAL 300

Los Angeles, CA 90089-0781

Tel: (213) 740-8162

Fax: (213) 740-5807

Email: shahabi@usc.edu

URL: http://infolab.usc.edu/

 

Keywords

Stream processing
Moving sensors
Physical device independence
Spatio-temporal pattern detection

Project Summary

Moving sensors refers to an emerging class of data-intensive applications that impact disciplines such as communication, health care, and the sciences. These applications consist of a fixed number of sensors that move and produce streams of data as a function of time. In communication, for example, a hearing-impaired individual might use a haptic glove that translates hand signs into written (or spoken) words. The glove contains a sensor for each finger joint, and each sensor reports its location as a function of time, producing a stream of data. We propose to investigate a framework that captures both the semantics and the essence of an activity from these streams.
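To make the shape of this data concrete, the following is a minimal sketch (in Python) of what a single reading from such a sensor might look like. The field names and types are illustrative assumptions, not the project's actual schema.

    # Illustrative only: one reading from a moving sensor, e.g., a finger-joint
    # sensor on a haptic glove. Field names are hypothetical.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class SensorReading:
        sensor_id: int                        # which joint sensor produced the reading
        timestamp: float                      # seconds since the start of the capture
        position: Tuple[float, float, float]  # (x, y, z) location reported at that time

    # A stream is an ordered sequence of readings; a glove produces one such
    # stream per joint sensor, all evolving in parallel as a function of time.
    Stream = List[SensorReading]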

The proposed undertaking is challenging because the produced data streams are almost always: 1) multidimensional, 2) spatio-temporal, 3) continuous, 4) large in size, and 5) noisy. In this proposal, we outline a multi-layer framework that represents the sensory data at different levels of abstraction. Using this foundation, we propose to investigate two broad research topics. First, at the lowest level of abstraction we plan to investigate techniques for both efficient acquisition of multi-sensor streams and robust transformation of the streams for data-mining purposes. At the higher levels, we propose a variety of tools to abstract the streams into spatio-temporal predicates, templates, and languages. Second, with multi-sensor devices, an activity might be performed in a slightly different manner each time. We intend to develop a methodology that captures the essence of an activity by computing both its invariant and its variation over time. This yields a set of boundaries for the activity, termed its "Envelope of Limits" (EoL). An EoL is a spatio-temporal stream that can filter noise to facilitate recognition.
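To make the EoL idea concrete, the following is a minimal sketch of one possible interpretation, assuming each performance of an activity has already been time-aligned into a fixed-length multidimensional array. The per-time-step min/max bounds, the slack parameter, and the function names are illustrative assumptions rather than the proposal's actual design.

    # Toy interpretation of an "Envelope of Limits": per-time-step lower/upper
    # bounds over several time-aligned performances of one activity, plus a
    # membership test for a new stream. Not the project's actual algorithm.
    from typing import List, Tuple
    import numpy as np

    def envelope_of_limits(trials: List[np.ndarray], slack: float = 0.0) -> Tuple[np.ndarray, np.ndarray]:
        """trials: time-aligned recordings of one activity, each of shape (T, d)."""
        stacked = np.stack(trials)            # shape (n_trials, T, d)
        lower = stacked.min(axis=0) - slack   # lower boundary per time step and dimension
        upper = stacked.max(axis=0) + slack   # upper boundary per time step and dimension
        return lower, upper

    def within_envelope(stream: np.ndarray, lower: np.ndarray, upper: np.ndarray) -> bool:
        """True if every sample of the stream lies inside the envelope."""
        return bool(np.all((stream >= lower) & (stream <= upper)))

In this toy reading, the gap between the lower and upper boundaries captures the activity's variation, while samples that fall outside the envelope can be treated as noise or as evidence of a different activity.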

To illustrate, consider the following example in the context of a hand-sign recognition application (in particular, American Sign Language, ASL) using a haptic glove. A one-level recognition paradigm would be used as follows. First, it would be trained with a set S of signs. Next, it would be used to recognize a sign s such that s is in S. This paradigm suffers from a key limitation: it can recognize neither a new sign s' not in S nor a known sign made with a different haptic glove. In this proposal we show that our multi-level framework is easily extensible to recognize new signs and modular enough to handle new input devices. Using the concept of EoL, the framework compensates for a user's slight variations when performing a sign. The number of "moving sensor" applications supported by our methodology quantifies its extensibility.
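For concreteness, the following is a minimal sketch of the one-level paradigm described above; the nearest-template distance and the data layout are illustrative assumptions. The point is only that the recognizer's output is confined to the training set S, which is the limitation our multi-level framework addresses.

    # Sketch of a one-level recognition paradigm: the "model" is simply the
    # stored templates for the signs in S, and recognition returns the label
    # of the closest template. A new sign s' not in S is necessarily mapped
    # to some member of S. Details are hypothetical.
    from typing import Dict
    import numpy as np

    def train(examples: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
        """examples: sign label -> one representative stream of shape (T, d)."""
        return dict(examples)  # one-level training: just store the templates

    def recognize(stream: np.ndarray, model: Dict[str, np.ndarray]) -> str:
        """Return the label in S whose template is closest to the input stream."""
        return min(model, key=lambda label: float(np.linalg.norm(stream - model[label])))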

Other metrics used to evaluate our framework are: a) its accuracy in detecting spatio-temporal features, b) its robustness to noise, c) its time and space complexity, and d) its adaptability to other devices. Some of our proposed designs might yield negative results. Hence, the proposal is written to demonstrate the richness of the target application rather than to appear overly ambitious. Even a negative result can be a contribution by revealing subclasses of our target domain that require a different framework.

Publications and Products

The start date of this project is September 15, 2003. This report was written prior to that date, and there are no publications or products to attribute to the project as yet.

Project Impact

The impact of this effort will be multi-faceted. From a database technology perspective, it will introduce designs and implementations of concepts that advance (a) real-time processing of noisy data streams, (b) multidimensional data mining, and (c) spatio-temporal databases in support of moving sensor devices, e.g., haptic devices. From a multimedia perspective, it furthers the field by incorporating a new mode for conveying information: touch and motor skills. (While humans have five senses, most multimedia research efforts focus on our visual and auditory senses.) Finally, from an application perspective, our research results provide enabling technology to ensure extensibility, modularity, and physical data independence from the stream-producing devices. The proposed research also trains graduate students. In addition to publications, we plan to maintain a web site to disseminate our sample databases and software prototypes.

Goals, Objectives and Targeted Activities

During Year 1, our goals are twofold. First, we intend to extend the existing foundation to detect spatial signs with real-time recognition capabilities. Second, we will investigate the detection of temporal predicates. During Year 2, we will investigate the detection of spatio-temporal predicates. In Year 3, we will investigate the Envelope of Limits (EoL) as an application of our framework to detect spatio-temporal patterns.

Area References

J. Eisenstein, S. Ghandeharizadeh, L. Huang, C. Shahabi, G. Shanbhag and R. Zimmermann. Analysis of Clustering Techniques to Detect Hand Signs. In 2001 International Symposium on Intelligent Multimedia, Video, and Speech Processing, Kowloon Shangri-La, Hong Kong, May 2001.

C. Shahabi, L. Kaghazian, S. Mehta, A. Ghoting, G. Shanbhag and M. McLaughlin. Analysis of Haptic Data for Sign Language Recognition. In 9th International Conference on Human-Computer Interaction, New Orleans, August 2001.

J. Eisenstein, S. Ghandeharizadeh, C. Shahabi, G. Shanbhag and R. Zimmermann. Alternative Representations and Abstractions for Moving Sensors Databases. In Proceedings of the Tenth International Conference on Information and Knowledge Management (CIKM), November 2001.

J. Eisenstein, S. Ghandeharizadeh, L. Golubchik, C. Shahabi, D. Yan and R. Zimmermann. Device Independence and Extensibility in Gesture Recognition. In IEEE Virtual Reality Conference (VR), Los Angeles, CA, March 2003.

Potential Related Projects

IIS-0238560: Management of Immersive Sensor Data Streams

Project Websites

http://dblab.usc.edu/MovingSensors is the main website for our project; it will contain the project's software, data, and demonstrations.