CDIT research appears at top Ubiquitous Computing conference

Two research papers from CDIT researchers have been accepted at the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2016), to be held in Heidelberg, Germany, on 12-16 September 2016.

This is a fantastic achievement, as UbiComp is arguably the most prestigious conference in the area of ubiquitous computing. Congratulations!

Paper 1: AudioGest: Enabling Fine-Grained Hand Gesture Detection by Decoding Echo Signals (Wenjie Ruan, Quan Z. Sheng, Lei Yang, Tao Gu, Peipei Xu, and Longfei Shangguan)

Hand gestures are becoming an increasingly popular means of interacting with consumer electronic devices such as mobile phones, tablets, and laptops. In this paper, we present AudioGest, a device-free gesture recognition system that can accurately sense in-air hand movement around a user's device. Compared to the state of the art, AudioGest is superior in that it uses only a single built-in speaker and microphone pair, without any extra hardware, infrastructure support, or training, to achieve fine-grained hand detection. Our system is able to accurately recognize various hand gestures and to estimate the hand's in-air time as well as its average moving speed and waving range. We achieve this by transforming the device into an active sonar system that transmits an inaudible audio signal and decodes the echoes of the hand at its microphone. We address various challenges, including cleaning the noisy reflected sound signal, interpreting the echo spectrogram into hand gestures, decoding the Doppler frequency shifts into the hand's waving speed and range, and remaining robust to environmental motion and signal drift. We implement a proof-of-concept prototype on three different electronic devices and extensively evaluate the system in four real-world scenarios using 3,900 hand gestures collected from five users over more than two weeks. Our results show that AudioGest can detect six hand gestures with an accuracy of up to 96%, and by distinguishing gesture attributes, it can provide up to 162 control commands for various applications.
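The core sensing idea is that a hand moving toward or away from the device Doppler-shifts the echo of the transmitted tone. As a rough illustration only (not the authors' implementation), the sketch below assumes a hypothetical 20 kHz pilot tone and converts a measured frequency shift into hand speed via the two-way sonar relation shift = 2*v*f/c:

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumption)
    TONE_HZ = 20_000.0      # hypothetical inaudible pilot tone frequency

    def dominant_shift(mic_samples, fs, tone_hz=TONE_HZ, band_hz=200.0):
        """Estimate the dominant echo frequency shift relative to the
        transmitted tone from one window of microphone samples."""
        windowed = mic_samples * np.hanning(len(mic_samples))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(mic_samples), 1.0 / fs)
        band = (freqs > tone_hz - band_hz) & (freqs < tone_hz + band_hz)
        peak_freq = freqs[band][np.argmax(spectrum[band])]
        return peak_freq - tone_hz

    def doppler_speed(freq_shift_hz, tone_hz=TONE_HZ, c=SPEED_OF_SOUND):
        """Two-way Doppler: an echo off a hand moving at speed v is
        shifted by roughly 2*v*f/c, so v = shift * c / (2 * f)."""
        return freq_shift_hz * c / (2.0 * tone_hz)

    # Example: a 50 Hz shift around the 20 kHz tone corresponds to a
    # hand moving toward the device at roughly 0.43 m/s.
    print(doppler_speed(50.0))

The paper's actual pipeline additionally denoises the echoes and segments gestures from the spectrogram; this snippet only illustrates the speed-from-shift step.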

Paper 2: Learning from Less for Better: Semi-Supervised Activity Recognition via Shared Structures Discovery (Lina Yao, Feiping Nie, Quan Z. Sheng, Tao Gu, Xue Li, and Sen Wang)

Despite decades of active research into, and development of, human activity recognition, existing techniques still have several limitations: in particular, poor performance due to insufficient ground-truth data, and little support for the intra-class variability of activities (i.e., the same activity may be performed in different ways by different individuals, or even by the same individual in different time frames). To tackle these two issues, in this paper we present a robust activity recognition approach that extracts the intrinsic shared structures of activities to handle intra-class variability, and that is embedded in a semi-supervised learning framework so as to simultaneously exploit correlations learned from both labeled and easily obtained unlabeled data. We apply ℓ2,1-norm minimization to both the loss function and the regularization terms to effectively resist outliers in noisy sensor data and to improve recognition accuracy by discerning the underlying commonalities of activities. Extensive experimental evaluations on four community-contributed public datasets indicate that, with few training samples, our proposed approach outperforms a set of classical supervised learning methods as well as recently proposed semi-supervised approaches.
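For readers unfamiliar with it, the ℓ2,1 norm of a matrix sums the Euclidean norms of its rows, so each sample's residual enters a loss linearly rather than quadratically (limiting an outlier's influence), while as a regularizer it drives whole rows to zero and thereby selects features shared across activities. A minimal sketch with illustrative toy numbers (not taken from the paper):

    import numpy as np

    def l21_norm(M):
        """l2,1 norm of a matrix: the sum of the l2 norms of its rows."""
        return np.linalg.norm(M, axis=1).sum()

    # Toy residual matrix with one outlying sample in the last row.
    R = np.array([[0.1, 0.2],
                  [0.0, 0.1],
                  [5.0, 5.0]])

    print(l21_norm(R))     # ~7.39: the outlier row contributes only its norm
    print((R ** 2).sum())  # 50.06: a squared loss is dominated by the outlier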

 
