

A paper from Tsinghua HCI Group wins CHI 2023 Honorable Mention Award

A paper from Tsinghua HCI Group "Enabling Voice-Accompanying Hand-to-Face Gesture Recognition with Cross-Device Sensing" recently won the CHI 2023 Honorable Mention Award.

Voice interaction is a natural, always-available modality on wearable devices such as headphones and smart watches. However, because modality information is only implicit in speech and natural language understanding remains limited, modality control in voice interaction (such as distinguishing the wake-up state) is still a challenging problem: users have to repeat wake-up words to switch modes or target devices, which adds an extra burden to the interaction.

In this paper, the authors investigated voice-accompanying hand-to-face (VAHF) gestures for voice interaction. They targeted hand-to-face gestures because such gestures relate closely to speech and yield salient acoustic features (e.g., impeding voice propagation). They conducted a user study to explore the design space of VAHF gestures: they first gathered candidate gestures and then analyzed their structure along several dimensions (e.g., contact position and type), yielding a set of 8 VAHF gestures with good usability and minimal mutual confusion.

To recognize VAHF gestures, they proposed a novel cross-device sensing method that fuses heterogeneous data channels (vocal, ultrasound, and IMU) from commodity devices (earbuds, watches, and rings). Their recognition model achieved an accuracy of 97.3% for recognizing 3 gestures and 91.5% for recognizing 8 VAHF gestures, demonstrating the high applicability of the approach.
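To make the fusion idea concrete, below is a minimal, hypothetical sketch of late-fusion gesture classification over heterogeneous channels. It is not the authors' model: the channel feature dimensions, the random-forest classifier, and the synthetic data are all placeholder assumptions, meant only to illustrate concatenating per-channel features from multiple devices before classification.

```python
# A minimal sketch (not the paper's implementation) of late-fusion gesture
# classification, assuming each channel has already been reduced to a
# fixed-length feature vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-channel feature dimensions (placeholders, not from the paper).
CHANNELS = {"vocal": 64, "ultrasound": 32, "imu": 24}

def fuse_features(sample: dict) -> np.ndarray:
    """Concatenate per-channel feature vectors into one fused vector."""
    return np.concatenate([np.asarray(sample[name]) for name in CHANNELS])

# Toy training data: 8 gesture classes, random vectors standing in for real
# vocal/ultrasound/IMU features extracted from earbuds, watch, and ring.
rng = np.random.default_rng(0)
n_samples, n_classes = 400, 8
X = np.stack([
    fuse_features({name: rng.normal(size=dim) for name, dim in CHANNELS.items()})
    for _ in range(n_samples)
])
y = rng.integers(0, n_classes, size=n_samples)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Training accuracy:", clf.score(X, y))
```

In practice, the choice of per-channel features and classifier would matter far more than this toy setup suggests; the sketch only shows the general late-fusion structure of combining multiple sensing channels into a single recognition model.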

Finally, they discussed the interaction design space of VAHF gestures, including more flexible triggering and interrupting of voice interaction, shortcut binding, and directional binding of visual information. They hope this work will promote more intelligent voice interaction that uses gestures and body movements as parallel information channels.

The authors of the paper are Zisu Li, Chen Liang, Yuntao Wang, Yue Qin, Chun Yu, Yukang Yan, Mingming Fan, and Yuanchun Shi. For more details about their work, please visit the lab's webpage (https://pi.cs.tsinghua.edu.cn/).

From the Department of Computer Science and Technology

Editor: Li Han

