ArtInHCI’24-MWHCI

ArtInHCI2024 Session: Multimodal and Wearable Human-Computer Interaction (ArtInHCI’24-MWHCI) is held as part of the 2nd International Conference on Artificial Intelligence and Human-Computer Interaction, taking place October 25-27, 2024, in Kunming.

The scope of this symposium encompasses cutting-edge research and emerging applications in multimodal and wearable human-computer interaction. As hardware technologies continue to evolve, including the miniaturization of wearable sensors such as data gloves, inertial measurement units, and surface electromyography (sEMG) electrodes, the integration between human and machine is reaching unprecedented levels. These hardware advances are in turn driving progress in multimodal sensing technology, which fuses input data from two or more modalities to provide richer information for human-computer interaction. The motivation behind this symposium lies in the pivotal role that multimodal and wearable human-computer interaction plays in fostering the next-generation natural user interface (NUI). This symposium seeks to facilitate discussion of the latest research findings, innovative methodologies, and practical applications in this rapidly evolving field. By bringing together experts and practitioners from academia and industry, we aim to foster collaboration, inspire new ideas, and propel advancements in multimodal and wearable human-computer interaction.


Potential topics include but are not limited to the following:
  • The applications of multimodal sensing technology in human-computer interaction
  • Wearable human-computer interaction techniques and methods
  • Multimodal data fusion and processing, emotion recognition and affective computing based on multimodal inputs
  • Interface design for multimodal human-computer interaction
  • Intelligent assistance and augmentation technologies based on multimodal and wearable devices
  • Cognitive load and user experience in multimodal and wearable human-computer interaction
  • Other research on multimodal or wearable human-computer interaction

ArtInHCI’24-MWHCI Committee Member


Associate Professor Wentao Wei
Nanjing University of Science and Technology

Areas of Expertise: natural human-computer interaction, wearable gesture interaction, multimodal sensing user interfaces, generative machine learning, sEMG-based intelligent gesture recognition algorithms, multi-channel physiological signal recognition and processing

Brief Introduction: Dr. Wentao Wei received the Ph.D. degree in computer science and technology from Zhejiang University. He is currently an Associate Professor with the School of Design Arts and Media, Nanjing University of Science and Technology. His research interests include natural human-computer interaction, wearable gesture interaction, multimodal sensing user interfaces, and sEMG-based pattern recognition algorithms. His research has received support from the National Natural Science Foundation of China and the Natural Science Foundation of Jiangsu Province. He has produced more than 15 research outputs, including papers, a monograph, patents, and software copyrights. Two of his first-authored papers have each been cited over 200 times. He also serves as a reviewer for academic journals such as IEEE JBHI, IEEE TNSRE, IEEE RA-L, IEEE Sensors Journal, and ACM TIST.