Human-Machine Collaboration and Intelligent Perception: Frontiers of Human-Machine Collective Intelligence and Brain-Machine Intelligence
August 22 (Friday) 13:00-15:00
3rd Floor, VIP Hall

Guest Introductions

Pan Yan

National University of Defense Technology

Introduction:
       Pan Yan is an associate researcher at the National Key Laboratory of Information Systems Engineering, National University of Defense Technology. He received his Ph.D. through a joint program between Northwestern Polytechnical University and the University of Maryland, and completed postdoctoral research at the School of Computer Science, National University of Defense Technology (under the supervision of Academician Wang Huaimin). He was selected for the university's youth talent training program and has led multiple important national- and military-level projects, including National Natural Science Foundation of China grants, major theoretical and key projects, and special research topics. His research focuses on theories, algorithms, and applications of efficient collaboration in large-scale human-machine collective intelligence. He has published over 20 academic papers in top conferences and journals such as UbiComp, ICDE, INFOCOM, IPSN, TMC, TON, IoT-J, JNCA, TVT, and Information Sciences. His technical achievements have been deployed in leading organizations, and he won first prize in the "Jinlang-2022" competition organized by the Military Commission.

 

Talk Title: Efficient Collaboration Algorithms for Data-Driven Human-Machine Collective Intelligence

 

Abstract: 

       With the widespread deployment of intelligent robots such as drones and autonomous vehicles in scenarios like smart logistics, perception, and home automation, human-machine collective intelligence is becoming an important paradigm for completing complex tasks through collaboration between humans and machines. This talk focuses on efficient collaboration in data-driven human-machine collective intelligence. The research aims to leverage large-scale behavior and task data from human-machine groups, applying short-term and long-term task prediction, large-scale group behavior understanding, and human-machine knowledge transfer to optimize task allocation and complete large-scale complex tasks more efficiently.

Guo Shihui

Xiamen University

Introduction:
       Guo Shihui is a professor and vice dean of the School of Informatics, Xiamen University. He is an Innovation Leading Talent of the Chinese Academy of Engineering and the UK's Royal Academy of Engineering, a Xiaomi Young Scholar, and a member of Fujian Province's Young Eagle Program. Guo received his bachelor's degree from Yuanpei College, Peking University, in 2010, and his Ph.D. from the National Centre for Computer Animation in the UK in 2015. His research focuses on non-intrusive wearable motion capture. He has led national and international research projects and published over 20 papers in top journals and conferences, including ACM TOG, ACM ToCHI, and ACM CHI. His work has received nominations for Best Paper at CVPR 2020, Best Poster at ChinaVR 2021, and Best Technical Demonstration at HHME 2024. He serves on the editorial boards of the international journals *Computer Animation & Virtual Worlds* and *Visual Informatics*, and has edited textbooks on virtual and augmented reality that are widely used in university courses.

 

Talk Title: Non-Intrusive Wearable Motion Capture

 

Abstract: 

       Wearable motion capture technology is free of the strict spatial-range and lighting requirements of traditional optical motion capture systems, making it key to long-duration, natural human motion capture. However, existing methods typically rely on tight-fitting wearable media and complex sensor donning and calibration procedures, which inevitably constrain human motion and reduce the naturalness and continuity of data collection, severely limiting wider application. To overcome these limitations, the research team has innovatively used loose everyday clothing as the sensor carrier, while developing key technologies such as adaptation to fabric deformation artifacts and dynamic sensor data calibration. This approach achieves comfortable wear and non-intrusive data collection, opening up new possibilities for large-scale, everyday motion capture.

Zhao Sha

Zhejiang University

Introduction:
       Zhao Sha is a distinguished researcher and doctoral supervisor at the National Key Laboratory of Brain-Machine Intelligence, Zhejiang University. She is deputy secretary-general of the CCF Technical Committee on Ubiquitous Computing, a CCF senior member, and a member of the ACM Hangzhou Chapter's executive committee. Zhao received her Ph.D. from the School of Computer Science at Zhejiang University in June 2017, during which she was a visiting scholar at Carnegie Mellon University. From 2017 to 2020, she conducted postdoctoral research at Zhejiang University. Her research focuses on brain-machine interfaces and intelligent perception, with an emphasis on non-invasive brain-signal decoding and closed-loop regulation. She has published over 50 papers and received multiple best paper and cover article awards, including the ACM UbiComp Best Paper Award (a CCF-A venue; as first author, the first domestic paper to receive this award). She received the 2022 ACM Rising Star Award. Zhao has led various national and provincial research projects, served on the program committees of conferences such as AAAI and IJCAI, and reviewed for top conferences and journals such as UbiComp and TNSRE.

 

Talk Title: Brain-Machine Intelligence for Ubiquitous Perception and Control

 

Abstract: 

       As a core technology of the new generation of human-machine integration, brain-machine intelligence for ubiquitous perception and control establishes a bidirectional information channel between the brain and the external world through multi-dimensional signal perception and closed-loop neuroregulation. The technology comprises two key components: on one hand, it combines multimodal sensing of behavioral data and brain signals to decode human behavioral intentions and cognitive states in real time; on the other, it intervenes precisely in brain function based on real-time feedback on brain state. This "perception-decoding-regulation" closed loop enables multi-level understanding from physical behavior to cognitive activity, creating a new paradigm for enhancing human cognitive abilities through external devices. This talk will cover non-invasive brain-machine interface-based brain-signal sensing, brain-state decoding, brain-function regulation, and clinical application examples.