Special Committee Incentive Program Forum
August 22 (Friday), 13:00–15:00
Venue: Shenzhen Hall, 3rd Floor

Forum Guests

Yang Tian

Guangxi University

Introduction:
    Associate Professor at the School of Computer and Electronic Information, Guangxi University, and Director of the Virtual Reality Innovation Center at Guangxi University. He received his Ph.D. from The Chinese University of Hong Kong in 2020. His research focuses on extended reality (XR) human–computer interaction. He has proposed a series of interaction techniques and paradigms based on major body segment movements to address fundamental interaction issues in XR operating system interfaces. Among them, his “Kinaesthetic Attachments” concept represents a novel paradigm in pseudo-haptics. His work has been published in leading HCI journals and conferences such as TVCG, CHI, and UbiComp. Dr. Tian also serves as an executive member of the CCF Human–Computer Interaction Technical Committee, an executive member of the CCF CAD & Graphics Technical Committee, and publicity chair of both the inaugural Human–Machine Computing Conference (HMCC 2025) and the 21st Harmony in Human–Machine Environment Conference (HHME 2025). He has led one National Natural Science Foundation of China (NSFC) project and one Guangxi science and talent project.

Report Title: Towards Efficient, Comfortable, and Accessible XR Operating System Interface Interaction

Abstract:
    With the significant reduction in weight and cost of XR glasses, the adoption of XR technologies is accelerating. However, XR operating system interfaces face fundamental interaction challenges, such as inaccurate distance and orientation estimation, lack of haptic feedback, user fatigue from mid-air hand movements, and the inaccessibility of hand-dominant interaction for users with upper-limb disabilities. These issues hinder widespread real-world application. This talk introduces interaction paradigms based on major body segment movements (arms, legs, and head) for efficient, comfortable, and accessible XR interfaces, including: two spatial reference techniques; the novel pseudo-haptics paradigm “Kinaesthetic Attachments” and its applications; an “outward walking” metaphor for system activation; and a head-rotation-only icon selection technique for use while walking. These results provide feasible solutions for the immature XR interface ecosystem.

Xin Tong

The Hong Kong University of Science and Technology (Guangzhou)

Introduction:
    Assistant Professor at the Information Hub, Computational Media and Arts Thrust, HKUST (Guangzhou). Previously, she was an Assistant Professor in Computing and Design at Duke Kunshan University. She received the NSERC Postdoctoral Fellowship in Canada and worked at Stanford University School of Medicine’s Pervasive Health Technology Lab. She earned her Ph.D. and M.Sc. at Simon Fraser University, where her dissertation won the 2021 Bill Buxton Best HCI Dissertation Award from the Canadian HCI Society. Her research focuses on interfaces, mechanisms, and models for human–AI collaborative interaction, with applications in health, accessibility, and digital cultural heritage. She has published 40+ papers in leading venues (ACM CHI, CSCW, etc.) as first or corresponding author, and serves on committees and as a reviewer for these venues. Since 2021, she has been an executive member of the CCF HCI Technical Committee, and since 2024, a committee member of the ACM SIGCHI GBA Local Chapter. She has received multiple awards and funding, including the 2024–2026 Foreign Expert Talent Program, the 2025 CCF–Lenovo Blue Ocean Research Fund, an ACM CHI 2024 Best Paper Award, and others.

Report Title: Exploring Personalized Learning for Children with Autism and Their Key Stakeholders through AI and Gamification

Abstract:
    This talk introduces several projects exploring how AI and gamified systems can support personalized learning and interventions for children with autism and related stakeholders (parents, teachers, etc.). Examples include: a Minecraft-based game for peer social interaction, a PONG-style tablet game for turn-taking communication, an AI-driven virtual agent for customized parent training and educational support, and LLM-powered systems for personalized language and social learning activities in special education. These systems, co-designed with educators and families, embed learning into everyday contexts, aiming to provide context-aware, personalized support across home, school, and community settings.

Yawen Zheng

Institute of Software, Chinese Academy of Sciences

Introduction:
    Postdoctoral researcher at the Institute of Software, Chinese Academy of Sciences, with a Ph.D. from the School of Software, Shandong University. Her research focuses on user behavior modeling, motion uncertainty, and HCI techniques. She has developed a comprehensive modeling framework covering user differences, spatial target motion, and environmental disturbances in complex interaction contexts. She has participated in several national projects and published 9 papers in venues such as IJHCS, UIST, ACM MM, and ICME, with 3 granted patents.

Report Title: Modeling and Interaction Methods for User Behavior in Moving Target Selection under Complex Interaction Contexts

Abstract:
    This talk addresses the imprecision of moving target selection in complex interactive environments. It explores modeling approaches for selection endpoint distributions considering user differences, target motion in 3D space, and environmental disturbances. A series of endpoint distribution models are presented to systematically understand user behavior in these contexts, providing theoretical and methodological innovations to improve accuracy. The proposed user performance enhancement techniques have been validated in real vehicular environments, significantly improving target selection accuracy in both 2D and 3D tasks.

Hongbo Zhang

Zhejiang University

Introduction:
    Received his Ph.D. in 2025 from the College of Computer Science and Technology, Zhejiang University, under the supervision of Prof. Lingyun Sun. His research focuses on AI and HCI, particularly human–AI collaboration and co-creation enabled by generative AI. During his Ph.D., he published 9 papers in CCF-A or top-tier SCI journals, 6 as first author. His representative works appeared in TOCHI, IJHCS, CHI, and UIST. He participated as a key contributor in multiple national R&D projects. He was awarded the CCF HCI Committee Outstanding Dissertation Award (2025), National Ph.D. Scholarship, and other honors.

Report Title: Research on Hybrid Prototyping Methods Supported by Generative Artificial Intelligence

Abstract:
    Conceptual design is the source of innovation, and prototypes are its core vehicles. This talk introduces a generative hybrid prototyping method framed around the “designer–hybrid prototype–generative AI” triad. The method addresses key challenges in balancing iteration speed and fidelity, prototype granularity and flexibility, and individual creativity versus team collaboration. By integrating generative AI and AR into conceptual design, this work provides new ideas, methods, tools, and paradigms for human–AI collaborative design.

Tianren Luo

University of Chinese Academy of Sciences

Introduction:
    Ph.D. in Computer Science and Technology (2025), University of Chinese Academy of Sciences, advised by Dr. Feng Tian and Dr. Teng Han at the Institute of Software, CAS. His research focuses on HCI and VR, particularly sensory conflict in remapping interactions. He has published 10 papers in top venues (ACM CHI, ACM UIST, IEEE TVCG, IEEE VR), 7 as first author, and received honors including a UIST 2024 Best Paper Honorable Mention, the CAS President’s Award, the Zhu Liyuehua Scholarship, and others. The term “Remapping Interaction,” which he coined, has been adopted as consensus terminology and included in the CCF HCI Committee’s “Top Ten Key Scientific Questions in HCI in China.” He has led and participated in multiple national and provincial research projects.

Report Title: Research on Multisensory Conflicts in VR Remapping Interaction

Abstract:
    As HCI evolves from 2D keyboard–mouse input to 3D VR interaction, sensory integration now spans both external senses (vision, hearing, touch) and internal senses (vestibular sense, proprioception), providing more natural experiences but also introducing significant sensory conflicts. Remapping interaction, a representative VR technique, modifies the spatial mapping between the real body and the virtual avatar, enabling physical obstacle avoidance, motion optimization, and experience enhancement. However, it also disrupts egocentric references and causes mismatches between vision and the vestibular/proprioceptive senses, affecting comfort, spatial perception, motor control, and immersion. This talk presents systematic studies addressing three major challenges: (1) constructing typical remapping conditions and mechanistic models of sensory conflict; (2) extending the research to multi-user VR and immersive teleoperation and identifying their specific conflict features; (3) developing continuous measurement tools for real-time quantification of conflict intensity, along with immersive design tools for conflict-aware prototyping and optimization. These contributions advance both theoretical understanding and practical solutions for VR applications.