Human-Centered Interaction Technology in the Smart Cockpit Era
August 23 (Saturday) 3:45 PM-5:30 PM
Location: Grand Ballroom B, 3rd Floor
| Guest | Profile |
|---|---|
| Guo Gang, Chongqing University | Report Title: Advanced Human-Machine Interaction Technologies for Automotive Intelligent Cockpits Empowering Improved User Satisfaction. Report Introduction: Based on the definitions in the Intelligent Cockpit Classification and Comprehensive Evaluation White Paper, this report introduces forward-looking technologies for automotive intelligent cockpits and how they can improve user satisfaction, focusing on four aspects: key human-machine interaction technologies, systems and components, infrastructure support, and testing and evaluation. (1) Key human-machine interaction technologies, including: perception of human status, behavior, intention, preference, emotion, and trust under natural driving conditions; real-time evaluation of performance, load, satisfaction, and safety in human-vehicle-road scenarios using edge-to-cloud integration; construction of a large cloud-based human-machine interaction model; and construction of a vehicle-side human-machine interaction calibration model based on distillation technology. (2) System and component technologies, including: the "Tuling" chassis and active suspension, AR-HUD, "one chip, multiple screens," smart seats, smart air conditioning, smart displays, smart audio, in-vehicle lighting and emotional experience, experiential and functional fragrances, vehicle-to-vehicle information interaction, and expandable devices. (3) Foundational support, including: high-performance cockpit chips, domain controllers and multi-domain collaboration, cockpit operating systems, cockpit technical architecture, the cockpit digital base, artificial intelligence and large-model applications, and intelligent design and development toolchains. (4) Key testing and evaluation technologies, including: intelligent cockpit function and performance testing and evaluation; bench and road testing; regulatory and R&D testing; component end-of-line testing; in-vehicle real-time testing; user experience testing under semi-natural driving conditions; and real-time user experience testing under natural driving conditions using combined on-device and cloud resources. (5) Construction of an innovative R&D platform for intelligent cockpit human-machine interaction, empowering users with forward-looking technologies to improve user satisfaction. |
| Yang Zhenyu, Tongji University | Report Title: Smart Cockpit Internationalization Challenges and the Chinese Narrative. Report Introduction: The globalization of smart cockpits has evolved into a dual competition of technological prowess and cultural understanding. What are the unique user profiles, technological philosophies, application characteristics, and industry-chain differences across markets such as China, Europe, North America, and Japan, from Chinese speed to European order, from North American disruption to Japanese empathy? This report analyzes the cultural divide underlying these differences, identifying the core conceptual contrast between Eastern-style empathetic interaction and Western-style efficient control. It also proposes the integration of functionality, services, and cultural resonance that companies expanding internationally require. Finally, confronting the shared challenges of data sovereignty, cross-cultural trust, and algorithmic ethics, the report proposes a globalization formula for Chinese companies, "technology as the spear, culture as the shield, and ecosystem as the king," and envisions "Made in China" taking the world by storm. |
| Zhang Jingyu, Institute of Psychology, Chinese Academy of Sciences | Report Title: New Challenges of Intelligence to Cockpit Human-Machine Interaction and Psychological Countermeasures. Report Introduction: With advancing intelligence, will the cockpit experience necessarily improve? How can human-machine co-driving be made safe? We propose that the core issues of human-machine interaction in the intelligent era are: how machines understand humans, how humans understand machines, and how to resolve the trust-takeover paradox. We will explore new paradigms for addressing these issues from new perspectives, including predicting human interaction needs, naturally expressing machine intent, and ecologically maintaining situational awareness. These ideas will support the development of automated interface evaluation and optimization, new in-cabin information display systems, and vehicle-to-vehicle interaction technologies. We urge all colleagues to work together toward theoretical breakthroughs while building large databases with more diverse information and the engineering verification systems that multidisciplinary collaboration requires. |
| Li Hongting, Zhejiang University of Technology | Report Title: Human Factors Considerations for the Fluency of Voice Interaction in Smart Car Cockpits. Report Introduction: Fluency is an individual's subjective experience of the difficulty of information processing. As voice recognition and control functions have become standard features of car cockpits, the fluency of voice interaction has become a key research focus in human factors and user experience. This report introduces the concept of fluency in human-machine interaction and a framework for fluency in smart-cockpit voice interaction. It then analyzes the key human factors issues of voice-interaction fluency from the perspectives of perception, cognition, and feedback. Finally, it proposes a preliminary index system for smart-cockpit voice-interaction fluency covering voice wake-up, voice control, voice dialogue, application control, and output feedback. |
| Ren Wei, Chongqing Changan Technology Co., Ltd. | Report Title: Transformation and Practical Exploration of Cockpit Human-Machine Dialogue Driven by Large Models. Report Introduction: Before the advent of large models, cockpit NLU relied on a "BERT + rules" architecture with significant limitations: stable operation in vertical domains such as vehicle control required some 100,000 data samples, and its understanding and generation capabilities in open-domain conversation were too weak for commercial use. Large models overcome these bottlenecks. Their powerful language capabilities let NLU easily handle colloquial expressions, reducing the required data volume to one-tenth of the original. Their generative capabilities significantly enhance the user experience in scenarios such as mobility services and casual conversation. Leveraging agent technology and memory systems, in-vehicle dialogue can handle complex tasks and meet personalized needs. Advances in large models with small parameter counts are enabling on-board deployment and driving the evolution of interaction toward a multimodal "language + vision" paradigm. We are conducting research on the fine-tuning efficiency, interpretability, and on-device deployment of large models. |