Neuro-Cognitive Reward Modeling for Human-Centered Autonomous Vehicle Control

Explainable & Ethical AI
Published: arXiv:2603.25968v1
Authors

Zhuoli Zhuang, Yu-Cheng Chang, Yu-Kai Wang, Thomas Do, Chin-Teng Lin

Abstract

Recent advancements in computer vision have accelerated the development of autonomous driving. Despite these advancements, training machines to drive in a way that aligns with human expectations remains a significant challenge. Human factors are still essential, as humans possess a sophisticated cognitive system capable of rapidly interpreting scene information and making accurate decisions. Aligning machines with human intent has been explored with Reinforcement Learning from Human Feedback (RLHF). Conventional RLHF methods rely on collecting human preference data by manually ranking generated outputs, which is time-consuming and indirect. In this work, we propose an electroencephalography (EEG)-guided decision-making framework that incorporates human cognitive insights into reinforcement learning (RL) for autonomous driving without interrupting behavioural responses. We collected EEG signals from 20 participants in a realistic driving simulator and analyzed event-related potentials (ERPs) in response to sudden environmental changes. Our proposed framework employs a neural network to predict ERP strength from visual scene information. Moreover, we explore integrating this cognitive information into the reward signal of the RL algorithm. Experimental results show that our framework improves the collision avoidance ability of the RL algorithm, highlighting the potential of neuro-cognitive feedback in enhancing autonomous driving systems. Our project page is: https://alex95gogo.github.io/Cognitive-Reward/.

Paper Summary

Problem
The main challenge addressed by this research paper is the difficulty of training autonomous vehicles (AVs) to drive in a way that aligns with human expectations. Many current autonomous driving systems rely on imitation learning, which suffers from the distribution shift problem: models fail to generalize beyond their training data, resulting in poor performance in out-of-distribution scenarios such as emergency braking or interactive driving.
Key Innovation
The key innovation of this paper is an electroencephalography (EEG)-guided decision-making framework that incorporates human cognitive insights into reinforcement learning (RL) for autonomous driving. The framework trains a neural network on recorded EEG signals to predict the strength of event-related potentials (ERPs) elicited by sudden environmental changes directly from visual scene information, and integrates this predicted cognitive response into the reward signal of the RL algorithm.
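The reward-shaping idea can be illustrated with a short sketch. Everything here is an assumption for illustration: the paper trains a neural network as the ERP predictor, whereas `predicted_erp_strength` below is a fixed linear stub, and the subtractive combination with an `erp_weight` coefficient is one plausible way to fold the cognitive signal into the reward, not the paper's exact formulation.

```python
import numpy as np

def predicted_erp_strength(scene_features: np.ndarray) -> float:
    """Stand-in for the learned ERP predictor: a fixed linear readout of
    scene features, clipped to [0, 1]. Purely illustrative."""
    weights = np.full(scene_features.shape, 1.0 / scene_features.size)
    return float(np.clip(weights @ scene_features, 0.0, 1.0))

def shaped_reward(env_reward: float, scene_features: np.ndarray,
                  erp_weight: float = 0.5) -> float:
    """Penalize states predicted to elicit a strong ERP, i.e. states a
    human driver would register as surprising or hazardous."""
    return env_reward - erp_weight * predicted_erp_strength(scene_features)
```

A benign scene (zero features here) leaves the environment reward untouched, while a scene predicted to elicit a strong ERP is penalized, steering the policy away from states humans would flag as dangerous.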
Practical Impact
This research has the potential to significantly improve the performance of autonomous vehicles in complex driving scenarios. By incorporating human cognitive insights into the RL algorithm, the framework can enhance the collision avoidance ability of the AV, leading to safer driving behavior. This could have a major impact on the development of autonomous vehicles, enabling them to better adapt to real-world driving scenarios and reducing the risk of accidents.
Analogy / Intuitive Explanation
Imagine you're driving a car and suddenly a pedestrian steps into the road. Your brain quickly processes the scene and sends a signal to your muscles to react accordingly. This is similar to how the EEG-guided decision-making framework works. It uses EEG signals to capture the brain's rapid processing of visual information and uses this information to guide the AV's decision-making. This allows the AV to react more like a human driver, making it safer and more effective in complex driving scenarios.
Paper Information
Categories:
cs.CV
Published Date:
arXiv ID: 2603.25968v1
