Autonomous Vehicle Navigation in Highway with Deep Q-Network (DQN) using Reinforcement Learning Approach

Description

Bibliographic Details
Published in: Proceedings of International Conference on Artificial Life and Robotics
Authors: Fujita S.T.; Saruchi S.A.B.; Al-Talib A.A.M.; Wahid N.; Chowdhury A.K.; Maidin S.N.
Format: Conference paper
Language: English
Published: ALife Robotics Corporation Ltd, 2025
ISSN: 2435-9157
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85219556138&partnerID=40&md5=afdd0bbd85a5b2ced5a2438a773e03e5

Abstract
Autonomous vehicles (AVs) are becoming essential to modern transportation systems, yet they operate in highly dynamic environments. Their success depends on real-time decision-making in unpredictable traffic scenarios, which often lie beyond the scope of initial design assumptions. This unpredictability limits the effectiveness of traditional rule-based decision-making systems and predefined cost functions for real-time optimization. In safety-critical applications such as autonomous driving, reinforcement learning (RL) agents without safety mechanisms often struggle to converge or require extensive training data to produce reliable policies, making safe and efficient operation difficult to achieve. To address these challenges, this paper proposes an RL-based framework in which the ego vehicle refines its decision-making by interacting with a simulated traffic environment. A short-horizon safety mechanism (SM) is integrated to make training safer by providing alternative safe actions in critical scenarios. The RL agent employs a deep neural network to map system states to optimal actions. The SM generalizes across risky states, such as near-misses, collisions, or rainy nighttime conditions, while creating a stable learning environment that improves learning efficiency and enables meaningful exploration for optimal policy development. The method was validated in a highway driving scenario with varying traffic densities using the DQN algorithm and the CARLA simulator. Results demonstrated that integrating the safety mechanism significantly improved learning efficiency and enabled the AV to make safe, reliable decisions even in complex and unpredictable traffic conditions. © The 2025 International Conference on Artificial Life and Robotics (ICAROB2025).
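
The record does not include code, but the decision loop the abstract describes (a DQN policy whose chosen action can be overridden by a short-horizon safety mechanism before it reaches the simulator) can be sketched roughly as follows. This is a minimal illustration under assumed details: a discrete action set, a generic is_risky safety check, and placeholder hooks rather than the authors' actual CARLA setup; every name here (QNet, is_risky, safe_fallback, ACTIONS) is hypothetical.

import random
import torch
import torch.nn as nn

# Hypothetical discrete highway-driving action set (not from the paper).
ACTIONS = ["keep_lane", "lane_left", "lane_right", "accelerate", "brake"]

class QNet(nn.Module):
    """Deep network mapping a state vector to one Q-value per action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def is_risky(state: torch.Tensor, action: int) -> bool:
    """Short-horizon safety check (placeholder): e.g. roll the ego state a
    few steps forward under `action` and flag near-miss distances. The
    real criterion is domain-specific; this sketch is always permissive."""
    ...
    return False

def safe_fallback(state: torch.Tensor) -> int:
    """Alternative safe action the SM substitutes in critical scenarios
    (here simply: brake)."""
    return ACTIONS.index("brake")

def select_action(q_net: QNet, state: torch.Tensor, epsilon: float) -> int:
    # Standard epsilon-greedy DQN action selection...
    if random.random() < epsilon:
        action = random.randrange(len(ACTIONS))
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax().item())
    # ...followed by the safety-mechanism override: if the proposed action
    # looks risky over a short horizon, a safe alternative replaces it.
    if is_risky(state, action):
        action = safe_fallback(state)
    return action

In such a setup, one plausible way to realize the "stable learning environment" the abstract mentions is to execute and store the overridden (safe) action in the replay buffer instead of the risky one, so the agent still learns from critical scenarios without experiencing their worst outcomes; the record does not specify how the authors handle this detail.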