This paper presents a deep reinforcement learning (RL)
approach for training mobile robots to navigate complex environments using the Twin Delayed Deep Deterministic Policy
Gradient (TD3) method, which is known for its stability in
continuous control tasks. The robot model simulates real-world bicycle kinematics with nonholonomic constraints and tackles three key navigation tasks: point tracking with obstacle avoidance, linear path following, and circular path tracking, with the objectives of reducing the distance to the goal, minimizing tracking error, and lowering control effort over time.
This approach replaces traditional methods and significantly improves upon them, enabling the system to reach targets even at points it has not been trained on, thereby boosting efficiency and adaptability.
Synthetic environments with obstacles are created using the MATLAB® Reinforcement Learning Toolbox for realistic simulations.
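As a rough illustration of such a setup (a minimal sketch, not the authors' code: the observation layout, reward shaping, and the simplified unicycle-style integration below are all assumptions), a custom environment can be defined with rlFunctionEnv from the toolbox:

% Minimal sketch (assumptions throughout): a toy goal-reaching
% environment built with rlFunctionEnv. The paper's occupancy-map
% observations are richer than the 4-element vector used here.
obsInfo = rlNumericSpec([4 1]);                                    % [x; y; theta; distance to goal]
actInfo = rlNumericSpec([2 1], 'LowerLimit', -1, 'UpperLimit', 1); % normalized [v; omega]
env = rlFunctionEnv(obsInfo, actInfo, @stepFcn, @resetFcn);

function [obs, reward, isDone, logged] = stepFcn(action, logged)
    % Unicycle-style integration for brevity; the paper uses bicycle
    % kinematics. Reward penalizes goal distance and control effort.
    dt = 0.1;
    v = action(1); w = action(2);
    logged.pose = logged.pose + dt * [v*cos(logged.pose(3)); v*sin(logged.pose(3)); w];
    dist = norm(logged.goal - logged.pose(1:2));
    obs = [logged.pose; dist];
    reward = -dist - 0.01*(v^2 + w^2);
    isDone = dist < 0.1;
end

function [obs, logged] = resetFcn()
    logged.pose = zeros(3, 1);      % start at the origin
    logged.goal = [4; 3];           % arbitrary goal point (assumed)
    obs = [logged.pose; norm(logged.goal)];
end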
The system employs an actor-critic neural network that processes occupancy map data and outputs continuous velocity commands.
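A corresponding agent might be assembled as sketched below (again an assumption, not the paper's architecture: layer widths and training settings are illustrative, and obsInfo, actInfo, and env are reused from the sketch above):

% Actor: deterministic policy mapping observations to normalized
% velocity commands in [-1, 1].
actorNet = [
    featureInputLayer(4)
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(2)
    tanhLayer];
actor = rlContinuousDeterministicActor(actorNet, obsInfo, actInfo);

% Critic: observation and action paths merged into a single Q(s,a) head.
obsPath = [featureInputLayer(4, 'Name', 'obs'), fullyConnectedLayer(128, 'Name', 'obsFC')];
actPath = [featureInputLayer(2, 'Name', 'act'), fullyConnectedLayer(128, 'Name', 'actFC')];
common  = [additionLayer(2, 'Name', 'add'), reluLayer, fullyConnectedLayer(1)];
criticNet = addLayers(addLayers(layerGraph(obsPath), actPath), common);
criticNet = connectLayers(criticNet, 'obsFC', 'add/in1');
criticNet = connectLayers(criticNet, 'actFC', 'add/in2');

% TD3 trains twin critics and takes the smaller target to curb Q overestimation.
critic1 = rlQValueFunction(criticNet, obsInfo, actInfo, ...
    'ObservationInputNames', 'obs', 'ActionInputNames', 'act');
critic2 = rlQValueFunction(criticNet, obsInfo, actInfo, ...
    'ObservationInputNames', 'obs', 'ActionInputNames', 'act');

agent = rlTD3Agent(actor, [critic1, critic2], rlTD3AgentOptions('SampleTime', 0.1));

% Iterative learning loop; episode counts are illustrative, not the paper's.
trainOpts = rlTrainingOptions('MaxEpisodes', 2000, 'MaxStepsPerEpisode', 500);
trainStats = train(agent, env, trainOpts);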
Evaluations show the approach’s effectiveness in teaching robots collision-free navigation, achieving human-level competency in complex environments through iterative
learning. This work demonstrates the potential of model-free
deep RL for real-world mobile robot navigation.