International Journal of Control, Automation and Systems, 2026 (SCI-Expanded, Scopus)
Snake robots offer remarkable capabilities for navigating complex and dynamic terrains, yet achieving adaptive and energy-efficient locomotion under varying environmental conditions remains a fundamental challenge, particularly for wheel-less designs, which face increased frictional forces. To address this challenge, this paper proposes an end-to-end adaptive trajectory generation framework for a wheel-less snake robot using a Deep Reinforcement Learning (DRL) controller based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. The approach allows the robot to autonomously learn optimal locomotion strategies directly from environmental observations, without predefined gaits or extensive manual tuning. To evaluate the effectiveness and adaptability of the proposed controller, reference speeds were randomly varied in simulation to emulate different ground friction conditions, requiring the robot to adjust its trajectories to maintain energy-efficient motion. Simulation results show that the TD3-based controller generates adaptive and energy-efficient locomotion patterns across varying reference speeds, producing movements that resemble natural snake gaits such as lateral undulation and concertina locomotion. Moreover, the proposed controller exhibits strong generalization, consistently achieving a Pareto-optimal trade-off between speed and energy consumption. These results highlight the potential of DRL-based controllers to improve the adaptability and energy efficiency of snake robots, paving the way for practical applications in search and rescue, exploration, and industrial inspection.
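For concreteness, the core of the standard TD3 algorithm named above is its clipped double-Q target with target-policy smoothing. The sketch below is a minimal, self-contained illustration of that textbook target computation, not the paper's implementation; all function and parameter names (`actor_t`, `q1_t`, `q2_t`, the noise and clipping constants) are hypothetical placeholders.

```python
import numpy as np

def td3_target(r, s_next, gamma, actor_t, q1_t, q2_t,
               noise_std=0.2, noise_clip=0.5, a_low=-1.0, a_high=1.0,
               rng=None):
    """Clipped double-Q target with target-policy smoothing (standard TD3).

    r       : reward for the transition
    s_next  : next state
    gamma   : discount factor
    actor_t : target policy network, s -> action (placeholder callable)
    q1_t/q2_t : twin target critics, (s, a) -> Q-value (placeholder callables)
    """
    rng = rng or np.random.default_rng(0)
    # Target-policy smoothing: perturb the target action with clipped noise.
    a = actor_t(s_next)
    eps = np.clip(rng.normal(0.0, noise_std, size=np.shape(a)),
                  -noise_clip, noise_clip)
    a_smooth = np.clip(a + eps, a_low, a_high)
    # Clipped double-Q: take the minimum of the two target critics.
    q_min = np.minimum(q1_t(s_next, a_smooth), q2_t(s_next, a_smooth))
    return r + gamma * q_min
```

In full TD3, both critics regress toward this target while the actor is updated less frequently (delayed policy updates); taking the minimum of the twin critics counteracts the Q-value overestimation that drives the speed/energy trade-off learning off course in DDPG-style methods.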