Position control of a digital electrohydraulic system with limited sensory data using double deep Q-network controller


COŞKUN M. Y., Itik M.

Expert Systems with Applications, vol. 252, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 252
  • Publication Date: 2024
  • DOI: 10.1016/j.eswa.2024.124275
  • Journal Name: Expert Systems with Applications
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Keywords: Digital electrohydraulic systems, Double deep Q-learning, Electrohydraulic, Independent metering, Position control, Reinforcement learning
  • Affiliated with Karadeniz Teknik Üniversitesi: Yes

Abstract

In this study, we propose a reinforcement learning controller for position control of a digital electrohydraulic system (D-EHS) with an independent metering valve structure. Our study addresses the inherent challenges of traditional control methods, such as the need for continuous pressure measurement and complex control structures. As a solution, a double deep Q-network (DDQN) controller that relies solely on position feedback from the D-EHS is suggested. A simulation environment was developed to train the DDQN controller, wherein the artificial neural network structure, training hyperparameters, and reward function were optimized through preliminary studies. The DDQN controller was trained using a single-frequency sinusoidal position reference signal and evaluated experimentally for its position control performance. Tracking performance was assessed against various position reference signals, including ramp signals and sinusoidal signals of different frequencies. Robustness against external disturbances was also tested by increasing the nominal supply pressure by 20% and 40%. The results demonstrate that the DDQN controller successfully controls the D-EHS and achieves good tracking performance on the training position reference signal. Furthermore, the controller exhibits satisfactory tracking performance on position reference signals not seen during training, highlighting its ability to learn the D-EHS dynamics from limited training data. Even under varying supply pressure conditions, the DDQN controller maintains an acceptable level of control performance. Our findings emphasize the generalizability and reliability of the DDQN reinforcement learning controller, making it a suitable candidate for application in digital electrohydraulic systems and contributing to the advancement of control theory research.
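For readers unfamiliar with the double deep Q-network update referenced in the abstract, the sketch below illustrates the core idea: the online network selects the next action while a separate target network evaluates it, which reduces the value overestimation of standard deep Q-learning. This is a minimal, generic example; the network sizes, number of discrete valve actions, discount factor, and state features (here assumed to be position-feedback features only) are illustrative assumptions and are not the values used in the paper.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps the observed state (e.g., position-error features) to Q-values,
    one per discrete valve action (sizes are illustrative assumptions)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def ddqn_targets(online: QNetwork, target: QNetwork,
                 rewards: torch.Tensor, next_states: torch.Tensor,
                 dones: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: online network picks argmax action,
    target network evaluates it."""
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

if __name__ == "__main__":
    # Illustrative usage with random data standing in for a replay batch.
    state_dim, n_actions, batch = 4, 5, 32
    online_net = QNetwork(state_dim, n_actions)
    target_net = QNetwork(state_dim, n_actions)
    target_net.load_state_dict(online_net.state_dict())
    rewards = torch.randn(batch)
    next_states = torch.randn(batch, state_dim)
    dones = torch.zeros(batch)
    y = ddqn_targets(online_net, target_net, rewards, next_states, dones)
    print(y.shape)  # torch.Size([32])
```

In a training loop, these targets would be regressed against the online network's Q-values for the stored actions, with the target network's weights periodically copied from the online network; the actual reward shaping and hyperparameters used in the paper were determined through its preliminary studies.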