International Journal of Systems Science, 2026 (SCI-Expanded, Scopus)
Load Frequency Control is essential for maintaining frequency stability in modern power systems with high renewable energy penetration, where nonlinearities, uncertainties, and time-delay effects are unavoidable. Most existing studies optimise controller parameters offline and apply them as fixed values during operation, which limits adaptability under dynamically changing conditions. To overcome this drawback, this paper proposes a reinforcement learning-cooperated dynamic parameter optimisation framework for a PIDn controller based on the Deep Deterministic Policy Gradient (DDPG) algorithm. In the proposed approach, a DDPG agent continuously observes system frequency deviations, tie-line power deviation, and the integral of time-weighted absolute error, and dynamically generates the proportional (Kp), integral (Ki), derivative (Kd), and derivative filter (N) parameters of multiple PIDn controllers in real time. Unlike conventional static or offline-trained methods, the proposed scheme enables online adaptive tuning of controller parameters through continuous interaction with the power system environment. The effectiveness of the proposed control strategy is verified through extensive simulations and real-time hardware-in-the-loop implementation using an OPAL-RT platform. Performance evaluations under various operating conditions, including step load changes, integer-order and fractional-order time delays, intermittent power supply, parameter uncertainties, and governor dead-band nonlinearities, demonstrate that the proposed controller provides superior transient performance and robustness compared to a metaheuristic-optimised PIDn controller.
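The adaptive loop the abstract describes, where an actor network maps the observed signals (frequency deviations, tie-line power deviation, ITAE) to the four PIDn parameters at every control step, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the network shapes, the gain bounds, and the forward-Euler discretisation of the filtered derivative are all assumptions made here for concreteness.

```python
import numpy as np

def pidn_step(e, state, Kp, Ki, Kd, N, dt):
    """One discrete step of a PIDn controller (PID with a first-order
    derivative filter of coefficient N), using forward-Euler integration.

    The filtered derivative realises Kd * N*s / (s + N):
        x_dot = N * (e - x),  derivative output = N * (e - x).
    """
    integ, d_state = state
    integ += e * dt                       # integral of the error
    d_out = N * (e - d_state)             # filtered derivative of the error
    d_state += d_out * dt                 # advance the filter state
    u = Kp * e + Ki * integ + Kd * d_out  # PIDn control signal
    return u, (integ, d_state)

def actor(obs, W1, b1, W2, b2, gain_max):
    """Deterministic actor: observation -> bounded (Kp, Ki, Kd, N).

    In DDPG the tanh output layer keeps actions in (-1, 1); here it is
    rescaled to [0, gain_max] so every generated parameter is positive
    and bounded. Weights would come from actor-critic training; random
    weights are used below purely to exercise the mapping.
    """
    h = np.tanh(W1 @ obs + b1)            # hidden layer
    a = np.tanh(W2 @ h + b2)              # raw action in (-1, 1)
    return 0.5 * (a + 1.0) * gain_max     # rescale to [0, gain_max]

# Illustrative use: one observation (delta-f, delta-Ptie, ITAE) -> gains -> control step.
rng = np.random.default_rng(0)
obs = np.array([0.01, -0.005, 0.2])       # [frequency dev., tie-line dev., ITAE]
W1, b1 = rng.standard_normal((8, 3)), np.zeros(8)
W2, b2 = rng.standard_normal((4, 8)), np.zeros(4)
gain_max = np.array([2.0, 1.0, 0.5, 100.0])  # assumed upper bounds on Kp, Ki, Kd, N
Kp, Ki, Kd, N = actor(obs, W1, b1, W2, b2, gain_max)
u, ctrl_state = pidn_step(0.01, (0.0, 0.0), Kp, Ki, Kd, N, dt=0.01)
```

In the full scheme, this actor would be trained online by the DDPG critic against a reward built from the deviation signals, and one such parameter set would be generated per PIDn controller in each area at every step.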