{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,2]],"date-time":"2026-05-02T07:02:59Z","timestamp":1777705379358,"version":"3.51.4"},"reference-count":10,"publisher":"SAGE Publications","issue":"2","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IFS"],"published-print":{"date-parts":[[2024,2,14]]},"abstract":"<jats:p>Recently, actor-critic architectures such as deep deterministic policy gradient (DDPG) are able to understand higher-level concepts for searching rich reward, and generate complex actions in continuous action space, and widely used in practical applications. However, when action space is limited and has dynamic hard margins, training DDPG can be problematic and inefficiency. Since real-world actuators always have margins and interferences, after initialization, the actor network is likely to be stuck at a local optimal point on action space margin: actor gradient orients to the outside of action space but actuators stop at the margin. If the hard margins are complex, dynamic and unknown to the DDPG agent, it is unable to use penalty functions to recover from local optimum. If we enlarge the random process for local exploration, the training could be in potential risk of failure. Therefore, simply relying on gradient of critic network to train the actor network is not a robust method in real environment. To solve this problem, in this paper we modify DDPG to deep comparative policy (DCP). Rather than leveraging critic-to-actor gradient, the core training process of DCP is regulated by a T-fold compare among random proposed adjacent actions. The performance of DDPG, DCP and related algorithms are tested and compared in two experiments. Our results show that, DCP is effective, efficient and qualified to perform all tasks that DDPG can perform. 
More importantly, DCP is less likely to be influenced by the action space margins: it provides more safety against training failure and local optima, and gains more robustness in applications with dynamic hard margins in the action space. Another advantage is that complex penalty terms for detecting margin contact are not required, so the reward function can remain brief.<\/jats:p>","DOI":"10.3233\/jifs-233747","type":"journal-article","created":{"date-parts":[[2023,9,8]],"date-time":"2023-09-08T10:37:31Z","timestamp":1694169451000},"page":"3773-3788","source":"Crossref","is-referenced-by-count":0,"title":["An efficient and robust gradient reinforcement learning: Deep comparative policy"],"prefix":"10.1177","volume":"46","author":[{"given":"Jiaguo","family":"Wang","sequence":"first","affiliation":[{"name":"Northwestern Polytechnical University, Xi\u2019an, China"}]},{"given":"Wenheng","family":"Li","sequence":"additional","affiliation":[{"name":"AVIC Xi\u2019an Aeronautics Computing Technique Research Institute, Xi\u2019an, China"}]},{"given":"Chao","family":"Lei","sequence":"additional","affiliation":[{"name":"School of Computing and Information Systems, The University of Melbourne, Parkville, Victoria, Australia"}]},{"given":"Meng","family":"Yang","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Monash University, Clayton Victoria, Australia"}]},{"given":"Yang","family":"Pei","sequence":"additional","affiliation":[{"name":"Northwestern Polytechnical University, Xi\u2019an, China"}]}],"member":"179","reference":[{"issue":"7587","key":"10.3233\/JIFS-233747_ref6","doi-asserted-by":"crossref","first-page":"484","DOI":"10.1038\/nature16961","article-title":"Mastering the game of Go with deep neural networks and tree 
search","volume":"529","author":"Silver","year":"2016","journal-title":"Nature"},{"issue":"7676","key":"10.3233\/JIFS-233747_ref7","doi-asserted-by":"crossref","first-page":"354","DOI":"10.1038\/nature24270","article-title":"Mastering the game of go without human knowledge","volume":"550","author":"Silver","year":"2017","journal-title":"Nature"},{"key":"10.3233\/JIFS-233747_ref10","first-page":"820","article-title":"Dual learning for machine translation","volume":"29","author":"He","year":"2016","journal-title":"Advances in neural Information Processing Systems"},{"key":"10.3233\/JIFS-233747_ref15","doi-asserted-by":"crossref","unstructured":"Van H.H. , Guez A. and Silver D. , Deep reinforcement learning with double q-learning, Proceedings of the AAAI Conference on Artificial Intelligence 30(1) (2016).","DOI":"10.1609\/aaai.v30i1.10295"},{"issue":"1","key":"10.3233\/JIFS-233747_ref23","first-page":"2018","article-title":"Rainbow: Combining improvements in deep reinforcement learning","volume":"32","author":"Hessel","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"issue":"2","key":"10.3233\/JIFS-233747_ref26","doi-asserted-by":"crossref","first-page":"311","DOI":"10.1016\/0004-3702(92)90058-6","article-title":"Automatic programming of behavior-based robots using reinforcement learning","volume":"55","author":"Mahadevan","year":"1992","journal-title":"Artificial Intelligence"},{"issue":"2","key":"10.3233\/JIFS-233747_ref28","doi-asserted-by":"crossref","first-page":"371","DOI":"10.2514\/1.G001889","article-title":"Computational investigation of environment learning in guidance and navigation","volume":"40","author":"Verma","year":"2016","journal-title":"Journal of Guidance, Control, and Dynamics"},{"issue":"2","key":"10.3233\/JIFS-233747_ref29","doi-asserted-by":"crossref","first-page":"264","DOI":"10.1109\/72.914523","article-title":"Online learning control by association and 
reinforcement","volume":"12","author":"Si","year":"2001","journal-title":"IEEE Transactions on Neural Networks"},{"key":"10.3233\/JIFS-233747_ref30","doi-asserted-by":"crossref","first-page":"237","DOI":"10.1613\/jair.301","article-title":"Reinforcement learning: A survey","volume":"4","author":"Kaelbling","year":"1996","journal-title":"Journal of Artificial Intelligence Research"},{"issue":"1","key":"10.3233\/JIFS-233747_ref32","doi-asserted-by":"crossref","first-page":"26","DOI":"10.2514\/2.4029","article-title":"Nonlinear flight control using neural networks","volume":"20","author":"Kim","year":"1997","journal-title":"Journal of Guidance, Control, and Dynamics"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/JIFS-233747","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T09:43:41Z","timestamp":1777455821000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/JIFS-233747"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,14]]},"references-count":10,"journal-issue":{"issue":"2"},"URL":"https:\/\/doi.org\/10.3233\/jifs-233747","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,14]]}}}