{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,21]],"date-time":"2026-03-21T01:39:30Z","timestamp":1774057170205,"version":"3.50.1"},"reference-count":53,"publisher":"Springer Science and Business Media LLC","issue":"8","license":[{"start":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T00:00:00Z","timestamp":1752537600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T00:00:00Z","timestamp":1752537600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000266","name":"EPSRC","doi-asserted-by":"crossref","award":["EP\/V024868\/1"],"award-info":[{"award-number":["EP\/V024868\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100000266","name":"EPSRC","doi-asserted-by":"crossref","award":["EP\/V024868\/1"],"award-info":[{"award-number":["EP\/V024868\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Mach Learn"],"published-print":{"date-parts":[[2025,8]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>In human society, the conflict between self-interest and collective well-being often obstructs efforts to achieve shared welfare. Related concepts like the Tragedy of the Commons and Social Dilemmas frequently manifest in our daily lives. As artificial agents increasingly serve as autonomous proxies for humans, we propose a novel multi-agent reinforcement learning (MARL) method to address this issue - learning policies to maximise collective returns even when individual agents\u2019 interests conflict with the collective one. 
Unlike traditional cooperative MARL solutions that involve sharing rewards, values, and policies or designing intrinsic rewards to encourage agents to learn collectively optimal policies, we propose a novel MARL approach where agents exchange action suggestions. Our method reveals less private information compared to sharing rewards, values, or policies, while enabling effective cooperation without the need to design intrinsic rewards. Our algorithm is supported by our theoretical analysis that establishes a bound on the discrepancy between collective and individual objectives, demonstrating how sharing suggestions can align agents\u2019 behaviours with the collective objective. Experimental results demonstrate that our algorithm performs competitively with baselines that rely on value or policy sharing or intrinsic rewards.<\/jats:p>","DOI":"10.1007\/s10994-025-06823-z","type":"journal-article","created":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T19:36:17Z","timestamp":1752608177000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Achieving collective welfare in multi-agent reinforcement learning via suggestion sharing"],"prefix":"10.1007","volume":"114","author":[{"given":"Yue","family":"Jin","sequence":"first","affiliation":[]},{"given":"Shuangqing","family":"Wei","sequence":"additional","affiliation":[]},{"given":"Giovanni","family":"Montana","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,15]]},"reference":[{"issue":"September","key":"6823_CR1","doi-asserted-by":"publisher","first-page":"66","DOI":"10.1016\/j.artint.2018.01.002","volume":"258","author":"SV Albrecht","year":"2018","unstructured":"Albrecht, S. V., & Stone, P. (2018). Autonomous agents modelling other agents: A comprehensive survey and open problems. Artificial Intelligence, 258(September), 66\u201395. 
https:\/\/doi.org\/10.1016\/j.artint.2018.01.002","journal-title":"Artificial Intelligence"},{"issue":"2","key":"6823_CR2","doi-asserted-by":"publisher","first-page":"917","DOI":"10.1109\/TCNS.2021.3078100","volume":"9","author":"T Chen","year":"2022","unstructured":"Chen, T., Zhang, K., Giannakis, G. B., & Basar, T. (2022). Communication-efficient policy gradient methods for distributed reinforcement learning. IEEE Transactions on Control of Network Systems, 9(2), 917\u2013929. https:\/\/doi.org\/10.1109\/TCNS.2021.3078100","journal-title":"IEEE Transactions on Control of Network Systems"},{"key":"6823_CR3","unstructured":"Christoffersen, P. J. K., Haupt, A. A., & Hadfield-Menell, D. (2023). Get it in writing: Formal contracts mitigate social dilemmas in multi-agent RL. In: Proceedings of the 2023 international conference on autonomous agents and multiagent systems (pp. 448\u2013456)."},{"key":"6823_CR4","unstructured":"Chu, T., Chinchali, S., & Katti, S. (2020). Multi-agent reinforcement learning for networked system control. In International conference on learning representations (Vol. 1)."},{"issue":"3","key":"6823_CR5","doi-asserted-by":"publisher","first-page":"1086","DOI":"10.1109\/TITS.2019.2901791","volume":"21","author":"T Chu","year":"2020","unstructured":"Chu, T., Wang, J., Codec\u00e0, L., & Li, Z. (2020). Multi-agent deep reinforcement learning for large-scale traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 21(3), 1086\u20131095.","journal-title":"IEEE Transactions on Intelligent Transportation Systems"},{"key":"6823_CR6","doi-asserted-by":"crossref","unstructured":"Du, Y., Ma, C., Liu, Y., Lin, R., Dong, H., Wang, J., & Yang, Y. (2022). Scalable model-based policy optimization for decentralized networked systems. In International conference on intelligent robots and systems (IROS) (pp. 
9019\u20139026).","DOI":"10.1109\/IROS47612.2022.9982253"},{"key":"6823_CR7","unstructured":"Foerster, J., Nardell, N., Farquhar, G., Afouras, T., Torr, P. H. S., Kohli, P., & Whiteson, S. (2017). Stabilising experience replay for deep multi-agent reinforcement learning. In 34th international conference on machine learning, ICML 2017 (Vol. 3, pp. 1879\u20131888)."},{"issue":"4","key":"6823_CR8","doi-asserted-by":"publisher","first-page":"660","DOI":"10.1046\/j.0022-0477.2001.00609.x","volume":"89","author":"M Gersani","year":"2001","unstructured":"Gersani, M., Brown, J. S., O\u2019Brien, E. E., Maina, G. M., & Abramsky, Z. (2001). Tragedy of the commons as a result of root competition. Journal of Ecology, 89(4), 660\u2013669. https:\/\/doi.org\/10.1046\/j.0022-0477.2001.00609.x","journal-title":"Journal of Ecology"},{"issue":"7770","key":"6823_CR9","doi-asserted-by":"publisher","first-page":"524","DOI":"10.1038\/s41586-019-1488-5","volume":"572","author":"OP Hauser","year":"2019","unstructured":"Hauser, O. P., Hilbe, C., Chatterjee, K., & Nowak, M. A. (2019). Social dilemmas among unequals. Nature, 572(7770), 524\u2013527. https:\/\/doi.org\/10.1038\/s41586-019-1488-5","journal-title":"Nature"},{"key":"6823_CR10","unstructured":"He, H., Boyd-Graber, J., Kwok, K., & Daume, H. (2016). Opponent modeling in deep reinforcement learning. In 33rd international conference on machine learning, ICML 2016 (Vol. 4, pp. 2675\u20132684)."},{"key":"6823_CR11","doi-asserted-by":"publisher","unstructured":"Huang, X., & Zhou, S. (2022). Importance-aware message exchange and prediction for multi-agent reinforcement learning. In 2022 IEEE global communications conference, GLOBECOM 2022 - proceedings (pp. 6493\u20136498). https:\/\/doi.org\/10.1109\/GLOBECOM48099.2022.10001408","DOI":"10.1109\/GLOBECOM48099.2022.10001408"},{"key":"6823_CR12","first-page":"3326","volume":"31","author":"E Hughes","year":"2018","unstructured":"Hughes, E., Leibo, J. Z., Phillips, M., & Tuyls, K. (2018). 
Inequity aversion improves cooperation in intertemporal social dilemmas. Advances in Neural Information Processing Systems, 31, 3326\u20133336.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6823_CR13","unstructured":"Iqbal, S., & Sha, F. (2019). Actor-attention-critic for multi-agent reinforcement learning. In 36th international conference on machine learning, ICML 2019 2019-June (pp. 5261\u20135270)."},{"key":"6823_CR14","unstructured":"Jaques, N., Lazaridou, A., Hughes, E., Gulcehre, C., Ortega, P. A., Strouse, D. J., Leibo, J. Z., & Freitas, N. (2019). Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In 36th international conference on machine learning, ICML 2019 2019-June (pp. 5372\u20135381)."},{"key":"6823_CR15","first-page":"20469","volume":"35","author":"J Jiang","year":"2022","unstructured":"Jiang, J., & Lu, Z. (2022). I2Q: A fully decentralized Q-learning algorithm. Advances in Neural Information Processing Systems, 35, 20469\u201320481.","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"1","key":"6823_CR16","doi-asserted-by":"publisher","first-page":"90","DOI":"10.1109\/TNNLS.2021.3089834","volume":"34","author":"Y Jin","year":"2021","unstructured":"Jin, Y., Wei, S., Yuan, J., & Zhang, X. (2021). Hierarchical and stable multiagent reinforcement learning for cooperative navigation control. IEEE Transactions on Neural Networks and Learning Systems, 34(1), 90\u2013103. https:\/\/doi.org\/10.1109\/TNNLS.2021.3089834","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"6823_CR17","first-page":"1","volume-title":"Communication in multi-agent reinforcement learning: Intention sharing","author":"W Kim","year":"2021","unstructured":"Kim, W., Park, J., & Sung, Y. (2021). Communication in multi-agent reinforcement learning: Intention sharing (pp. 1\u201315). 
ICLR."},{"key":"6823_CR18","doi-asserted-by":"crossref","unstructured":"Kollock, P. (1998). SOCIAL DILEMMAS: The anatomy of cooperation. Technical report. www.sscnet.ucla.edu\/soc\/faculty\/kollock\/dilemmas","DOI":"10.1146\/annurev.soc.24.1.183"},{"issue":"1","key":"6823_CR19","doi-asserted-by":"publisher","first-page":"311","DOI":"10.1109\/TCCN.2021.3130993","volume":"8","author":"M Krouka","year":"2022","unstructured":"Krouka, M., Elgabli, A., Issaid, C. B., & Bennis, M. (2022). Communication-efficient and federated multi-agent reinforcement learning. IEEE Transactions on Cognitive Communications and Networking, 8(1), 311\u2013320. https:\/\/doi.org\/10.1109\/TCCN.2021.3130993","journal-title":"IEEE Transactions on Cognitive Communications and Networking"},{"key":"6823_CR20","unstructured":"Kuba, J. G., Chen, R., Wen, M., Wen, Y., Sun, F., Wang, J., & Yang, Y. (2022). Trust region policy optimisation in multi-agent reinforcement learning. In International conference on learning representations (pp. 1046)."},{"key":"6823_CR21","unstructured":"Leibo, J. Z., Zambaldi, V., Lanctot, M., Marecki, J., & Graepel, T. (2017). Multi-agent reinforcement learning in sequential social dilemmas. In Proceedings of the 16th international conference on autonomous agents and multiagent systems (pp. 464\u2013473)."},{"issue":"22","key":"6823_CR22","doi-asserted-by":"publisher","first-page":"22958","DOI":"10.1109\/JIOT.2022.3187067","volume":"9","author":"W Lei","year":"2022","unstructured":"Lei, W., Ye, Y., Xiao, M., Skoglund, M., & Han, Z. (2022). Adaptive stochastic ADMM for decentralized reinforcement learning in edge IoT. IEEE Internet of Things Journal, 9(22), 22958\u201322971. https:\/\/doi.org\/10.1109\/JIOT.2022.3187067","journal-title":"IEEE Internet of Things Journal"},{"key":"6823_CR23","first-page":"6380","volume":"30","author":"R Lowe","year":"2017","unstructured":"Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., & Mordatch, I. (2017). 
Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in Neural Information Processing Systems, 30, 6380\u20136391.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6823_CR24","unstructured":"Macy, M.W., & Flache, A. (2002). Learning dynamics in social dilemmas. Technical report. URL: www.pnas.org\/cgi\/doi\/10.1073\/pnas.092080099"},{"issue":"6870","key":"6823_CR25","doi-asserted-by":"publisher","first-page":"424","DOI":"10.1126\/science.1064748","volume":"415","author":"M Milinski","year":"2002","unstructured":"Milinski, M., Semmann, D., & Krambeck, H. (2002). Reputation helps solve the \u2018tragedy of the commons\u2019. Nature, 415(6870), 424\u2013426. https:\/\/doi.org\/10.1126\/science.1064748","journal-title":"Nature"},{"key":"6823_CR26","unstructured":"Omidshafiei, S., Pazis, J., Amato, C., How, J. P., & Vian, J. (2017). Deep decentralized multi-task multi-agent reinforcement learning under partial observability 10(5555\/3305890), 3305958."},{"key":"6823_CR27","doi-asserted-by":"publisher","DOI":"10.2307\/3146384","volume-title":"Governing the commons: The evolution of institutions for collective action","author":"E Ostrom","year":"1990","unstructured":"Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action (Vol. 32). Cambridge University Press. https:\/\/doi.org\/10.2307\/3146384"},{"issue":"NeurIPS","key":"6823_CR28","first-page":"12208","volume":"15","author":"B Peng","year":"2021","unstructured":"Peng, B., Rashid, T., Witt, C. A., Kamienny, P. A., Torr, P. H. S., B\u00f6hmer, W., & Whiteson, S. (2021). FACMAC: Factored multi-agent centralised policy gradients. 
Advances in Neural Information Processing Systems, 15(NeurIPS), 12208\u201312221.","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"14","key":"6823_CR29","doi-asserted-by":"publisher","first-page":"14014","DOI":"10.1109\/JIOT.2023.3240671","volume":"10","author":"Y Qiu","year":"2023","unstructured":"Qiu, Y., Jin, Y., Yu, L., Wang, J., Wang, Y., & Zhang, X. (2023). Improving sample efficiency of multi-agent reinforcement learning with non-expert policy for flocking control. IEEE Internet of Things Journal, 10(14), 14014\u201314027. https:\/\/doi.org\/10.1109\/JIOT.2023.3240671","journal-title":"IEEE Internet of Things Journal"},{"key":"6823_CR30","unstructured":"Schulman, J., Moritz, P., Levine, S., Jordan, M. I., & Abbeel, P. (2016). High-dimensional continuous control using generalized advantage estimation. In 4th international conference on learning representations, ICLR 2016 - conference track proceedings (pp. 1\u201314)."},{"key":"6823_CR31","unstructured":"Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347"},{"key":"6823_CR32","first-page":"1889","volume":"3","author":"J Schulman","year":"2015","unstructured":"Schulman, J., Levine, S., Moritz, P., Jordan, M., & Abbeel, P. (2015). Trust region policy optimization. 32nd International Conference on Machine Learning, ICML 2015, 3, 1889\u20131897.","journal-title":"32nd International Conference on Machine Learning, ICML 2015"},{"key":"6823_CR33","doi-asserted-by":"publisher","unstructured":"Sha, X., Zhang, J., & You, K. (2021). Policy evaluation for reinforcement learning over asynchronous multi-agent networks. In Chinese control conference, CCC 2021-July (pp. 5373\u20135378). https:\/\/doi.org\/10.23919\/CCC52363.2021.9550466","DOI":"10.23919\/CCC52363.2021.9550466"},{"key":"6823_CR34","unstructured":"Siedler, P.D., & Alpha, A. (2022). 
Dynamic collaborative multi-agent reinforcement learning communication for autonomous drone reforestation (NeurIPS 2022)."},{"key":"6823_CR35","doi-asserted-by":"publisher","unstructured":"Stankovic, M. S., Beko, M., & Stankovic, S. S. (2022). Convergent distributed actor-critic algorithm based on gradient temporal difference. In European signal processing conference 2022-August (pp. 2066\u20132070). https:\/\/doi.org\/10.23919\/eusipco55093.2022.9909762","DOI":"10.23919\/eusipco55093.2022.9909762"},{"key":"6823_CR36","doi-asserted-by":"publisher","unstructured":"Stankovic, M. S., Beko, M., & Stankovic, S. S. (2022). Distributed actor-critic learning using emphatic weightings. In 2022 8th international conference on control, decision and information technologies, CoDIT 2022 (pp. 1167\u20131172). https:\/\/doi.org\/10.1109\/CoDIT55151.2022.9804022","DOI":"10.1109\/CoDIT55151.2022.9804022"},{"key":"6823_CR37","unstructured":"Su, K., & Lu, Z. (2022). Decentralized policy optimization. arXiv preprint arXiv:2211.03032"},{"key":"6823_CR38","unstructured":"Sun, M., Devlin, S., Beck, J., Hofmann, K., & Whiteson, S. (2022). Trust region bounds for decentralized PPO under non-stationarity. In Proceedings of the 2023 international conference on autonomous agents and multiagent systems (pp. 5\u201313)."},{"key":"6823_CR39","doi-asserted-by":"publisher","unstructured":"Sun, C., Shen, M., & How, J.P. (2020). Scaling up multiagent reinforcement learning for robotic systems: Learn an adaptive sparse communication graph. In IEEE international conference on intelligent robots and systems (pp. 11755\u201311762). https:\/\/doi.org\/10.1109\/IROS45743.2020.9341303","DOI":"10.1109\/IROS45743.2020.9341303"},{"key":"6823_CR40","doi-asserted-by":"publisher","first-page":"1549","DOI":"10.1016\/j.ifacol.2020.12.2021","volume":"53","author":"W Suttle","year":"2020","unstructured":"Suttle, W., Yang, Z., Zhang, K., Wang, Z., Basar, T., & Liu, J. (2020). 
A multi-agent off-policy actor-critic algorithm for distributed reinforcement learning. IFAC-PapersOnLine, 53, 1549\u20131554. https:\/\/doi.org\/10.1016\/j.ifacol.2020.12.2021","journal-title":"IFAC-PapersOnLine"},{"key":"6823_CR41","doi-asserted-by":"crossref","unstructured":"Tennant, E., Hailes, S., & Musolesi, M. (2023). Modeling moral choices in social dilemmas with multi-agent reinforcement learning. arXiv preprint arXiv:2301.08491","DOI":"10.24963\/ijcai.2023\/36"},{"issue":"2","key":"6823_CR42","doi-asserted-by":"publisher","first-page":"125","DOI":"10.1016\/j.obhdp.2012.11.003","volume":"120","author":"PAM Van Lange","year":"2013","unstructured":"Van Lange, P. A. M., Joireman, J., Parks, C. D., & Van Dijk, E. (2013). The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120(2), 125\u2013141. https:\/\/doi.org\/10.1016\/j.obhdp.2012.11.003","journal-title":"Organizational Behavior and Human Decision Processes"},{"key":"6823_CR43","doi-asserted-by":"crossref","unstructured":"Wang, Y., Damani, M., Wang, P., Cao, Y., & Sartoretti, G. (2022). Distributed reinforcement learning for robot teams: A review.","DOI":"10.1007\/s43154-022-00091-8"},{"key":"6823_CR44","unstructured":"Wen, Y., Yang, Y., Luo, R., Wang, J., & Pan, W. (2019). Probabilistic recursive reasoning for multi-agent reinforcement learning. In 7th international conference on learning representations, ICLR 2019 (pp. 1\u201320)."},{"key":"6823_CR45","first-page":"26437","volume":"32","author":"Z Wu","year":"2021","unstructured":"Wu, Z., Yu, C., Ye, D., Zhang, J., Piao, H., & Zhuo, H. H. (2021). Coordinated proximal policy optimization. 
Advances in Neural Information Processing Systems, 32, 26437\u201326448.","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"1","key":"6823_CR46","doi-asserted-by":"publisher","first-page":"931","DOI":"10.1109\/TVT.2021.3129504","volume":"71","author":"Z Xia","year":"2022","unstructured":"Xia, Z., Du, J., Wang, J., Jiang, C., Ren, Y., Li, G., & Han, Z. (2022). Multi-agent reinforcement learning aided intelligent UAV swarm for target tracking. IEEE Transactions on Vehicular Technology, 71(1), 931\u2013945. https:\/\/doi.org\/10.1109\/TVT.2021.3129504","journal-title":"IEEE Transactions on Vehicular Technology"},{"key":"6823_CR47","unstructured":"Yi, Y., Li, G., Wang, Y., & Lu, Z. (2022). Learning to share in multi-agent reinforcement learning. In ICLR 2022 Workshop on Gamification and Multiagent Solutions."},{"key":"6823_CR48","doi-asserted-by":"publisher","unstructured":"Zhang, Y., & Zavlanos, M. M. (2019). Distributed off-policy actor-critic reinforcement learning with policy consensus. In Proceedings of the IEEE conference on decision and control 2019-December (Cdc) (pp. 4674\u20134679). https:\/\/doi.org\/10.1109\/CDC40024.2019.9029969","DOI":"10.1109\/CDC40024.2019.9029969"},{"key":"6823_CR49","doi-asserted-by":"publisher","unstructured":"Zhang, K., Yang, Z., & Basar, T. (2018). Networked multi-agent reinforcement learning in continuous spaces. In Proceedings of the IEEE Conference on Decision and Control 2018-December (Cdc) (pp. 2771\u20132776). https:\/\/doi.org\/10.1109\/CDC.2018.8619581","DOI":"10.1109\/CDC.2018.8619581"},{"key":"6823_CR50","doi-asserted-by":"crossref","unstructured":"Zhang, K., Yang, Z., Liu, H., Zhang, T., & Ba\u015far, T. (2018). Fully decentralized multi-agent reinforcement learning with networked agents. In 35th international conference on machine learning, ICML 2018 13 (pp. 
9340\u20139371)","DOI":"10.1109\/CDC.2018.8619581"},{"issue":"2","key":"6823_CR51","doi-asserted-by":"publisher","first-page":"1049","DOI":"10.1016\/j.ifacol.2020.12.1290","volume":"53","author":"K Zhang","year":"2020","unstructured":"Zhang, K., Yang, Z., Liu, H., Zhang, T., & Basar, T. (2020). Finite-sample analysis for decentralized cooperative multi-agent reinforcement learning from batch data. IFAC-PapersOnLine, 53(2), 1049\u20131056. https:\/\/doi.org\/10.1016\/j.ifacol.2020.12.1290","journal-title":"IFAC-PapersOnLine"},{"issue":"4","key":"6823_CR52","doi-asserted-by":"publisher","first-page":"362","DOI":"10.1007\/s11768-020-00007-x","volume":"18","author":"X Zhao","year":"2020","unstructured":"Zhao, X., Yi, P., & Li, L. (2020). Distributed policy evaluation via inexact ADMM in multi-agent reinforcement learning. Control Theory Technol, 18(4), 362\u2013378. https:\/\/doi.org\/10.1007\/s11768-020-00007-x","journal-title":"Control Theory Technol"},{"key":"6823_CR53","unstructured":"Zheng, Y., Meng, Z., Hao, J., Zhang, Z., Yang, T., & Fan, C. (2018). A deep Bayesian policy reuse approach against non-stationary agents. In Advances in neural information processing systems 2018-December (NeurIPS) (pp. 
954\u2013964)."}],"container-title":["Machine Learning"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10994-025-06823-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10994-025-06823-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10994-025-06823-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,7]],"date-time":"2025-09-07T11:17:16Z","timestamp":1757243836000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10994-025-06823-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,15]]},"references-count":53,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2025,8]]}},"alternative-id":["6823"],"URL":"https:\/\/doi.org\/10.1007\/s10994-025-06823-z","relation":{},"ISSN":["0885-6125","1573-0565"],"issn-type":[{"value":"0885-6125","type":"print"},{"value":"1573-0565","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,15]]},"assertion":[{"value":"11 April 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 May 2025","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 June 2025","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 July 2025","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"190"}}