{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T16:17:36Z","timestamp":1774282656591,"version":"3.50.1"},"reference-count":19,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,10,3]],"date-time":"2023-10-03T00:00:00Z","timestamp":1696291200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,10,3]],"date-time":"2023-10-03T00:00:00Z","timestamp":1696291200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"European Union H2020","award":["769288"],"award-info":[{"award-number":["769288"]}]},{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia","doi-asserted-by":"publisher","award":["00326"],"award-info":[{"award-number":["00326"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Sci Rep"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>This paper proposes using reinforcement learning (RL) to schedule maintenance tasks, which can significantly reduce direct operating costs for airlines. The approach consists of a static algorithm for long-term scheduling and an adaptive algorithm for rescheduling based on new maintenance information. To assess the performance of both approaches, three key performance indicators (KPIs) are defined: Ground Time, representing the hours an aircraft spends on the ground; Time Slack, measuring the proximity of tasks to their due dates; and Change Score, quantifying the similarity level between initial and adapted maintenance plans when new information surfaces. 
The results demonstrate the efficacy of RL in producing efficient maintenance plans, with the algorithms complementing each other to form a solid foundation for routine tasks and real-time responsiveness to new information. While the static algorithm performs slightly better in terms of Ground Time and Time Slack, the adaptive algorithm excels overwhelmingly in terms of Change Score, offering greater flexibility in handling new maintenance information. The proposed RL-based approach can improve the efficiency of aircraft maintenance and has the potential for further research in this area.<\/jats:p>","DOI":"10.1038\/s41598-023-41169-3","type":"journal-article","created":{"date-parts":[[2023,10,3]],"date-time":"2023-10-03T10:05:29Z","timestamp":1696327529000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["Adaptive reinforcement learning for task scheduling in aircraft maintenance"],"prefix":"10.1038","volume":"13","author":[{"given":"Catarina","family":"Silva","sequence":"first","affiliation":[]},{"given":"Pedro","family":"Andrade","sequence":"additional","affiliation":[]},{"given":"Bernardete","family":"Ribeiro","sequence":"additional","affiliation":[]},{"given":"Bruno","family":"F. Santos","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,10,3]]},"reference":[{"key":"41169_CR1","unstructured":"IATA: Airline Maintenance Cost Executive Commentary. https:\/\/www.iata.org\/contentassets\/bf8ca67c8bcd4358b3d004b0d6d0916f\/fy2020-mctg-report_public.pdf."},{"issue":"1","key":"41169_CR2","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1016\/S0965-8564(02)00004-6","volume":"37","author":"C Sriram","year":"2003","unstructured":"Sriram, C. & Haghani, A. An optimization model for aircraft maintenance scheduling and re-assignment. Transp. Res. Part A 37(1), 29\u201348. 
https:\/\/doi.org\/10.1016\/S0965-8564(02)00004-6 (2003).","journal-title":"Transp. Res. Part A"},{"issue":"1","key":"41169_CR3","doi-asserted-by":"publisher","first-page":"295","DOI":"10.1007\/s10479-011-0885-4","volume":"186","author":"N Safaei","year":"2011","unstructured":"Safaei, N., Banjevic, D. & Jardine, A. Workforce-constrained maintenance scheduling for aircraft fleet: A case study. Ann. Oper. Res. 186(1), 295\u2013316. https:\/\/doi.org\/10.1007\/s10479-011-0885-4 (2011).","journal-title":"Ann. Oper. Res."},{"key":"41169_CR4","doi-asserted-by":"publisher","unstructured":"Li, H., Zuo, H., Lei, D., Liang, K. & Lu, T. Optimal combination of aircraft maintenance tasks by a novel simplex optimization method. Math. Probl. Eng. 2015. https:\/\/doi.org\/10.1155\/2015\/169310 (2015).","DOI":"10.1155\/2015\/169310"},{"issue":"1","key":"41169_CR5","doi-asserted-by":"publisher","first-page":"365","DOI":"10.1016\/j.ejor.2021.01.027","volume":"294","author":"M Witteman","year":"2021","unstructured":"Witteman, M., Deng, Q. & Santos, B. F. A bin packing approach to solve the aircraft maintenance task allocation problem. Eur. J. Oper. Res. 294(1), 365\u2013376. https:\/\/doi.org\/10.1016\/j.ejor.2021.01.027 (2021).","journal-title":"Eur. J. Oper. Res."},{"key":"41169_CR6","doi-asserted-by":"publisher","unstructured":"Knowles, M., Baglee, D. & Wermter, S. Reinforcement Learning for Scheduling of Maintenance. 409\u2013422. https:\/\/doi.org\/10.1007\/978-0-85729-130-1_31 (2010).","DOI":"10.1007\/978-0-85729-130-1_31"},{"issue":"2","key":"41169_CR7","doi-asserted-by":"publisher","first-page":"325","DOI":"10.1007\/s10845-013-0864-5","volume":"27","author":"X Wang","year":"2016","unstructured":"Wang, X. & Qi, C. Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system. J. Intell. Manuf. 27(2), 325\u2013333. https:\/\/doi.org\/10.1007\/s10845-013-0864-5 (2016).","journal-title":"J. Intell. 
Manuf."},{"key":"41169_CR8","doi-asserted-by":"publisher","DOI":"10.1016\/j.cie.2020.107056","volume":"153","author":"Y Hu","year":"2021","unstructured":"Hu, Y., Miao, X., Zhang, J., Liu, J. & Pan, E. Reinforcement learning-driven maintenance strategy: A novel solution for long-term aircraft maintenance decision optimization. Comput. Ind. Eng. 153, 107056. https:\/\/doi.org\/10.1016\/j.cie.2020.107056 (2021).","journal-title":"Comput. Ind. Eng."},{"key":"41169_CR9","doi-asserted-by":"publisher","DOI":"10.3390\/aerospace8040113","author":"P Andrade","year":"2021","unstructured":"Andrade, P., Silva, C., Ribeiro, B. & Santos, B. F. Aircraft maintenance check scheduling using reinforcement learning. Aerospace. https:\/\/doi.org\/10.3390\/aerospace8040113 (2021).","journal-title":"Aerospace"},{"key":"41169_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.ress.2022.108908","volume":"230","author":"J Lee","year":"2023","unstructured":"Lee, J. & Mitici, M. Deep reinforcement learning for predictive aircraft maintenance using probabilistic remaining-useful-life prognostics. Reliabil. Eng. Syst. Saf. 230, 108908. https:\/\/doi.org\/10.1016\/j.ress.2022.108908 (2023).","journal-title":"Reliabil. Eng. Syst. Saf."},{"key":"41169_CR11","doi-asserted-by":"publisher","first-page":"367","DOI":"10.1007\/s40747-022-00784-9","volume":"9","author":"Y Zhang","year":"2023","unstructured":"Zhang, Y., Li, C., Su, X., Cui, R. & Wan, B. A baseline-reactive scheduling method for carrier-based aircraft maintenance tasks. Complex Intell. Syst. 9, 367\u2013397. https:\/\/doi.org\/10.1007\/s40747-022-00784-9 (2023).","journal-title":"Complex Intell. Syst."},{"key":"41169_CR12","doi-asserted-by":"publisher","unstructured":"Xue, B., Qiu, H., Niu, B. & Yan, X. Improved aircraft maintenance technician scheduling with task splitting strategy based on particle swarm optimization. In Advances in Swarm Intelligence. ICSI 2022. Lecture Notes in Computer Science. Vol. 13344. 
https:\/\/doi.org\/10.1007\/978-3-031-09677-8_18 (Springer, 2022).","DOI":"10.1007\/978-3-031-09677-8_18"},{"key":"41169_CR13","doi-asserted-by":"publisher","first-page":"17854","DOI":"10.1109\/ACCESS.2021.3053714","volume":"9","author":"H Shahmoradi-Moghadam","year":"2021","unstructured":"Shahmoradi-Moghadam, H., Safaei, N. & Sadjadi, S. J. Robust maintenance scheduling of aircraft fleet: A hybrid simulation-optimization approach. IEEE Access 9, 17854\u201317865. https:\/\/doi.org\/10.1109\/ACCESS.2021.3053714 (2021).","journal-title":"IEEE Access"},{"key":"41169_CR14","volume-title":"Reinforcement Learning: An Introduction","author":"RS Sutton","year":"2018","unstructured":"Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT Press, 2018)."},{"key":"41169_CR15","doi-asserted-by":"publisher","first-page":"279","DOI":"10.1007\/BF00992698","volume":"8","author":"C Watkins","year":"1992","unstructured":"Watkins, C. & Dayan, P. Q-learning. Mach. Learn. 8, 279\u2013292. https:\/\/doi.org\/10.1007\/BF00992698 (1992).","journal-title":"Mach. Learn."},{"key":"41169_CR16","unstructured":"Van\u00a0Hasselt, H. Double Q-learning. In Advances in Neural Information Processing Systems. Vol. 23 (2010)"},{"key":"41169_CR17","doi-asserted-by":"crossref","unstructured":"Van\u00a0Hasselt, H., Guez, A. & Silver, D. Deep reinforcement learning with double Q-learning. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. Vol. 30 (2016)","DOI":"10.1609\/aaai.v30i1.10295"},{"issue":"2","key":"41169_CR18","doi-asserted-by":"publisher","first-page":"256","DOI":"10.1016\/j.ejor.2019.08.025","volume":"281","author":"Q Deng","year":"2020","unstructured":"Deng, Q., Santos, B. F. & Curran, R. A practical dynamic programming based methodology for aircraft maintenance check scheduling optimization. Eur. J. Oper. Res. 281(2), 256\u2013273. https:\/\/doi.org\/10.1016\/j.ejor.2019.08.025 (2020).","journal-title":"Eur. J. Oper. 
Res."},{"key":"41169_CR19","unstructured":"EASA. Certification Specifications and Guidance Material for Master Minimum Equipment List (CS-MMEL). https:\/\/www.easa.europa.eu\/document-library\/certification-specifications\/cs-mmel-issue-3."}],"container-title":["Scientific Reports"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41598-023-41169-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41598-023-41169-3","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41598-023-41169-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,18]],"date-time":"2023-11-18T09:24:45Z","timestamp":1700299485000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41598-023-41169-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,3]]},"references-count":19,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["41169"],"URL":"https:\/\/doi.org\/10.1038\/s41598-023-41169-3","relation":{},"ISSN":["2045-2322"],"issn-type":[{"value":"2045-2322","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,3]]},"assertion":[{"value":"1 May 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 August 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 October 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing 
interests"}}],"article-number":"16605"}}