{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,28]],"date-time":"2026-01-28T02:06:29Z","timestamp":1769565989709,"version":"3.49.0"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686448","type":"electronic"}],"license":[{"start":{"date-parts":[[2026,1,27]],"date-time":"2026-01-27T00:00:00Z","timestamp":1769472000000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2026,1,27]]},"abstract":"<jats:p>With the growing scale and complexity of batch workflow tasks in heterogeneous computing environments, traditional metaheuristic scheduling methods struggle with high computational overhead. Deep learning-based approaches, while promising, remain underexplored due to the irregularity and structural complexity of workflow data. This paper proposes a deep neural scheduling algorithm based on the transformer architecture, leveraging its self-attention mechanism to capture cross-task dependencies across multiple workflows. The scheduling problem is modeled as a sequence-to-sequence inference process over time steps, enabling progressive decision-making. Unlike conventional methods that require manual decomposition, the proposed model processes complete graph-structured workflow data directly, preserving topological information. To address real-world imbalances in task size distribution, a curriculum learning strategy is introduced for hierarchical training, enhancing adaptability to heterogeneous workflow clusters. Experimental results show that the proposed method significantly reduces scheduling complexity while achieving comparable performance to classical metaheuristics in metrics such as resource utilization. Moreover, it drastically reduces inference time compared to traditional approaches such as PSO, making it highly suitable for real-time scheduling scenarios. Its graph-aware sequential inference further improves scheduling stability under complex task dependencies, offering a novel and efficient solution for large-scale workflow scheduling.<\/jats:p>","DOI":"10.3233\/faia251654","type":"book-chapter","created":{"date-parts":[[2026,1,27]],"date-time":"2026-01-27T13:18:58Z","timestamp":1769519938000},"source":"Crossref","is-referenced-by-count":0,"title":["Curriculum-Trained Transformer for Efficient Multi-Workflow Scheduling"],"prefix":"10.3233","author":[{"given":"Hongming","family":"Yang","sequence":"first","affiliation":[{"name":"School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China"}]},{"given":"Tao","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Automation, Guangdong University of Technology, Guangzhou 510006, China"}]},{"given":"Lianglun","family":"Cheng","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","Fuzzy Systems and Data Mining XI"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA251654","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,27]],"date-time":"2026-01-27T13:18:58Z","timestamp":1769519938000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA251654"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,27]]},"ISBN":["9781643686448"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia251654","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,1,27]]}}}