{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T20:12:14Z","timestamp":1774642334378,"version":"3.50.1"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643684369","type":"print"},{"value":"9781643684376","type":"electronic"}],"license":[{"start":{"date-parts":[[2023,9,28]],"date-time":"2023-09-28T00:00:00Z","timestamp":1695859200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023,9,28]]},"abstract":"<jats:p>In neural architecture search (NAS), training every sampled architecture is very time-consuming and should be avoided. Weight-sharing is a promising solution to speed up the evaluation process. However, training the supernetwork incurs many discrepancies between the actual ranking and the predicted one. Additionally, efficient deep-learning engineering processes require incorporating realistic hardware-performance metrics into the NAS evaluation process, also known as hardware-aware NAS (HW-NAS). In HW-NAS, estimating task-specific performance and hardware efficiency are both required. This paper proposes a supernetwork training methodology that preserves the Pareto ranking between its different subnetworks resulting in more efficient and accurate neural networks for a variety of hardware platforms. The results show a 97% near Pareto front approximation in less than 2 GPU days of search, which provides 2x speed up compared to state-of-the-art methods. We validate our methodology on NAS-Bench-201, DARTS, and ImageNet. Our optimal model achieves 77.2% accuracy (+1.7% compared to baseline) with an inference time of 3.68ms on Edge GPU for ImageNet, which yields a 2.3x speedup. Training implementation can be found: https:\/\/github.com\/IHIaadj\/PRP-NAS.<\/jats:p>","DOI":"10.3233\/faia230276","type":"book-chapter","created":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T09:00:49Z","timestamp":1695978049000},"source":"Crossref","is-referenced-by-count":2,"title":["Pareto Rank-Preserving Supernetwork for Hardware-Aware Neural Architecture Search"],"prefix":"10.3233","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5259-0749","authenticated-orcid":false,"given":"Hadjer","family":"Benmeziane","sequence":"first","affiliation":[{"name":"Univ. Polytechnique Hauts-de-France, CNRS, UMR 8201 - LAMIH, F-59313 Valenciennes, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1967-8749","authenticated-orcid":false,"given":"Kaoutar","family":"El Maghraoui","sequence":"additional","affiliation":[{"name":"IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA"}]},{"given":"Hamza","family":"Ouarnoughi","sequence":"additional","affiliation":[{"name":"Univ. Polytechnique Hauts-de-France, CNRS, UMR 8201 - LAMIH, F-59313 Valenciennes, France"}]},{"given":"Smail","family":"Niar","sequence":"additional","affiliation":[{"name":"Univ. 
Polytechnique Hauts-de-France, CNRS, UMR 8201 - LAMIH, F-59313 Valenciennes, France"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2023"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA230276","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T09:00:51Z","timestamp":1695978051000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA230276"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,28]]},"ISBN":["9781643684369","9781643684376"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia230276","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,28]]}}}
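
A record in this envelope format ({"status", "message-type", "message"}) can be retrieved for any DOI from Crossref's public REST API at https://api.crossref.org/works/{DOI}. Below is a minimal Python sketch of fetching and unpacking this particular record; it assumes the third-party requests package is installed, and the mailto address in the User-Agent is a placeholder you should replace with your own, per Crossref's "polite pool" convention.

```python
# Minimal sketch: fetch a Crossref "work" record via the public REST API.
# The User-Agent mailto is a placeholder (Crossref's polite-pool convention).
import requests

CROSSREF_API = "https://api.crossref.org/works/"
DOI = "10.3233/faia230276"  # the chapter described in the record above

def fetch_work(doi: str) -> dict:
    """Return the `message` object of a Crossref work record."""
    resp = requests.get(
        CROSSREF_API + doi,
        headers={"User-Agent": "example-client/0.1 (mailto:you@example.org)"},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    # Sanity-check the envelope before unwrapping it.
    assert payload["status"] == "ok" and payload["message-type"] == "work"
    return payload["message"]

if __name__ == "__main__":
    work = fetch_work(DOI)
    print(work["title"][0])                    # chapter title
    print(", ".join(a["family"] for a in work["author"]))
    print(work["resource"]["primary"]["URL"])  # publisher landing page
```

Note that "title" and "container-title" are arrays even when they hold a single value, and optional fields such as "ORCID" are simply absent for some authors, so production code should use .get() rather than direct indexing.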