{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T20:52:05Z","timestamp":1768337525064,"version":"3.49.0"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"1","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2026,1,31]]},"abstract":"<jats:p>As deep learning\u2013based AI proliferates, model theft and plagiarism pose increasing Intellectual Property (IP) risks. However, watermarking alters model weights and can degrade performance, while fingerprinting often merely verifies uniqueness or requires heavy computation. In this article, we propose a Neural Network Fingerprinting-Based Model Authentication Code (NNFMAC) scheme that verifies both model uniqueness and ownership without affecting performance. NNFMAC extracts key weights from a trained model, applies a median-based method to generate a unique binary fingerprint, and uses this fingerprint as a codebook to encode ownership information via a newly designed index-based function with expansion, producing reliable authentication codes. This non-intrusive approach integrates fingerprinting for uniqueness verification and authentication coding for ownership verification, delivering comprehensive model IP protection while preserving the model\u2019s original performance. Extensive experiments demonstrate that NNFMAC preserves model accuracy without additional training overhead, unlike other watermarking schemes that degrade accuracy by 0.36\u20131.53%. 
It achieves bit error rates of 0.12 under weight perturbation, 0.03 under fine-tuning, 0.08 under pruning, and 0.09 under weight shifting attacks, which are substantially lower than the 0.51, 0.49, 0.46, and 0.22 reported in prior work, while consistently outperforming state-of-the-art schemes in effectiveness, efficiency, and robustness.<\/jats:p>","DOI":"10.1145\/3778121","type":"journal-article","created":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T15:05:52Z","timestamp":1764169552000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["NNFMAC: A Neural Network Fingerprinting-Based Model Authentication Code Scheme"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7513-4009","authenticated-orcid":false,"given":"Haiyu","family":"Deng","sequence":"first","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9439-6437","authenticated-orcid":false,"given":"Xu","family":"Wang","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6111-1607","authenticated-orcid":false,"given":"Guangsheng","family":"Yu","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0780-4637","authenticated-orcid":false,"given":"Wei","family":"Ni","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1603-9375","authenticated-orcid":false,"given":"Ying","family":"He","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0467-1210","authenticated-orcid":false,"given":"Tanzeela","family":"Altaf","sequence":"additional","affiliation":[{"name":"University 
of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7001-6305","authenticated-orcid":false,"given":"Ren Ping","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]}],"member":"320","published-online":{"date-parts":[[2026,1,13]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.48550\/arxiv.1802.04633"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2021.729663"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/3433210.3437526"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSC.2024.3376259"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3323873.3325042"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3572777"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3663363"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.48550\/arxiv.1412.0233"},{"key":"e_1_3_1_10_2","unstructured":"Haiyu Deng Yanna Jiang Guangsheng Yu Qin Wang Xu Wang Baihe Ma Wei Ni and Ren Ping Liu. 2025. PoLO: Proof-of-learning and proof-of-ownership at once with chained watermarking. arXiv:2505.12296. Retrieved from https:\/\/arxiv.org\/abs\/2505.12296"},{"key":"e_1_3_1_11_2","unstructured":"Alexey Dosovitskiy. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv:2010.11929. 
Retrieved from https:\/\/arxiv.org\/abs\/2010.11929"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.48550\/arxiv.1506.02626"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3664647.3681544"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3659099"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3437880.3460402"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3576915.3623120"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2025.3583109"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/3425605"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP46214.2022.9833693"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2023.3242737"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580502"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539257"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-88418-5_26"},{"key":"e_1_3_1_25_2","first-page":"5287","volume-title":"Proceedings of the 33rd USENIX Security Symposium (USENIX Security","volume":"24","author":"Pegoraro Alessandro","year":"2024","unstructured":"Alessandro Pegoraro, Carlotta Segna, Kavita Kumari, and Ahmad-Reza Sadeghi. 2024. DeepEclipse: How to break white-box DNN-watermarking schemes. In Proceedings of the 33rd USENIX Security Symposium (USENIX Security 24), 5287\u20135304."},{"key":"e_1_3_1_26_2","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. 
Retrieved from https:\/\/arxiv.org\/abs\/1409.1556"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3078971.3078974"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2022.3153268"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.2352\/ISSN.2470-1173.2020.4.MWSF-022"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSC.2025.3586091"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8682202"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3450000"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3439723"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2025.107951"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAI.2024.3388389"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589334.3645489"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2023.3329249"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3583780.3614889"}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and 
Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3778121","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T14:19:41Z","timestamp":1768313981000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3778121"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,13]]},"references-count":37,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,1,31]]}},"alternative-id":["10.1145\/3778121"],"URL":"https:\/\/doi.org\/10.1145\/3778121","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"value":"1551-6857","type":"print"},{"value":"1551-6865","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,1,13]]},"assertion":[{"value":"2025-03-22","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-10-12","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-01-13","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}