{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:11:42Z","timestamp":1760033502796,"version":"build-2065373602"},"reference-count":67,"publisher":"Association for Computing Machinery (ACM)","issue":"OOPSLA2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Program. Lang."],"published-print":{"date-parts":[[2025,10,9]]},"abstract":"<jats:p>The wide adoption of deep learning in safety-critical domains has driven the need for formally verifying the robustness of neural networks. A critical challenge in this endeavor lies in addressing the inherent non-linearity of activation functions. The convex hull of the activation function has emerged as a promising solution, as it effectively tightens variable ranges and provides multi-neuron constraints, which together enhance verification precision. Given that constructing exact convex hulls is computationally expensive and even infeasible in most cases, existing research has focused on over-approximating them. Several ad-hoc methods have been devised for specific functions such as ReLU and Sigmoid. Nonetheless, there remains a substantial gap in developing broadly applicable approaches for general activation functions.<\/jats:p>\n          <jats:p>In this work, we propose WraAct, an approach to efficiently constructing tight over-approximations for activation function hulls. Its core idea is to introduce linear constraints to smooth out the fluctuations in the target function, by leveraging double-linear-piece (DLP) functions to simplify the local geometry. In this way, the problem is reduced to over-approximating DLP functions, which can be efficiently handled. We evaluate WraAct against SBLM+PDDM, the state-of-the-art (SOTA) multi-neuron over-approximation method based on decomposing functions into segments. 
WraAct outperforms it on commonly used functions like Sigmoid, Tanh, and MaxPool, offering superior efficiency (average 400X faster) and precision (average 150X improvement) while constructing fewer constraints (average 50% reduction). It can complete the computation of up to 8 input dimensions in 10 seconds. We also integrate WraAct into a neural network verifier to evaluate its capability in verification tasks. On 100 benchmark samples, it raises the number of verified samples from under 10 with single-neuron verification to over 40, and outperforms the multi-neuron verifier PRIMA with up to 20 additional verified samples. On large networks like ResNets with 22k neurons, it can complete the verification of one sample within one minute.<\/jats:p>","DOI":"10.1145\/3763086","type":"journal-article","created":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T08:49:50Z","timestamp":1759999790000},"page":"1007-1033","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Convex Hull Approximation for Activation Functions"],"prefix":"10.1145","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2392-3751","authenticated-orcid":false,"given":"Zhongkui","family":"Ma","sequence":"first","affiliation":[{"name":"The University of Queensland, Brisbane, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6536-2948","authenticated-orcid":false,"given":"Zihan","family":"Wang","sequence":"additional","affiliation":[{"name":"The University of Queensland, Brisbane, Australia"},{"name":"CSIRO's Data61, Brisbane, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6390-9890","authenticated-orcid":false,"given":"Guangdong","family":"Bai","sequence":"additional","affiliation":[{"name":"The University of Queensland, Brisbane, 
Australia"}]}],"member":"320","published-online":{"date-parts":[[2025,10,9]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1016\/0893-9659(91)90141-h"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/bf02293050"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1137\/0109008"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/235815.235821"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33013240"},{"key":"e_1_2_2_6_1","volume-title":"Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research (JMLR), 21","author":"Bunel Rudy","year":"2020","unstructured":"Rudy Bunel, P. Mudigonda, Ilker Turkaslan, Philip Torr, Jingyue Lu, and Pushmeet Kohli. 2020. Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research (JMLR), 21 (2020)."},{"key":"e_1_2_2_7_1","volume-title":"Pushmeet Kohli, and M. Pawan Kumar.","author":"Bunel Rudy","year":"2018","unstructured":"Rudy Bunel, Ilker Turkaslan, Philip HS Torr, Pushmeet Kohli, and M. Pawan Kumar. 2018. A unified view of piecewise linear neural network verification. Advances in Neural Information Processing Systems (NeurIPS), 4795\u20134804."},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/sp40001.2021.00076"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/sp.2017.49"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/321556.321564"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-68167-2_18"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","unstructured":"Djork-Arn\u00e9 Clevert Thomas Unterthiner and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (ELUs). 
https:\/\/doi.org\/10.48550\/arXiv.1511.07289 10.48550\/arXiv.1511.07289","DOI":"10.48550\/arXiv.1511.07289"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/msp.2012.2211477"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-77935-5_9"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-68167-2_19"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3658644.3690292"},{"key":"e_1_2_2_17_1","volume-title":"The International Conference on Learning Representations (ICLR).","author":"Ferrari Claudio","year":"2021","unstructured":"Claudio Ferrari, Mark Niklas Mueller, Nikola Jovanovi\u0107, and Martin Vechev. 2021. Complete verification via multi-neuron relaxation guided branch-and-bound. In The International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_2_18_1","volume-title":"Franco-Japanese and Franco-Chinese Conference on Combinatorics and Computer Science. Springer, 91\u2013111","author":"Fukuda Komei","year":"1995","unstructured":"Komei Fukuda and Alain Prodon. 1995. Double description method revisited. In Franco-Japanese and Franco-Chinese Conference on Combinatorics and Computer Science. Springer, 91\u2013111."},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/sp.2018.00058"},{"key":"e_1_2_2_20_1","volume-title":"Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS). JMLR Workshop and Conference Proceedings, 315\u2013323","author":"Glorot Xavier","year":"2011","unstructured":"Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS). JMLR Workshop and Conference Proceedings, 315\u2013323."},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","unstructured":"Ian J. Goodfellow Jonathon Shlens and Christian Szegedy. 2015. 
Explaining and harnessing adversarial examples. https:\/\/doi.org\/10.48550\/arXiv.1412.6572 10.48550\/arXiv.1412.6572","DOI":"10.48550\/arXiv.1412.6572"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","unstructured":"Dan Hendrycks and Kevin Gimpel. 2023. Gaussian error linear units (GELUs). https:\/\/doi.org\/10.48550\/arXiv.1606.08415 10.48550\/arXiv.1606.08415","DOI":"10.48550\/arXiv.1606.08415"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.3233\/FAIA200385"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2012.2205597"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00140"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","unstructured":"Andrew G. Howard Menglong Zhu Bo Chen Dmitry Kalenichenko Weijun Wang Tobias Weyand Marco Andreetto and Hartwig Adam. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. https:\/\/doi.org\/10.48550\/arXiv.1704.04861 10.48550\/arXiv.1704.04861","DOI":"10.48550\/arXiv.1704.04861"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-63387-9_5"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-25540-4_26"},{"key":"e_1_2_2_29_1","volume-title":"Proceedings of the International Conference on Machine Learning (ICML). PMLR, 3468\u20133477","author":"Ko Ching-Yun","year":"2019","unstructured":"Ching-Yun Ko, Zhaoyang Lyu, Lily Weng, Luca Daniel, Ngai Wong, and Dahua Lin. 2019. POPQORN: Quantifying robustness of recurrent neural networks. In Proceedings of the International Conference on Machine Learning (ICML). PMLR, 3468\u20133477."},{"key":"e_1_2_2_30_1","unstructured":"Alex Krizhevsky and Geoffrey Hinton. 2009. Learning multiple layers of features from tiny images."},{"key":"e_1_2_2_31_1","volume-title":"Hinton","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. 
Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NeurIPS), 25 (2012)."},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/sp46215.2023.10179303"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","unstructured":"Alessio Lomuscio and Lalit Maganti. 2017. An approach to reachability analysis for feed-forward ReLU neural networks. https:\/\/doi.org\/10.48550\/arXiv.1706.07351 10.48550\/arXiv.1706.07351","DOI":"10.48550\/arXiv.1706.07351"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-99-7584-6_17"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","unstructured":"Zhongkui Ma. 2025. Trusted-System-Lab\/WraAct: Reproduction Package for Article \"Convex Hull Approximation for Activation Functions\" (v1.0.0). https:\/\/doi.org\/10.5281\/zenodo.17007119 10.5281\/zenodo.17007119","DOI":"10.5281\/zenodo.17007119"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-99-7584-6_7"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3632917"},{"volume-title":"Proceedings of the International Conference on Machine Learning (ICML). 30","author":"Maas Andrew L.","key":"e_1_2_2_38_1","unstructured":"Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the International Conference on Machine Learning (ICML). 30, PMLR, 3."},{"key":"e_1_2_2_39_1","article-title":"Adversarial robustness of deep neural networks: A survey from a formal verification perspective","author":"Meng Mark Huasong","year":"2022","unstructured":"Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, and Jin Song Dong. 2022. Adversarial robustness of deep neural networks: A survey from a formal verification perspective. 
IEEE Transactions on Dependable and Secure Computing.","journal-title":"IEEE Transactions on Dependable and Secure Computing."},{"key":"e_1_2_2_40_1","volume-title":"Thrall","author":"Motzkin Theodore S.","year":"1953","unstructured":"Theodore S. Motzkin, Howard Raiffa, Gerald L. Thompson, and Robert M. Thrall. 1953. The Double Description Method. In Contributions to the Theory of Games. 2, 51\u201374."},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3498704"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/CDC45484.2021.9683286"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.automatica.2023.111233"},{"volume-title":"Computer Safety, Reliability, and Security: 40th International Conference (SAFECOMP)","author":"Paterson Colin","key":"e_1_2_2_44_1","unstructured":"Colin Paterson, Haoze Wu, John Grese, Radu Calinescu, Corina S. P\u0103s\u0103reanu, and Clark Barrett. 2021. DeepCert: Verification of contextually relevant robustness for neural network image classifiers. In Computer Safety, Reliability, and Security: 40th International Conference (SAFECOMP). Springer, 3\u201317."},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-99524-9_19"},{"volume-title":"Towards a general formulation of lazy constraints. Ph. D. Dissertation","author":"Pearce Robin Harris","key":"e_1_2_2_46_1","unstructured":"Robin Harris Pearce. 2019. Towards a general formulation of lazy constraints. Ph. D. Dissertation. The University of Queensland."},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-81685-8_10"},{"key":"e_1_2_2_48_1","unstructured":"Hadi Salman Greg Yang Huan Zhang Cho-Jui Hsieh and Pengchuan Zhang. 2019. A convex relaxation barrier to tight robustness verification of neural networks. 
Advances in Neural Information Processing Systems (NeurIPS) 9835\u20139846."},{"key":"e_1_2_2_49_1","volume-title":"Workshop on Formal Verification of Machine Learning (WFVML).","author":"Shi Zhouxing","year":"2023","unstructured":"Zhouxing Shi, Qirui Jin, Huan Zhang, Zico Kolter, Suman Jana, and Cho-Jui Hsieh. 2023. Formal verification for neural networks with general nonlinearities via branch-and-bound. In Workshop on Formal Verification of Machine Learning (WFVML)."},{"key":"e_1_2_2_50_1","volume-title":"Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, and Marc Lanctot.","author":"Silver David","year":"2016","unstructured":"David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, and Marc Lanctot. 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529, 7587 (2016), 484\u2013489."},{"key":"e_1_2_2_51_1","volume-title":"Beyond the single neuron convex barrier for neural network certification. Advances in Neural Information Processing Systems (NeurIPS), 32","author":"Singh Gagandeep","year":"2019","unstructured":"Gagandeep Singh, Rupanshu Ganvir, Markus P\u00fcschel, and Martin Vechev. 2019. Beyond the single neuron convex barrier for neural network certification. Advances in Neural Information Processing Systems (NeurIPS), 32 (2019)."},{"key":"e_1_2_2_52_1","volume-title":"Fast and effective robustness certification. Advances in Neural Information Processing Systems (NeurIPS), 31","author":"Singh Gagandeep","year":"2018","unstructured":"Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus P\u00fcschel, and Martin Vechev. 2018. Fast and effective robustness certification. 
Advances in Neural Information Processing Systems (NeurIPS), 31 (2018)."},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290354"},{"key":"e_1_2_2_54_1","volume-title":"Le","author":"Sutskever Ilya","year":"2014","unstructured":"Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems (NeurIPS), 27 (2014)."},{"key":"e_1_2_2_55_1","first-page":"21675","article-title":"The convex relaxation barrier, revisited: Tightened single-neuron relaxations for neural network verification","volume":"33","author":"Tjandraatmadja Christian","year":"2020","unstructured":"Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Kishor Patel, and Juan Pablo Vielma. 2020. The convex relaxation barrier, revisited: Tightened single-neuron relaxations for neural network verification. Advances in Neural Information Processing Systems (NeurIPS), 33 (2020), 21675\u201321686.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_2_2_56_1","volume-title":"The International Conference on Learning Representations (ICLR).","author":"Tjeng Vincent","year":"2018","unstructured":"Vincent Tjeng, Kai Y. Xiao, and Russ Tedrake. 2018. Evaluating robustness of neural networks with mixed integer programming. In The International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3575870.3587128"},{"key":"e_1_2_2_58_1","first-page":"29909","article-title":"Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification","volume":"34","author":"Wang Shiqi","year":"2021","unstructured":"Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. 2021. Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. 
Advances in Neural Information Processing Systems (NeurIPS), 34 (2021), 29909\u201329921.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3696410.3714737"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP54263.2024.00233"},{"key":"e_1_2_2_61_1","volume-title":"Proceedings of the International Conference on Machine Learning (ICML). PMLR, 5276\u20135285","author":"Weng Lily","year":"2018","unstructured":"Lily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, and Inderjit Dhillon. 2018. Towards fast computation of certified robustness for relu networks. In Proceedings of the International Conference on Machine Learning (ICML). PMLR, 5276\u20135285."},{"key":"e_1_2_2_62_1","article-title":"Output range analysis for feed-forward deep neural networks via linear programming","author":"Xu Zhiwu","year":"2022","unstructured":"Zhiwu Xu, Yazheng Liu, Shengchao Qin, and Zhong Ming. 2022. Output range analysis for feed-forward deep neural networks via linear programming. IEEE Transactions on Reliability.","journal-title":"IEEE Transactions on Reliability."},{"key":"e_1_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP54263.2024.00123"},{"key":"e_1_2_2_64_1","first-page":"44","volume-title":"Don\u2019t Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. In USENIX Security Symposium (USENIX Security). USENIX Association, 4675\u20134692","author":"Yu Zhiyuan","year":"2024","unstructured":"Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, and Ning Zhang. 2024. Don\u2019t Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. In USENIX Security Symposium (USENIX Security). USENIX Association, 4675\u20134692. 
isbn:978-1-939133-44-1"},{"key":"e_1_2_2_65_1","first-page":"1656","article-title":"General cutting planes for bound-propagation-based neural network verification","volume":"35","author":"Zhang Huan","year":"2022","unstructured":"Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. 2022. General cutting planes for bound-propagation-based neural network verification. Advances in Neural Information Processing Systems (NeurIPS), 35 (2022), 1656\u20131670.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_2_2_66_1","first-page":"4944","article-title":"Efficient neural network robustness certification with general activation functions","volume":"31","author":"Zhang Huan","year":"2018","unstructured":"Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. 2018. Efficient neural network robustness certification with general activation functions. Advances in Neural Information Processing Systems (NeurIPS), 31 (2018), 4944\u20134953.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3551349.3556907"}],"container-title":["Proceedings of the ACM on Programming 
Languages"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3763086","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:50:27Z","timestamp":1760032227000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3763086"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,9]]},"references-count":67,"journal-issue":{"issue":"OOPSLA2","published-print":{"date-parts":[[2025,10,9]]}},"alternative-id":["10.1145\/3763086"],"URL":"https:\/\/doi.org\/10.1145\/3763086","relation":{},"ISSN":["2475-1421"],"issn-type":[{"type":"electronic","value":"2475-1421"}],"subject":[],"published":{"date-parts":[[2025,10,9]]},"assertion":[{"value":"2025-03-26","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-08-12","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-10-09","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}