{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,30]],"date-time":"2026-01-30T02:47:13Z","timestamp":1769741233640,"version":"3.49.0"},"publisher-location":"Cham","reference-count":47,"publisher":"Springer International Publishing","isbn-type":[{"value":"9783031131844","type":"print"},{"value":"9783031131851","type":"electronic"}],"license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,8,7]],"date-time":"2022-08-07T00:00:00Z","timestamp":1659830400000},"content-version":"vor","delay-in-days":218,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Linear approximations of nonlinear functions have a wide range of applications such as rigorous global optimization and, recently, verification problems involving neural networks. In the latter case, a linear approximation must be hand-crafted for the neural network\u2019s activation functions. This hand-crafting is tedious, potentially error-prone, and requires an expert to prove the soundness of the linear approximation. Such a limitation is at odds with the rapidly advancing deep learning field \u2013 current verification tools either lack the necessary linear approximation, or perform poorly on neural networks with state-of-the-art activation functions. In this work, we consider the problem of automatically synthesizing sound linear approximations for a given neural network activation function. 
Our approach is <jats:italic>example-guided<\/jats:italic>: we develop a procedure to generate examples, and then we leverage machine learning techniques to learn a (static) function that outputs linear approximations. However, since the machine learning techniques we employ do not come with formal guarantees, the resulting synthesized function may produce linear approximations with violations. To remedy this, we bound the maximum violation using rigorous global optimization techniques, and then adjust the synthesized linear approximation accordingly to ensure soundness. We evaluate our approach on several neural network verification tasks. Our evaluation shows that the automatically synthesized linear approximations greatly improve the accuracy (i.e., in terms of the number of verification problems solved) compared to hand-crafted linear approximations in state-of-the-art neural network verification tools. An artifact with our code and experimental scripts is available at: <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/zenodo.org\/record\/6525186#.Yp51L9LMIzM\">https:\/\/zenodo.org\/record\/6525186#.Yp51L9LMIzM<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/978-3-031-13185-1_8","type":"book-chapter","created":{"date-parts":[[2022,8,6]],"date-time":"2022-08-06T19:29:09Z","timestamp":1659814149000},"page":"149-170","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Example Guided Synthesis of\u00a0Linear Approximations for\u00a0Neural Network Verification"],"prefix":"10.1007","author":[{"given":"Brandon","family":"Paulsen","sequence":"first","affiliation":[]},{"given":"Chao","family":"Wang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,8,7]]},"reference":[{"key":"8_CR1","doi-asserted-by":"crossref","unstructured":"Alzantot, M., Sharma, Y., 
Elgohary, A., Ho, B.J., Srivastava, M., Chang, K.W.: Generating natural language adversarial examples. arXiv:1804.07998 (2018)","DOI":"10.18653\/v1\/D18-1316"},{"key":"8_CR2","unstructured":"Balunovi\u0107, M., Baader, M., Singh, G., Gehr, T., Vechev, M.: Certifying geometric robustness of neural networks. NIPS (2019)"},{"key":"8_CR3","doi-asserted-by":"crossref","unstructured":"Baluta, T., Shen, S., Shinde, S., Meel, K.S., Saxena, P.: Quantitative verification of neural networks and its security applications. In: CCS (2019)","DOI":"10.1145\/3319535.3354245"},{"key":"8_CR4","doi-asserted-by":"crossref","unstructured":"Benhamou, F., Granvilliers, L.: Continuous and interval constraints. Foundations of Artificial Intelligence (2006)","DOI":"10.1016\/S1574-6526(06)80020-9"},{"key":"8_CR5","doi-asserted-by":"crossref","unstructured":"Chabert, G., Jaulin, L.: Contractor programming. Artificial Intelligence 173(11) (2009)","DOI":"10.1016\/j.artint.2009.03.002"},{"key":"8_CR6","doi-asserted-by":"crossref","unstructured":"De Moura, L., Bj\u00f8rner, N.: Z3: An efficient smt solver. In: TACAS (2008)","DOI":"10.1007\/978-3-540-78800-3_24"},{"key":"8_CR7","doi-asserted-by":"crossref","unstructured":"Du, T., et al.: Cert-RNN: towards certifying the robustness of recurrent neural networks (2021)","DOI":"10.1145\/3460120.3484538"},{"key":"8_CR8","doi-asserted-by":"crossref","unstructured":"Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: ATVA (2017)","DOI":"10.1007\/978-3-319-68167-2_19"},{"key":"8_CR9","unstructured":"Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: ICML (2019)"},{"key":"8_CR10","doi-asserted-by":"crossref","unstructured":"Finkel, R.A., Bentley, J.L.: Quad trees a data structure for retrieval on composite keys. 
Acta informatica (1974)","DOI":"10.1007\/BF00288933"},{"key":"8_CR11","doi-asserted-by":"crossref","unstructured":"Gao, S., Kong, S., Clarke, E.M.: dReal: An SMT solver for nonlinear theories over the reals. In: International Conference on Automated Deduction (2013)","DOI":"10.1007\/978-3-642-38574-2_14"},{"key":"8_CR12","doi-asserted-by":"crossref","unstructured":"Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.T.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: IEEE Symposium on Security and Privacy, pp. 3\u201318 (2018)","DOI":"10.1109\/SP.2018.00058"},{"key":"8_CR13","unstructured":"Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2021). https:\/\/www.gurobi.com"},{"key":"8_CR14","unstructured":"Hendrycks, D., Gimpel, K.: Gaussian error linear units (gelus). arXiv:1606.08415 (2016)"},{"key":"8_CR15","doi-asserted-by":"crossref","unstructured":"Hu, H., Fazlyab, M., Morari, M., Pappas, G.J.: Reach-sdp: reachability analysis of closed-loop systems with neural network controllers via semidefinite programming. In: CDC (2020)","DOI":"10.1109\/CDC42340.2020.9304296"},{"key":"8_CR16","doi-asserted-by":"crossref","unstructured":"Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: CAV (2017)","DOI":"10.1007\/978-3-319-63387-9_1"},{"key":"8_CR17","doi-asserted-by":"crossref","unstructured":"Kanbak, C., Moosavi-Dezfooli, S.M., Frossard, P.: Geometric robustness of deep networks: analysis and improvement. In: CVPR (2018)","DOI":"10.1109\/CVPR.2018.00467"},{"key":"8_CR18","doi-asserted-by":"crossref","unstructured":"Katz, G., Barrett, C.W., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. 
In: CAV (2017)","DOI":"10.1007\/978-3-319-63387-9_5"},{"key":"8_CR19","unstructured":"Ko, C.Y., Lyu, Z., Weng, L., Daniel, L., Wong, N., Lin, D.: POPQORN: quantifying robustness of recurrent neural networks. In: ICML (2019)"},{"key":"8_CR20","unstructured":"Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)"},{"key":"8_CR21","doi-asserted-by":"crossref","unstructured":"Lebbah, Y., Michel, C., Rueher, M.: An efficient and safe framework for solving optimization problems. J. Comput. Appl. Math. (2007)","DOI":"10.1016\/j.cam.2005.08.037"},{"key":"8_CR22","doi-asserted-by":"crossref","unstructured":"Lecun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE (1998)","DOI":"10.1109\/5.726791"},{"key":"8_CR23","unstructured":"Mirman, M., Gehr, T., Vechev, M.T.: Differentiable abstract interpretation for provably robust neural networks. In: ICML (2018)"},{"key":"8_CR24","unstructured":"Misra, D.: Mish: a self regularized non-monotonic neural activation function. arXiv:1908.08681 (2019)"},{"key":"8_CR25","doi-asserted-by":"crossref","unstructured":"Mohammadinejad, S., Paulsen, B., Deshmukh, J.V., Wang, C.: DiffRNN: Differential verification of recurrent neural networks. In: FORMATS (2021)","DOI":"10.1007\/978-3-030-85037-1_8"},{"key":"8_CR26","doi-asserted-by":"crossref","unstructured":"Moore, R.E., Kearfott, R.B., Cloud, M.J.: Introduction to interval analysis. SIAM (2009)","DOI":"10.1137\/1.9780898717716"},{"key":"8_CR27","doi-asserted-by":"crossref","unstructured":"Paulsen, B., Wang, J., Wang, C.: Reludiff: differential verification of deep neural networks. In: ICSE (2020)","DOI":"10.1145\/3377811.3380337"},{"key":"8_CR28","doi-asserted-by":"crossref","unstructured":"Paulsen, B., Wang, J., Wang, J., Wang, C.: NeuroDiff: scalable differential verification of neural networks using fine-grained approximation. 
In: ASE (2020)","DOI":"10.1145\/3324884.3416560"},{"key":"8_CR29","doi-asserted-by":"crossref","unstructured":"Powell, M.J.: An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal (1964)","DOI":"10.1093\/comjnl\/7.2.155"},{"key":"8_CR30","unstructured":"Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training (2018)"},{"key":"8_CR31","unstructured":"Ramachandran, P., Zoph, B., Le, Q.V.: Searching for activation functions. arXiv:1710.05941 (2017)"},{"key":"8_CR32","unstructured":"Roy, S.K., Manna, S., Dubey, S.R., Chaudhuri, B.B.: LiSHT: Non-parametric linearly scaled hyperbolic tangent activation function for neural networks. arXiv:1901.05894 (2019)"},{"key":"8_CR33","doi-asserted-by":"crossref","unstructured":"Ryou, W., Chen, J., Balunovic, M., Singh, G., Dan, A., Vechev, M.: Scalable polyhedral verification of recurrent neural networks. In: CAV (2021)","DOI":"10.1007\/978-3-030-81685-8_10"},{"key":"8_CR34","unstructured":"Shi, Z., Zhang, H., Chang, K.W., Huang, M., Hsieh, C.J.: Robustness verification for transformers. ICLR (2020)"},{"key":"8_CR35","doi-asserted-by":"crossref","unstructured":"Singh, G., Gehr, T., P\u00fcschel, M., Vechev, M.T.: An abstract domain for certifying neural networks. POPL (2019)","DOI":"10.1145\/3290354"},{"key":"8_CR36","unstructured":"Singh, G., Gehr, T., P\u00fcschel, M., Vechev, M.T.: Boosting robustness certification of neural networks. In: ICLR (2019)"},{"key":"8_CR37","unstructured":"Szegedy, C., et al.: Intriguing properties of neural networks. arXiv:1312.6199 (2013)"},{"key":"8_CR38","unstructured":"Tjeng, V., Xiao, K., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. ICLR (2019)"},{"key":"8_CR39","doi-asserted-by":"crossref","unstructured":"Tran, H.D., et al.: Star-based reachability analysis of deep neural networks. 
In: FM (2019)","DOI":"10.1007\/978-3-030-30942-8_39"},{"key":"8_CR40","doi-asserted-by":"crossref","unstructured":"Trombettoni, G., Araya, I., Neveu, B., Chabert, G.: Inner regions and interval linearizations for global optimization. In: AAAI (2011)","DOI":"10.1609\/aaai.v25i1.7817"},{"key":"8_CR41","doi-asserted-by":"crossref","unstructured":"Virtanen, P.: SciPy 1.0 Contributors: SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods (2020)","DOI":"10.1038\/s41592-020-0772-5"},{"key":"8_CR42","unstructured":"Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Efficient formal safety analysis of neural networks. In: NIPS (2018)"},{"key":"8_CR43","unstructured":"Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: USENIX Security (2018)"},{"key":"8_CR44","unstructured":"Weng, T., et al.: Towards fast computation of certified robustness for relu networks. In: ICML (2018)"},{"key":"8_CR45","doi-asserted-by":"crossref","unstructured":"Wu, Y., Zhang, M.: Tightening robustness verification of convolutional neural networks with fine-grained linear approximation. In: AAAI (2021)","DOI":"10.1609\/aaai.v35i13.17388"},{"key":"8_CR46","unstructured":"Xu, K., et al.: Automatic perturbation analysis for scalable certified robustness and beyond. In: NIPS (2020)"},{"key":"8_CR47","unstructured":"Zhang, H., Weng, T.W., Chen, P.Y., Hsieh, C.J., Daniel, L.: Efficient neural network robustness certification with general activation functions. 
In: NIPS (2018)"}],"container-title":["Lecture Notes in Computer Science","Computer Aided Verification"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-13185-1_8","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,11,3]],"date-time":"2022-11-03T17:11:48Z","timestamp":1667495508000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-13185-1_8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022]]},"ISBN":["9783031131844","9783031131851"],"references-count":47,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-13185-1_8","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"value":"0302-9743","type":"print"},{"value":"1611-3349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022]]},"assertion":[{"value":"7 August 2022","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"CAV","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Conference on Computer Aided Verification","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Haifa","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Israel","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2022","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"7 August 2022","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference 
Information"}},{"value":"10 August 2022","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"34","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"cav2022","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"http:\/\/i-cav.org\/2022\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Double-blind","order":1,"name":"type","label":"Type","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"EasyChair","order":2,"name":"conference_management_system","label":"Conference Management System","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"209","order":3,"name":"number_of_submissions_sent_for_review","label":"Number of Submissions Sent for Review","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"40","order":4,"name":"number_of_full_papers_accepted","label":"Number of Full Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"11","order":5,"name":"number_of_short_papers_accepted","label":"Number of Short Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"19% - The value is computed by the equation \"Number of Full Papers Accepted \/ Number of Submissions Sent for Review * 100\" and then rounded to a whole 
number.","order":6,"name":"acceptance_rate_of_full_papers","label":"Acceptance Rate of Full Papers","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3.9","order":7,"name":"average_number_of_reviews_per_paper","label":"Average Number of Reviews per Paper","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"9.7","order":8,"name":"average_number_of_papers_per_reviewer","label":"Average Number of Papers per Reviewer","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"Yes","order":9,"name":"external_reviewers_involved","label":"External Reviewers Involved","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}}]}}