{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,30]],"date-time":"2025-04-30T05:08:33Z","timestamp":1745989713357,"version":"3.37.3"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2021,6,22]],"date-time":"2021-06-22T00:00:00Z","timestamp":1624320000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,6,22]],"date-time":"2021-06-22T00:00:00Z","timestamp":1624320000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005722","name":"Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100005722","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Softw Tools Technol Transfer"],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Tool competitions are a special form of comparative evaluation, where each tool has a team of developers or supporters associated that makes sure the tool is properly configured to show its best possible performance. In several research areas, tool competitions have been a driving force for the development of mature tools that represent the state of the art in their field. This paper describes and reports the results of the 1<jats:inline-formula><jats:alternatives><jats:tex-math>$$^{\\text {st}}$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:msup>\n                    <mml:mrow\/>\n                    <mml:mtext>st<\/mml:mtext>\n                  <\/mml:msup>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula> International Competition on Software Testing (Test-Comp 2019), a\u00a0comparative evaluation of automatic tools for software test generation. Test-Comp\u00a02019 was presented as part of TOOLympics\u00a02019, a satellite event of the conference\u00a0TACAS. Nine test generators were evaluated on 2\u00a0356\u00a0test-generation tasks. There were two test specifications, one for generating a test that covers a particular function call and one for generating a test suite that tries to cover the branches of the program.<\/jats:p>","DOI":"10.1007\/s10009-021-00613-3","type":"journal-article","created":{"date-parts":[[2021,6,22]],"date-time":"2021-06-22T18:05:59Z","timestamp":1624385159000},"page":"833-846","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["First international competition on software testing"],"prefix":"10.1007","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4832-7662","authenticated-orcid":false,"given":"Dirk","family":"Beyer","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,6,22]]},"reference":[{"key":"613_CR1","doi-asserted-by":"publisher","unstructured":"Bartocci, E., Beyer, D., Black, P.E., Fedyukovich, G., Garavel, H., Hartmanns, A., Huisman, M., Kordon, F., Nagele, J., Sighireanu, M., Steffen, B., Suda, M., Sutcliffe, G., Weber, T., Yamada, A.: TOOLympics 2019: An overview of competitions in formal methods. In: Proc. TACAS\u00a0(3). pp. 3\u201324. LNCS\u00a011429, Springer (2019). https:\/\/doi.org\/10.1007\/978-3-030-17502-3_1","DOI":"10.1007\/978-3-030-17502-3_1"},{"key":"613_CR2","doi-asserted-by":"publisher","unstructured":"Beyer, D.: Competition on software verification (SV-COMP). In: Proc. TACAS. pp. 504\u2013524. LNCS\u00a07214, Springer (2012). https:\/\/doi.org\/10.1007\/978-3-642-28756-5_38","DOI":"10.1007\/978-3-642-28756-5_38"},{"key":"613_CR3","doi-asserted-by":"publisher","unstructured":"Beyer, D.: Second competition on software verification (Summary of SV-COMP 2013). In: Proc. TACAS. pp. 594\u2013609. LNCS\u00a07795, Springer (2013). https:\/\/doi.org\/10.1007\/978-3-642-36742-7_43","DOI":"10.1007\/978-3-642-36742-7_43"},{"key":"613_CR4","doi-asserted-by":"publisher","unstructured":"Beyer, D.: Reliable and reproducible competition results with BenchExec and witnesses (Report on SV-COMP 2016). In: Proc. TACAS. pp. 887\u2013904. LNCS\u00a09636, Springer (2016). https:\/\/doi.org\/10.1007\/978-3-662-49674-9_55","DOI":"10.1007\/978-3-662-49674-9_55"},{"key":"613_CR5","doi-asserted-by":"publisher","unstructured":"Beyer, D.: Automatic verification of C and Java programs: SV-COMP 2019. In: Proc. TACAS\u00a0(3). pp. 133\u2013155. LNCS\u00a011429, Springer (2019). https:\/\/doi.org\/10.1007\/978-3-030-17502-3_9","DOI":"10.1007\/978-3-030-17502-3_9"},{"key":"613_CR6","doi-asserted-by":"publisher","unstructured":"Beyer, D.: Competition on software testing (Test-Comp). In: Proc. TACAS\u00a0(3). pp. 167\u2013175. LNCS\u00a011429, Springer (2019). https:\/\/doi.org\/10.1007\/978-3-030-17502-3_11","DOI":"10.1007\/978-3-030-17502-3_11"},{"key":"613_CR7","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.3856661","author":"D Beyer","year":"2020","unstructured":"Beyer, D.: Results of the 1st international competition on software testing (test-comp 2019). Zenodo (2020). https:\/\/doi.org\/10.5281\/zenodo.3856661","journal-title":"Zenodo"},{"key":"613_CR8","doi-asserted-by":"publisher","unstructured":"Beyer, D.: SV-Benchmarks: benchmark set of the 1st Intl. competition on software testing (Test-comp 2019). Zenodo (2020). https:\/\/doi.org\/10.5281\/zenodo.3856478","DOI":"10.5281\/zenodo.3856478"},{"key":"613_CR9","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.3856669","author":"D Beyer","year":"2020","unstructured":"Beyer, D.: Test suites from test-comp 2019 test-generation tools. Zenodo (2020). https:\/\/doi.org\/10.5281\/zenodo.3856669","journal-title":"Zenodo"},{"key":"613_CR10","doi-asserted-by":"publisher","unstructured":"Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: Generating tests from counterexamples. In: Proc. ICSE. pp. 326\u2013335. IEEE (2004). https:\/\/doi.org\/10.1109\/ICSE.2004.1317455","DOI":"10.1109\/ICSE.2004.1317455"},{"key":"613_CR11","doi-asserted-by":"publisher","unstructured":"Beyer, D., Jakobs, M.C.: CoVeriTest: Cooperative verifier-based testing. In: Proc. FASE. pp. 389\u2013408. LNCS\u00a011424, Springer (2019). https:\/\/doi.org\/10.1007\/978-3-030-16722-6_23","DOI":"10.1007\/978-3-030-16722-6_23"},{"key":"613_CR12","doi-asserted-by":"publisher","unstructured":"Beyer, D., Lemberger, T.: Software verification: testing vs. model checking. In: Proc. HVC. pp. 99\u2013114. LNCS\u00a010629, Springer (2017). https:\/\/doi.org\/10.1007\/978-3-319-70389-3_7","DOI":"10.1007\/978-3-319-70389-3_7"},{"key":"613_CR13","doi-asserted-by":"publisher","unstructured":"Beyer, D., Lemberger, T.: TestCov: Robust test-suite execution and coverage measurement. In: Proc. ASE. pp. 1074\u20131077. IEEE (2019). https:\/\/doi.org\/10.1109\/ASE.2019.00105","DOI":"10.1109\/ASE.2019.00105"},{"issue":"1","key":"613_CR14","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10009-017-0469-y","volume":"21","author":"D Beyer","year":"2019","unstructured":"Beyer, D., L\u00f6we, S., Wendler, P.: Reliable benchmarking: requirements and solutions. Int. J. Softw. Tools Technol. Transfer 21(1), 1\u201329 (2019). https:\/\/doi.org\/10.1007\/s10009-017-0469-y","journal-title":"Int. J. Softw. Tools Technol. Transfer"},{"key":"613_CR15","doi-asserted-by":"publisher","unstructured":"B\u00fcrdek, J., Lochau, M., Bauregger, S., Holzer, A., von Rhein, A., Apel, S., Beyer, D.: Facilitating reuse in multi-goal test-suite generation for software product lines. In: Proc. FASE. pp. 84\u201399. LNCS\u00a09033, Springer (2015). https:\/\/doi.org\/10.1007\/978-3-662-46675-9_6","DOI":"10.1007\/978-3-662-46675-9_6"},{"key":"613_CR16","unstructured":"Cadar, C., Dunbar, D., Engler, D.R.: Klee: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proc. OSDI. pp. 209\u2013224. USENIX Association (2008)"},{"key":"613_CR17","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00570-3","author":"C Cadar","year":"2020","unstructured":"Cadar, C., Nowack, M.: Klee symbolic execution engine in 2019 (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00570-3","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"key":"613_CR18","doi-asserted-by":"publisher","unstructured":"Chalupa, M., Strej\u010dek, J., Vitovsk\u00e1, M.: Joint forces for memory safety checking. In: Proc. SPIN. pp. 115\u2013132. Springer (2018). https:\/\/doi.org\/10.1007\/978-3-319-94111-0_7","DOI":"10.1007\/978-3-319-94111-0_7"},{"key":"613_CR19","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00573-0","author":"M Chalupa","year":"2020","unstructured":"Chalupa, M., Vitovska, M., Ja\u0161ek, T., \u0160im\u00e1\u010dek, M., Strej\u010dek, J.: Symbiotic 6: generating test-cases by slicing and symbolic execution (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00573-0","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"key":"613_CR20","doi-asserted-by":"publisher","unstructured":"Chowdhury, A.B., Medicherla, R.K., Venkatesh, R.: VeriFuzz: Program-aware fuzzing (competition contribution). In: Proc. TACAS\u00a0(3). pp. 244\u2013249. LNCS\u00a011429, Springer (2019). https:\/\/doi.org\/10.1007\/978-3-030-17502-3_22","DOI":"10.1007\/978-3-030-17502-3_22"},{"key":"613_CR21","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00571-2","author":"MR Gadelha","year":"2020","unstructured":"Gadelha, M.R., Menezes, R., Cordeiro, L.: Esbmc 6.1: automated test-case generation using bounded model checking (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00571-2","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"issue":"1","key":"613_CR22","doi-asserted-by":"publisher","first-page":"97","DOI":"10.1007\/s10009-015-0407-9","volume":"19","author":"MY Gadelha","year":"2017","unstructured":"Gadelha, M.Y., Ismail, H.I., Cordeiro, L.C.: Handling loops in bounded model checking of C programs via k-induction. Int. J. Softw. Tools Technol. Transf. 19(1), 97\u2013114 (2017). https:\/\/doi.org\/10.1007\/s10009-015-0407-9","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"key":"613_CR23","doi-asserted-by":"publisher","unstructured":"Godefroid, P., Sen, K.: Combining model checking and testing. In: Handbook of Model Checking, pp. 613\u2013649. Springer (2018). https:\/\/doi.org\/10.1007\/978-3-319-10575-8_19","DOI":"10.1007\/978-3-319-10575-8_19"},{"issue":"1","key":"613_CR24","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1109\/TSE.2004.1265732","volume":"30","author":"M Harman","year":"2004","unstructured":"Harman, M., Hu, L., Hierons, R.M., Wegener, J., Sthamer, H., Baresel, A., Roper, M.: Testability transformation. IEEE Trans. Software Eng. 30(1), 3\u201316 (2004). https:\/\/doi.org\/10.1109\/TSE.2004.1265732","journal-title":"IEEE Trans. Software Eng."},{"key":"613_CR25","doi-asserted-by":"publisher","unstructured":"Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: How did you specify your test suite. In: Proc. ASE. pp. 407\u2013416. ACM (2010). https:\/\/doi.org\/10.1145\/1858996.1859084","DOI":"10.1145\/1858996.1859084"},{"key":"613_CR26","doi-asserted-by":"publisher","unstructured":"Howar, F., Isberner, M., Merten, M., Steffen, B., Beyer, D., P\u0103s\u0103reanu, C.S.: Rigorous examination of reactive systems. the RERS challenges 2012 and 2013. Int. J. Softw. Tools Technol. Transfer 16(5), 457\u2013464 (2014). https:\/\/doi.org\/10.1007\/s10009-014-0337-y","DOI":"10.1007\/s10009-014-0337-y"},{"issue":"6","key":"613_CR27","doi-asserted-by":"publisher","first-page":"647","DOI":"10.1007\/s10009-015-0396-8","volume":"17","author":"M Huisman","year":"2015","unstructured":"Huisman, M., Klebanov, V., Monahan, R.: VerifyThis 2012: a program verification competition. STTT 17(6), 647\u2013657 (2015). https:\/\/doi.org\/10.1007\/s10009-015-0396-8","journal-title":"STTT"},{"key":"613_CR28","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00572-1","author":"MC Jakobs","year":"2020","unstructured":"Jakobs, M.C.: CoVeriTest: interleaving value and predicate analysis for test-case generation (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00572-1","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"key":"613_CR29","doi-asserted-by":"publisher","unstructured":"Kifetew, F.M., Devroey, X., Rueda, U.: Java unit-testing tool competition: Seventh round. In: Proc. SBST. pp. 15\u201320. IEEE (2019). https:\/\/doi.org\/10.1109\/SBST.2019.00014","DOI":"10.1109\/SBST.2019.00014"},{"issue":"7","key":"613_CR30","doi-asserted-by":"publisher","first-page":"385","DOI":"10.1145\/360248.360252","volume":"19","author":"JC King","year":"1976","unstructured":"King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385\u2013394 (1976). https:\/\/doi.org\/10.1145\/360248.360252","journal-title":"Commun. ACM"},{"key":"613_CR31","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00568-x","author":"T Lemberger","year":"2020","unstructured":"Lemberger, T.: Plain random test generation with PRTest (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00568-x","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"key":"613_CR32","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00569-w","author":"C Lemieux","year":"2020","unstructured":"Lemieux, C., Sen, K.: FairFuzz-TC: A fuzzer targeting rare branches (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00569-w","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"key":"613_CR33","doi-asserted-by":"publisher","DOI":"10.1007\/s10009-020-00574-z","author":"S Ruland","year":"2020","unstructured":"Ruland, S., Lochau, M., Fehse, O., Sch\u00fcrr, A.: CPA\/Tiger-MGP: test-goal set partitioning for efficient multi-goal test-suite generation (competition contribution). Int. J. Softw. Tools Technol. Transf. (2020). https:\/\/doi.org\/10.1007\/s10009-020-00574-z","journal-title":"Int. J. Softw. Tools Technol. Transf."},{"issue":"1","key":"613_CR34","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1109\/MSP.2016.14","volume":"14","author":"J Song","year":"2016","unstructured":"Song, J., Alves-Foss, J.: The DARPA cyber grand challenge: a competitor\u2019s perspective, part 2. IEEE Sec Privacy 14(1), 76\u201381 (2016). https:\/\/doi.org\/10.1109\/MSP.2016.14","journal-title":"IEEE Sec Privacy"},{"key":"613_CR35","doi-asserted-by":"publisher","unstructured":"Stump, A., Sutcliffe, G., Tinelli, C.: StarExec: A\u00a0cross-community infrastructure for logic solving. In: Proc. IJCAR, pp. 367\u2013373. LNCS\u00a08562, Springer (2014). https:\/\/doi.org\/10.1007\/978-3-319-08587-6_28","DOI":"10.1007\/978-3-319-08587-6_28"},{"key":"613_CR36","doi-asserted-by":"publisher","unstructured":"Visser, W., P\u0103s\u0103reanu, C.S., Khurshid, S.: Test-input generation with Java PathFinder. In: Proc. ISSTA. pp. 97\u2013107. ACM (2004). https:\/\/doi.org\/10.1145\/1007512.1007526","DOI":"10.1145\/1007512.1007526"},{"key":"613_CR37","doi-asserted-by":"publisher","unstructured":"Wendler, P., Beyer, D.: sosy-lab\/benchexec: Release 1.18. Zenodo (2019). https:\/\/doi.org\/10.5281\/zenodo.2561835","DOI":"10.5281\/zenodo.2561835"}],"container-title":["International Journal on Software Tools for Technology Transfer"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10009-021-00613-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10009-021-00613-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10009-021-00613-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,12,26]],"date-time":"2021-12-26T12:03:17Z","timestamp":1640520197000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10009-021-00613-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,22]]},"references-count":37,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["613"],"URL":"https:\/\/doi.org\/10.1007\/s10009-021-00613-3","relation":{},"ISSN":["1433-2779","1433-2787"],"issn-type":[{"type":"print","value":"1433-2779"},{"type":"electronic","value":"1433-2787"}],"subject":[],"published":{"date-parts":[[2021,6,22]]},"assertion":[{"value":"20 April 2021","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 June 2021","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The test-generation tasks and results of the competition are published at Zenodo, as described in\u00a0Table 3. All components and data that are necessary for reproducing the competition are available in public version repositories, as specified in\u00a0Table 2. Furthermore, the results are presented online on the competition web site for easy access:","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Data Availability Statement"}}]}}