{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,23]],"date-time":"2025-10-23T21:04:17Z","timestamp":1761253457083,"version":"3.37.3"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2020,4,22]],"date-time":"2020-04-22T00:00:00Z","timestamp":1587513600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,4,22]],"date-time":"2020-04-22T00:00:00Z","timestamp":1587513600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005713","name":"Technische Universit\u00e4t M\u00fcnchen","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100005713","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"published-print":{"date-parts":[[2020,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title><jats:bold>Purpose<\/jats:bold>\n<\/jats:title>\n                <jats:p>Minimally invasive surgery (MIS) has become the standard for many surgical procedures as it minimizes trauma, reduces infection rates and shortens hospitalization. However, the manipulation of objects in the surgical workspace can be difficult due to the unintuitive handling of instruments and limited range of motion. 
Apart from the advantages of robot-assisted systems such as augmented view or improved dexterity, both robotic and MIS techniques introduce drawbacks such as limited haptic perception and their major reliance on visual perception.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title><jats:bold>Methods<\/jats:bold>\n<\/jats:title>\n                <jats:p>In order to address the above-mentioned limitations, a perception study was conducted to investigate whether the transmission of intra-abdominal acoustic signals can potentially improve the perception during MIS. To investigate whether these acoustic signals can be used as a basis for further automated analysis, a large audio data set capturing the application of electrosurgery on different types of porcine tissue was acquired. A sliding window technique was applied to compute log-mel-spectrograms, which were fed to a pre-trained convolutional neural network for feature extraction. A fully connected layer was trained on the intermediate feature representation to classify instrument\u2013tissue interaction.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title><jats:bold>Results<\/jats:bold>\n<\/jats:title>\n                <jats:p>The perception study revealed that acoustic feedback has potential to improve the perception during MIS and to serve as a basis for further automated analysis. The proposed classification pipeline yielded excellent performance for four types of instrument\u2013tissue interaction (muscle, fascia, liver and fatty tissue) and achieved top-1 accuracies of up to 89.9%. 
Moreover, our model is able to distinguish electrosurgical operation modes with an overall classification accuracy of 86.40%.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title><jats:bold>Conclusion<\/jats:bold>\n<\/jats:title>\n                <jats:p>Our proof-of-principle indicates great application potential for guidance systems in MIS, such as controlled tissue resection. Supported by a pilot perception study with surgeons, we believe that utilizing audio signals as an additional information channel has great potential to improve the surgical performance and to partly compensate the loss of haptic feedback.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1007\/s11548-020-02146-7","type":"journal-article","created":{"date-parts":[[2020,4,22]],"date-time":"2020-04-22T18:03:27Z","timestamp":1587578607000},"page":"771-779","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":18,"title":["Acoustic signal analysis of instrument\u2013tissue interaction for minimally invasive interventions"],"prefix":"10.1007","volume":"15","author":[{"given":"Daniel","family":"Ostler","sequence":"first","affiliation":[]},{"given":"Matthias","family":"Seibold","sequence":"additional","affiliation":[]},{"given":"Jonas","family":"Fuchtmann","sequence":"additional","affiliation":[]},{"given":"Nicole","family":"Samm","sequence":"additional","affiliation":[]},{"given":"Hubertus","family":"Feussner","sequence":"additional","affiliation":[]},{"given":"Dirk","family":"Wilhelm","sequence":"additional","affiliation":[]},{"given":"Nassir","family":"Navab","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,4,22]]},"reference":[{"issue":"8","key":"2146_CR1","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1007\/s11517-018-1785-4","volume":"56","author":"N Befrui","year":"2018","unstructured":"Befrui N, Elsner J, Flesser A, Huvanandana J, Jarrousse O, 
Le TN, M\u00fcller M, Schulze WHW, Taing S, Weidert S (2018) Vibroarthrography for early detection of knee osteoarthritis using normalized frequency features. Med Biol Eng Comput 56(8):1499\u20131514","journal-title":"Med Biol Eng Comput"},{"key":"2146_CR2","doi-asserted-by":"crossref","unstructured":"Cakir E, Heittola T, Huttunen H, Virtanen T (2015) Polyphonic sound event detection using multi label deep neural networks. In: 2015 International joint conference on neural networks (IJCNN). IEEE\/Institute of Electrical and Electronics Engineers Incorporated, pp 1\u20137","DOI":"10.1109\/IJCNN.2015.7280624"},{"key":"2146_CR3","unstructured":"Dai W (2016) Acoustic scene recognition with deep learning. In: Detection and classification of acoustic scenes and events (DCASE) challenge. Carnegie Mellon University, Pittsburg, Pennsylvania, USA"},{"key":"2146_CR4","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) Imagenet: a large-scale hierarchical image database. In: IEEE conference on computer vision and pattern recognition. IEEE, Piscataway, pp 248\u2013255","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"2146_CR5","unstructured":"Dennis JW (2014) Sound event recognition in unstructured environments using spectrogram image processing: Dissertation. Nanyang Technological University"},{"issue":"1","key":"2146_CR6","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1109\/TSA.2005.854103","volume":"14","author":"AJ Eronen","year":"2006","unstructured":"Eronen AJ, Peltonen VT, Tuomi JT, Klapuri AP, Fagerlund S, Sorsa T, Lorho G, Huopaniemi J (2006) Audio-based context recognition. 
IEEE Trans Audio Speech Lang Process 14(1):321\u2013329","journal-title":"IEEE Trans Audio Speech Lang Process"},{"key":"2146_CR7","doi-asserted-by":"crossref","unstructured":"Hershey S, Chaudhuri S, Ellis DPW, Gemmeke JF, Jansen A, Moore C, Plakal M, Platt D, Saurous RA, Seybold B, Slaney M, Weiss R, Wilson K (2017) Cnn architectures for large-scale audio classification. In: International conference on acoustics, speech and signal processing (ICASSP). arXiv:1609.09430","DOI":"10.1109\/ICASSP.2017.7952132"},{"key":"2146_CR8","doi-asserted-by":"crossref","unstructured":"Illanes A, Boese A, Maldonado I, Pashazadeh A, Schaufler A, Navab N, Friebe M (2018) Novel clinical device tracking and tissue event characterization using proximally placed audio signal acquisition and processing. Sci Rep 8(1):12070. https:\/\/doi.org\/10.1038\/s41598-018-30641-0","DOI":"10.1038\/s41598-018-30641-0"},{"issue":"2","key":"2146_CR9","doi-asserted-by":"publisher","first-page":"124","DOI":"10.4103\/2229-516X.157168","volume":"5","author":"A Jain","year":"2015","unstructured":"Jain A, Bansal R, Kumar A, Singh KD (2015) A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students. Int J Appl Basic Med Res 5(2):124\u2013127","journal-title":"Int J Appl Basic Med Res"},{"key":"2146_CR10","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-662-53204-1","volume-title":"Minimalinvasive viszeralchirurgie","author":"T Keck","year":"2017","unstructured":"Keck T, Germer C (2017) Minimalinvasive viszeralchirurgie. Springer, Berlin, Heidelberg"},{"issue":"2","key":"2146_CR11","doi-asserted-by":"publisher","first-page":"198","DOI":"10.1016\/j.cmpb.2008.12.012","volume":"94","author":"KS Kim","year":"2009","unstructured":"Kim KS, Seo JH, Kang JU, Song CG (2009) An enhanced algorithm for knee joint sound classification using feature extraction based on time-frequency analysis. 
Comput Methods Programs Biomed 94(2):198\u2013206","journal-title":"Comput Methods Programs Biomed"},{"key":"2146_CR12","doi-asserted-by":"publisher","first-page":"356","DOI":"10.1038\/nn831","volume":"5","author":"MS Lewicki","year":"2002","unstructured":"Lewicki MS (2002) Efficient coding of natural sounds. Nat Neurosci 5:356\u2013363","journal-title":"Nat Neurosci"},{"key":"2146_CR13","doi-asserted-by":"crossref","unstructured":"Li J, Dai W, Metze F, Qu S, Das S (2017) A comparison of deep learning methods for environmental sound detection. In: 2017 IEEE International conference on acoustics, speech, and signal processing. IEEE, Piscataway, NJ, pp 126\u2013130","DOI":"10.1109\/ICASSP.2017.7952131"},{"key":"2146_CR14","unstructured":"Lidy T (2015) Spectral convolutional neural network for music\u00a0classification. In: Music information retrieval evaluation exchange (MIREX). Malaga, Spain"},{"key":"2146_CR15","unstructured":"Lidy T, Schindler A (2016) Cqt-based convolutional neural networks for audio scene classification. In: Proceedings of the detection and classification of acoustic scenes and events 2016 workshop (DCASE2016). pp 1032\u20131048"},{"issue":"1","key":"2146_CR16","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1515\/cdbme-2019-0006","volume":"5","author":"I Maldonado","year":"2019","unstructured":"Maldonado I, Illanes A, Kalmar M, S\u00fchn T, Boese A, Friebe M (2019) Audio waves and its loss of energy in puncture needles. Curr Dir Biomed Eng 5(1):21\u201324","journal-title":"Curr Dir Biomed Eng"},{"issue":"3\u20134","key":"2146_CR17","doi-asserted-by":"publisher","first-page":"230","DOI":"10.1016\/j.jfranklin.2006.08.003","volume":"344","author":"A Marshall","year":"2007","unstructured":"Marshall A, Boussakta S (2007) Signal analysis of medical acoustic sounds with applications to chest medicine. 
J Frankl Inst 344(3\u20134):230\u2013242","journal-title":"J Frankl Inst"},{"key":"2146_CR18","unstructured":"Masters D, Luschi C (2018) Revisiting small batch training for deep neural networks. CoRR arXiv:1804.07612"},{"issue":"4","key":"2146_CR19","doi-asserted-by":"publisher","first-page":"373","DOI":"10.1177\/1553350617705207","volume":"24","author":"FC Meeuwsen","year":"2017","unstructured":"Meeuwsen FC, Gu\u00e9don ACP, Arkenbout EA, van der Elst M, Dankelman J, van den Dobbelsteen JJ (2017) The art of electrosurgery: trainees and experts. Surg Innov 24(4):373\u2013378","journal-title":"Surg Innov"},{"key":"2146_CR20","doi-asserted-by":"publisher","first-page":"94","DOI":"10.1016\/j.cmpb.2016.01.020","volume":"127","author":"S Nalband","year":"2016","unstructured":"Nalband S, Sundar A, Prince AA, Agarwal A (2016) Feature selection and classification methodology for the detection of knee-joint disorders. Comput Methods Programs Biomed 127:94\u2013104","journal-title":"Comput Methods Programs Biomed"},{"issue":"1","key":"2146_CR21","first-page":"4","volume":"1","author":"S Oramas","year":"2018","unstructured":"Oramas S, Barbieri F, Nieto O, Serra X (2018) Multimodal deep learning for music genre classification. Trans Int Soc Music Inf Retr 1(1):4\u201321","journal-title":"Trans Int Soc Music Inf Retr"},{"key":"2146_CR22","unstructured":"Peltonen V, Tuomi J, Klapuri A, Huopaniemi J, Sorsa T (2002) Computational auditory scene recognition. In: 2002 IEEE international conference on acoustics, speech, and signal processing. IEEE, Piscataway, pp II\u20131941\u2013II\u20131944"},{"key":"2146_CR23","doi-asserted-by":"crossref","unstructured":"Piczak KJ (2015) Environmental sound classification with convolutional neural networks. In: 2015 IEEE 25th International workshop on machine learning for signal processing (MLSP). 
pp 1\u20136","DOI":"10.1109\/MLSP.2015.7324337"},{"key":"2146_CR24","unstructured":"Pons J, Serra X (2018) Randomly weighted cnns for (music) audio classification. In: In proceedings of the 44th IEEE international conference on acoustics, speech and signal processing (ICASSP2019). pp 336\u2013340"},{"key":"2146_CR25","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1109\/JSTSP.2019.2908700","volume":"13","author":"H Purwins","year":"2019","unstructured":"Purwins H, Li B, Virtanen T, Schl\u00fcter J, Chang SY, Sainath T (2019) Deep learning for audio signal processing. IEEE J Sel Top Signal Process 13:206\u2013219","journal-title":"IEEE J Sel Top Signal Process"},{"key":"2146_CR26","unstructured":"Rangayyan RM, Frank CB, Bell GD, Smith R (1992) Analysis of knee joint sound signals. In: Morucci JP (ed) Proceedings of the annual international conference of the IEEE Engineering in medicine and biology society. Springer, New York and Piscataway, NJ, vol 2, pp 712\u2013713"},{"key":"2146_CR27","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-30425-0","volume-title":"Springer handbook of acoustics","author":"T Rossing","year":"2007","unstructured":"Rossing T (2007) Springer handbook of acoustics, 2nd edn. Springer, New York","edition":"2"},{"issue":"1","key":"2146_CR28","doi-asserted-by":"publisher","first-page":"369","DOI":"10.1515\/cdbme-2019-0093","volume":"5","author":"A Schaufler","year":"2019","unstructured":"Schaufler A, S\u00fchn T, Esmaeili N, Boese A, Wex C, Croner R, Friebe M, Illanes A (2019) Automatic differentiation between veress needle events in laparoscopic access using proximally attached audio signal characterization. Curr Dir Biomed Eng 5(1):369\u2013371","journal-title":"Curr Dir Biomed Eng"},{"key":"2146_CR29","volume-title":"Biomedical engineering in gastrointestinal surgery","author":"A Schneider","year":"2017","unstructured":"Schneider A, Feussner H (2017) Biomedical engineering in gastrointestinal surgery, 1st edn. 
Academic Press, London","edition":"1"},{"key":"2146_CR30","doi-asserted-by":"crossref","unstructured":"Shkelev Y, Kuzmin VG, Orlov I, Kuznetsova SV, Lupov S (2000) A system for studying spectral and temporal characteristics of acoustic cardiosignals. In: Proceedings of the second international symposium of trans black sea region on applied electromagnetism. IEEE, Piscataway, NY, p\u00a028","DOI":"10.1109\/AEM.2000.943191"},{"issue":"3","key":"2146_CR31","doi-asserted-by":"publisher","first-page":"329","DOI":"10.2307\/1417526","volume":"53","author":"SS Stevens","year":"1940","unstructured":"Stevens SS, Volkmann J (1940) The relation of pitch to frequency: a revised scale. Am J Psychol 53(3):329","journal-title":"Am J Psychol"},{"key":"2146_CR32","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: 29th IEEE conference on computer vision and pattern recognition. IEEE, Piscataway, NJ, pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"2146_CR33","volume-title":"Transfer learning","author":"L Torrey","year":"2009","unstructured":"Torrey L, Shavlik J (2009) Transfer learning. IGI Global, Hershey"},{"issue":"2","key":"2146_CR34","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1007\/BF02348126","volume":"40","author":"C Tranulis","year":"2002","unstructured":"Tranulis C, Durand LG, Senhadji L, Pibarot P (2002) Estimation of pulmonary arterial pressure by a neural network analysis using features based on time-frequency representations of the second heart sound. Med Biol Eng Comput 40(2):205\u2013212","journal-title":"Med Biol Eng Comput"},{"key":"2146_CR35","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1007\/978-3-319-60916-4_2","volume":"3","author":"A Valada","year":"2018","unstructured":"Valada A, Spinello L, Burgard W (2018) Deep feature learning for acoustics-based terrain classification. 
Robot Res 3:21\u201337","journal-title":"Robot Res"},{"key":"2146_CR36","unstructured":"Wyse L (2017) Audio spectrogram representations for processing with convolutional neural networks. In: Proceedings of the first international workshop on deep learning and music joint with IJCNN. vol 1(1), pp 37\u201341"},{"key":"2146_CR37","doi-asserted-by":"crossref","unstructured":"Zhang H, McLoughlin I, Song Y (2015) Robust sound event recognition using convolutional neural networks. In: 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, Piscataway, NJ, pp 559\u2013563","DOI":"10.1109\/ICASSP.2015.7178031"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-020-02146-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-020-02146-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-020-02146-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,4,21]],"date-time":"2021-04-21T23:31:26Z","timestamp":1619047886000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-020-02146-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,4,22]]},"references-count":37,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2020,5]]}},"alternative-id":["2146"],"URL":"https:\/\/doi.org\/10.1007\/s11548-020-02146-7","relation":{},"ISSN":["1861-6410","1861-6429"],"issn-type":[{"type":"print","value":"1861-6410"},{"type":"electronic","value":"1861-6429"}],"subject":[],"published":{"date-parts":[[2020,4,22]]},"assertion":[{"value":"18 
November 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 March 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 April 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Compliance with ethical standards"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article does not contain any studies with human participants or living animals performed by any of the authors.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"This article does not contain patient data.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}}]}}