{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T07:52:13Z","timestamp":1770450733787,"version":"3.49.0"},"reference-count":86,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2020,9,4]],"date-time":"2020-09-04T00:00:00Z","timestamp":1599177600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2020,9,4]]},"abstract":"<jats:p>This paper presents FinGTrAC, a system that shows the feasibility of fine grained finger gesture tracking using low intrusive wearable sensor platform (smart-ring worn on the index finger and a smart-watch worn on the wrist). The key contribution is in scaling up gesture recognition to hundreds of gestures while using only a sparse wearable sensor set where prior works have been able to only detect tens of hand gestures. Such sparse sensors are convenient to wear but cannot track all fingers and hence provide under-constrained information. However application specific context can fill the gap in sparse sensing and improve the accuracy of gesture classification. Rich context exists in a number of applications such as user-interfaces, sports analytics, medical rehabilitation, sign language translation etc. This paper shows the feasibility of exploiting such context in an application of American Sign Language (ASL) translation. Noisy sensor data, variations in gesture performance across users and the inability to capture data from all fingers introduce non-trivial challenges. 
FinGTrAC exploits a number of opportunities in data preprocessing, filtering, pattern matching, and the context of an ASL sentence to systematically fuse the available sensory information into a Bayesian filtering framework. Culminating in the design of a Hidden Markov Model, a Viterbi decoding scheme is designed to detect finger gestures and the corresponding ASL sentences in real time. Extensive evaluation on 10 users shows a recognition accuracy of 94.2% for the 100 most frequently used ASL finger gestures over different sentences. When the size of the dictionary is extended to 200 words, the accuracy degrades gracefully to 90%, indicating the robustness and scalability of the multi-stage optimization framework.<\/jats:p>","DOI":"10.1145\/3414117","type":"journal-article","created":{"date-parts":[[2020,9,4]],"date-time":"2020-09-04T21:39:45Z","timestamp":1599255585000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":29,"title":["Finger Gesture Tracking for Interactive Applications"],"prefix":"10.1145","volume":"4","author":[{"given":"Yilin","family":"Liu","sequence":"first","affiliation":[{"name":"The Pennsylvania State University, University Park, Pennsylvania"}]},{"given":"Fengyang","family":"Jiang","sequence":"additional","affiliation":[{"name":"The Pennsylvania State University, University Park, Pennsylvania"}]},{"given":"Mahanth","family":"Gowda","sequence":"additional","affiliation":[{"name":"The Pennsylvania State University, University Park, Pennsylvania"}]}],"member":"320","published-online":{"date-parts":[[2020,9,4]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"5DT 2019. 5DT Data Glove Ultra - 5DT. https:\/\/5dt.com\/5dt-data-glove-ultra\/."},{"key":"e_1_2_1_2_1","volume-title":"Impairment of individual finger movements in Parkinson's disease. 
Movement disorders","author":"Agostino Rocco","year":"2003"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.3390\/s18072208"},{"key":"e_1_2_1_4_1","unstructured":"Toshiyuki Ando Yuki Kubo Buntarou Shizuki and Shin Takahashi. 2017. CanalSense: face-related movement recognition system based on sensing air pressure in ear canals. In ACM UIST."},{"key":"e_1_2_1_5_1","unstructured":"Benjamin J Bahan. 1997. Non-manual realization of agreement in American Sign Language. (1997)."},{"key":"e_1_2_1_6_1","volume-title":"Proceedings of the 17th international conference on Computational linguistics-Volume 1.","author":"Baker Collin F","year":"1998"},{"key":"e_1_2_1_7_1","first-page":"1089","article-title":"No unbiased estimator of the variance of k-fold cross-validation","author":"Bengio Yoshua","year":"2004","journal-title":"Journal of machine learning research 5"},{"key":"e_1_2_1_8_1","volume-title":"KDD workshop.","author":"Berndt Donald J","year":"1994"},{"key":"e_1_2_1_9_1","unstructured":"Bracelets and Rings 2013. Bracelets and rings translate sign language. https:\/\/www.cnet.com\/news\/bracelet-and-rings-translate-sign-language\/."},{"key":"e_1_2_1_10_1","unstructured":"Helene Brashear et al. 2003. Using multiple sensors for mobile sign language recognition. Georgia Institute of Technology."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01231-1_41"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13042-017-0705-5"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/GHTC-SAS.2014.6967567"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00812"},{"key":"e_1_2_1_15_1","first-page":"2493","article-title":"Natural language processing (almost) from scratch","author":"Collobert Ronan","year":"2011","journal-title":"Journal of machine learning research 12"},{"key":"e_1_2_1_16_1","unstructured":"Cyber Glove 2017. Cyber Glove III - Cyber Glove Systems LL. http:\/\/www.cyberglovesystems.com\/cyberglove-iii\/."},{"key":"e_1_2_1_17_1","unstructured":"Deaf Statistics 2011. How many deaf people are there in United States. https:\/\/research.gallaudet.edu\/Demographics\/deaf-US.php."},{"key":"e_1_2_1_18_1","unstructured":"Facial Expressions 2017. Facial expressions in American Sign Language. https:\/\/www.newscientist.com\/article\/2133451-automatic-sign-language-translators-turn-signing-into-text\/."},{"key":"e_1_2_1_19_1","doi-asserted-by":"crossref","unstructured":"Biyi Fang et al. 2017. DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation. In ACM SenSys.","DOI":"10.1145\/3131672.3131693"},{"key":"e_1_2_1_20_1","unstructured":"Finger Placement 2011. Finger Placement When Shooting a Basketball. https:\/\/www.sportsrec.com\/476507-finger-placement-when-shooting-a-basketball.html."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2016.2583542"},{"key":"e_1_2_1_22_1","doi-asserted-by":"crossref","unstructured":"Marcus Georgi Christoph Amma and Tanja Schultz. 2015. Recognizing Hand and Finger Gestures with IMU based Motion and EMG based Muscle Activity Sensing. In Biosignals. 99--108.","DOI":"10.5220\/0005276900990108"},{"key":"e_1_2_1_23_1","volume-title":"LSTM: A search space odyssey","author":"Greff Klaus","year":"2016"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1016\/0010-0277(77)90006-3"},{"key":"e_1_2_1_25_1","unstructured":"Jiahui Hou et al. 2019. SignSpeaker: A Real-time High-Precision SmartWatch-based Sign Language Translator. MobiCom (2019)."},{"key":"e_1_2_1_26_1","doi-asserted-by":"crossref","unstructured":"Junxian Huang Feng Qian Alexandre Gerber Z Morley Mao Subhabrata Sen and Oliver Spatscheck. 2012. A close examination of performance and power characteristics of 4G LTE networks. In ACM MobiSys.","DOI":"10.1145\/2307636.2307658"},{"key":"e_1_2_1_27_1","volume-title":"Proceedings of the European Conference on Computer Vision (ECCV). 118--134","author":"Iqbal Umar","year":"2018"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300506"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2014.08.167"},{"key":"e_1_2_1_30_1","volume-title":"2011 11th International Conference on Control, Automation and Systems. IEEE, 206--210","author":"Jeong Eunseok","year":"2011"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/3477.485888"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.3390\/s19183827"},{"key":"e_1_2_1_33_1","volume-title":"Deep Sign: Enabling Robust Statistical Continuous Sign Language Recognition via Hybrid CNN-HMMs. International Journal of Computer Vision","author":"Koller Oscar","year":"2018"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/JMEMS.2008.921727"},{"key":"e_1_2_1_35_1","unstructured":"Hong Li Wei Yang Jianxin Wang Yang Xu and Liusheng Huang. 2016. WiFinger: talk to your smart devices with finger-grained gesture. In ACM UbiComp."},{"key":"e_1_2_1_36_1","doi-asserted-by":"crossref","DOI":"10.1109\/TBME.2012.2190734","volume-title":"A sign-component-based framework for Chinese sign language recognition using accelerometer and sEMG data","author":"Li Yun","year":"2012"},{"key":"e_1_2_1_37_1","unstructured":"Lifeprint 2017. ASL Grammar. https:\/\/www.lifeprint.com\/asl101\/pages-layout\/grammar.htm."},{"key":"e_1_2_1_38_1","volume-title":"Proceedings workshop on human motion. IEEE.","author":"Lin John","year":"2000"},{"key":"e_1_2_1_39_1","volume-title":"uWave: Accelerometer-based personalized gesture recognition and its applications. Pervasive and Mobile Computing","author":"Liu Jiayang","year":"2009"},{"key":"e_1_2_1_40_1","volume-title":"Application Informed Motion Signal Processing for Finger Motion Tracking using Wearable Sensors","author":"Liu Yilin"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3191755"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2632048.2632095"},{"key":"e_1_2_1_43_1","first-page":"037","article-title":"Computing numeric representations of words in a high-dimensional space","volume":"9","author":"Mikolov Tomas","year":"2015","journal-title":"US Patent"},{"key":"e_1_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Tom\u00e1\u0161 Mikolov Martin Karafi\u00e1t Luk\u00e1\u0161 Burget Jan \u010cernocky and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association.","DOI":"10.21437\/Interspeech.2010-343"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/219717.219748"},{"key":"e_1_2_1_46_1","unstructured":"Motiv Ring 2020. Motiv Ring | 24\/7 Smart Ring | Fitness + Sleep Tracking | Online Security. https:\/\/mymotiv.com\/."},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00013"},{"key":"e_1_2_1_48_1","volume-title":"Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 1515--1525","author":"Rajalakshmi"},{"key":"e_1_2_1_49_1","doi-asserted-by":"crossref","unstructured":"Anh Nguyen et al. 2016. A lightweight inexpensive in-ear sensing system for automatic whole-night sleep stage monitoring. In ACM SenSys.","DOI":"10.1145\/2994551.2994562"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3356250.3360040"},{"key":"e_1_2_1_51_1","unstructured":"OSF 2020. OSF|SignData.csv. https:\/\/osf.io\/ua4mw\/."},{"key":"e_1_2_1_52_1","unstructured":"Oura Ring 2018. Oura Ring Review. https:\/\/www.wareable.com\/health-and-wellbeing\/oura-ring-2018-review-6628."},{"key":"e_1_2_1_53_1","unstructured":"Oura Ring 2019. Oura Ring - What we learned about the sleep tracking ring. https:\/\/www.cnbc.com\/2019\/12\/20\/oura-ring-review---what-we-learned-about-the-sleep-tracking-ring.html."},{"key":"e_1_2_1_54_1","unstructured":"Oura Ring 2019. Oura Ring review - The early adopter catches the worm. https:\/\/www.androidauthority.com\/oura-ring-2-review-933935\/."},{"key":"e_1_2_1_55_1","unstructured":"Oura Ring 2020. Oura Ring: The most accurate sleep and activity tracker. https:\/\/ouraring.com\/."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2011.06.015"},{"key":"e_1_2_1_57_1","volume-title":"Risq: Recognizing smoking gestures with inertial sensors on a wristband. In ACM MobiSys.","author":"Abhinav Parate","year":"2014"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3369831"},{"key":"e_1_2_1_59_1","doi-asserted-by":"crossref","unstructured":"Deborah Chen Pichler. 2002. Word order variation and acquisition in American Sign Language. (2002).","DOI":"10.1075\/sll.5.1.10pic"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICAECC.2014.7002401"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00429"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/2500423.2500436"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.sna.2012.07.023"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1088\/0964-1726\/25\/1\/013001"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3097620.3097624"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/INDICON.2015.7443381"},{"key":"e_1_2_1_67_1","doi-asserted-by":"crossref","unstructured":"Sheng Shen Mahanth Gowda and Romit Roy Choudhury. 2018. Closing the Gaps in Inertial Motion Tracking. In ACM MobiCom.","DOI":"10.1145\/3241539.3241582"},{"key":"e_1_2_1_68_1","doi-asserted-by":"crossref","unstructured":"Michael Sherman Gradeigh Clark Yulong Yang Shridatt Sugrim Arttu Modig Janne Lindqvist Antti Oulasvirta and Teemu Roos. 2014. User-generated free-form gestures for authentication: Security and memorability. In ACM MobiSys.","DOI":"10.1145\/2594368.2594375"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1115\/1.4036288"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2015.12.276"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0227039"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0227039"},{"key":"e_1_2_1_73_1","unstructured":"Signlanguage Characters [n. d.]. Drawing hands in sign language poses. http:\/\/albaneze.weebly.com\/step-3---drawing-hands-in-sign-language-poses.html."},{"key":"e_1_2_1_74_1","volume-title":"2007 6th International Conference on Information, Communications & Signal Processing. IEEE, 1--4.","author":"Swee Tan Tian","year":"2007"},{"key":"e_1_2_1_75_1","volume-title":"The American sign language handshape dictionary","author":"Tennant Richard A"},{"key":"e_1_2_1_76_1","volume-title":"3d tracking= classification+ interpolation. In null","author":"Tomasi Carlo"},{"key":"e_1_2_1_77_1","doi-asserted-by":"crossref","unstructured":"Hoang Truong et al. 2018. CapBand: Battery-free Successive Capacitance Sensing Wristband for Hand Gesture Recognition. In ACM SenSys.","DOI":"10.1145\/3274783.3274854"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/2789168.2790102"},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.1967.1054010"},{"key":"e_1_2_1_80_1","unstructured":"VMU931 2018. New Rugged and Compact IMU. https:\/\/variense.com\/product\/vmu931\/."},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2017.2654863"},{"key":"e_1_2_1_82_1","unstructured":"WHO 2020. World Health Organization: Deafness and Hearing Loss. https:\/\/www.who.int\/news-room\/fact-sheets\/detail\/deafness-and-hearing-loss."},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2016.2598302"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2018.00280"},{"key":"e_1_2_1_85_1","volume-title":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM.","author":"Cheng"},{"key":"e_1_2_1_86_1","doi-asserted-by":"crossref","unstructured":"Pengfei Zhou Mo Li and Guobin Shen. 2014. Use it free: Instantly knowing your phone attitude. In ACM MobiCom.","DOI":"10.1145\/2639108.2639110"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414117","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3414117","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:38:38Z","timestamp":1750199918000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414117"}},"subtitle":["A Pilot Study with Sign Languages"],"short-title":[],"issued":{"date-parts":[[2020,9,4]]},"references-count":86,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2020,9,4]]}},"alternative-id":["10.1145\/3414117"],"URL":"https:\/\/doi.org\/10.1145\/3414117","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,9,4]]},"assertion":[{"value":"2020-09-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}