{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T03:06:23Z","timestamp":1771038383747,"version":"3.50.1"},"reference-count":62,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2020,11,27]],"date-time":"2020-11-27T00:00:00Z","timestamp":1606435200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Research Grants Council of the Hong Kong Special Administrative Region","award":["(Project no. CUHK 14201017 and Project no. CUHK 14201918)"],"award-info":[{"award-number":["(Project no. CUHK 14201017 and Project no. CUHK 14201918)"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2020,12,31]]},"abstract":"<jats:p>\n            This paper presents the idea of\n            <jats:italic>mono-nizing<\/jats:italic>\n            binocular videos and a framework to effectively realize it. Mono-nize means we purposely convert a binocular video into a regular monocular video with the stereo information implicitly encoded in a visual but nearly-imperceptible form. Hence, we can impartially distribute and show the mononized video as an ordinary monocular video. Unlike ordinary monocular videos, we can restore from it the original binocular video and show it on a stereoscopic display. To start, we formulate an encoding-and-decoding framework with the pyramidal deformable fusion module to exploit long-range correspondences between the left and right views, a quantization layer to suppress the restoring artifacts, and the compression noise simulation module to resist the compression noise introduced by modern video codecs. 
Our framework is self-supervised, as we articulate our objective function with loss terms defined on the input: a monocular term for creating the mononized video, an invertibility term for restoring the original video, and a temporal term for frame-to-frame coherence. Further, we conducted extensive experiments to evaluate our generated mononized videos and restored binocular videos for diverse types of images and 3D movies. Quantitative results on both standard metrics and user perception studies show the effectiveness of our method.\n          <\/jats:p>","DOI":"10.1145\/3414685.3417764","type":"journal-article","created":{"date-parts":[[2020,11,27]],"date-time":"2020-11-27T21:51:05Z","timestamp":1606513865000},"page":"1-16","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":37,"title":["Mononizing binocular videos"],"prefix":"10.1145","volume":"39","author":[{"given":"Wenbo","family":"Hu","sequence":"first","affiliation":[{"name":"The Chinese University of Hong Kong"}]},{"given":"Menghan","family":"Xia","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong"}]},{"given":"Chi-Wing","family":"Fu","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong"}]},{"given":"Tien-Tsin","family":"Wong","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong"}]}],"member":"320","published-online":{"date-parts":[[2020,11,27]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"Gool","author":"Agustsson Eirikur","year":"2017","unstructured":"Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc V. Gool. 2017. Soft-to-hard vector quantization for end-to-end learning compressible representations. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_2_1","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Atapour-Abarghouei Amir","unstructured":"Amir Atapour-Abarghouei and Toby P. Breckon. 2018. Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_3_1","volume-title":"End-to-end Optimized Image Compression. In International Conference on Learning Representations (ICLR).","author":"Ball\u00e9 Johannes","unstructured":"Johannes Ball\u00e9, Valero Laparra, and Eero P. Simoncelli. 2017. End-to-end Optimized Image Compression. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_2_4_1","unstructured":"Shumeet Baluja. 2017. Hiding images in plain sight: Deep steganography. 
In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2017.2748458"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.1994.413553"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/PCS.2018.8456249"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00324"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2018.2884188"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.89"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964991"},{"key":"e_1_2_2_12_1","article-title":"A luminance-contrast-aware disparity model and applications","volume":"31","author":"Didyk Piotr","year":"2012","unstructured":"Piotr Didyk, Tobias Ritschel, Elmar Eisemann, Karol Myszkowski, Hans-Peter Seidel, and Wojciech Matusik. 2012. A luminance-contrast-aware disparity model and applications. ACM Transactions on Graphics (SIGGRAPH Asia) 31, 6 (2012), 184:1--184:10.","journal-title":"ACM Transactions on Graphics (SIGGRAPH Asia)"},{"key":"e_1_2_2_13_1","unstructured":"David Eigen, Christian Puhrsch, and Rob Fergus. 2014. Depth Map Prediction from a Single Image using a Multi-Scale Deep Network. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_14_1","volume-title":"CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"F\u00e1cil Jos\u00e9 M.","year":"2019","unstructured":"Jos\u00e9 M. F\u00e1cil, Benjamin Ummenhofer, Huizhong Zhou, Luis Montesano, Thomas Brox, and Javier Civera. 2019. CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_15_1","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Flynn John","year":"2016","unstructured":"John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. 2016. DeepStereo: Learning to predict new views from the world's imagery. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073672"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1177\/0278364913491297"},{"key":"e_1_2_2_18_1","volume-title":"Deep Learning","author":"Goodfellow Ian","unstructured":"Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http:\/\/www.deeplearningbook.org."},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_2_20_1","unstructured":"Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks. 
In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_21_1","volume-title":"Proceedings of the 32nd International Conference on Machine Learning (ICML).","author":"Ioffe Sergey","year":"2015","unstructured":"Sergey Ioffe and Christian Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML)."},{"key":"e_1_2_2_22_1","unstructured":"Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. 2015. Spatial transformer networks. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_23_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2897824.2925866","article-title":"GazeStereo3D: Seamless disparity manipulations","volume":"35","author":"Kellnhofer Petr","year":"2016","unstructured":"Petr Kellnhofer, Piotr Didyk, Karol Myszkowski, Mohamed M. Hefeeda, Hans-Peter Seidel, and Wojciech Matusik. 2016. GazeStereo3D: Seamless disparity manipulations. ACM Transactions on Graphics (SIGGRAPH) 35, 4 (2016), 1--13.","journal-title":"ACM Transactions on Graphics (SIGGRAPH)"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073617"},{"key":"e_1_2_2_25_1","volume-title":"Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR).","author":"Diederik","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.14257\/ijhit.2017.10.8.07"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778812"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2017.2703612"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2872876"},{"key":"e_1_2_2_30_1","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Li Zhengqi","unstructured":"Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, and William T. Freeman. 2019a. Learning the depths of moving people by watching frozen people. 
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00485"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323020"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00024"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/DeSE.2016.31"},{"key":"e_1_2_2_35_1","doi-asserted-by":"crossref","unstructured":"Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. arXiv:2003.08934 [cs.CV]","DOI":"10.1007\/978-3-030-58452-8_24"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.5594\/j18499"},{"key":"e_1_2_2_37_1","volume-title":"Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Nah Seungjun","year":"2017","unstructured":"Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. 2017. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_38_1","article-title":"3D Ken Burns effect from a single image","volume":"38","author":"Niklaus Simon","year":"2019","unstructured":"Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu. 2019. 3D Ken Burns effect from a single image. ACM Transactions on Graphics (SIGGRAPH Asia) 38, 6 (2019), 184:1--184:15.","journal-title":"ACM Transactions on Graphics (SIGGRAPH Asia)"},{"key":"e_1_2_2_39_1","volume-title":"PyTorch: An Imperative Style","author":"Paszke Adam","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_40_1","volume-title":"European Conference on Computer Vision (ECCV).","author":"Rastegari Mohammad","year":"2017","unstructured":"Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. 2017. XNOR-Net: ImageNet classification using binary convolutional neural networks. 
In European Conference on Computer Vision (ECCV)."},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.5555\/2322561.2323676"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2487228.2487229"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF01068419"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2369050"},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00805"},{"key":"e_1_2_2_46_1","volume-title":"Pushing the Boundaries of View Extrapolation with Multiplane Images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Srinivasan Pratul P.","year":"2019","unstructured":"Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. 2019. Pushing the Boundaries of View Extrapolation with Multiplane Images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2012.2221191"},{"key":"e_1_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00931"},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2015.2477935"},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2010.5651071"},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2010.2098830"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2019.00478"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13846"},{"key":"e_1_2_2_54_1","volume-title":"The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers. IEEE.","author":"Wang Zhou","unstructured":"Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. 2003. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers. IEEE."},{"key":"e_1_2_2_55_1","volume-title":"Light Field Messaging With Deep Photographic Steganography. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Wengrowski Eric","year":"2019","unstructured":"Eric Wengrowski and Kristin Dana. 2019. Light Field Messaging With Deep Photographic Steganography. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2003.815165"},{"key":"e_1_2_2_57_1","article-title":"Invertible grayscale","volume":"37","author":"Xia Menghan","year":"2018","unstructured":"Menghan Xia, Xueting Liu, and Tien-Tsin Wong. 2018. Invertible grayscale. ACM Transactions on Graphics (SIGGRAPH Asia) 37, 6 (2018), 246:1--246:10.","journal-title":"ACM Transactions on Graphics (SIGGRAPH Asia)"},{"key":"e_1_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46493-0_51"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323007"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201323"},{"key":"e_1_2_2_61_1","volume-title":"HiDDeN: Hiding Data With Deep Networks. 
In European Conference on Computer Vision (ECCV).","author":"Zhu Jiren","year":"2018","unstructured":"Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. 2018. HiDDeN: Hiding Data With Deep Networks. In European Conference on Computer Vision (ECCV)."},{"key":"e_1_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00953"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414685.3417764","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3414685.3417764","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:03:13Z","timestamp":1750197793000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414685.3417764"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,11,27]]},"references-count":62,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2020,12,31]]}},"alternative-id":["10.1145\/3414685.3417764"],"URL":"https:\/\/doi.org\/10.1145\/3414685.3417764","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,11,27]]},"assertion":[{"value":"2020-11-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}