{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:08:22Z","timestamp":1750219702027,"version":"3.41.0"},"reference-count":75,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2024,1,11]],"date-time":"2024-01-11T00:00:00Z","timestamp":1704931200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Key Research and Development Program of China","award":["2019YFA0706200, 2018AAA0102002"],"award-info":[{"award-number":["2019YFA0706200, 2018AAA0102002"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61932009"],"award-info":[{"award-number":["61932009"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2024,4,30]]},"abstract":"<jats:p>This article presents one-bit supervision, a novel setting of learning with fewer labels, for image classification. Instead of training the model with the accurate label of each sample, our setting requires the model to interact with the system by predicting the class label of each sample and learning from the answer whether the guess is correct, which provides one bit (yes or no) of information. An intriguing property of the setting is that the burden of annotation is largely alleviated in comparison to offering the accurate label. There are two keys to one-bit supervision: (i) improving the guess accuracy and (ii) making good use of the incorrect guesses. 
To achieve these goals, we propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm. Theoretical analysis shows that one-bit annotation is more efficient than full-bit annotation in most cases and gives the conditions for combining our approach with active learning. Inspired by this, we further integrate the one-bit supervision framework into the self-supervised learning algorithm, which yields an even more efficient training schedule. Unlike training from scratch, when self-supervised learning is used for initialization, both hard example mining and class balance are verified to be effective in boosting the learning performance. However, these two frameworks still need full-bit labels in the initial stage. To cast off this burden, we utilize unsupervised domain adaptation to train the initial model and conduct pure one-bit annotations on the target dataset. On multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.<\/jats:p>","DOI":"10.1145\/3633779","type":"journal-article","created":{"date-parts":[[2023,11,24]],"date-time":"2023-11-24T11:31:16Z","timestamp":1700825476000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["One-Bit Supervision for Image Classification: Problem, Solution, and Beyond"],"prefix":"10.1145","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3095-6009","authenticated-orcid":false,"given":"Hengtong","family":"Hu","sequence":"first","affiliation":[{"name":"Hefei University of Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4831-9451","authenticated-orcid":false,"given":"Lingxi","family":"Xie","sequence":"additional","affiliation":[{"name":"Huawei Inc., 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1724-9438","authenticated-orcid":false,"given":"Xinyue","family":"Huo","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5461-3986","authenticated-orcid":false,"given":"Richang","family":"Hong","sequence":"additional","affiliation":[{"name":"Hefei University of Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7252-5047","authenticated-orcid":false,"given":"Qi","family":"Tian","sequence":"additional","affiliation":[{"name":"Huawei Inc., China"}]}],"member":"320","published-online":{"date-parts":[[2024,1,11]]},"reference":[{"key":"e_1_3_1_2_2","first-page":"566","volume-title":"Advances in Neural Information Processing Systems","author":"Atlas Les E.","year":"1990","unstructured":"Les E. Atlas, David A. Cohn, and Richard E. Ladner. 1990. Training connectionist networks with queries and selective sampling. In Advances in Neural Information Processing Systems. 566\u2013573."},{"key":"e_1_3_1_3_2","first-page":"5050","volume-title":"Advances in Neural Information Processing Systems","author":"Berthelot David","year":"2019","unstructured":"David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A. Raffel. 2019. MixMatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems. 5050\u20135060."},{"key":"e_1_3_1_4_2","article-title":"Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning","author":"Cascante-Bonilla Paola","year":"2020","unstructured":"Paola Cascante-Bonilla, Fuwen Tan, Yanjun Qi, and Vicente Ordonez. 2020. Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning. 
arXiv preprint arXiv:2001.06001 (2020).","journal-title":"arXiv preprint arXiv:2001.06001"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5745"},{"key":"e_1_3_1_6_2","first-page":"1704","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Chen John","year":"2020","unstructured":"John Chen, Vatsal Shah, and Anastasios Kyrillidis. 2020. Negative sampling in semi-supervised learning. In Proceedings of the International Conference on Machine Learning. 1704\u20131714."},{"key":"e_1_3_1_7_2","first-page":"1597","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Chen Ting","year":"2020","unstructured":"Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning. 1597\u20131607."},{"key":"e_1_3_1_8_2","article-title":"Big self-supervised models are strong semi-supervised learners","author":"Chen Ting","year":"2020","unstructured":"Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. 2020. Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029 (2020).","journal-title":"arXiv preprint arXiv:2006.10029"},{"key":"e_1_3_1_9_2","article-title":"Improved baselines with momentum contrastive learning","author":"Chen Xinlei","year":"2020","unstructured":"Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. 2020. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297 (2020).","journal-title":"arXiv preprint arXiv:2003.04297"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_1_11_2","first-page":"522","volume-title":"Advances in Neural Information Processing Systems","author":"Fergus Rob","year":"2009","unstructured":"Rob Fergus, Yair Weiss, and Antonio Torralba. 2009. 
Semi-supervised learning in gigantic image collections. In Advances in Neural Information Processing Systems. 522\u2013530."},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10593-2_37"},{"key":"e_1_3_1_13_2","article-title":"Born again neural networks","author":"Furlanello Tommaso","year":"2018","unstructured":"Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. arXiv preprint arXiv:1805.04770 (2018).","journal-title":"arXiv preprint arXiv:1805.04770"},{"key":"e_1_3_1_14_2","first-page":"1183","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Gal Yarin","year":"2017","unstructured":"Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the International Conference on Machine Learning. 1183\u20131192."},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58607-2_30"},{"key":"e_1_3_1_16_2","article-title":"Shake-shake regularization","author":"Gastaldi Xavier","year":"2017","unstructured":"Xavier Gastaldi. 2017. Shake-shake regularization. arXiv preprint arXiv:1705.07485 (2017).","journal-title":"arXiv preprint arXiv:1705.07485"},{"key":"e_1_3_1_17_2","article-title":"Unsupervised representation learning by predicting image rotations","author":"Gidaris Spyros","year":"2018","unstructured":"Spyros Gidaris, Praveer Singh, and Nikos Komodakis. 2018. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728 (2018).","journal-title":"arXiv preprint arXiv:1803.07728"},{"key":"e_1_3_1_18_2","first-page":"529","volume-title":"Advances in Neural Information Processing Systems","author":"Grandvalet Yves","year":"2005","unstructured":"Yves Grandvalet and Yoshua Bengio. 2005. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems. 
529\u2013536."},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2010.5540120"},{"key":"e_1_3_1_20_2","doi-asserted-by":"crossref","unstructured":"Tao Han Wei-Wei Tu and Yu-Feng Li. 2021. Explanation consistency training: Facilitating consistency-based semi-supervised learning with interpretability. In Proceedings of the AAAI Conference on Artificial Intelligence .","DOI":"10.1609\/aaai.v35i9.16934"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_23_2","article-title":"Distilling the knowledge in a neural network","author":"Hinton Geoffrey","year":"2015","unstructured":"Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015).","journal-title":"arXiv preprint arXiv:1503.02531"},{"key":"e_1_3_1_24_2","article-title":"Bayesian active learning for classification and preference learning","author":"Houlsby Neil","year":"2011","unstructured":"Neil Houlsby, Ferenc Husz\u00e1r, Zoubin Ghahramani, and M\u00e1t\u00e9 Lengyel. 2011. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745 (2011).","journal-title":"arXiv preprint arXiv:1112.5745"},{"key":"e_1_3_1_25_2","article-title":"One-bit supervision for image classification","author":"Hu Hengtong","year":"2020","unstructured":"Hengtong Hu, Lingxi Xie, Zewei Du, Richang Hong, and Qi Tian. 2020. One-bit supervision for image classification. In Advances in Neural Information Processing Systems. 1\u201311.","journal-title":"Advances in Neural Information Processing Systems."},{"key":"e_1_3_1_26_2","article-title":"Creating something from nothing: Unsupervised knowledge distillation for cross-modal hashing","author":"Hu Hengtong","year":"2020","unstructured":"Hengtong Hu, Lingxi Xie, Richang Hong, and Qi Tian. 2020. 
Creating something from nothing: Unsupervised knowledge distillation for cross-modal hashing. arXiv preprint arXiv:2004.00280 (2020).","journal-title":"arXiv preprint arXiv:2004.00280"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.243"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00521"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00019"},{"key":"e_1_3_1_30_2","first-page":"7024","volume-title":"Advances in Neural Information Processing Systems","author":"Kirsch Andreas","year":"2019","unstructured":"Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. 2019. BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning. In Advances in Neural Information Processing Systems. 7024\u20137035."},{"key":"e_1_3_1_31_2","unstructured":"Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images . University of Toronto."},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58523-5_28"},{"key":"e_1_3_1_33_2","article-title":"Temporal ensembling for semi-supervised learning","author":"Laine Samuli","year":"2016","unstructured":"Samuli Laine and Timo Aila. 2016. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242 (2016).","journal-title":"arXiv preprint arXiv:1610.02242"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.96"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1038\/nature14539"},{"key":"e_1_3_1_36_2","first-page":"2","volume-title":"Proceedings of the Workshop on Challenges in Representation Learning (ICML \u201913)","volume":"3","author":"Lee Dong-Hyun","year":"2013","unstructured":"Dong-Hyun Lee. 2013. Pseudo-Label: The simple and efficient semi-supervised learning method for deep neural networks. In Proceedings of the Workshop on Challenges in Representation Learning (ICML \u201913), Vol. 3. 
2."},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4471-2099-5_1"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00897"},{"key":"e_1_3_1_39_2","first-page":"728","volume-title":"Advances in Neural Information Processing Systems","author":"Luo Wenjie","year":"2013","unstructured":"Wenjie Luo, Alex Schwing, and Raquel Urtasun. 2013. Latent structured active learning. In Advances in Neural Information Processing Systems. 728\u2013736."},{"key":"e_1_3_1_40_2","first-page":"1222","volume-title":"Advances in Neural Information Processing Systems","author":"Malisiewicz Tomasz","year":"2009","unstructured":"Tomasz Malisiewicz and Alyosha Efros. 2009. Beyond categories: The Visual Memex model for reasoning about object relationships. In Advances in Neural Information Processing Systems. 1222\u20131230."},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2858821"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46466-4_5"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.628"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00975"},{"key":"e_1_3_1_45_2","first-page":"854","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Papadopoulos Dim P.","year":"2016","unstructured":"Dim P. Papadopoulos, Jasper R. R. Uijlings, Frank Keller, and Vittorio Ferrari. 2016. We don\u2019t need no bounding-boxes: Training object class detectors using only human verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
854\u2013863."},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.27"},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.638"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00149"},{"key":"e_1_3_1_49_2","first-page":"6356","volume-title":"Advances in Neural Information Processing Systems","author":"Pinsler Robert","year":"2019","unstructured":"Robert Pinsler, Jonathan Gordon, Eric Nalisnick, and Jos\u00e9 Miguel Hern\u00e1ndez-Lobato. 2019. Bayesian batch active learning as sparse subset approximation. In Advances in Neural Information Processing Systems. 6356\u20136367."},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01267-0_9"},{"key":"e_1_3_1_51_2","first-page":"3546","volume-title":"Advances in Neural Information Processing Systems","author":"Rasmus Antti","year":"2015","unstructured":"Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. 2015. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems. 3546\u20133554."},{"key":"e_1_3_1_52_2","unstructured":"Sachin Ravi and Hugo Larochelle. 2016. Optimization as a model for few-shot learning. In Proceedings of the 4th International Conference on Learning Representations (ICLR \u201916) . 1\u201311."},{"key":"e_1_3_1_53_2","article-title":"In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning","author":"Rizve Mamshad Nayeem","year":"2021","unstructured":"Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S. Rawat, and Mubarak Shah. 2021. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. 
arXiv preprint arXiv:2101.06329 (2021).","journal-title":"arXiv preprint arXiv:2101.06329"},{"key":"e_1_3_1_54_2","article-title":"FitNets: Hints for thin deep nets","author":"Romero Adriana","year":"2014","unstructured":"Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550 (2014).","journal-title":"arXiv preprint arXiv:1412.6550"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},{"key":"e_1_3_1_56_2","article-title":"Active learning for convolutional neural networks: A core-set approach","author":"Sener Ozan","year":"2017","unstructured":"Ozan Sener and Silvio Savarese. 2017. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489 (2017).","journal-title":"arXiv preprint arXiv:1708.00489"},{"key":"e_1_3_1_57_2","first-page":"2282","volume-title":"Advances in Neural Information Processing Systems","author":"Shi Weishi","year":"2019","unstructured":"Weishi Shi and Qi Yu. 2019. Integrating Bayesian and discriminative sparse kernel machines for multi-class active learning. In Advances in Neural Information Processing Systems. 2282\u20132291."},{"key":"e_1_3_1_58_2","article-title":"FixMatch: Simplifying semi-supervised learning with consistency and confidence","author":"Sohn Kihyuk","year":"2020","unstructured":"Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. 2020. FixMatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685 (2020).","journal-title":"arXiv preprint arXiv:2001.07685"},{"key":"e_1_3_1_59_2","article-title":"Unsupervised domain adaptation through self-supervision","author":"Sun Yu","year":"2019","unstructured":"Yu Sun, Eric Tzeng, Trevor Darrell, and Alexei A. Efros. 2019. 
Unsupervised domain adaptation through self-supervision. arXiv preprint arXiv:1909.11825 (2019).","journal-title":"arXiv preprint arXiv:1909.11825"},{"key":"e_1_3_1_60_2","first-page":"9229","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Sun Yu","year":"2020","unstructured":"Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. 2020. Test-time training with self-supervision for generalization under distribution shifts. In Proceedings of the International Conference on Machine Learning. 9229\u20139248."},{"key":"e_1_3_1_61_2","volume-title":"Advances in Neural Information Processing Systems","author":"Tarvainen Antti","year":"2017","unstructured":"Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems. 1\u201310."},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.316"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2016.2589879"},{"key":"e_1_3_1_64_2","article-title":"Unsupervised data augmentation for consistency training","author":"Xie Qizhe","year":"2019","unstructured":"Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848 (2019).","journal-title":"arXiv preprint arXiv:1904.12848"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01070"},{"key":"e_1_3_1_66_2","article-title":"Hierarchical semantic aggregation for contrastive representation learning","author":"Xu Haohang","year":"2020","unstructured":"Haohang Xu, Xiaopeng Zhang, Hao Li, Lingxi Xie, Hongkai Xiong, and Qi Tian. 2020. Hierarchical semantic aggregation for contrastive representation learning. 
arXiv preprint arXiv:2012.02733 (2020).","journal-title":"arXiv preprint arXiv:2012.02733"},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.47"},{"key":"e_1_3_1_68_2","article-title":"Knowledge distillation in generations: More tolerant teachers educate better students","author":"Yang Chenglin","year":"2018","unstructured":"Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan Yuille. 2018. Knowledge distillation in generations: More tolerant teachers educate better students. arXiv preprint arXiv:1805.05551 (2018).","journal-title":"arXiv preprint arXiv:1805.05551"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00018"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01093"},{"key":"e_1_3_1_71_2","article-title":"Wide residual networks","author":"Zagoruyko Sergey","year":"2016","unstructured":"Sergey Zagoruyko and Nikos Komodakis. 2016. Wide residual networks. arXiv preprint arXiv:1605.07146 (2016).","journal-title":"arXiv preprint arXiv:1605.07146"},{"key":"e_1_3_1_72_2","article-title":"Central moment discrepancy (CMD) for domain-invariant representation learning","author":"Zellinger Werner","year":"2017","unstructured":"Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschl\u00e4ger, and Susanne Saminger-Platz. 2017. Central moment discrepancy (CMD) for domain-invariant representation learning. 
arXiv preprint arXiv:1702.08811 (2017).","journal-title":"arXiv preprint arXiv:1702.08811"},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00156"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00397"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46487-9_40"},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2017\/505"}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3633779","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3633779","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:35:48Z","timestamp":1750178148000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3633779"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,11]]},"references-count":75,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,4,30]]}},"alternative-id":["10.1145\/3633779"],"URL":"https:\/\/doi.org\/10.1145\/3633779","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"type":"print","value":"1551-6857"},{"type":"electronic","value":"1551-6865"}],"subject":[],"published":{"date-parts":[[2024,1,11]]},"assertion":[{"value":"2023-03-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-11-14","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-11","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}