{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T15:32:25Z","timestamp":1772119945495,"version":"3.50.1"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,1,22]],"date-time":"2025-01-22T00:00:00Z","timestamp":1737504000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,22]],"date-time":"2025-01-22T00:00:00Z","timestamp":1737504000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002241","name":"Japan Science and Technology Agency","doi-asserted-by":"publisher","award":["JPMJCR22U4"],"award-info":[{"award-number":["JPMJCR22U4"]}],"id":[{"id":"10.13039\/501100002241","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Zero-shot OOD detection is a task that detects OOD images during inference with only in-distribution (ID) class names. Existing methods assume ID images contain a single, centered object, and do not consider the more realistic multi-object scenarios, where both ID and OOD objects are present. To meet the needs of many users, the detection method must have the flexibility to adapt the type of ID images. To this end, we present Global-Local Maximum Concept Matching (GL-MCM), which incorporates local image scores as an auxiliary score to enhance the separability of global and local visual features. Due to the simple ensemble score function design, GL-MCM can control the type of ID images with a single weight parameter. Experiments on ImageNet and multi-object benchmarks demonstrate that GL-MCM outperforms baseline zero-shot methods and is comparable to fully supervised methods. Furthermore, GL-MCM offers strong flexibility in adjusting the target type of ID images. The code is available via <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/AtsuMiyai\/GL-MCM\" ext-link-type=\"uri\">https:\/\/github.com\/AtsuMiyai\/GL-MCM<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s11263-025-02356-z","type":"journal-article","created":{"date-parts":[[2025,1,22]],"date-time":"2025-01-22T13:10:25Z","timestamp":1737551425000},"page":"3586-3596","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["GL-MCM: Global and Local Maximum Concept Matching for Zero-Shot Out-of-Distribution Detection"],"prefix":"10.1007","volume":"133","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-9172-6014","authenticated-orcid":false,"given":"Atsuyuki","family":"Miyai","sequence":"first","affiliation":[]},{"given":"Qing","family":"Yu","sequence":"additional","affiliation":[]},{"given":"Go","family":"Irie","sequence":"additional","affiliation":[]},{"given":"Kiyoharu","family":"Aizawa","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,22]]},"reference":[{"key":"2356_CR1","doi-asserted-by":"crossref","unstructured":"Arora, U., Huang, W., & He, H. (2021). Types of out-of-distribution texts and how to detect them. In EMLP.","DOI":"10.18653\/v1\/2021.emnlp-main.835"},{"key":"2356_CR2","unstructured":"Cao, C., Zhong, Z., Zhou, Z., Liu, Y., Liu, T., & Han, B. (2024). Envisioning outlier exposure by large language models for out-of-distribution detection. In ICML."},{"key":"2356_CR3","doi-asserted-by":"crossref","unstructured":"Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., & Vedaldi, A. (2014). Describing textures in the wild. In CVPR.","DOI":"10.1109\/CVPR.2014.461"},{"key":"2356_CR4","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In CVPR.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"2356_CR5","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J. & Houlsby, N. (2021). An image is worth $$16\\times 16$$ words: Transformers for image recognition at scale. In ICLR."},{"key":"2356_CR6","doi-asserted-by":"crossref","unstructured":"Esmaeilpour, S., Liu, B., Robertson, E., & Shu, L. (2022). Zero-shot out-of-distribution detection based on the pretrained model clip. In AAAI.","DOI":"10.1609\/aaai.v36i6.20610"},{"key":"2356_CR7","doi-asserted-by":"publisher","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","volume":"88","author":"M Everingham","year":"2009","unstructured":"Everingham, M., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2009). The pascal visual object classes (VOC) challenge. IJCV, 88, 303\u2013308.","journal-title":"IJCV"},{"key":"2356_CR8","unstructured":"Fort, S., Ren, J., & Lakshminarayanan, B. (2021). Exploring the limits of out-of-distribution detection. In NeurIPS."},{"key":"2356_CR9","doi-asserted-by":"crossref","unstructured":"Ge, Z., Demyanov, S., Chen, Z., & Garnavi, R. (2017). Generative openmax for multi-class open set classification. arXiv preprint arXiv:1707.07418 .","DOI":"10.5244\/C.31.42"},{"key":"2356_CR10","unstructured":"Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. In ICML."},{"key":"2356_CR11","unstructured":"Hendrycks, D., & Gimpel, K. (2017). A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR."},{"key":"2356_CR12","doi-asserted-by":"crossref","unstructured":"Hendrycks, D., Liu, X., Wallace, E., Dziedzic, A., Krishnan, R., & Song, D. (2020). Pretrained transformers improve out-of-distribution robustness. In ACL.","DOI":"10.18653\/v1\/2020.acl-main.244"},{"key":"2356_CR13","unstructured":"Huang, R., Geng, A., & Li, Y. (2021). On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS."},{"key":"2356_CR14","doi-asserted-by":"crossref","unstructured":"Huang, R., & Li, Y. (2021). MOS: Towards scaling out-of-distribution detection for large semantic space. In CVPR.","DOI":"10.1109\/CVPR46437.2021.00860"},{"key":"2356_CR15","unstructured":"Jiang, X., Liu, F., Fang, Z., Chen, H., Liu, T., Zheng, F., & Han, B. (2024). Negative label guided OOD detection with pretrained vision-language models. In ICLR."},{"key":"2356_CR16","unstructured":"Kirichenko, P., Izmailov, P., & Wilson, A. G. (2020). Why normalizing flows fail to detect out-of-distribution data. In NeurIPS."},{"key":"2356_CR17","unstructured":"Lee, K., Lee, K., Lee, H., & Shin, J. (2018). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS."},{"key":"2356_CR18","doi-asserted-by":"crossref","unstructured":"Li, L. H., Zhang, P., Zhang, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.-N., Chang, K.-W., & Gao, J. (2022). Grounded language-image pre-training. In CVPR.","DOI":"10.1109\/CVPR52688.2022.01069"},{"key":"2356_CR19","unstructured":"Liang, S., Li, Y., & Srikant, R. (2018). Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR."},{"key":"2356_CR20","doi-asserted-by":"crossref","unstructured":"Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., & Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In ECCV.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"2356_CR21","doi-asserted-by":"crossref","unstructured":"Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Li, C., Yang, J., Su, H., Zhu, J., et\u00a0al. (2023). Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499 .","DOI":"10.1007\/978-3-031-72970-6_3"},{"key":"2356_CR22","unstructured":"Liu, W., Wang, X., Owens, J., & Li Y. (2020). Energy-based out-of-distribution detection. In NeurIPS."},{"key":"2356_CR23","unstructured":"Ming, Y., Cai, Z., Gu, J., Sun, Y., Li, W., & Li Y. (2022). Delving into out-of-distribution detection with vision-language representations. In NeurIPS."},{"key":"2356_CR24","doi-asserted-by":"crossref","unstructured":"Ming, Y., & Li, Y. (2024). How does fine-tuning impact out-of-distribution detection for vision-language models? In IJCV, (Vol. 132(2), pp. 596\u2013609).","DOI":"10.1007\/s11263-023-01895-7"},{"key":"2356_CR25","unstructured":"Miyai, A., Yu, Q., Irie, G., & Aizawa, K. (2023). LOCOOP: Few-shot out-of-distribution detection via prompt learning. In NeurIPS."},{"key":"2356_CR26","unstructured":"Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., & Lakshminarayanan, B. (2019). Do deep generative models know what they don\u2019t know? In ICLR."},{"key":"2356_CR27","doi-asserted-by":"crossref","unstructured":"Neal, L., Olson, M., Fern, X., Wong, W. K., & Li, F. (2018). Open set learning with counterfactual images. In ECCV.","DOI":"10.1007\/978-3-030-01231-1_38"},{"key":"2356_CR28","doi-asserted-by":"crossref","unstructured":"Oza, P., & Patel, V. M. (2019). C2AE: Class conditioned auto-encoder for open-set recognition. In CVPR.","DOI":"10.1109\/CVPR.2019.00241"},{"key":"2356_CR29","doi-asserted-by":"crossref","unstructured":"Podolskiy, A., Lipin, D., Bout, A., Artemova, E., & Piontkovskaya, I. (2021). Revisiting mahalanobis distance for transformer-based out-of-domain detection. In AAAI.","DOI":"10.1609\/aaai.v35i15.17612"},{"key":"2356_CR30","unstructured":"Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., & Sutskever, I. (2021). Learning transferable visual models from natural language supervision. In ICML."},{"key":"2356_CR31","unstructured":"Ridnik, T., Ben-Baruch, E., Noy, A., & Zelnik-Manor, L. (2021). Imagenet-21k pretraining for the masses. In NeurIPS Datasets and Benchmarks Track."},{"key":"2356_CR32","doi-asserted-by":"crossref","unstructured":"Sharma, P., Ding, N., Goodman, S., & Soricut, R. (2018). Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL.","DOI":"10.18653\/v1\/P18-1238"},{"key":"2356_CR33","doi-asserted-by":"crossref","unstructured":"Subramanian, S., Merrill, W., Darrell, T., Gardner, M., Singh, S., & Rohrbach, A. (2022). Reclip: A strong zero-shot baseline for referring expression comprehension. In ACL.","DOI":"10.18653\/v1\/2022.acl-long.357"},{"key":"2356_CR34","unstructured":"Sun, X., Hu, P., & Saenko, K. (2022). Dualcoop: Fast adaptation to multi-label recognition with limited annotations. In NeurIPS."},{"key":"2356_CR35","unstructured":"Sun, Y., Ming, Y., Zhu, X., & Li Y. (2022). Out-of-distribution detection with deep nearest neighbors. In ICML."},{"key":"2356_CR36","unstructured":"Tao, L., Du, X., Zhu, X., & Li Y. (2023). Non-parametric outlier synthesis. In ICLR."},{"key":"2356_CR37","doi-asserted-by":"crossref","unstructured":"Van\u00a0Horn, G., Mac\u00a0Aodha, O., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., & Belongie, S. (2018). The inaturalist species classification and detection dataset. In CVPR.","DOI":"10.1109\/CVPR.2018.00914"},{"key":"2356_CR38","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141. & Polosukhin, I. (2017). Attention is all you need. In NeurIPS."},{"key":"2356_CR39","doi-asserted-by":"crossref","unstructured":"Wang, H., Li, Y., Yao, H., & Li, X. (2023a). CLIPN for zero-shot OOD detection: Teaching clip to say no. In ICCV.","DOI":"10.1109\/ICCV51070.2023.00173"},{"key":"2356_CR40","doi-asserted-by":"crossref","unstructured":"Wang, H., Li, Z., Feng, L., & Zhang, W. (2022). VIM: out-of-distribution with virtual-logit matching. In CVPR.","DOI":"10.1109\/CVPR52688.2022.00487"},{"key":"2356_CR41","unstructured":"Wang, H., Liu, W., Bocchieri, A., & Li, Y. (2021). Can multi-label classification networks know what they don\u2019t know? In NeurIPS."},{"key":"2356_CR42","doi-asserted-by":"crossref","unstructured":"Xiao, J., Hays, J., Ehinger, K.A., Oliva, A., & Torralba, A. (2010). Sun database: Large-scale scene recognition from abbey to zoo. In CVPR.","DOI":"10.1109\/CVPR.2010.5539970"},{"key":"2356_CR43","doi-asserted-by":"crossref","unstructured":"Xu, K., Ren, T., Zhang, S., Feng, Y., & Xiong, C. (2021). Unsupervised out-of-domain detection via pre-trained transformers. In ACL.","DOI":"10.18653\/v1\/2021.acl-long.85"},{"key":"2356_CR44","doi-asserted-by":"crossref","unstructured":"Xu, M., Zhang, Z., Wei, F., Hu, H., & Bai, X. (2023). Side adapter network for open-vocabulary semantic segmentation. In CVPR.","DOI":"10.1109\/CVPR52729.2023.00288"},{"key":"2356_CR45","unstructured":"Yang, J., Wang, P., Zou, D., Zhou, Z., Ding, K., Peng, W., Wang, H., Chen, G., Li, B., Sun, Y., Du, X., Zhou, K., Zhang, W., Hendrycks, D., Li, Y., & Liu, Z. (2022). Openood: Benchmarking generalized out-of-distribution detection. In NeurIPS Datasets and Benchmarks Track."},{"key":"2356_CR46","unstructured":"Zhang, J., Yang, J., Wang, P., Wang, H., Lin, Y., Zhang, H., Sun, Y., Du, X., Zhou, K. Zhang, W., Li, Y., Liu, Z., Chen, Y., & Li, H. (2023). Openood v1.5: Enhanced benchmark for out-of-distribution detection. In NeurIPS Datasets and Benchmarks Track."},{"key":"2356_CR47","doi-asserted-by":"crossref","unstructured":"Zhang, R., Wei, Z., Fang, R., Gao, P., Li, K., Dai, J., Qiao, Y., & Li H. (2022). Tip-adapter: Training-free adaption of clip for few-shot classification. In ECCV.","DOI":"10.1007\/978-3-031-19833-5_29"},{"key":"2356_CR48","doi-asserted-by":"crossref","unstructured":"Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A. (2017). Places: A 10 million image database for scene recognition. In TPAMI, (Vol. 40(6), pp. 1452\u20131464).","DOI":"10.1109\/TPAMI.2017.2723009"},{"key":"2356_CR49","doi-asserted-by":"crossref","unstructured":"Zhou, C., Loy, C. C., & Dai B. (2022). Extract free dense labels from clip. In ECCV.","DOI":"10.1007\/978-3-031-19815-1_40"},{"key":"2356_CR50","doi-asserted-by":"crossref","unstructured":"Zhou, K., Yang, J., Loy, C. C., & Liu, Z. (2022). Learning to prompt for vision-language models. In IJCV.","DOI":"10.1007\/s11263-022-01653-1"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02356-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02356-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02356-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,5,10]],"date-time":"2025-05-10T06:55:58Z","timestamp":1746860158000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02356-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,22]]},"references-count":50,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["2356"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02356-z","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,22]]},"assertion":[{"value":"21 April 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 January 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 January 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}