{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,9]],"date-time":"2026-03-09T14:36:32Z","timestamp":1773066992856,"version":"3.50.1"},"reference-count":54,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2026,3,1]],"date-time":"2026-03-01T00:00:00Z","timestamp":1772323200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,3,9]],"date-time":"2026-03-09T00:00:00Z","timestamp":1773014400000},"content-version":"vor","delay-in-days":8,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100008332","name":"Graz University of Technology","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100008332","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Mach Learn"],"published-print":{"date-parts":[[2026,3]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Test-time adaptation (TTA) aims to improve model robustness under domain shifts without access to source data\u2013an essential capability for real-world applications such as autonomous driving and robotics. Existing TTA methods for semantic segmentation often rely on stochastic techniques like Monte Carlo dropout or augmentation-averaged predictions to estimate uncertainty or stabilize outputs. However, these approaches typically require multiple forward passes, which are computationally expensive and limit real-time applicability. We propose GaPaTTA, a lightweight and deterministic TTA framework built on SegFormer. Unlike previous methods, GaPaTTA adopts a single forward pass with a traditional augmentation strategy, avoiding repeated inference required by ensemble-based TTA approaches. 
Key innovations include: (1) Grad-CAM-based global prompt placement, which identifies the most relevant encoder layers for adaptation; (2) Gaussian entropy-guided local prompt injection, which selects the top-K most uncertain pixels; (3) Shannon entropy-based filtering, which suppresses unreliable pseudo-labels; and (4) cross-stage consistency, which aligns mid- and high-level features for structural coherence. Experiments on ACDC (A-Fog, A-Night, A-Rain, A-Snow), Cityscapes-Foggy (CS-Fog), and Cityscapes-Rainy (CS-Rain) demonstrate that GaPaTTA consistently outperforms previous TTA methods in mean intersection over union (mIoU) while reducing inference time by over 50%. The source code is available at\n                    <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/ml4papers\/GaPaTTA\" ext-link-type=\"uri\">https:\/\/github.com\/ml4papers\/GaPaTTA<\/jats:ext-link>\n                    .\n                  <\/jats:p>","DOI":"10.1007\/s10994-025-06988-7","type":"journal-article","created":{"date-parts":[[2026,3,9]],"date-time":"2026-03-09T10:14:28Z","timestamp":1773051268000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["GaPaTTA: Gaussian Entropy-Guided Prompt Placement for Test-Time Adaptation in Semantic Segmentation"],"prefix":"10.1007","volume":"115","author":[{"given":"Jixiang","family":"Lei","sequence":"first","affiliation":[]},{"given":"Franz","family":"Pernkopf","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,3,9]]},"reference":[{"key":"6988_CR1","doi-asserted-by":"crossref","unstructured":"Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
3213\u20133223.","DOI":"10.1109\/CVPR.2016.350"},{"key":"6988_CR2","volume-title":"Elements of Information Theory","author":"TM Cover","year":"2006","unstructured":"Cover, T. M., & Thomas, J. A. (2006). Elements of Information Theory. John Wiley & Sons."},{"key":"6988_CR3","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv:2010.11929"},{"key":"6988_CR4","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2024.106230","author":"Y Fang","year":"2024","unstructured":"Fang, Y., Yap, P., Lin, W., Zhu, H., & Liu, M. (2024). Source-free unsupervised domain adaptation: A survey. Neural Networks. https:\/\/doi.org\/10.1016\/j.neunet.2024.106230","journal-title":"Neural Networks"},{"key":"6988_CR5","unstructured":"Feng, X., Xu, Y., Yuan, Y., Lu, J., & Li, C. (2020). Dmt: Dynamic mutual training for semi-supervised learning. In European Conference on Computer Vision (ECCV), pp. 585\u2013602."},{"key":"6988_CR6","unstructured":"Gal, Y., Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML."},{"key":"6988_CR7","doi-asserted-by":"publisher","first-page":"7595","DOI":"10.1609\/aaai.v37i6.25922","volume":"37","author":"Y Gan","year":"2023","unstructured":"Gan, Y., Bai, Y., Lou, Y., Ma, X., Zhang, R., Shi, N., & Luo, L. (2023). Decorate the newcomers: Visual domain prompt for continual test time adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 37, 7595\u20137603.","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"issue":"59","key":"6988_CR8","first-page":"1","volume":"17","author":"Y Ganin","year":"2016","unstructured":"Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., March, M., & Lempitsky, V. (2016). 
Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59), 1\u201335.","journal-title":"Journal of Machine Learning Research"},{"key":"6988_CR9","doi-asserted-by":"crossref","unstructured":"Gao, J., Zhang, J., Liu, X., Darrell, T., Shelhamer, E., & Wang, D. (2023). Back to the source: Diffusion-driven test-time adaptation. arXiv:2207.03442","DOI":"10.1109\/CVPR52729.2023.01134"},{"key":"6988_CR10","unstructured":"Gao, Y., Shi, X., Zhu, Y., Wang, H., Tang, Z., Zhou, X., Li, M., & Metaxas, D. (2022). Visual Prompt Tuning for Test-time Domain Adaptation. arXiv:2210.04831"},{"key":"6988_CR11","unstructured":"Han, J., Na, J., & Hwang, W. (2025). Ranked entropy minimization for continual test-time adaptation. arXiv preprint arXiv:2505.16441"},{"key":"6988_CR12","doi-asserted-by":"publisher","first-page":"123402","DOI":"10.52202\/079017-3923","volume":"37","author":"TH Hoang","year":"2024","unstructured":"Hoang, T. H., Vo, M., & Do, M. (2024). Persistent test-time adaptation in recurring testing scenarios. Advances in Neural Information Processing Systems, 37, 123402\u2013123442.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6988_CR13","doi-asserted-by":"crossref","unstructured":"Hu, X., Li, C., Zhu, L., & Heng, P. (2019). Depth-attentional features for single-image rain removal. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8022\u20138031.","DOI":"10.1109\/CVPR.2019.00821"},{"key":"6988_CR14","unstructured":"Huang, X., Zheng, P., Wang, L., Huang, Z., Chen, W., Wang, Y., & Han, B. (2024). Test-time model adaptation with only forward passes. arXiv:2404.01650"},{"key":"6988_CR15","doi-asserted-by":"publisher","first-page":"74211","DOI":"10.52202\/079017-2361","volume":"37","author":"H-K Jang","year":"2024","unstructured":"Jang, H.-K., Kim, J., Kweon, H., & Yoon, K.-J. (2024). 
Talos: Enhancing semantic scene completion via test-time adaptation on the line of sight. Advances in Neural Information Processing Systems, 37, 74211\u201374232.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6988_CR16","doi-asserted-by":"crossref","unstructured":"Jia, M., Tang, L., Chen, B., Cardie, C., Belongie, S., Hariharan, B., & Lim, S. (2022). Visual prompt tuning. In European Conference on Computer Vision, pp. 709\u2013727.","DOI":"10.1007\/978-3-031-19827-4_41"},{"key":"6988_CR17","unstructured":"Kingma, D., & Ba, J. (2017). Adam: A method for stochastic optimization. arXiv:1412.6980"},{"key":"6988_CR18","unstructured":"Lee, J., Jung, D., Lee, S., Park, J., Shin, J., Hwang, U., & Yoon, S. (2024). Entropy is not enough for test-time adaptation: From the perspective of disentangled factors. arXiv:2403.07366"},{"key":"6988_CR19","unstructured":"Lee, T., Chottananurak, S., Kim, J., Shin, J., Gong, T., & Lee, S.-J. (2025). Test-time adaptation with binary feedback. arXiv preprint arXiv:2505.18514"},{"key":"6988_CR20","unstructured":"Lei, J. (2022). Interpretation of semantic urban scene segmentation for autonomous vehicles. Master\u2019s thesis, Johannes Kepler University Linz. Master\u2019s Thesis, JKU Linz. https:\/\/epub.jku.at\/obvulihs\/content\/titleinfo\/7773812"},{"key":"6988_CR21","unstructured":"Lei, J., & Pernkopf, F. (2024). Two-Level Test-Time Adaptation in Multimodal Learning. https:\/\/openreview.net\/forum?id=n0lDbIKVAT. ICML 2024 Workshop."},{"key":"6988_CR22","doi-asserted-by":"crossref","unstructured":"Li, Y., Wang, N., & Zhang, Z. (2020). Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 
9641\u20139650).","DOI":"10.1109\/CVPR42600.2020.00966"},{"issue":"1","key":"6988_CR23","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1007\/s11263-024-02181-w","volume":"133","author":"J Liang","year":"2025","unstructured":"Liang, J., He, R., & Tan, T. (2025). A comprehensive survey on test-time adaptation under distribution shifts. International Journal of Computer Vision, 133(1), 31\u201364.","journal-title":"International Journal of Computer Vision"},{"key":"6988_CR24","doi-asserted-by":"crossref","unstructured":"Litrico, M., Del Bue, A., & Morerio, P. (2023). Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 7640\u20137650.","DOI":"10.1109\/CVPR52729.2023.00738"},{"key":"6988_CR25","unstructured":"Liu, J., Yang, S., Jia, P., Zhang, R., Lu, M., Guo, Y., Xue, W., & Zhang, S. (2023). Vida: Homeostatic visual domain adapter for continual test time adaptation. arXiv preprint arXiv:2306.04344"},{"key":"6988_CR26","doi-asserted-by":"crossref","unstructured":"Liu, Y., Zhang, W., & Wang, J. (2021). Source-free domain adaptation for semantic segmentation. arXiv:2103.16372","DOI":"10.1109\/CVPR46437.2021.00127"},{"key":"6988_CR27","unstructured":"Long, M., Cao, Z., Wang, J., & Jordan, M.I. (2018). Conditional adversarial domain adaptation. Advances in Neural Information Processing Systems 31."},{"key":"6988_CR28","doi-asserted-by":"crossref","unstructured":"Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., & Terzopoulos, D. (2020). Image Segmentation using deep learning: A survey. arXiv:2001.05566","DOI":"10.1109\/TPAMI.2021.3059968"},{"key":"6988_CR29","unstructured":"Mirza, M., Masana, M., Possegger, H., & Bischof, H. (2022). An efficient domain-incremental learning approach to drive in all weather conditions. 
In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (pp. 3001\u20133011)."},{"key":"6988_CR30","doi-asserted-by":"crossref","unstructured":"Ni, J., Yang, S., Xu, R., Liu, J., Li, X., Jiao, W., Chen, Z., Liu, Y., & Zhang, S. (2024). Distribution-aware continual test-time adaptation for semantic segmentation. arXiv:2309.13604","DOI":"10.1109\/ICRA57147.2024.10610045"},{"key":"6988_CR31","unstructured":"Niu, S., Chen, G., Zhao, P., Wang, T., Wu, P., & Shen, Z. (2025). Self-bootstrapping for versatile test-time adaptation. arXiv:2504.08010"},{"key":"6988_CR32","unstructured":"Niu, S., Wu, J., Zhang, Y., Chen, Y., Zheng, S., Zhao, P., & Tan, M. (2022). Efficient test-time model adaptation without forgetting. In International Conference on Machine Learning, pp. 16888\u201316905."},{"key":"6988_CR33","unstructured":"Niu, S., Wu, J., Zhang, Y., Wen, Z., Chen, Y., Zhao, P., & Tan, M. (2023). Towards stable test-time adaptation in dynamic wild world. arXiv:2302.12400"},{"key":"6988_CR34","unstructured":"O\u2019shea, K., & Nash, R. (2015). An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458"},{"key":"6988_CR35","doi-asserted-by":"crossref","unstructured":"Pissas, T., Ravasio, C. S., Cruz, L. D., & Bergeles, C. (2022). Multi-scale and cross-scale contrastive learning for semantic segmentation. In European Conference on Computer Vision, pp. 413\u2013429. Springer.","DOI":"10.1007\/978-3-031-19818-2_24"},{"key":"6988_CR36","unstructured":"Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434"},{"key":"6988_CR37","doi-asserted-by":"crossref","unstructured":"Roy, S., Trapp, M., Pilzer, A., Kannala, J., Sebe, N., Ricci, E., & Solin, A. (2022). Uncertainty-guided source-free domain adaptation. In European Conference on Computer Vision, pp. 537\u2013555. 
Springer.","DOI":"10.1007\/978-3-031-19806-9_31"},{"key":"6988_CR38","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-018-1072-8","author":"C Sakaridis","year":"2018","unstructured":"Sakaridis, C., Dai, D., & Van\u00a0Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision. https:\/\/doi.org\/10.1007\/s11263-018-1072-8","journal-title":"International Journal of Computer Vision"},{"key":"6988_CR39","doi-asserted-by":"crossref","unstructured":"Sakaridis, C., Wang, H., Li, K., Zurbr\u00fcgg, R., Jadon, A., Abbeloos, W., Reino, D.O., Van\u00a0Gool, L., & Dai, D. (2021). ACDC: The adverse conditions dataset with correspondences for robust semantic driving scene perception. arXiv preprint arXiv:2104.13395","DOI":"10.1109\/ICCV48922.2021.01059"},{"key":"6988_CR40","doi-asserted-by":"crossref","unstructured":"Shin, J., & Kim, H. (2024). L-tta: Lightweight test-time adaptation using a versatile stem layer. In: Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., Zhang, C. (eds.) Advances in Neural Information Processing Systems, vol. 37, pp. 39325\u201339349. Curran Associates, Inc.","DOI":"10.52202\/079017-1242"},{"key":"6988_CR41","doi-asserted-by":"crossref","unstructured":"Tang, Y., Chen, S., Jia, J., Zhang, Y., & He, Z. (2024). Domain-conditioned transformer for fully test-time adaptation. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 6260\u20136269.","DOI":"10.1145\/3664647.3680678"},{"key":"6988_CR42","doi-asserted-by":"crossref","unstructured":"Vinogradova, K., Dibrov, A., & Myers, G. (2020). Towards interpretable semantic segmentation via gradient-weighted class activation mapping (student abstract). In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 13943\u201313944.","DOI":"10.1609\/aaai.v34i10.7244"},{"key":"6988_CR43","unstructured":"Wang, D., Shelhamer, E., Liu, S., Olshausen, B., & Darrell, T. (2020). 
Tent: Fully test-time adaptation by entropy minimization. arXiv:2006.10726"},{"key":"6988_CR44","doi-asserted-by":"crossref","unstructured":"Wang, Q., Dai, D., Hoyer, L., Gool, L., & Fink, O. (2021). Domain adaptive semantic segmentation with self-supervised depth estimation. arXiv:2104.13613","DOI":"10.1109\/ICCV48922.2021.00840"},{"key":"6988_CR45","doi-asserted-by":"crossref","unstructured":"Wang, Q., Fink, O., Van Gool, L., & Dai, D. (2022). Continual test-time domain adaptation. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 7201\u20137211).","DOI":"10.1109\/CVPR52688.2022.00706"},{"key":"6988_CR46","unstructured":"Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J., & Luo, P. (2021). SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. arXiv:2105.15203"},{"key":"6988_CR47","unstructured":"Yang, S., Li, Y., Ma, Y., Wang, Y., & Loy, C.C. (2022). Stc: Simple test-time classifier adjustment for domain adaptation. In European Conference on Computer Vision (ECCV)."},{"key":"6988_CR48","doi-asserted-by":"crossref","unstructured":"Yang, S., Wu, J., Liu, J., Li, X., Zhang, Q., Pan, M., Gan, Y., Chen, Z., & Zhang, S. (2024). Exploring sparse visual prompt for domain adaptive dense prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 16334\u201316342.","DOI":"10.1609\/aaai.v38i15.29569"},{"key":"6988_CR49","doi-asserted-by":"crossref","unstructured":"Yeh, Y.-H., Liu, T.-W., & Kao, H.-Y. (2021). Sofa: Source-free feature alignment for unsupervised domain adaptation. In Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision. pp. 319\u2013328.","DOI":"10.1109\/WACV48630.2021.00052"},{"key":"6988_CR50","doi-asserted-by":"crossref","unstructured":"Yi, C., Chen, H., Zhang, Y., Xu, Y., Zhou, Y., & Cui, L. (2024). From question to exploration: Can classic test-time adaptation strategies be effectively applied in semantic segmentation? 
https:\/\/openreview.net\/forum?id=AVD5XdFDN7. ACM Multimedia 2024.","DOI":"10.1145\/3664647.3680910"},{"key":"6988_CR51","unstructured":"Yu, Y., Sheng, L., He, R., & Liang, J. (2023). Benchmarking test-time adaptation against distribution shifts in image classification. arXiv:2307.03133"},{"key":"6988_CR52","unstructured":"Yuan, L., Li, S., He, Z., & Xie, B. (2023). Few clicks suffice: Active test-time adaptation for semantic segmentation. arXiv:2312.01835"},{"key":"6988_CR53","unstructured":"Zhang, M., Levine, S., & Finn, C. (2022). Memo: Test time robustness via adaptation and augmentation. In Advances in Neural Information Processing Systems."},{"key":"6988_CR54","unstructured":"Zhang, Y., Patras, P., & Hospedales, T.M. (2021). Proda: Progressive domain adaptation for semantic segmentation. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7491\u20137500."}],"container-title":["Machine Learning"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10994-025-06988-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10994-025-06988-7","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10994-025-06988-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,9]],"date-time":"2026-03-09T10:14:46Z","timestamp":1773051286000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10994-025-06988-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3]]},"references-count":54,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2026,3]]}},"alternative-id":["6988"],"URL":"https:\/\/doi.org\/10.1007\/s10994-025-06988-7","relation":{},"ISSN":["0885-6125","1573-0565"],"issn-type":[{"value":"0885-6125","type":"print"},{"value":"1573-0565","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3]]},"assertion":[{"value":"28 May 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 September 2025","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 December 2025","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 March 2026","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"65"}}