{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:39:07Z","timestamp":1773801547246,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"10","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>With the rapid advancement of image generation, visual text editing with natural language instructions has received increasing attention. The main challenge of this task is to fully understand the instruction and the reference image, and thus to generate visual text that is style-consistent with the image. Previous methods often involve complex steps for specifying the text content and attributes, such as font size, color, and layout, without considering stylistic consistency with the reference image. To address this, we propose UM-Text, a unified multimodal model for context understanding and visual text editing via natural language instructions. Specifically, we introduce a Visual Language Model (VLM) to process the instruction and reference image, so that the text content and layout can be elaborately designed according to the context information. To generate an accurate and harmonious visual text image, we further propose the UM Encoder to combine the embeddings of various condition information, where the combination is automatically configured by the VLM according to the input instruction. During training, we propose a regional consistency loss that offers more effective supervision for glyph generation in both the latent and RGB spaces, and design a tailored three-stage training strategy to further enhance model performance. In addition, we contribute UM-DATA-200K, a large-scale visual text image dataset covering diverse scenes for model training. Extensive qualitative and quantitative results on multiple public benchmarks demonstrate that our method achieves state-of-the-art performance.<\/jats:p>",
"DOI":"10.1609\/aaai.v40i10.37722","type":"journal-article","created":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T23:41:18Z","timestamp":1773790878000},"page":"7791-7799","source":"Crossref","is-referenced-by-count":0,"title":["UM-Text: A Unified Multimodal Model for Image Understanding and Visual Text Editing"],"prefix":"10.1609","volume":"40","author":[{"given":"Lichen","family":"Ma","sequence":"first","affiliation":[]},{"given":"Xiaolong","family":"Fu","sequence":"additional","affiliation":[]},{"given":"Gaojing","family":"Zhou","sequence":"additional","affiliation":[]},{"given":"Zipeng","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Ting","family":"Zhu","sequence":"additional","affiliation":[]},{"given":"Yichun","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Yu","family":"Shi","sequence":"additional","affiliation":[]},{"given":"Jason","family":"Li","sequence":"additional","affiliation":[]},{"given":"Junshi","family":"Huang","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],
"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/37722\/41684","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/37722\/41684","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T23:41:18Z","timestamp":1773790878000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/37722"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i10.37722","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}