{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,31]],"date-time":"2025-12-31T14:56:24Z","timestamp":1767192984122,"version":"3.37.3"},"reference-count":46,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2021,7,23]],"date-time":"2021-07-23T00:00:00Z","timestamp":1626998400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,7,23]],"date-time":"2021-07-23T00:00:00Z","timestamp":1626998400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61702363","51978441"],"award-info":[{"award-number":["61702363","51978441"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001381","name":"National Research Foundation Singapore","doi-asserted-by":"publisher","award":["International Research Centres in Singapore Funding Initiative"],"award-info":[{"award-number":["International Research Centres in Singapore Funding Initiative"]}],"id":[{"id":"10.13039\/501100001381","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis Comput"],"published-print":{"date-parts":[[2022,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Line drawing with colorization is a popular art format and tool for architectural illustration. The goal of this research is toward generating a high-quality and natural-looking colorization based on an architectural line drawing. 
This paper presents a new Generative Adversarial Network (GAN)-based method, named <jats:italic>ArchGANs<\/jats:italic>, comprising <jats:italic>ArchColGAN<\/jats:italic> and <jats:italic>ArchShdGAN<\/jats:italic>. <jats:italic>ArchColGAN<\/jats:italic> is a GAN-based line-feature-aware network for stylized colorization generation. <jats:italic>ArchShdGAN<\/jats:italic> is a lighting-effects generation network that benefits the depiction of buildings in 3D. In particular, <jats:italic>ArchColGAN<\/jats:italic> preserves the important line features and the correlation of building parts while reducing the uneven colorization caused by sparse lines. Moreover, we propose a color enhancement method to further improve <jats:italic>ArchColGAN<\/jats:italic>. Beyond single line drawing images, we also extend our method to handle line drawing image sequences, achieving rotation animation. Experiments and studies demonstrate the effectiveness and usefulness of our proposed method for colorization prototyping.<\/jats:p>","DOI":"10.1007\/s00371-021-02219-x","type":"journal-article","created":{"date-parts":[[2021,7,23]],"date-time":"2021-07-23T19:03:10Z","timestamp":1627066990000},"page":"1283-1300","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["A GAN-based approach toward architectural line drawing colorization 
prototyping"],"prefix":"10.1007","volume":"38","author":[{"given":"Qian","family":"Sun","sequence":"first","affiliation":[]},{"given":"Yan","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Wenyuan","family":"Tao","sequence":"additional","affiliation":[]},{"given":"Han","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Mu","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Kan","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Marius","family":"Erdt","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,7,23]]},"reference":[{"key":"2219_CR1","unstructured":"Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, DG., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., Zheng, X.:TensorFlow: A system for large-scale machine learning. In: OSDI\u201916: Proceedings of the 12th USENIX symposium on operating systems design and implementation, pp 265\u2013283(2016)"},{"key":"2219_CR2","doi-asserted-by":"crossref","unstructured":"Bousseau, A., Kaplan, M., Thollot, J., Sillion, FX.: Interactive watercolor rendering with temporal coherence and abstraction. In: NPAR\u201906: Proceedings of the 2006 international symposium on non-photorealistic animation and rendering, pp 141\u2013149(2006)","DOI":"10.1145\/1124728.1124751"},{"key":"2219_CR3","doi-asserted-by":"crossref","unstructured":"Byeon W, Wang Q, Kumar\u00a0Srivastava R, Koumoutsakos P (2018) ContextVP: Fully context-aware video prediction. In: ECCV\u201918: Proceedings of the European conference on computer vision, pp 753\u2013769","DOI":"10.1007\/978-3-030-01270-0_46"},{"issue":"6","key":"2219_CR4","first-page":"1","volume":"37","author":"K Cao","year":"2018","unstructured":"Cao, K., Liao, J., Yuan, L.: CariGANs: unpaired photo-to-caricature translation. ACM Trans. Graph. 
37(6), 1\u201314 (2018)","journal-title":"ACM Trans. Graph."},{"key":"2219_CR5","unstructured":"Capcom (2008) Street Fighter IV"},{"key":"2219_CR6","doi-asserted-by":"crossref","unstructured":"Chen, D., Liao, J., Yuan, L., Yu, N., Hua, G.: Coherent online video style transfer. In: ICCV\u201917: Proceedings of the IEEE international conference on computer vision, pp 1105\u20131114 (2017a)","DOI":"10.1109\/ICCV.2017.126"},{"key":"2219_CR7","doi-asserted-by":"crossref","unstructured":"Chen, D., Yuan, L., Liao, J., Yu, N., Hua, G.: Stylebank: An explicit representation for neural image style transfer. In: CVPR\u201917: Proceedings of the 2017 IEEE conference on computer vision and pattern recognition, pp 1897\u20131906 (2017b)","DOI":"10.1109\/CVPR.2017.296"},{"issue":"3","key":"2219_CR8","doi-asserted-by":"publisher","first-page":"504","DOI":"10.1145\/1073204.1073221","volume":"24","author":"NSH Chu","year":"2005","unstructured":"Chu, N.S.H., Tai, C.L.: MoXi: real-time ink dispersion in absorbent paper. ACM Trans. Graph. 24(3), 504\u2013511 (2005)","journal-title":"ACM Trans. Graph."},{"key":"2219_CR9","unstructured":"Clark, A., Donahue, J., Simonyan, K.: Adversarial video generation on complex datasets. (2019). arXiv preprint arXiv:1907.06571"},{"key":"2219_CR10","unstructured":"Corel (2011) Painter 12. www.corel.com"},{"key":"2219_CR11","doi-asserted-by":"crossref","unstructured":"Curtis, CJ., Anderson, SE., Seims, JE., Fleischer, KW., Salesin, DH.: Computer-generated watercolor. In: SIGGRAPH\u201997: Proceedings of the 1997 annual conference on computer graphics and interactive techniques, pp 421\u2013430 (1997)","DOI":"10.1145\/258734.258896"},{"issue":"3","key":"2219_CR12","doi-asserted-by":"publisher","first-page":"848","DOI":"10.1145\/882262.882354","volume":"22","author":"D DeCarlo","year":"2003","unstructured":"DeCarlo, D., Finkelstein, A., Rusinkiewicz, S., Santella, A.: Suggestive contours for conveying shape. ACM Trans. Graph. 
22(3), 848\u2013855 (2003)","journal-title":"ACM Trans. Graph."},{"issue":"3","key":"2219_CR13","first-page":"1667","volume":"35","author":"L Fang","year":"2013","unstructured":"Fang, L., Wang, J., Lu, G., Zhang, D., Fu, J.: Hand-drawn grayscale image colorful colorization based on natural image. The Vis. Comput. 35(3), 1667\u20131681 (2013)","journal-title":"The Vis. Comput."},{"key":"2219_CR14","doi-asserted-by":"crossref","unstructured":"Gatys, LA., Ecker, AS., Bethge, M.:Image style transfer using convolutional neural networks. In: CVPR\u201916: proceedings of the 2016 IEEE conference on computer vision and pattern recognition, pp 2414\u20132423 (2016)","DOI":"10.1109\/CVPR.2016.265"},{"key":"2219_CR15","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.:Generative adversarial networks. In: NIPS\u201914: Proceedings of the 2014 international conference on neural information processing systems, pp 2672\u20132680 (2014)"},{"key":"2219_CR16","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.:Deep residual learning for image recognition. In: CVPR\u201916: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"issue":"4","key":"2219_CR17","doi-asserted-by":"publisher","first-page":"70","DOI":"10.1109\/MCG.2003.1210867","volume":"23","author":"A Hertzmann","year":"2003","unstructured":"Hertzmann, A.: Tutorial: a survey of stroke-based rendering. IEEE Comput. Graph. Appl. 23(4), 70\u201381 (2003)","journal-title":"IEEE Comput. Graph. Appl."},{"issue":"8","key":"2219_CR18","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 
9(8), 1735\u20131780 (1997)","journal-title":"Neural Comput."},{"issue":"9","key":"2219_CR19","doi-asserted-by":"publisher","first-page":"861","DOI":"10.1007\/s00371-011-0596-5","volume":"27","author":"H Huang","year":"2011","unstructured":"Huang, H., Fu, T.N., Li, C.F.: Painterly rendering with content-dependent natural paint strokes. The Vis. Comput. 27(9), 861\u2013871 (2011)","journal-title":"The Vis. Comput."},{"key":"2219_CR20","doi-asserted-by":"crossref","unstructured":"Huang, X., Liu, MY., Belongie, S., Kautz, J.:Multimodal unsupervised image-to-image translation. In: ECCV\u201918: Proceedings of the 2018 European conference on computer vision, pp 172\u2013189 (2018)","DOI":"10.1007\/978-3-030-01219-9_11"},{"key":"2219_CR21","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, JY., Zhou, T., Efros, AA.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1125\u20131134 (2017)","DOI":"10.1109\/CVPR.2017.632"},{"key":"2219_CR22","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: ECCV\u201916: Proceedings of the European conference on computer vision, pp 694\u2013711 (2016)","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"2219_CR23","doi-asserted-by":"crossref","unstructured":"Judd, T., Durand, F., Adelson, EH.:Apparent ridges for line drawing. ACM Transactions on Graphics 26(3):19\u2013es (2007)","DOI":"10.1145\/1276377.1276401"},{"key":"2219_CR24","unstructured":"Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation. (2017). arXiv preprint arXiv:1710.10196"},{"key":"2219_CR25","doi-asserted-by":"crossref","unstructured":"Kim, H., Jhoo, HY., Park, E., Yoo, S.:Tag2Pix: Line art colorization using text tag with SECat and changing loss. 
In: ICCV\u201919: Proceedings of the IEEE international conference on computer vision, pp 9056\u20139065 (2019)","DOI":"10.1109\/ICCV.2019.00915"},{"key":"2219_CR26","unstructured":"Kim, T., Cha, M., Kim, H., Lee, JK., Kim, J.: Learning to discover cross-domain relations with generative adversarial networks. In: ICML\u201917: Proceedings of the 2017 international conference on machine learning, pp 1857\u20131865 (2017)"},{"key":"2219_CR27","doi-asserted-by":"crossref","unstructured":"Kolomenkin, M., Shimshoni, I., Tal, A.: On edge detection on surfaces. In: CVPR\u201909: Proceedings of the 2009 IEEE conference on computer vision and pattern recognition, pp 2767\u20132774 (2009)","DOI":"10.1109\/CVPR.2009.5206517"},{"key":"2219_CR28","doi-asserted-by":"crossref","unstructured":"Lei, SIE., Chang, CF.:Real-time rendering of watercolor effects for virtual environments. In: PCM\u201904: Proceedings of the 2004 Pacific Rim conference on Advances in multimedia information processing, pp 474\u2013481 (2004)","DOI":"10.1007\/978-3-540-30543-9_60"},{"key":"2219_CR29","doi-asserted-by":"crossref","unstructured":"Liao, J., Yao, Y., Yuan, L., Hua, G., Kang, SB.: Visual attribute transfer through deep image analogy. (2017). arXiv preprint arXiv:1705.01088","DOI":"10.1145\/3072959.3073683"},{"key":"2219_CR30","unstructured":"Liu, MY., Breuel, T., Kautz, J.:Unsupervised image-to-image translation networks. In: NIPS\u201917: Proceedings of the 2017 international conference on neural information processing systems, pp 700\u2013708 (2017)"},{"key":"2219_CR31","doi-asserted-by":"crossref","unstructured":"Luft, T., Deussen, O.:Real-time watercolor illustrations of plants using a blurred depth test. In: NPAR\u201906: Proceedings of the 2006 international symposium on non-photorealistic animation and rendering, pp 11\u201320 (2006)","DOI":"10.1145\/1124728.1124732"},{"key":"2219_CR32","unstructured":"Luft, T., Kobs, F., Zinser, W., Deussen, O.: Watercolor illustrations of CAD data. 
In: Computational Aesthetics\u201908: Proceedings of the 2008 Eurographics conference on computational aesthetics in graphics, visualization and imaging, pp 57\u201363 (2008)"},{"issue":"3","key":"2219_CR33","doi-asserted-by":"publisher","first-page":"609","DOI":"10.1145\/1015706.1015768","volume":"23","author":"Y Ohtake","year":"2004","unstructured":"Ohtake, Y., Belyaev, A., Seidel, H.P.: Ridge-valley lines on meshes via implicit surface fitting. ACM Trans. Graph. 23(3), 609\u2013612 (2004)","journal-title":"ACM Trans. Graph."},{"issue":"7","key":"2219_CR34","doi-asserted-by":"publisher","first-page":"753","DOI":"10.1007\/s00371-008-0257-5","volume":"24","author":"N Okaichi","year":"2008","unstructured":"Okaichi, N., Johan, H., Imagire, T., Nishita, T.: A virtual painting knife. The Vis. Comput. 24(7), 753\u2013763 (2008)","journal-title":"The Vis. Comput."},{"key":"2219_CR35","unstructured":"Schaller, T.W.: The art of architectural drawing: imagination and technique. Wiley (1997)"},{"issue":"5","key":"2219_CR36","doi-asserted-by":"publisher","first-page":"1045","DOI":"10.1109\/TPAMI.2017.2691321","volume":"40","author":"A Shahroudy","year":"2017","unstructured":"Shahroudy, A., Ng, T.T., Gong, Y., Wang, G.: Deep multimodal feature analysis for action recognition in RGB+D videos. IEEE Trans. Pattern Anal. Mach. Intell. 40(5), 1045\u20131058 (2017)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"2219_CR37","unstructured":"Simonyan, K., Zisserman, A.:Very deep convolutional networks for large-scale image recognition. (2014). arXiv preprint arXiv:1409.1556"},{"key":"2219_CR38","doi-asserted-by":"crossref","unstructured":"Tao, W., Jiang, H., Sun, Q., Zhang, M., Chen, K., Erdt, M.:ArchGANs: stylized colorization prototyping for architectural line drawing. 
In: CW\u201920: Proceedings of the 2020 international conference on cyberworlds, pp 33\u201340 (2020)","DOI":"10.1109\/CW49994.2020.00013"},{"key":"2219_CR39","doi-asserted-by":"crossref","unstructured":"Tulyakov, S., Liu, MY., Yang, X., Kautz, J.:MoCoGAN: Decomposing motion and content for video generation. In: CVPR\u201918: Proceedings of the 2018 IEEE conference on computer vision and pattern recognition, pp 1526\u20131535 (2018)","DOI":"10.1109\/CVPR.2018.00165"},{"issue":"3\u20134","key":"2219_CR40","doi-asserted-by":"publisher","first-page":"429","DOI":"10.1002\/cav.95","volume":"16","author":"T Van Laerhoven","year":"2005","unstructured":"Van Laerhoven, T., Van Reeth, F.: Real-time simulation of watery paint: natural phenomena and special effects. Comput. Animat. Virtual Worlds 16(3\u20134), 429\u2013439 (2005)","journal-title":"Comput. Animat. Virtual Worlds"},{"key":"2219_CR41","doi-asserted-by":"crossref","unstructured":"Yi, Z., Zhang, H., Tan, P., Gong, M.:DualGAN: Unsupervised dual learning for image-to-image translation. In: ICCV\u201917: Proceedings of the 2017 IEEE international conference on computer vision, pp 2849\u20132857 (2017)","DOI":"10.1109\/ICCV.2017.310"},{"issue":"9","key":"2219_CR42","doi-asserted-by":"publisher","first-page":"969","DOI":"10.1007\/s00371-013-0881-6","volume":"30","author":"Y Zang","year":"2013","unstructured":"Zang, Y., Huang, H., Li, C.F.: Artistic preprocessing for painterly rendering and image stylization. The Vis. Comput. 30(9), 969\u2013979 (2013)","journal-title":"The Vis. Comput."},{"issue":"6\u20138","key":"2219_CR43","doi-asserted-by":"publisher","first-page":"399","DOI":"10.1007\/s00371-010-0454-x","volume":"26","author":"L Zhang","year":"2010","unstructured":"Zhang, L., He, Y., Seah, H.S.: Real-time computation of photic extremum lines (PELs). The Vis. Comput. 26(6\u20138), 399\u2013407 (2010)","journal-title":"The Vis. 
Comput."},{"key":"2219_CR44","doi-asserted-by":"crossref","unstructured":"Zhang, L., Sun, Q., He, Y.:Splatting Lines: An efficient method for illustrating 3d surfaces and volumes. In: I3D\u201914: Proceedings of the 2014 ACM SIGGRAPH symposium on interactive 3D graphics and games, pp 135\u2013142 (2014)","DOI":"10.1145\/2556700.2556703"},{"key":"2219_CR45","doi-asserted-by":"crossref","unstructured":"Zhu, JY., Park, T., Isola, P., Efros, AA.:Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV\u201917: Proceedings of the 2017 IEEE conference on computer vision and pattern recognition, pp 2223\u20132232 (2017a)","DOI":"10.1109\/ICCV.2017.244"},{"key":"2219_CR46","unstructured":"Zhu, JY., Zhang, R., Pathak, D., Darrell, T., Efros, AA., Wang, O., Shechtman, E.:Toward multimodal image-to-image translation. In: NIPS\u201917: Proceedings of the 2017 international conference on neural information processing systems, pp 465\u2013476 (2017b)"}],"container-title":["The Visual 
Computer"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-021-02219-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00371-021-02219-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-021-02219-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,7,21]],"date-time":"2022-07-21T14:20:32Z","timestamp":1658413232000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00371-021-02219-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,7,23]]},"references-count":46,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2022,4]]}},"alternative-id":["2219"],"URL":"https:\/\/doi.org\/10.1007\/s00371-021-02219-x","relation":{},"ISSN":["0178-2789","1432-2315"],"issn-type":[{"type":"print","value":"0178-2789"},{"type":"electronic","value":"1432-2315"}],"subject":[],"published":{"date-parts":[[2021,7,23]]},"assertion":[{"value":"6 June 2021","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 July 2021","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}