{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T20:23:28Z","timestamp":1771273408032,"version":"3.50.1"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"12","license":[{"start":{"date-parts":[[2022,6,8]],"date-time":"2022-06-08T00:00:00Z","timestamp":1654646400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,6,8]],"date-time":"2022-06-08T00:00:00Z","timestamp":1654646400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002347","name":"Bundesministerium f\u00fcr Bildung und Forschung","doi-asserted-by":"publisher","award":["01IS18092"],"award-info":[{"award-number":["01IS18092"]}],"id":[{"id":"10.13039\/501100002347","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002347","name":"Bundesministerium f\u00fcr Bildung und Forschung","doi-asserted-by":"publisher","award":["01IS19006"],"award-info":[{"award-number":["01IS19006"]}],"id":[{"id":"10.13039\/501100002347","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis Comput"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Fast style transfer methods have recently gained popularity in art-related applications as they make a generalized real-time stylization of images practicable. However, they are mostly limited to one-shot stylizations concerning the interactive adjustment of style elements. In particular, the expressive control over stroke sizes or stroke orientations remains an open challenge. 
To this end, we propose a novel stroke-adjustable fast style transfer network that enables simultaneous control over the stroke size and intensity, and allows a wider range of expressive editing than current approaches by utilizing the scale-variance of convolutional neural networks. Furthermore, we introduce a network-agnostic approach for style-element editing by applying reversible input transformations that can adjust strokes in the stylized output. Hereby, stroke orientations can be adjusted, and warping-based effects can be applied to stylistic elements, such as swirls or waves. To demonstrate the real-world applicability of our approach, we present <jats:italic>StyleTune<\/jats:italic>, a mobile app for interactive editing of neural style transfers at multiple levels of control. Our app allows stroke adjustments on a global and local level. It furthermore implements an on-device patch-based upsampling step that enables users to achieve results with high output fidelity and resolutions of more than 20 megapixels. 
Our approach allows users to art-direct their creations and achieve results that are not possible with current style transfer applications.<\/jats:p>","DOI":"10.1007\/s00371-022-02518-x","type":"journal-article","created":{"date-parts":[[2022,6,8]],"date-time":"2022-06-08T19:02:25Z","timestamp":1654714945000},"page":"4019-4033","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Controlling strokes in fast neural style transfer using content transforms"],"prefix":"10.1007","volume":"38","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2146-4229","authenticated-orcid":false,"given":"Max","family":"Reimann","sequence":"first","affiliation":[]},{"given":"Benito","family":"Buchheim","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1553-4940","authenticated-orcid":false,"given":"Amir","family":"Semmo","sequence":"additional","affiliation":[]},{"given":"J\u00fcrgen","family":"D\u00f6llner","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3861-5759","authenticated-orcid":false,"given":"Matthias","family":"Trapp","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,6,8]]},"reference":[{"key":"2518_CR1","unstructured":"Amato, G., Behrmann, M., Bimbot, F., Caramiaux, B., Falchi, F., Garcia, A., Geurts, J., Gibert, J., Gravier, G., Holken, H., et\u00a0al.: AI in the media and creative industries. arXiv preprint arXiv:1905.04175 (2019)"},{"key":"2518_CR2","unstructured":"Babaeizadeh, M., Ghiasi, G.: Adjustable real-time style transfer. 
In: 8th International Conference on Learning Representations, ICLR 2020 (2020)"},{"issue":"3","key":"2518_CR3","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1145\/1531326.1531330","volume":"28","author":"C Barnes","year":"2009","unstructured":"Barnes, C., Shechtman, E., Finkelstein, A., Goldman, D.B.: Patchmatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28(3), 24 (2009)","journal-title":"ACM Trans. Graph."},{"issue":"4","key":"2518_CR4","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2766934","volume":"34","author":"C Barnes","year":"2015","unstructured":"Barnes, C., Zhang, F.L., Lou, L., Wu, X., Hu, S.M.: Patchtable: efficient patch queries for large datasets and applications. ACM Trans. Graph. 34(4), 1\u201310 (2015)","journal-title":"ACM Trans. Graph."},{"issue":"6","key":"2518_CR5","doi-asserted-by":"publisher","first-page":"567","DOI":"10.1109\/34.24792","volume":"11","author":"FL Bookstein","year":"1989","unstructured":"Bookstein, F.L.: Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. 11(6), 567\u2013585 (1989)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"2518_CR6","doi-asserted-by":"crossref","unstructured":"Chen, D., Yuan, L., Liao, J., Yu, N., Hua, G.: Stereoscopic neural style transfer. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6654\u20136663 (2018)","DOI":"10.1109\/CVPR.2018.00696"},{"key":"2518_CR7","unstructured":"Dapkus, D.: How to transfer styles to images with Adobe Photoshop. https:\/\/creativecloud.adobe.com\/de\/discover\/article\/how-to-transfer-styles-to-images-with-adobe-photoshop"},{"key":"2518_CR8","unstructured":"Dumoulin, V., Shlens, J., Kudlur, M.: A Learned representation for artistic style. 
In: ICLR (2017)"},{"issue":"4","key":"2518_CR9","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2897824.2925948","volume":"35","author":"J Fi\u0161er","year":"2016","unstructured":"Fi\u0161er, J., Jamri\u0161ka, O., Luk\u00e1\u010d, M., Shechtman, E., Asente, P., Lu, J., S\u1ef3kora, D.: Stylit: illumination-guided example-based stylization of 3d renderings. ACM Trans. Graph. 35(4), 1\u201311 (2016)","journal-title":"ACM Trans. Graph."},{"key":"2518_CR10","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2414\u20132423. IEEE Computer Society (2016)","DOI":"10.1109\/CVPR.2016.265"},{"key":"2518_CR11","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., Bethge, M., Hertzmann, A., Shechtman, E.: Controlling perceptual factors in neural style transfer. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21\u201326, 2017, pp. 3730\u20133738. IEEE Computer Society (2017)","DOI":"10.1109\/CVPR.2017.397"},{"issue":"4","key":"2518_CR12","doi-asserted-by":"publisher","first-page":"255","DOI":"10.1016\/S0895-6111(02)00091-5","volume":"27","author":"DG Gobbi","year":"2003","unstructured":"Gobbi, D.G., Peters, T.M.: Generalized 3d nonlinear transformations for medical imaging: an object-oriented implementation in VTK. Comput. Med. Imaging Graph. 27(4), 255\u2013265 (2003)","journal-title":"Comput. Med. Imaging Graph."},{"key":"2518_CR13","doi-asserted-by":"crossref","unstructured":"Gu, S., Chen, C., Liao, J., Yuan, L.: Arbitrary style transfer with deep feature reshuffle. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
8222\u20138231 (2018)","DOI":"10.1109\/CVPR.2018.00858"},{"key":"2518_CR14","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"key":"2518_CR15","doi-asserted-by":"crossref","unstructured":"Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1510\u20131519. IEEE Computer Society (2017)","DOI":"10.1109\/ICCV.2017.167"},{"key":"2518_CR16","unstructured":"Isenberg, T.: Interactive NPAR: what type of tools should we create? In: Proceedings of the NPAR, Expressive \u201916, pp. 89\u201396. Eurographics Association, Goslar, DEU (2016)"},{"key":"2518_CR17","doi-asserted-by":"crossref","unstructured":"Jing, Y., Liu, Y., Yang, Y., Feng, Z., Yu, Y., Tao, D., Song, M.: Stroke controllable fast style transfer with adaptive receptive fields. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 244\u2013260 (2018)","DOI":"10.1007\/978-3-030-01261-8_15"},{"issue":"11","key":"2518_CR18","doi-asserted-by":"publisher","first-page":"3365","DOI":"10.1109\/TVCG.2019.2921336","volume":"26","author":"Y Jing","year":"2020","unstructured":"Jing, Y., Yang, Y., Feng, Z., Ye, J., Yu, Y., Song, M.: Neural style transfer: a review. IEEE Trans. Vis. Comput. Graph. 26(11), 3365\u20133385 (2020)","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"2518_CR19","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Computer Vision\u2014ECCV 2016\u201414th European Conference, Amsterdam, The Netherlands, October 11\u201314, 2016, Proceedings, Part II, Lecture Notes in Computer Science, vol. 9906, pp. 694\u2013711. 
Springer (2016)","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"2518_CR20","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: Proceedings of the CVPR (2020)","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"2518_CR21","unstructured":"Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7\u20139, 2015, Conference Track Proceedings (2015)"},{"key":"2518_CR22","doi-asserted-by":"crossref","unstructured":"Klingbeil, M., Pasewaldt, S., Semmo, A., D\u00f6llner, J.: Challenges in user experience design of image filtering apps. In: Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications. ACM, New York (2017)","DOI":"10.1145\/3132787.3132803"},{"issue":"3","key":"2518_CR23","doi-asserted-by":"publisher","first-page":"96","DOI":"10.1145\/1276377.1276497","volume":"26","author":"J Kopf","year":"2007","unstructured":"Kopf, J., Cohen, M.F., Lischinski, D., Uyttendaele, M.: Joint bilateral upsampling. ACM Trans. Graph. 26(3), 96\u2013102 (2007)","journal-title":"ACM Trans. Graph."},{"issue":"5","key":"2518_CR24","doi-asserted-by":"publisher","first-page":"866","DOI":"10.1109\/TVCG.2012.160","volume":"19","author":"JE Kyprianidis","year":"2012","unstructured":"Kyprianidis, J.E., Collomosse, J., Wang, T., Isenberg, T.: State of the \u201cart\u2019\u2019: a taxonomy of artistic stylization techniques for images and video. IEEE Trans. Vis. Comput. Graph. 19(5), 866\u2013885 (2012)","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"2518_CR25","unstructured":"Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., Yang, M.H.: Universal style transfer via feature transforms. 
In: Advances in Neural Information Processing Systems (2017)"},{"key":"2518_CR26","doi-asserted-by":"crossref","unstructured":"Li, Y., Huang, J.B., Ahuja, N., Yang, M.H.: Deep joint image filtering. In: European Conference on Computer Vision, pp. 154\u2013169. Springer (2016)","DOI":"10.1007\/978-3-319-46493-0_10"},{"issue":"4","key":"2518_CR27","doi-asserted-by":"publisher","first-page":"417","DOI":"10.3233\/ICA-200641","volume":"27","author":"Y Liang","year":"2020","unstructured":"Liang, Y., He, F., Zeng, X.: 3d mesh simplification with feature preservation based on whale optimization algorithm and differential evolution. Integr. Comput.-Aided Eng. 27(4), 417\u2013435 (2020)","journal-title":"Integr. Comput.-Aided Eng."},{"key":"2518_CR28","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., Zitnick, C.L.: Microsoft COCO: common objects in context. In: Proceedings of the ECCV, pp. 740\u2013755. Springer, Cham (2014)","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"2518_CR29","first-page":"29","volume-title":"Machine Learning with Core ML","author":"O Marques","year":"2020","unstructured":"Marques, O.: Machine Learning with Core ML, pp. 29\u201340. Springer, Cham (2020)"},{"key":"2518_CR30","volume":"368","author":"S Mohanty","year":"2012","unstructured":"Mohanty, S., Mohanty, A.K., Carminati, F.: Efficient pseudo-random number generation for Monte-Carlo simulations using graphic processors. J. Phys.: Conf. Ser. 368, 012024 (2012)","journal-title":"J. Phys.: Conf. Ser."},{"key":"2518_CR31","unstructured":"Moiseenkov, A., Poyaganov, O., Frolov, I., Usoltsev, A.: Prisma. Version: 4.3.4. https:\/\/prisma-ai.com\/ (2021)"},{"key":"2518_CR32","doi-asserted-by":"crossref","unstructured":"Pasewaldt, S., Semmo, A., D\u00f6llner, J., Schlegel, F.: BeCasso: artistic image processing and editing on mobile devices. 
In: SIGGRAPH ASIA 2016, Macao, December 5\u20138, 2016\u2014Mobile Graphics and Interactive Applications, p. 14:1. ACM (2016)","DOI":"10.1145\/2999508.2999518"},{"key":"2518_CR33","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: PyTorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems 32, pp. 8024\u20138035. Curran Associates, Inc. (2019)"},{"key":"2518_CR34","doi-asserted-by":"crossref","unstructured":"Reimann, M., Buchheim, B., Semmo, A., D\u00f6llner, J., Trapp, M.: Interactive multi-level stroke control for neural style transfer. In: 2021 International Conference on Cyberworlds (CW), pp. 1\u20138 (2021)","DOI":"10.1109\/CW52790.2021.00009"},{"key":"2518_CR35","doi-asserted-by":"crossref","unstructured":"Reimann, M., Klingbeil, M., Pasewaldt, S., Semmo, A., Trapp, M., D\u00f6llner, J.: MaeSTrO: a mobile app for style transfer orchestration using neural networks. In: 2018 International Conference on Cyberworlds, CW 2018, Singapore, October 3\u20135, 2018, pp. 9\u201316. IEEE Computer Society (2018)","DOI":"10.1109\/CW.2018.00016"},{"issue":"11","key":"2518_CR36","doi-asserted-by":"publisher","first-page":"1531","DOI":"10.1007\/s00371-019-01654-1","volume":"35","author":"M Reimann","year":"2019","unstructured":"Reimann, M., Klingbeil, M., Pasewaldt, S., Semmo, A., Trapp, M., D\u00f6llner, J.: Locally controllable neural style transfer on mobile devices. Vis. Comput. 35(11), 1531\u20131547 (2019). https:\/\/doi.org\/10.1007\/s00371-019-01654-1","journal-title":"Vis. Comput."},{"key":"2518_CR37","doi-asserted-by":"crossref","unstructured":"Semmo, A., Isenberg, T., D\u00f6llner, J.: Neural style transfer: a paradigm shift for image-based artistic rendering? 
In: Proceedings International Symposium on Non-Photorealistic Animation and Rendering (NPAR), pp. 5:1\u20135:13. ACM, New York (2017)","DOI":"10.1145\/3092919.3092920"},{"key":"2518_CR38","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference on Learning Representations, ICLR 2015. San Diego, CA, USA (2015)"},{"key":"2518_CR39","doi-asserted-by":"crossref","unstructured":"Su, H., Jampani, V., Sun, D., Gallo, O., Learned-Miller, E., Kautz, J.: Pixel-adaptive convolutional neural networks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 11166\u201311175 (2019)","DOI":"10.1109\/CVPR.2019.01142"},{"key":"2518_CR40","doi-asserted-by":"crossref","unstructured":"Tewari, A., Fried, O., Thies, J., Sitzmann, V., Lombardi, S., Sunkavalli, K., Martin-Brualla, R., Simon, T., Saragih, J., Nie\u00dfner, M., et\u00a0al.: State of the art on neural rendering. In: Computer Graphics Forum, vol.\u00a039, pp. 701\u2013727. Wiley Online Library (2020)","DOI":"10.1111\/cgf.14022"},{"key":"2518_CR41","unstructured":"Texler, O., Fi\u0161er, J., Luk\u00e1\u010d, M., Lu, J., Shechtman, E., S\u00fdkora, D.: Enhancing neural style transfer using patch-based synthesis. In: Proceedings of the NPAR, Expressive \u201919, pp. 43\u201350. Eurographics Association, Goslar, DEU (2019)"},{"issue":"3","key":"2518_CR42","doi-asserted-by":"publisher","first-page":"463","DOI":"10.1109\/TPAMI.2007.60","volume":"29","author":"Y Wexler","year":"2007","unstructured":"Wexler, Y., Shechtman, E., Irani, M.: Space-time completion of video. IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 463\u2013476 (2007)","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"2518_CR43","doi-asserted-by":"publisher","first-page":"39","DOI":"10.1016\/j.neucom.2019.08.075","volume":"370","author":"H Wu","year":"2019","unstructured":"Wu, H., Sun, Z., Zhang, Y., Li, Q.: Direction-aware neural style transfer with texture enhancement. Neurocomputing 370, 39\u201355 (2019)","journal-title":"Neurocomputing"},{"key":"2518_CR44","doi-asserted-by":"crossref","unstructured":"Wu, H., Zheng, S., Zhang, J., Huang, K.: Fast end-to-end trainable guided filter. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1838\u20131847 (2018)","DOI":"10.1109\/CVPR.2018.00197"},{"key":"2518_CR45","doi-asserted-by":"crossref","unstructured":"Yang, L., Yang, L., Zhao, M., Zheng, Y.: Controlling stroke size in fast style transfer with recurrent convolutional neural network. In: Computer Graphics Forum, vol.\u00a037, pp. 97\u2013107. Wiley Online Library (2018)","DOI":"10.1111\/cgf.13551"},{"key":"2518_CR46","doi-asserted-by":"crossref","unstructured":"Yao, Y., Ren, J., Xie, X., Liu, W., Liu, Y., Wang, J.: Attention-aware multi-stroke style transfer. In: IEEE Conference on Computer Vision and Pattern Recognition. CVPR, pp. 1467\u20131475. Computer Vision Foundation\/IEEE, Long Beach, CA, USA (2019)","DOI":"10.1109\/CVPR.2019.00156"},{"key":"2518_CR47","unstructured":"Youssef, V.: Loki: a random number generator for Metal (2017). https:\/\/github.com\/YoussefV\/Loki"},{"key":"2518_CR48","doi-asserted-by":"crossref","unstructured":"Zhang, H., Dana, K.: Multi-style generative network for real-time transfer. In: Computer Vision\u2014ECCV 2018 Workshops, pp. 349\u2013365. Springer (2019)","DOI":"10.1007\/978-3-030-11018-5_32"},{"issue":"1","key":"2518_CR49","doi-asserted-by":"publisher","first-page":"121","DOI":"10.1007\/s11263-005-4638-1","volume":"62","author":"SC Zhu","year":"2005","unstructured":"Zhu, S.C., Guo, C.E., Wang, Y., Xu, Z.: What are textons? Int. J. Comput. Vis. 
62(1), 121\u2013143 (2005)","journal-title":"Int. J. Comput. Vis."}],"container-title":["The Visual Computer"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-022-02518-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00371-022-02518-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-022-02518-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,14]],"date-time":"2022-12-14T15:11:29Z","timestamp":1671030689000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00371-022-02518-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,8]]},"references-count":49,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["2518"],"URL":"https:\/\/doi.org\/10.1007\/s00371-022-02518-x","relation":{},"ISSN":["0178-2789","1432-2315"],"issn-type":[{"value":"0178-2789","type":"print"},{"value":"1432-2315","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,8]]},"assertion":[{"value":"3 May 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 June 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no competing interests to declare that are relevant to the content of this article.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}