{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T11:38:53Z","timestamp":1764934733498,"version":"3.46.0"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T00:00:00Z","timestamp":1742860800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T00:00:00Z","timestamp":1742860800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Univ Access Inf Soc"],"published-print":{"date-parts":[[2025,8]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Speech interaction holds significant potential to make creative visual design activities more inclusive for people with physical impairments, although no work has yet investigated the feasibility of graphical object rotation via voice control. An elicitation study with disabled participants (\n                    <jats:italic>N<\/jats:italic>\n                    \u2009=\u200912) is initially presented where candidate voice commands for rotation actions are identified. The use of these commands is then evaluated in an exploratory study with people who have physical impairments (\n                    <jats:italic>N<\/jats:italic>\n                    \u2009=\u200912). Results found all participants could successfully complete a series of rotation tasks, although interaction issues were also identified (e.g., estimating rotation transformation angles). To further investigate these challenges, three different voice-controlled rotation approaches were developed: Baseline-Rotation, Fixed-Jumps, and Animation-Rotation. 
These methods were evaluated with disabled participants (\n                    <jats:italic>N<\/jats:italic>\n                    \u2009=\u200925) with results highlighting that all three approaches supported users in successfully rotating graphical objects, although Animation-Rotation was found to be more efficient and usable than the other methods.\n                  <\/jats:p>","DOI":"10.1007\/s10209-025-01212-8","type":"journal-article","created":{"date-parts":[[2025,3,27]],"date-time":"2025-03-27T07:24:35Z","timestamp":1743060275000},"page":"2573-2595","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Inclusive speech interaction techniques for creative object rotation"],"prefix":"10.1007","volume":"24","author":[{"given":"Farkhandah","family":"Aziz","sequence":"first","affiliation":[]},{"given":"Chris","family":"Creed","sequence":"additional","affiliation":[]},{"given":"Maite","family":"Frutos-Pascual","sequence":"additional","affiliation":[]},{"given":"Ian","family":"Williams","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,25]]},"reference":[{"key":"1212_CR1","unstructured":"Adobe. 2023. Photoshop apps\u2014desktop, mobile, and tablet | Photoshop.com. Retrieved Mar 17, 2023, from https:\/\/www.adobe.com\/products\/photoshop.html"},{"key":"1212_CR2","unstructured":"Adobe Inc. 2023. Adobe Illustrator CS6: industry-leading vector graphics software. Retrieved Mar 17, 2023, from https:\/\/www.adobe.com\/uk\/products\/illustrator.html"},{"key":"1212_CR3","unstructured":"Adobe Inc. 2023. Adobe XD | Fast & Powerful UI\/UX Design & Collaboration Tool. Retrieved Mar 17, 2023, from https:\/\/www.adobe.com\/uk\/products\/xd.html"},{"key":"1212_CR4","unstructured":"Figma. 2023.\u00a0Figma: The collaborative interface design tool. Retrieved Mar 17, 2023, from https:\/\/www.figma.com"},{"key":"1212_CR5","unstructured":"Nuance Communications. 
2023.\u00a0Dragon Speech Recognition\u2014Get More Done by Voice | Nuance. Retrieved Mar 17, 2023, from https:\/\/www.nuance.com\/dragon.html"},{"key":"1212_CR6","doi-asserted-by":"publisher","unstructured":"Alibay, F., Kavakli, M., Chardonnet, J.R. and Baig, M.Z.: The usability of speech and\/or gestures in multi-modal interface systems. In:\u00a0Proceedings of the 9th international conference on computer and automation engineering. (2017). https:\/\/doi.org\/10.1145\/3057039.3057089","DOI":"10.1145\/3057039.3057089"},{"key":"1212_CR7","unstructured":"Alsuraihi, M.M., Rigas, D.I.: How effective is it to design by voice? In: Proceedings of HCI 2007 The 21st British HCI Group Annual Conference University of Lancaster, UK 21. 1\u20134 (2007). Retrieved Mar 17, 2023, from https:\/\/www.scienceopen.com\/hosted-document?doi=10.14236\/ewic\/HCI2007.42"},{"key":"1212_CR8","doi-asserted-by":"publisher","unstructured":"Aziz, F., Creed, C., Sarcar, S., Frutos-Pascual, M., Williams, I.: Voice snapping: inclusive speech interaction techniques for creative object manipulation. In:\u00a0designing interactive systems conference. 1486\u20131496. (2022). https:\/\/doi.org\/10.1145\/3532106.3533452","DOI":"10.1145\/3532106.3533452"},{"key":"1212_CR9","doi-asserted-by":"publisher","unstructured":"Aziz, F., Creed, C., Sarcar, S., Frutos-Pascual, M., Williams, I.: Inclusive voice interaction techniques for creative object positioning. In:\u00a0proceedings of the 2021 international conference on multimodal interaction. 461\u2013469, 2021. https:\/\/doi.org\/10.1145\/3462244.3479937","DOI":"10.1145\/3462244.3479937"},{"issue":"3","key":"1212_CR10","doi-asserted-by":"publisher","first-page":"114","DOI":"10.5555\/2835587.2835589","volume":"4","author":"A Bangor","year":"2009","unstructured":"Bangor, A., Kortum, P., Miller, J.: Determining what individual SUS scores mean: adding an adjective rating scale. J. Usability Stud. 4(3), 114\u2013123 (2009). 
https:\/\/doi.org\/10.5555\/2835587.2835589","journal-title":"J. Usability Stud."},{"key":"1212_CR11","doi-asserted-by":"publisher","unstructured":"Creed, C., Frutos-Pascual, M., Williams, I.: Multimodal gaze interaction for creative design. In:\u00a0proceedings of the 2020 CHI conference on human factors in computing systems. 1\u201313, (2020).https:\/\/doi.org\/10.1145\/3313831.3376196","DOI":"10.1145\/3313831.3376196"},{"key":"1212_CR12","doi-asserted-by":"publisher","first-page":"94","DOI":"10.1145\/1029014.1028648","volume":"77\u201378","author":"L Dai","year":"2003","unstructured":"Dai, L., Goldman, R., Sears, A., Lozier, J.: Speech-based cursor control: a study of grid-based solutions. ACM SIGACCESS Access. Comput. 77\u201378, 94\u2013101 (2003). https:\/\/doi.org\/10.1145\/1029014.1028648","journal-title":"ACM SIGACCESS Access. Comput."},{"issue":"3","key":"1212_CR13","doi-asserted-by":"publisher","first-page":"219","DOI":"10.1080\/01449290412331328563","volume":"24","author":"L Dai","year":"2005","unstructured":"Dai, L., Goldman, R., Sears, A., Lozier, J.: Speech-based cursor control using grids: modelling performance and comparisons with other solutions. Behav. Inf. Technol. 24(3), 219\u2013230 (2005). https:\/\/doi.org\/10.1080\/01449290412331328563","journal-title":"Behav. Inf. Technol."},{"key":"1212_CR14","unstructured":"Dragon. (2023). Nuance Communications. Dragon Speech Recognition\u2014Get More Done by Voice | Nuance. Retrieved Mar 17, 2023, from https:\/\/www.nuance.com\/dragon.html"},{"key":"1212_CR15","doi-asserted-by":"publisher","unstructured":"Elepfandt, M., Grund, M.: Move it there, or not? The design of voice commands for gaze with speech. In:\u00a0proceedings of the 4th workshop on eye gaze in intelligent human machine interaction. 1\u20133 (2012). 
https:\/\/doi.org\/10.1145\/2401836.2401848","DOI":"10.1145\/2401836.2401848"},{"key":"1212_CR16","first-page":"271","volume":"92","author":"AP Gourdol","year":"1992","unstructured":"Gourdol, A.P., Nigay, L., Salber, D., Coutaz, J.: Two case studies of software architecture for multimodal interactive systems: VoicePaint and a voice-enabled graphical notebook. Eng. Hum. Comput. Interact. 92, 271\u201384 (1992)","journal-title":"Eng. Hum. Comput. Interact."},{"key":"1212_CR17","doi-asserted-by":"publisher","unstructured":"Harada, S., Saponas, T.S., Landay, J.A.: VoicePen: augmenting pen input with simultaneous non-linguistic vocalization. In: proceedings of the 9th international conference on multimodal interfaces, ICMI\u201907. 178\u2013185 (2007). https:\/\/doi.org\/10.1145\/1322192.1322225","DOI":"10.1145\/1322192.1322225"},{"key":"1212_CR18","doi-asserted-by":"publisher","unstructured":"Harada, S., Wobbrock, J.O., Landay, J.A.: VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments. In ASSETS\u201907: proceedings of the ninth international ACM SIGACCESS conference on computers and accessibility 27\u201334 (2007). https:\/\/doi.org\/10.1145\/1296843.1296850","DOI":"10.1145\/1296843.1296850"},{"key":"1212_CR19","doi-asserted-by":"publisher","unstructured":"Harada, S., Wobbrock, J.O., Malkin, J., Bilmes, J.A., Landay, J.A.: Longitudinal study of people learning to use continuous voice-based cursor control. In:\u00a0proceedings of the SIGCHI conference on human factors in computing systems\u00a0(pp. 347\u2013356) (2009). https:\/\/doi.org\/10.1145\/1518701.1518757","DOI":"10.1145\/1518701.1518757"},{"key":"1212_CR20","doi-asserted-by":"publisher","unstructured":"Hauptmann, A.G.: Speech and gestures for graphic image manipulation. In:\u00a0Proceedings of the SIGCHI conference on Human factors in computing systems 241\u2013245 (1989). 
https:\/\/doi.org\/10.1145\/67449.67496","DOI":"10.1145\/67449.67496"},{"key":"1212_CR21","doi-asserted-by":"publisher","unstructured":"Hiyoshi, M., Shimazu, H.: Drawing pictures with natural language and direct manipulation. In:\u00a0COLING 1994 volume 2: the 15th international conference on computational linguistics (1994). https:\/\/doi.org\/10.3115\/991250.991262","DOI":"10.3115\/991250.991262"},{"key":"1212_CR22","doi-asserted-by":"publisher","unstructured":"House, B., Malkin, J., Bilmes, J.: The VoiceBot: a voice controlled robot arm. In:\u00a0proceedings of the SIGCHI conference on human factors in computing systems. 183\u2013192 (2009). https:\/\/doi.org\/10.1145\/1518701.1518731","DOI":"10.1145\/1518701.1518731"},{"key":"1212_CR23","doi-asserted-by":"publisher","unstructured":"Hu, R., Zhu, S., Feng, J., Sears, A.: Use of speech technology in real life environment. In:\u00a0universal access in human-computer interaction. Applications and services: 6th international conference, UAHCI 2011, held as part of HCI international 2011, Orlando, FL, USA, July 9-14, 2011, Proceedings, Part IV 6. 62-71 (2011). https:\/\/doi.org\/10.1007\/978-3-642-21657-2_7","DOI":"10.1007\/978-3-642-21657-2_7"},{"key":"1212_CR24","doi-asserted-by":"publisher","unstructured":"Kamel, H.M., Landay, J.A.: A study of blind drawing practice: creating graphical information without the visual channel. In:\u00a0proceedings of the fourth international ACM conference on Assistive technologies. 34\u201341 (2000). https:\/\/doi.org\/10.1145\/354324.354334","DOI":"10.1145\/354324.354334"},{"key":"1212_CR25","doi-asserted-by":"publisher","unstructured":"Karimullah, A.S., Sears, A.: Speech-based cursor control. In:\u00a0Proceedings of the fifth international ACM conference on Assistive technologies. 178\u2013185 (2002). 
https:\/\/doi.org\/10.1145\/638249.638282","DOI":"10.1145\/638249.638282"},{"key":"1212_CR26","doi-asserted-by":"publisher","unstructured":"Kim, Y.S., Dontcheva, M., Adar, E., Hullman, J.: Vocal shortcuts for creative experts. In:\u00a0proceedings of the 2019 CHI conference on human factors in computing systems. 1\u201314 (2019). https:\/\/doi.org\/10.1145\/3290605.3300562","DOI":"10.1145\/3290605.3300562"},{"key":"1212_CR27","doi-asserted-by":"publisher","unstructured":"Laput, G.P., Dontcheva, M., Wilensky, G., Chang, W., Agarwala, A., Linder, J., Adar, E.: Pixeltone: a multimodal interface for image editing. In:\u00a0proceedings of the SIGCHI conference on human factors in computing systems. 2185\u20132194 (2013). https:\/\/doi.org\/10.1145\/2470654.2481301","DOI":"10.1145\/2470654.2481301"},{"key":"1212_CR28","doi-asserted-by":"publisher","unstructured":"Laviola, J.J., Katzourin, M.: An exploration of non-isomorphic 3D rotation in surround screen virtual environments. In:\u00a02007 IEEE symposium on 3D user interfaces. IEEE (2007). https:\/\/doi.org\/10.1109\/3DUI.2007.340774","DOI":"10.1109\/3DUI.2007.340774"},{"key":"1212_CR29","doi-asserted-by":"publisher","unstructured":"Milota, A.D.: Modality fusion for graphic design applications. In:\u00a0proceedings of the 6th international conference on multimodal interfaces. 167\u2013174 (2004). https:\/\/doi.org\/10.1145\/1027933.1027963","DOI":"10.1145\/1027933.1027963"},{"key":"1212_CR30","doi-asserted-by":"publisher","unstructured":"Nishimoto, T., Shida, N., Kobayashi, T., Shirai, K.: Improving human interface drawing tool using speech, mouse and keyboard. In:\u00a0proceedings 4th IEEE international workshop on robot and human communication. 107\u2013112 (1995). IEEE. 
https:\/\/doi.org\/10.1109\/ROMAN.1995.531944","DOI":"10.1109\/ROMAN.1995.531944"},{"issue":"10","key":"1212_CR31","doi-asserted-by":"publisher","first-page":"2965","DOI":"10.1016\/j.patcog.2008.05.008","volume":"41","author":"D O\u2019Shaughnessy","year":"2008","unstructured":"O\u2019Shaughnessy, D.: Automatic speech recognition: history, methods and challenges. Pattern Recogn. 41(10), 2965\u20132979 (2008). https:\/\/doi.org\/10.1016\/j.patcog.2008.05.008","journal-title":"Pattern Recogn."},{"issue":"4","key":"1212_CR32","doi-asserted-by":"publisher","first-page":"263","DOI":"10.1207\/S15327051HCI1504_1","volume":"15","author":"S Oviatt","year":"2000","unstructured":"Oviatt, S., Cohen, P., Wu, L., Duncan, L., Suhm, B., Bers, J., Holzman, T., Winograd, T., Landay, J., Larson, J., Ferro, D.: Designing the user interface for multimodal speech and pen-based gesture applications: state-of-the-art systems and future research directions. Hum. Comput. Interact. 15(4), 263\u2013322 (2000). https:\/\/doi.org\/10.1207\/S15327051HCI1504_1","journal-title":"Hum. Comput. Interact."},{"key":"1212_CR33","unstructured":"Pausch, R., Leatherby, J.H.: An empirical study: adding voice input to a graphical editor. In:\u00a0Journal of the american voice input\/output society (1991). Retrieved Mar 17, 2023 , from https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/217e9d8ccf975a99e0c910e2ed12a3d512154c8b"},{"key":"1212_CR34","doi-asserted-by":"publisher","unstructured":"Poupyrev, I., Weghorst, S., Fels, S.: Non-isomorphic 3D rotational techniques. In:\u00a0Proceedings of the SIGCHI conference on Human factors in computing systems. 540\u2013547 (2000). https:\/\/doi.org\/10.1145\/332040.332497","DOI":"10.1145\/332040.332497"},{"key":"1212_CR35","doi-asserted-by":"publisher","unstructured":"Schaadhardt, A., Hiniker, A., Wobbrock, J.O.: Understanding blind screen-reader users\u2019 experiences of digital artboards. 
In:\u00a0Proceedings of the 2021 CHI conference on human factors in computing systems. 1\u201319 (2021). https:\/\/doi.org\/10.1145\/3411764.3445242","DOI":"10.1145\/3411764.3445242"},{"key":"1212_CR36","doi-asserted-by":"publisher","unstructured":"Sedivy, J., Johnson, H.: Supporting creative work tasks: the potential of multimodal tools to support sketching. In:\u00a0proceedings of the 3rd conference on creativity & cognition. 42\u201349 (1999). https:\/\/doi.org\/10.1145\/317561.317571","DOI":"10.1145\/317561.317571"},{"key":"1212_CR37","doi-asserted-by":"publisher","first-page":"890","DOI":"10.1109\/TASLP.2022.3145313","volume":"30","author":"P Serai","year":"2022","unstructured":"Serai, P., Sunder, V., Fosler-Lussier, E.: Hallucination of speech recognition errors with sequence to sequence learning. IEEE\/ACM Trans. Audio, Speech, Language Process. 30, 890\u2013900 (2022). https:\/\/doi.org\/10.1109\/TASLP.2022.3145313","journal-title":"IEEE\/ACM Trans. Audio, Speech, Language Process."},{"key":"1212_CR38","doi-asserted-by":"publisher","unstructured":"Shapiro, S.S., Wilk, M.B.: An analysis of variance test for normality (complete samples). Biometrika. 591\u2013611 (1965). https:\/\/doi.org\/10.2307\/2333709","DOI":"10.2307\/2333709"},{"key":"1212_CR39","doi-asserted-by":"publisher","unstructured":"Sporka, A.J., Kurniawan, S.H., Mahmud, M., Slav\u00edk, P.: Non-speech input and speech recognition for real-time control of computer games. In:\u00a0proceedings of the 8th international ACM SIGACCESS conference on computers and accessibility 213\u2013220 (2006). https:\/\/doi.org\/10.1145\/1168987.1169023","DOI":"10.1145\/1168987.1169023"},{"key":"1212_CR40","doi-asserted-by":"publisher","unstructured":"Srinivasan, A., Dontcheva, M., Adar, E., Walker, S.: Discovering natural language commands in multimodal interfaces. In:\u00a0proceedings of the 24th international conference on intelligent user interfaces. 661\u2013672 (2019). 
https:\/\/doi.org\/10.1145\/3301275.3302292","DOI":"10.1145\/3301275.3302292"},{"key":"1212_CR41","doi-asserted-by":"publisher","unstructured":"Van der Kamp, J., Sundstedt, V.: Gaze and voice controlled drawing. In:\u00a0Proceedings of the 1st conference on novel gaze-controlled applications. 1\u20138 (2011). https:\/\/doi.org\/10.1145\/1983302.1983311","DOI":"10.1145\/1983302.1983311"},{"key":"1212_CR42","unstructured":"Web Speech API: MDN | developer.mozilla.org\u2014Web APIs. Retrieved Mar 17, 2023, from https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Web_Speech_API"},{"issue":"12","key":"1212_CR43","doi-asserted-by":"publisher","first-page":"3479","DOI":"10.1109\/TVCG.2020.3023566","volume":"26","author":"AS Williams","year":"2020","unstructured":"Williams, A.S., Garcia, J., Ortega, F.: Understanding multimodal user gesture and speech behavior for object manipulation in augmented reality using elicitation. IEEE Trans. Visual Comput. Graphics 26(12), 3479\u20133489 (2020). https:\/\/doi.org\/10.1109\/TVCG.2020.3023566","journal-title":"IEEE Trans. Visual Comput. Graphics"},{"key":"1212_CR44","doi-asserted-by":"publisher","unstructured":"Williams, A.S., Ortega, F.R.: Understanding gesture and speech multimodal interactions for manipulation tasks in augmented reality using unconstrained elicitation.\u00a0In: Proceedings of the ACM on human-computer interaction,\u00a04(ISS), 1\u201321 (2020). https:\/\/doi.org\/10.1145\/3427330","DOI":"10.1145\/3427330"},{"key":"1212_CR45","doi-asserted-by":"publisher","unstructured":"Wobbrock, J.O., Morris, M.R., Wilson, A.D.: User-defined gestures for surface computing. In:\u00a0Proceedings of the SIGCHI conference on human factors in computing systems. 1083\u20131092 (2009). 
https:\/\/doi.org\/10.1145\/1518701.1518866","DOI":"10.1145\/1518701.1518866"},{"key":"1212_CR46","doi-asserted-by":"publisher","unstructured":"Xu, P., Fu, H., Igarashi, T., Tai, C.L.: Global beautification of layouts with interactive ambiguity resolution. In:\u00a0Proceedings of the 27th annual ACM symposium on User interface software and technology. 243\u2013252 (2014). https:\/\/doi.org\/10.1145\/2642918.2647398","DOI":"10.1145\/2642918.2647398"},{"issue":"1","key":"1212_CR47","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1080\/00220973.1993.9943832","volume":"62","author":"DW Zimmerman","year":"1993","unstructured":"Zimmerman, D.W., Zumbo, B.D.: Relative power of the wilcoxon test, the friedman test, and repeated-measures ANOVA on ranks. J. Exp. Educ. 62(1), 75\u201386 (1993). https:\/\/doi.org\/10.1080\/00220973.1993.9943832","journal-title":"J. Exp. Educ."},{"key":"1212_CR48","doi-asserted-by":"publisher","unstructured":"Zhu, S., Ma, Y., Feng, J., Sears, A.: Speech-based navigation: improving grid-based solutions. In:\u00a0human-computer interaction\u2013INTERACT 2009: 12th IFIP TC 13 international conference, Uppsala, Sweden, August 24\u201328, 2009, Proceedings, Part I 12. 50\u201362. Springer Berlin Heidelberg (2009). https:\/\/doi.org\/10.1007\/978-3-642-03655-2_6","DOI":"10.1007\/978-3-642-03655-2_6"},{"issue":"1","key":"1212_CR49","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/1838562.1838565","volume":"3","author":"S Zhu","year":"2010","unstructured":"Zhu, S., Feng, J., Sears, A.: Investigating grid-based navigation: the impact of physical disability. ACM Trans. Access. Comput. (TACCESS) 3(1), 1\u201330 (2010). https:\/\/doi.org\/10.1145\/1838562.1838565","journal-title":"ACM Trans. Access. Comput. 
(TACCESS)"}],"container-title":["Universal Access in the Information Society"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10209-025-01212-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10209-025-01212-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10209-025-01212-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T11:32:47Z","timestamp":1764934367000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10209-025-01212-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,25]]},"references-count":49,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,8]]}},"alternative-id":["1212"],"URL":"https:\/\/doi.org\/10.1007\/s10209-025-01212-8","relation":{},"ISSN":["1615-5289","1615-5297"],"issn-type":[{"type":"print","value":"1615-5289"},{"type":"electronic","value":"1615-5297"}],"subject":[],"published":{"date-parts":[[2025,3,25]]},"assertion":[{"value":"5 March 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 March 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"The research work received ethical approval from Research Ethics Committee at Birmingham City University, UK (Reference: 
Komal\u00a0\/#10132\u00a0\/sub2\u00a0\/R(B)\u00a0\/2022\u00a0\/Mar\u00a0\/CEBE FAEC).","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"Informed consent was received from all participants prior to taking part in the study.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}},{"value":"All authors consent to the publication of this article.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}]}}