{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,10]],"date-time":"2026-01-10T03:22:02Z","timestamp":1768015322085,"version":"3.49.0"},"reference-count":33,"publisher":"Emerald","issue":"1","license":[{"start":{"date-parts":[[2023,10,20]],"date-time":"2023-10-20T00:00:00Z","timestamp":1697760000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IJICC"],"published-print":{"date-parts":[[2024,2,29]]},"abstract":"<jats:sec><jats:title content-type=\"abstract-subheading\">Purpose<\/jats:title><jats:p>Assistive technology has been developed to assist the visually impaired individuals in their social interactions. Specifically designed to enhance communication skills, facilitate social engagement and improve the overall quality of life, conversational assistive technologies include speech recognition APIs, text-to-speech APIs and various communication tools that are real. Enable real-time interaction. Using natural language processing (NLP) and machine learning algorithms, the technology analyzes spoken language and provides appropriate responses, offering an immersive experience through voice commands, audio feedback and vibration alerts.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Design\/methodology\/approach<\/jats:title><jats:p>These technologies have demonstrated their ability to promote self-confidence and self-reliance in visually impaired individuals during social interactions. Moreover, they promise to improve social competence and foster better relationships. 
In short, conversational assistive technology stands as a promising tool that empowers visually impaired individuals, elevating the quality of their social engagement.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Findings<\/jats:title><jats:p>The main benefit of assistive communication technology is that it helps visually impaired people overcome communication barriers in social contexts. This technology helps them communicate effectively with acquaintances, family, co-workers and even strangers in public places. By enabling smoother and more natural communication, it works to reduce feelings of isolation and increase overall quality of life.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Originality\/value<\/jats:title><jats:p>Research findings include successful activity recognition, aligning with activities on which the VGG-16 model was trained, such as hugging, shaking hands, talking, walking, waving and more. The originality of this study lies in its approach to addressing the challenges faced by visually impaired individuals in their social interactions through modern technology. 
This research adds to the body of knowledge on assistive technologies, which contribute to the empowerment and social inclusion of visually impaired individuals.<\/jats:p><\/jats:sec>","DOI":"10.1108\/ijicc-06-2023-0147","type":"journal-article","created":{"date-parts":[[2023,10,19]],"date-time":"2023-10-19T00:14:35Z","timestamp":1697674475000},"page":"126-142","source":"Crossref","is-referenced-by-count":8,"title":["Improving social interaction of the visually impaired individuals through conversational assistive technology"],"prefix":"10.1108","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-8956-1011","authenticated-orcid":false,"given":"Komal","family":"Ghafoor","sequence":"first","affiliation":[]},{"given":"Tauqir","family":"Ahmad","sequence":"additional","affiliation":[]},{"given":"Muhammad","family":"Aslam","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7126-5765","authenticated-orcid":false,"given":"Samyan","family":"Wahla","sequence":"additional","affiliation":[]}],"member":"140","published-online":{"date-parts":[[2023,10,20]]},"reference":[{"key":"key2024022811224093500_ref001","article-title":"The use of information and communication technologies by older people with cognitive impairments: from barriers to benefits","volume":"104","year":"2020","journal-title":"Computers in Human Behavior"},{"key":"key2024022811224093500_ref002","first-page":"2911","article-title":"Sst: single-stream temporal action proposals","year":"2017"},{"issue":"1","key":"key2024022811224093500_ref035","first-page":"45","article-title":"Enhancing group activity engagement for visually impaired individuals: a conversational agent approach","volume":"14","year":"2020","journal-title":"Journal of Assistive Technologies"},{"key":"key2024022811224093500_ref003","first-page":"1060","article-title":"Meva: a large-scale multiview, multimodal video dataset for activity 
detection","year":"2021"},{"key":"key2024022811224093500_ref004","first-page":"5793","article-title":"Temporal context network for activity localization in videos","year":"2017"},{"key":"key2024022811224093500_ref005","article-title":"Deep learning models for the perception of human social interactions","year":"2019"},{"issue":"2","key":"key2024022811224093500_ref006","doi-asserted-by":"crossref","first-page":"73","DOI":"10.1007\/BF00335287","article-title":"A model for separation of spatial and temporal information in the visual system","volume":"28","year":"1977","journal-title":"Biological Cybernetics"},{"key":"key2024022811224093500_ref007","first-page":"68","article-title":"Ctap: complementary temporal action proposal generation","year":"2018"},{"key":"key2024022811224093500_ref008","article-title":"Activitynet challenge 2017 summary","year":"2017"},{"key":"key2024022811224093500_ref009","first-page":"630","article-title":"Identity mappings in deep residual networks","year":"2016"},{"issue":"1","key":"key2024022811224093500_ref011","first-page":"49","article-title":"Combining semantic and geometric features for object class segmentation of indoor scenes","volume":"2","year":"2016","journal-title":"IEEE Robotics and Automation Letters"},{"issue":"4","key":"key2024022811224093500_ref012","doi-asserted-by":"crossref","first-page":"11807","DOI":"10.1109\/LRA.2022.3184025","article-title":"Socially compliant navigation dataset (scand): a large-scale dataset of demonstrations for social navigation","volume":"7","year":"2022","journal-title":"IEEE Robotics and Automation Letters"},{"issue":"2","key":"key2024022811224093500_ref014","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/S2214-109X(21)00008-5","article-title":"Rising to the challenge: estimates of the magnitude and causes of vision impairment and blindness","volume":"9","year":"2021","journal-title":"The Lancet Global 
Health"},{"issue":"2","key":"key2024022811224093500_ref034","first-page":"125","article-title":"Enhancing social event accessibility for visually impaired individuals: a conversational agent approach","volume":"7","year":"2021","journal-title":"Journal of Accessibility and Inclusion"},{"key":"key2024022811224093500_ref015","article-title":"Microsoft COCO: common objects in context","year":"2014"},{"key":"key2024022811224093500_ref016","article-title":"Temporal convolution based action proposal: submission to activitynet 2017","year":"2017"},{"key":"key2024022811224093500_ref017","first-page":"3","article-title":"Bsn: boundary sensitive network for temporal action proposal generation","year":"2018"},{"key":"key2024022811224093500_ref018","article-title":"Review of intent detection methods in the human-machine dialogue system","volume":"1267","year":"2019","journal-title":"Journal of Physics: Conference Series"},{"issue":"3","key":"key2024022811224093500_ref033","first-page":"201","article-title":"A conversational agent for assisting visually impaired individuals with real-time navigation information","volume":"12","year":"2019","journal-title":"Journal of Assistive Technologies"},{"key":"key2024022811224093500_ref019","article-title":"Pedestrian detection on caviar dataset using a movement feature space","year":"2012"},{"key":"key2024022811224093500_ref020","first-page":"1","article-title":"Ai for accessibility: virtual assistant for hearing impaired","year":"2020"},{"key":"key2024022811224093500_ref021","first-page":"463","article-title":"Comparing state-of-the-art visual features on invariant object recognition tasks","year":"2011"},{"issue":"2","key":"key2024022811224093500_ref022","doi-asserted-by":"crossref","first-page":"992","DOI":"10.37385\/jaets.v4i2.2013","article-title":"Smart_eye: a navigation and obstacle detection for visually impaired people through smart app","volume":"4","year":"2023","journal-title":"Journal of Applied Engineering and Technological 
Science (JAETS)"},{"issue":"9","key":"key2024022811224093500_ref023","doi-asserted-by":"crossref","first-page":"839","DOI":"10.1080\/10447318.2019.1696513","article-title":"Social glasses: simulating interactive gaze for visually impaired people in face-to-face communication","volume":"36","year":"2020","journal-title":"International Journal of Human\u2013Computer Interaction"},{"key":"key2024022811224093500_ref024","first-page":"5296","article-title":"Youtube-boundingboxes: a large high-precision human-annotated data set for object detection in video","year":"2017"},{"key":"key2024022811224093500_ref025","first-page":"2730","article-title":"First-person activity recognition: what are they doing to me?","year":"2013"},{"key":"key2024022811224093500_ref026","doi-asserted-by":"crossref","first-page":"540","DOI":"10.1007\/s00287-017-1077-7","article-title":"A multimodal assistive system for helping visually impaired in social interactions","volume":"40","year":"2017","journal-title":"Informatik-Spektrum"},{"key":"key2024022811224093500_ref027","first-page":"1","article-title":"Deep analysis for smartphone-based human activity recognition","year":"2020"},{"key":"key2024022811224093500_ref028","first-page":"1049","article-title":"Temporal action localization in untrimmed videos via multi-stage cnns","year":"2016"},{"key":"key2024022811224093500_ref029","article-title":"Materials today chemistry","volume":"29","year":"2023","journal-title":"Materials Today"},{"key":"key2024022811224093500_ref030","first-page":"1","article-title":"Path and floor detection in outdoor environments for fall prevention of the visually impaired population","year":"2022"},{"key":"key2024022811224093500_ref031","first-page":"2914","article-title":"Temporal action detection with structured segment networks","year":"2017"},{"key":"key2024022811224093500_ref032","first-page":"408","article-title":"Flow-guided feature aggregation for video object 
detection","year":"2017"}],"container-title":["International Journal of Intelligent Computing and Cybernetics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/IJICC-06-2023-0147\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/IJICC-06-2023-0147\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T22:54:27Z","timestamp":1753397667000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/ijicc\/article\/17\/1\/126-142\/1236297"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,20]]},"references-count":33,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,10,20]]},"published-print":{"date-parts":[[2024,2,29]]}},"alternative-id":["10.1108\/IJICC-06-2023-0147"],"URL":"https:\/\/doi.org\/10.1108\/ijicc-06-2023-0147","relation":{},"ISSN":["1756-378X"],"issn-type":[{"value":"1756-378X","type":"print"}],"subject":[],"published":{"date-parts":[[2023,10,20]]}}}