{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,25]],"date-time":"2025-11-25T06:57:20Z","timestamp":1764053840506},"reference-count":30,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2023,5,26]],"date-time":"2023-05-26T00:00:00Z","timestamp":1685059200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,26]],"date-time":"2023-05-26T00:00:00Z","timestamp":1685059200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Manipal Academy of Higher Education, Manipal"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Real-Time Image Proc"],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>New and interesting uses for portable devices include the creation and viewing of 3D models and 360-degree photos of real landscapes. To provide a 3D model and a 360-degree view of a scenario, these apps search for real-time rendering and presentation. This study examines the impact of real-time image processing on movements in camera view and the application of optical flow algorithms. The optical flows are affected by how the objects involved motion. The way the image is recorded and the camera movements affect the relative motion. As a consequence, camera motions are responsive to optical flow algorithms. To record an image, the camera may pan around, move in a straight path, or move randomly. We have captured datasets of videos produced in different contexts to better replicate real-world scenarios. Each dataset was captured in a variety of illumination situations, camera movements, and indoor and outdoor recording sites. 
Here, to determine the most effective optical flow algorithms for use in near real-time applications, such as augmented reality and virtual reality, we compare results based on quality and processing delay for each video frame. We conducted a comparison study to better understand how random camera motion affects real-time video processing. These methods can be used to handle a variety of real-world issues, such as object tracking, video segmentation, structure from motion, gesture tracking, and so on.<\/jats:p>","DOI":"10.1007\/s11554-023-01322-7","type":"journal-article","created":{"date-parts":[[2023,5,26]],"date-time":"2023-05-26T18:02:10Z","timestamp":1685124130000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["An investigation of camera movements and capture techniques on optical flow for real-time rendering and presentation"],"prefix":"10.1007","volume":"20","author":[{"given":"Nishant","family":"Modi","sequence":"first","affiliation":[]},{"given":"M.","family":"Ramakrishna","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,26]]},"reference":[{"key":"1322_CR1","doi-asserted-by":"crossref","unstructured":"Alldieck, T., Kassubeck, M., Magnor, M.: Optical flow-based 3D human motion estimation from monocular video. arXiv:1703.00177 (2017)","DOI":"10.1007\/978-3-319-66709-6_28"},{"key":"1322_CR2","doi-asserted-by":"publisher","first-page":"435","DOI":"10.1007\/978-3-642-10331-5_41","volume-title":"Advances in Visual Computing","author":"J Almeida","year":"2009","unstructured":"Almeida, J., Minetto, R., Almeida, T.A., da Torres, S.R., Leite, N.J.: Robust estimation of camera motion using optical flow models. In: Bebis, G., Boyle, R., Parvin, B., Koracin, D., Kuno, Y., Wang, J., Wang, J.X., Wang, J., Pajarola, R., Lindstrom, P., Hinkenjann, A., Encarna\u00e7\u00e3o, M.L., Silva, C.T., Coming, D. (eds.) 
Advances in Visual Computing, pp. 435\u2013446. Springer, Berlin (2009)"},{"issue":"1\u201310","key":"1322_CR3","first-page":"4","volume":"5","author":"JY Bouguet","year":"2001","unstructured":"Bouguet, J.Y., et al.: Pyramidal implementation of the affine lucas kanade feature tracker description of the algorithm. Intel Corpor. 5(1\u201310), 4 (2001)","journal-title":"Intel Corpor."},{"key":"1322_CR4","doi-asserted-by":"publisher","first-page":"281","DOI":"10.1111\/cgf.12983","volume":"35","author":"A Boulch","year":"2016","unstructured":"Boulch, A., Marlet, R.: Deep learning for robust normal estimation in unstructured point clouds. Comput. Graph. Forum Wiley Online Lib. 35, 281\u2013290 (2016)","journal-title":"Comput. Graph. Forum Wiley Online Lib."},{"key":"1322_CR5","doi-asserted-by":"crossref","unstructured":"Butler, D.J., Wulff, J., Stanley, G.B., Black, M.J.: A naturalistic open source movie for optical flow evaluation. In: European Conference on Computer Vision, pp. 611\u2013625. Springer, Berlin (2012)","DOI":"10.1007\/978-3-642-33783-3_44"},{"issue":"4","key":"1322_CR6","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1371\/journal.pone.0195781","volume":"13","author":"M Caramenti","year":"2018","unstructured":"Caramenti, M., Lafortuna, C.L., Mugellini, E., Abou Khaled, O., Bresciani, J.P., Dubois, A.: Matching optical flow to motor speed in virtual reality while running on a treadmill. PLoS ONE 13(4), 1\u201313 (2018). https:\/\/doi.org\/10.1371\/journal.pone.0195781","journal-title":"PLoS ONE"},{"key":"1322_CR7","doi-asserted-by":"crossref","unstructured":"Farneb\u00e4ck, G.: Two-frame motion estimation based on polynomial expansion. In: Scandinavian Conference on Image Analysis, pp. 363\u2013370. Springer, London (2003)","DOI":"10.1007\/3-540-45103-X_50"},{"key":"1322_CR8","doi-asserted-by":"publisher","unstructured":"Fortun, D., Bouthemy, P., Kervrann, C.: Optical flow modeling and computation: a survey. Comput. Vis. Image Understand. 
134, 1\u201321 (2015). https:\/\/doi.org\/10.1016\/j.cviu.2015.02.008. www.sciencedirect.com\/science\/article\/pii\/S1077314215000429 (Image Understanding for Real-world Distributed Video Networks)","DOI":"10.1016\/j.cviu.2015.02.008"},{"key":"1322_CR9","doi-asserted-by":"publisher","unstructured":"Kajo, I., Malik, A.S., Kamel, N.: Motion estimation of crowd flow using optical flow techniques: a review. In: 2015 9th International Conference on Signal Processing and Communication Systems (ICSPCS), pp. 1\u20139 (2015). https:\/\/doi.org\/10.1109\/ICSPCS.2015.7391778","DOI":"10.1109\/ICSPCS.2015.7391778"},{"key":"1322_CR10","doi-asserted-by":"publisher","unstructured":"Koenderink, J.J.: Optic flow. Vis. Res. 26(1), 161\u2013179 (1986). https:\/\/doi.org\/10.1016\/0042-6989(86)90078-7. www.sciencedirect.com\/science\/article\/pii\/0042698986900787","DOI":"10.1016\/0042-6989(86)90078-7"},{"key":"1322_CR11","doi-asserted-by":"publisher","unstructured":"Koniarski, K.: Augmented reality using optical flow. In: 2015 Federated Conference on Computer Science and Information Systems (FedCSIS), pp. 841\u2013847 (2015). https:\/\/doi.org\/10.15439\/2015F202","DOI":"10.15439\/2015F202"},{"key":"1322_CR12","doi-asserted-by":"crossref","unstructured":"Kroeger, T., Timofte, R., Dai, D., Van\u00a0Gool, L.: Fast optical flow using dense inverse search. In: European Conference on Computer Vision, pp 471\u2013488. Springer, London (2016)","DOI":"10.1007\/978-3-319-46493-0_29"},{"key":"1322_CR13","doi-asserted-by":"publisher","unstructured":"Lin, F., Chou, Y., Martinez, T.: Flow adaptive video object segmentation. Image Vis. Comput. 94, 103864 (2020). https:\/\/doi.org\/10.1016\/j.imavis.2019.103864. 
www.sciencedirect.com\/science\/article\/pii\/S0262885619304573","DOI":"10.1016\/j.imavis.2019.103864"},{"issue":"1","key":"1322_CR14","doi-asserted-by":"publisher","first-page":"126","DOI":"10.1006\/cviu.2001.0930","volume":"84","author":"B McCane","year":"2001","unstructured":"McCane, B., Novins, K., Crannitch, D., Galvin, B.: On benchmarking optical flow. Comput. Vis. Image Underst. 84(1), 126\u2013143 (2001)","journal-title":"Comput. Vis. Image Underst."},{"key":"1322_CR15","doi-asserted-by":"publisher","unstructured":"Mooser, J., You, S., Neumann, U.: Real-time object tracking for augmented reality combining graph cuts and optical flow. In: Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 1\u20138. IEEE Computer Society, USA, ISMAR\u201907 (2007). https:\/\/doi.org\/10.1109\/ISMAR.2007.4538839","DOI":"10.1109\/ISMAR.2007.4538839"},{"key":"1322_CR16","first-page":"1","volume":"2005","author":"P O\u2019Donovan","year":"2005","unstructured":"O\u2019Donovan, P.: Optical flow: techniques and applications. Int. J. Comput. Vis. 2005, 1\u201326 (2005)","journal-title":"Int. J. Comput. Vis."},{"key":"1322_CR17","doi-asserted-by":"publisher","first-page":"137","DOI":"10.5201\/ipol.2013.26","volume":"2013","author":"JS P\u00e9rez","year":"2013","unstructured":"P\u00e9rez, J.S., Meinhardt-Llopis, E., Facciolo, G.: Tv-l1 optical flow estimation. Image Process. Online 2013, 137\u2013150 (2013)","journal-title":"Image Process. Online"},{"key":"1322_CR18","first-page":"522","volume-title":"Intelligent Autonomous Systems, An International Conference","author":"P Rives","year":"1986","unstructured":"Rives, P., Breuil, E., Espiau, B.: Recursive estimation of 3d features using optical flow and camera motion. In: Intelligent Autonomous Systems, An International Conference, pp. 522\u2013532. North-Holland Publishing Co., North-Holland (1986)"},{"key":"1322_CR19","doi-asserted-by":"crossref","unstructured":"Roosendaal, T.: Sintel. 
In: ACM SIGGRAPH 2011 Computer Animation Festival, p. 71 (2011)","DOI":"10.1145\/2019001.2019066"},{"issue":"9","key":"1322_CR20","doi-asserted-by":"publisher","first-page":"1377","DOI":"10.1109\/TCSVT.2012.2202070","volume":"22","author":"T Senst","year":"2012","unstructured":"Senst, T., Eiselein, V., Sikora, T.: Robust local optical flow for feature tracking. IEEE Trans. Circuits Syst. Video Technol. 22(9), 1377\u20131387 (2012)","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"1322_CR21","doi-asserted-by":"publisher","first-page":"345","DOI":"10.1111\/j.1467-8659.2012.03013.x","volume":"31","author":"M Tao","year":"2012","unstructured":"Tao, M., Bai, J., Kohli, P., Paris, S.: Simpleflow: a non-iterative, sublinear optical flow algorithm. Comput. Graph. Forum Wiley Online Lib. 31, 345\u2013353 (2012)","journal-title":"Comput. Graph. Forum Wiley Online Lib."},{"key":"1322_CR22","doi-asserted-by":"publisher","unstructured":"Tsai, Y.H., Yang, M.H., Black, M.J.: Video segmentation via object flow. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3899\u20133908 (2016). https:\/\/doi.org\/10.1109\/CVPR.2016.423","DOI":"10.1109\/CVPR.2016.423"},{"key":"1322_CR23","doi-asserted-by":"publisher","unstructured":"Vishniakou, I., Pl\u00f6ger, P.G., Seelig, J.D.: Virtual reality for animal navigation with camera-based optical flow tracking. J. Neurosci. Methods 327, 108403 (2019). https:\/\/doi.org\/10.1016\/j.jneumeth.2019.108403. www.sciencedirect.com\/science\/article\/pii\/S0165027019302602","DOI":"10.1016\/j.jneumeth.2019.108403"},{"key":"1322_CR24","doi-asserted-by":"crossref","unstructured":"Weinzaepfel, P., Revaud, J., Harchaoui, Z., Schmid, C.: Deepflow: large displacement optical flow with deep matching. 
In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2013)","DOI":"10.1109\/ICCV.2013.175"},{"key":"1322_CR25","doi-asserted-by":"crossref","unstructured":"Wulff, J., Black, M.J.: Efficient sparse-to-dense optical flow estimation using a learned basis and layers. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)","DOI":"10.1109\/CVPR.2015.7298607"},{"key":"1322_CR26","doi-asserted-by":"crossref","unstructured":"Yang, G., Ramanan, D.: Upgrading optical flow to 3D scene flow through optical expansion. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020)","DOI":"10.1109\/CVPR42600.2020.00141"},{"key":"1322_CR27","doi-asserted-by":"publisher","unstructured":"Younis, O., Al-Nuaimy, W., Rowe, F., Alomari, M.H.: Real-time detection of wearable camera motion using optical flow. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1\u20136 (2018). https:\/\/doi.org\/10.1109\/CEC.2018.8477783","DOI":"10.1109\/CEC.2018.8477783"},{"key":"1322_CR28","doi-asserted-by":"crossref","unstructured":"Zach, C., Pock, T., Bischof, H.: A duality based approach for realtime tv-l 1 optical flow. In: Joint Pattern Recognition Symposium, pp. 214\u2013223. Springer, London (2007)","DOI":"10.1007\/978-3-540-74936-3_22"},{"key":"1322_CR29","doi-asserted-by":"publisher","unstructured":"Zhai, M., Xiang, X., Lv, N., Kong, X.: Optical flow and scene flow estimation: a survey. Pattern Recogn. 114, 107861 (2021). https:\/\/doi.org\/10.1016\/j.patcog.2021.107861. www.sciencedirect.com\/science\/article\/pii\/S0031320321000480","DOI":"10.1016\/j.patcog.2021.107861"},{"key":"1322_CR30","doi-asserted-by":"publisher","unstructured":"Zhang, J., Ding, Y., Xu, H., Yuan, Y.: An optical flow based moving objects detection algorithm for the UAV. In: 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS), pp. 233\u2013238 (2019). 
https:\/\/doi.org\/10.1109\/CCOMS.2019.8821661","DOI":"10.1109\/CCOMS.2019.8821661"}],"container-title":["Journal of Real-Time Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-023-01322-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11554-023-01322-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-023-01322-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,24]],"date-time":"2023-07-24T19:19:58Z","timestamp":1690226398000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11554-023-01322-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,26]]},"references-count":30,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["1322"],"URL":"https:\/\/doi.org\/10.1007\/s11554-023-01322-7","relation":{},"ISSN":["1861-8200","1861-8219"],"issn-type":[{"value":"1861-8200","type":"print"},{"value":"1861-8219","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,26]]},"assertion":[{"value":"12 January 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 May 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 May 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"60"}}