{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,8]],"date-time":"2025-09-08T05:39:52Z","timestamp":1757309992565,"version":"3.37.3"},"reference-count":64,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T00:00:00Z","timestamp":1728864000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T00:00:00Z","timestamp":1728864000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Institute of Information & Communications Technology Planning & Evaluation","award":["2019-0-00075,2022-0-00157"],"award-info":[{"award-number":["2019-0-00075,2022-0-00157"]}]},{"DOI":"10.13039\/501100003629","name":"Korea Meteorological Administration","doi-asserted-by":"publisher","award":["KMA2021-00123"],"award-info":[{"award-number":["KMA2021-00123"]}],"id":[{"id":"10.13039\/501100003629","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2021R1C1C1008526,NRF-2020R1C1C1008296"],"award-info":[{"award-number":["NRF-2021R1C1C1008526,NRF-2020R1C1C1008296"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Knowl Inf Syst"],"published-print":{"date-parts":[[2025,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Many real-world datasets are represented as tensors, i.e., multi-dimensional arrays of numerical values. Storing them without compression often requires substantial space, which grows exponentially with the order. 
While many tensor compression algorithms are available, many of them rely on strong assumptions about the data's order, sparsity, rank, and smoothness. In this work, we propose <jats:sc>TensorCodec<\/jats:sc>, a lossy compression algorithm for general tensors that do not necessarily adhere to strong input data assumptions. <jats:sc>TensorCodec<\/jats:sc> incorporates three key ideas. The first idea is neural tensor-train decomposition (NTTD), in which we integrate a recurrent neural network into Tensor-Train Decomposition to enhance its expressive power and alleviate the limitations imposed by the low-rank assumption. Another idea is to fold the input tensor into a higher-order tensor to reduce the space required by NTTD. Finally, the mode indices of the input tensor are reordered to reveal patterns that can be exploited by NTTD for improved approximation. In addition, we extend <jats:sc>TensorCodec<\/jats:sc> to enable the lossy compression of tensors with missing entries, which are often found in real-world datasets. 
Our analysis and experiments on 8 real-world datasets demonstrate that <jats:sc>TensorCodec<\/jats:sc> is (a) Concise: it gives up to <jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$7.38 \\times $$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>7.38<\/mml:mn>\n                    <mml:mo>\u00d7<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> more compact compression than the best competitor with similar reconstruction error, (b) Accurate: given the same budget for compressed size, it yields up to <jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$3.33\\times $$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>3.33<\/mml:mn>\n                    <mml:mo>\u00d7<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> more accurate reconstruction than the best competitor, (c) Scalable: its empirical compression time is linear in the number of tensor entries, and it reconstructs each entry in logarithmic time. 
Our code and datasets are available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/kbrother\/TensorCodec\" ext-link-type=\"uri\">https:\/\/github.com\/kbrother\/TensorCodec<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s10115-024-02252-x","type":"journal-article","created":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T01:01:36Z","timestamp":1728867696000},"page":"1169-1211","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Compact lossy compression of tensors via neural tensor-train decomposition"],"prefix":"10.1007","volume":"67","author":[{"given":"Taehyung","family":"Kwon","sequence":"first","affiliation":[]},{"given":"Jihoon","family":"Ko","sequence":"additional","affiliation":[]},{"given":"Jinhong","family":"Jung","sequence":"additional","affiliation":[]},{"given":"Jun-Gi","family":"Jang","sequence":"additional","affiliation":[]},{"given":"Kijung","family":"Shin","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,10,14]]},"reference":[{"issue":"3","key":"2252_CR1","doi-asserted-by":"publisher","first-page":"455","DOI":"10.1137\/07070111X","volume":"51","author":"TG Kolda","year":"2009","unstructured":"Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455\u2013500","journal-title":"SIAM Rev"},{"key":"2252_CR2","doi-asserted-by":"crossref","unstructured":"Jang J-G, Kang U (2020) D-tucker: fast and memory-efficient tucker decomposition for dense tensors. In: IEEE international conference on data engineering (ICDE), pp 1850\u2013 1853","DOI":"10.1109\/ICDE48307.2020.00186"},{"key":"2252_CR3","doi-asserted-by":"crossref","unstructured":"Jang J-G, Kang U (2021) Fast and memory-efficient tucker decomposition for answering diverse time range queries. 
In: ACM SIGKDD conference on knowledge discovery & data mining (KDD), pp 725\u2013 735","DOI":"10.1145\/3447548.3467290"},{"key":"2252_CR4","doi-asserted-by":"crossref","unstructured":"Wang J, Liu Z, Wu Y, Yuan J (2012) Mining actionlet ensemble for action recognition with depth cameras. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 1290\u2013 1297","DOI":"10.1109\/CVPR.2012.6247813"},{"key":"2252_CR5","doi-asserted-by":"crossref","unstructured":"Liu X, You X, Zhang X, Wu J, Lv P (2020) Tensor graph convolutional networks for text classification. In: AAAI conference on artificial intelligence (AAAI), vol 34, pp 8409\u20138416","DOI":"10.1609\/aaai.v34i05.6359"},{"key":"2252_CR6","doi-asserted-by":"crossref","unstructured":"Wu C-Y, Feichtenhofer C, Fan H, He K, Krahenbuhl P, Girshick R (2019) Long-term feature banks for detailed video understanding. In: IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 284\u2013 293","DOI":"10.1109\/CVPR.2019.00037"},{"key":"2252_CR7","doi-asserted-by":"crossref","unstructured":"Luo Y, Liu Q, Liu Z (2021) Stan: Spatio-temporal attention network for next location recommendation. In: ACM Web Conference (WebConf), pp. 2177\u2013 2185","DOI":"10.1145\/3442381.3449998"},{"key":"2252_CR8","first-page":"448","volume":"3","author":"C Yin","year":"2021","unstructured":"Yin C, Acun B, Wu C-J, Liu X (2021) Tt-rec: tensor train compression for deep learning recommendation models. Mach Learn Syst (MLSys) 3:448\u2013462","journal-title":"Mach Learn Syst (MLSys)"},{"key":"2252_CR9","doi-asserted-by":"crossref","unstructured":"Yin C, Zheng D, Nisa I, Faloutsos C, Karypis G, Vuduc R (2022) Nimble gnn embedding with tensor-train decomposition. 
In: ACM SIGKDD conference on knowledge discovery and data mining (KDD), pp 2327\u2013 2335","DOI":"10.1145\/3534678.3539423"},{"issue":"2","key":"2252_CR10","doi-asserted-by":"publisher","first-page":"505","DOI":"10.1046\/j.1365-246X.1998.00652.x","volume":"135","author":"P Xu","year":"1998","unstructured":"Xu P (1998) Truncated svd methods for discrete linear ill-posed problems. Geophys J Int 135(2):505\u2013514","journal-title":"Geophys J Int"},{"key":"2252_CR11","doi-asserted-by":"crossref","unstructured":"Sun J, Xie Y, Zhang H, Faloutsos C (2007) Less is more: compact matrix decomposition for large sparse graphs. In: SIAM international conference on data mining (SDM), pp 366\u2013 377","DOI":"10.1137\/1.9781611972771.33"},{"key":"2252_CR12","doi-asserted-by":"crossref","unstructured":"Smith S, Ravindran N, Sidiropoulos ND, Karypis G (2015) Splatt: efficient and parallel sparse tensor-matrix multiplication. In: IEEE international parallel and distributed processing symposium (IPDPS), pp 61\u2013 70","DOI":"10.1109\/IPDPS.2015.27"},{"key":"2252_CR13","doi-asserted-by":"crossref","unstructured":"Kwon T, Ko J, Jung J, Shin K (2023) Neukron: constant-size lossy compression of sparse reorderable matrices and tensors. In: ACM web conference (WWW), pp 71\u2013 81","DOI":"10.1145\/3543507.3583226"},{"issue":"1\u20134","key":"2252_CR14","doi-asserted-by":"publisher","first-page":"164","DOI":"10.1002\/sapm192761164","volume":"6","author":"FL Hitchcock","year":"1927","unstructured":"Hitchcock FL (1927) The expression of a tensor or a polyadic as a sum of products. J Math Phys 6(1\u20134):164\u2013189","journal-title":"J Math Phys"},{"issue":"3","key":"2252_CR15","doi-asserted-by":"publisher","first-page":"279","DOI":"10.1007\/BF02289464","volume":"31","author":"LR Tucker","year":"1966","unstructured":"Tucker LR (1966) Some mathematical notes on three-mode factor analysis. 
Psychometrika 31(3):279\u2013311","journal-title":"Psychometrika"},{"issue":"5","key":"2252_CR16","doi-asserted-by":"publisher","first-page":"2295","DOI":"10.1137\/090752286","volume":"33","author":"IV Oseledets","year":"2011","unstructured":"Oseledets IV (2011) Tensor-train decomposition. SIAM J Sci Comput (SISC) 33(5):2295\u20132317","journal-title":"SIAM J Sci Comput (SISC)"},{"key":"2252_CR17","doi-asserted-by":"crossref","unstructured":"Zhao Q, Sugiyama M, Yuan L, Cichocki A (2019) Learning efficient tensor representations with ring-structured networks. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 8608\u2013 8612","DOI":"10.1109\/ICASSP.2019.8682231"},{"issue":"9","key":"2252_CR18","doi-asserted-by":"publisher","first-page":"2891","DOI":"10.1109\/TVCG.2019.2904063","volume":"26","author":"R Ballester-Ripoll","year":"2019","unstructured":"Ballester-Ripoll R, Lindstrom P, Pajarola R (2019) Tthresh: tensor compression for multidimensional visual data. IEEE Trans Vis Comput Gr (TVCG) 26(9):2891\u20132903","journal-title":"IEEE Trans Vis Comput Gr (TVCG)"},{"key":"2252_CR19","doi-asserted-by":"crossref","unstructured":"Zhao K, Di S, Dmitriev M, Tonellot T-LD, Chen Z, Cappello F (2021) Optimizing error-bounded lossy compression for scientific data by dynamic spline interpolation. In: IEEE international conference on data engineering (ICDE), pp 1643\u2013 1654","DOI":"10.1109\/ICDE51399.2021.00145"},{"issue":"6","key":"2252_CR20","doi-asserted-by":"publisher","first-page":"1683","DOI":"10.1109\/TCSVT.2019.2910119","volume":"30","author":"S Ma","year":"2019","unstructured":"Ma S, Zhang X, Jia C, Zhao Z, Wang S, Wang S (2019) Image and video compression with neural networks: a review. 
IEEE Trans Circuits Syst Video Technol (TCSVT) 30(6):1683\u20131698","journal-title":"IEEE Trans Circuits Syst Video Technol (TCSVT)"},{"key":"2252_CR21","doi-asserted-by":"crossref","unstructured":"Bhaskaran V, Konstantinides, K (1997) Image and video compression standards: algorithms and architectures","DOI":"10.1007\/978-1-4615-6199-6"},{"key":"2252_CR22","doi-asserted-by":"crossref","unstructured":"Kwon T, Ko J, Jung J, Shin K (2023) Tensorcodec: compact lossy compression of tensors without strong assumptions on data properties. In: IEEE international conference on data mining (ICDM)","DOI":"10.1109\/ICDM58522.2023.00032"},{"issue":"3","key":"2252_CR23","doi-asserted-by":"publisher","first-page":"283","DOI":"10.1007\/BF02310791","volume":"35","author":"JD Carroll","year":"1970","unstructured":"Carroll JD, Chang J-J (1970) Analysis of individual differences in multidimensional scaling via an n-way generalization of \u201ceckart-young\u2019\u2019 decomposition. Psychometrika 35(3):283\u2013319","journal-title":"Psychometrika"},{"issue":"1","key":"2252_CR24","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1137\/060676489","volume":"30","author":"BW Bader","year":"2008","unstructured":"Bader BW, Kolda TG (2008) Efficient matlab computations with sparse and factored tensors. SIAM J Sci Comput (SISC) 30(1):205\u2013231","journal-title":"SIAM J Sci Comput (SISC)"},{"key":"2252_CR25","doi-asserted-by":"crossref","unstructured":"Kolda TG, Sun J (2008) Scalable tensor decompositions for multi-aspect data mining. In: IEEE international conference on data mining (ICDM), pp 363\u2013 372","DOI":"10.1109\/ICDM.2008.89"},{"issue":"7","key":"2252_CR26","doi-asserted-by":"publisher","first-page":"2765","DOI":"10.1007\/s10115-019-01435-1","volume":"62","author":"J Zhang","year":"2020","unstructured":"Zhang J, Oh J, Shin K, Papalexakis EE, Faloutsos C, Yu H (2020) Fast and memory-efficient algorithms for high-order tucker decomposition. 
Knowl Inf Syst 62(7):2765\u20132794","journal-title":"Knowl Inf Syst"},{"key":"2252_CR27","doi-asserted-by":"crossref","unstructured":"Leskovec J, Faloutsos C (2007) Scalable modeling of real graphs using kronecker multiplication. In: International conference on machine learning (ICML), pp 497\u2013 504","DOI":"10.1145\/1273496.1273559"},{"key":"2252_CR28","unstructured":"Novikov A, Podoprikhin D, Osokin A, Vetrov DP (2015) Tensorizing neural networks. Adv Neural Inf Process Syst (NeurIPS) 28"},{"key":"2252_CR29","unstructured":"Yang Y, Krompass D, Tresp V (2017) Tensor-train recurrent neural networks for video classification. In: International conference on machine learning (ICML), pp 3891\u2013 3900"},{"key":"2252_CR30","unstructured":"Xu M, Xu YL, Mandic DP (2023) Tensorgpt: efficient compression of the embedding layer in llms based on the tensor-train decomposition. arXiv preprint arXiv:2307.00526"},{"issue":"1","key":"2252_CR31","doi-asserted-by":"publisher","first-page":"41","DOI":"10.1016\/j.chemolab.2010.08.004","volume":"106","author":"E Acar","year":"2011","unstructured":"Acar E, Dunlavy DM, Kolda TG, M\u00f8rup M (2011) Scalable tensor factorizations for incomplete data. Chemom Intell Lab Syst 106(1):41\u201356","journal-title":"Chemom Intell Lab Syst"},{"key":"2252_CR32","unstructured":"Yu R, Zheng S, Anandkumar A, Yue Y (2017) Long-term forecasting using tensor-train rnns"},{"key":"2252_CR33","doi-asserted-by":"crossref","unstructured":"Zheng Y-B, Huang T-Z, Zhao X-L, Zhao Q, Jiang T-X (2021) Fully-connected tensor network decomposition and its application to higher-order tensor completion. In: AAAI conference on artificial intelligence (AAAI), pp 11071\u2013 11078","DOI":"10.1609\/aaai.v35i12.17321"},{"key":"2252_CR34","unstructured":"Fan, J (2021) Multi-mode deep matrix and tensor factorization. 
In: International conference on learning representations (ICLR)"},{"key":"2252_CR35","doi-asserted-by":"crossref","unstructured":"Lee D, Shin, K (2021) Robust factorization of real-world tensor streams with patterns, missing values, and outliers. In: IEEE international conference on data engineering (ICDE), pp 840\u2013 851","DOI":"10.1109\/ICDE51399.2021.00078"},{"key":"2252_CR36","doi-asserted-by":"crossref","unstructured":"Lamba H, Nagarajan V, Shin K, Shajarisales N (2016) Incorporating side information in tensor completion. In: International conference companion on world wide web, pp 65\u2013 66","DOI":"10.1145\/2872518.2889371"},{"issue":"3","key":"2252_CR37","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/BF02288367","volume":"1","author":"C Eckart","year":"1936","unstructured":"Eckart C, Young G (1936) The approximation of one matrix by another of lower rank. Psychometrika 1(3):211\u2013218","journal-title":"Psychometrika"},{"issue":"11","key":"2252_CR38","doi-asserted-by":"publisher","first-page":"3016","DOI":"10.1109\/TKDE.2015.2448542","volume":"27","author":"Z Guan","year":"2015","unstructured":"Guan Z, Zhang L, Peng J, Fan J (2015) Multi-view concept learning for data representation. IEEE Trans Knowl Data Eng 27(11):3016\u20133028","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"2252_CR39","doi-asserted-by":"crossref","unstructured":"Xu C, Guan Z, Zhao W, Niu Y, Wang Q, Wang Z (2018) Deep multi-view concept learning. In: IJCAI, pp. 2898\u2013 2904 . Stockholm","DOI":"10.24963\/ijcai.2018\/402"},{"issue":"2","key":"2252_CR40","doi-asserted-by":"publisher","first-page":"814","DOI":"10.1109\/TNNLS.2020.2979532","volume":"32","author":"W Zhao","year":"2020","unstructured":"Zhao W, Xu C, Guan Z, Liu Y (2020) Multiview concept learning via deep matrix factorization. 
IEEE Trans Neural Netw Learn Syst 32(2):814\u2013825","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"2252_CR41","first-page":"1573","volume":"1","author":"KL Hoffman","year":"2013","unstructured":"Hoffman KL, Padberg M, Rinaldi G et al (2013) Traveling salesman problem. Encycl Oper Res Manag Sci 1:1573\u20131578","journal-title":"Encycl Oper Res Manag Sci"},{"key":"2252_CR42","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-30162-4","volume-title":"Encyclopedia of algorithms","author":"M-Y Kao","year":"2008","unstructured":"Kao M-Y (2008) Encyclopedia of algorithms. Springer, New York"},{"issue":"6","key":"2252_CR43","doi-asserted-by":"publisher","first-page":"1389","DOI":"10.1002\/j.1538-7305.1957.tb01515.x","volume":"36","author":"RC Prim","year":"1957","unstructured":"Prim RC (1957) Shortest connection networks and some generalizations. Bell Syst Tech J 36(6):1389\u20131401","journal-title":"Bell Syst Tech J"},{"issue":"8","key":"2252_CR44","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735\u20131780","journal-title":"Neural Comput"},{"key":"2252_CR45","doi-asserted-by":"crossref","unstructured":"Cho K, Merrienboer B, Gulcehre C, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using rnn encoder-decoder for statistical machine translation. In: Conference on empirical methods in natural language processing (EMNLP)","DOI":"10.3115\/v1\/D14-1179"},{"key":"2252_CR46","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst (NeurIPS) 30"},{"key":"2252_CR47","unstructured":"Kingma DP, Ba J (2015) Adam: a method for stochastic optimization. 
In: International conference on learning representations (ICLR)"},{"key":"2252_CR48","doi-asserted-by":"publisher","DOI":"10.1017\/9781108684163","volume-title":"Mining of massive data sets","author":"J Leskovec","year":"2020","unstructured":"Leskovec J, Rajaraman A, Ullman JD (2020) Mining of massive data sets. Cambridge University Press, Cambridge"},{"key":"2252_CR49","doi-asserted-by":"crossref","unstructured":"Zhang Y, Roughan M, Willinger W, Qiu L (2009) Spatio-temporal compressive sensing and internet traffic matrices. In: ACM SIGCOMM conference on data communication (SIGCOMM), pp 267\u2013 278","DOI":"10.1145\/1594977.1592600"},{"issue":"1","key":"2252_CR50","doi-asserted-by":"publisher","first-page":"100","DOI":"10.1109\/TKDE.2016.2610420","volume":"29","author":"K Shin","year":"2017","unstructured":"Shin K, Sael L, Kang U (2017) Fully scalable methods for distributed tensor factorization. IEEE Trans Knowl Data Eng (TKDE) 29(1):100\u2013113","journal-title":"IEEE Trans Knowl Data Eng (TKDE)"},{"key":"2252_CR51","doi-asserted-by":"crossref","unstructured":"Yuan L, Li C, Mandic D, Cao J, Zhao Q (2019) Tensor ring decomposition with rank minimization on latent space: an efficient approach for tensor completion. In: AAAI conference on artificial intelligence (AAAI), pp 9151\u2013 9158","DOI":"10.1609\/aaai.v33i01.33019151"},{"issue":"2","key":"2252_CR52","doi-asserted-by":"publisher","first-page":"876","DOI":"10.1137\/17M1112303","volume":"39","author":"C Battaglino","year":"2018","unstructured":"Battaglino C, Ballard G, Kolda TG (2018) A practical randomized CP tensor decomposition. SIAM J Matrix Anal Appl (SIMAX) 39(2):876\u2013901","journal-title":"SIAM J Matrix Anal Appl (SIMAX)"},{"key":"2252_CR53","doi-asserted-by":"crossref","unstructured":"Perros I, Papalexakis EE, Park H, Vuduc R, Yan X, Defilippi C, Stewart WF, Sun J (2018) Sustain: scalable unsupervised scoring for tensors and its application to phenotyping. 
In: ACM SIGKDD international conference on knowledge discovery and data mining (KDD), pp 2080\u2013 2089","DOI":"10.1145\/3219819.3219999"},{"key":"2252_CR54","first-page":"1877","volume":"33","author":"T Brown","year":"2020","unstructured":"Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33:1877\u20131901","journal-title":"Adv Neural Inf Process Syst"},{"key":"2252_CR55","unstructured":"Touvron H, Lavril T, Izacard G, Martinet X, Lachaux M-A, Lacroix T, Rozi\u00e8re B, Goyal N, Hambro E, Azhar F, et al (2023) Llama: open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)"},{"key":"2252_CR56","unstructured":"FROSTT: the formidable repository of open sparse tensors and tools. http:\/\/frostt.io\/"},{"key":"2252_CR57","doi-asserted-by":"publisher","first-page":"237","DOI":"10.1016\/j.neunet.2019.04.014","volume":"116","author":"F Karim","year":"2019","unstructured":"Karim F, Majumdar S, Darabi H, Harford S (2019) Multivariate LSTM-FCNs for time series classification. Neural Netw 116:237\u2013245","journal-title":"Neural Netw"},{"key":"2252_CR58","unstructured":"Cuturi M (2011) Fast global alignment kernels. In: International conference on machine learning (ICML), pp 929\u2013 936"},{"key":"2252_CR59","unstructured":"Tensor toolbox for MATLAB V. 3.5. https:\/\/tensortoolbox.org\/"},{"key":"2252_CR60","unstructured":"TT-toolbox V. 2.2.2. https:\/\/github.com\/oseledets\/TT-Toolbox"},{"key":"2252_CR61","unstructured":"Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A (2016) Tensor ring decomposition. 
arXiv preprint arXiv:1606.05535"},{"issue":"1\u20132","key":"2252_CR62","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1007\/s10107-011-0484-9","volume":"137","author":"H Attouch","year":"2013","unstructured":"Attouch H, Bolte J, Svaiter BF (2013) Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math Program 137(1\u20132):91\u2013129","journal-title":"Math Program"},{"key":"2252_CR63","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1016\/S0925-2312(01)00700-7","volume":"50","author":"C Igel","year":"2003","unstructured":"Igel C, H\u00fcsken M (2003) Empirical evaluation of the improved Rprop learning algorithms. Neurocomputing 50:105\u2013123","journal-title":"Neurocomputing"},{"issue":"1","key":"2252_CR64","first-page":"1","volume":"3","author":"S Boyd","year":"2011","unstructured":"Boyd S, Parikh N, Chu E, Peleato B, Eckstein J et al (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. 
Found Trends\u00ae Mach Learn 3(1):1\u2013122","journal-title":"Found Trends\u00ae Mach Learn"}],"container-title":["Knowledge and Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10115-024-02252-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10115-024-02252-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10115-024-02252-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,31]],"date-time":"2025-01-31T22:53:43Z","timestamp":1738364023000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10115-024-02252-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,14]]},"references-count":64,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,2]]}},"alternative-id":["2252"],"URL":"https:\/\/doi.org\/10.1007\/s10115-024-02252-x","relation":{},"ISSN":["0219-1377","0219-3116"],"issn-type":[{"type":"print","value":"0219-1377"},{"type":"electronic","value":"0219-3116"}],"subject":[],"published":{"date-parts":[[2024,10,14]]},"assertion":[{"value":"27 December 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 June 2024","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 September 2024","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 October 2024","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article 
History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}