{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:09:19Z","timestamp":1760058559406,"version":"build-2065373602"},"reference-count":22,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,4,8]],"date-time":"2025-04-08T00:00:00Z","timestamp":1744070400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Computation"],"abstract":"<jats:p>Existing neural network architectures often struggle with two critical limitations: (1) information loss during dataset length standardization, where variable-length samples are forced into fixed dimensions, and (2) inefficient feature selection in single-modal systems, which treats all features equally regardless of relevance. To address these issues, this paper introduces the Deep Multi-Components Neural Network (DMCNN), a novel architecture that processes variable-length data by regrouping samples into components of similar lengths, thereby preserving information that traditional methods discard. DMCNN dynamically prioritizes task-relevant features through a component-weighting mechanism, which calculates the importance of each component via loss functions and adjusts weights using a SoftMax function. This approach eliminates the need for dataset standardization while enhancing meaningful features and suppressing irrelevant ones. Additionally, DMCNN seamlessly integrates multimodal data (e.g., text, speech, and signals) as separate components, leveraging complementary information to improve accuracy without requiring dimension alignment. Evaluated on the Multimodal EmotionLines Dataset (MELD) and CIFAR-10, DMCNN achieves state-of-the-art accuracy of 99.22% on MELD and 97.78% on CIFAR-10, outperforming existing methods like MNN and McDFR. The architecture\u2019s efficiency is further demonstrated by its reduced trainable parameters and robust handling of multimodal and variable-length inputs, making it a versatile solution for classification tasks.<\/jats:p>","DOI":"10.3390\/computation13040093","type":"journal-article","created":{"date-parts":[[2025,4,10]],"date-time":"2025-04-10T11:26:41Z","timestamp":1744284401000},"page":"93","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Deep Multi-Component Neural Network Architecture"],"prefix":"10.3390","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2097-9098","authenticated-orcid":false,"given":"Chafik","family":"Boulealam","sequence":"first","affiliation":[{"name":"LISAC, Department of Computer Science, Faculty of Science Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7623-2113","authenticated-orcid":false,"given":"Hajar","family":"Filali","sequence":"additional","affiliation":[{"name":"LISAC, Department of Computer Science, Faculty of Science Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco"},{"name":"Laboratory of Innovation in Management and Engineering (LIMIE), I\u015aGA, Fez 30000, Morocco"}]},{"given":"Jamal","family":"Riffi","sequence":"additional","affiliation":[{"name":"LISAC, Department of Computer Science, Faculty of Science Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco"}]},{"given":"Adnane Mohamed","family":"Mahraz","sequence":"additional","affiliation":[{"name":"LISAC, Department of Computer Science, Faculty of Science Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco"}]},{"given":"Hamid","family":"Tairi","sequence":"additional","affiliation":[{"name":"LISAC, Department of Computer Science, Faculty of Science Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco"}]}],"member":"1968","published-online":{"date-parts":[[2025,4,8]]},"reference":[{"key":"ref_1","first-page":"60","article-title":"Learning Multi-channel Deep Feature Representations for Face Recognition","volume":"44","author":"Chen","year":"2015","journal-title":"JMLR Workshop Conf. Proc."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1798","DOI":"10.1109\/TPAMI.2013.50","article-title":"Representation learning: A review and new perspectives","volume":"35","author":"Bengio","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"943","DOI":"10.22214\/ijraset.2022.47789","article-title":"An Introduction to Convolutional Neural Networks","volume":"10","author":"Saxena","year":"2022","journal-title":"Int. J. Res. Appl. Sci. Eng. Technol."},{"key":"ref_4","unstructured":"Sabour, S., and Hinton, G.E. (2017). Dynamic Routing Between Capsules. arXiv."},{"key":"ref_5","unstructured":"Sun, J., Fard, A.P., and Mahoor, M.H. (2021). XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers for Convolutional Neural Networks. arXiv."},{"key":"ref_6","unstructured":"Jeevan, P., and Sethi, A. (2021). Vision Xformers: Efficient Attention for Image Classification. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Chen, T., Zhang, Z., Ouyang, X., Liu, Z., Shen, Z., and Wang, Z. (2021, January 19\u201325). \u201cBNN - BN = ?\u201d: Training binary neural networks without batch normalization. Proceedings of the 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA.","DOI":"10.1109\/CVPRW53098.2021.00520"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"387","DOI":"10.1007\/s11063-021-10636-1","article-title":"Meaningful Learning for Deep Facial Emotional Features","volume":"54","author":"Filali","year":"2021","journal-title":"Neural Process. Lett."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Filali, H., Riffi, J., Boulealam, C., Mahraz, M.A., and Tairi, H. (2022). Multimodal Emotional Classification Based on Meaningful Learning. Big Data Cogn. Comput., 6.","DOI":"10.3390\/bdcc6030095"},{"key":"ref_10","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_11","unstructured":"Zhang, Z., Zhang, H., Zhao, L., Chen, T., and Pfister, T. (2021). Aggregating Nested Transformers. arXiv."},{"key":"ref_12","unstructured":"Deng, W., Feng, Q., Gao, L., Liang, F., and Lin, G. (2020, January 13\u201318). Non-convex learning via replica exchange stochastic gradient MCMC. Proceedings of the 37th International Conference on Machine Learning, Online."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Yun, S., Han, D., Chun, S., Oh, S.J., Choe, J., and Yoo, Y. (November, January 27). CutMix: Regularization strategy to train strong classifiers with localizable features. Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea.","DOI":"10.1109\/ICCV.2019.00612"},{"key":"ref_14","unstructured":"Lu, Z., Member, S., Sreekumar, G., Goodman, E., Banzhaf, W., Deb, K., and Boddeti, V.N. (2020). Neural Architecture Transfer. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., and Mihalcea, R. (2018). Meld: A multimodal multi-party dataset for emotion recognition in conversations. arXiv.","DOI":"10.18653\/v1\/P19-1050"},{"key":"ref_16","unstructured":"Chen, S.-Y., Hsu, C.-C., Kuo, C.-C., and Ku, L.-W. (2018). Emotionlines: An emotion corpus of multi-party conversations. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Xiong, Y., Zeng, Z., Chakraborty, R., Tan, M., Fung, G., Li, Y., and Singh, V. (2021, January 2\u20139). Nystr\u00f6mformer: A nystr\u00f6m-based algorithm for approximating self-attention. Proceedings of the AAAI Conference on Artificial Intelligence, Virtual.","DOI":"10.1609\/aaai.v35i16.17664"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Wu, H., Xiao, B., Codella, N., Liu, M., Dai, X., Yuan, L., and Zhang, L. (2021, January 11\u201317). Cvt: Introducing convolutions to vision transformers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00009"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Li, J., Zhang, H., and Xie, C. (2022, January 23\u201327). ViP: Unified Certified Detection and Recovery for Patch Attack with Vision Transformers. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-19806-9_33"},{"key":"ref_20","unstructured":"Kim, W., Son, B., and Kim, I. (2021, January 18\u201324). Vilt: Vision-and-language transformer without convolution or region supervision. Proceedings of the International Conference on Machine Learning, PMLR, Virtual."},{"key":"ref_21","first-page":"3872","article-title":"Robust Bayesian method for simultaneous block sparse signal recovery with applications to face recognition","volume":"2016","author":"Fedorov","year":"2016","journal-title":"Proc. Int. Conf. Image Process."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"}],"container-title":["Computation"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2079-3197\/13\/4\/93\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:12:40Z","timestamp":1760029960000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2079-3197\/13\/4\/93"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,8]]},"references-count":22,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,4]]}},"alternative-id":["computation13040093"],"URL":"https:\/\/doi.org\/10.3390\/computation13040093","relation":{},"ISSN":["2079-3197"],"issn-type":[{"type":"electronic","value":"2079-3197"}],"subject":[],"published":{"date-parts":[[2025,4,8]]}}}