{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T08:18:11Z","timestamp":1774599491588,"version":"3.50.1"},"reference-count":30,"publisher":"Cambridge University Press (CUP)","license":[{"start":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T00:00:00Z","timestamp":1715904000000},"content-version":"unspecified","delay-in-days":137,"URL":"https:\/\/www.cambridge.org\/core\/terms"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["52075111"],"award-info":[{"award-number":["52075111"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["cambridge.org"],"crossmark-restriction":true},"short-container-title":["AIEDAM"],"published-print":{"date-parts":[[2024]]},"abstract":"<jats:title>Abstract<\/jats:title>\n\t  <jats:p>Deep learning (DL) has been widely used in bearing fault diagnosis. In particular, convolutional neural networks (CNNs) improve diagnosis accuracy by extracting excellent fault features. However, CNNs lack an explicit learning mechanism to distinguish the contributions of different fault characteristics in the input signal to the diagnosis results. This article presents a new end-to-end deep framework called multi-head self-attention convolutional neural network (MSA-CNN) for bearing fault diagnosis. Firstly, we adopt a data pre-processing method that directly converts one-dimensional (1D) original signals into two-dimensional (2D) grayscale images, which is simple to implement and preserves the complete information of the original signal. Secondly, multi-head self-attention (MSA) is constructed to aggregate global information and adaptively assign weights to the features of the input signal. Thirdly, a CNN with small-scale kernels extracts detailed local features. Finally, the learned high-level representations are fed into a fully connected (FC) layer for fault diagnosis. The performance of the MSA-CNN is validated on different datasets. The results show that the proposed MSA-CNN can significantly improve fault diagnosis accuracy compared with other state-of-the-art methods and has excellent noise immunity.<\/jats:p>","DOI":"10.1017\/s0890060423000197","type":"journal-article","created":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T07:53:00Z","timestamp":1715932380000},"update-policy":"https:\/\/doi.org\/10.1017\/policypage","source":"Crossref","is-referenced-by-count":8,"title":["A novel intelligent fault diagnosis method of bearing based on multi-head self-attention convolutional neural network"],"prefix":"10.1017","volume":"38","author":[{"given":"Hang","family":"Ren","sequence":"first","affiliation":[]},{"given":"Shaogang","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Bo","family":"Qiu","sequence":"additional","affiliation":[]},{"given":"Hong","family":"Guo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5639-2056","authenticated-orcid":false,"given":"Dan","family":"Zhao","sequence":"additional","affiliation":[]}],"member":"56","published-online":{"date-parts":[[2024,5,17]]},"reference":[{"key":"S0890060423000197_ref26","doi-asserted-by":"publisher","DOI":"10.1016\/j.ymssp.2017.06.022"},{"key":"S0890060423000197_ref2","doi-asserted-by":"publisher","DOI":"10.1109\/TIE.2022.3144572"},{"key":"S0890060423000197_ref7","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"S0890060423000197_ref29","doi-asserted-by":"publisher","DOI":"10.1088\/1361-6501\/ac543a"},{"key":"S0890060423000197_ref11","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8682154"},{"key":"S0890060423000197_ref17","doi-asserted-by":"publisher","DOI":"10.1016\/j.gsf.2022.101519"},{"key":"S0890060423000197_ref16","doi-asserted-by":"publisher","DOI":"10.1177\/
0954408920971976"},{"key":"S0890060423000197_ref23","doi-asserted-by":"publisher","DOI":"10.1109\/TMECH.2022.3177174"},{"key":"S0890060423000197_ref9","doi-asserted-by":"publisher","DOI":"10.1016\/j.ymssp.2019.106587"},{"key":"S0890060423000197_ref15","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0164111"},{"key":"S0890060423000197_ref10","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.03.084"},{"key":"S0890060423000197_ref30","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2023.03.142"},{"key":"S0890060423000197_ref21","doi-asserted-by":"publisher","DOI":"10.1109\/TIE.2017.2774777"},{"key":"S0890060423000197_ref18","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2018.2864759"},{"key":"S0890060423000197_ref20","doi-asserted-by":"publisher","DOI":"10.1016\/j.promfg.2020.07.005"},{"key":"S0890060423000197_ref28","doi-asserted-by":"publisher","DOI":"10.1109\/TR.2022.3180273"},{"key":"S0890060423000197_ref5","doi-asserted-by":"publisher","DOI":"10.1016\/j.measurement.2020.107802"},{"key":"S0890060423000197_ref6","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.12.088"},{"key":"S0890060423000197_ref12","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2022.3152091"},{"key":"S0890060423000197_ref27","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2019.2943898"},{"key":"S0890060423000197_ref4","unstructured":"Dosovitskiy, A , Beyer, L , Kolesnikov, A , Weissenborn, D , Zhai, X , Unterthiner, T , Dehghani, M , Minderer, M , Heigold, G , Gelly, S , Uszkoreit, J and Houlsby, N (2021) An image is worth 16X16 words: transformers for image recognition at scale. 
International Conference on Learning Representations."},{"key":"S0890060423000197_ref24","doi-asserted-by":"publisher","DOI":"10.1016\/j.oceaneng.2022.113424"},{"key":"S0890060423000197_ref14","doi-asserted-by":"publisher","DOI":"10.1016\/j.ymssp.2018.02.016"},{"key":"S0890060423000197_ref22","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2020.02.013"},{"key":"S0890060423000197_ref8","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.05.052"},{"key":"S0890060423000197_ref3","doi-asserted-by":"publisher","DOI":"10.1109\/TIM.2017.2674738"},{"key":"S0890060423000197_ref19","doi-asserted-by":"publisher","DOI":"10.1016\/j.ymssp.2015.04.021"},{"key":"S0890060423000197_ref25","doi-asserted-by":"publisher","DOI":"10.1016\/j.measurement.2021.109226"},{"key":"S0890060423000197_ref13","doi-asserted-by":"publisher","DOI":"10.1016\/j.measurement.2020.107768"},{"key":"S0890060423000197_ref1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jsv.2016.10.043"}],"container-title":["Artificial Intelligence for Engineering Design, Analysis and Manufacturing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.cambridge.org\/core\/services\/aop-cambridge-core\/content\/view\/S0890060423000197","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T07:53:05Z","timestamp":1715932385000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.cambridge.org\/core\/product\/identifier\/S0890060423000197\/type\/journal_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024]]},"references-count":30,"alternative-id":["S0890060423000197"],"URL":"https:\/\/doi.org\/10.1017\/s0890060423000197","relation":{},"ISSN":["0890-0604","1469-1760"],"issn-type":[{"value":"0890-0604","type":"print"},{"value":"1469-1760","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024]]},"assertion":[{"value":"Copyright \u00a9 The Author(s), 2024. 
Published by Cambridge University Press","name":"copyright","label":"Copyright","group":{"name":"copyright_and_licensing","label":"Copyright and Licensing"}}],"article-number":"e9"}}