{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,1,18]],"date-time":"2025-01-18T05:35:41Z","timestamp":1737178541498,"version":"3.33.0"},"reference-count":49,"publisher":"Wiley","issue":"2","license":[{"start":{"date-parts":[[2006,12,13]],"date-time":"2006-12-13T00:00:00Z","timestamp":1165968000000},"content-version":"vor","delay-in-days":5400,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Circuit Theory &amp; Apps"],"published-print":{"date-parts":[[1992,3]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In this paper we present in a unified framework the gradient algorithms employed in the adaptation of linear time filters (TF) and the supervised training of (non\u2010linear) neural networks (NN). the optimality criteria used to optimize the parameters<jats:italic>H<\/jats:italic>of the filter or network are the least squares (LS) and least mean squares (LMS) in both contexts. They respectively minimize the total or the mean squares of the error<jats:italic>e(k)<\/jats:italic>between an (output) reference sequence<jats:italic>d(k)<\/jats:italic>and the actual system output<jats:italic>y(k)<\/jats:italic>corresponding to the input<jats:italic>X(k)<\/jats:italic>. Minimization is performed iteratively by a gradient algorithm. the index<jats:italic>k<\/jats:italic>in (TF) is time and it runs indefinitely. Thus iterations start as soon as reception of<jats:italic>X(k)<\/jats:italic>begins. the recursive algorithm for the adaptation<jats:italic>H<\/jats:italic>(<jats:italic>k<\/jats:italic>\u2013 1) \u2192<jats:italic>H(k)<\/jats:italic>of the parameters is implemented each time a new input<jats:italic>X(k)<\/jats:italic>is observed. When training a (NN) with a finite number of examples, the index<jats:italic>k<\/jats:italic>denotes the example and it is upper\u2010bounded. 
Iterative (block) algorithms wait until all <jats:italic>K<\/jats:italic> examples are received to begin the network updating. However, <jats:italic>K<\/jats:italic> being frequently very large, recursive algorithms are also often preferred in (NN) training, but they raise the question of ordering the examples <jats:italic>X(k)<\/jats:italic>.<\/jats:p><jats:p>Except in the specific case of a transversal filter, there is no general recursive technique for optimizing the LS criterion. However, <jats:italic>X(k)<\/jats:italic> is normally a random stationary sequence; thus LS and LMS are equivalent when <jats:italic>k<\/jats:italic> becomes large. Moreover, the LMS criterion can always be minimized recursively with the help of the stochastic LMS gradient algorithm, which has low computational complexity.<\/jats:p><jats:p>In (TF), <jats:italic>X(k)<\/jats:italic> is a sliding window of (time) samples, whereas in the supervised training of (NN) with arbitrarily ordered examples, <jats:italic>X<\/jats:italic>(<jats:italic>k<\/jats:italic> \u2013 1) and <jats:italic>X(k)<\/jats:italic> have nothing to do with each other. When this (major) difference is rubbed out by plugging a time signal at the network input, the recursive algorithms recently developed for (NN) training become similar to those of adaptive filtering. In this context the present paper displays the similarities between adaptive cascaded linear filters and trained multilayer networks. It is also shown that there is a close similarity between adaptive recursive filters and neural networks including feedback loops.<\/jats:p><jats:p>The classical filtering approach is to evaluate the gradient by \u2018forward propagation\u2019, whereas the most popular (NN) training method uses a gradient backward propagation method. We show that when a linear (TF) problem is implemented by an (NN), the two approaches are equivalent. However, the backward method can be used for more general (non\u2010linear) filtering problems. 
Conversely, new insights can be drawn in the (NN) context by the use of a gradient forward computation.<\/jats:p><jats:p>The advantage of the (NN) framework, and in particular of the gradient backward propagation approach, is evidently to have a much larger spectrum of applications than (TF), since (i) the inputs are arbitrary and (ii) the (NN) can perform non\u2010linear (TF).<\/jats:p>","DOI":"10.1002\/cta.4490200205","type":"journal-article","created":{"date-parts":[[2007,7,2]],"date-time":"2007-07-02T04:00:50Z","timestamp":1183348850000},"page":"159-200","source":"Crossref","is-referenced-by-count":16,"title":["A unified framework for gradient algorithms used for filter adaptation and neural network training"],"prefix":"10.1002","volume":"20","author":[{"given":"Sylvie","family":"Marcos","sequence":"first","affiliation":[]},{"given":"Odile","family":"Macchi","sequence":"additional","affiliation":[]},{"given":"Christophe","family":"Vignat","sequence":"additional","affiliation":[]},{"given":"G\u00e9rard","family":"Dreyfus","sequence":"additional","affiliation":[]},{"given":"L\u00e9on","family":"Personnaz","sequence":"additional","affiliation":[]},{"given":"Pierre","family":"Roussel\u2010Ragot","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2006,12,13]]},"reference":[{"key":"e_1_2_1_2_2","unstructured":"B. Widrow and M. E. Hoff, 'Adaptive switching circuits', IRE WESCON Conven. Rec., New York, September 1960, pt. 4, pp. 96\u2013104."},{"key":"e_1_2_1_3_2","doi-asserted-by":"crossref","unstructured":"R. P. Lippmann, 'An introduction to computing with neural nets', IEEE ASSP Mag., 4\u201322 (1987).","DOI":"10.1109\/MASSP.1987.1165576"},{"key":"e_1_2_1_4_2","doi-asserted-by":"crossref","DOI":"10.7551\/mitpress\/5236.001.0001","volume-title":"Parallel Distributed Processing: Explorations in the Microstructure of Cognition","author":"Rumelhart D. 
E.","year":"1986"},{"volume-title":"Automata Networks in Computer Sciences","year":"1988","author":"Fogelman\u2010Soulie F.","key":"e_1_2_1_5_2"},{"key":"e_1_2_1_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.58337"},{"volume-title":"Adaptive Signal Processing","year":"1985","author":"Widrow B.","key":"e_1_2_1_7_2"},{"volume-title":"Adaptive Filters, Structures, Algorithms, and Applications","year":"1984","author":"Honig M. L.","key":"e_1_2_1_8_2"},{"key":"e_1_2_1_9_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4612-4978-8"},{"volume-title":"Adaptive Digital Filters and Signal Analysis","year":"1987","author":"Bellanger M.","key":"e_1_2_1_10_2"},{"key":"e_1_2_1_11_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4613-1701-2"},{"key":"e_1_2_1_12_2","unstructured":"A.LapedesandR.Farber Nonlinear signal processing using neural networks: prediction and system modelling' Internal Rep. Los Alamos National Laboratory July1987."},{"key":"e_1_2_1_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/0893-6080(89)90005-1"},{"key":"e_1_2_1_14_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1989.1.2.270"},{"key":"e_1_2_1_15_2","unstructured":"O.NerrandP.Roussel\u2010Ragot L.Personnaz G.Dreyfus S.Marcos O.Macchi C.Vignat Neural Network Training Schemes for Non\u2010Linear Adaptive Filtering and Modelling' Proc. Int. Joint Conf. on Neural Networks Seattle 1991."},{"volume-title":"The utility driven dynamic error propagation network","year":"1987","author":"Robinson A. J.","key":"e_1_2_1_16_2"},{"key":"e_1_2_1_17_2","doi-asserted-by":"crossref","unstructured":"J. A.Cadzow Signal processing via least squares error modeling' IEEE ASSP Mag. pp.12\u201331(1990).","DOI":"10.1109\/53.62941"},{"key":"e_1_2_1_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAC.1983.1103120"},{"key":"e_1_2_1_19_2","unstructured":"F.Fogelman\u2010Soulie P.Gallinari Y.Le CunandS.Thiria Network learning' in Y. Kodratoff and R. S. Michalski (eds) Machine Learning Vol. 
3 1988."},{"key":"e_1_2_1_20_2","first-page":"41","volume-title":"Digital Communications","author":"Macchi O.","year":"1986"},{"key":"e_1_2_1_21_2","doi-asserted-by":"publisher","DOI":"10.1016\/0165-1684(90)90076-B"},{"key":"e_1_2_1_22_2","unstructured":"J. M.Travassos\u2010Romano M.BellangerandL. C.Coradine Least squares adaptive filter in cascade form for line pair spectrum modelling' Proc. EUSIPCO 90 Barcelona1990 pp.249\u2013252."},{"volume-title":"Adaptive filters with arbitrary structures","year":"1990","author":"Forssen U.","key":"e_1_2_1_23_2"},{"volume-title":"Adaptive Control: the Model Reference Approach","year":"1979","author":"Landau I. D.","key":"e_1_2_1_24_2"},{"volume-title":"Theory and Design of Adaptive Filters","year":"1987","author":"Treichler J. R.","key":"e_1_2_1_25_2"},{"key":"e_1_2_1_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/MASSP.1984.1162215"},{"key":"e_1_2_1_27_2","first-page":"295","article-title":"A common formalism for adaptive identification in signal processing and in control","volume":"138","author":"Macchi O.","journal-title":"Proc. IEE\u2010F"},{"key":"e_1_2_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TASSP.1981.1163588"},{"key":"e_1_2_1_29_2","doi-asserted-by":"crossref","unstructured":"J. J.Shynk Adaptive IIR filtering' IEEE ASSP Mag. 4\u201321(1989).","DOI":"10.1109\/53.29644"},{"key":"e_1_2_1_30_2","doi-asserted-by":"crossref","unstructured":"P. L.Feintuch An adaptive recursive LMS filter' Proc. IEEE 1622\u20131624(1976).","DOI":"10.1109\/PROC.1976.10384"},{"key":"e_1_2_1_31_2","unstructured":"M.JaidaneandO.Macchi Stability of adaptive recursive filters' Proc. ICASSP New York 1988 pp.1503\u20131505."},{"key":"e_1_2_1_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.1979.1056097"},{"key":"e_1_2_1_33_2","doi-asserted-by":"crossref","unstructured":"C. R.JohnsonandI. D.Landau On adaptive IIR filters and parallel adaptive identifiers with adaptive error filtering' Proc. ICASSP 1981; pp.538\u2013541. 
Atlanta GA.","DOI":"10.1109\/ICASSP.1981.1171224"},{"volume-title":"Principles of Neurodynamics","year":"1959","author":"Rosenblatt R.","key":"e_1_2_1_34_2"},{"key":"e_1_2_1_35_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0925-2312(89)80014-1"},{"key":"e_1_2_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/72.80252"},{"issue":"1","key":"e_1_2_1_37_2","first-page":"18","article-title":"Steady\u2010state analysis of a single\u2010layered perceptron based on a system model with bias terms","volume":"38","author":"Shynk J. J.","year":"1991","journal-title":"IEEE Trans. On Circuits and Systems"},{"key":"e_1_2_1_38_2","doi-asserted-by":"crossref","unstructured":"N. J.Bershad J. J.ShynkandP. L.Feintuch Statistical analysis of the single\u2010layer backpropagation algorithm for identification of a nonlinear system with Gaussian inputs' Proc. of ICASSP Toronto Canada May1991 pp.2157\u20132160.","DOI":"10.1109\/ICASSP.1991.150840"},{"key":"e_1_2_1_39_2","doi-asserted-by":"crossref","unstructured":"Y.Le Cun Mod\u00e8les connexionnistes de l'apprentissage' Th\u00e8se Universit\u00e9 Paris VI 1987.","DOI":"10.3406\/intel.1987.1804"},{"key":"e_1_2_1_40_2","unstructured":"G. E.Hinton Connectionist learning procedurese in Y. Kodratoff and R. S. Michalski (eds) Machine Learning Vol. 3 pp.391\u2013399 1988."},{"key":"e_1_2_1_41_2","doi-asserted-by":"crossref","unstructured":"P.Gallinari S.ThiriaandF.Fogelman\u2010Soulie Multilayer perceptrons in data analysis' Proc. 2nd Int. Conf. on Neural Networks San Diego CA pp.391\u2013399 1988.","DOI":"10.1109\/ICNN.1988.23871"},{"key":"e_1_2_1_42_2","unstructured":"B.Widrow R. G.WinterandR. A.Baxter Learning phenomena in layered neural networks' Proc. 1st Int. Conf. on Neural Networks San Diego CA June1987 pp.411\u2013429."},{"key":"e_1_2_1_43_2","unstructured":"A.Petrowski L.Personnaz G.DreyfusandC.Girault Implantation de r\u00e9seaux de neurones formels sur une architecture multiprocesseurs' Ier Colloq. Eur. 
sur les Hypercubes et Calculateurs Distribu\u00e9s, Rennes, October 1989."},{"key":"e_1_2_1_44_2","unstructured":"C. Vignat and F. Rozycki, 'R\u00e9seaux de neurones et filtres adaptatifs' [Neural networks and adaptive filters], Internal Rep., Laboratoire des Signaux et Syst\u00e8mes, Gif sur Yvette, September 1989."},{"key":"e_1_2_1_45_2","unstructured":"S. Marcos, C. Vignat and O. Macchi, 'L'algorithme de r\u00e9tropropagation du gradient en r\u00e9seaux de neurones compar\u00e9 aux algorithmes adaptatifs du traitement du signal' [The gradient backpropagation algorithm in neural networks compared with adaptive signal processing algorithms], Internal Rep., Laboratoire des Signaux et Syst\u00e8mes, Gif sur Yvette, July 1990."},{"key":"e_1_2_1_46_2","doi-asserted-by":"crossref","unstructured":"J. J. Hopfield, 'Artificial neural networks', IEEE Circuits and Devices Mag., 3\u201310 (1988).","DOI":"10.1109\/101.8118"},{"key":"e_1_2_1_47_2","unstructured":"R. Rohwer and S. Renals, 'Training recurrent networks', in L. Personnaz and G. Dreyfus (eds), Proc. nEURO88, Paris, June, pp. 207\u2013216, 1988."},{"key":"e_1_2_1_48_2","doi-asserted-by":"publisher","DOI":"10.1103\/PhysRevLett.59.2229"},{"key":"e_1_2_1_49_2","unstructured":"L. B. Almeida, 'A learning rule for asynchronous perceptrons with feedback in a combinatorial environment', Proc. 1st Int. Conf. on Neural Networks, San Diego, June 1987."},{"key":"e_1_2_1_50_2","unstructured":"C. Uhl and O. Macchi, 'When is DPCM a stable system?', Proc. 
ICASSP, Albuquerque, NM, April 1990, pp. 2747\u20132750."}],"container-title":["International Journal of Circuit Theory and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/api.wiley.com\/onlinelibrary\/tdm\/v1\/articles\/10.1002%2Fcta.4490200205","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/cta.4490200205","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,18]],"date-time":"2025-01-18T02:09:10Z","timestamp":1737166150000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/cta.4490200205"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[1992,3]]},"references-count":49,"journal-issue":{"issue":"2","published-print":{"date-parts":[[1992,3]]}},"alternative-id":["10.1002\/cta.4490200205"],"URL":"https:\/\/doi.org\/10.1002\/cta.4490200205","archive":["Portico"],"relation":{},"ISSN":["0098-9886","1097-007X"],"issn-type":[{"type":"print","value":"0098-9886"},{"type":"electronic","value":"1097-007X"}],"subject":[],"published":{"date-parts":[[1992,3]]}}}