{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,7,13]],"date-time":"2024-07-13T14:00:07Z","timestamp":1720879207083},"reference-count":20,"publisher":"Hindawi Limited","license":[{"start":{"date-parts":[[2009,10,29]],"date-time":"2009-10-29T00:00:00Z","timestamp":1256774400000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/3.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Advances in Artificial Intelligence"],"published-print":{"date-parts":[[2009,10,29]]},"abstract":"<jats:p>This article describes how the costs of misclassification given with the individual training objects for classification learning can be used in the construction of decision trees for minimal cost instead of minimal error class decisions. This is demonstrated by defining modified, cost-dependent probabilities, a new, cost-dependent information measure, and using a cost-sensitive extension of the CAL5 algorithm for learning decision trees. The cost-dependent information measure ensures the selection of the (local) next best, that is, cost-minimizing, discriminating attribute in the sequential construction of the classification trees. This is shown to be a cost-dependent generalization of the classical information measure introduced by Shannon, which only depends on classical probabilities. It is therefore of general importance and extends classic information theory, knowledge processing, and cognitive science, since subjective evaluations of decision alternatives can be included in entropy and the transferred information. Decision trees can then be viewed as cost-minimizing decoders for class symbols emitted by a source and coded by feature vectors. Experiments with two artificial datasets and one application example show that this approach is more accurate than a method which uses class dependent costs given by experts a priori.<\/jats:p>","DOI":"10.1155\/2009\/134807","type":"journal-article","created":{"date-parts":[[2009,10,29]],"date-time":"2009-10-29T15:29:49Z","timestamp":1256830189000},"page":"1-13","source":"Crossref","is-referenced-by-count":2,"title":["A New Information Measure Based on Example-Dependent Misclassification Costs and Its Application in Decision Tree Learning"],"prefix":"10.1155","volume":"2009","author":[{"given":"Fritz","family":"Wysotzki","sequence":"first","affiliation":[{"name":"Faculty of Electrical Engineering and Computer Science, University of Technology Berlin, Sekr. FR 5-8, Franklinstra\u00dfe 28\/29, D-10587 Berlin, Germany"}]},{"given":"Peter","family":"Geibel","sequence":"additional","affiliation":[{"name":"Faculty of Electrical Engineering and Computer Science, University of Technology Berlin, Sekr. FR 5-8, Franklinstra\u00dfe 28\/29, D-10587 Berlin, Germany"}]}],"member":"98","reference":[{"key":"21","series-title":"Series in Artificial Intelligence","year":"1994"},{"key":"25","year":"2000"},{"key":"18","volume-title":"Cost\u2013sensitive learning and the class imbalance problem","year":"2008"},{"issue":"1\u20133","key":"17","first-page":"191","volume":"46","year":"2002","journal-title":"Machine Learning"},{"key":"8","doi-asserted-by":"publisher","DOI":"10.1023\/B:APIN.0000027766.72235.bc"},{"issue":"5","key":"9","doi-asserted-by":"crossref","first-page":"439","DOI":"10.3233\/IDA-2004-8502","volume":"8","year":"2004","journal-title":"Intelligent Data Analysis"},{"key":"12","volume-title":"Concept drift and the importance of examples","year":"2003"},{"key":"10","year":"2006"},{"key":"30","year":"1981"},{"key":"22","doi-asserted-by":"publisher","DOI":"10.1007\/BF02032305"},{"key":"23","volume-title":"The decision-tree algorithm CAL5 based on a statistical approach to its splitting algorithm","year":"1997"},{"key":"26","year":"1949"},{"key":"6","year":"2004"},{"key":"13","doi-asserted-by":"publisher","DOI":"10.1007\/s00451-006-0297-2"},{"key":"32","series-title":"CISM Courses and Lectures no. 382","volume-title":"Automatic construction of decision trees and neural nets for classification using statistical considerations","year":"1997"},{"issue":"2","key":"28","doi-asserted-by":"crossref","first-page":"169","DOI":"10.1108\/17563780810874708","volume":"1","year":"2008","journal-title":"International Journal of Intelligent Computing and Cybernetics"},{"key":"24","year":"1993"},{"key":"2","year":"1984"},{"key":"33","volume-title":"DIPOL-a hybrid piecewise linear classifier"},{"key":"11","volume-title":"Making large-scale SVM learning practical","year":"1999"}],"container-title":["Advances in Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/archive\/2009\/134807.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/archive\/2009\/134807.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/archive\/2009\/134807.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2020,12,7]],"date-time":"2020-12-07T20:45:27Z","timestamp":1607373927000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/aai\/2009\/134807\/"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2009,10,29]]},"references-count":20,"alternative-id":["134807","134807"],"URL":"https:\/\/doi.org\/10.1155\/2009\/134807","relation":{},"ISSN":["1687-7470","1687-7489"],"issn-type":[{"value":"1687-7470","type":"print"},{"value":"1687-7489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2009,10,29]]}}}