Applying Dempster–Shafer theory for developing a flexible, accurate and interpretable classifier. (15th June 2020)
- Record Type:
- Journal Article
- Title:
- Applying Dempster–Shafer theory for developing a flexible, accurate and interpretable classifier
- Main Title:
- Applying Dempster–Shafer theory for developing a flexible, accurate and interpretable classifier
- Authors:
- Peñafiel, Sergio
Baloian, Nelson
Sanson, Horacio
Pino, José A.
- Abstract:
- Highlights: Classification method that combines machine learning techniques and expert systems. Tables indicate the contribution of each attribute, allowing interpretability. Gradient descent is used to optimize the weights for each rule. Results are comparable to those of other classification methods while remaining interpretable. The proposed method is general and can easily be applied to other scenarios.
Abstract: Two approaches have traditionally been identified for developing artificial intelligence systems supporting decision-making: Machine Learning, which applies general techniques based on statistical analysis and optimization methods to extract information from a large amount of data, looking for possible relations among them, and Expert Systems, which codify experts' knowledge in rules that are then applied to a specific situation. One of the main advantages of the first approach is its greater accuracy and the wider generality of the methods developed, which can be used in various scenarios. By contrast, expert systems are usually more restricted and often applicable only to the domain for which they were originally developed. However, the machine learning approach requires the availability of large amounts of data, and it is much more complicated to interpret the results of the statistical methods to obtain some explanation of why the system decides, classifies, or evaluates a situation in a certain way. This issue may become very important in areas such as medicine, where it is relevant to know why the system recommends a certain treatment or diagnoses a certain illness. Likewise, in the financial sector, it might be legally required to show that a decision to reject the granting of a mortgage loan to a person is not due to discriminatory causes such as gender or race. In order to obtain interpretability and extract knowledge from available data, we developed a classification method based on Dempster–Shafer's Plausibility Theory. Mass assignment functions (MAF) must be established to apply this theory; they assign a weight or probability to all subsets of the possible outcomes, given the presence of a certain fact in a decision scenario. Thus, MAF assignments encode expert knowledge. The method learns optimal values for the weights of each MAF using the Gradient Descent method.
The presented method allows combining MAFs that have been generated by the method itself or defined by an expert with those derived from a set of available data. The developed method was first applied to controlled scenarios and traditional data sets to ensure that classifications and explanations are correct. Results show that the model can classify with an accuracy comparable to other statistical classification methods, while also being able to extract the most important decision rules from the data.
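The combination step at the heart of the method described in the abstract (merging evidence from multiple mass assignment functions) follows Dempster's rule of combination. A minimal sketch is shown below; the two-class frame of discernment and the mass values are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Intersecting focal elements reinforce the shared hypothesis
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            # Mass that would fall on the empty set counts as conflict
            conflict += ma * mb
    # Normalize by the non-conflicting mass (Dempster's rule)
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical two-class frame, e.g. a medical screening outcome
frame = frozenset({"sick", "healthy"})
# Each MAF encodes the evidence contributed by one observed attribute;
# mass on the full frame represents ignorance (uncommitted belief)
m1 = {frozenset({"sick"}): 0.6, frame: 0.4}
m2 = {frozenset({"sick"}): 0.3, frozenset({"healthy"}): 0.5, frame: 0.2}

result = combine(m1, m2)
# result[frozenset({"sick"})] == 0.6; masses still sum to 1 after normalization
```

In the classifier the abstract describes, the masses such as 0.6 and 0.4 above would not be fixed by hand: they are the weights that gradient descent optimizes, while an expert may also supply MAFs directly.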
- Is Part Of:
- Expert systems with applications. Volume 148(2020)
- Journal:
- Expert systems with applications
- Issue:
- Volume 148(2020)
- Issue Display:
- Volume 148, Issue 2020 (2020)
- Year:
- 2020
- Volume:
- 148
- Issue:
- 2020
- Issue Sort Value:
- 2020-0148-2020-0000
- Page Start:
- Page End:
- Publication Date:
- 2020-06-15
- Subjects:
- Supervised learning -- Expert systems -- Gradient descent -- Dempster-Shafer theory -- Interpretability
Expert systems (Computer science) -- Periodicals
Systèmes experts (Informatique) -- Périodiques
Electronic journals
006.33
- Journal URLs:
- http://www.sciencedirect.com/science/journal/09574174
http://www.elsevier.com/journals
- DOI:
- 10.1016/j.eswa.2020.113262
- Languages:
- English
- ISSNs:
- 0957-4174
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - 3842.004220
British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 13378.xml