Thesis
DESIGNING DEEP METHODS TO IMPROVE MACHINE LEARNING INTERPRETABILITY
Washington State University
Master of Science (MS), Washington State University
01/2022
DOI: https://doi.org/10.7273/000004457
Handle: https://hdl.handle.net/2376/124553
Abstract
Machine learning interpretability has attracted much attention recently, especially with the growing popularity of deep learning methods and their applications. Deep learning algorithms train multi-layer neural networks for many purposes, including classifying and predicting complex data and targets. However, interpreting neural networks remains an open challenge. While deep networks offer strong predictive performance for valuable tasks, including machine translation, image classification, and human activity recognition, humans have difficulty understanding how a neural network makes decisions. As a result, practitioners lack trust in deep networks for critical applications. To address this need, researchers have started to investigate how to increase the interpretability of deep learning methods. While these methods have gained traction, they still face limitations in expressing and handling complex data types.

In this thesis, we introduce two methods to improve upon current interpretability approaches. In the first contribution, we design a language generation method called BraIN to improve current text explanations of image class instances. Our method produces human-like language to maximize the understandability of text explanations. We hypothesize that the algorithm provides improved text understandability and expressivity over existing methods. To evaluate the performance of the model, we compare human preferences for BraIN-generated captions against those for captions from baseline methods. We also compare results with actual human-generated captions using automated metrics. Results show the model produces more human-like captions than baseline methods.
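The automated comparison against human-written captions can be illustrated with a minimal sketch. The abstract does not name the specific metrics used, so BLEU, a common caption-quality metric, is assumed here purely for illustration; the helper name caption_bleu and the example captions are hypothetical.

```python
# Illustrative sketch only: the thesis does not specify its automated metrics,
# so BLEU (a standard caption metric) is assumed here for demonstration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def caption_bleu(generated: str, human_references: list[str]) -> float:
    """Score a generated caption against human-written reference captions."""
    hypothesis = generated.lower().split()
    references = [ref.lower().split() for ref in human_references]
    # Smoothing avoids zero scores when short captions miss higher-order n-grams.
    return sentence_bleu(references, hypothesis,
                         smoothing_function=SmoothingFunction().method1)

# Hypothetical usage: compare a model caption with human ground truth.
score = caption_bleu("a dog runs across the grass",
                     ["a dog is running on the grass",
                      "a brown dog runs through a field"])
print(f"BLEU: {score:.3f}")
```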
In the second contribution, we propose a novel Mimic algorithm that improves the interpretability of complex, multivariate time-series data by visualizing expressive shapelets. Mimic uniquely retains the predictive accuracy of the strongest classifiers while introducing classifier interpretability. Mimic mirrors the learning method of an existing multivariate time-series classifier while simultaneously producing a visual representation that enhances user understanding of the learned model. We hypothesize that Mimic can generate visual explanations of models for various classifiers without a dramatic reduction in predictive accuracy. Experiments on 26 time-series datasets support Mimic's ability to visually and accurately imitate a variety of time-series classifiers.
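To make the shapelet idea concrete, the following is a generic sketch of the standard minimum-distance matching primitive that shapelet-based explanation methods rely on. It is not the Mimic algorithm itself; the function name shapelet_distance and the synthetic data are illustrative assumptions.

```python
# Generic shapelet primitive, not the Mimic algorithm: a sketch of the
# standard minimum-distance matching underlying shapelet-based explanation.
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between a shapelet and any same-length
    window of the series; a small value means the shape occurs in the series."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return min(dists)

# Hypothetical usage: a bump-shaped shapelet matched against a noisy series.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
shapelet = np.sin(np.linspace(0, np.pi, 25))  # one half-cycle "bump"
print(f"min distance: {shapelet_distance(series, shapelet):.3f}")
```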
Details
- Title
- DESIGNING DEEP METHODS TO IMPROVE MACHINE LEARNING INTERPRETABILITY
- Creators
- Yuhui Wang
- Contributors
- Diane J. Cook (Advisor)
- Lawrence Holder (Committee Member)
- Hassan Ghasemzadeh (Committee Member)
- Awarding Institution
- Washington State University
- Academic Unit
- School of Electrical Engineering and Computer Science
- Theses and Dissertations
- Master of Science (MS), Washington State University
- Publisher
- Washington State University
- Number of pages
- 59
- Identifiers
- OCLC#: 1370910207; 99900883438801842
- Language
- English
- Resource Type
- Thesis