Investigating Interpretability and Robustness of Machine Learning Algorithms
- Neural networks have become some of the most powerful algorithms for learning from large data sets and are used extensively for classification. However, the deeper the network, the less interpretable the model becomes. Although many methods exist to explain the output of such networks, this lack of interpretability makes them black boxes. Prototype-based machine learning algorithms, on the other hand, are known to be both interpretable and robust. The aim of this thesis is therefore to interpret the functioning of neural networks by introducing a prototype layer into the network architecture. This prototype layer is trained alongside the neural network and helps us interpret the model. We present neural network architectures consisting of autoencoders and prototypes that perform activity recognition from heart rates extracted from ECG signals. The prototypes represent the different activity groups to which the heart rates belong and thereby aid interpretability.
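The core idea — a prototype layer that lives in the latent space of an autoencoder and drives classification by distance to learned prototypes — can be sketched as follows. This is a minimal numpy illustration of the general technique, not the thesis architecture: the dimensions, the single linear encoder/decoder, and all variable names are assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not taken from the thesis)
input_dim, latent_dim, n_prototypes, n_classes = 16, 4, 3, 3

# Encoder and decoder reduced to single linear maps for brevity
W_enc = rng.normal(0.0, 0.1, (latent_dim, input_dim))
W_dec = rng.normal(0.0, 0.1, (input_dim, latent_dim))

# Prototype vectors live in the latent space; in training they would be
# updated jointly with the network weights
prototypes = rng.normal(0.0, 0.1, (n_prototypes, latent_dim))
W_cls = rng.normal(0.0, 0.1, (n_classes, n_prototypes))

def forward(x):
    z = W_enc @ x                            # encode the input
    x_rec = W_dec @ z                        # reconstruction branch
    # Prototype layer: squared Euclidean distance to every prototype;
    # a nearby prototype explains the prediction in latent space
    d = np.sum((prototypes - z) ** 2, axis=1)
    logits = W_cls @ (-d)                    # closer prototype -> larger logit
    return x_rec, d, logits

x = rng.normal(size=input_dim)               # stand-in for a heart-rate feature vector
x_rec, d, logits = forward(x)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax over activity classes
```

Because each prototype is a point in the same latent space as the encoded inputs, it can be decoded and inspected, which is what makes this family of models interpretable: a prediction is explained by which learned prototype the input is closest to.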
Author: | Seetha Lakshmanan |
---|---|
Advisors: | Thomas Villmann, Felix Kemeth |
Document Type: | Master's Thesis |
Language: | English |
Year of Completion: | 2019 |
Granting Institution: | Hochschule Mittweida |
Release Date: | 2021/02/02 |
GND Keyword: | Maschinelles Lernen |
Institutes: | Angewandte Computer- und Biowissenschaften |
DDC classes: | 006.31 Machine learning |
Open Access: | Within the university |
Licence (German): | Urheberrechtlich geschützt |