
Investigating Interpretability and Robustness of Machine Learning Algorithms

  • Neural networks have become one of the most powerful algorithms for learning from large data sets and are used extensively for classification. However, the deeper the network model, the less interpretable it becomes. Although many methods exist to explain the outputs of such networks, their lack of intrinsic interpretability makes them black boxes. Prototype-based machine learning algorithms, on the other hand, are known to be interpretable and robust. The aim of this thesis is therefore to make the functioning of neural networks interpretable by introducing a prototype layer into the network architecture. This prototype layer is trained alongside the neural network and helps us interpret the model. We present neural network architectures consisting of autoencoders and prototypes that perform activity recognition from heart rates extracted from ECG signals. The prototypes represent the different activity groups to which the heart rates belong and thereby aid interpretability.
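
The architecture described in the abstract can be pictured with a short sketch. The following is a minimal, illustrative PyTorch-style implementation, not the code from the thesis: the layer sizes, the class name PrototypeAutoencoder, the distance-based classifier, and the loss weights lam_rec and lam_proto are all assumptions made for clarity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrototypeAutoencoder(nn.Module):
        """Autoencoder with a jointly trained prototype layer (illustrative)."""

        def __init__(self, input_dim=256, latent_dim=16, n_prototypes=5, n_classes=5):
            super().__init__()
            # Encoder/decoder compress heart-rate windows into a latent space.
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, input_dim))
            # Prototype layer: learnable points in latent space, intended to
            # represent the activity groups; trained alongside the autoencoder.
            self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
            # Classifying from prototype distances keeps the decision interpretable.
            self.classifier = nn.Linear(n_prototypes, n_classes)

        def forward(self, x):
            z = self.encoder(x)                       # latent code
            x_rec = self.decoder(z)                   # reconstruction of the input
            dists = torch.cdist(z, self.prototypes)   # distance to every prototype
            logits = self.classifier(-dists)          # nearer prototype -> higher score
            return x_rec, dists, logits

    def total_loss(x, y, model, lam_rec=1.0, lam_proto=0.1):
        """Classification + reconstruction + prototype-proximity terms (weights assumed)."""
        x_rec, dists, logits = model(x)
        ce = F.cross_entropy(logits, y)               # activity classification
        rec = F.mse_loss(x_rec, x)                    # autoencoder reconstruction
        # Pull each sample toward its nearest prototype and each prototype toward
        # the data, as in common prototype-network formulations.
        proto = dists.min(dim=1).values.mean() + dists.min(dim=0).values.mean()
        return ce + lam_rec * rec + lam_proto * proto

Because each prototype lives in the latent space, it can be passed through the decoder and inspected as a synthetic heart-rate pattern for its activity group, which is what lends such a model its interpretability.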

Download full text files

  • Master_Thesis_Final_Report.pdf (eng)

Metadata
Author: Seetha Lakshmanan
Advisor: Thomas Villmann, Felix Kemeth
Document Type: Master's Thesis
Language: English
Year of Completion: 2019
Granting Institution: Hochschule Mittweida
Release Date: 2021/02/02
GND Keyword: Maschinelles Lernen (machine learning)
Institutes: Angewandte Computer- und Biowissenschaften
DDC classes: 006.31 Machine learning
Open Access: Within the university
Licence (German): Urheberrechtlich geschützt (protected by copyright)