OPUS



Comparison of Variational Autoencoder with Autoencoder à la Principe

  • In the past few years, generative models have become an interesting topic in the field of Machine Learning (ML). The Variational Autoencoder (VAE) is one of the popular frameworks for generative models, based on the work of D. P. Kingma and M. Welling [6] [7]. As an alternative to the VAE, the authors in [12] proposed and implemented an Information Theoretic Learning (ITL) based autoencoder. The VAE and the ITL autoencoder combine neural networks with probabilistic graphical models (PGM) [7]. In modern statistics it is often difficult to compute approximations of probability densities. In this thesis we make use of Variational Inference (VI), a technique from machine learning that approximates distributions through optimization. The closeness between the distributions is measured by information-theoretic divergence measures such as the Kullback-Leibler, Euclidean, and Cauchy-Schwarz divergences. We study theoretical and experimental results of two different frameworks of generative models which generate images of MNIST handwritten characters [8] and of the Yale Face Database B [3]. The results obtained show that the VAE and the ITL autoencoder are capable of capturing the underlying structure of the example datasets.
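The abstract mentions the Kullback-Leibler divergence as one of the measures of closeness between distributions; in the standard VAE objective this divergence appears as the regularization term between the encoder's approximate posterior and a standard normal prior. A minimal sketch of that closed-form term (assuming, as is common, a diagonal Gaussian encoder output — function and parameter names here are illustrative, not taken from the thesis):

```python
import numpy as np

def kl_diag_gaussian_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    This is the KL regularizer in the usual VAE evidence lower bound,
    summed over the latent dimensions.
    """
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    # KL per dimension: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# When the encoder output matches the prior (mu = 0, log_var = 0),
# the divergence vanishes; any deviation makes it strictly positive.
print(kl_diag_gaussian_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

The ITL autoencoder of [12] replaces this term with sample-based divergences such as the Cauchy-Schwarz divergence, which avoids requiring a closed-form posterior.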

Metadata
Author: Santhosh Naredla
Advisor: Thomas Villmann, Marika Kaden
Document Type: Master's Thesis
Language: English
Year of Completion: 2022
Granting Institution: Hochschule Mittweida
Release Date: 2022/11/21
GND Keyword: Maschinelles Lernen
Page Number: 53
Institutes: Angewandte Computer- und Biowissenschaften
DDC classes: 006.31 Maschinelles Lernen
Open Access: Freely accessible
Licence (German): Urheberrechtlich geschützt (protected by copyright)