
Recurrent Learning Vector Quantization

  • Learning Vector Quantization (LVQ) methods have been popular choices of classification models ever since their introduction by T. Kohonen in the 1990s. Today, LVQ is combined with deep learning methods to provide powerful yet interpretable machine-learning solutions to some of the most challenging computational problems. However, techniques for modelling recurrent relationships in the data with prototype methods remain quite unsophisticated. In particular, we are not aware of any modification of LVQ that allows the input data to have different lengths. Needless to say, such data is abundant in today's digital world and demands new processing techniques to extract useful information. In this paper, we propose the use of a Siamese architecture that not only models recurrent relationships within the prototypes but also handles prototypes of varying dimensions simultaneously (see the sketch below).

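The abstract gives no implementation details, but the core idea can be illustrated as a minimal PyTorch sketch: one shared (Siamese) recurrent encoder embeds both variable-length input sequences and learnable prototype sequences into a fixed-dimensional space, where nearest-prototype classification takes place. All names here (SiameseRecurrentLVQ, proto_len, the choice of a GRU encoder) are illustrative assumptions, not the authors' code, and the GLVQ-style training loss is omitted.

import torch
import torch.nn as nn

class SiameseRecurrentLVQ(nn.Module):
    # Sketch: a shared GRU encoder maps sequences (inputs and prototypes
    # alike) into a fixed-size embedding space; classification is by the
    # nearest prototype in that space. Prototype lengths are fixed to
    # proto_len here for brevity, although the paper's point is that they
    # may differ.
    def __init__(self, input_dim, hidden_dim, num_prototypes, proto_len, num_classes):
        super().__init__()
        # One encoder, applied to data and prototypes (Siamese weight sharing).
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Learnable prototype sequences.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_len, input_dim))
        # Assign prototypes to classes round-robin (an arbitrary choice).
        self.register_buffer("proto_labels", torch.arange(num_prototypes) % num_classes)

    def embed(self, sequences):
        # Use the final hidden state as the fixed-dimensional embedding.
        _, h_n = self.encoder(sequences)
        return h_n.squeeze(0)

    def forward(self, x):
        z = self.embed(x)                # (batch, hidden_dim)
        p = self.embed(self.prototypes)  # (num_prototypes, hidden_dim)
        # Squared Euclidean distances between inputs and prototypes.
        return torch.cdist(z, p) ** 2

# Usage: the predicted class is the label of the nearest prototype.
model = SiameseRecurrentLVQ(input_dim=3, hidden_dim=16,
                            num_prototypes=6, proto_len=10, num_classes=3)
x = torch.randn(4, 25, 3)                        # 4 sequences of length 25
pred = model.proto_labels[model(x).argmin(dim=1)]

Because the encoder, not the prototype, fixes the embedding dimension, inputs and prototypes of different lengths can be compared in the same space; batches of genuinely mixed lengths would additionally need padding with nn.utils.rnn.pack_padded_sequence.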
Metadata
Author: Jensun Ravichandran, Thomas Villmann
DOI: https://doi.org/10.48446/opus-12297
ISSN: 1437-7624
Parent Title (German): 26. Interdisziplinäre Wissenschaftliche Konferenz Mittweida
Publisher: Hochschule Mittweida
Place of publication: Mittweida
Document Type: Conference Proceeding
Language: English
Year of Completion: 2021
Publishing Institution: Hochschule Mittweida
Contributing Corporation: Saxon Institute for Computational Intelligence and Machine Learning
Release Date: 2021/05/18
Tag: Interpretable Models; Prototype-based models; Recurrent Neural Networks
Issue: 002
Page Number: 2
First Page: 147
Last Page: 148
Open Access: Freely accessible
Licence (German): Protected by copyright (Urheberrechtlich geschützt)