Prototype-based vector quantization is one of the key methods in data processing tasks such as data compression and interpretable classification learning. Prototype vectors serve as references for data and data classes, where the data are given as vectors representing objects by numerical features. Famous approaches are the Neural Gas Vector Quantizer (NGVQ) for data compression and Learning Vector Quantizers (LVQ) for classification tasks. Frequently, training of these models is time-consuming. In this contribution we discuss modifications of these algorithms that adopt ideas from quantum computing. The aim is at least twofold: first, quantum computing promises an enormous speedup by making use of quantum mechanical systems and their inherent parallelization. Second, considering data and prototype vectors in terms of quantum systems, implicit data processing is performed, which frequently results in better data separation. We highlight the respective ideas and the difficulties that arise when equipping vector quantizers with quantum computing features.
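To make the classical baseline concrete, the following is a minimal sketch of one LVQ1 update step (the classical prototype attraction/repulsion rule; the quantum-inspired variants discussed in the abstract are not reproduced here, and the learning rate and toy data are illustrative assumptions):

```python
def squared_distance(x, w):
    # Squared Euclidean distance between a data vector and a prototype.
    return sum((xi - wi) ** 2 for xi, wi in zip(x, w))

def lvq1_step(x, label, prototypes, proto_labels, lr=0.1):
    """One LVQ1 update: attract the winning prototype if its class
    matches the sample's label, otherwise repel it."""
    winner = min(range(len(prototypes)),
                 key=lambda k: squared_distance(x, prototypes[k]))
    sign = 1.0 if proto_labels[winner] == label else -1.0
    prototypes[winner] = [wi + sign * lr * (xi - wi)
                          for xi, wi in zip(x, prototypes[winner])]
    return winner

# Toy usage: one prototype per class in 2-D feature space.
protos = [[0.0, 0.0], [1.0, 1.0]]
proto_labels = [0, 1]
lvq1_step([0.2, 0.1], 0, protos, proto_labels)  # attracts prototype 0
```

Training repeats this step over the dataset; the time-consuming nature of this loop over all samples and prototypes is exactly the cost that the quantum-inspired modifications aim to reduce.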
This article aims to explain mathematically why the so-called double descent observed by Belkin et al. (Reconciling modern machine-learning practice and the classical bias-variance trade-off, PNAS 116(32) (2019), pp. 15849-15854) occurs on the way from the classical approximation regime of machine learning to the modern interpolation regime. We argue that this phenomenon may be explained by a decomposition of the mean squared error plus complexity into bias, variance, and an unavoidable irreducible error inherent to the problem. Further, in the case of normally distributed output errors, we apply this decomposition to explain why LASSO provides reliable predictors that avoid overfitting.
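The variance-reducing effect of LASSO can be illustrated in the simplest setting: under an orthonormal design, the LASSO solution is coordinate-wise soft-thresholding of the least-squares estimate. The threshold value and coefficients below are illustrative assumptions, not taken from the article:

```python
def soft_threshold(z, lam):
    """Proximal operator of lam * |.|: the closed-form LASSO solution
    per coordinate under an orthonormal design."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Shrinkage sets small (noise-dominated) coefficients exactly to zero,
# trading a little bias for a large variance reduction.
ols_estimates = [2.5, -0.3, 0.1, -1.7]
lasso_estimates = [soft_threshold(z, lam=0.5) for z in ols_estimates]
# approximately [2.0, 0.0, 0.0, -1.2]
```

This is the mechanism behind the claim in the abstract: the added bias from shrinkage is outweighed by the variance reduction, so the total risk (bias squared plus variance plus irreducible error) decreases and overfitting is avoided.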
Driven by the increasing performance of processors and data transmission technologies, the development and application of artificial intelligence, exemplified by machine learning (ML) and the method of deep learning, has gained considerable importance in recent years. This raises the question of how these technologies can be applied in another promising field of development, for example in the development of modern mobility concepts and highly automated/autonomous vehicles. Potential applications of AI in the development process of a highly automated vehicle are presented, and the decisive challenges are discussed as well. In addition, the differences between various approaches are elaborated. Both boundary conditions and challenges are illustrated with the help of a simple example from everyday traffic.
We use machine learning for the selection and classification of single-molecule trajectories to replace commonly used user-dependent sorting algorithms. Measured fluorescence time series of labelled single molecules need to be sorted into 'good' and 'bad' molecules before further kinetic and thermodynamic analysis. Currently, processing, sorting, and analysis of the data are mainly done with the help of laboratory-specific programs. Although there are freely available programs for processing smFRET data, they either do not offer 'molecular sorting' or implement it purely empirically. Only recently have new approaches emerged that solve this problem by means of machine learning. Here, we describe a sound terminology for the molecular sorting of smFRET data and present an efficient workflow for manual annotation followed by training of the ML algorithm. Descriptive statistics of our generated dataset are provided and will serve as the basis for supervised ML-based molecular sorting algorithms yet to be developed.
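To illustrate the kind of supervised sorting such a workflow targets, a minimal sketch classifying traces as 'good' or 'bad' by nearest-centroid comparison of simple summary features (mean intensity and variance). The features, centroids, and toy traces are assumptions for illustration only, not the authors' algorithm:

```python
def trace_features(trace):
    """Summary statistics of a fluorescence time series."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace) / n
    return (mean, var)

def nearest_centroid_label(trace, centroids):
    """Assign the label of the closest class centroid in feature space."""
    f = trace_features(trace)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lbl])))

# Hypothetical centroids, as might be learned from manually annotated traces:
centroids = {"good": (100.0, 25.0), "bad": (20.0, 400.0)}
steady_trace = [98, 101, 99, 102, 100]  # bright and stable
noisy_trace = [5, 60, 10, 45, 0]        # dim and erratic
```

In the workflow described above, the manually annotated dataset would supply the labelled examples from which such class representations are learned, replacing hand-tuned, user-dependent sorting criteria.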