Refine
Document Type
Year of publication
- 2021 (4)
Language
- English (4)
Keywords
- Gait Analysis (1)
- Interpretable Models (1)
- Artificial Intelligence (1)
- Machine Learning (1)
- Motion Capturing (1)
- Prototype-based Models (1)
- Quantum Computer (1)
- Recurrent Neural Networks (1)
- Vector Quantization (1)
- interpretable models (1)
Sensor fusion is a crucial topic in many industrial applications. One of the challenging problems is to find an appropriate sensor combination for the dedicated application, or to weight the sensors' information adequately. In our contribution, we apply the sensor fusion concept to distance-based learning for object classification purposes. The developed machine learning model has a bi-functional architecture: on the one hand, it learns to discriminate the data with respect to their classes; on the other hand, it learns the importance of the single signals, i.e., the contribution of each sensor to the decision. We show that the resulting bi-functional model is interpretable, sparse, and simple to integrate into many standard artificial neural networks.
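The bi-functional idea (class discrimination plus per-signal weighting) is closely related to relevance learning in prototype-based classifiers. The following is a minimal, illustrative sketch, not the paper's implementation: an LVQ1-style prototype update combined with a learned per-feature relevance vector, on toy data where one "sensor" channel is informative and the other is noise. All names and hyperparameters are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: feature 0 is the informative "sensor",
# feature 1 is pure noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

# One prototype per class plus a relevance vector (normalized to sum 1),
# which plays the role of the learned sensor weighting.
protos = np.array([[-1.0, 0.0], [1.0, 0.0]])
labels = np.array([0, 1])
rel = np.array([0.5, 0.5])

def dist(x, w, r):
    # Relevance-weighted squared Euclidean distance.
    return np.sum(r * (x - w) ** 2)

lr_w, lr_r = 0.05, 0.01
for epoch in range(30):
    for x, c in zip(X, y):
        d = np.array([dist(x, w, rel) for w in protos])
        j = np.argmin(d)
        # LVQ1 prototype update: attract the winner if correct, else repel.
        sign = 1.0 if labels[j] == c else -1.0
        protos[j] += sign * lr_w * rel * (x - protos[j])
        # Relevance update (GRLVQ idea): grow the weight of features on
        # which the correct prototype is closer than the closest wrong one.
        j_pos = np.argmin(np.where(labels == c, d, np.inf))
        j_neg = np.argmin(np.where(labels != c, d, np.inf))
        rel -= lr_r * ((x - protos[j_pos]) ** 2 - (x - protos[j_neg]) ** 2)
        rel = np.clip(rel, 1e-6, None)
        rel /= rel.sum()

# The informative channel should end up carrying most of the relevance,
# which is exactly what makes the model interpretable and sparse.
print(rel)
```

After training, inspecting `rel` directly answers the fusion question "which sensor mattered?", without any post-hoc explanation machinery.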
Prototype-based vector quantization is one of the key methods in data processing, used for tasks such as data compression and interpretable classification learning. Prototype vectors serve as references for data and data classes; the data are given as vectors representing objects by numerical features. Well-known approaches are the Neural Gas Vector Quantizer (NGVQ) for data compression and Learning Vector Quantizers (LVQ) for classification tasks. Training these models is frequently time consuming. In this contribution, we discuss modifications of these algorithms that adopt ideas from quantum computing. The aim is at least twofold: First, quantum computing promises an enormous speedup by making use of quantum mechanical systems and their inherent parallelization. Second, when data and prototype vectors are considered in terms of quantum systems, an implicit data transformation is performed, which frequently results in better data separation. We highlight the respective ideas as well as the difficulties that arise when equipping vector quantizers with quantum computing features.
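One concrete way to read "data and prototypes as quantum systems", sketched here as an assumption rather than the paper's exact construction: L2-normalize each vector so it can be interpreted as a pure-state amplitude vector, and use the fidelity |⟨x|w⟩|² as the similarity measure instead of the Euclidean distance. The normalization itself is the implicit transformation mentioned above.

```python
import numpy as np

def to_state(v):
    # Normalize so the vector reads as (real) quantum state amplitudes.
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def fidelity(x, w):
    # Squared overlap |<x|w>|^2 of two real-amplitude states; for real
    # vectors this equals cos^2 of the angle between them.
    return float(np.dot(to_state(x), to_state(w)) ** 2)

x = [3.0, 4.0]
w_same = [6.0, 8.0]   # same direction as x -> fidelity 1
w_orth = [-4.0, 3.0]  # orthogonal to x   -> fidelity 0

print(fidelity(x, w_same))  # 1.0 (up to rounding)
print(fidelity(x, w_orth))  # 0.0
```

Note how scale information is discarded by the state encoding: only directions are compared, which is one source of both the improved separation and the difficulties alluded to above.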
Learning Vector Quantization (LVQ) methods have been popular classification models ever since their introduction by T. Kohonen in the 1990s. These days, LVQ is combined with deep learning methods to provide powerful yet interpretable machine-learning solutions to some of the most challenging computational problems.
However, techniques for modeling recurrent relationships in the data with prototype methods remain quite unsophisticated. In particular, we are not aware of any modification of LVQ that allows the input data to have different lengths. Such data are abundant in today's digital world and demand new processing techniques to extract useful information. In this paper, we propose the use of a Siamese architecture that not only models recurrent relationships within the prototypes but also handles prototypes of various dimensions simultaneously.
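The core of the Siamese idea can be sketched in a few lines (this is an illustrative assumption about the mechanism, not the paper's model, and the recurrent weights below are random and untrained): one shared recurrent encoder maps sequences of arbitrary length to a fixed-size embedding, both inputs and sequence-valued prototypes pass through the same encoder, and classification is nearest-prototype in embedding space.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 3, 8
W_x = rng.normal(scale=0.5, size=(d_h, d_in))  # input-to-hidden weights
W_h = rng.normal(scale=0.5, size=(d_h, d_h))   # hidden-to-hidden weights

def encode(seq):
    """Plain tanh RNN; `seq` has shape (T, d_in) for ANY length T."""
    h = np.zeros(d_h)
    for x_t in seq:
        h = np.tanh(W_x @ x_t + W_h @ h)
    return h

# Prototypes may themselves be sequences of different lengths.
protos = [rng.normal(size=(5, d_in)), rng.normal(size=(9, d_in))]
proto_emb = np.stack([encode(p) for p in protos])

def classify(seq):
    # Nearest prototype in the shared embedding space.
    e = encode(seq)
    return int(np.argmin(np.sum((proto_emb - e) ** 2, axis=1)))

print(classify(rng.normal(size=(4, d_in))))   # accepts length 4 ...
print(classify(rng.normal(size=(12, d_in))))  # ... and length 12 alike
```

Because the encoder is shared, a single distance computation compares sequences and prototypes of mismatched lengths, which is exactly the capability plain LVQ lacks.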
Marker-based systems can record human movements digitally and in detail. Using the digital biomechanical human model Dynamicus, which was developed by the Institut für Mechatronik, joint angles and their velocities can be modeled so accurately that the results can be used to improve motion analysis in competitive sports or for the ergonomic evaluation of motion sequences. In this paper, we use interpretable machine learning techniques to analyze gait. The focus is on the classification of foot touchdown and drop-off during normal walking. The motion data for training the model are labeled using force plates. We analyze how our machine learning models can be applied directly to new motion data recorded in a scenario different from the initial training, more precisely on a treadmill. We use the properties of the interpretable model to detect drift and to transfer our model if necessary.
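One property of prototype models that makes such drift detection natural (sketched here as an assumed mechanism, not the paper's exact procedure): every sample has an explicit distance to its nearest prototype, so new recordings can be flagged when they fall unusually far from all prototypes compared to the training data.

```python
import numpy as np

rng = np.random.default_rng(2)
protos = np.array([[0.0, 0.0], [3.0, 3.0]])  # trained prototypes (toy)

def nearest_dist(X):
    # Distance of each sample to its closest prototype.
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return d.min(axis=1)

# Calibrate a threshold on training-like data: 99% of in-distribution
# samples lie within this distance of some prototype.
X_train = rng.normal(size=(500, 2)) + protos[rng.integers(0, 2, 500)]
threshold = np.quantile(nearest_dist(X_train), 0.99)

# New scenario (e.g. a different recording setup): strongly shifted data.
X_new = rng.normal(size=(200, 2)) + np.array([8.0, -5.0])
drift_rate = float(np.mean(nearest_dist(X_new) > threshold))
print(drift_rate)  # close to 1.0 for strongly shifted data
```

A high `drift_rate` signals that the model should be transferred (e.g. prototypes re-adapted) before trusting its predictions on the new scenario.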