The number of Internet of Things (IoT) devices is increasing rapidly. The Trustless Incentivized Remote Node Network, IN3 (Incubed) for short, enables trustworthy and fast access to a blockchain for a large number of low-performance IoT devices. Although IN3 currently only supports the verification of Ethereum data, it is not limited to one blockchain thanks to its modular design. This thesis describes the fundamentals, the concept and the implementation of Bitcoin verification in IN3.
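A core building block of trustless Bitcoin verification on constrained devices is the SPV-style Merkle proof, which lets a client check that a transaction is committed to by a block header without downloading the full block. As a rough illustration only (not the thesis's actual implementation), a minimal check, assuming the proof is given as a path of `(sibling_hash, sibling_is_right)` pairs, could look like:

```python
import hashlib

def dhash(data: bytes) -> bytes:
    """Bitcoin's double SHA-256 hash."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(txid: bytes, proof: list, merkle_root: bytes) -> bool:
    """Recompute the Merkle root from a txid and a proof path.

    `proof` is a list of (sibling_hash, sibling_is_right) pairs, where
    sibling_is_right indicates the sibling sits to the right of the
    running hash at that tree level.
    """
    h = txid
    for sibling, is_right in proof:
        h = dhash(h + sibling) if is_right else dhash(sibling + h)
    return h == merkle_root
```

A client holding only block headers can thus verify transaction inclusion with a logarithmic number of hashes, which is what makes the approach attractive for low-performance IoT hardware.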
In this thesis, two novel methods for removing undesired background illumination are developed: a wavelet-analysis-based approach and an enhancement of a deep learning method. These methods were compared with conventional methods using real confocal microscopy images and synthetically generated microscopy images. The synthetic images were created with a generator introduced in this thesis.
Machine learning models for time series have always been a topic of special interest due to their unique data structure. Recently, the introduction of attention improved the capabilities of recurrent neural networks and transformers on learning tasks such as machine translation. However, these models are usually subsymbolic architectures, making their inner workings hard to interpret without comprehensive tools. In contrast, interpretable models such as learning vector quantization are more transparent, as their decision process can be interpreted directly. This thesis attempts to merge attention as a machine learning function with learning vector quantization to better handle time series data. A design for such a model is proposed and tested on a dataset used in connection with attention-based transformers. Although the proposed model did not yield the expected results, this work outlines improvements for further research on this approach.
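For readers unfamiliar with learning vector quantization, a minimal sketch of the nearest-prototype decision rule and the classical LVQ1 update may help; this is textbook LVQ, not the attention-augmented model proposed in the thesis:

```python
import math

def lvq_classify(x, prototypes, labels):
    """Assign x the label of its nearest prototype (Euclidean distance)."""
    winner = min(range(len(prototypes)), key=lambda k: math.dist(x, prototypes[k]))
    return labels[winner]

def lvq1_update(x, y, prototypes, labels, lr=0.1):
    """One LVQ1 step: attract the winning prototype if its label matches
    the true label y, otherwise repel it."""
    winner = min(range(len(prototypes)), key=lambda k: math.dist(x, prototypes[k]))
    sign = 1.0 if labels[winner] == y else -1.0
    prototypes[winner] = [p + sign * lr * (xi - p)
                          for p, xi in zip(prototypes[winner], x)]
    return prototypes
```

The transparency mentioned above stems from the fact that each prototype is a point in the input space and can be inspected like a data example.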
This work emphasises the synergy between anthropological research on human skeletal remains and suitable documentation strategies. It highlights the significance of data recording and the use of digital databases in various aspects of anthropological work on bones, including scientific standards, skeletal collections, analysis of research results, ethical considerations, and curation. It provides a comprehensive examination of these topics to demonstrate the value of investing time and resources in this field, countering the existing lack of funding that has led to significant deficiencies. Additionally, the paper outlines the requirements and challenges associated with standard data protocoling and suggests that digital data management frameworks and technologies such as ontologies and semantic web technologies for anthropological information should be a central focus in developing solutions.
In this paper, we conduct experiments to optimize the learning rates for the Generalized Learning Vector Quantization (GLVQ) model. Our approach leverages insights from cognitive science rooted in the profound intricacies of human thinking. Recognizing that human-like thinking has propelled humankind to its current state, we explore the applicability of cognitive science principles in enhancing machine learning. Prior research has demonstrated promising results when applying learning rate methods inspired by cognitive science to Learning Vector Quantization (LVQ) models. In this study, we extend this approach to GLVQ models. Specifically, we examine five distinct cognitive science-inspired GLVQ variants: Conditional Probability (CP), Dual Factor Heuristic (DFH), Middle Symmetry (MS), Loose Symmetry (LS), and Loose Symmetry with Rarity (LSR). Our experiments involve a comprehensive analysis of the performance of these cognitive science-derived learning rate techniques across various datasets, aiming to identify optimal settings and variants of cognitive science GLVQ model training. Through this research, we seek to unlock new avenues for enhancing the learning process in machine learning models by drawing inspiration from the rich complexities of human cognition.
Keywords: machine learning, GLVQ, cognitive science, cognitive bias, learning rate optimization, optimizers, human-like learning, Conditional Probability (CP), Dual Factor Heuristic (DFH), Middle Symmetry (MS), Loose Symmetry (LS), Loose Symmetry with Rarity (LSR).
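To make the GLVQ setting concrete, the sketch below computes the standard GLVQ classifier function mu(x) = (d_plus - d_minus) / (d_plus + d_minus), whose sign encodes correctness of the classification; the cognitive-science-inspired learning rate schedules studied in the paper are beyond the scope of this illustration:

```python
import math

def glvq_mu(x, prototypes, labels, y):
    """GLVQ classifier function mu(x) = (d_plus - d_minus) / (d_plus + d_minus).

    d_plus is the distance to the closest prototype carrying the correct
    label y, d_minus the distance to the closest wrong-label prototype.
    mu lies in (-1, 1); negative values mean a correct classification,
    and the training objective drives mu downward.
    """
    d_plus = min(math.dist(x, p) for p, l in zip(prototypes, labels) if l == y)
    d_minus = min(math.dist(x, p) for p, l in zip(prototypes, labels) if l != y)
    return (d_plus - d_minus) / (d_plus + d_minus)
```

During training, a learning rate scales the gradient of a monotone function of mu with respect to the winning prototypes, which is where the CP, DFH, MS, LS and LSR variants intervene.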
Adversarial robustness of a nearest prototype classifier assures safe deployment in sensitive fields of use. Much research has been conducted on artificial neural networks regarding their robustness against adversarial attacks, whereas nearest prototype classifiers have not seen similar success. This thesis presents the learning dynamics and numerical stability of the Crammer normalization and the Hein normalization for adversarial robustness of nearest prototype classifiers. Results of the conducted experiments are recorded and analyzed to ascertain the bounds given by Saralajew et al. and Hein et al. for adversarial robustness of nearest prototype classifiers.
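The kind of bound at issue can be illustrated with the hypothesis margin of a nearest prototype classifier: for a Euclidean-distance classifier, no perturbation with norm below half the gap between the closest wrong-label and closest correct-label prototype distances can flip the decision. The sketch below is a minimal illustration in the spirit of Saralajew et al., not the thesis's actual code:

```python
import math

def certified_margin(x, prototypes, labels, y):
    """Hypothesis-margin lower bound on adversarial robustness of a
    Euclidean nearest prototype classifier: any perturbation delta with
    ||delta|| < (d_minus - d_plus) / 2 cannot change the prediction,
    where d_plus (d_minus) is the distance to the nearest prototype with
    the correct (a wrong) label. Non-positive values mean the point is
    already misclassified."""
    d_plus = min(math.dist(x, p) for p, l in zip(prototypes, labels) if l == y)
    d_minus = min(math.dist(x, p) for p, l in zip(prototypes, labels) if l != y)
    return (d_minus - d_plus) / 2.0
```

Such margins are computable in closed form per sample, which is why nearest prototype classifiers admit certification arguments that are harder to obtain for generic neural networks.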
With globalization and the increasing diversity of the workforce, organizations are faced with the challenge of effectively managing multicultural teams. Understanding how employee engagement and job satisfaction are influenced by multicultural factors is crucial for organizations to create inclusive work environments that foster productivity and wellbeing. This literature review aims to explore the relationship between employee engagement, job satisfaction, and multicultural workplaces. It examines relevant studies and provides insights into the key factors, challenges, and strategies for enhancing employee engagement and job satisfaction in multicultural workplaces. The findings will shed light on the author's research area concerning the factors influencing employee engagement and job satisfaction in multicultural work environments and contribute to a deeper understanding of cross-cultural dynamics in the workplace.
Traditional user management on the Internet has historically required individuals to give up control over their identities. In contrast, decentralized solutions promise to empower users and foster decentralized interactions. Over the last few years, the development of decentralized accounts and tokens has significantly increased, aiming at broader user adoption and shared social economies.
This thesis delves into smart contract standards and social infrastructure for Ethereum-based blockchains to enable identity-based data exchange between abstracted blockchain accounts. In this regard, the standardization landscapes of account and social token developments were analyzed in-depth to form guidelines that allow users to retain complete control over their data and grant access selectively.
Based on the evaluations, a pioneering Solidity standard is presented, natively integrating consensual restrictive on-chain assets for abstracted blockchain accounts. Further, the architecture of a decentralized messaging service has been defined to outline how new token and account concepts can be intertwined with efficient and minimal data-sharing principles to ensure security and privacy, while merging traditional server environments with global ledgers.
Laser engraving requires precise ablation per pulse through all layers of a depth map. Scaling this process to areas of a square meter and more within an acceptable time requires high-power ultra-short pulsed lasers for precision and a high scan speed for beam distribution. Scan speeds in the range of several 100 m/s can be achieved with a polygon scanner. In this work, a polygon scanner was utilized within a roll-engraving machine to treat an 800 x 220 mm² (L x Dia) roll with a surface of 0.55 m² in a laser engraving process. The machine setup, the processing strategy and the data handling were investigated and result in an efficient large-area process. Pre-tests were performed with a multi-MHz-frequency nanosecond-pulsed laser to investigate the processing strategy. A method to overcome the duty cycle of the polygon scanner was found in the synchronization of two polygons, enabling the use of a single laser source in a time-sharing concept. The throughput and the utilization of the laser source can thus be increased by a factor of two.
In this work, Direct Laser Interference Patterning (DLIP) is used in conjunction with the polygon scanner technique to fabricate textured polystyrene and nickel surfaces through ultra-fast beam deflection. For polystyrene, the impact of scanning speed and repetition rate on the structure formation is studied, obtaining periodic features with a spatial period of 21 μm and reaching structure heights up to 23 μm. By applying scanning speeds of up to 350 m/s, a structuring throughput of 1.1 m²/min has been reached. Additionally, the optical configuration was used to texture nickel electrode foils with line-like patterns with a spatial period of 25 μm and a maximum structure depth of 15 μm. Subsequently, the structured nickel electrodes were assessed in terms of their performance for the Hydrogen Evolution Reaction (HER). The findings revealed a significant improvement in HER efficiency, with a 22% increase compared to the untreated reference electrode.
In laser drilling, one challenge is to achieve high drilling quality at high aspect ratios. Ultra-short pulsed lasers use different amplifier concepts such as thin disks, fibers and rods. The slab technology is employed here because of its flexibility and characteristics: it combines the advantages of the other concepts and delivers high pulse energies at high repetition rates. Materials with a thickness > 1.5 mm demand specialized optics that handle the high power and pulse energies, with adapted processing strategies integrated into a machine setup. In this contribution, we focus on all the components and strategies necessary for drilling high-precision holes with aspect ratios up to 1:40.
For monitoring laser beam welding processes and detecting or actively avoiding process defects, acoustic-based measurements can be used in addition to optical measurement methods such as pyrometry. To reliably detect process events, it is essential to position the respective sensors in such a way that specific signal characteristics are reproducible and significant. However, there are only a few investigations regarding the positioning of airborne sound sensors, especially for the detection of process emissions in the ultrasonic range. Therefore, in this research, the influence of the process distance as well as the angle and orientation of the microphone to a laser beam deep penetration welding process is investigated with respect to the detectability of process emissions in different frequency bands. It is shown that for a wide ultrasonic range a flat sensor angle with respect to the sample surface leads to an increased signal strength of the acoustic emissions compared to steep angles.
We report on our recent progress in creating a new type of compact laser that uses thulium-based fiber CPA technology to emit a central wavelength of 2 μm. This laser can produce pulse energies of >100 μJ and an average power of >15 W. It is designed to be long-lasting and is built for industrial use, making it a great fit for integration into laser machines used for materials processing. These laser parameters are ideal for working with semiconductors like silicon, allowing for tasks such as micro-welding, cutting of filaments, dicing, bonding and more.
Laser welding of hidden T-joints, connecting the web-sheet through the face-sheet of the joint, can provide advantages like increased lightweight potential in manufacturing sandwich structures with thin-walled cores. However, maintaining the correct positioning of the beam relative to the joint is challenging. A method to reduce the positioning effort is optical coherence tomography (OCT), which interferometrically measures the reflection distance inside the keyhole during laser deep penetration welding. In this study, new approaches for targeted data processing of the OCT signal to automatically detect misalignments are presented. It is shown that considering multiple components of the interference pattern and the respective signal intensities improves the detection accuracy of misalignments.
Analysis of the Forensic Preparation of Biometric Facial Features for Digital User Authentication
(2023)
Biometrics has become a popular method of securing access to data as it eliminates the need for users to remember a password. Although attacks exploiting the vulnerabilities of biometric systems have increased with their usage, these vulnerabilities could also be helpful during criminal casework.
This thesis aims to evaluate approaches to bypass electronic devices with forged faces to access data for law enforcement. Here, obtaining the necessary data in a timely manner is critical. However, unlocking the devices with a password can take several years with a brute force attack. Consequently, biometrics could be a quicker alternative for unlocking.
Various approaches were examined to bypass current face recognition technologies. The first approaches included printing the user's face on regular paper and aimed to unlock devices performing face recognition in the visible spectrum. Further approaches consisted of printing the user's infrared image and creating three-dimensional masks to bypass devices performing face recognition in the near-infrared. Additionally, the underlying software responsible for face recognition was reverse-engineered to get information about its operation mode.
The experiments demonstrate that forged faces can partly bypass face recognition and obtain secured data. Devices performing face recognition in the visible spectrum can be unlocked with a printed image of the user's face. Regarding devices with advanced near-infrared face recognition, only one could be bypassed with a three-dimensional face mask. In addition, its underlying software provided evidence about the demands of face recognition. Other devices under attack remained locked, and their software provided no clues.
The Tutte polynomial is an important tool in graph theory. This paper provides an introduction to the two-variable polynomial using the spanning subgraph and rank-generating polynomials. The equivalence of the definitions is shown in detail, as well as evaluations and derivatives. Properties and examples of the polynomial, i.e. universality, coefficient relations, closed forms and recurrence relations, are discussed. Moreover, the thesis covers the connection between the dichromate and other significant polynomials.
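The recurrence relations mentioned above rest on the deletion-contraction principle: T(G) = x·T(G/e) if e is a bridge, y·T(G−e) if e is a loop, and T(G−e) + T(G/e) otherwise. A naive exponential-time sketch, assuming the multigraph is given as an edge list, could look like:

```python
from collections import defaultdict
def contract(edges, u, v):
    """Contract v into u: relabel v as u in every remaining edge."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def is_bridge(edges, u, v):
    """Edge (u, v) is a bridge iff removing one copy disconnects u from v."""
    rest = list(edges)
    rest.remove((u, v))
    seen, stack = {u}, [u]
    while stack:  # depth-first search from u over the remaining edges
        a = stack.pop()
        for x, y in rest:
            for b in ((y,) if x == a else (x,) if y == a else ()):
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
    return v not in seen

def tutte(edges):
    """Tutte polynomial T(G; x, y) by deletion-contraction.
    Returns a dict mapping (i, j) -> coefficient of x^i y^j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                      # loop: factor of y
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if is_bridge(edges, u, v):      # bridge: factor of x
        return {(i + 1, j): c for (i, j), c in tutte(contract(rest, u, v)).items()}
    out = defaultdict(int)
    for m, c in tutte(rest).items():                  # delete e
        out[m] += c
    for m, c in tutte(contract(rest, u, v)).items():  # contract e
        out[m] += c
    return dict(out)
```

For the triangle K3 this yields T = x² + x + y, and evaluating at (1, 1) recovers the number of spanning trees, here 3.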
Analysis of Continuous Learning Strategies at the Example of Replay-Based Text Classification
(2023)
Continuous learning is a research field that has grown significantly in recent years due to highly complex machine and deep learning models. Whereas static models need to be retrained entirely from scratch when new data become available, continuous models progressively adapt to new data, saving computational resources. In this context, this work analyzes parameters impacting replay-based continuous learning approaches using the example of a data-incremental text classification task with an MLP and an LSTM. Generally, it was found that replay improves the results compared to naive approaches but does not reach the performance of a static model. Mainly, the performance increased with more replayed examples, and the number of training iterations has a significant influence as it can partly control the stability-plasticity trade-off. In contrast, balancing the buffer and the strategy for selecting examples to store in the replay buffer were found to have a minor impact on the results in the present case.
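A common building block of replay-based approaches is a fixed-size buffer of past examples that is mixed into each new training batch. The sketch below uses reservoir sampling for the storage strategy; it is illustrative only, and the buffer management analyzed in the thesis may differ:

```python
import random

class ReplayBuffer:
    """Fixed-size replay buffer filled by reservoir sampling, so every
    example seen so far has an equal chance of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Stream in one example, possibly evicting a stored one."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            i = self.rng.randrange(self.seen)
            if i < self.capacity:
                self.buffer[i] = example

    def sample(self, k):
        """Draw k stored examples to mix into the current training batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

The replay ratio, i.e. how many buffered examples are drawn per new batch, is one of the knobs that trade stability (retaining old knowledge) against plasticity (fitting new data).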
The GeoFlow II experiment aims to replicate Earth’s core dynamics using a rotating spherical container with controlled temperature differences and simulated gravity. During the GeoFlow II campaign, a massive dataset of images was collected, necessitating an automated system for image processing and fluid flow visualization in the northern hemisphere of the spherical container. From here, we aim to detect the special structures appearing in the post-processed images. Recognizing YOLOv5’s proficiency in object detection, we apply the YOLOv5 model to this task.
This study explores the opportunities and risks associated with user-generated content (UGC) in the communication strategies of marketing departments from a business perspective. With the rise of social media and online platforms, UGC has become a powerful tool for brands to engage with their audience, build trust, and enhance brand awareness. However, implementing UGC also comes with inherent risks, including the loss of control over brand messaging, potential negative user-generated content, and legal implications.
To investigate these dynamics, an empirical mixed-methods approach was employed, including expert interviews and a comprehensive literature review. The findings indicate that UGC offers significant opportunities for marketing departments, such as increased customer loyalty, enhanced authenticity, brand awareness, as well as a diverse set of possible content. However, the study also reveals the potential risks associated with UGC, highlighting the importance of managing these risks effectively.
RNA tertiary contact interactions between RNA tetraloops and their receptors stabilize the folding of ribosomal RNA and support the maturation of the ribosome. Here we use FRET-assisted structure prediction to develop structural models of two ribosomal tertiary contacts, one consisting of a kissing loop and a GAAA tetraloop and one consisting of the tetraloop receptor (TLR) and a GAAA tetraloop. We build bound and unbound states of the ribosomal contacts de novo, label the RNA in silico and compute FRET histograms based on MD simulations and accessible contact volume (ACV) calculations. The predicted mean FRET efficiencies from molecular dynamics (MD) simulations and ACV determination show agreement for the KL-TLGAAA construct. The KL construct revealed an excessively high FRET efficiency and artificial dye behavior, which requires further investigation of the model. In the case of the TLR, the importance of correct dye and construct parameters in the modeling was shown, which likewise calls for renewed modeling. This hybrid approach of experiment and simulation will promote the elucidation of dynamic RNA tertiary contacts and accelerate the discovery of novel RNA interactions as potential future drug targets.
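The FRET efficiencies underlying the predicted histograms follow the standard Förster relation E = 1 / (1 + (R/R0)^6), where R is the donor-acceptor distance and R0 the Förster radius. A minimal sketch of per-distance and ensemble-mean efficiencies (illustrative only, not the ACV/MD pipeline used here):

```python
def fret_efficiency(r, r0):
    """Förster (FRET) transfer efficiency for donor-acceptor distance r
    and Förster radius r0: E = 1 / (1 + (r / r0)**6).
    At r == r0 the efficiency is exactly 0.5."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def mean_efficiency(distances, r0):
    """Mean FRET efficiency over an ensemble of dye-dye distances, e.g.
    distances sampled from accessible-volume dye positions."""
    return sum(fret_efficiency(r, r0) for r in distances) / len(distances)
```

Because of the steep sixth-power dependence, small errors in the modeled dye positions translate into large shifts in predicted efficiency, which is consistent with the sensitivity to dye and construct parameters observed for the TLR model.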