The wind energy sector is undergoing digitalization processes that span multi-tier supply chains of turbine components and wind farm maintenance, among others. In an industrial use case involving Siemens Gamesa Renewable Energy, Vestas and APQP4Wind, the processes of producing, fastening, and servicing bolts in turbines are mapped to a digital model. The model follows the lifetime of turbine bolts from the manufacturing phase, through fastening in turbines and maintenance, until their replacement and recycling. The development of the digital model is addressed iteratively in a design science research approach, as the authors actively contribute to the project. Distributed ledgers (DLs) support the notarial documentation of the bolts and turbines, from the registration phase through the assembly, technical service verification and recycling phases. The immutable and decentralized nature of DLs secures the data against tampering and prevents unilateral changes by engaging the service stakeholders and component providers in a blockchain consortium.
This thesis deals with the development of a methodology and concept to analyse targeted attacks against IIoT/IoT devices. Building on the established background knowledge about honeypots, fileless malware and injection techniques, a methodology is created that leads to the concept of a honeypot analysis system. The system is designed to analyse and detect novel threats such as fileless attacks, which are often utilized by Advanced Persistent Threats. The system is partially implemented and later evaluated by performing a simulated attack utilizing fileless techniques. Its effectiveness is discussed and rated based on the results.
When entering waterways that are restricted in height, in width or by another vessel, the behaviour of a ship changes. The most evident effect of navigating in shallow water is the squat, which has led to several groundings. Because of pressure differences, the vessel is pulled down into the water and the trim is changed. Another shallow water effect is the speed loss due to an increase in resistance, which can reduce the maximum speed by up to 50 percent. In general, the behaviour of a ship in shallow water is said to be sluggish, meaning that it is more difficult to navigate, which affects, among other things, the radius of the turning circle. Sailing parallel to a close-by bank affects the lateral force and the yaw moment. The interaction with other ships has effects similar to bank effects, but is more complex since more parameters play a major role. In this thesis, each of these effects is researched by studying several papers by renowned researchers.
Several models are developed that correspond to the inherent model of forces and moments of the simulation program. The challenges and obstacles that arose during modelling and implementation are pointed out, and solutions or approaches are given.
In this work, a protocol for portable nanopore sequencing of DNA from pollen collected from honey bees, bumble bees, and wild bees was developed. DNA metabarcoding is applied to identify genera within the mixed DNA samples. The DNA extraction and ITS and ITS2 PCR parameters tested for this purpose were applied to the collected pollen sample and the amplicons were then decoded using the Flongle sequencer adapter from Oxford Nanopore Technologies. It is shown that the main pollinator resources at the different sites can be identified in percentage proportions. The protocol generated in this study can be used for further ecological questions.
Development of a genetic biomonitoring test for the investigation of pollinator-plant-interactions
(2021)
A worldwide decline in biodiversity has been recorded. Insects, and with them pollinators, are especially threatened. When the foraging behaviour of pollinators is understood in detail, future crop and floral pollination services can be sustained, and projects for the conservation of pollinators and plant biodiversity can be established. With nanopore sequencing methods it is possible to identify, by their genetic information, the pollen species collected by pollinators. In this study, a protocol for portable nanopore sequencing of DNA from pollen collected by honey bees, bumble bees and wild bees is being designed. DNA metabarcoding is used to identify species within the mixed DNA sample. The ITS2 region will be used as a barcode. We will investigate the pollen preferences of three pollinator species by placing their hives or nests at the same sites. Based on the results, landscape management schemes are developed that target pollen preferences and nutritional requirements of managed and wild social bee species as well as solitary wild bees.
Mapping identities, digital assets, and people’s profiles on the internet is gaining much traction in the blockchain cosmos lately. The new technology is currently forming architectures that will pave new ways towards fundamental mechanisms for interacting in a decentralized, user-centered manner. These schemes are often declared the next generation of the web. The article shows how the internet has evolved in managing identities, what problems arose, and how new data architectures help build applications on top of privacy rights. Both technological and ethical perspectives are considered to answer which guidelines should be followed to realize the upcoming branch of decentralized services, and what we can learn from historical schemes regarding their privacy, accounting, and user data.
Digital innovation in the quality management system from supply chain to final product conformity
(2019)
With digitalization, a new revolution is taking place in Industry 4.0, and new trends in innovation are emerging with it. We therefore want to digitalize the process from the supply chain to the final product conformity of the aircraft.
Every document received from the supplier (e.g. CoC, inspection report, concession) is handled digitally. When a part is received at the warehouse of the OEM, the warehouse personnel have a system that confirms, with the help of a QR code, that part A with serial number X is the correct fit for part number B, and that books the part into the ERP.
The biggest challenge is to reduce the amount of in-production inspection performed by humans. We want to take a further step towards automation, combined with IoT, to improve data processing in the automated process and reduce the overall inspection time. What is needed is a proper visual automated control system; with the help of gauge R&R, the process can be made more accurate and its traceability certified. Finally, since so much data is generated, data security is required: a proper data source and data storage must be created for supplier data as well as for internal data.
Digital Power of Attorney catalyzed by Software Requirements for Blockchain-based Applications
(2022)
Blockchain Technology (BT), with the so-called web3, has been at an inflection point between new sub-theme hypes and worldwide industrialization over the last three years, thanks to large companies like MicroStrategy [1] and Facebook [2] and several venture-capital formations [3] that are already fighting over market share and community growth. Our work presents insights from literature-based Software Requirement (SR) elicitation for a specific blockchain-based application: the creation, management and control of a digital Power of Attorney (POA). The context of POA is not only a finance-driven use case; it is by far a heavyweight, universal legal transaction. We use a morphological box and a reduced PRIMS-P to synthesize a generic specification for further blockchain-based application development. The formulated SRs in the POA context are reflected on our core actors, which are the grantor and authorized, trusted, external entities. The proposed characteristics for relationships and effects are visualized in a reference model originally used in digital platform ecosystems [4]. This design and modelling approach facilitated a closing discussion of BT and its future eCommerce perspective.
In the following bachelor thesis, the current trends and potential applications of digitalization in the service industry are discussed. With the surging demand for digitalization in all industries, there are branches of the service industry where digitalization is yet to be exploited to its full potential. However, it is difficult to pick and choose which branches of the industry should be fully digitized and which only partially. The result of this work should therefore facilitate the process of applying digitization in consulting services, where face-to-face human interaction has been key to the industry for years. For this purpose, essential factors to be taken into account were identified, which are pursued through the analysis, in the specification of the system requirements as well as in the performance of a utility value analysis.
The endogenous steroid hormone 17β-estradiol (E2) is a central player in a wide range of physiological and behavioral processes and diseases in vertebrates. As a consequence, it is a main target for molecular design and drug discovery efforts in medicine and the environmental sciences, which requires in-depth knowledge of protein-ligand binding processes. This work develops a bioinformatic framework based on local and global structure similarity for the characterization of E2-protein interactions in all 35 publicly available three-dimensional structures of estradiol-protein complexes. Subsequently, it uses the gained data to identify four geometrically conserved estradiol-binding residue motifs, against which the Protein Data Bank is queried. As a result of this database query, 15 hits in seven protein structures are found. Five of these structures do not contain E2 as a ligand and had thus not been included in this work’s initial data set. One of these newly detected structures is structurally and functionally dissimilar, as well as evolutionarily distant, from all other proteins analyzed in this work. Nevertheless, the ability of this protein to actually bind estradiol must be analyzed further. Finally, geometrically conserved E2-protein interactions are identified, and a new research direction using these conserved interaction ensembles for the detection of novel estradiol targets is proposed.
Data streams change their statistical behaviour over time. These changes can occur gradually or abruptly for unforeseen reasons, which may affect the expected outcome. It is thus important to detect concept drift as soon as it occurs. In this thesis we chose a distance-based methodology to detect the presence of concept drift in data streams. We used generalized learning vector quantization (GLVQ) and generalized matrix learning vector quantization (GMLVQ) classifiers for the distance calculation between prototypes and data points. Chi-square and Kolmogorov-Smirnov tests are used to compare the distance distributions of the test and training data sets to indicate the presence of drift.
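The distance-based drift check described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the prototype positions, window sizes, and synthetic data are assumptions, and the two-sample Kolmogorov-Smirnov statistic is compared against its asymptotic 5% critical value rather than a library p-value.

```python
# Sketch of distance-based concept drift detection: distances from samples
# to their nearest learned prototype are compared between a reference window
# and a current window with a two-sample Kolmogorov-Smirnov test.
import numpy as np

def nearest_prototype_distances(X, prototypes):
    """Euclidean distance from each sample to its closest prototype."""
    diffs = X[:, None, :] - prototypes[None, :, :]   # shape (n, k, d)
    return np.linalg.norm(diffs, axis=2).min(axis=1)

def ks_two_sample(a, b):
    """Two-sample KS statistic: maximum gap between the empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(0)
prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])      # assumed GLVQ prototypes
X_ref = rng.normal(0.0, 1.0, size=(300, 2))          # reference window
X_cur = rng.normal(5.0, 1.0, size=(300, 2))          # drifted window

d_ref = nearest_prototype_distances(X_ref, prototypes)
d_cur = nearest_prototype_distances(X_cur, prototypes)

D = ks_two_sample(d_ref, d_cur)
n = m = 300
critical = 1.358 * np.sqrt((n + m) / (n * m))        # asymptotic 5% threshold
drift_detected = D > critical
```

Because only the per-sample distances (not the raw feature vectors) are compared, the same test applies unchanged when the distance comes from a learned GMLVQ metric instead of the plain Euclidean one.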
DropConnect (the generalization of Dropout) is a very simple regularization technique that was introduced a few years ago and has become extremely popular because of its simplicity and effectiveness. In this thesis, a suitable architecture for applying DropConnect to Learning Vector Quantization networks is proposed, along with a reference implementation and experimental results. In many classification tasks, the uncertainty of the model is a vital piece of information for experts. Methods to extract the uncertainty and stability using DropConnect are also proposed, and the corresponding experimental results are documented.
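As a quick illustration of the DropConnect idea (a generic sketch for a single linear layer, not the thesis's LVQ-specific architecture): each individual weight is dropped independently during a forward pass, and repeating stochastic passes at test time yields a simple Monte Carlo uncertainty estimate.

```python
# Generic DropConnect sketch: a fresh binary mask zeroes individual weights
# on every forward pass (Dropout, by contrast, zeroes whole activations).
import numpy as np

def dropconnect_forward(x, W, b, p_drop, rng):
    """One stochastic forward pass; each weight is kept with prob 1 - p_drop."""
    mask = rng.random(W.shape) >= p_drop
    return x @ (W * mask) + b

rng = np.random.default_rng(42)
W = rng.normal(size=(4, 3))          # toy weights, purely illustrative
b = np.zeros(3)
x = rng.normal(size=(1, 4))

# Monte Carlo uncertainty: the spread over stochastic passes indicates how
# stable the model's output is for this particular input.
samples = np.stack([dropconnect_forward(x, W, b, 0.5, rng) for _ in range(200)])
prediction = samples.mean(axis=0)
uncertainty = samples.std(axis=0)    # per-output uncertainty estimate
```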
Not available.
We propose a method for edge detection in images with multiplicative noise based on the Ant Colony System (ACS). To adapt the Ant Colony System algorithm to multiplicative noise, the global pheromone matrix is computed using the coefficient of variation. We carried out a performance comparison of the edge-detection Ant Colony System algorithm against several techniques; the best results were obtained with the gradient and the coefficient of variation.
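The coefficient-of-variation measure can be illustrated as follows. This is only a sketch of the local CV computation that could seed an ACS pheromone matrix, not the paper's full Ant Colony System; the window size and toy image are assumptions.

```python
# Local coefficient of variation (CV = std / mean). Under multiplicative
# (speckle-like) noise, CV stays roughly constant in homogeneous regions and
# rises at edges, which makes it a noise-robust edge indicator.
import numpy as np

def local_coefficient_of_variation(img, win=3):
    """CV in a win x win neighbourhood around every pixel."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    cv = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            m = patch.mean()
            cv[i, j] = patch.std() / m if m > 0 else 0.0
    return cv

# Toy image: two flat regions separated by a vertical step edge
img = np.ones((8, 8))
img[:, 4:] = 4.0
cv = local_coefficient_of_variation(img)   # CV is zero in flat regions,
                                           # positive near columns 3-4
```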
Large bone defects are a major clinical problem affecting the elderly disproportionately, particularly in developed countries where this population is the fastest growing. Current treatments include autologous and allogeneic bone grafts, bone elongation with the Ilizarov technique, bone graft substitutes, and electrical stimulation. Each of these approaches enjoys varying degrees of success; however, each also has its associated problems and complications. A new, still experimental treatment is tissue engineering, which combines scaffolds, osteogenic stem cells and growth factors and is showing encouraging early results in preclinical and initial clinical studies.
Electrical stimulation has been shown to enhance bone healing by promoting mesenchymal stem cell migration, proliferation, and differentiation. In the present study we combine tissue engineering with electrical stimulation and hypothesize that this combined approach will have a synergistic effect resulting in enhanced new bone formation. In our in vitro experiments we observed that the levels of electrical stimulation we tested had no cytotoxic effect and instead increased osteogenic differentiation, as determined by enhanced expression of the osteogenic marker alkaline phosphatase. These findings support our hypothesis by demonstrating that in the tissue-engineering environment electrical stimulation promotes bone formation. The bioinformatics part of this project consisted of gene network analysis, identification of the top 10 osteogenic markers, and analysis of gene-gene interactions. We observed that in studies of stem cells from both human and rat, the genes BMPR1A, BMP5, TGFßR1, SMAD4, SMAD2, BMP4, BMP7, RUNX3, and CDKN1A are associated with osteogenesis and interact with each other. We observed a total of 31 interactions for human and 29 interactions for rat stem cells. While this approach needs to be proven experimentally, we believe that these in vitro and in silico analyses could complement each other and, in doing so, contribute to the field of bone healing research.
The thesis presents an investigation of whether it is viable for the English company Essential Care to introduce a direct selling channel in the United Kingdom. The thesis provides an outline of the direct selling and labour markets in the United Kingdom, including organisations and legislation for direct selling. A SWOT analysis illustrates the external and internal factors that could influence the feasibility of the project. The main part of the thesis focuses on a market research survey which was conducted in the United Kingdom; an analysis of the results follows, providing a detailed outline of the findings. At the end of the thesis, the overall findings are summarised and recommendations for Essential Care are presented.
Embeddings for Product Data
(2022)
The e-commerce industry has grown exponentially in the last decade, with giants like Amazon, eBay, AliExpress, and Walmart selling billions of products. Machine learning techniques can be used within the e-commerce domain to improve the overall customer journey on a platform and increase sales. Product data in particular can be used for various applications, such as product similarity, clustering, recommendation, and price estimation. For data from these products to be used for such applications, we have to perform feature engineering: the idea is to transform these products into feature vectors before training a machine learning model on them. In this thesis, we propose an approach to create representations for heterogeneous product data from Unite’s platform in the form of structured tabular records. These tables consist of attributes carrying different information, ranging from product IDs to long descriptions. Our model combines popular deep learning approaches from natural language processing to create numerical representations that contain mostly non-zero elements in an array or matrix, known as dense representations, for all products. To evaluate the quality of these feature vectors, we validate how well the similarities between products are captured by these dense representations. The evaluations are divided into two categories. The first category directly compares the similarities between individual products. The second category uses these dense vectors as inputs in one of the above-mentioned applications and then evaluates the quality of the dense representation vectors based on the accuracy or performance of the defined application. As a result, we explain the impact of different steps within our model on the quality of these learned representations.
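A common way to compare such dense product vectors directly, as in the first evaluation category, is cosine similarity. The sketch below is a generic illustration with made-up embeddings, not Unite's actual pipeline or data.

```python
# Cosine similarity between dense embedding vectors: 1 = same direction,
# 0 = orthogonal (unrelated), -1 = opposite.
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two dense embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional embeddings for three products
laptop_a = np.array([0.9, 0.1, 0.4])
laptop_b = np.array([0.8, 0.2, 0.5])
banana   = np.array([0.0, 1.0, -0.2])

sim_related   = cosine_similarity(laptop_a, laptop_b)  # high: similar products
sim_unrelated = cosine_similarity(laptop_a, banana)    # low: unrelated products
```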
With globalization and the increasing diversity of the workforce, organizations are faced with the challenge of effectively managing multicultural teams. Understanding how employee engagement and job satisfaction are influenced by multicultural factors is crucial for organizations to create inclusive work environments that foster productivity and wellbeing. This literature review aims to explore the relationship between employee engagement, job satisfaction, and multi-cultural workplaces. It examines relevant studies and provides insights into the key factors, challenges, and strategies for enhancing employee engagement and job satisfaction in multicultural workplaces. The findings will shed light upon the author's research area on the factors influencing employee engagement and job satisfaction in multicultural work environments and contribute to a deeper understanding of cross-cultural dynamics in the workplace.
Proteins are macromolecules that consist of linearly bonded amino acids. They are essential elements in various metabolic processes. The three-dimensional structure of a protein is determined by the order of its amino acids, also referred to as the protein sequence. This conformation corresponds to the structural state in which the protein is functionally active. However, the relationships between protein sequence, structure and function are not yet fully understood. Additionally, information about structural properties, or even the entire protein structure, is crucial for understanding the dynamics that define protein functionality and mechanisms. From this, the role of a protein in its molecular context can be described closely. For instance, interactions can be investigated and comprehended as a biological dynamic network that is sensitive to alterations, i.e. changes caused by diseases. Such knowledge can aid in drug design, where compounds need to be specifically tailored and adjusted to their molecular targets. Protein energy profile-based methods can be applied to investigate protein structures with respect to dynamics and alterations. The publications enclosed with this work discuss in general the scientific potential of energy profile-based techniques and algorithms. On the one hand, changes in stability caused by protein mutations and protein-ligand interactions are discussed in the context of energy profiles. On the other hand, energetic relations to protein sequence, structure and function are elucidated in detail. Finally, the presented discussions focus on recent enhancements of the eProS (energy profile suite) database and toolbox. eProS freely provides all elucidated methodologies to the scientific community; thus, one can address biological questions with the presented methods at hand. Additionally, eProS provides annotations linked to external databases. This ensures a broad view of biological data and information.
In particular, energetic characteristics can be identified which contribute to a protein’s structure and function.
Cyanobacteria, prokaryotic microorganisms with essentially the same oxygenic photosynthesis as higher plants, are becoming excellent green cell factories for the sustainable generation of renewable chemicals and fuels from solar energy and carbon dioxide. In the presentation I will illustrate the concept of green cell factories by introducing and discussing two examples: (i) engineering cyanobacteria to produce the important bulk chemical and potential blend-in biofuel butanol from sunlight and carbon dioxide, so-called photosynthetic butanol, and (ii) the generation of a functional semisynthetic [FeFe]-hydrogenase linked to the native metabolism in living cells of the unicellular cyanobacterium Synechocystis PCC 6803.
This thesis looks at Customer Relationship Management in a different way. In order to identify factors that influence the acceptance of one of its components, the analytical CRM, it focuses on the opinion of the company’s employees. The objective of this thesis is to identify factors that positively influence the acceptance of analytical Customer Relationship Management within organizations.
Aminoacyl-tRNA synthetases (aaRSs) are key enzymes in the process of protein biosynthesis, charging tRNA molecules with their corresponding amino acid. Whereas adenosine phosphate fixation is common to all aaRSs, recognition of the respective amino acid to ensure correct translation poses a complex task, which is still not understood to its full extent. Using all aaRS structures in the Protein Data Bank (PDB), this thesis reveals further details about the specificity-determining interactions of each aaRS. Moreover, inspection of the similarities between these enzymes using the structure-derived interaction data reinforces the sequence-based evolutionary trace of aaRSs to a certain degree: the concurrent development of two distinct classes of aaRS is apparent at the functional level, and previously determined evolutionary subclasses coincide with specific aminoacyl recognition in each aaRS type. Still, the discrimination of amino acids in aaRSs involves a multitude of further relevant mechanisms. Eventually, the analysis of specificity-relevant binding site interactions sheds light on how aaRSs evolved to distinguish different amino acids.
In this work, the task is to cluster microarray gene expression data of the cyanobacterium Nostoc PCC 7120 to detect messenger RNA (mRNA) degradation patterns. We search for characteristic degradation patterns that are caused by specific enzymes (ribonucleases), allowing further biological investigation of the underlying biochemical mechanisms. mRNA degradation is part of the regulation of gene expression because it controls the amount and longevity of the mRNA that is available for translation into proteins. A particular class of RNA-degrading enzymes are exoribonucleases, which degrade the molecule from its ends; degradation from the 5’ end, the 3’ end, or from both ends is theoretically possible.
In this investigation, the information about exoribonucleolytic degradation is given in a microarray data set containing gene expression values of 1,251 genes. The data set provides gene expression vectors containing the expression values of up to ten short distinct sections of a gene, ordered from the gene’s 5’ end to its 3’ end. For each gene, expression vectors are available for both nitrogen-fixing and non-nitrogen-fixing conditions, which have to be considered separately for biological reasons. Accordingly, after filtering and preprocessing, two data sets for clustering are obtained, each consisting of 133 ten-dimensional expression vectors. The similarity of the expression vectors is judged by a newly developed correlation-based similarity measure and compared with the results obtained using the Euclidean distance. A non-linear transformation of the correlations was applied to obtain a dissimilarity measure. By choice of parameters within this transformation, a user-specific differentiation between negatively and positively correlated gene expression vectors and an adequate adjustment with respect to the noise level of the gene expression values is possible.
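A transformation of the kind described could look as follows. This is a hypothetical example, since the abstract does not state the exact formula: Pearson correlation r is mapped to d = ((1 - r) / 2)^γ, so perfectly correlated profiles get d = 0, anti-correlated ones d = 1, and γ tunes how sharply intermediate correlations are separated.

```python
# Hypothetical non-linear correlation-to-dissimilarity transform (the
# thesis's actual formula may differ): d = ((1 - r) / 2) ** gamma.
import numpy as np

def correlation_dissimilarity(u, v, gamma=2.0):
    """Map Pearson correlation into a dissimilarity in [0, 1]."""
    r = np.corrcoef(u, v)[0, 1]
    return ((1.0 - r) / 2.0) ** gamma

up   = np.array([1.0, 2.0, 3.0, 4.0])   # expression rising from 5' to 3'
same = np.array([2.0, 4.0, 6.0, 8.0])   # perfectly correlated profile
down = np.array([4.0, 3.0, 2.0, 1.0])   # anti-correlated profile

d_same = correlation_dissimilarity(up, same)   # near 0: same degradation shape
d_anti = correlation_dissimilarity(up, down)   # near 1: opposite shape
```

With γ > 1 the transform compresses moderate correlations toward zero dissimilarity, which is one way to make the measure less sensitive to expression noise.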
Clustering was performed using Affinity Propagation (AP). The number of clusters obtained by AP depends on the so-called self-similarity of the data vectors. This dependence was used to identify stable cluster solutions by self-similarity control. To evaluate the clustering results, Median Fuzzy c-Means (M-FCM) was used. Further, several cluster validity measures are applied, and visual inspections by t-distributed Stochastic Neighbor Embedding (t-SNE) as well as cluster visualizations are provided for the mathematical interpretation of the clusters.
To validate the clustering results biologically, the found data structure is checked for biological adequacy. A deeper investigation into the mechanisms behind mRNA degradation was achieved by use of an RNA-Seq data set. The contained 40-base-pair (bp) reads for non-nitrogen-fixing and nitrogen-fixing conditions were assembled using the bacteria-specific ab initio assembly of Rockhopper. Thus, mRNA (transcript) sequences of the clustered genes are obtained. A further investigation of the untranslated regions (UTRs) is performed, based on the assumption that exoribonucleases recognize specific transcript sequences outside of the annotated gene regions as their binding sites. These UTRs need to be analyzed with respect to sequence similarity using motif-finding algorithms.
No abstract available.
Evolution of Game Music: a look at characteristic elements of music in video games across time
(2015)
Music in video games is a subject worth examining; nevertheless, it has not been fully explored yet. This thesis presents and explains characteristics that all video game music shares and examines them in light of developments in the history of video games. The thesis contains information about video games that inspired the musical evolution of games or that feature music as a key part, as well as information about technological advances that influenced this musical evolution.
After creating a new blockchain transaction, the next step usually is to make miners aware of it by having it propagated through the blockchain’s peer-to-peer network. We study an unintended alternative to peer-to-peer propagation: exclusive mining. Exclusive mining is a type of collusion between a transaction initiator and a single miner (or mining pool). The initiator sends transactions through a private channel directly to the miner instead of propagating them through the peer-to-peer network. Other blockchain users only become aware of these transactions once they have been included in a block by the miner. We identify three possible motivations for engaging in exclusive mining: (i) reducing transaction cost volatility (“confirmation as a service”), (ii) hiding unconfirmed transactions from the network to prevent frontrunning, and (iii) camouflaging wealth transfers as transaction costs to evade taxes or launder money. We further outline why exclusive mining is difficult to prevent and introduce metrics which can be used to identify mining pools engaging in exclusive mining activity.
To investigate the effects of climate change on interactions within ecosystems, a microcosm experiment was conducted. The effects of temperature increase and predator diversity on Collembola communities and their decomposition rate were investigated. The predators used were mites and Chilopods, whose predation effects on several response variables were analysed. This data included Collembola abundance, biomass and body mass as well as basal respiration and microbial biomass carbon. These response variables were tested against the predictors in several models. Temperature showed high significance in interaction with mite abundance in almost all models. Furthermore, the results of the basal respiration and microbial biomass carbon support the suggestion of a trophic cascade within the animal interaction.
This Bachelor thesis provides an experimental validation of the “si-Fi” software, which was designed for RNAi off-target searches and silencing efficiency predictions. The experimental approach is based on using synthetic DNA as both the RNAi target and the RNAi trigger sequence. The data were generated by two different types of experiments using a transient gene silencing system in bombarded barley epidermal cells. The efficiency of RNAi was estimated by scoring the effect of silencing the susceptibility-related gene Mlo on the resistance of transformed cells to the powdery mildew fungus Blumeria graminis f. sp. hordei, and by observing the reduction of fluorescent signals coming from an RNAi target fused to the green fluorescent protein. The aim of this work was a comparison between the in silico prediction of RNAi efficiency and off-target effects in barley and experimental data.
This thesis comprehensively explores factors contributing to malaria-induced anemia and severe malarial anemia (SMA). The study utilizes a comprehensive dataset to investigate immunological interactions, genetic variations, and temporal dynamics. Findings highlight the complex interplay between immune markers, genetic traits, and cohort-specific influences. Notably, age, HIV status, and genetic variations emerge as crucial factors influencing anemia risk. The incorporation of Poisson regression models sheds light on the genetic underpinnings of SMA, emphasizing the need for personalized interventions. Overall, this research provides valuable insights into the multifaceted nature of malaria-induced complications, paving the way for further molecular investigations and targeted interventions.
The aim of this bachelor thesis was to establish extracytoplasmic function (ECF) σ factors as synthetic genetic regulators for biotechnological and synthetic biology applications in the newly emerging model organism Vibrio natriegens. To this end, synthetic genetic circuits were engineered on plasmids as a test set-up for the investigated ECFs and their target promoters. The resulting plasmid library consisted of reporter plasmids with the target promoter fused to a lux cassette, a set of high-copy ECF plasmids, and a backup set of lower-copy ECF plasmids. First, the high-copy plasmids were transformed into V. natriegens to test their functionality at different inducer levels, which yielded good inducibility for a few strains but showed excessively high ECF expression in most. For this reason, the set of lower-copy plasmids was used for combinatorial co-transformation to investigate the ECFs’ cross-talk with unspecific ECF target promoters. The switch to the lower-copy plasmid set proved partly helpful, while much room for fine-tuning of the circuits remains. The knowledge gained can be used to achieve higher success rates when engineering synthetic circuits for various applications in V. natriegens, using the ECFs recommended here as suitable synthetic genetic regulators.
This master thesis investigates a new method for the feature extraction of gray-scale images, the so-called “Non-Euclidean Principal Component Analysis”. Here, the standard inner product of Euclidean space is substituted by a semi-inner product in the well-known learning rules of Oja and Sanger. The new method is compared with standard principal component analysis (PCA) by extracting features (feature vectors) from different labelled databases and judging the results by the accuracies of Border Sensitive Generalized Learning Vector Quantization (BSGLVQ), Feed-Forward Neural Networks (FFNN) and Support Vector Machines (SVM).
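For context, Oja's learning rule with the standard Euclidean inner product, i.e. the baseline the thesis modifies by substituting a semi-inner product, can be sketched as follows. The synthetic data, learning rate, and epoch count are illustrative assumptions, not the thesis's setup.

```python
# Oja's rule for the first principal component: the projection y = <x, w>
# uses the standard Euclidean inner product, which is exactly the spot
# where a semi-inner product could be substituted.
import numpy as np

def oja_first_component(X, eta=0.01, epochs=50, seed=0):
    """Estimate the leading PCA direction of centered data X by Oja's rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = x @ w                    # projection (the inner product)
            w += eta * y * (x - y * w)   # Oja update; keeps ||w|| near 1
    return w / np.linalg.norm(w)

# Synthetic data stretched along (1, 1): the first PC should align with it
rng = np.random.default_rng(1)
t = rng.normal(size=500)
X = np.column_stack([t, t]) + 0.1 * rng.normal(size=(500, 2))
w = oja_first_component(X - X.mean(axis=0))
alignment = abs(w @ np.array([1.0, 1.0]) / np.sqrt(2))  # near 1 when aligned
```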
This feasibility study shows how logistical concepts can be improved or reorganized. For this purpose, the assembly line for hydraulic blocks at Bosch Rexroth Changzhou is examined and new ideas are presented. To ensure comparability, three different cases are considered. Based on this evaluation, recommendations for further development are given.
Fermat proposed his little theorem in 1640, but a proof was not officially published until 1736. In this thesis, we mainly focus on different proofs of Fermat's little theorem, such as the combinatorial proof by counting necklaces, multinomial proofs, the proof by modular arithmetic, the dynamical systems proof, and the group theory proof. We also concentrate on the generalizations of Fermat's little theorem given by Euler and Laplace. Euler was the first to prove Fermat's little theorem, and we will go through three different proofs he gave. The theorem has many applications in mathematics and cryptography. We focus on its applications in cryptography, namely primality testing and public-key cryptography. A primality test determines whether a given number n is prime or composite. We concentrate on the Fermat primality test and the Miller-Rabin primality test, which is an extension of the Fermat primality test. We also discuss the most widely used public-key cryptosystem, the RSA algorithm, named after its developers R. Rivest, A. Shamir, and L. Adleman. The algorithm was invented in 1978 and depends heavily on Fermat's little theorem.
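The Fermat primality test discussed above follows directly from the theorem: if n is prime, then a^(n-1) ≡ 1 (mod n) for every base a not divisible by n. A minimal sketch in Python (the function name and number of trials are illustrative choices, not taken from the thesis):

```python
import random

def fermat_test(n, trials=20):
    # By Fermat's little theorem, a prime n satisfies a^(n-1) = 1 (mod n)
    # for every base a with gcd(a, n) = 1; a base violating this proves
    # n composite (a "Fermat witness").
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # witness found: n is definitely composite
    return True  # probably prime (Carmichael numbers can slip through)
```

Because Carmichael numbers pass this test for all bases coprime to n, the Miller-Rabin test refines the check, which is exactly the extension the thesis covers.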
This scientific work reveals the potential for the development of the renewable energy market, driven by several factors: the unstable political situation in the world, rising energy prices, environmental degradation, and the growing demand of German residents for government measures to reduce the negative impact on the environment. The work relates to business planning and development using strategies based on these factors. The purpose of the study is to develop methods for successfully regulating the market for renewable resources in order to address environmental pollution through the promotion of environmentally friendly products. The work explores the driving forces behind, and the problems hindering, the development of the market for renewable resources. The problems raised concern all interested parties, from consumers and producers to the state body for regulating and stimulating the industry. The methods of environmentally oriented companies and the tools they use to strengthen their market positions were also analyzed. Based on the data obtained from the research, a concept and business strategy for a new environmentally oriented business consulting company, "Sun's generation", was created. The idea of the new company is to involve all parties using marketing tools, creating a healthy competitive environment among commercial companies and benefiting not only the companies themselves but also the end users of the products and the German government.
Financial fraud can be a cause of huge monetary losses for banks. Studies have shown that, if not mitigated, financial fraud can lead to bankruptcy for big financial institutions and even insolvency for individuals. Credit card fraud is a type of financial fraud that is ever growing. These numbers are expected to increase exponentially in the future, which is why many researchers are focusing on machine learning techniques for detecting fraud. This, however, is not a simple task, for mainly two reasons:
• varying behaviour in committing fraud
• high level of imbalance in the dataset (the majority of normal or genuine cases largely outnumbers the number of fraudulent cases)
When such an imbalanced dataset is provided as input, a predictive model usually tends to be biased towards the majority class.
In this thesis, this problem is tackled by implementing a data-level approach in which different resampling methods, such as undersampling, oversampling, and hybrid strategies, along with bagging and boosting algorithmic approaches, have been applied to a highly skewed dataset with 492 identified frauds out of 284,807 transactions.
Predictive modelling algorithms like Logistic Regression, Random Forest, and XGBoost have been implemented along with different resampling techniques to predict fraudulent transactions.
The performance of the predictive models was evaluated using the area under the receiver operating characteristic curve (AUC-ROC), the area under the precision-recall curve (AUC-PR), precision, recall, and the F1 score.
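As an illustration of the data-level approach, random undersampling can be sketched in a few lines (a toy sketch; the thesis uses established library implementations, and the function name and ratio parameter here are illustrative):

```python
import random

def random_undersample(X, y, ratio=1.0, seed=0):
    # Keep all minority-class (fraud, label 1) samples and randomly
    # discard majority-class (genuine, label 0) samples until the
    # majority is at most `ratio` times the size of the minority.
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    n_keep = min(len(majority), int(ratio * len(minority)))
    kept = sorted(minority + rng.sample(majority, n_keep))
    return [X[i] for i in kept], [y[i] for i in kept]
```

Oversampling goes the other way and synthesizes additional minority samples (e.g. SMOTE); hybrid strategies combine both directions.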
Long-range tertiary interactions between RNA tetraloops and their receptors stabilize the folding of ribosomal RNA and support the maturation of the ribosome. Here, we use FRET-assisted structure prediction to develop a structural model of the GAAA tetraloop receptor (TLR) interaction and its dynamics. We build the docked TLR de novo, label the RNA in silico and compute FRET histograms based on MD simulations. The predicted mean FRET efficiency is remarkably consistent with single-molecule experiments of the docked tetraloop. This hybrid approach of experiment and simulation will promote the elucidation of dynamic RNA tertiary contacts and accelerate the discovery of novel RNA and RNA-protein interactions as potential future drug targets.
Our current research aims to establish a complete ribonucleic acid (RNA) production line, from plasmid design to the purification of in vitro transcribed RNA and the labeling of RNA. RNA is the central molecule within the central dogma of molecular biology and is involved in most essential processes within a cell [1]. In many cases, only a compact three-dimensional structure enables the respective RNA to fulfill its function. In this context, RNA tertiary contacts such as kissing loops and pseudoknots are essential to stabilize three-dimensional folding [2]. We will produce a tertiary contact consisting of a kissing loop and a GAAA tetraloop that occurs in eukaryotic ribosomal RNA [3,4]. The RNA sequence is integrated into a vector plasmid, which is subsequently amplified in E. coli. After the following plasmid purification steps, the RNA sequence will be transcribed in vitro [5,6]. In order for the RNA to be used for Förster resonance energy transfer (FRET) experiments at the single-molecule level, fluorescent dyes must be coupled to the RNA molecule [7].
The epithelial membrane proteins (EMP1-3), which belong to the family of peripheral myelin proteins 22-kDa (PMP22), are involved in epithelial differentiation. EMP2 was found to be a downstream target gene of the tumor suppressor gene HOPX, a homeobox-containing gene. Additionally, a dysregulation of EMP2 has been observed in various cancers, but the function of EMP2 in human lung cancer has not yet been clarified.
In this study, a real-time RT-PCR, Western blot and cytoblock analysis were performed to analyze the expression of EMP2. Gain-of-function was achieved by stable transfection with an EMP2 expression vector and loss-of-function by siRNA knockdown. Stable transfection led to overexpression of EMP2 at both mRNA and protein levels in the transfected cell lines H1299 and H2170.
Functional assays including proliferation, colony formation, migration and invasion assays as well as cell cycle analyses were performed after stable transfection, and it was found that ectopic EMP2 expression resulted in reduced cell proliferation, migration and invasion as well as a G1 cell cycle arrest. After the EMP2 gene was silenced by siRNA knockdown, inhibition of the cell-invasive property was observed. These phenomena were accompanied by reduced AKT, mTOR and p38 activities.
Taken together, the data suggest that the epithelial membrane protein 2 (EMP2) is a tumor suppressor and exerts its tumor suppressive function by inhibiting AKT and MAPK signaling pathways in human lung cancer cells.
Simulating complex physical systems involves solving nonlinear partial differential equations (PDEs), which can be very expensive. Generative Adversarial Networks (GANs) have recently been used to generate solutions for PDE-governed complex systems without having to solve them numerically.
However, concerns have been raised that the standard GAN system cannot capture some important physical and statistical properties of a complex PDE-governed system, alongside other concerns about difficult and unstable training, the noisy appearance of generated samples, and the lack of robust methods for assessing sample quality apart from visual examination. In this thesis, a standard GAN system is trained on a dataset of heat transfer images. We show that the generated dataset can capture the true distribution of the training data with respect to both visual and statistical properties, specifically the vertical statistical profile. Furthermore, we construct a GAN model which can be conditioned using a variance-induced class label. We show that the variance threshold t = 0.01 constructs a good conditional class label, such that the generated images achieve a 96% accuracy rate in complying with the given conditions.
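The variance-induced class label described above can be sketched as a simple thresholding of per-image pixel variance (a minimal sketch; the image is flattened to a list of pixel values, and only the threshold t = 0.01 is taken from the text):

```python
def variance_label(pixels, t=0.01):
    # Binary condition label derived from the pixel variance of one image:
    # 1 for high-variance images, 0 for low-variance ones.
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return int(var > t)
```

Labels of this kind turn an unconditioned dataset into (image, label) pairs suitable for training a conditional GAN.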
In the following study, we evaluated how a simple autoencoder can be used to train a Generalized Learning Vector Quantization classifier. Specifically, we proved that the bottleneck of an autoencoder serves as an "information filter" which tries to best represent the desired output in that particular layer in the statistical sense of mutual information.
The autoencoder model was trained on a purely unsupervised task and leveraged the advantages of learned feature representations. As a result, the model achieved a significant accuracy. Implementation and tuning of the model were carried out using TensorFlow [1].
An additional study was dedicated to improving the traditional GLVQ algorithm taken from sklearn-lvg [2] using the bottleneck of an autoencoder.
The study revealed the potential of autoencoder bottlenecks as a pre-processing tool for improving the accuracy of GLVQ. Specifically, the model reached an accuracy of about 75% in GLVQ, compared to about 62% for the original one. Consequently, the research exposed the need for further improvement of the model for the present problem case.
In this work, a transgenic zebrafish line that expresses the fluorophore dsRed under the endogenous zebrafish cochlin promoter was to be established using the CRISPR/Cas9 system. dsRed was cloned into a pBluescript vector, followed by the cloning of the cochlin locus into this vector. This bait construct was then to be microinjected into wild-type AB zebrafish embryos. The microinjection of Cas9 mRNA, single guide RNA and a bait construct was practiced with the tyrosinase gene, which was disrupted using CRISPR/Cas9.
This thesis provides an overview of Generation Z with a focus on Mittweida University of Applied Sciences students. It explores the general issues of students' behavior in life, as well as their attitudes toward the financial and banking sectors. It also examines the German banking market, its strengths and weaknesses in attracting new clients. At the end, possible strategies for the development of the bank in terms of attractiveness for young people are provided.
Pollinating insects are of vital importance for the ecosystem, and their drastic decline has severe consequences for the environment and humankind. Understanding their interaction networks is the first step towards preserving these highly complex systems. For that purpose, the following study describes a protocol for the investigation of honey bee pollen samples from different agro-environmental areas by DNA extraction, PCR amplification and nanopore sequencing of the barcode regions rbcL and ITS. It was shown that the most abundant species were classified consistently by both DNA barcodes, while species richness was enhanced by single-barcode detection of less abundant species. The analysis of the different landscape variables exhibited a decline of species richness, Shannon diversity index, and species evenness with increasing organic crop area. However, sampling was only carried out in August, and further investigations are suggested to give a more complete picture of honey bee foraging throughout the seasons.
Genetic sex determination of ancient DNA samples is based on a simple mathematical algorithm that considers the number of mapped reads on the autosomal, X, and Y chromosomes. The algorithm is implemented in a command-line tool, SiD. SiD is used to determine the sex of 16 samples, which have been shotgun sequenced and captured with a 1240k panel.
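The ratio idea behind such read-count-based sex determination can be sketched as follows (a toy sketch; the actual statistic and thresholds used by SiD may differ, and the Ry cutoffs below are illustrative values from the ancient-DNA literature):

```python
def infer_sex(x_reads, y_reads, xy_cutoff=0.075, xx_cutoff=0.016):
    # Fraction of sex-chromosome reads that map to the Y chromosome:
    # high for XY samples, near zero (mismapping noise) for XX samples.
    ry = y_reads / (x_reads + y_reads)
    if ry >= xy_cutoff:
        return "XY"
    if ry <= xx_cutoff:
        return "XX"
    return "undetermined"
```

The gap between the two cutoffs deliberately leaves low-coverage or contaminated samples unassigned rather than forcing a call.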
Cryptorchidism is the most common disorder of sex development in dogs. It describes a failure of one or both testes to descend into the scrotum in due time, and it is a heritable multifactorial disease. In this work, selected dogs of a German sheep poodle breed were sequenced with nanopore sequencing and subsequently examined for genetic variations correlating with cryptorchidism. The relationships of the studied dogs were also analyzed and visually processed.
Prototype-based classification methods like Generalized Matrix Learning Vector Quantization (GMLVQ) are simple and easy to implement. An appropriate choice of the activation function plays an important role in the performance of (deep) multilayer perceptrons (MLPs), which rely on a non-linearity for classification and regression learning. In this thesis, successful candidates for non-linear activation functions known from MLPs are investigated for application in GMLVQ to realize a non-linear mapping. The influence of the non-linear activation functions on the performance of the model with respect to accuracy and convergence rate is analyzed, and experimental results are documented.
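The point where an activation function enters GLVQ-type cost functions can be sketched briefly: the relative distance μ is squashed by a monotonic non-linearity before summation over the training set (a minimal sketch; the sigmoid is one standard candidate, and the slope parameter θ is illustrative):

```python
import math

def glvq_mu(d_plus, d_minus):
    # Relative distance difference in [-1, 1]: d_plus is the distance to
    # the closest prototype of the correct class, d_minus to the closest
    # prototype of a wrong class; negative values mean correct decisions.
    return (d_plus - d_minus) / (d_plus + d_minus)

def sigmoid(mu, theta=1.0):
    # Monotonic squashing of mu; the slope theta controls how strongly
    # samples near the decision border contribute to the cost.
    return 1.0 / (1.0 + math.exp(-mu / theta))
```

Replacing the sigmoid by other MLP activation candidates changes how border samples are weighted during learning, which is the effect studied in the thesis.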
Going green, environmental protection, eco-friendliness, sustainability and sustainable development have become frequent terms in everyone's life. The negative impact of human activities, causing increased environmental pollution and decline, is a matter of dire concern nowadays, and in the last few decades greater attention has been paid to these issues. Understanding society's new concerns, increasingly more companies have begun to modify their behaviour toward a more eco-friendly and responsible one. Green marketing is an emerging area of interest and a tool of modern marketing used by companies in various industries. It is a full-service marketing strategy that includes green marketing plan development, sustainable auditing and planning, branding, design, and communication. An effective, authentic and transparent green presentation of a company provides a chance to successfully assert itself on the market, communicate core company values and build long-term customer relations. The young and innovative company SWOX Surf Protection, which entered the market with a long-lasting waterproof sunscreen particularly designed for surfers and snowboarders, wants to foster growth by expanding its existing target group to a broader segment comprising all outdoor activists. Moreover, the brand strives to become the leading sunscreen manufacturer for outdoor sports and wants to position itself as a lifestyle brand. In 2016 the company started to produce "greener" sunscreen tubes, with an imminent launch at hand. Since surfers, snowboarders and outdoor activists in particular are in close contact with nature and spend a lot of time in the sun, it is assumed that they have a particular, health-related interest in using sunscreen, while at the same time showing increased commitment towards environmental protection. In this context, it is assumed that a holistic green and organic sunscreen could provide added value.
This paper intends to examine whether green marketing could be a relevant strategy for SWOX Surf Protection to differentiate itself from its competitors, attract potential customers, build long-term customer relations and, as a result, position itself as a successful sunscreen lifestyle brand in the market. This will be verified through a comprehensive literature review and detailed market research.
Classification of time series has received a considerable amount of interest over the past years due to many real-life applications, such as environmental modeling, speech recognition, and computer vision.
In my thesis, I focus on the classification of time series by LVQ classifiers. To learn a classifier, we need a training set. In our case, every data point in the training set contains a sequence (an ordered set) of feature vectors. Thus, the first task is to construct a new feature vector (or matrix) for each sequence.
Inspired by [2], I use Hankel matrices to construct the new feature vectors. This choice comes from a basic assumption that each time series is generated by a single or a set of unknown Linear Time Invariant (LTI) systems.
After generating new feature vectors by Hankel matrices, I use two approaches to learn a classifier: Generalized Learning Vector Quantization (GLVQ) and the median variant of Generalized Learning Vector Quantization (mGLVQ).
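The Hankel construction can be sketched as stacking shifted windows of the series so that every anti-diagonal is constant (a minimal sketch; the number of rows is a free parameter, and any normalization applied in [2] is omitted):

```python
def hankel_matrix(series, rows):
    # Column j is the window series[j : j + rows]; consequently entry
    # (i, j) equals series[i + j], so all anti-diagonals are constant.
    cols = len(series) - rows + 1
    return [[series[i + j] for j in range(cols)] for i in range(rows)]
```

Under the LTI assumption, the rank and column space of this matrix characterize the underlying system, which is what makes it a useful feature representation for classification.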
The subject of the following paper is the mental well-being of employees at work and how leaders can improve this well-being using positive psychology. The paper is compilatory in nature, as it uses the research and literature of experts to analyse how employee mental well-being can be further stimulated. The expert literature is used to present tools, but also to demonstrate the effectiveness of these tools through real-life case studies and evidence. The paper wishes to inform persons, leaders, and entire organizations how positive psychology can benefit organizational members' well-being in the long term. Using a compilation of positive psychology literature and the analysis of real-life case studies, the informative purpose of the thesis can be achieved.
In this work, a novelty detection framework by M. Filippone and G. Sanguinetti is considered, which is useful especially when only few training samples are available. It is restricted to Gaussian mixture models and makes use of information theory, applying the Kullback-Leibler divergence. Two variations of the framework are presented, applying the symmetric Hellinger divergence and a statistical likelihood approach.
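For univariate Gaussian components, both the Kullback-Leibler divergence and the squared Hellinger distance have closed forms, which makes the contrast between the original and the variant framework concrete (a minimal sketch for the one-dimensional case; the framework itself operates on Gaussian mixtures):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # Closed-form KL(N(mu1, s1^2) || N(mu2, s2^2)); asymmetric in general.
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5

def hellinger2_gauss(mu1, s1, mu2, s2):
    # Closed-form squared Hellinger distance; symmetric and bounded in [0, 1].
    bc = math.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * math.exp(
        -((mu1 - mu2) ** 2) / (4 * (s1**2 + s2**2)))
    return 1.0 - bc
```

The symmetry and boundedness of the Hellinger distance are exactly the properties that motivate swapping it in for the asymmetric, unbounded KL divergence.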
In laser drilling, one challenge is to achieve high drilling quality at high aspect ratios. Ultra-short pulsed lasers use different concepts such as thin disks, fibers and rods. The slab technology is implemented because of its flexibility and characteristics: it brings together the advantages of both and delivers high pulse energies at high repetition rates. Materials with a thickness > 1.5 mm demand specialized optics handling the high power and pulse energies, with adapted processing strategies integrated in a machine setup. In this contribution, we focus on all the necessary components and strategies for drilling high-precision holes with aspect ratios up to 1:40.
Increasing speed in laser processing is driven by the development of high-power lasers into ranges of more than 1 kW. Additionally, a proper distribution of this laser power is required to achieve high-quality processing results. In the case of high pulse repetition rates, a proper distribution of the pulses can be obtained by ultrafast beam deflection in the range of several 100 m/s. A two-dimensional polygon mirror scanner has been used to distribute a nanosecond pulsed laser with up to 1 kW average power at a wavelength of 1064 nm for multi-pass laser engraving. The pulse duration of this laser can be varied between 30 ns and 240 ns, and the pulse repetition rate is set between 1 and 4 MHz. The depth information is included in greyscale bitmaps, which were used to modulate the laser during scanning according to the lateral position and the depth. The process allows high processing rates and thus high throughput.
In this work, Direct Laser Interference Patterning (DLIP) is used in conjunction with the polygon scanner technique to fabricate textured polystyrene and nickel surfaces through ultra-fast beam deflection. For polystyrene, the impact of scanning speed and repetition rate on the structure formation is studied, obtaining periodic features with a spatial period of 21 μm and reaching structure heights up to 23 μm. By applying scanning speeds of up to 350 m/s, a structuring throughput of 1.1 m²/min has been reached. Additionally, the optical configuration was used to texture nickel electrode foils with line-like patterns with a spatial period of 25 μm and a maximum structure depth of 15 μm. Subsequently, the structured nickel electrodes were assessed in terms of their performance for the Hydrogen Evolution Reaction (HER). The findings revealed a significant improvement in HER efficiency, with a 22% increase compared to the untreated reference electrode.
Laser engraving requires a precise ablation per pulse through all layers of a depth map. Scaling this process to areas of a square meter and more within an acceptable time requires high-power ultra-short pulsed lasers for precision and a high scan speed for beam distribution. Scan speeds in the range of several 100 m/s can be achieved with a polygon scanner. In this work, a polygon scanner has been utilized within a roll-engraving machine to treat an 800 x 220 mm² (L x Dia) roll with 0.55 m² in a laser engraving process. The machine setup, the processing strategy and the data handling have been investigated and result in an efficient large-area process. Pre-tests were performed with a multi-MHz-frequency nanosecond-pulsed laser to investigate the processing strategy. A method to overcome the duty cycle of the polygon scanner was found in the synchronization of two polygons, enabling the use of a single laser source in a time-sharing concept. The throughput and the utilization of the laser source can thus be increased by a factor of two.
This master's thesis was written in cooperation with the Spanish company sí-internships. Developing an effective promotion strategy for this startup while spending as few financial resources as possible is the main objective of this work. To this end, extensive research on the current internal, external and integral market situation follows. Building on the results of this analysis, promotional objectives are determined and a target audience is chosen. Finally, a promotion strategy is established.
How Covid-19 impacts the workplace of knowledge workers in a pandemic and post pandemic world
(2021)
The following master thesis covers the topic of the workplace. The focus lies on the corona pandemic and how it has affected and will continue to affect the workplaces of knowledge workers. To this end, the workplace as a research area is first described holistically, followed by the presentation of gathered secondary data and the in-depth interviews conducted by the author. The presented secondary and primary data agree that the workplace as people know it will be changed after the pandemic. The most likely outcome is the hybrid workplace concept, which mixes the home office, the office and, alternatively, third places. Companies have to be equipped and prepared for these changes. The meaning of the office will increase, and it has to be redesigned in order to meet the needs of the knowledge workers who will eventually come back to the office.
The automatic comparison of RNA/DNA or rather nucleotide sequences is a complex task requiring careful design due to the computational complexity. While alignment-based models suffer from computational costs in time, alignment-free models have to deal with appropriate data preprocessing and consistently designed mathematical data comparison. This work deals with the latter strategy. In particular, a systematic categorization is proposed, which emphasizes two key concepts that have to be combined for a successful comparison analysis: 1) the data transformation comprising adequate mathematical sequence coding and feature extraction, and 2) the subsequent (dis-)similarity evaluation of the transformed data by means of problem-specific but mathematically consistent proximity measures. Respective approaches of different categories
of the introduced scheme are examined with regard to their suitability to distinguish natural RNA virus sequences from artificially generated ones encompassing varying degrees of biological feature preservation. The challenge in this application is the limited additional biological information available, such that the decision has to be made solely on the basis of the sequences and their
inherent structural characteristics. To address this, the present work focuses on interpretable, dissimilarity based classification models of machine learning, namely variants of Learning Vector Quantizers. These methods are known to be robust and highly interpretable, and therefore,
allow the applied data transformations to be evaluated together with the chosen proximity measure with respect to the given discrimination task. First analysis results are provided and discussed, serving as a starting point for a more in-depth analysis of this problem in the future.
RNA tertiary contact interactions between RNA tetraloops and their receptors stabilize the folding of ribosomal RNA and support the maturation of the ribosome. Here we use FRET-assisted structure prediction to develop structural models of two ribosomal tertiary contacts, one consisting of a kissing loop and a GAAA tetraloop and one consisting of the tetraloop receptor (TLR) and a GAAA tetraloop. We build bound and unbound states of the ribosomal contacts de novo, label the RNA in silico and compute FRET histograms based on MD simulations and accessible contact volume (ACV) calculations. The predicted mean FRET efficiencies from molecular dynamics (MD) simulations and ACV determination agree for the KL-TLGAAA construct. The KL construct revealed a too-high FRET efficiency and artificial dye behavior, which requires further investigation of the model. In the case of the TLR, the importance of correct dye and construct parameters in the modeling was shown, which also calls for renewed modeling. This hybrid approach of experiment and simulation will promote the elucidation of dynamic RNA tertiary contacts and accelerate the discovery of novel RNA interactions as potential future drug targets.
Brassica oleracea, like all cruciferous plants, has a defense mechanism against natural enemies, based on chemical compounds formed from the enzymatic degradation of glucosinolates. In the presence of epithiospecifier proteins (ESP), the hydrolysis of glucosinolates forms epithionitriles or nitriles, depending on the glucosinolate structure. This research shows that three predicted ESP sequences taken from the NCBI database play a role in the enzymatic hydrolysis of glucosinolates in Brassica oleracea.
Obesity is a major public health issue in many countries, and its development leads to many severe conditions. Adipose tissue (AT) is simply called fat; in males, visceral adipose tissue (VAT) is dominant. Estrogens play an important role in many pathological processes.
In this study, one of the subtypes of the estrogen receptor, ER-beta, is activated by treating VAT with KB (a specific ligand).
I investigated the metabolic effect of the KB treatment on VAT using bioinformatics methods.
In this thesis, I applied several bioinformatics methods, such as differential gene expression analysis, pathway analysis, RNA splicing analysis and SNP calling, to predict the effect of the KB treatment on VAT. A list of candidate genes, pathways and SNPs was identified, which could provide clues to the genetic mechanism underlying the KB treatment effect. The results of my study show that the KB treatment has a significant effect on VAT.
The digital transformation of higher education demands effective and efficient methods for learning support and assessment of learning processes. This paper relates learning support and assessment to each other in the context of learning management systems. It refers to previous studies carried out in multiple introductory economics courses of the University of Applied Sciences Mittweida, which examine possible connections between the use of digital tests and learning success, and investigate students' acceptance and self-perceived learning success with respect to the web-based portion of a blended course and a purely online-based course. Based on a survey (n = 71) and a quantitative analysis (n = 214) with logging and exam assessment data, the previous work shows that students approached the web-based course portion with rather reserved attitudes. Still, they perceived the individual course elements, namely videos, podcasts, interactive worksheets, online tests, and a comprehensive PDF file, to be beneficial to their learning experience. In particular, we could show a positive correlation between the points students achieved in the online tests and their exam results.
Genetic sequence variations at the level of gene promoters influence the binding of transcription factors. In plants, this often leads to differential gene expression across natural accessions and crop cultivars. Some of these differences are propagated through molecular networks and lead to macroscopic phenotypes. However, the link between promoter sequence variation and the variation of its activity is not yet well understood. In this project, we use the power of deep learning in 728 genotypes of Arabidopsis thaliana to shed light on some aspects of that link. Convolutional neural networks were successfully implemented to predict the likelihood of a gene being expressed from its promoter sequence. These networks were also capable of highlighting known and putative new sequence motifs causal for the expression of genes. We tested our algorithms in various scenarios, including single and multiple point mutations, as well as indels on synthetic and real promoter sequences and the respective performance characteristics of the algorithm have been estimated. Finally, we showed that the decision boundary to classify genes as expressed and non-expressed depends on the sensitivity of the transcriptome profiling assay and changing it has an impact on the algorithm’s performance.
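The first step of such a convolutional model, turning a promoter sequence into a numeric input tensor, is typically a one-hot encoding over the four bases (a minimal sketch; the channel order A, C, G, T and the handling of ambiguous bases are illustrative choices, not taken from the project):

```python
def one_hot_dna(seq):
    # Map each base to a 4-channel indicator vector (order: A, C, G, T);
    # unknown symbols such as N become an all-zero vector.
    alphabet = "ACGT"
    return [[1 if base == a else 0 for a in alphabet] for base in seq.upper()]
```

Convolutional filters sliding over this length-by-4 matrix then act as learnable motif detectors, which is what allows the trained network to highlight causal sequence motifs.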
In this work, a second version of the Python implementation of an algorithm called Probabilistic Regulation of Metabolism (PROM) was created and applied to the metabolic model iSynCJ816 of the organism Synechocystis sp. PCC 6803. A cross-validation was performed to determine the minimal amount of expression data needed to produce meaningful results with the PROM algorithm. The failed reproduction of the results of a method called Integrated and Deduced Regulation of Metabolism (IDREAM) is documented, and causes of the failure are discussed.
Implementation of a customised business model for innovative engineering consultancy services
(2019)
Business development is vital for every organisation that intends to grow. It follows expansion through organic and inorganic means, and there are many innovative business styles that help organisations expand. This thesis shows how an engineering services organisation chose its form of business expansion.
The following thesis explains how an engineering service sector company uses its expertise to expand its business towards the consultancy market, demonstrated with a real-life executed business model.
The thesis provides a solution for the following issues:
1) What is the best in-house strategy to be developed for business expansion in the service industry?
2) How did the niche market experiences help for business expansion?
The aim of this bachelor thesis is to find out how the use of artificial intelligence, specifically in combat situations, can increase the playing time or even the replay value of games in the action role-playing genre. The thesis thereby focuses mainly on combat situations between a player and an artificial intelligence.
To begin with, this bachelor thesis examines the action role-playing genre in order to find a suitable definition for it. Accordingly, action role-playing games involve titles that send the player on a hero’s journey-like adventure in which they must prove their skills in combat against virtual opponents. The greatest challenge of these real-time battles comes from the required quick reflexes, skill queries and hand-eye coordination.
Next, six means of increasing the replayability of a game are explored: Experience and Nostalgia, Variety and Randomness, Goals and Completion, Difficulty, Learning, and Social Aspect. The paper then proceeds to give an explanation for the term Artificial Intelligence and examines the various methods used to create intelligent behavior as well as the general advancement of the research field. Special attention is given to the implementation methods of Finite State Machines and Behavior Trees, as they are the most widely used methods for creating behavioral patterns of virtual characters.
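One of the two implementation methods highlighted above can be made concrete with a minimal sketch. The following Python snippet implements a tiny Finite State Machine for a hypothetical enemy character; the states and events are illustrative assumptions, not taken from any game discussed in the thesis:

```python
# Minimal finite state machine for a hypothetical enemy NPC.
# States and events are illustrative, not from a specific game.

class EnemyFSM:
    # transition table: (state, event) -> next state
    TRANSITIONS = {
        ("idle", "player_spotted"): "chase",
        ("chase", "in_attack_range"): "attack",
        ("attack", "player_fled"): "chase",
        ("chase", "player_lost"): "idle",
        ("attack", "low_health"): "flee",
        ("flee", "healed"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # stay in the current state if no transition is defined
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

npc = EnemyFSM()
npc.handle("player_spotted")   # -> "chase"
npc.handle("in_attack_range")  # -> "attack"
npc.handle("low_health")       # -> "flee"
```

Behavior Trees generalize this idea by composing such reactions hierarchically, which is one reason they scale better to complex characters.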
Finally, a study conducted as part of the bachelor thesis is described, which compares a mathematically balanced artificial intelligence with a behaviorally balanced one in terms of game performance regarding the willingness of test subjects to purchase and play through the game as well as its replay value. The thesis concludes with the findings that while the behavioral approach is more promising than the mathematical approach, a combination of the two methods ultimately leads to the best outcome. Furthermore, the study shows that the use of artificial intelligence to individualize gaming experiences is promising for the future of the gaming industry.
In this paper, we designed, implemented, and tested a special surveillance camera system based on a combination of classical image processing algorithms. The system’s sub-objective is to track experimental vehicles driving on defined trajectories (rails) in real time. Furthermore, it analyzes the scene to collect additional vehicle- and rail-related information. The system then uses the gathered data to reach its main objective, which consists of independently predicting vehicle collisions. Consequently, we propose a hybrid method for detecting and tracking ATLAS vehicles efficiently. To detect the vehicle at the beginning of the video, periodically every n frames, and in case the tracked vehicle has been lost, we used Histogram Back-Projection. By contrast, a Kernelized Correlation Filter is used to track the detected vehicles. Combining these two methods provides one of the best trade-offs between accuracy and speed, even on a single processing core. The proposed method achieves the best performance compared with three different approaches on a custom dataset.
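The detection step can be illustrated with a simplified, pure-Python sketch of histogram back-projection (production code would typically rely on OpenCV's implementation; the grayscale images, bin size, and values below are assumptions for illustration only):

```python
# Simplified grayscale histogram back-projection, the detection idea
# described above. Real systems would use OpenCV's cv2.calcBackProject;
# this pure-Python sketch uses an assumed bin size of 32 gray levels.

BIN_SIZE = 32  # 256 gray levels -> 8 bins (illustrative assumption)

def histogram(pixels):
    """Normalized histogram of a list of gray values (0-255)."""
    hist = {}
    for p in pixels:
        b = p // BIN_SIZE
        hist[b] = hist.get(b, 0) + 1
    n = len(pixels)
    return {b: c / n for b, c in hist.items()}

def back_project(image, target_hist):
    """Score each pixel by how likely its gray value is under the
    target histogram; high scores mark candidate target regions."""
    return [[target_hist.get(p // BIN_SIZE, 0.0) for p in row]
            for row in image]

# Target model: a bright vehicle patch; scene contains dark background.
target = [200, 210, 220, 205]
scene = [[10, 20, 210], [15, 205, 220]]
scores = back_project(scene, histogram(target))  # bright pixels score 1.0
```

Thresholding the score map then yields candidate regions that the tracker can lock onto.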
The loss of photoreceptors is a major cause of visual impairment and blindness, with no cure currently established. Photoreceptor replacement in mouse models of retinal degeneration is currently being investigated as a potential future therapy. To evaluate visual function in mice before and after treatment, two vision-based behavioral tests (optomotor tracking and the light/dark box) were investigated, including their feasibility to distinguish between rod and cone photoreceptor function. Both methods turned out to be an objective and reliable readout of visual ability in wildtype mice and in mice with vision impairment due to retinal degeneration. The capability of the methods to assess slight vision improvements has to be further evaluated.
Therefore, options for improving the established tests and an idea for a new test paradigm have been introduced.
This thesis focuses on the optimization and improvement of IP network and IP transit operations, strategy, and service offerings. To that end, it offers suggestions in different engineering, business, strategy, and operational contexts. The thesis is written in English, as the topic itself is mainly dealt with in English as well. The first part identifies and evaluates methods that are helpful for improving the practical work that is the focus of the second part.
Footage of organoids taken by means of fluorescence microscopy and segmented as well as triangulated by image analysis software such as LimeSeg and Mastodon often needs to be visualized in an aesthetic manner for the presentation of results in scientific papers, talks, and demonstrations. The goal of this work was to create a simple-to-use add-on, “Biobox”, for the open-source 3D visualization package Blender, which allows importing triangulated 3D data animated over time (4D), produced by image analysis software, and optimizing it for efficient usage. “Biobox” offers biologists several visualization tools for the creation of rendered images and animation videos.
The optimization of imported data was performed using Blender’s internal modifiers. The optimized data can then be visualized using several tools built for displaying the organoid in frozen, animated, and semi-transparent manners. A dynamic link for object selection and dynamic data exchange between Blender and Mastodon was developed. Additionally, a user interface was developed for the manual correction of segmentation errors and for steering the object detection algorithms of LimeSeg. The add-on “Biobox” was benchmarked on real scientific data. The benchmark demonstrated that the developed optimizations result in a significant (~5-fold) decrease in RAM usage and accelerate visualization by more than 160 times.
Social media platforms play an increasing role in marketing, politics and police affairs, because they can strongly influence opinions. So called “opinion leaders” exert their influence in a given network and shape the opinions of other users. Identifying central nodes in a social graph has been of interest for decades. However, not all centrality measures were developed for social media platforms. They were built for social graphs, which did not include additional metrics (e.g. “likes”, “shares”). Nevertheless, these metrics play a crucial role on modern platforms. Hence, outdated measures need to be adjusted and additional metrics need to be integrated to ensure the best possible results.
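One straightforward way to integrate such metrics, sketched here as an assumption rather than the specific adjustment developed in the thesis, is to fold weighted engagement counts into a classical degree-style centrality score:

```python
# Degree-style centrality extended with engagement metrics (likes,
# shares). The weighting scheme below is an illustrative assumption,
# not the specific adjustment proposed in the thesis.

def engagement_centrality(followers, posts, like_weight=1.0, share_weight=2.0):
    """Score = follower count plus weighted engagement on the user's
    posts. `posts` is a list of (likes, shares) tuples; shares weigh
    more because they propagate content to new audiences."""
    engagement = sum(like_weight * likes + share_weight * shares
                     for likes, shares in posts)
    return followers + engagement

# A user with few followers but viral posts can outrank a larger,
# quieter account - exactly the effect pure degree centrality misses.
quiet_big_account = engagement_centrality(1000, [(5, 0), (3, 1)])
viral_small_account = engagement_centrality(200, [(400, 300), (250, 150)])
```

With pure follower-count (degree) centrality the ranking would be reversed, which illustrates why the classical measures need adjustment on modern platforms.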
The main purpose of this Bachelor thesis was to find and to compile comprehensive information on barley genes expressed in the context of pollen embryogenesis. In the present study, this approach was confined to genes that were previously known to be associated with the initiation of embryogenesis in different plant species. First, candidate transcript sequences were identified in barley. Second, transcript and associated genomic sequences were analyzed in silico to provide suitable structural and functional annotations. Finally, the results of one representative example are presented and interpreted in detail. This work aims to contribute to a significantly improved understanding of pollen embryogenesis - a biological phenomenon broadly used for haploid technology in crop improvement.
Purpose: The study aims to determine the incentives for German SMEs to offshore their business activities to India and China.
Design: This study is based on a quantitative approach. Primary and secondary data are used in the study. The data were collected from individuals working in different SMEs in Germany with relevant offshoring experience. Theories from articles and peer-reviewed journals, along with relevant books, were consulted throughout the study.
Findings: The findings suggest that the benefits and advantages of an offshoring strategy in India and China are cost efficiency and technology. Moreover, the challenges faced by firms while executing an offshoring strategy are a cultural mix, especially language/cultural barriers, security issues, and loss of market performance.
Originality and Value: The study on the incentives of German SMEs to offshore business activities to India and China provides an understanding of why companies are interested in offshoring to low-cost countries to expand their business, while evaluating the challenges, merits, and demerits of offshoring.
This master thesis was developed based on public information about Linde AG. It analyzed and evaluated macroeconomic factors influencing the performance of the company. Microeconomic and macroeconomic indicators play a central role in the financial management of every global company. Thus, performance measurement is important for understanding the value and extent of the environment. The thesis aims at estimating the extent to which a company may operate on the global market and which factors contribute most to its performance.
First, the thesis examines the theoretical background based on previous research. It defines the specific macroeconomic and microeconomic factors and their role in the company’s performance. Afterwards, the thesis analyses Linde AG’s activities on domestic and foreign markets. The present structure, the current position in the markets, and financial indicators are analyzed. A correlation and regression analysis was carried out with the aim of finding links between the company’s performance and the macroeconomic environment. It is believed that inflation, exchange and interest rates, as well as the stock market index, have a significant influence on Linde’s performance.
The results showed that the inflation rate and the stock market index play a significant role in Linde’s performance. When it comes to exchange rates, however, more data needs to be evaluated in order to draw concrete conclusions.
For monitoring laser beam welding processes and detecting or actively avoiding process defects, acoustic based measurements can be used in addition to optical measurement methods such as pyrometry. To reliably detect process events, it is essential to position the respective sensors in such a way that specific signal characteristics are reproducible and significant. However, there are only few investigations regarding the positioning for airborne sound sensors, especially for the detection of process emissions in the ultrasonic range. Therefore, in this research, the influence of the process distance as well as the angle and orientation of the microphone to a laser beam deep penetration welding process is investigated with respect to the detectability of process emissions in different frequency bands. It is shown that for a wide ultrasonic range a flat sensor angle with respect to the sample surface leads to an increased signal strength of the acoustic emissions compared to steep angles.
For the first time it was discovered that ultraviolet radiation with a wavelength of 200 to 400 nm (maximum 365 nm), radiated from a distance of 40 cm (intensity: 3500 mW/cm²) onto PMMA, altered its surface wettability as well as its nanoscale roughness, which was observed with an atomic force microscope (AFM). The roughness rises and falls again within a short time (1-2 days) after 75 min and 180 min of irradiation. However, during the next 10 days the roughness stabilized, and there was no further influence of UV whether the PMMA was stored in air or in a glass Petri dish.
The volume of digital data is rising day by day, and so is the need for intelligent, automated data processing in daily life. In machine learning, a secure and accurate way to classify data is therefore important, and of utmost importance in certain fields, e.g. in medical data analysis. In order to avoid severe consequences, the accuracy and reliability of the classification are equally important. If a classification is not reliable, instead of accepting a wrongly classified data point, it is better to reject such a data point. This can be done with the help of strategies applied on top of a trained model or included directly in the objective function of the desired training model. In this thesis, we discuss such strategies and analyze the results on several data sets.
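A minimal sketch of the first kind of strategy, a confidence threshold applied on top of a trained model (often called Chow's reject rule), might look as follows; the threshold and class probabilities are illustrative assumptions:

```python
# Reject option on top of a trained probabilistic classifier:
# if the highest class probability falls below a threshold, the
# prediction is rejected rather than risked (Chow's rule).
# The threshold of 0.8 is an illustrative assumption.

def classify_with_reject(class_probs, threshold=0.8):
    """class_probs: dict mapping class label -> predicted probability.
    Returns the predicted label, or None to signal rejection."""
    label, p = max(class_probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else None

# A confident prediction is accepted; an ambiguous one is rejected
# and could be routed to a human expert, e.g. in medical diagnosis.
confident = classify_with_reject({"benign": 0.95, "malignant": 0.05})
uncertain = classify_with_reject({"benign": 0.55, "malignant": 0.45})
```

Embedding the reject cost directly in the training objective, the second kind of strategy, changes the model itself rather than post-processing its outputs.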
Glycans play an important role in the intracellular interactions of pathogenic bacteria. Pathogenic bacteria possess binding proteins capable of recognizing certain sugar motifs on other cells, which are found in glycan structures. Artificial carbohydrate synthesis allows scientists to recreate those sugar motifs in a rational, precise, and pure form. However, due to the high specificity of sugar-binding proteins, known as lectins, to glycan structures, methods for identifying suitable binding agents need to be developed. To tackle this hurdle, the Fraunhofer Institute for Cell Therapy and Immunology (Fraunhofer IZI) and the Max-Planck Institute of Colloids and Interfaces (MPIKG) developed a binding assay for the high throughput testing of sugar motifs that are presented on modular scaffolds formed by the assembly of four DNA strands into simple, branched DNA nanostructures. The first generation of this assay was used in combination with bacteria that express a fluorescent protein as a proof-of-concept. Here, the assay was optimized to be used with bacteria not possessing a marker gene for a fluorescent protein by staining their genomic DNA with SYBR® Green. For the binding assay, DNA nanostructures were combined with artificially synthesized mannose polymers, typical targets for many lectins on the surface of bacteria, presenting them in a defined constellation to bind bacteria strongly due to multivalent cooperativity. The testing of multiple mannose polymers identified monomeric mannose with a 5’-carbon linker and 1,2-linked dimeric mannose with linker as the best binding candidates for E. coli, presumably due to binding with the FimH protein on the surface. Despite similarities between the FimH proteins of E. coli and K. pneumoniae, binding was only observed between E. coli and the different sugar molecules on DNA structures. 
Furthermore, the degree of free movement seemed to affect the binding of mannose polymers to targeted proteins, since an increase in binding could be observed when utilizing a more flexible DNA nanostructure. An alternative to the simple DNA nanostructures described above is the use of larger, more complex DNA origami structures consisting of several hundred strands. DNA origami structures are capable of carrying dozens of modifications at the same time. The results for the DNA origami structure showed a successful functionalization with up to 71 molecules of 1,2-linked dimeric mannose with linker. These results point towards a solution for the high-throughput analysis of potential binding agents for pathogenic bacteria, e.g. as an alternative treatment for antibiotic-resistant bacteria.
Marker-based systems can digitally record human movements in detail. Using the digital biomechanical human model Dynamicus, which was developed by the Institut für Mechatronik, it is possible to model joint angles and their velocities so accurately that they can be used to improve motion analysis in competitive sports or for the ergonomic evaluation of motion sequences. In this paper, we use interpretable machine learning techniques to analyze gait. Here, the focus is on the classification between foot touchdown and drop-off during normal walking. The motion data for training the model are labeled using force plates. We analyze how our machine learning models can be applied directly to new motion data recorded in a different scenario compared to the initial training, more precisely, on a treadmill. We use the properties of the interpretable model to detect drift and to transfer our model if necessary.
Internationalization and business expansion appear to be among the most challenging processes in conducting business today. Every step of the foreign market entry process and the establishment of overseas operations is full of obvious risks and hidden pitfalls. A theoretical background, combined with vital practical experience, plays the key role in such a complicated business process; such information can serve as a guideline for further market entrants and players. At present, Germany, with its well-developed engineering industry, offers a broad space for research into the internationalization process in its different forms and can show both successful and negative outcomes of foreign market entries.
Introducing natural adversarial observations to a Deep Reinforcement Learning agent for Atari Games
(2021)
Deep Learning methods are known to be vulnerable to adversarial attacks. Since Deep Reinforcement Learning agents are based on these methods, they are prone to tiny input data changes. Three methods for adversarial example generation will be introduced and applied to agents trained to play Atari games. The attacks target either single inputs or can be applied universally to all possible inputs of the agents. They were able to successfully shift the predictions towards a single action or to lower the agent’s confidence in certain actions, respectively. All proposed methods had a severe impact on the agent’s performance while producing invisible adversarial perturbations. Since natural-looking adversarial observations should be completely hidden from a human evaluator, the negative impact on the performance of the agents should additionally be undetectable. Several variants of the proposed methods were tested to fulfil all posed criteria. Overall, seven generated observations for two of three Atari games are classified as natural-looking adversarial observations.
In this work, we discuss the key role that “conflict minerals” (Gold, Coltan, Cobalt, Tin, Tungsten) play in global supply chains and high-technology industries, and the issues surrounding their extraction and trade in origin countries, particularly in the African Congo Basin and the Great Lakes Region. We discuss ongoing international efforts to combat violence, child labour and human rights violations at mineral extraction areas, particularly in the Democratic Republic of the Congo (DRC), where very large mineral reserves have been discovered. We present the OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas, and the GOTS MineralTrace mineral proof-of-origin and trade chain certification solution developed by ibes AG in Germany, which automates and simplifies the implementation of the OECD Guidance. We discuss a pilot project in the DRC involving the GOTS GoldTrace application, based on the MineralTrace platform. We point out MineralTrace’s benefits and its limitations. We analyse possible solutions to said limitations, including an analysis of blockchain-based transactional information exchange and record-keeping systems, and finally we propose a new MineralTrace Application Programming Interface (API) that solves current limitations, introduces configuration flexibility for client applications, introduces workflow flexibility to adapt MineralTrace to any country or region, and simplifies data export functionality.
This thesis focuses on the introduction of a process for the fracture toughness testing of epoxy resin systems in the light of the linear elastic fracture mechanics approach. Based on the requirements of ISO 13586, SENB specimens were designed; in particular, the precracking process was analysed and the tapping process was optimized by designing and testing a drop-weight device. After successfully validating the test process using specimens made of Araldite LY556, the influence of GNP loading on the fracture toughness was analysed. The pure epoxy showed a KIc of 0.73 MPa√m, perfectly in line with the manufacturer’s datasheet. A peak fracture toughness of 0.83 MPa√m was achieved at 1 wt% and a loading rate of 10 mm/min, with a decreasing trend as the loading is increased further. As the loading rate is increased, the fracture toughness reduces slightly for 0.5 wt% and 2 wt% GNP, but drops significantly for 1 wt% GNP, obliterating the peak. The load vs. displacement curves showed quasi-brittle material behaviour. The fracture surfaces were analysed using SEM; while the neat resin did not show any features, the reinforced samples showed patterns of crack pinning in connection with bridging and pull-out. The resulting improvement is less significant than observed by other researchers for larger GNPs. This is in line with the general idea that small particles are not able to yield as high improvements, but the significant decrease at higher loading rates has not been observed or described so far. It is suspected that tests at lower loading rates (e.g. 1 or 0.5 mm/min) would show an even higher fracture toughness.
Since its foundation as an application of algebra, coding theory has been gaining importance day by day. For instance, any communication system needs the concepts of coding theory to function efficiently. In this thesis, the reader will find an introductory explanation of linear codes and binary Hamming codes, including some of the algebraic tools devised in their applications. All the described software applications are verified with SageMath 9.0 on Hochschule Mittweida’s JupyterHub.
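As an illustration of the binary Hamming codes covered in the thesis, the classic [7,4] code can be encoded and single-error-corrected in a few lines; this is a standalone Python sketch rather than the SageMath verification used in the thesis:

```python
# The [7,4] binary Hamming code: 4 data bits, 3 parity bits, corrects
# any single-bit error. Bit positions are 1..7; parity bits sit at
# positions 1, 2 and 4, data bits at positions 3, 5, 6 and 7.

def encode(d):
    """d: list of 4 data bits -> list of 7 code bits."""
    c = [0] * 8                      # index 0 unused for 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions with bit 2 set
    return c[1:]

def decode(r):
    """r: list of 7 received bits -> corrected 4 data bits.
    The syndrome equals the 1-based position of a single flipped bit."""
    c = [0] + list(r)
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:                     # non-zero: flip the erroneous bit
        c[syndrome] ^= 1
    return [c[3], c[5], c[6], c[7]]

word = [1, 0, 1, 1]
sent = encode(word)                  # -> [0, 1, 1, 0, 0, 1, 1]
sent[4] ^= 1                         # corrupt one bit in transit
assert decode(sent) == word          # the error is located and corrected
```

The syndrome-as-position trick works because each parity bit covers exactly the positions whose binary representation has the corresponding bit set.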
Different small-molecule kinase inhibitors, which have an influence on cell growth, proliferation, and cell survival, were tested alone and in combination with Erlotinib in the Erlotinib-resistant non-small cell lung cancer cell line PC-9ER and with Cisplatin in the K-Ras mutant cell line H358. The aim was to find out which combinations produce the best antiproliferative effects in non-small cell lung cancer cell lines.
Neural networks have become one of the most powerful algorithms when it comes to learning from big data sets, and they are used extensively for classification. But the deeper the network models, the less interpretable they become. Although many methods exist to explain the output of such networks, the lack of interpretability makes them black boxes. On the other hand, prototype-based machine learning algorithms are known to be interpretable and robust.
Therefore, the aim of this thesis is to find a way to interpret the functioning of neural networks by introducing a prototype layer into the neural network architecture. This prototype layer trains alongside the neural network and helps us interpret the model. We present architectures of neural networks consisting of autoencoders and prototypes that perform activity recognition from heart rates extracted from ECG signals. These prototypes represent the different activity groups that the heart rates belong to and thereby aid interpretability.
Vicia faba leaves and calli were transformed using CRISPR-Cas RNPs. Two kinds of CPP-fused SpyCas9 were used with sgRNA7, sgRNA5 or sgRNA13, targeting PDS exon 1, PDS exon 2 or MgCh exon 3, respectively. RNPs were applied using high-pressure spraying, biolistic delivery, incubation in RNP solution, and infiltration of leaf tissue. A PCR- and restriction-enzyme-based approach was used for the detection of mutations. Screening of 679 E. coli colonies containing the cloned fragments resulted in the detection of 14 mutations. Most of the 14 mutations were deletions of sizes 150, 500 or 730 bp. Five of the 14 mutations were point mutations located two to three bp upstream of the PAM.
In bioinformatics one important task is to distinguish between native and mirror protein models based on structural information. This information can be obtained from the atomic coordinates of the protein backbone. This thesis tackles the problem of distinguishing these conformations by looking at the statistics of the distribution of the dihedral angles of the protein backbone. This distribution is visualized in Ramachandran plots. By means of an interpretable machine learning classification method – Generalized Matrix Learning Vector Quantization – we are able to distinguish between native and mirror protein models with high accuracy. Further, the classifier model supplies supplementary information on the distributional regions important for the distinction, such as α-helices and β-strands.
The Tutte polynomial is an important tool in graph theory. This paper provides an introduction to the two-variable polynomial using the spanning subgraph and rank-generating polynomials. The equivalency of definitions is shown in detail, as well as evaluations and derivatives. The properties and examples of the polynomial, i.e. the universality, coefficient relations, closed forms and recurrence relations are mentioned. Moreover, the thesis contains the connection between the dichromate and other significant polynomials.
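The deletion-contraction recurrence underlying the definitions discussed above can be sketched directly. The following Python snippet computes the Tutte polynomial of a small graph as a dictionary of coefficients; it is a naive exponential-time illustration, not an optimized implementation:

```python
# Deletion-contraction computation of the Tutte polynomial T(x, y):
#   T(G) = y * T(G - e)          if e is a loop
#   T(G) = x * T(G / e)          if e is a bridge
#   T(G) = T(G - e) + T(G / e)   otherwise
# A polynomial is a dict mapping (x-exponent, y-exponent) -> coefficient.

def _connected(edges, a, b):
    """Is vertex b reachable from a using only the given edges?"""
    reach, frontier = {a}, [a]
    while frontier:
        u = frontier.pop()
        for x, y in edges:
            for n in ((y,) if x == u else (x,) if y == u else ()):
                if n not in reach:
                    reach.add(n)
                    frontier.append(n)
    return b in reach

def tutte(edges):
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop: y * T(G - e)
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if not _connected(rest, u, v):               # bridge: x * T(G / e)
        return {(i + 1, j): c for (i, j), c in tutte(merged).items()}
    result = dict(tutte(rest))                   # T(G - e) + T(G / e)
    for key, c in tutte(merged).items():
        result[key] = result.get(key, 0) + c
    return result

# Triangle C3: T(x, y) = x^2 + x + y
assert tutte([(0, 1), (1, 2), (0, 2)]) == {(2, 0): 1, (1, 0): 1, (0, 1): 1}
```

Evaluations of the resulting dictionary at special points then recover, for instance, the number of spanning trees at (1, 1).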
While blockchain technology is still in an early stage of its development, it is already of surging economic importance.
In the literature, blockchain is referred to as either a disruptive, institutional, foundational, or general-purpose technology. There is still no consensus about the economic theory that should apply when analyzing its economic effects. This article draws on use cases from the coffee supply chain to explore which theories could potentially apply to an emerging blockchain economy.
A classical topic in the theory of random graphs is the probability of at least one isolated vertex in a given random graph. An isolated node has a huge impact on social networks, which can be modeled by random graphs. We present a distribution of the number of isolated vertices using the probability generating function. We discuss the relationship between isolated edges and extended cut polynomials as well as extended matching polynomials, using the principle of inclusion-exclusion. We introduce an algorithm based on colored graphs for general graphs and apply it to the components of a graph as well. Finally, we implement the idea on special classes of graphs such as cycles, bipartite graphs, paths, and others. We discuss a recursive procedure based on analogous coloring rules for ladder and fan graphs.
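The classical starting point can be made concrete: in an Erdős–Rényi graph G(n, p) a fixed vertex is isolated with probability (1 − p)^(n−1), so by linearity of expectation the expected number of isolated vertices is n(1 − p)^(n−1). A small sketch with a Monte Carlo sanity check (parameters chosen for illustration):

```python
import random

# In an Erdős–Rényi random graph G(n, p), each of the n-1 potential
# edges at a vertex is absent with probability 1 - p, so
#   P(vertex isolated)        = (1 - p)^(n - 1)
#   E[# isolated vertices]    = n * (1 - p)^(n - 1)

def expected_isolated(n, p):
    return n * (1 - p) ** (n - 1)

def sample_isolated(n, p, rng):
    """Draw one G(n, p) and count its isolated vertices."""
    degree = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:     # include edge {u, v} with prob. p
                degree[u] += 1
                degree[v] += 1
    return sum(d == 0 for d in degree)

rng = random.Random(42)
n, p, trials = 8, 0.2, 20000
estimate = sum(sample_isolated(n, p, rng) for _ in range(trials)) / trials
exact = expected_isolated(n, p)      # 8 * 0.8**7 ≈ 1.678
```

The probability generating function approach mentioned in the abstract refines this expectation into a full distribution of the isolated-vertex count.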
The aim of this master thesis is to describe the key factors of successful energy efficiency projects. In particular, local conditions of such projects in Kazakhstan will be emphasized and a country-specific guideline will be provided at the end. The following topics will be covered in this thesis: energy efficiency technologies, financing, and capacities. The first part examines the energy efficiency approaches and their potential in the local industry. The second part deals with available financing methods, their specific characteristics and appropriateness for overcoming investment barriers in Kazakhstan. The third part of the master thesis concerns necessary project capacities. The application of the three elements for successful project implementation is described at the end.
Not available
This thesis deals with the analysis of the critical success factors for the approval of European industrial products in India, based on a product developed and produced in Europe for the Indian rolling stock industry. The key topics considered in detail cover: Which standards are currently officially used for the approval process in India and in Europe? Recording the current situation. Comparison of the technical approval standards between IR (Indian Railway) standards and
Pulsed laser processing of vacuum component surfaces is a promising method for electron cloud mitigation in particle accelerators. By generating a hierarchically structured surface, the escape probability of secondary electrons is reduced. The choice of laser treatment parameters – such as laser power, scanning speed and line distance – has an influence on the resulting surface morphology as well as on its performance. The impact of processing parameters on the surface properties of copper is investigated by Secondary Electron Yield (SEY) measurements, Scanning Electron Microscopy (SEM), ablation depth measurements in an optical microscope and particle release analysis. Independent of the laser wavelength (532 nm and 1064 nm), it was found that the surface morphology changes when varying the processing parameters. The ablation depth increases and the SEY reduces with increasing laser fluence. The final application requires the capability to treat tens of meters of vacuum pipes. The limiting factors of this type of surface treatment for the applicability in particle accelerators are discussed.
This thesis investigates the efficacy of four machine learning algorithms, namely linear regression, decision tree, random forest and neural network, in the task of lead scoring. Specifically, the study evaluates the performance of these algorithms using datasets without sampling and with random under-sampling and over-sampling using SMOTE. The performance of each algorithm is measured using various performance metrics, including accuracy, AUC-ROC, specificity, sensitivity, precision, recall, F1 score, and G-mean. The results indicate that models trained on the dataset without sampling achieved higher accuracy than those trained on the dataset with either random under-sampling or random over-sampling using SMOTE. However, the neural network demonstrated remarkable results on each dataset compared to the other algorithms. These findings provide valuable insights into the effectiveness of machine learning algorithms for lead scoring tasks, particularly when using different sampling techniques. The findings of this study can aid lead management practices in selecting the most suitable algorithm and sampling technique for their needs. Furthermore, the study contributes to the literature by providing a comprehensive evaluation of the performance of machine learning algorithms for lead scoring tasks. This thesis has practical implications for businesses looking to improve their lead management practices, and future research could extend the analysis to other machine learning algorithms or more extensive datasets.
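The reported metrics can be computed directly from a binary confusion matrix; the sketch below uses hypothetical counts chosen to mimic an imbalanced lead-scoring data set, illustrating why accuracy alone can be misleading:

```python
import math

# Imbalance-aware metrics from a binary confusion matrix, as used to
# compare the sampled and unsampled models above. The counts below are
# hypothetical, chosen to mimic an imbalanced lead-scoring set.

def scores(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # recall on converting leads
    specificity = tn / (tn + fp)          # recall on non-converting leads
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    g_mean = math.sqrt(sensitivity * specificity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1, "g_mean": g_mean}

# A model that mostly ignores the rare positive class still reaches
# 90.5% accuracy here, while its G-mean and F1 expose the weakness.
majority_biased = scores(tp=10, fp=5, fn=90, tn=895)
```

This is exactly the effect that motivates comparing under-sampled and SMOTE-over-sampled training sets against the unsampled baseline.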
The object of this thesis is the material and information flows in the production systems of enterprises, in particular of LLC "Kolibri".
As the subject of the thesis, the improvement of the management of material and related information flows of the company LLC "Kolibri" was chosen.
The purpose of the thesis is theoretical substantiation and development of practical recommendations for the effective management of the flows of material and information resources of the enterprise on the principles of logistics.
We demonstrate a thulium-based fiber amplifier delivering pulses tunable in duration between <120 fs and 2 ps at up to 228 μJ of pulse energy, at a center wavelength of 1940 nm and a 500 kHz repetition rate. Due to its excellent long-term stability, this system proves the ability of this technology to be integrated into ultrafast material processing machines.