Master's Thesis
When entering waterways that are restricted in height, in width, or by another vessel, the behaviour of a ship changes. The most evident effect of navigating in shallow water is squat, which has led to several groundings: because of pressure differences the vessel is pulled down into the water and its trim changes. Another shallow water effect is speed loss due to increased resistance, which can reduce the maximum speed by up to 50 percent. In general the behaviour of a ship in shallow water is said to be sluggish, meaning that it is more difficult to navigate, which affects, among other things, the radius of the turning circle. Sailing parallel to a nearby bank affects the lateral force and the yaw moment. The interaction with other ships has effects similar to bank effects, but is more complex since more parameters play a major role. In this thesis each of these effects is examined by studying several papers by renowned researchers.
Several models are developed which correspond to the inherent model of forces and moments of the simulation program. The challenges and obstacles that arose during modelling and implementation are pointed out, and solutions or approaches are given.
The aim of this master's thesis is to describe the key factors of successful energy efficiency projects. In particular, the local conditions of such projects in Kazakhstan are emphasized, and a country-specific guideline is provided at the end. The following topics are covered: energy efficiency technologies, financing, and capacities. The first part examines energy efficiency approaches and their potential in the local industry. The second part deals with available financing methods, their specific characteristics, and their appropriateness for overcoming investment barriers in Kazakhstan. The third part concerns the necessary project capacities. Finally, the application of the three elements for successful project implementation is described.
Since the expression of the titin-Hsp27 construct and the subsequent purification yielded no satisfactory results, atomic force microscopy measurements could not be carried out. Instead, a structure model for the protein sequence was developed using different bioinformatic methods: template searches with several BLAST runs and freely available software such as SwissModel, Pcons, ModWeb, and other tools. Nevertheless, the generated model is not the native conformation and has to be analyzed further with other software until a stable conformation of the structure can be predicted. Given the time available, the generated model is a good approximation of the aim of this master's thesis.
Proteins are macromolecules that consist of linearly bonded amino acids. They are essential elements in various metabolic processes. The three-dimensional structure of a protein is determined by the order of amino acids, also referred to as the protein sequence. This conformation corresponds to the structural state in which the protein is functionally active. However, the relationships between protein sequence, structure, and function are not yet fully understood. Additionally, information about structural properties, or even the entire protein structure, is crucial for understanding the dynamics that define protein functionality and mechanisms. From this, the role of a protein in its molecular context can be described closely. For instance, interactions can be investigated and comprehended as a dynamic biological network that is sensitive to alterations, i.e. changes caused by diseases. Such knowledge can aid in drug design, where compounds need to be specifically tailored and adjusted to their molecular targets. Protein energy profile-based methods can be applied to investigate protein structures with respect to dynamics and alterations. The publications enclosed with this work discuss the general scientific potential of energy profile-based techniques and algorithms. On the one hand, changes in stability caused by protein mutations and protein-ligand interactions are discussed in the context of energy profiles. On the other hand, energetic relations to protein sequence, structure, and function are elucidated in detail. Finally, the presented discussions focus on recent enhancements of the eProS (energy profile suite) database and toolbox. eProS freely provides all elucidated methodologies to the scientific community, so that biological questions can be addressed with the presented methods at hand. Additionally, eProS provides annotations linked to external databases, ensuring a broad view on biological data and information.
In particular, energetic characteristics can be identified which contribute to a protein’s structure and function.
Many people take part in more than one competition, and the competitions themselves differ in kind, ranging from local events with a small number of participants to international tournaments watched by many viewers. Naturally, a system for assessing and comparing success in various competitions becomes necessary.
Existing ranking systems are usually specialized to fit their application area. More general ranking methods also exist and can be applied to a wide spectrum of competition fields. However, these ranking methods are still not universal and do not cover some important features of competitions.
An entirely new ranking system has been developed within this master's thesis. Its primary purpose is to evaluate and measure the prestige gained by participants in competitions. The main contribution of the thesis is an original mathematical model that makes the ranking system unique.
The developed ranking system claims to be universal and interdisciplinary. It is based on the fundamental element that distinguishes competitive from non-competitive areas, namely standings that rank the participants according to their performance. The universality and interdisciplinarity of the ranking system enable cross-disciplinary comparisons, which are usually very subjective and difficult to implement.
The contribution of the master's thesis extends beyond theory. Ranking software that fully implements this novel ranking system has been designed and developed. The software makes the practical benefits of the ranking system immediately available to potential application areas such as sports clubs and universities.
Finally, the developed ranking system offers a new viewpoint on competitions: as a way of gaining prestige, rather than the traditional viewpoint of demonstrating mastery.
This master's thesis investigates a new method for feature extraction from gray-scale images, the so-called "Non-Euclidean Principal Component Analysis". Here, the standard inner product of Euclidean space is substituted by a semi-inner product in the well-known learning rules of Oja and Sanger. The new method is compared with standard principal component analysis (PCA) by extracting features (feature vectors) from different labeled databases, and is judged by the accuracies achieved with "Border Sensitive Generalized Learning Vector Quantization" (BSGLVQ), "Feed-Forward Neural Networks" (FFNN), and "Support Vector Machines" (SVM).
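The substitution described above can be sketched in a few lines. The following is a minimal illustration of Oja's learning rule (the single-component case; Sanger's rule generalizes it to several components) with the inner product passed in as a parameter, so that the Euclidean dot product can be swapped for another bilinear form. The function name and parameters are illustrative and not taken from the thesis.

```python
import numpy as np

def oja_pca(X, n_epochs=50, lr=0.01, inner=None):
    """Estimate the first principal direction with Oja's learning rule.

    `inner` is the (semi-)inner product used for the projection; the
    Euclidean dot product recovers standard PCA, and substituting a
    different bilinear form sketches the non-Euclidean variant.
    """
    if inner is None:
        inner = np.dot  # standard Euclidean inner product
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = inner(w, x)             # projection onto current direction
            w += lr * y * (x - y * w)   # Oja's rule: Hebbian term + decay
        w /= np.linalg.norm(w)          # keep the direction normalized
    return w
```

With the default Euclidean inner product, the learned direction aligns with the leading eigenvector of the data covariance, which is the sanity check one would run before swapping in a semi-inner product.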
This study shows the potential of make-or-buy theory in several scenarios: production, assembly, and development. These options are evaluated based on Bosch's core competencies, and a decision model is developed to support the decision-making process. Based on these results, series production at RBAC in China is planned and suggestions for setting up the assembly line are given.
This work considers a novelty detection framework by M. Filippone and G. Sanguinetti, which is useful especially when only a few training samples are available. It is restricted to Gaussian mixture models and makes use of information theory by applying the Kullback-Leibler divergence. Two variations of the framework are presented, applying the symmetric Hellinger divergence and a statistical likelihood approach.
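For the single-Gaussian case, both divergences mentioned above have closed forms, which makes the contrast concrete: the KL divergence is asymmetric and unbounded, while the Hellinger distance is symmetric and bounded. The sketch below covers only univariate Gaussians, not the full mixture-model machinery of the framework.

```python
import math

def kl_gauss(mu0, s0, mu1, s1):
    """KL(N(mu0, s0^2) || N(mu1, s1^2)): closed form, asymmetric."""
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def hellinger_gauss(mu0, s0, mu1, s1):
    """Hellinger distance between two univariate Gaussians.

    Symmetric and bounded in [0, 1], which is what motivates it as
    an alternative to the KL divergence.
    """
    h2 = 1.0 - math.sqrt(2 * s0 * s1 / (s0**2 + s1**2)) * math.exp(
        -((mu0 - mu1) ** 2) / (4 * (s0**2 + s1**2)))
    return math.sqrt(h2)
```

Both quantities vanish for identical distributions; only the Hellinger distance is invariant under swapping its arguments.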
For the first time it was discovered that ultraviolet radiation with a wavelength of 200 to 400 nm (maximum at 365 nm), radiated from a distance of 40 cm (intensity: 3500 mW/cm²) onto PMMA, altered its surface wettability as well as its nanoscale roughness, as observed with an atomic force microscope (AFM). After 75 min and 180 min of irradiation, the roughness rises and falls again within a short time (1-2 days). During the next 10 days, however, the roughness stabilized, and there was no influence of UV when the PMMA was stored in air or in a glass Petri dish.
As widely discussed in the literature, spatial patterns of amino acids, so-called structural motifs, play an important role in protein function. The functionally responsible part of a protein often lies in an evolutionarily highly conserved spatial arrangement of only a few amino acids, which are held tightly in place by the rest of the structure. In general, these motifs can mediate various functional interactions, such as DNA/RNA targeting and binding, ligand interactions, substrate catalysis, and stabilization of the protein structure.
Hence, characterizing and identifying such conserved structural motifs can contribute to the understanding of structure-function relationships in diverse protein families. For this reason, and because of the rapidly increasing number of solved protein structures, it is highly desirable to identify, understand, and moreover to search for structurally scattered amino acid motifs. The aim of this work was the development and implementation of a matching algorithm to search for such small structural motifs in large sets of target structures. Furthermore, motif matches were extensively analyzed, statistically assessed, and functionally classified. Following a novel approach, hierarchical clustering was combined with functional classification and used to deduce evolutionary structure-function relationships. The proposed methods were combined and implemented in a feature-rich and easy-to-use command-line software tool, which is freely available and contributes to the field of structural bioinformatics research.
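The core matching problem can be illustrated with a deliberately naive brute-force search: a candidate set of residues matches a motif when all pairwise distances agree within a tolerance. This is only a simplified stand-in for the thesis' matching algorithm, which is not reproduced here; coordinates are assumed to be (n, 3) arrays of, say, C-alpha positions.

```python
import numpy as np
from itertools import combinations, permutations

def motif_matches(motif, structure, tol=0.5):
    """Brute-force spatial motif search (illustrative sketch only).

    A tuple of residue indices is a hit when some ordering of the
    candidate coordinates reproduces the motif's pairwise distance
    matrix within `tol`. Exponential in motif size; real tools
    prune the search heavily.
    """
    m = len(motif)
    ref = np.linalg.norm(motif[:, None] - motif[None, :], axis=-1)
    hits = []
    for idx in combinations(range(len(structure)), m):
        for perm in permutations(idx):
            cand = structure[list(perm)]
            d = np.linalg.norm(cand[:, None] - cand[None, :], axis=-1)
            if np.all(np.abs(d - ref) < tol):
                hits.append(perm)
                break  # one ordering per residue set is enough
    return hits
```

Comparing distance matrices makes the test invariant to rotation and translation of the candidate residues, which is the property any structural motif search needs.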
Protein structures are essential elements in every biological system evolved on earth, where they function as stabilizing elements, signal transducers, or replication machineries. They consist of linearly bonded amino acids, which determine the three-dimensional structure of the protein, and the structure in turn determines the function. The native and biologically active structure of a protein can be understood as the folding state of a polypeptide chain at the global minimum of free energy.
By means of protein energy profiling, an approach derived from statistical physics, it is possible to assign a so-called energy profile to a protein structure. Such an energy profile describes the local energetic interaction features of every amino acid within the structure and introduces an energetic point of view on proteins, instead of a structural or sequential one.
This work aims to offer a perspective on the question of how pattern information may be gained from energy profiles. The concrete subjects are energy-mapped Pfam family alignments and investigations on finding motifs or patterns in discretized energy profile segments.
Proteins are involved in almost every aspect of life, mediating a wide range of cellular tasks. The protein sequence dictates the spatial arrangement of the residues and thus ultimately the function of a protein. Huge effort is put into cumbersome structure elucidation experiments, which yield models describing the observed spatial conformation of a protein, enabling users to predict its function, to understand its mode of action, or to design tailored drugs to cure diseases caused by misfolded or misregulated proteins.
However, the results of structure determination experiments are merely models of reality, made under simplifying assumptions and sometimes containing major undetected errors. Moreover, such experiments are resource-demanding and cannot meet the actual demand.
Thus, scientists are predicting the structure of proteins in silico, resulting in models that are even more prone to error.
In consequence, structural biologists search for a practicable definition of structure quality, and over the last two decades several model quality assessment programs have emerged, measuring the local and global quality of particular structures. Seven representatives were studied regarding the paradigms they follow and the features they use to describe the quality of residues. Their predictions were compared, showing that there is almost no common ground among the tools.
Is there a way to combine their statements anyway?
Finally, the accumulated knowledge was used to design a novel evaluation tool, addressing the problems previously identified. High quality of its predictions as well as superior usability were key. The strategy was compared to existing approaches and evaluated on suitable datasets.
This thesis analyzes the critical success factors for the approval of European industrial products in India, based on a product developed and produced in Europe for the Indian rolling stock industry. The main topics, considered in detail, are: Which standards are currently officially used for the approval process in India and in Europe? Assessment of the current situation. Comparison of the technical approval standards between IR (Indian Railways) standards and
The almost complete transcription of the human genome has yielded a high number of transcripts that do not encode proteins. However, the functional elucidation of long non-coding RNAs in particular is still difficult. Secondary structure analysis is assumed to be a possible method to detect functional relationships of lncRNAs on a large scale, but it is still time-consuming and error-prone. GRAPHCLUST, currently the most suitable clustering tool based on RNA secondary structure analysis, mainly lacks an efficient method for interpreting its results. Hence, an independent and interactive RNA clustering interpretation tool was developed to allow visualisation and efficient analysis of RNA clustering results.
A variety of methods have been used to describe natural systems and cellular functions. Most use continuous systems with differential equations. Based upon the neighbourhood relations in graphs and the complex interactions in cellular automata, a mathematical model was designed and implemented with an application user interface. This discrete approach, called graph automata, was utilised to simulate diffusion processes and chemical kinetics. The progression of diffusion in cellular environments was described and resulted in a discrepancy of 20% compared to experimental results. Different chemical kinetics were simulated and found to be as accurate as their continuous counterparts. The proposed model appears to be a highly scalable and modular approach to simulating natural systems.
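The idea of discrete diffusion on a graph can be sketched with a single Laplacian-based update step: each node exchanges a fraction of its concentration difference with every neighbor, which conserves total mass. This is a minimal generic stand-in, not the thesis' actual graph-automaton rule, which is not specified here.

```python
import numpy as np

def diffusion_step(adj, state, rate):
    """One discrete diffusion step on a graph (illustrative sketch).

    `adj` is a symmetric adjacency matrix, `state` the per-node
    concentration, `rate` the fraction exchanged per neighbor pair
    and step. The update is state - rate * L @ state with the graph
    Laplacian L = D - A, so total mass is conserved.
    """
    state = np.asarray(state, dtype=float)
    degree = adj.sum(axis=1)
    laplacian = np.diag(degree) - adj
    return state - rate * laplacian @ state
```

Iterating the step relaxes any initial distribution toward a uniform one on a connected graph (for a sufficiently small rate), mirroring continuous diffusion in the discrete setting.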
This thesis investigated the generation of laser induced periodic surface structures (LIPSS) using femtosecond laser irradiation at a central wavelength of 775 nm.
The metals stainless steel and copper, as well as a semiconducting thin film (ITO on a glass substrate), were investigated. The impact of the processing parameters was studied for single- and multiple-pulse irradiation to determine the ablation threshold of the materials and the different types of LIPSS. These observations allowed the optimisation of area structuring with regard to processing speed and LIPSS quality.
The feasibility of LIPSS generation under dynamic, real-time polarisation control was then explored. Using a fast-response liquid-crystal polarisation rotation device, the direction of the linear polarisation of the laser beam could be dynamically controlled and synchronised with the scanning during laser processing. As a result, a range of complex micro- and nano-scale patterns with orthogonal LIPSS directions was created. The samples were analysed using optical and electron microscopy. The orientation of the LIPSS was also determined from detection of light diffracted by the LIPSS.
Finally, two applications of large-area LIPSS patterning were demonstrated: information encoding on metals and periodic structuring of a thin conducting oxide film for solar cells.
This master's thesis was written in cooperation with the Spanish company sí-internships. The main objective of this work is to develop an effective promotion strategy for this startup while spending as few financial resources as possible. To this end, extensive research on the current internal, external, and integral market situation follows. Building on the results of this analysis, promotional objectives are determined and a target audience is chosen. Next, a promotion strategy is established.
Cancer is one of the main causes of death in developed countries, and cancer treatment heavily depends on successful early detection and diagnosis. Tumor biomarkers are helpful for early diagnosis. The goal of this discovery method is to identify genetic variations as well as changes in gene expression or activity that can be linked to a typical cancer state.
First, several cancer gene signaling pathways were introduced and then combined, and 27 candidate genes were selected. Through the analysis of several data sets in the GEO database, a few expression difference matrices were established. The candidate genes were tested against these matrices, and five genes, PLA1A, MMP14, CCND1, BIRC5, and MYC, were found to have the potential to be tumor biomarkers. Two of these genes are discussed further: PLA1A is a potential biomarker for prostate cancer, and MMP14 can be considered a biomarker for NSC lung cancer.
Finally, the significance of this study and the potential value of the two genes are discussed, and future research in this direction is outlined.
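The kind of screening that shortlists candidate genes from expression matrices can be sketched with a per-gene Welch's t-statistic over case-vs-control samples. This is a generic stand-in for differential-expression analysis, not the thesis' actual GEO pipeline; the function name and matrix layout are illustrative assumptions.

```python
import numpy as np

def welch_t(expr_case, expr_ctrl):
    """Welch's t-statistic per gene (rows = genes, columns = samples).

    Ranking genes by |t| is a minimal sketch of the screening used
    to shortlist biomarker candidates from expression matrices.
    """
    m1, m2 = expr_case.mean(1), expr_ctrl.mean(1)
    v1 = expr_case.var(1, ddof=1) / expr_case.shape[1]   # per-gene SEM^2, case
    v2 = expr_ctrl.var(1, ddof=1) / expr_ctrl.shape[1]   # per-gene SEM^2, control
    return (m1 - m2) / np.sqrt(v1 + v2)
```

A gene that is strongly shifted between the two groups gets a large |t|, while a gene with identical expression in both groups gets t = 0, so sorting by the statistic separates candidates from background.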
Large bone defects are a major clinical problem affecting the elderly disproportionately, particularly in developed countries where this population is the fastest growing. Current treatments include autologous and allogeneic bone grafts, bone elongation with the Ilizarov technique, bone graft substitutes, and electrical stimulation. Each of these approaches enjoys varying degrees of success; however, each also has its associated problems and complications. A new, still experimental treatment is tissue engineering, which combines scaffolds, osteogenic stem cells, and growth factors, and is showing encouraging early results in preclinical and initial clinical studies.
Electrical stimulation has been shown to enhance bone healing by promoting mesenchymal stem cell migration, proliferation, and differentiation. In the present study we combine tissue engineering with electrical stimulation and hypothesize that this combined approach will have a synergistic effect resulting in enhanced new bone formation. In our in vitro experiments we observed that the levels of electrical stimulation we tested had no cytotoxic effect and instead increased osteogenic differentiation, as determined by enhanced expression of the osteogenic marker alkaline phosphatase. These findings support our hypothesis by demonstrating that in the tissue-engineering environment electrical stimulation promotes bone formation. The bioinformatics part of this project consisted of gene network analysis, identification of the top 10 osteogenic markers, and analysis of gene-gene interactions. We observed that, in studies of stem cells from both human and rat, the genes BMPR1A, BMP5, TGFßR1, SMAD4, SMAD2, BMP4, BMP7, RUNX3, and CDKN1A are associated with osteogenesis and interact with each other. We observed a total of 31 interactions for human and 29 interactions for rat stem cells. While this approach needs to be proven experimentally, we believe that these in vitro and in silico analyses can complement each other and, in doing so, contribute to the field of bone healing research.
Classification of time series has received a substantial amount of interest over the past years due to many real-life applications, such as environmental modeling, speech recognition, and computer vision.
In my thesis, I focus on the classification of time series with LVQ classifiers. To learn a classifier, we need a training set. In our case, every data point in the training set contains a sequence (an ordered set) of feature vectors. Thus, the first task is to construct a new feature vector (or matrix) for each sequence.
Inspired by [2], I use Hankel matrices to construct the new feature vectors. This choice comes from the basic assumption that each time series is generated by a single or a set of unknown linear time-invariant (LTI) systems.
After generating new feature vectors with Hankel matrices, I use two approaches to learn a classifier: Generalized Learning Vector Quantization (GLVQ) and the median variant of Generalized Learning Vector Quantization (mGLVQ).
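The Hankel construction itself is straightforward and can be sketched directly; the function name and the choice of derived features are illustrative, not taken from the thesis.

```python
import numpy as np

def hankel_features(series, rows):
    """Build the Hankel matrix of a 1-D time series.

    Row i holds series[i : i + cols], so each column is a window of
    consecutive samples. For data generated by a low-order LTI
    system the matrix has low rank, so e.g. its leading singular
    values can serve as a compact feature vector for LVQ training.
    """
    series = np.asarray(series, dtype=float)
    cols = len(series) - rows + 1
    H = np.empty((rows, cols))
    for i in range(rows):
        H[i] = series[i:i + cols]
    return H
```

The low-rank property is what links the construction to the LTI assumption: a geometric series (the output of a first-order system) yields a rank-1 Hankel matrix regardless of its length.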
Stability of control systems is one of the central subjects in control theory. The classical asymptotic stability theorem states that the norm of the residual between the state trajectory and the equilibrium tends to zero in the limit. Unfortunately, it does not in general allow computing a concrete rate of convergence, particularly due to algorithmic uncertainty related to the numerical imperfections of floating-point arithmetic. This work proposes to revisit asymptotic stability theory with the aim of computing convergence rates using constructive analysis, a mathematical tool that realizes an equivalence between certain theorems and computational algorithms. Consequently, it also offers a framework that allows controlling numerical imperfections in a coherent and formal way. The overall goal of the current study also matches the trend of introducing formal verification tools into control theory. Besides existing approaches, the constructive analysis suggested within this work can also be considered for formal verification of control systems. A computational example is provided that demonstrates the extraction of a convergence certificate for exemplary dynamical systems.
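To make the notion of a convergence rate concrete, the sketch below empirically bounds the contraction factor of a discrete-time system converging to the origin. This is only a numerical illustration of what a convergence certificate asserts, not the constructive-analysis machinery of the thesis; the function and its parameters are illustrative.

```python
def convergence_certificate(step, x0, n_steps=50):
    """Empirically estimate a linear contraction factor |x_{t+1}| / |x_t|.

    `step` is the (scalar) system map, assumed to contract toward 0.
    The returned value is the worst observed per-step ratio; a true
    certificate would bound this analytically rather than sample it.
    """
    x = x0
    rates = []
    for _ in range(n_steps):
        x_next = step(x)
        if x == 0:
            break  # reached the equilibrium exactly
        rates.append(abs(x_next) / abs(x))
        x = x_next
    return max(rates)
```

For the linear map x -> a*x the observed factor is exactly |a|, so the norm of the residual after t steps is bounded by |a|**t * |x0|, which is the kind of explicit rate the classical asymptotic statement does not provide.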
A common updating rule for the k-means and Neural Gas algorithms can be obtained by using a generalized expectation-maximization method. This result is used to derive two variants of these methods. The use of a similarity measure, specifically the Gaussian function, provides another clustering alternative to the aforementioned methods. The main benefit of using the Gaussian function is that it inherently looks for a common cluster center for similar data points (depending on the value of the parameter s). In different experiments we report similar behaviour of the batch and proposed variants. We also show some useful results for the "alternative" similarity method, specifically when there is no clue about the number of clusters in the data sets.
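The role of the Gaussian similarity can be sketched as one batch update in which prototypes are moved to similarity-weighted means. As the width parameter shrinks, the weights collapse to hard nearest-prototype assignments and the step reduces to batch k-means; a finite width lets similar data points pull prototypes toward a common center. This is an illustrative sketch, not the thesis' derived EM update.

```python
import numpy as np

def gaussian_batch_update(X, W, sigma):
    """One batch prototype update with Gaussian similarity weights.

    X: (n, d) data, W: (k, d) prototypes, sigma: similarity width
    (playing the role of the parameter s in the text). Returns the
    updated prototypes as weighted means of the data.
    """
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)  # squared distances
    h = np.exp(-d2 / (2 * sigma**2))                     # Gaussian similarity
    h /= h.sum(axis=1, keepdims=True)                    # per-point responsibilities
    return (h.T @ X) / h.sum(axis=0)[:, None]            # weighted means
```

With well-separated clusters and a small sigma, one update already places each prototype at its cluster mean, matching the batch k-means behaviour described above.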
The endogenous steroid hormone 17β-estradiol is a central player in a wide range of physiological and behavioral processes and diseases in vertebrates. As a consequence, it is a main target for molecular design and drug discovery efforts in medicine and the environmental sciences, which requires in-depth knowledge of protein-ligand binding processes. This work develops a bioinformatic framework based on local and global structure similarity for the characterization of E2-protein interactions in all 35 publicly available three-dimensional structures of estradiol-protein complexes. Subsequently, the gained data are used to identify four geometrically conserved estradiol-binding residue motifs, against which the Protein Data Bank is queried. This database query returns 15 hits in seven protein structures. Five of these structures do not contain E2 as a ligand and had thus not been included in this work's initial data set. One of the newly detected structures is structurally and functionally dissimilar, as well as evolutionarily distant, from all other proteins analyzed in this work. Nevertheless, the ability of this protein to actually bind estradiol must be analyzed further. Finally, geometrically conserved E2-protein interactions are identified, and a new research direction using these conserved interaction ensembles for the detection of novel estradiol targets is proposed.
Going green, environmental protection, eco-friendliness, sustainability, and sustainable development have become frequent terms in everyone's life. The negative impact of human activities, causing increased environmental pollution and decline, is a matter of dire concern nowadays, and in the last few decades greater attention has been paid to these issues. Understanding society's new concerns, more and more companies have begun to shift their behaviour toward a more eco-friendly and responsible one. Green marketing is an emerging area of interest and a tool of modern marketing used by companies in various industries. It is a full-service marketing strategy that includes green marketing plan development, sustainable auditing and planning, branding, design, and communication. An effective, authentic, and transparent green presentation provides a company with a chance to assert itself successfully on the market, communicate core company values, and build long-term customer relations. The young and innovative company SWOX Surf Protection, which entered the market with a long-lasting waterproof sunscreen particularly designed for surfers and snowboarders, wants to foster growth by expanding its existing target group to a broader segment comprising all outdoor enthusiasts. Moreover, the brand strives to become the leading sunscreen manufacturer for outdoor sports and wants to position itself as a lifestyle brand. In 2016 the company started to produce "greener" sunscreen tubes, with a launch imminent. Since surfers, snowboarders, and outdoor enthusiasts in particular are in close contact with nature and spend a lot of time in the sun, it is assumed that they have a particular interest in using sunscreen for health-related reasons, while at the same time showing increased commitment to environmental protection. In this context, it is assumed that a holistically green and organic sunscreen could provide added value.
This paper examines whether green marketing could be a relevant strategy for SWOX Surf Protection to differentiate itself from its competitors, attract potential customers, build long-term customer relations, and, as a result, position itself as a successful sunscreen lifestyle brand in the market. This is verified through a comprehensive literature review and detailed market research.
Brassica oleracea, like all cruciferous plants, has a defense mechanism against natural enemies based on chemical compounds formed from the enzymatic degradation of glucosinolates. In the presence of epithiospecifier proteins (ESP), the hydrolysis of glucosinolates forms epithionitriles or nitriles, depending on the glucosinolate structure. This research showed that three predicted ESP sequences taken from the NCBI database play a role in the enzymatic hydrolysis of glucosinolates in Brassica oleracea.
Massive multiple-input multiple-output (MIMO), a technique in which the base station of a mobile radio cell is equipped with a large number of antennas, is currently considered a promising key technology for meeting the requirements of future fifth-generation wireless communication networks. However, the confident statements about the performance of such systems rest on a theoretical assumption that has so far hardly been verified in practice: that the wireless transmission channels of different users are mutually independent due to the high number of antennas, i.e. that so-called favorable propagation conditions hold. This master's thesis investigates these novel systems from two different perspectives.
In the first part of this work, the influence of realistic propagation conditions on the performance of massive MIMO systems is evaluated. To this end, corresponding numerical system simulations are carried out and compared with the results of practical massive MIMO measurement campaigns.
The investigations show that the so-called favorable propagation conditions can only be observed to a limited extent in realistic environments. Traditional channel models therefore lead to an inaccurate estimate of the performance of practical massive MIMO systems. To address this problem, a novel parameterization of the traditional Kronecker model is proposed, so that relevant characteristics of realistic channels are accurately reflected by this model.
Subsequently, various channel estimation methods for massive MIMO systems are examined under the different channel models by means of numerical simulations. The experiments show that estimation methods derived specifically for massive MIMO under the assumption of favorable propagation conditions suffer a significant performance degradation under realistic channel models.
The second part of this work focuses on the application of massive MIMO systems in so-called Internet of Things (IoT) networks. The typically high number of active IoT devices makes efficient scheduling algorithms necessary. Therefore, a downlink scheduling algorithm is presented that exploits the properties of massive MIMO systems and the typical data-rate requirements of IoT devices. Specifically, it is proposed to divide the IoT users into groups and to serve the different groups one after another. The group size is derived using asymptotic properties of massive MIMO systems.
To select the group members, a modified version of the popular semi-orthogonal user selection (SUS) algorithm is proposed. The subsequent numerical simulations confirm that the modified version of SUS eliminates the drawbacks of the original algorithm, which in turn leads to improved data rates in the considered system.
This master's thesis was developed based on public information about Linde AG. It analyzes and evaluates macroeconomic factors influencing the performance of the company. Microeconomic and macroeconomic indicators play a central role in the financial management of every global company; thus, performance measurement is important for understanding the value and extent of the environment. The thesis aims at estimating the extent to which a company may operate on the global market and which factors contribute the most to its performance.
Firstly, the thesis examines the theoretical background based on previous research. It defines the specific macroeconomic and microeconomic factors and their role in a company's performance. Afterwards, the thesis analyses Linde AG's activities on domestic and foreign markets. The present structure, the current market position, and financial indicators are analyzed. Correlation and regression analyses were conducted with the aim of finding links between the company's performance and the macroeconomic environment. It is hypothesized that inflation, exchange and interest rates, as well as the stock market index, have a significant influence on Linde's performance.
The results showed that the indicators of inflation rate and stock market index play a significant role in the Linde’s performance. Thus, when it comes to exchanging rates, more data needs to be evaluated in order to derive concrete conclusions.
Obesity is a major public health issue in many countries and its development leads to many severe conditions. Adipose tissue (AT), simply called fat, occurs in different depots; in males, visceral adipose tissue (VAT) is dominant. Estrogens play an important role in many pathological processes.
In this study, one subtype of the estrogen receptor, ER-beta, was activated in VAT by treatment with KB, a specific ligand. I investigated the metabolic effect of this treatment on VAT using bioinformatics methods.
Several bioinformatics methods, such as differential gene expression analysis, pathway analysis, RNA splicing analysis and SNP calling, were applied to predict the effect of KB treatment on VAT. A list of candidate genes, pathways and SNPs was identified, which could provide clues to the genetic mechanism underlying the treatment effect. The results of my study show that KB treatment of VAT has a significant effect.
This thesis focuses on the introduction of a process for the fracture toughness testing of epoxy resin systems, in the light of the linear elastic fracture mechanics approach. Based on the requirements of ISO 13586, SENB specimens were designed; in particular, the pre-cracking process was analysed and the tapping process was optimised by designing and testing a drop-weight device. After successfully validating the test process using specimens made of Araldite LY556, the influence of GNP loading on the fracture toughness was analysed. The pure epoxy showed a KIc of 0.73 MPa√m, perfectly in line with the manufacturer's datasheet. A peak fracture toughness of 0.83 MPa√m was achieved at 1 wt% and a loading rate of 10 mm/min, showing a decreasing trend as the loading is increased further. As the loading rate is increased, the fracture toughness reduces slightly for 0.5 wt% and 2 wt% GNP, but drops significantly for 1 wt% GNP, obliterating the peak. The load vs. displacement curves showed quasi-brittle material behaviour. The fracture surfaces were analysed using SEM; while the neat resin did not show any features, the reinforced samples showed patterns of crack pinning in connection with bridging and pull-out. The resulting improvement is less significant than observed by other researchers for larger GNPs. This is in line with the general idea that small particles cannot yield improvements as high, but the significant decrease at higher loading rates has not been observed or described so far. It is suspected that tests at lower loading rates (e.g. 1 or 0.5 mm/min) would show an even higher fracture toughness.
This study presents an analysis of the coverage by the newspapers El País (Spain), Folha de S. Paulo (Brazil) and Süddeutsche Zeitung (Germany) of the protests in Brazil against the 2013 Confederations Cup and the 2014 FIFA World Cup, in order to compare them and see which topics were emphasized by the newspapers and which tone they used in their reporting. Based on the research questions, four categories were developed for the analysis: article structure; topic of the article; actors/group of persons; and tone of the reporting, each composed of several subcategories. It was concluded that the themes highlighted by the European newspapers differed from those stressed in the Brazilian daily. Nonetheless, all the reviewed newspapers provided neutral coverage of the protests.
No abstract available.
Path decomposition of a graph has received a considerable amount of interest over the past decades because of its applications in algorithmic graph theory and in real-life problems. For the computation of a path decomposition of small width, different heuristic approaches are used; one of the most useful methods is due to Bodlaender and Kloks. In this thesis, we focus on the computation, applications, transformation and approximation of path decompositions of small width.
It is easy to convert a path decomposition into a nice path decomposition of the same width, which is more convenient for computing graph parameters such as independent sets and chromatic polynomials. Inspired by [28], we give an algorithm to compute the chromatic polynomial of a graph via a nice path decomposition of small width.
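The thesis's own algorithm is not reproduced in the abstract, but the general idea of dynamic programming over a nice path decomposition can be sketched. The following sketch counts proper k-colourings by sweeping over introduce/forget steps; the encoding of the decomposition as ('intro', v)/('forget', v) operations is an assumption made for illustration. Evaluating the count at enough values of k recovers the chromatic polynomial by interpolation.

```python
def count_colorings(ops, edges, k):
    """Count proper k-colourings via a nice path decomposition.

    ops: sequence of ('intro', v) / ('forget', v) steps; every vertex is
    introduced and later forgotten, and both endpoints of every edge share
    a bag at some point (the standard path-decomposition property).
    """
    edge_set = {frozenset(e) for e in edges}
    # DP state: colouring of the current bag -> number of consistent extensions
    states = {frozenset(): 1}
    for op, v in ops:
        new = {}
        if op == 'intro':
            for assign, cnt in states.items():
                for c in range(k):
                    # colour v, respecting edges to vertices still in the bag
                    if any(frozenset((v, u)) in edge_set and cu == c
                           for u, cu in assign):
                        continue
                    key = assign | {(v, c)}
                    new[key] = new.get(key, 0) + cnt
        else:  # 'forget': project v out of the bag and merge counts
            for assign, cnt in states.items():
                key = frozenset(p for p in assign if p[0] != v)
                new[key] = new.get(key, 0) + cnt
        states = new
    return sum(states.values())

# path graph 0-1-2 (pathwidth 1); its chromatic polynomial is k (k-1)^2
ops = [('intro', 0), ('intro', 1), ('forget', 0),
       ('intro', 2), ('forget', 1), ('forget', 2)]
values = [count_colorings(ops, [(0, 1), (1, 2)], k) for k in range(5)]
```

The number of DP states is bounded by k to the power of the bag size, which is why a decomposition of small width is essential.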
In this master thesis, we define a new bivariate polynomial which we call the defensive alliance polynomial and denote it by da(G; x; y). It is a generalization of the alliance polynomial and the strong alliance polynomial. We show the relation between da(G; x; y) and the alliance, the strong alliance, the induced connected subgraph polynomials as well as the cut vertex sets polynomial. We investigate information encoded about G in da(G; x; y). We discuss the defensive alliance polynomial for the path graphs, the cycle graphs, the star graphs, the double star graphs, the complete graphs, the complete bipartite graphs, the regular graphs, the wheel graphs, the open wheel graphs, the friendship graphs, the triangular book graphs and the quadrilateral book graphs. Also, we prove that the above classes of graphs are characterized by its defensive alliance polynomial. We present the defensive alliance polynomial of the graph formed of attaching a vertex to a complete graph. We show two pairs of graphs which are not characterized by the alliance polynomial but characterized by the defensive alliance polynomial.
Also, we present three notes on results in the literature. The first one is improving a bound and the other two are counterexamples.
In the following study we evaluated how a simple autoencoder can be used to train a Generalized Learning Vector Quantization (GLVQ) classifier. Specifically, we proved that the bottleneck of an autoencoder serves as an "information filter" which tries to best represent the desired output in that particular layer in the statistical sense of mutual information.
The autoencoder model was trained on a purely unsupervised task and leveraged the advantages of learned feature representations. As a result, the model achieved a significant accuracy value. Implementation and tuning of the model were carried out using TensorFlow [1].
An additional study was dedicated to improving the traditional GLVQ algorithm taken from sklearn-lvg [2] using the bottleneck of an autoencoder.
The study revealed the potential of autoencoder bottlenecks as a pre-processing tool for improving the accuracy of GLVQ. Specifically, the model using bottleneck features reached about 75% accuracy, compared to about 62% for the original GLVQ. Consequently, the research exposed the need for further improvement of the model for the present problem case.
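The thesis's TensorFlow implementation is not available from the abstract; the idea of using bottleneck codes as input to a prototype classifier can nevertheless be sketched. Below is a pure-NumPy sketch: a linear autoencoder trained by gradient descent, followed by a nearest-prototype classifier (a GLVQ-style decision rule without prototype training) on the codes. The synthetic data and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: two well-separated Gaussian classes in 10-D
n, d, h = 200, 10, 2
y = rng.integers(0, 2, n)
centers = np.array([[3.0] * d, [-3.0] * d])
X = centers[y] + rng.normal(size=(n, d))

# linear autoencoder X -> h -> X, trained by plain gradient descent on MSE
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, d))
lr = 1e-3
for _ in range(500):
    Z = X @ W1                    # bottleneck codes
    G = 2 * (Z @ W2 - X) / n      # gradient of MSE w.r.t. the reconstruction
    W2 -= lr * Z.T @ G
    W1 -= lr * X.T @ (G @ W2.T)

# nearest-prototype classification in code space: one prototype per class
Z = X @ W1
protos = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - protos[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

The point of the sketch is the division of labour: the autoencoder is trained without labels, and only the low-dimensional codes are handed to the prototype classifier.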
Community-acquired pneumonia (CAP) is a very common, infectious and sometimes lethal disease, and is therefore associated with high costs of diagnosis and treatment. To actually reduce health-care costs in this area, diagnosis and treatment must become cheaper to conduct with no loss in predictive accuracy. One effective way of doing so would be the identification of easily detectable and highly specific transcriptomic markers, which would reduce the amount of laboratory work required through possibly enhanced diagnostic capability.
Transcriptomic whole-blood data derived from the PROGRESS study was combined with several documented features such as age, smoking status or the SOFA score. The analysis pipeline included processing by self-organizing maps for dimensionality and noise reduction, as well as diffusion pseudotime (DPT). Pseudotime enabled modelling a disease course of CAP, where each sample represented a state/time in the modelled course. Both methods combined resulted in a proposed disease course of CAP, described by 1476 marker genes. An additional gene-set analysis also provided information about the immune-related functions of these marker genes.
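The PROGRESS data and the full SOM/DPT pipeline are not reproduced here, but the self-organizing-map step used for noise and dimensionality reduction can be sketched. The following is a minimal online SOM on synthetic data; the grid size, learning-rate schedule and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))            # stand-in for expression profiles

gw, gh = 4, 4                            # 4x4 map of prototype units
W = rng.normal(scale=0.5, size=(gw * gh, X.shape[1]))
coords = np.array([(i, j) for i in range(gw) for j in range(gh)], float)
W0 = W.copy()                            # keep the untrained map for comparison

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(1))      # best-matching unit
    frac = 1 - t / n_iter
    sigma = 2.0 * frac + 0.3                    # shrinking neighbourhood radius
    lr = 0.5 * frac + 0.01                      # decaying learning rate
    d2 = ((coords - coords[bmu]) ** 2).sum(1)
    hfunc = np.exp(-d2 / (2 * sigma ** 2))      # neighbourhood function on grid
    W += lr * hfunc[:, None] * (x - W)

def quant_error(M):
    """Mean distance from each sample to its closest prototype."""
    b = np.argmin(((X[:, None] - M[None]) ** 2).sum(-1), axis=1)
    return np.linalg.norm(X - M[b], axis=1).mean()

# noise reduction: each sample is represented by its BMU prototype
bmus = np.argmin(((X[:, None] - W[None]) ** 2).sum(-1), axis=1)
qe0, qe = quant_error(W0), quant_error(W)
```

Downstream steps such as diffusion pseudotime would then operate on the prototype representation rather than the raw samples.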
Soft Learning Vector Quantisation (SLVQ) and Robust Soft Learning Vector Quantisation (RSLVQ) are supervised data classification methods that have been applied successfully to real-world classification problems. The performance of SLVQ and RSLVQ, however, degrades when they are applied to more complicated classification problems. In this thesis, we introduce modifications to SLVQ and RSLVQ in order to obtain more capable versions of them. Several possibilities to modify SLVQ and RSLVQ are considered; some of them are not successful enough and have been included for the sake of completeness. The fruits of the thesis are plenty, including Tangent Soft Learning Vector Quantisation-Strong (TSLVQ-S), together with its more stable version Tangent Robust Soft Learning Vector Quantisation-Strong (TRSLVQ-S), Attraction Soft Learning Vector Quantisation (ASLVQ) and Grassmannian Soft Learning Vector Quantisation (GSLVQ).
Internationalization and business expansion appear to be among the most challenging processes in conducting business today. Every step of the foreign-market entry process and of establishing overseas operations is full of obvious risks and hidden pitfalls. Theoretical background, combined with vital practice, plays the key role in such a complicated business process; such information can serve as a guideline for further market entrants and players. At present, Germany with its well-developed engineering industry represents a broad space for researching the internationalization process in its different forms, and can show both successful and negative results of foreign-market entries.
FUSO is one of Japan's leading manufacturers of trucks and buses and an integral part of Daimler AG. As a large manufacturer of trucks and buses, FUSO faces marketing problems due to corrosion issues. Corrosion is one of the major causes of breakdowns and degraded vehicle performance. To counter this issue, FUSO initiated a new project called the "Anti-Corrosion Project". The main mission of this project is to improve the corrosion resistance of the metal parts. Currently, almost 70 percent of FUSO's parts fall under Grade III, i.e. less than one year of corrosion resistance.
In this project, corrosion issues are collected through different types of audits, from customers as well as from examining two-year-old vehicles in the worst conditions. The listed corrosion issues are then investigated against the current specification, and new proposals are requested from suppliers. The cost of a proposed solution is estimated internally and negotiated with the supplier, and the proposal is then forwarded to a meeting with top management for approval. In the case of a higher corrosion specification, parts are taken from the production line and tested in FUSO's in-house material lab. Finally, the approved proposal triggers the release of the drawing change, and the new specification is implemented. The entire project has to be coordinated across different departments, and working with these teams gives deeper knowledge about the causes of the issues.
In parallel with this project, the focus was on shop-floor developments in the return-parts management (RPM) area. FUSO is also responsible for after-sales service; in other words, FUSO provides a warranty for parts that break down within three years. Broken-down parts are delivered by customers through dealers for warranty claims, and these are called Warranty Part Investigation (WPI) parts. Sometimes a customer wants to know the cause of a breakdown even though the warranty has expired; in this case the company investigates the cause but does not provide warranty coverage. Such parts are known as Product Quality Report (PQR) parts.
The company has a dedicated shop floor for return parts, which it receives directly. RPM comprises four processes: inwarding, pre-analysis, investigation, and dispatch or scrap.
The company usually received 30-50 parts per day; recently, it decided to receive all broken-down parts, which increased the delay in inwarding and the subsequent processes. To solve this, a standard layout and process were constructed. One of the main reasons for the inwarding delay was extensive documentation that is basically not required; this work was automated or digitalized. The improvements were made using a lean-manufacturing project methodology, resulting in a higher inward rate of failure parts and less inventory.
Many companies use machine learning techniques to support decision-making and automate business processes by learning from the data they have. In this thesis we investigate the theory behind the machine learning algorithms most widely used in practice for solving classification and regression problems.
In particular, the following algorithms were chosen for the classification problem: Logistic Regression, Decision Trees, Random Forest, Support Vector Machine (SVM) and Learning Vector Quantization (LVQ). For the regression problem, Decision Trees, Random Forest and Gradient Boosted Trees were used. We then apply those algorithms to real company data and compare their performance and results.
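The company data itself is not available, but the comparison workflow can be sketched with scikit-learn on a public dataset. The dataset and model settings below are illustrative assumptions, and LVQ is omitted since it is not part of scikit-learn.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# public stand-in for the company data
X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(),
}
# 5-fold cross-validated accuracy; scaling is part of each pipeline so the
# test folds never leak into the scaler's statistics
scores = {name: cross_val_score(make_pipeline(StandardScaler(), m), X, y, cv=5).mean()
          for name, m in models.items()}
```

Comparing cross-validated scores rather than a single train/test split gives a more stable ranking of the candidate algorithms.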
The application described in this thesis was created, built and designed to help nurses and other medical personnel around the world access a real-time database to store patient records such as patient name, patient ID, patient age, date of birth, and the symptoms the patient is experiencing. A real-time database is a live database in which all changes are reflected across all devices accessing it. This application will be beneficial especially in countries where access to a computer or medical equipment is not always possible. A phone is always ready to use and within reach, so users of this application can access the data at any time and place, add a new patient, or search for existing patients.
In addition, the application allows users to take RAW medical images that can be used to identify anomalies in blood samples. RAW images are important for this application because they are uncompressed, which means they do not lose any quality or detail. The users of this application are the medical personnel taking care of the patients. These users have to create a profile in the database in order to use the application, since their data, such as the user ID, is used to control how data is retrieved and stored. We also discuss the current and future features of this application and its benefits for medical personnel as well as patients, and finally go over the implementation of the application from both a hardware and a software perspective.
Implementation of a customised business model for innovative engineering consultancy services
(2019)
Business development is vital for every organisation that intends to grow. It follows expansion through organic and inorganic means, and there are many innovative business styles that help organisations to expand. This thesis shows how an engineering-services organisation chose its form of business expansion.
The following thesis explains how an engineering-service-sector company uses its expertise to expand its business towards the consultancy market, demonstrated on a real-life executed business model.
The thesis provides a solution for the following issues:
1) What is the best in-house strategy to be developed for business expansion in the service industry?
2) How do niche-market experiences help with business expansion?
Prototype-based classification methods like Generalized Matrix Learning Vector Quantization (GMLVQ) are simple and easy to implement. An appropriate choice of the activation function plays an important role in the performance of (deep) multilayer perceptrons (MLPs), which rely on a non-linearity for classification and regression learning. In this thesis, successful candidates for non-linear activation functions known from MLPs are investigated for application in GMLVQ to realize a non-linear mapping. The influence of the non-linear activation functions on the performance of the model with respect to accuracy and convergence rate is analyzed, and experimental results are documented.
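The thesis's GMLVQ variants are not reproduced here, but the role of the activation function can be sketched for plain GLVQ: the relative-distance classifier value mu is passed through a non-linearity f (a sigmoid in this sketch), and f' enters the prototype updates via the chain rule. The data, the sigmoid steepness beta and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(x, beta=2.0):
    return 1.0 / (1.0 + np.exp(-beta * x))

rng = np.random.default_rng(0)
# two Gaussian classes in 2-D
X = np.vstack([rng.normal([-2, 0], 1.0, size=(100, 2)),
               rng.normal([+2, 0], 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# one prototype per class, initialised near the data mean
protos = np.array([[-0.5, 0.0], [0.5, 0.0]])
labels = np.array([0, 1])
lr, beta = 0.05, 2.0

for epoch in range(30):
    for i in rng.permutation(len(X)):
        x, c = X[i], y[i]
        d = ((protos - x) ** 2).sum(1)
        jp = np.argmin(np.where(labels == c, d, np.inf))  # closest correct
        jm = np.argmin(np.where(labels != c, d, np.inf))  # closest wrong
        dp, dm = d[jp], d[jm]
        mu = (dp - dm) / (dp + dm)        # GLVQ classifier value in (-1, 1)
        f = sigmoid(mu, beta)
        g = beta * f * (1 - f)            # f'(mu): the activation's derivative
        denom = (dp + dm) ** 2
        # gradient steps: attract the correct prototype, repel the wrong one
        protos[jp] += lr * g * (2 * dm / denom) * 2 * (x - protos[jp])
        protos[jm] -= lr * g * (2 * dp / denom) * 2 * (x - protos[jm])

pred = labels[np.argmin(((X[:, None] - protos[None]) ** 2).sum(-1), axis=1)]
acc = (pred == y).mean()
```

Swapping `sigmoid` for another differentiable non-linearity changes only `f` and `g`, which is exactly the kind of substitution the thesis investigates.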
This master thesis covers the formation of customer relationships in the IT-outsourcing market, using the example of the "ABC" company. Most works related to IT outsourcing cover the problems of implementing IT services and the process of providing them to customers, and mostly all the issues are covered from the perspective of consumers. Thus, the problems and results of outsourcing providers of IT services remain almost uncovered. This master thesis reveals the specific features of the IT-outsourcing business in Belarus and develops an approach to the formation and construction of a system of relationships between the company and its clients as a source of increased competitiveness.
Cryptorchidism describes a disease in which one or both testes do not descend into the scrotum properly. With a prevalence of up to 10%, cryptorchidism is one of the most common birth defects of the male genital tract. Despite its associated health risks and accompanying economic damage resulting from surgery and losses in breeding, studies on canine cryptorchidism and its causes are relatively rare. In this study, a relational database for genetic causes of cryptorchidism was established and used as a basis for the identification of candidate genes. Associated regions were analysed by nanopore sequencing with the goal of identifying genetic variants correlated with cryptorchidism in the German Sheep Poodle.
In today's market, the process of dealing with textual data for internal and external processes has become increasingly important and more complex for certain companies. In this context, the thesis aims to support the analysis of similarities among textual documents by analyzing the relationships among them. The proposed analysis process includes discovering similarities among financial documents as well as possible patterns. The proposal is based on the exploitation and extension of existing approaches as well as on their combination with well-known clustering techniques. Moreover, a software tool has been implemented for the evaluation of the proposed approach and experimented on EDGAR filings on the basis of qualitative criteria.
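The thesis's specific approach and tool are not described in detail in the abstract; the generic combination it builds on, vectorising documents, measuring pairwise similarity, and clustering, can be sketched with scikit-learn. The toy documents below are stand-ins for EDGAR filings and are an illustrative assumption.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# toy stand-ins for two groups of filings; real documents are much longer
docs = [
    "revenue profit earnings quarterly dividend balance sheet",
    "profit margin revenue growth earnings guidance dividend",
    "balance sheet earnings revenue cash flow dividend profit",
    "clinical trial patient drug dosage placebo efficacy",
    "drug approval patient trial dosage safety efficacy",
    "placebo patient clinical dosage drug safety trial",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)          # sparse TF-IDF document vectors

sim = cosine_similarity(X)             # pairwise document similarity matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

The similarity matrix supports pairwise inspection of related filings, while the cluster labels expose group-level patterns across the collection.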
This master thesis covers two main topics, the sharing economy and risk management, and combines them in order to provide a methodology, with Uber chosen as an example, for how a risk management process may be applied to a sharing-economy business, as well as which types of risks are of special relevance for such businesses.