Refine
Document Type
- Master's Thesis (121)
- Bachelor Thesis (94)
- Conference Proceeding (66)
- Diploma Thesis (14)
- Final Report (6)
Year of publication
Language
- English (301)
Keywords
- Blockchain (40)
- Machine Learning (28)
- Vector Quantization (9)
- Algorithm (7)
- Bioinformatics (6)
- Bitcoin (6)
- Graph Theory (6)
- Internet of Things (6)
- Neural Network (6)
- Company (6)
Institute
- Angewandte Computer‐ und Biowissenschaften (120)
- 06 Medien (35)
- 03 Mathematik / Naturwissenschaften / Informatik (28)
- Wirtschaftsingenieurwesen (21)
- 01 Elektro- und Informationstechnik (7)
- 04 Wirtschaftswissenschaften (7)
- Ingenieurwissenschaften (4)
- Sonstige (4)
- 02 Maschinenbau (2)
- 05 Soziale Arbeit (1)
Crowd-Powered Medical Diagnosis : The Potential of Crowdsourcing for Patients with Rare Diseases
(2023)
With the recent rise in medical crowdsourcing platforms, patients with chronic illnesses increasingly broadcast their medical records to obtain an explanation for their complex health conditions. By providing access to a vast pool of diverse medical knowledge, crowdsourcing platforms have the potential to change the way patients receive a medical diagnosis. We developed a conceptual model that details a set of variables. To further the understanding of crowdsourcing as an emerging phenomenon in health care, we provide a contextualization of the various factors that drive participants to exert effort. For this purpose, we used CrowdMed.com as a platform from which we gathered and examined a unique dataset that involves tasks of diagnosing rare medical conditions. By promoting crowdsourcing as a robust and non-discriminatory alternative to seeking help from traditional physicians, we contribute to the acceptance and adoption of crowdsourcing services in health economics.
Derived from the Ancient Greek word τραῦμα (Engl. wound, damage), the word trauma refers to either physical or emotional wounds. Nowadays, it is mostly used in the context of psychological wounds inflicted by an identity-shattering event – an event that leaves the traumatised unable to reconcile their lived reality with the expectation of a universal human experience. The last decade, the last two years in particular, and the last two weeks ad absurdum, have scarred the global landscape of human existence beyond recognition. From Putin's unexpected reimposition of mutually-assured-destruction doctrines via the global SARS-CoV-2 pandemic to the lingering threat of climate doom, people all over the globe have been faced with persistent threats to their most basic perceptions of ontological safety. This article examines the impact of the SARS-CoV-2 pandemic and the degree to which it is justified to speak of a pandemic trauma. In addition, it engages with the liminality of pandemic trauma as both a shared, collective and an isolated, individual experience, and with potential mitigation strategies for building community resilience.
Convolutional neural networks (CNNs) have been among the most powerful and popular preprocessing techniques employed for image classification problems. Here, we use other signal processing techniques, namely the Fourier transform and the wavelet transform, to preprocess the images in conjunction with different classifiers such as MLP, LVQ, GLVQ and GMLVQ, and compare their performance with a CNN.
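A minimal sketch of the Fourier-transform preprocessing step mentioned in the abstract, assuming the input is a 1-D signal (e.g. a flattened image row); function names are illustrative and a real pipeline would use an FFT library rather than this pure-Python DFT:

```python
import cmath

def dft_magnitudes(signal):
    """Magnitudes of the discrete Fourier transform, used as classifier
    features in place of the raw pixel values. O(n^2), for illustration."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

signal = [0.0, 1.0, 0.0, -1.0]   # one period of a sine at frequency k=1
features = dft_magnitudes(signal)
```

A downstream classifier (MLP, LVQ, ...) would then be trained on `features` instead of `signal`; the point of the thesis is to compare such hand-crafted spectral features with learned CNN features.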
Purpose: The study aims to determine the incentives for German SMEs to offshore their business activities to India and China.
Design: This study is based on a quantitative approach and uses both primary and secondary data. The data were collected from individuals working in different SMEs in Germany with relevant offshoring experience. Theories from peer-reviewed journal articles along with relevant books were consulted throughout the study.
Findings: The findings suggest that the main benefits and advantages of an offshoring strategy in India and China are cost efficiency and access to technology. The challenges firms face while executing an offshoring strategy are cultural mismatches, especially language and cultural barriers, security issues, and loss of market performance.
Originality and Value: The study on the incentives of German SMEs to offshore business activities to India and China helps to understand why companies are interested in offshoring to low-cost countries to expand their business, while evaluating the challenges, merits and demerits of offshoring.
Over the past few years, wind and solar power plants have increasingly contributed to energy production. However, due to fluctuating energy sources, the energy production data contain disruptions. Such disrupted data lead to poor prediction performance and need to be estimated by other values. In this thesis, we provide a comparative study to estimate the online disrupted data based on the data of similar groups of power plants. We apply three estimation techniques, i.e., mean, interpolation, and k-nearest neighbour (KNN), to estimate the disruption on training data. We then apply four clustering algorithms, i.e., k-means, neural gas, hierarchical agglomerative clustering, and affinity propagation, with two similarity measures, i.e., Euclidean distance and dynamic time warping (DTW), to form groups of power plants and compare the results. Experimental results show that when KNN estimation is applied to the data, and neural gas and agglomerative clustering with DTW are used to cluster them, the cluster quality scores and execution time are better than for the alternatives. Therefore, we conclude by choosing KNN estimation to reconstruct the online disrupted data in each group of similar power plants.
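The KNN estimation idea above can be sketched in a few lines: a disrupted reading is replaced by the mean of that reading in the k power plants whose remaining profile is closest. This is a hedged toy version, assuming each plant is a short list of hourly outputs; all names and numbers are illustrative, not from the thesis.

```python
import math

def euclidean(a, b, skip):
    """Distance over all positions except the disrupted index `skip`."""
    return math.sqrt(sum((x - y) ** 2
                         for i, (x, y) in enumerate(zip(a, b)) if i != skip))

def knn_impute(target, neighbours, missing_idx, k=2):
    """Replace target[missing_idx] with the mean of that reading in the
    k neighbour series whose remaining profile is closest to the target's."""
    ranked = sorted(neighbours, key=lambda s: euclidean(target, s, missing_idx))
    estimate = sum(s[missing_idx] for s in ranked[:k]) / k
    repaired = list(target)
    repaired[missing_idx] = estimate
    return repaired

plants = [
    [10.0, 12.0, None, 11.0],   # disrupted series (None = missing reading)
    [10.5, 12.2, 13.0, 11.1],   # similar plant
    [10.2, 11.8, 12.6, 10.9],   # similar plant
    [ 2.0,  2.5,  3.0,  2.2],   # dissimilar plant (should be ignored)
]
repaired = knn_impute(plants[0], plants[1:],
                      missing_idx=plants[0].index(None), k=2)
```

The thesis additionally swaps the Euclidean measure for DTW when aligning plants whose profiles are shifted in time; the structure of the estimation step stays the same.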
Studying and understanding the metabolism of plants is essential to better adapt them to future climate conditions. Computational models of plant metabolism can guide this process by providing a platform for fast and resource-saving in silico analyses. The reconstruction of these models can follow kinetic or stoichiometric approaches, with Flux Balance Analysis being one of the most common ones for stoichiometric models. Advances in metabolic modelling over the years include the increasing number of compartments, the automation of the reconstruction process, the modelling of plant-environment interactions and genetic variants, and temporally and spatially resolved models. In addition, there is a growing focus on introducing synthetic pathways into plants to increase their agricultural potential regarding yield, growth and nutritional value. One example is the β-hydroxyaspartate cycle (BHAC) to bypass photorespiration. After its implementation in a stoichiometric C3 plant model, in silico flux analyses can help to understand the resulting metabolic changes. When compared with in vivo experiments with BHAC plants, the metabolic model can reproduce most results, with exceptions regarding growth and oxaloacetate. To evaluate whether the BHAC is suitable for establishing a synthetic C4 cycle, the pathway is implemented in a two-cell-type model that is capable of running a C4 cycle. The results show that the BHAC is only beneficial under light limitation in the bundle sheath cell. An additional engineering target for improved performance of plants is malate synthase. This work serves as the basis for further analyses combining the different factors boosting the advantages of the BHAC and for in vivo experiments in C3 and C4 plants.
In the past few years, generative models have become an interesting topic in the field of Machine Learning (ML). The Variational Autoencoder (VAE) is one of the popular frameworks of generative models, based on the work of D. P. Kingma and M. Welling [6] [7]. As an alternative to the VAE, the authors in [12] proposed and implemented an Information Theoretic Learning (ITL) based autoencoder. The VAE and the ITL autoencoder are a combination of neural networks and probabilistic graphical models (PGM) [7]. In modern statistics it is difficult to compute the approximation of the probability densities. In this thesis, we make use of the Variational Inference (VI) technique from machine learning, which approximates the distributions through optimization. The closeness between the distributions is measured by information theoretic divergence measures such as the Kullback-Leibler, Euclidean and Cauchy-Schwarz divergences. We study theoretical and experimental results of two different frameworks of generative models which generate images of MNIST handwritten characters [8] and the Yale face database B [3]. The results obtained show that the proposed VAE and ITL autoencoder are capable of generating the underlying structure of the example datasets.
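Two of the divergence measures named in the abstract can be written down directly for discrete distributions; a hedged pure-Python sketch (the thesis applies them to continuous densities inside the autoencoder objective, which requires more machinery):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q); asymmetric, >= 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cauchy_schwarz_divergence(p, q):
    """D_CS(p, q) = -log(<p,q> / (||p|| ||q||)); zero iff p and q coincide."""
    dot = sum(pi * qi for pi, qi in zip(p, q))
    return -math.log(dot / (math.sqrt(sum(pi * pi for pi in p)) *
                            math.sqrt(sum(qi * qi for qi in q))))

p = [0.5, 0.5]
q = [0.9, 0.1]
```

Note the asymmetry of KL, which is one motivation for the symmetric Cauchy-Schwarz alternative in the ITL autoencoder.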
Digital data is growing day by day, and so is the need for intelligent, automated data processing in daily life. In addition, in machine learning, a secure and accurate way to classify data is important. This holds utmost importance in certain fields, e.g. in medical data analysis, where, in order to avoid severe consequences, the accuracy and reliability of the classification are equally important. If a classification is not reliable, instead of accepting the wrongly classified data point, it is better to reject it. This can be done with the help of strategies applied on top of a trained model or included directly in the objective function of the desired training model. We discuss such strategies and analyze the results on data sets in this thesis.
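The simplest of the reject strategies described above is a confidence threshold on top of a trained model (Chow's rule): accept the predicted class only when the model is confident enough. A minimal sketch, with an illustrative threshold:

```python
def classify_with_reject(class_probs, threshold=0.8):
    """Return the argmax class index when the model's confidence reaches
    the threshold; otherwise return None to signal rejection."""
    best = max(range(len(class_probs)), key=class_probs.__getitem__)
    return best if class_probs[best] >= threshold else None

confident = classify_with_reject([0.05, 0.90, 0.05])   # accepted
uncertain = classify_with_reject([0.40, 0.35, 0.25])   # rejected
```

The alternative family of strategies in the thesis builds the reject option into the training objective itself, rather than post-processing the probabilities like this.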
This Bachelor thesis investigates the learning rules of the Hebbian, Oja and BCM neuron models with respect to their convergence to, and the stability of, their fixed points. Existing research is presented in a structured manner using consistent notation. Hebbian learning is neither convergent nor stable. Oja learning converges to a stable fixed point, namely the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the input data. BCM learning converges to a fixed point which is stable when assuming a discrete distribution of orthogonal inputs that occur with equal probability. Hebbian learning can therefore not be used in further applications where convergence to a stable fixed point is required. Furthermore, this Bachelor thesis comes to the conclusion that determining the fixed points of the BCM learning rule explicitly involves extensive calculation, and that other methods for verifying the stability of possible fixed points should be considered.
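The convergence result for Oja's rule can be demonstrated numerically: iterating Δw = η·y·(x − y·w) with y = wᵀx drives w to a unit vector along the principal eigenvector of the input covariance. A hedged sketch with illustrative toy data whose dominant variance lies along (3, 1):

```python
import math
import random

def oja_train(samples, eta=0.02, epochs=500, seed=0):
    """Train a single linear neuron with Oja's rule:
    y = w.x;  w <- w + eta * y * (x - y * w)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(len(samples[0]))]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# zero-mean inputs; the principal eigenvector is (3, 1) / sqrt(10)
samples = [(3.0, 1.0), (-3.0, -1.0), (1.5, 0.5),
           (-1.5, -0.5), (0.1, -0.3), (-0.1, 0.3)]
w = oja_train(samples)
norm = math.sqrt(sum(wi * wi for wi in w))
alignment = abs(3 * w[0] + 1 * w[1]) / (norm * math.sqrt(10))
```

After training, `norm` is close to 1 (the self-normalizing property that plain Hebbian learning lacks) and `alignment` is close to 1 (w points along the principal eigenvector, up to sign).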
This scientific work deals with current opportunities in business development. The purpose of the work is the study and analysis of an organization's development strategy. The subject of the study is the mechanism by which an organization's development strategy is formed, together with an understanding of business development and its core methodologies and branches. This thesis is based on the operations of a real engineering company, and the main part of the research could be applied in practice. The main goal of the thesis is to formulate recommendations on the implementation of strategic changes in the organization's development strategy.
Simulating complex physical systems involves solving nonlinear partial differential equations (PDEs), which can be very expensive. Generative Adversarial Networks (GANs) have recently been used to generate solutions to PDE-governed complex systems without having to solve them numerically.
However, concerns have been raised that the standard GAN setup cannot capture some important physical and statistical properties of a complex PDE-governed system, alongside other concerns about difficult and unstable training, the noisy appearance of generated samples, and the lack of robust methods for assessing sample quality apart from visual examination. In this thesis, a standard GAN is trained on a data set of heat-transfer images. We show that the generated data set can capture the true distribution of the training data with respect to both visual and statistical properties, specifically the vertical statistical profile. Furthermore, we construct a GAN model which can be conditioned using a variance-induced class label. We show that the variance threshold t = 0.01 constructs a good conditional class label, such that the generated images achieve a 96% accuracy rate in complying with the given conditions.
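The variance-induced class label above amounts to thresholding the pixel variance of each training image, so the conditional GAN can be asked for "smooth" or "turbulent" fields. A hedged sketch of that labeling step, with illustrative 2×2 toy images (the thesis threshold t = 0.01 is kept):

```python
def variance(img):
    """Population variance of all pixel values in a 2-D image."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def variance_label(img, t=0.01):
    """1 = high-variance class, 0 = low-variance class."""
    return 1 if variance(img) >= t else 0

flat_img = [[0.5, 0.5], [0.5, 0.5]]    # uniform field -> class 0
mixed_img = [[0.1, 0.9], [0.9, 0.1]]   # strong spatial contrast -> class 1
```

During training each image carries its label as the condition; at sampling time the label is supplied to the generator, and the reported 96% is the fraction of generated images whose own variance matches the requested class.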
Current research in identity management is focusing on decentralized trust establishment for distributed identities. One of these decentralized trust models is Self-Sovereign Identity (SSI). With SSI, each entity should be able to independently present and manage provable information about itself, as well as request and review evidence from other entities. Using a distributed blockchain, information for verifying the authenticity of this evidence can be obtained from any other entity. This concept can be used not only for people, but also for authentication and authorization during the life cycle of devices in the Internet of Things (IoT). This paper presents an SSI-based concept for the authentication and authorization of IoT devices among each other, intended to contribute to the change in trust on the internet. The SSI methodology, employing a blockchain, offers the possibility to establish mutual trust and proof of ownership without relying on any third party. The paper describes the concept, offers a reference implementation, and discusses the approach.
As economies become more and more interconnected, the importance of the global logistics sector has grown accordingly. However, both structural challenges and current events have led to recent supply chain disruptions, exposing the vulnerabilities of the sector. Simultaneously, blockchain has emerged as a key innovative technology with use cases going far beyond the exchange of virtual currencies. This paper aims to analyze how the technology is transforming global logistics and its challenges. To this end, six use cases are presented to give an overview of the technological possibilities of blockchain and smart contracts. The analysis takes theoretical approaches from scientific journals and combines them with findings from real-world implementations. The paper finds that the technology can change supply chain design fundamentally, with processes and decisions being automated and power within supply chain structures shifting. However, implementations also face technological, environmental, and organizational challenges that need to be solved for widespread adoption.
More than 10 years after the invention of Bitcoin, the underlying blockchain technology is having an increasing effect on today's society. Although one of the most popular application areas of blockchain is still the field of cryptocurrencies, the technological concepts are crossing into further application domains such as international supply chains. Fast-changing markets, high costs of time and risk management, as well as biased relationships between the actors pose big challenges to appropriate supply chain management. Based on a case study about sensor tracking, this paper explores the potential impact of blockchain on small and medium enterprises within an international supply chain. We show that blockchain technology offers a high potential to reduce inequalities of power between the actors involved in supply chains. To this end, the requirements for the use of blockchain in supply chain management are analyzed by means of a conducted case study and an expert survey of the companies concerned.
Digital Power of Attorney catalyzed by Software Requirements for Blockchain-based Applications
(2022)
Blockchain Technology (BT) with the so-called web3 has been at an inflection point between new sub-theme hypes and worldwide industrialization over the last three years, thanks to large companies like MicroStrategy [1] and Facebook [2] and several venture-capital formations [3] that are already fighting over market share and community growth. Our work presents insights from literature-based Software Requirement (SR) elicitation for a specific blockchain-based application: the creation, management and control of a digital Power of Attorney (POA). The context of POA is not only a finance-driven use case; it is a heavyweight, universal legal transaction. We use a morphological box and a reduced PRIMS-P to synthesize a generic specification for further blockchain-based application development. The formulated SRs in the POA context reflect our core actors, the grantor and authorized, trusted, external entities. The proposed characteristics for relationships and effects are visualized in a reference model originally used in digital platform ecosystems [4]. This design and modelling approach facilitates a closing discussion of BT and its future eCommerce perspective.
Dynamic object roles and corresponding contexts can model complex applications at a higher level of abstraction. Such abstracted applications can be used in wide-ranging areas such as financial institutions, health care, and supply chain networks. Role management, which consists of the creation of role objects and the binding of role objects to core objects, still suffers from a lack of non-intrusive logging-monitoring, auditing, and a resilient data source for role-based applications. Moreover, immutable smart contracts cause problems concerning bug fixing and maintenance without dynamic binding to new smart contract objects. An object created from a smart contract (contract class) can be transparently attached to a role object utilizing the Role Object Pattern (ROP). However, ROP itself does not contain a context definition and context-specific role assignment grouping the definitions of smart contract relationships in abstracted data types. In this study, we implement an extended version of the role object pattern, called the Context-based Role Object Pattern (ContextROP), in the on-chain smart contract language Solidity to solve these fundamental problems. To evaluate the proposal, we implement a use case with the design pattern and proceed with qualitative and quantitative analysis.
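The Role Object Pattern the study extends can be sketched language-neutrally: a core object gains and sheds role objects at runtime, and clients query it for the role they need. This is a hedged Python sketch (the thesis itself works in Solidity); all class and method names are illustrative:

```python
class Core:
    """Core object that can transparently carry role objects."""
    def __init__(self, name):
        self.name = name
        self._roles = {}                       # role class name -> role object

    def add_role(self, role_cls):
        self._roles[role_cls.__name__] = role_cls(self)

    def remove_role(self, role_cls):
        self._roles.pop(role_cls.__name__, None)

    def role(self, role_cls):
        """Return the bound role object, or None if the role is absent."""
        return self._roles.get(role_cls.__name__)

class Supplier:
    """One possible role; delegates back to its core object."""
    def __init__(self, core):
        self.core = core

    def describe(self):
        return f"{self.core.name} acts as Supplier"

asset = Core("Device-42")
asset.add_role(Supplier)
desc = asset.role(Supplier).describe()
asset.remove_role(Supplier)
```

ContextROP adds what this plain sketch lacks: a context definition that groups role assignments, so that which roles a core object may carry depends on the surrounding context rather than on ad-hoc `add_role` calls.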
Humans started using the principles of insurance thousands of years ago, when they lived in tribes in smaller villages. If one of the tribe members was injured, the others would take care of him and his family. The basic principle of insurance is several people covering each other against a particular risk. Today, most people in regions like Europe have access to insurance, while many people worldwide still have no access at all. Cost and accessibility may be improved with a blockchain-based parametric approach. The insurance process in a parametric approach is based exclusively on data, and decisions are made objectively. Blockchain is a necessary and integral part of the approach, creating transparency and connecting the customer's and investor's risk capital. The paper offers an overview of the opportunities and challenges of blockchain-based parametric insurance, a catalog of criteria for such insurance, a description of all components and their interaction for an implementation on Ethereum, and a reference implementation of a train delay insurance in Germany.
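The parametric trigger behind such a train-delay insurance reduces to a deterministic rule over an objective data feed, with no claims assessment. A hedged sketch of that payout logic; the tiers, thresholds and amounts are illustrative, not taken from the paper's reference implementation:

```python
def parametric_payout(delay_minutes, premium=2.0):
    """Objective payout rule: the payout depends only on the reported
    delay, so a smart contract can settle it automatically."""
    if delay_minutes >= 60:
        return premium * 10   # full payout tier (illustrative multiplier)
    if delay_minutes >= 30:
        return premium * 4    # partial payout tier
    return 0.0                # trigger not met, premium is kept

payout = parametric_payout(75)
```

On Ethereum this rule would live in a smart contract fed by a delay oracle, which is what makes the process transparent to both the customer and the risk-capital investors.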
The topic of soulbound, non-transferable tokens has been attracting considerable interest within the blockchain space lately, as decentralized societies become more tangible with Web3 social media applications and DAOs. In this article, I outline how such tokens function, the problems they pose for adoption and standardization, and how they differ from verifiable credentials in the SSI field. As soulbound assets will likely rely on extended recovery and asset management schemes to become viable identities that safely gain reputation and trust, features like social recovery and contract-based accounting are incorporated. By combining those new technologies with the theoretical crypto-native identity construct, the paper gives an impression of the future user-centric data economy.
Applications and Potential Impacts of Blockchain Technology in Logistics and Supply Chain Areas
(2022)
The aim of the present thesis is to analyze the applications and potential impacts of blockchain technology in the logistics and supply chain areas. For this purpose, literature from different sources has been used to analyze and give an overview of the current status and role of blockchain technology within logistics and supply chains. Different use cases as well as pilot projects from organizations all over the world, including Germany, have been included. Suggestions for further applications and implementations of blockchain technology, along with their potential impacts, are made. Additionally, the cost of implementing blockchain-based solutions and applications is estimated, together with recommendations on key points to consider before deciding to implement blockchain-based solutions in any organization.
Influenza A viruses are responsible for the outbreak of epidemics as well as pandemics worldwide. The surface protein neuraminidase of this virus is responsible, among other things, for the release of virions from the cell and is thus of interest in pharmacological research. The aim of this work is to gain knowledge about evolutionary changes in sequences of influenza A neuraminidase through different methods. First, EVcouplings is used with the goal of identifying evolutionary couplings within the protein sequences, but this analysis was unsuccessful, probably due to the great sequence length of neuraminidase. Second, the natural vector method is used for sequence embedding, in the hope of visualizing the sequential progression of the virus protein over time. Last, interpretable machine learning methods are applied to examine whether the data can be classified by year and whether the extracted information conforms to the results of the EVcouplings analysis. In addition to the class label year, other labels such as groups or subtypes are used in classification, with varying results. For balanced classes the machine learning models performed adequately, but this was not the case for imbalanced data. Groups and subtypes can be classified with high accuracy, which was not the case for years, continents or hosts. To identify the minimal number of features necessary for linear separation of neuraminidase group 1 subtypes, a logistic regression was finally performed, resulting in the identification of 15 combinations of nine amino acid frequencies. Since neither the sequence embedding nor the machine learning methods showed neuraminidase evolution over time, further research is necessary, for example with a focus on one subtype with balanced data.