Refine
Document Type
- Master's Thesis (121)
- Bachelor Thesis (94)
- Conference Proceeding (66)
- Diploma Thesis (14)
- Final Report (6)
Year of publication
Language
- English (301)
Keywords
- Blockchain (40)
- Maschinelles Lernen (28)
- Vektorquantisierung (9)
- Algorithmus (7)
- Bioinformatik (6)
- Bitcoin (6)
- Graphentheorie (6)
- Internet der Dinge (6)
- Neuronales Netz (6)
- Unternehmen (6)
- Deep learning (5)
- Ethereum (5)
- Supply Chain Management (5)
- Kryptologie (4)
- Künstliche Intelligenz (4)
- Proteine (4)
- RNS (4)
- Sequenzanalyse <Chemie> (4)
- Smart contract (4)
- Softwareentwicklung (4)
- Videospiel (4)
- Virtuelle Währung (4)
- Bildgebendes Verfahren (3)
- Biomarker (3)
- Biotechnologie (3)
- China (3)
- DNA Barcoding (3)
- Fluoreszenz-Resonanz-Energie-Transfer (3)
- Kundenmanagement (3)
- Lernendes System (3)
- Social Media (3)
- Strategisches Management (3)
- Support-Vektor-Maschine (3)
- Vektor (3)
- Zeitreihe (3)
- Bildverarbeitung (2)
- Biomedizin (2)
- COVID-19 (2)
- CRISPR/Cas-Methode (2)
- Cluster-Analyse (2)
- Cryptocurrency (2)
- DNS (2)
- Deutschland (2)
- E-Learning (2)
- Education (2)
- Film (2)
- Geschichte (2)
- Indien (2)
- Industrie 4.0 (2)
- Kraftfahrzeugbau (2)
- Kryptorchismus (2)
- Logistik (2)
- Membranproteine (2)
- Nanopartikel (2)
- Objekterkennung (2)
- Pandemie (2)
- Sphäroproteine (2)
- Thulium (2)
- Trust (2)
- Ultrafast (2)
- Unternehmensentwicklung (2)
- Verifikation (2)
- Windkraftwerk (2)
- cis-trans-Isomerie (2)
- zero knowledge proof (2)
- 3D-Druck (1)
- Accounting (1)
- Ackerbohne (1)
- Agri-food (1)
- Aminosäurensequenz (1)
- Ammoniumverbindungen (1)
- Amyloid (1)
- Anomalieerkennung (1)
- Ant Colony System (1)
- Anthropocene Disease (1)
- Anthropologie (1)
- Anämie (1)
- Arbeitgeber (1)
- Arbeitsplatz (1)
- Arbeitszufriedenheit (1)
- Assembly (1)
- Assessment (1)
- Atomic Swaps (1)
- Ausländer (1)
- Auswirkung (1)
- Axialbelastung (1)
- Beam shaping (1)
- Bekleidungsindustrie , Marketingstrategie (1)
- Beratung (1)
- Berufszufriedenheit (1)
- Beurteilung (1)
- Beziehungsmanagement (1)
- Bibliometric analysis (1)
- Biene <Gattung> (1)
- Big Data (1)
- Bildung (1)
- Biochemie (1)
- Biomarker , Krebs <Medizin> (1)
- Biometrie (1)
- Blender <Programm> (1)
- Bodenorganismus (1)
- Bridge (1)
- Bruchmechanik (1)
- Bruchzähigkeit (1)
- Business Perspective (1)
- Business Reputation System (1)
- Chemotherapie (1)
- Cloud Computing (1)
- Cluster , Cluster-Analyse (1)
- Cluster <Datenanalyse> (1)
- Codierungstheorie (1)
- Collective Action (1)
- Common Pool Resources (1)
- Computerforensik (1)
- Computersicherheit (1)
- Computerspiel , Musik (1)
- Corporate Social Responsibility (1)
- Cross-Chain (1)
- Crypto currencies (1)
- Cryptocurrencies (1)
- Cyanide (1)
- Cyber-physisches System (1)
- DAB <Rundfunktechnik> (1)
- DAO (1)
- DNA-metabarcoding (1)
- DNS , Geschlechtsbestimmung (1)
- Datenanalyse (1)
- Datenbank (1)
- Datenbanksystem (1)
- Datenerfassung (1)
- Datenübertragung (1)
- DeFi (1)
- DeSci (1)
- Decentralized Crypto Economics (1)
- Degeneration (1)
- Depression , Stressor (1)
- Deutschland , Nordamerika , Alkoholismus , Suchttherapie (1)
- Dezentralisation (1)
- Dienstleistung (1)
- Diffusion , Mathematisches Modell , Zellularer Automat (1)
- Digital Certificates (1)
- Digital Identity (1)
- Digital Signatures (1)
- Digitalisierung (1)
- Direct Laser Interference Patterning (1)
- Direktvertrieb (1)
- Diskreter Logarithmus (1)
- Distributed Ledger Technologies (1)
- Distributed Ledger Technology (1)
- Distributed-Ledger-Technology (1)
- Dokumentverarbeitung (1)
- Druckfallkrankheit (1)
- Dürrestress (1)
- Echtzeitsystem (1)
- Edge Detection (1)
- Effizienz (1)
- Eigenwertproblem (1)
- Einkauf , Strategische Planung (1)
- Electronic Commerce (1)
- Elektrizitätserzeugung (1)
- Elektrizitätswirtschaft , USA (1)
- Elektrostimulation , Stammzelle , Knochenbildung (1)
- Embryonalentwicklung (1)
- Energiewirtschaft (1)
- Engineering cyanobacteria (1)
- Entscheidungsbaum (1)
- Epidemiologie (1)
- Erfolgsfaktor (1)
- Erneuerbare Energien (1)
- Erweiterte Realität <Informatik> (1)
- Erzaufbereitung (1)
- Extraktion (1)
- Femtosekundenlaser (1)
- Fernsehsendung (1)
- Fernunterricht (1)
- Feuchtgebiet (1)
- Fiber-laser (1)
- Filmwirtschaft (1)
- Finanzdienstleistungsinstitut (1)
- Fledermäuse (1)
- Flexibilitätsmarkt (1)
- Flexplattform (1)
- Fluoreszenzmarkierung (1)
- Formula Student Germany (1)
- Forschung (1)
- Fusion (1)
- Führung (1)
- GAAA tetraloop (1)
- GDPR (1)
- Game-Based Learning (1)
- Ganganalyse (1)
- Gedruckte Schaltung (1)
- Gen (1)
- General Purpose Technology (1)
- Generative Adversarial Network (1)
- Genexpression (1)
- Gerste (1)
- Geschäftsmodell (1)
- Geschäftsplan (1)
- Gesichtserkennung (1)
- Gesundheitsfürsorge (1)
- Globalisierung (1)
- Glucosinolate , Kreuzblütler , Proteine , Hydrolysat (1)
- Golderz (1)
- Graph (1)
- Green hydrogen (1)
- Haus , Schalldämmung , Trittschallschutz , Luftschall , Mathematisches Modell (1)
- Hirntumor (1)
- Hitzeschock-Proteine (1)
- Hydraulik (1)
- Hydroakustik (1)
- Hydroventil (1)
- ID Union (1)
- IP (1)
- Identitätsverwaltung (1)
- Immunologische Diagnostik (1)
- In silico-Methode (1)
- Industrial Internet (1)
- Influencer (1)
- Influenza-A-Virus (1)
- Informationstechnik (1)
- Informationsverarbeitung , Mehragentensystem (1)
- Inhibitor , Rezeptor-Tyrosinkinasen , Epidermaler Wachstumsfaktor-Rezeptor , Lungenkrebs , Zelllinie (1)
- Innovation (1)
- Integriertes Lernen (1)
- Intelligent methods (1)
- Intelligentes Stromnetz (1)
- Interkulturelle Kompetenz (1)
- Internet , Medienkonsum , Konzentrationsfähigkeit (1)
- Internet-TV (1)
- Interpretable Models (1)
- Journalismus (1)
- Kanal (1)
- Kasachstan (1)
- Kaufverhalten (1)
- Kind (1)
- Klein- und Mittelbetrieb (1)
- Klimaänderung (1)
- Kollektive Handlung (1)
- Kommunikationsstrategie (1)
- Komplexität (1)
- Konfokale Mikroskopie (1)
- Konnossement (1)
- Kontrolltheorie , Stabilität , Steuerungstheorie (1)
- Korrosion (1)
- Kryptoanalyse (1)
- Kryptosystem (1)
- Kugelspalt (1)
- Kulturpflanzen (1)
- Kunde (1)
- Kunststoff (1)
- Landwirtschaft (1)
- Laser beam welding (1)
- Laser end rod melting (1)
- Laserablation (1)
- Lebensraum (1)
- Lernerfolg (1)
- Ligand <Biochemie> (1)
- Linearer Code (1)
- Literature Review (1)
- Local Flexibility Market (1)
- Logistiksystem (1)
- Los Angeles-Hollywood (1)
- Luftschall (1)
- Lungenentzündung (1)
- MD simulation (1)
- MIMO (1)
- Makroökonomie (1)
- Malaria (1)
- Malawi (1)
- Marke (1)
- Markenpolitik (1)
- Marketingstrategie (1)
- Marktanalyse , Sales-promotion (1)
- Markteintrittsstrategie (1)
- Marktforschung (1)
- Maschinelles Sehen (1)
- Materialfluss (1)
- Materialität (1)
- Mathematische Modellierung , Computersimulation , Simulationsspiel , Schiffsnavigation (1)
- Mathematisches Modell (1)
- Maximal Extractable Value (1)
- Medizin (1)
- Meinungsbildung (1)
- Mergers and Acquisitions (1)
- MerkleProof (1)
- Messenger-RNS (1)
- Metrik <Mathematik> (1)
- MicroLED (1)
- Microstructure (1)
- Migration (1)
- Mikrofinanzierung (1)
- Mikroorganismus (1)
- Mikroskopie (1)
- Mikrospore (1)
- Mikrostruktur (1)
- Molekülstruktur (1)
- Motion Capturing (1)
- Multifunktionalität (1)
- Multiplicative Noise (1)
- Mutante (1)
- München (1)
- Nachhaltigkeit (1)
- Nanostruktur (1)
- Nepal , Biogasgewinnung , Sozioökonomischer Wandel (1)
- Netzwerkanalyse (1)
- Netzwerkverwaltung (1)
- Neuromarketing (1)
- Neuseeland (1)
- Nichteuklidische Geometrie (1)
- Nitinol (1)
- Non-Fungible Token (1)
- Non-coding RNA (1)
- Numerische Mathematik (1)
- Oberflächenbehandlung (1)
- Object Detection and Tracking (1)
- Objektorientierte Programmierung (1)
- Offshoring (1)
- Optische Spektroskopie (1)
- Oxidation (1)
- Paper-based Coffee Cups (1)
- Parvalbumine (1)
- Passwort (1)
- Pathogene Bakterien (1)
- Patient (1)
- Peer-to-Peer-Netz (1)
- Personalmarketing (1)
- Pestizid (1)
- Pflanzen (1)
- Pflanzenkläranlage (1)
- Philanthropie (1)
- Photorezeptor , Netzhautdegeneration (1)
- Photosynthese (1)
- Photosynthetic butanol (1)
- Planar Homography (1)
- Planung (1)
- Pollen (1)
- Polygon scanner processing (1)
- Polymethylmethacrylate (1)
- Polynom (1)
- Polynom , Graphentheorie (1)
- Polysaccharide (1)
- Predictive maintenance (1)
- Primaten (1)
- Procurement (1)
- Produkt (1)
- Produkteinführung (1)
- Programmierung (1)
- Projektmanagement (1)
- Projektplanung (1)
- Prostatakrebs (1)
- Proteinbiosynthese (1)
- Proteine , Bioinformatik (1)
- Proteinfaltung (1)
- Proteinfamilie , Alignment <Biochemie> , Bioinformatik (1)
- Proteinmuster , Bioinformatik (1)
- Prototype-based models (1)
- Prozessüberwachung (1)
- Präsidentenwahl (1)
- Prüfmittel (1)
- Qualitätsmanagement (1)
- Quantencomputer (1)
- RFID (1)
- RNS-Interferenz (1)
- Ranking , Software , Wettbewerb (1)
- Raucher (1)
- Real time quantitative PCR , Genotypisierung (1)
- Realistische Computergrafik (1)
- Recurrent Neural Networks (1)
- Regularisierung (1)
- Requirements engineering (1)
- Risiko (1)
- Risikomanagement (1)
- Role-Object Pattern (1)
- Role-based Programming (1)
- Rollenspiel (1)
- SARS-CoV-2 (1)
- SSI (1)
- Sandwich Attacks (1)
- Satellitenfunk (1)
- Satellitentechnik (1)
- Schallausbreitung (1)
- Schifffahrt (1)
- Schrägkugellager (1)
- Security (1)
- Sekundärstruktur (1)
- Selbstorganisierende Karte (1)
- Selenoproteide (1)
- Self-Sovereign identities (1)
- Semisynthetic [FeFe]-hydrogenase (1)
- Sharing Economy (1)
- Siliziumbearbeitung (1)
- Smart City (1)
- Smart Contract Programming (1)
- Smart Contracts (1)
- Smart Market (1)
- Software (1)
- Soulbound Token (1)
- Soziale Software , Business-to-Business-Marketing (1)
- Spaltströmung (1)
- Sportartikelmarkt (1)
- Sportberichterstattung (1)
- Sportsponsoring (1)
- Stakeholder (1)
- Stickstoffverbindungen (1)
- Stochastisches Modell (1)
- Stoffwechsel (1)
- Strukturmodell (1)
- Stöchiometrie (1)
- Surface texturing (1)
- Surface topography (1)
- Systemmedizin (1)
- Südafrika , Zeitung , Fernsehen , Hörfunk (1)
- TIRFM (1)
- Techno-economic analysis (1)
- Teilchenbeschleuniger (1)
- Telekommunikation (1)
- Tiefschweißen (1)
- Tokenization (1)
- Traceability (1)
- Transactions (1)
- Transkriptionsfaktor (1)
- Transparenz (1)
- Tutte-Polynom (1)
- Typology (1)
- Ultrakurzpulslaser (1)
- Ultraviolett (1)
- Umweltbelastung (1)
- Umweltbezogenes Management , Marketingstrategie (1)
- Unternehmen , Internationalisierung (1)
- Unternehmensgründung (1)
- Unternehmensgründung , Sozioökonomischer Wandel (1)
- Unternehmenskultur (1)
- User Generated Content (1)
- Vakuumtechnik (1)
- Vector Association (1)
- Vector Quantization (1)
- Verbraucherverhalten (1)
- Versicherung (1)
- Viability analysis (1)
- Videokonferenz (1)
- Virtuelle Realität (1)
- Visualisierung (1)
- Vollmacht (1)
- Vorstellungsgespräch (1)
- Wahrscheinlichkeitsrechnung (1)
- Wahrscheinlichkeitsverteilung (1)
- Wasserschall (1)
- Web of Things (1)
- Web3 (1)
- Weblog , Kommunikation , Electronic Commerce (1)
- Werbesendung (1)
- Wert (1)
- Wettbewerbsvorteil (1)
- Wildtiere (1)
- Wirtschaftsentwicklung (1)
- Work-Life-Balance (1)
- YouTube (1)
- Zebrabärbling (1)
- Zeitreihe , Vektor , Hankel-Matrix (1)
- Zeitreihenanalyse (1)
- Zeitreise (1)
- Zellkultur , Säugetiere , Hydrogel (1)
- Zigarette (1)
- Zufallsgraph (1)
- Zulassung (1)
- anomaly detection (1)
- atomic swaps (1)
- automated trading (1)
- bccm (1)
- beam splitting (1)
- bee foraging (1)
- bias-variance (1)
- bill of lading (1)
- biodiversity monitoring (1)
- bloxberg (1)
- carbon emissions (1)
- catalog of criteria (1)
- climate change (1)
- collective trauma (1)
- cross cultural work environment (1)
- cross-cultural dynamics (1)
- dApp (1)
- data annotation (1)
- decentralized computation (1)
- decentralized computation architecture (1)
- decentralized science (1)
- diffractive optics (1)
- digital data management technologies (1)
- digital identity (1)
- digital signatures (1)
- digital twin (1)
- double descent (1)
- e-Voting (1)
- employee engagement (1)
- exploit detection (1)
- fasteners (1)
- green finance (1)
- high repetition rate (1)
- high throughput (1)
- high throughput (1)
- human skeletal remains (1)
- hybrid modeling (1)
- lightning network (1)
- incubed (1)
- industrial lasers (1)
- intercultural competence (1)
- interpretable models (1)
- job satisfaction (1)
- laser applications (1)
- laser drilling (1)
- laser micro cutting (1)
- laser micro drilling (1)
- laser micro turning (1)
- laser processing (1)
- laser scanning (1)
- launch strategies (1)
- learning motivation (1)
- local energy market (1)
- machine to machine communication (1)
- materiality (1)
- metal surface structuring (1)
- mev-inspect (1)
- micro drilling (1)
- micromachining (1)
- mindful leadership (1)
- mint mechanism (1)
- molecular sorting (1)
- molecule classification (1)
- multicultural workplace (1)
- nanosecond pulsed laser (1)
- negatively-valenced emotions (1)
- network analysis (1)
- optical coherence tomography (1)
- optimization (1)
- pandemic (1)
- parametric (1)
- polygon mirror scanner (1)
- polygon scanner (1)
- pricing strategy (1)
- redactable blockchain (1)
- saira (1)
- scholar publishing (1)
- scientific paper token (1)
- sdg (1)
- sensor fusion (1)
- sensor technology (1)
- sensors evaluation (1)
- silicon processing, high power (1)
- slock.it (1)
- smart contracts (1)
- supply chain (1)
- technology studies (1)
- temporal energy deposition (1)
- temporal network analysis (1)
- time standard (1)
- train delay insurance (1)
- trauma studies (1)
- ultra-fast (1)
- ultrafast laser (1)
- unwind (1)
- value (1)
- waitro (1)
- wind turbine (1)
- workplace mental health (1)
- zk-SNARKs (1)
- Ökosystem (1)
Institute
- Angewandte Computer‐ und Biowissenschaften (120)
- 06 Medien (35)
- 03 Mathematik / Naturwissenschaften / Informatik (28)
- Wirtschaftsingenieurwesen (21)
- 01 Elektro- und Informationstechnik (7)
- 04 Wirtschaftswissenschaften (7)
- Ingenieurwissenschaften (4)
- Sonstige (4)
- 02 Maschinenbau (2)
- 05 Soziale Arbeit (1)
Gold cyanidation is a process by which gold is extracted from low-grade ore. Due to its efficiency, it has found widespread application around the world, including in Peru. The process requires free cyanide in high concentration. After gold extraction is completed, free cyanide as well as metal cyanide complexes remain in the effluent of gold mines and refineries. These effluents are often kept in storage ponds, where they pose a considerable risk to health and the environment. It is therefore preferable to degrade the cyanide to minimize the risk of exposure. In the context of this thesis, cyanide degradation was explored in a UV-light-based prototype. Degradation with a combination of hydrogen peroxide and UV light has proven very effective at degrading cyanide concentrations of 100 mg/L and 1000 mg/L. Furthermore, the presence of ammonia as a degradation product could also be confirmed. Membrane distillation may provide an alternative to cyanide destruction in the form of cyanide recovery; promising results were obtained from several membrane experiments.
Biological ammonium oxidation is a central component of the global nitrogen cycle. Given the enormous quantities of nitrogen of anthropogenic origin in the environment, the removal of reactive nitrogen is in the interest of both the environment and public health. This thesis investigates conditions for anaerobic ammonium oxidation with nitrate in an anammox reactor. Two laboratory reactors, containing ammonium and nitrate as the sole electron donors and acceptors, were operated and monitored for a total of 116 days. In addition, batch cultures were grown from cells of one reactor and their gas composition was analyzed as a function of different properties. A range of analytical quantification methods was used, and it could be shown that degradation takes place under these conditions.
Current research on this reaction is sparse, which lends the bachelor's thesis its relevance.
In this thesis, we implement, correct, and modify the compartmental model described in “Transmission Dynamics of Large Coronavirus Disease Outbreak in Homeless Shelter, Chicago, Illinois, USA, 2020”. Our objective is to engage in reading and understanding scientific literature, reproduce the results, and modify or generalize an existing mathematical model. We provide an overview of epidemiological models, focusing on simple compartmental SEIR models. We correct inaccuracies and misprints in the original implementation and use the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm to fit the model’s parameters. Furthermore, we modify the model by introducing an additional compartment. The resulting model has a more intuitive interpretation and relies on fewer assumptions. We also perform the fitting process for this alternative model. Finally, we demonstrate the advantages of our modified implementations and discuss other possible approaches.
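The compartmental dynamics summarized above can be illustrated with a minimal SEIR simulation. This is a generic sketch, not the thesis's corrected model: the parameter values, step size, and function names below are illustrative assumptions, and the actual work fits parameters with L-BFGS rather than hand-picking them.

```python
# Minimal SEIR compartmental model, integrated with explicit Euler steps.
# beta: transmission rate, sigma: 1/incubation period, gamma: 1/infectious period.
# All values are illustrative assumptions, not the fitted values from the thesis.

def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """Advance the normalized SEIR state (fractions summing to 1) by one step."""
    new_infections = beta * s * i   # S -> E flow
    onsets = sigma * e              # E -> I flow
    recoveries = gamma * i          # I -> R flow
    return (s - dt * new_infections,
            e + dt * (new_infections - onsets),
            i + dt * (onsets - recoveries),
            r + dt * recoveries)

def simulate(days, beta=0.6, sigma=1 / 5.1, gamma=1 / 7.0, i0=1e-3, dt=0.1):
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    trajectory = [(s, e, i, r)]
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt)
        trajectory.append((s, e, i, r))
    return trajectory

traj = simulate(120)
```

Because every outflow from one compartment is an inflow to another, the total population fraction is conserved at each step, which is a useful sanity check on any implementation.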
In this work, we identify similarities between adversarial examples and counterfactual explanations, extend differences already stated in previous works to other aspects such as dimensionality and transferability, and try to observe these similarities and differences in different classifiers on tabular and image data. We note that this topic is an open discussion and that the work here is not definitive; it can be extended or modified in the future as new discoveries are made.
This thesis comprehensively explores factors contributing to malaria-induced anemia and severe malarial anemia (SMA). The study utilizes a comprehensive dataset to investigate immunological interactions, genetic variations, and temporal dynamics. Findings highlight the complex interplay between immune markers, genetic traits, and cohort-specific influences. Notably, age, HIV status, and genetic variations emerge as crucial factors influencing anemia risk. The incorporation of Poisson regression models sheds light on the genetic underpinnings of SMA, emphasizing the need for personalized interventions. Overall, this research provides valuable insights into the multifaceted nature of malaria-induced complications, paving the way for further molecular investigations and targeted interventions.
As new sensors are added to VR headsets, more data can be collected. This introduces a new potential threat to user privacy. We focused on the feasibility of extracting personal information from eye-tracking. To achieve this, we designed a preliminary user study focusing on the pupil response to audio stimuli. We used a variety of machine learning models on the collected data to determine the feasibility of obtaining information such as the age or gender of the participant. Several of the experiments show promise for obtaining this information. We were able to determine with reasonable certainty whether caffeine had been consumed, as well as the gender of the participant. This demonstrates the unknown threat that embedded sensors pose to users. Further studies are planned to verify the results.
Computationally solving eigenvalue problems is a central problem in numerical analysis and as such has been the subject of extensive study. In this thesis we present four different methods to compute eigenvalues, each with its own characteristics, strengths and weaknesses. After formally introducing the methods we use them in various numerical experiments to test speed of convergence, stability as well as performance when used to compute eigenfaces, denoise images and compute the eigenvector centrality measure of a graph.
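As one concrete example of an iterative eigenvalue method, power iteration can be sketched in a few lines. The abstract does not name the four methods it compares, so this is a generic illustration rather than one of them, and it only finds the dominant eigenpair of a matrix with a positive dominant eigenvalue.

```python
# Power iteration: repeatedly apply the matrix to a vector and renormalize;
# the vector converges to the dominant eigenvector and the normalization
# constant to the dominant eigenvalue (assumed here to be positive and simple).

def power_iteration(matrix, iterations=500):
    """Return an approximation of the dominant eigenvalue and eigenvector."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iterations):
        w = [sum(matrix[row][col] * v[col] for col in range(n))
             for row in range(n)]
        norm = max(abs(x) for x in w)   # infinity-norm normalization
        v = [x / norm for x in w]
        lam = norm
    return lam, v

# [[2, 0], [0, 1]] has dominant eigenvalue 2 with eigenvector (1, 0).
lam, v = power_iteration([[2.0, 0.0], [0.0, 1.0]])
```

The same building block underlies the eigenvector-centrality computation mentioned above: centrality scores are the dominant eigenvector of a graph's adjacency matrix.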
Footage of organoids taken by means of fluorescence microscopy and segmented as well as triangulated by image analysis software such as LimeSeg and Mastodon often needs to be visualized in an aesthetic manner for the presentation of results in scientific papers, talks, and demonstrations. The goal of this work was to create a simple-to-use add-on, "Biobox", for the open-source 3D visualization package Blender, which allows importing triangulated 3D data with animation over time (4D) produced by image analysis software and optimizing it for efficient usage. "Biobox" offers biologists several visualization tools for creating rendered images and animation videos.
The optimization of imported data was performed using Blender's built-in modifiers. The optimized data can then be visualized using several tools built for displaying the organoid in frozen, animated, and semi-transparent modes. A dynamic link for object selection and dynamic data exchange between Blender and Mastodon was developed. Additionally, a user interface was developed for the manual correction of segmentation errors and for steering the object detection algorithms of LimeSeg. The developed add-on "Biobox" was benchmarked on real scientific data. The benchmark demonstrated that the developed optimizations result in a significant (~5-fold) decrease in RAM usage and accelerate visualization by more than a factor of 160.
Robust soft learning vector quantization (RSLVQ) is a probabilistic variant of the learning vector quantization (LVQ) algorithm. The RSLVQ approach describes its functionality in terms of a Gaussian mixture model, and its cost function is defined as a likelihood ratio. Our thesis work involves modifying standard RSLVQ with non-Gaussian density functions such as the logistic, lognormal, and Cauchy densities (referred to as PLVQ). In this approach, we derive new update rules for the prototypes using the gradient of the cost function with respect to the non-Gaussian density functions. We also derive new learning rules for the model parameters by differentiating the cost function with respect to them. The main goal of the thesis is to compare the performance of the PLVQ model with the Gaussian RSLVQ model. The performance of these classification models has therefore been tested on the Iris and Seeds datasets. To visualize the results of the classification models adequately, principal component analysis (PCA) was used.
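The core idea of swapping the density function can be sketched as follows: RSLVQ-style models weigh each prototype by a density of the squared distance to the sample, and normalizing these weights yields soft assignment probabilities. This is only a sketch of that idea under assumed hyperparameters, not the thesis's derived update rules.

```python
import math

# RSLVQ-style soft assignments with a pluggable density of the squared
# distance d2. Replacing the Gaussian with a heavier-tailed Cauchy density
# changes how sharply probability mass concentrates on the nearest prototype.
# Parameter values (sigma, gamma) are illustrative assumptions.

def gaussian(d2, sigma=1.0):
    return math.exp(-d2 / (2 * sigma * sigma))

def cauchy(d2, gamma=1.0):
    return 1.0 / (1.0 + d2 / (gamma * gamma))

def soft_assignments(x, prototypes, density):
    """Posterior-like probability of each prototype given sample x."""
    scores = [density(sum((a - b) ** 2 for a, b in zip(x, w)))
              for w in prototypes]
    total = sum(scores)
    return [s / total for s in scores]

protos = [[0.0, 0.0], [4.0, 0.0]]
p_gauss = soft_assignments([1.0, 0.0], protos, gaussian)
p_cauchy = soft_assignments([1.0, 0.0], protos, cauchy)
```

For the sample above, both densities assign most probability to the nearer prototype, but the Gaussian does so far more sharply than the heavy-tailed Cauchy, which is exactly the kind of behavioral difference the PLVQ comparison probes.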
This paper examines the communication channels used by innovation projects at the ProtoSpace Hamburg when engaging with stakeholders, and tries to answer the research question of whether new media channels improve the chances of success for innovation projects when used for this communication. Interviews with eight experts in communication, innovation, and stakeholder management were conducted and then analyzed using Mayring's qualitative content analysis in order to answer the posed question.
The number of Internet of Things (IoT) devices is increasing rapidly. The Trustless Incentivized Remote Node Network, IN3 (Incubed) for short, enables trustworthy and fast access to a blockchain for a large number of low-performance IoT devices. Although IN3 currently only supports the verification of Ethereum data, it is not limited to one blockchain due to its modularity. This thesis describes the fundamentals, the concept, and the implementation of Bitcoin verification in IN3.
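Verifying Bitcoin data without trusting the node rests on Merkle proofs: a light client checks that a transaction hashes up to the Merkle root in a block header. The sketch below shows that mechanism with Bitcoin-style double SHA-256; it is a simplified illustration (it ignores the byte-order conventions of real Bitcoin headers) and is not the IN3 codebase.

```python
import hashlib

# Simplified Merkle-branch verification in the style of Bitcoin block proofs.
# dhash is Bitcoin's double SHA-256; real clients additionally handle the
# little-endian byte order of header fields, which this sketch omits.

def dhash(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Build the root over raw leaves, duplicating the last node on odd levels."""
    level = [dhash(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, proof, root):
    """proof is a list of (sibling_hash, sibling_is_right) pairs."""
    h = dhash(leaf)
    for sibling, is_right in proof:
        h = dhash(h + sibling) if is_right else dhash(sibling + h)
    return h == root

txs = [b"tx-a", b"tx-b"]
root = merkle_root(txs)
proof_for_a = [(dhash(b"tx-b"), True)]   # sibling of tx-a sits to its right
```

The proof size grows only logarithmically with the number of transactions, which is what makes this verification feasible on low-performance IoT devices.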
In this thesis, two novel methods for removing undesired background illumination are developed: a wavelet-analysis-based approach and an enhancement of a deep learning method. These methods have been compared with conventional methods using real confocal microscopy images and synthetically generated microscopy images. The synthetic images were created using a generator introduced in this thesis.
Machine learning models for time series have always been a special topic of interest due to their unique data structure. Recently, the introduction of attention improved the capabilities of recurrent neural networks and transformers on learning tasks such as machine translation. However, these models are usually subsymbolic architectures, making their inner workings hard to interpret without comprehensive tools. In contrast, interpretable models such as learning vector quantization are more transparent in their decision process. This thesis tries to merge attention as a machine learning function with learning vector quantization to better handle time series data. A design for such a model is proposed and tested on a dataset used in connection with attention-based transformers. Although the proposed model did not yield the expected results, this work outlines improvements for further research on this approach.
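The attention mechanism referred to above can be made concrete with a pure-Python sketch of scaled dot-product attention over a short sequence. The toy queries, keys, and values are made-up illustrations; the thesis's actual combination with learning vector quantization is not reproduced here.

```python
import math

# Scaled dot-product attention: each query scores all keys, the scores are
# softmax-normalized into weights, and the output is the weighted sum of the
# values. All toy vectors below are illustrative assumptions.

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Each argument is a list of equal-length vectors (lists of floats)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query attending over two timesteps; it aligns better with the first key,
# so the output leans toward the first value.
result = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[10.0], [20.0]])
```

Because the output is a convex combination of the values, the attention weights themselves offer a first, limited window into what the model attends to, which is one motivation for pairing attention with a more interpretable prototype-based classifier.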
This work emphasizes the synergy between anthropological research on human skeletal remains and suitable documentation strategies. Highlighting the significance of data recording and the use of digital databases in various aspects of anthropological work on bones, including scientific standards, skeletal collections, analysis of research results, ethical considerations, and curation, it provides a comprehensive examination of these topics to demonstrate the value of investing time and resources in this field, countering the existing lack of funding that has led to significant deficiencies. Additionally, the paper outlines the requirements and challenges associated with standard data protocoling and suggests that digital data management frameworks and technologies, such as ontologies and semantic web technologies for anthropological information, should be a central focus in developing solutions.
In this paper, we conduct experiments to optimize the learning rates for the Generalized Learning Vector Quantization (GLVQ) model. Our approach leverages insights from cognitive science rooted in the profound intricacies of human thinking. Recognizing that human-like thinking has propelled humankind to its current state, we explore the applicability of cognitive science principles in enhancing machine learning. Prior research has demonstrated promising results when applying learning rate methods inspired by cognitive science to Learning Vector Quantization (LVQ) models. In this study, we extend this approach to GLVQ models. Specifically, we examine five distinct cognitive-science-inspired GLVQ variants: Conditional Probability (CP), Dual Factor Heuristic (DFH), Middle Symmetry (MS), Loose Symmetry (LS), and Loose Symmetry with Rarity (LSR). Our experiments involve a comprehensive analysis of the performance of these cognitive-science-derived learning rate techniques across various datasets, aiming to identify optimal settings and variants of cognitive science GLVQ model training. Through this research, we seek to unlock new avenues for enhancing the learning process in machine learning models by drawing inspiration from the rich complexities of human cognition. Keywords: machine learning, GLVQ, cognitive science, cognitive bias, learning rate optimization, optimizers, human-like learning, Conditional Probability (CP), Dual Factor Heuristic (DFH), Middle Symmetry (MS), Loose Symmetry (LS), Loose Symmetry with Rarity (LSR).
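For readers unfamiliar with GLVQ, its central quantity is the relative distance mu(x) = (d+ - d-) / (d+ + d-), where d+ is the distance to the closest prototype with the sample's label and d- the distance to the closest prototype with a different label; training minimizes a monotone function of mu. The sketch below computes only this quantity on made-up prototypes and does not reproduce the paper's cognitive-science learning-rate schedules.

```python
# GLVQ's relative distance: negative mu means the sample is correctly
# classified by the nearest-prototype rule, positive mu means misclassified,
# and |mu| reflects the margin. Prototypes below are illustrative assumptions.

def sq_dist(x, w):
    return sum((a - b) ** 2 for a, b in zip(x, w))

def glvq_mu(x, label, prototypes):
    """prototypes: list of (vector, label) pairs. Returns mu in (-1, 1)."""
    d_plus = min(sq_dist(x, w) for w, l in prototypes if l == label)
    d_minus = min(sq_dist(x, w) for w, l in prototypes if l != label)
    return (d_plus - d_minus) / (d_plus + d_minus)

protos = [([0.0, 0.0], "a"), ([4.0, 0.0], "b")]
mu_correct = glvq_mu([1.0, 0.0], "a", protos)   # negative: correct side
mu_wrong = glvq_mu([1.0, 0.0], "b", protos)     # positive: wrong side
```

Because the learning-rate schedule scales the gradient of this cost, the cognitive-science-inspired variants compared in the paper all act on how strongly each mu value moves the prototypes.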
Adversarial robustness of a nearest prototype classifier assures safe deployment in sensitive application fields. Much research has been conducted on the robustness of artificial neural networks against adversarial attacks, whereas nearest prototype classifiers have not seen similar success. This thesis presents the learning dynamics and numerical stability of the Crammer normalization and the Hein normalization for the adversarial robustness of nearest prototype classifiers. The results of the conducted experiments are documented and analyzed to ascertain the bounds given by Saralajew et al. and Hein et al. for the adversarial robustness of nearest prototype classifiers.
With globalization and the increasing diversity of the workforce, organizations are faced with the challenge of effectively managing multicultural teams. Understanding how employee engagement and job satisfaction are influenced by multicultural factors is crucial for organizations to create inclusive work environments that foster productivity and wellbeing. This literature review aims to explore the relationship between employee engagement, job satisfaction, and multicultural workplaces. It examines relevant studies and provides insights into the key factors, challenges, and strategies for enhancing employee engagement and job satisfaction in multicultural workplaces. The findings shed light on the author's research area, the factors influencing employee engagement and job satisfaction in multicultural work environments, and contribute to a deeper understanding of cross-cultural dynamics in the workplace.
Traditional user management on the Internet has historically required individuals to give up control over their identities. In contrast, decentralized solutions promise to empower users and foster decentralized interactions. Over the last few years, the development of decentralized accounts and tokens has significantly increased, aiming at broader user adoption and shared social economies.
This thesis delves into smart contract standards and social infrastructure for Ethereum-based blockchains to enable identity-based data exchange between abstracted blockchain accounts. In this regard, the standardization landscapes of account and social token developments were analyzed in-depth to form guidelines that allow users to retain complete control over their data and grant access selectively.
Based on the evaluations, a pioneering Solidity standard is presented, natively integrating consensual restrictive on-chain assets for abstracted blockchain accounts. Further, the architecture of a decentralized messaging service has been defined to outline how new token and account concepts can be intertwined with efficient and minimal data-sharing principles to ensure security and privacy, while merging traditional server environments with global ledgers.
Laser engraving requires a precise ablation per pulse through all layers of a depth map. Scaling this process to areas of a square meter and more within an acceptable time requires high-power ultra-short pulsed lasers for the precision and a high scan speed for the beam distribution. Scan speeds in the range of several 100 m/s can be achieved with a polygon scanner. In this work, a polygon scanner has been utilized within a roll-engraving machine to treat an 800 x 220 mm² (L x Dia) roll with 0.55 m² of surface in a laser engraving process. The machine setup, the processing strategy, and the data handling were investigated and result in an efficient large-area process. Pre-tests were performed with a multi-MHz-frequency nanosecond-pulsed laser to investigate the processing strategy. A method to overcome the duty cycle of the polygon scanner was found in the synchronization of two polygons, enabling the use of a single laser source in a time-sharing concept. The throughput and the utilization of the laser source can thus be increased by a factor of two.
In this work, Direct Laser Interference Patterning (DLIP) is used in conjunction with the polygon scanner technique to fabricate textured polystyrene and nickel surfaces through ultra-fast beam deflection. For polystyrene, the impact of scanning speed and repetition rate on the structure formation is studied, obtaining periodic features with a spatial period of 21 μm and reaching structure heights up to 23 μm. By applying scanning speeds of up to 350 m/s, a structuring throughput of 1.1 m²/min has been reached. Additionally, the optical configuration was used to texture nickel electrode foils with line-like patterns with a spatial period of 25 μm and a maximum structure depth of 15 μm. Subsequently, the structured nickel electrodes were assessed in terms of their performance for the Hydrogen Evolution Reaction (HER). The findings revealed a significant improvement in HER efficiency, with a 22% increase compared to the untreated reference electrode.
In laser drilling, one challenge is to achieve high drilling quality in high-aspect-ratio drilling. Ultra-short pulsed lasers are built on different amplifier concepts such as thin disks, fibers, and rods. Slab technology is employed here because of its flexibility and characteristics: it combines the advantages of these concepts and delivers high pulse energies at high repetition rates. Materials with a thickness > 1.5 mm demand specialized optics that handle the high power and pulse energies, together with adapted processing strategies integrated into a machine setup. In this contribution, we focus on all the components and strategies necessary for drilling high-precision holes with aspect ratios up to 1:40.
For monitoring laser beam welding processes and detecting or actively avoiding process defects, acoustic measurements can be used in addition to optical measurement methods such as pyrometry. To reliably detect process events, it is essential to position the respective sensors in such a way that specific signal characteristics are reproducible and significant. However, there are only a few investigations regarding the positioning of airborne sound sensors, especially for the detection of process emissions in the ultrasonic range. Therefore, in this research, the influence of the process distance as well as the angle and orientation of the microphone relative to a laser beam deep penetration welding process is investigated with respect to the detectability of process emissions in different frequency bands. It is shown that, over a wide ultrasonic range, a flat sensor angle with respect to the sample surface leads to increased signal strength of the acoustic emissions compared to steep angles.
We report on our recent progress in creating a new type of compact laser that uses thulium-based fiber CPA technology to emit at a central wavelength of 2 μm. This laser produces pulse energies of >100 μJ and an average power of >15 W. It is designed to be long-lasting and built for industrial use, making it a great fit for integration into laser machines used for materials processing. These laser parameters are ideal for working with semiconductors such as silicon, allowing for tasks such as micro-welding, cutting of filaments, dicing, bonding and more.
Laser welding of hidden T-joints, connecting the web sheet through the face sheet of the joint, can provide advantages such as increased lightweight potential in manufacturing sandwich structures with thin-walled cores. However, maintaining the correct positioning of the beam relative to the joint is challenging. One method to reduce the positioning effort is optical coherence tomography (OCT), which interferometrically measures the reflection distance inside the keyhole during laser deep penetration welding. In this study, new approaches for targeted data processing of the OCT signal to automatically detect misalignments are presented. It is shown that considering multiple components of the interference pattern and the respective signal intensities improves the detection accuracy of misalignments.
Analysis of the Forensic Preparation of Biometric Facial Features for Digital User Authentication
(2023)
Biometrics has become a popular method of securing access to data, as it eliminates the need for users to remember a password. Although attempts to exploit the vulnerabilities of biometric systems have increased with their usage, these vulnerabilities can also be helpful during criminal casework.
This thesis aims to evaluate approaches to bypass electronic devices with forged faces to access data for law enforcement. Here, obtaining the necessary data in a timely manner is critical. However, unlocking the devices with a password can take several years with a brute force attack. Consequently, biometrics could be a quicker alternative for unlocking.
Various approaches were examined to bypass current face recognition technologies. The first approaches included printing the user's face on regular paper and aimed to unlock devices performing face recognition in the visible spectrum. Further approaches consisted of printing the user's infrared image and creating three-dimensional masks to bypass devices performing face recognition in the near-infrared. Additionally, the underlying software responsible for face recognition was reverse-engineered to get information about its operation mode.
The experiments demonstrate that forged faces can partly bypass face recognition and obtain secured data. Devices performing face recognition in the visible spectrum can be unlocked with a printed image of the user's face. Regarding devices with advanced near-infrared face recognition, only one could be bypassed with a three-dimensional face mask. In addition, its underlying software provided evidence about the demands of face recognition. Other devices under attack remained locked, and their software provided no clues.
The Tutte polynomial is an important tool in graph theory. This paper provides an introduction to the two-variable polynomial using the spanning-subgraph and rank-generating polynomials. The equivalence of the definitions is shown in detail, as well as evaluations and derivatives. Properties and examples of the polynomial, i.e. universality, coefficient relations, closed forms and recurrence relations, are discussed. Moreover, the thesis covers the connection between the dichromate and other significant polynomials.
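For reference, the rank-generating form mentioned above can be written in its standard textbook notation (this is the common definition, not a formula reproduced from the thesis):

```latex
% Rank-generating (Whitney rank) form of the Tutte polynomial
T(G;x,y) \;=\; \sum_{A \subseteq E} (x-1)^{\,r(E)-r(A)} \,(y-1)^{\,|A|-r(A)},
\qquad r(A) = |V| - c(V,A),
% where c(V,A) denotes the number of connected components of the
% spanning subgraph (V,A).
```

Well-known evaluations follow directly, e.g. T(G;1,1) counts the spanning trees of a connected graph G.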
Analysis of Continuous Learning Strategies at the Example of Replay-Based Text Classification
(2023)
Continuous learning is a research field that has grown significantly in recent years due to highly complex machine and deep learning models. Whereas static models need to be retrained entirely from scratch when new data become available, continuous models progressively adapt to new data, saving computational resources. In this context, this work analyzes parameters impacting replay-based continuous learning approaches using the example of a data-incremental text classification task with an MLP and an LSTM. Generally, it was found that replay improves the results compared to naive approaches but does not reach the performance of a static model. The performance increased with more replayed examples, and the number of training iterations has a significant influence, as it can partly control the stability-plasticity trade-off. In contrast, balancing the buffer and the strategy for selecting examples to store in the replay buffer were found to have a minor impact on the results in the present case.
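The replay mechanism analyzed above can be sketched as a fixed-size buffer of past examples mixed into each training step. The reservoir-sampling selection strategy shown here is one common, illustrative choice and is not necessarily the one used in the thesis:

```python
import random

class ReplayBuffer:
    """Fixed-size replay buffer using reservoir sampling.

    Every example seen so far has an equal chance of being kept,
    approximating an unbiased sample of the data stream.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0          # total examples observed
        self.storage = []      # kept (text, label) pairs
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(example)
        else:
            # Replace a stored item with probability capacity/seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = example

    def sample(self, k):
        k = min(k, len(self.storage))
        return self.rng.sample(self.storage, k)

# During incremental training, each new batch would be mixed
# with replayed examples drawn from the buffer:
buf = ReplayBuffer(capacity=3)
for i in range(10):
    buf.add((f"text_{i}", i % 2))
batch = buf.sample(2)
```

Increasing `capacity` corresponds to the observation above that performance grows with more replayed examples, at the cost of memory.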
The GeoFlow II experiment aims to replicate Earth's core dynamics using a rotating spherical container with controlled temperature differences and simulated gravity. During the GeoFlow II campaign, a massive dataset of images was collected, necessitating an automated system for image processing and fluid flow visualization in the northern hemisphere of the spherical container. From these, we aim to detect the special structures appearing in the post-processed images. Recognizing YOLOv5's proficiency in object detection, we apply the YOLOv5 model to this task.
This study explores the opportunities and risks associated with user-generated content (UGC) in the communication strategies of marketing departments from a business perspective. With the rise of social media and online platforms, UGC has become a powerful tool for brands to engage with their audience, build trust, and enhance brand awareness. However, implementing UGC also comes with inherent risks, including the loss of control over brand messaging, potential negative user-generated content, and legal implications.
To investigate these dynamics, an empirical mixed-methods approach was employed, including expert interviews and a comprehensive literature review. The findings indicate that UGC offers significant opportunities for marketing departments, such as increased customer loyalty, enhanced authenticity, brand awareness, as well as a diverse set of possible content. However, the study also reveals the potential risks associated with UGC, highlighting the importance of managing these risks effectively.
RNA tertiary contact interactions between RNA tetraloops and their receptors stabilize the folding of ribosomal RNA and support the maturation of the ribosome. Here we use FRET-assisted structure prediction to develop structural models of two ribosomal tertiary contacts, one consisting of a kissing loop (KL) and a GAAA tetraloop and one consisting of the tetraloop receptor (TLR) and a GAAA tetraloop. We build bound and unbound states of the ribosomal contacts de novo, label the RNA in silico and compute FRET histograms based on molecular dynamics (MD) simulations and accessible contact volume (ACV) calculations. The predicted mean FRET efficiencies from the MD simulations and the ACV determination agree for the KL-TLGAAA construct. The KL construct showed an excessively high FRET efficiency and artificial dye behavior, which requires further investigation of the model. In the case of the TLR, the importance of correct dye and construct parameters in the modeling was shown, which also calls for renewed modeling. This hybrid approach of experiment and simulation will promote the elucidation of dynamic RNA tertiary contacts and accelerate the discovery of novel RNA interactions as potential future drug targets.
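For context, the FRET efficiency that such predicted histograms are built from follows the standard Förster relation (a textbook formula, not one specific to this work):

```latex
% FRET efficiency as a function of the donor-acceptor distance r
% and the Foerster radius R_0 of the dye pair:
E(r) = \frac{1}{1 + \left( r / R_0 \right)^{6}}
% ACV-type calculations average E over the sterically accessible
% dye positions to obtain a mean efficiency comparable to experiment.
```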
The following thesis contains a detailed business plan for a Formula Student combustion racecar. This includes the evaluation of existing knowledge about the car, combined with the required information about the market and seed capital. Subsequently, the presented plan is discussed with regard to its implications for future business plans. In this connection, the acceptance of electromobility is evaluated, and first ideas for the presentation of an electric car are developed.
Over recent years, Maximal Extractable Value (MEV) has gained significant importance within the decentralized finance (DeFi) ecosystem. Remarkably, within just two years of its emergence, MEV has seen an extraction of approximately 600 million USD - a phenomenon that has sparked concerns regarding potential threats to blockchain stability.
With growing interest in the Ethereum network and the growing DeFi sector, research surrounding MEV has substantially increased. This work aims to offer a comprehensive understanding of MEV. Additionally, this research quantifies the largest types of MEV (Arbitrage, Sandwich and Liquidations) from March 2022 to March 2023. The data are then compared to other sources, revealing a general upward trend, with a particularly noticeable increase in Sandwich Attacks.
In the field of blockchain technology applications and research, non-fungible tokens (NFTs) have gained significant attention in recent years. While current research focuses on NFT use cases or the purchase of NFTs from an investor's perspective, the NFT launch (i.e. the primary market) from a creator's perspective remains unexplored. However, the launch strategy is considered an important factor for the success of a product. Therefore, our research paper aims to explore launch strategies for NFTs. We discuss the marketing-mix instruments price (i.e. pricing strategy), place (i.e. mint mechanism), and promotion. Through an empirical approach of conducting eight expert interviews, we examine the parameters used to define an NFT launch strategy and assess how different stakeholders prioritize them.
A Systematic Literature Review on Blockchain Oracles: State of Research, Challenges, and Trends
(2023)
To enable data exchange between a blockchain protocol (on-chain) and the real world (off-chain), e.g., non-blockchain-based applications and systems, a piece of software called an oracle is used [3]. The blockchain oracle is an important component in the use of off-chain data for on-chain smart contracts. However, there is limited scientific literature available on this important blockchain topic. Therefore, in this paper, a novel systematic literature review based on intelligent methods, e.g., information linking, topic clustering and focus identification through frequency calculations, is proposed. Thus, the current state of scientific research interest, content and challenges, and future research directions for blockchain oracles are identified. This paper shows that there is little unbiased literature that does not frame oracles as a problem. From the results of this new literature review framework, relevant areas of data handling and verification with blockchain oracles are identified for future research.
Safety, quality, and sustainability concerns have arisen from global supply chains. Stakeholders incur risk regarding these factors, given their significance and complexity. Thus, each business's supply chain risk management must prioritize product characteristics. Accordingly, an effective traceability solution that can monitor and regulate product and supply chain aspects is crucial. This research paper elucidates the potential of smart contracts on a blockchain to enhance the efficacy of business transactions and ensure comprehensive traceability within the supply chain of paper-based coffee cups. Improved levels of transaction transparency and security compared to traditional supply chains have been achieved through the digitization of supply chain ecosystem interactions and transactions. This approach makes verifying sources, manufacturing procedures, and quality standards easier in complex supply chains. Accordingly, the integration helps stakeholders monitor and track the whole ecosystem, promoting transparency, predictability, and dependability.
In the swiftly changing world of academic publishing, the Sea of Wisdom platform seizes the opportunity to innovate. By combining the technologies of blockchain, decentralized finance (DeFi), and Non-Fungible Tokens (NFTs) with traditional scholarly communication, we present a groundbreaking, decentralized solution. Our design, although adaptable, primarily uses Ethereum's Virtual Machine, tapping into its robust scientific community.
This desk research initiates an exploration of present and potential blockchain applications in the higher education sector of Europe. The aim of this research is to create a theoretical base for further postgraduate research and analysis, so as to create an effective model/framework to augment the integration of blockchain technology into existing organizational processes, initially in higher education institutions, but one that may be adaptable and generalizable to other specific uses. Due to the novelty of the topic, academic resources related to the research area are limited. Most studies seem to focus on blockchain-based applications in industries such as finance, healthcare, and supply chain management, and there is little evidence of the impact of blockchain technology on education. This paper discusses present, and suggests some potential, blockchain-based applications in education in Europe and beyond. This research provides groundwork for education and academia stakeholders, policymakers and researchers to exploit the potential of blockchain in different functions of an education system.
Currently, the Internet of Things (IoT) is connected to the virtual world through the Web of Things (WoT), allowing efficient utilization of real-world objects with Internet technologies. The WoT facilitates abstract interaction between applications and connected IoT devices, allowing owners to switch between devices while using multiple ones. To achieve this, virtual assets in WoT devices can be tokenized through smart contracts and transferred using hashed proofs as transactions within blockchain networks that support virtual currencies. The goal of the Web of Things is to establish connectivity, interoperability, and integration among IoT devices using web standards and protocols, reducing reliance on device manufacturers. This enables easy integration of Web 3.0 cryptocurrencies for device management. This study proposes a solution for WoT applications involving different cryptocurrency definitions. Finally, simulation results are presented to demonstrate tokenization-based ownership transfer in the Web of Things.
Decentralization is one of the key attributes associated with blockchain technology. Among the different developments in recent years, decentralized autonomous organizations (DAOs) have been of growing interest. DAOs are currently a key part of another emerging use case, namely decentralized science (DeSci). Given the novelty of the field, an integrative definition of DeSci has not been established, but some inherent concepts and ideas can be traced back to the Open Science movement. Although the DeSci movement has the potential to benefit the public, for example through funding underrepresented research areas or more inclusive and transparent research in general, some negative aspects of decentralization should not be neglected. Due to the novelty of blockchain and emerging use cases, research can and should precede mass adoption, to which this paper aims to contribute.
To investigate the effects of climate change on interactions within ecosystems, a microcosm experiment was conducted. The effects of temperature increase and predator diversity on Collembola communities and their decomposition rate were investigated. The predators used were mites and chilopods, whose predation effects on several response variables were analysed. These data included Collembola abundance, biomass and body mass, as well as basal respiration and microbial biomass carbon. These response variables were tested against the predictors in several models. Temperature showed high significance in interaction with mite abundance in almost all models. Furthermore, the results for basal respiration and microbial biomass carbon support the suggestion of a trophic cascade within the animal interactions.
The cryptocurrency ecosystem has seen significant growth with Ethereum and Bitcoin as foundational pillars. Ethereum introduced smart contracts revolutionizing decentralized applications (dApps) across various domains. Scalability challenges led to alternative ecosystems like Binance Smart Chain and Polygon, maintaining compatibility through the Ethereum Virtual Machine (EVM). Bitcoin also faces scalability issues, leading to the Lightning Network's development—an off-chain solution with payment channels for scalable instant transactions. Interoperability is increasingly crucial as the cryptocurrency ecosystem continues to grow, enabling seamless interactions between assets and data across multiple blockchain platforms. EVM-compatible blockchains and the Lightning Network offer unique advantages in their respective use cases. This paper utilizes atomic swaps to create a secure, fast, and user-friendly trustless bridge between the Lightning Network and EVM-compatible blockchains, fostering the growth of both ecosystems and unlocking novel opportunities.
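The atomic swap mechanism underlying such a trustless bridge can be illustrated with a minimal hashed timelock contract (HTLC) state machine. The class and field names below are illustrative, not taken from the paper:

```python
import hashlib
import time

class HTLC:
    """Minimal hashed timelock contract sketch for one leg of an atomic swap.

    Funds locked under hash H can be claimed with a preimage s where
    sha256(s) == H before the timeout; afterwards the funder may refund.
    Both chains lock under the same H, so revealing s to claim on one
    chain lets the counterparty claim on the other.
    """

    def __init__(self, hash_lock, timeout_s):
        self.hash_lock = hash_lock
        self.deadline = time.time() + timeout_s
        self.state = "locked"

    def claim(self, preimage):
        if (self.state == "locked" and time.time() < self.deadline
                and hashlib.sha256(preimage).digest() == self.hash_lock):
            self.state = "claimed"
            return True
        return False

    def refund(self):
        if self.state == "locked" and time.time() >= self.deadline:
            self.state = "refunded"
            return True
        return False

# Alice picks a secret and locks funds on chain A; Bob locks on chain B
# under the same hash. Alice then claims on B, revealing the secret.
secret = b"alice-secret"
h = hashlib.sha256(secret).digest()
leg_b = HTLC(hash_lock=h, timeout_s=3600)
assert leg_b.claim(b"wrong") is False   # wrong preimage is rejected
assert leg_b.claim(secret) is True      # correct preimage unlocks the funds
```

In practice, the timeouts on the two legs must be staggered so the party who reveals the secret last still has time to claim, which is the core safety argument of the swap.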
Reputation is indispensable for online business since it supports customers in their buying decisions and allows sellers to justify premium prices. While IS research has investigated reputation systems mainly as review systems on online platforms for business-to-consumer (B2C) transactions, no proper solutions have been developed for business-to-business (B2B) transactions yet. We use blockchain technology to propose a new class of reputation systems that apply ratings as voluntary bonus payments: Before a transaction is performed, customers commit to pay a bonus that is granted if a service provider has performed a service properly. As opposed to rival reputation systems that build on cumulated ratings or reviews, our system enables monetized reputation mechanisms that are inextricably linked with online transactions. We expect this system class to provide more trustworthy ratings, which might reduce agency costs and serve quality providers to establish a reputation towards new customers.
This scientific work reveals the potential for the development of the renewable energy market, for several reasons: the unstable political situation in the world, rising energy prices, environmental degradation and the growing demand of German residents for government measures to reduce the negative impact on the environment. This work is concerned with business planning and development using strategies based on the above reasons. The purpose of the study is to develop methods for successfully regulating the market for renewable resources in order to address the problem of environmental pollution through the promotion of environmentally friendly products. The work explores the driving forces behind, and the problems hindering, the development of the market for renewable resources. The problems raised concern all interested parties, from consumers and producers to the state body for regulating and stimulating the industry. An analysis was also made of the methods of environmentally oriented companies and the tools they use to strengthen their positions in the market. Based on the data obtained from the conducted research, a concept and business strategy for a new environmentally oriented business consulting company, “Sun’s generation”, was created. The idea of the new company is to involve all parties using marketing tools, creating a healthy competitive environment among commercial companies and benefiting not only the companies themselves but also the end users of the products and the German government.
The research of this thesis aims to analyze how a specific CSR approach from the Adidas Group on sustainability is perceived globally, based on an analysis of movements on the stock market combined with a sentiment analysis of tweet activity on Twitter. The thesis analyzed both positive feedback and criticism from customers worldwide regarding the approach and other initiatives from the Adidas Group and their partner Parley for the Oceans, a non-governmental organization working towards a more sustainable world.
The occurrence of prostate cancer (PCa) has been rising consistently for three decades, and it remains the third leading cause of cancer-related death after lung and bowel cancer in Germany. Despite new methods of early detection, such as prostate-specific antigen (PSA) testing, it remains the most common cancer in German men, with over 63,400 new diagnoses in Germany every year, and exhibits high prevalence in other countries of Northern and Western Europe as well [64]. Men over the age of 70 are most commonly affected by the lethal disease, whereas an onset before 50 is rare. The malignant prostate tumor can be cured by operation or irradiation as long as the cancer has not reached the stage of metastasis, in which other therapeutic methods have to be employed [14][15]. In the metastatic phase, the patient usually exhibits symptoms when the tumor's size affects the urethra or the cancer spreads to other tissue, often the bones [16].
The high prevalence of this disease marks the importance of further research into prognosis and diagnosis methods, whereby the identification of further biomarkers in PCa poses a major topic of scientific analysis. For this task, the effectiveness of high-throughput RNA sequencing of the transcriptome (the RNA molecules of an organism or specific cell type) is frequently exploited [66]. RNA sequencing, or RNA-Seq in short, offers the possibility of transcriptome assessment, enabling the identification of transcriptional aberrations in diseases as well as uncharacterized RNA species such as non-coding RNAs (ncRNAs), which remain undetected by conventional methods [49]. To facilitate interpretation of the sequenced reads, they are assembled to reconstruct the transcriptome as close to the original state as possible, thus enabling rapid detection of relevant biomolecules in the data [49]. Transcriptomic studies often require highly accurate and complete gene annotations on the reference genome of the examined organism. However, most gene annotations and reference genomes are far from complete, containing a multitude of unidentified protein-coding and non-coding genes and transcripts. Therefore, refinement of reference genomes and annotations by inclusion of novel sequences, discovered in high-quality transcriptome assemblies, is necessary [24].
Glycans play an important role in the intracellular interactions of pathogenic bacteria. Pathogenic bacteria possess binding proteins capable of recognizing certain sugar motifs on other cells, which are found in glycan structures. Artificial carbohydrate synthesis allows scientists to recreate those sugar motifs in a rational, precise, and pure form. However, due to the high specificity of sugar-binding proteins, known as lectins, to glycan structures, methods for identifying suitable binding agents need to be developed. To tackle this hurdle, the Fraunhofer Institute for Cell Therapy and Immunology (Fraunhofer IZI) and the Max-Planck Institute of Colloids and Interfaces (MPIKG) developed a binding assay for the high throughput testing of sugar motifs that are presented on modular scaffolds formed by the assembly of four DNA strands into simple, branched DNA nanostructures. The first generation of this assay was used in combination with bacteria that express a fluorescent protein as a proof-of-concept. Here, the assay was optimized to be used with bacteria not possessing a marker gene for a fluorescent protein by staining their genomic DNA with SYBR® Green. For the binding assay, DNA nanostructures were combined with artificially synthesized mannose polymers, typical targets for many lectins on the surface of bacteria, presenting them in a defined constellation to bind bacteria strongly due to multivalent cooperativity. The testing of multiple mannose polymers identified monomeric mannose with a 5’-carbon linker and 1,2-linked dimeric mannose with linker as the best binding candidates for E. coli, presumably due to binding with the FimH protein on the surface. Despite similarities between the FimH proteins of E. coli and K. pneumoniae, binding was only observed between E. coli and the different sugar molecules on DNA structures. 
Furthermore, the degree of free movement seemed to affect the binding of mannose polymers to targeted proteins, since an increase in binding could be observed when utilizing a more flexible DNA nanostructure. An alternative to the simple DNA nanostructures described above is the use of larger, more complex DNA origami structures consisting of several hundred strands. DNA origami structures are capable of carrying dozens of modifications at the same time. The results for the DNA origami structure showed successful functionalization with up to 71 molecules of 1,2-linked dimeric mannose with linker. These results point towards a solution for the high-throughput analysis of potential binding agents for pathogenic bacteria, e.g. as an alternative treatment for antibiotic-resistant bacteria.
Cryptorchidism is the most common disorder of sex development in dogs. It describes the failure of one or both testes to descend into the scrotum in due time. It is a heritable, multifactorial disease. In this work, selected dogs of a German sheep poodle breed were sequenced using nanopore sequencing and subsequently examined for genetic variants correlating with cryptorchidism. The relationships of the studied dogs were also analyzed and visually processed.
Assessment of COI and 16S for insect species identification to determine the diet of city bats
(2023)
Despite the numerous benefits of urbanization for human living conditions, urbanization has also negatively affected humans, their environment, and other organisms that share urban habitats with humans. While some wild animals avoid living in urban areas, others are more tolerant of, or even prefer, life in urban habitats. There are more than 1,400 species of bats in the world.
Therefore, they have the potential to contribute significantly to mammalian biodiversity in urban areas. Insectivorous bat species play a key role in agriculture by improving yields and reducing chemical pesticide costs. Using metabarcoding, it is possible to determine the prey consumed by these nocturnal mammals based on the DNA fragments in their fecal pellets. This study
aimed to evaluate COI and 16S metabarcodes for insect species identification to determine the diet of metropolitan bats. For this purpose, COI and 16S metabarcodes were extracted, amplified, and sequenced from 65 bat feces collected in the Berlin metropolitan area. Following taxonomic annotation, I found that 73% of all identified insects could only be detected using the COI method, while 15% could only be recovered using the 16S approach. Just 12% of all detected insects were identified simultaneously by both markers. According to this result, COI is more suitable for the taxonomic identification of insects from bat feces. However, given the bias of COI primers, it is recommended to use both markers for a more precise estimation of species diversity. Additionally, based on the insect species identified, I noticed that urban bats fed mainly on Diptera, Coleoptera, and Lepidoptera. The bat species Nyctalus noctula was most abundant in the samples. Its diet analysis revealed that 91% of the samples contained the insect species Chironomus plumosus. Fourteen pest insect species were also found in its diet.
In the field of satellites, it is common practice to combine multiple ground stations into one network to increase communication times with satellites. This work focuses on TIM, an international academic collaborative project. Important criteria for this project are elaborated and used to evaluate existing ground station networks. It concludes that there is no appropriate solution available for this specific use case and proposes a solution. The proposed ground station network software is elaborated and evaluated.
Our current research aims to establish a complete ribonucleic acid (RNA) production line, from plasmid design to purification of in vitro transcribed RNA and labeling of the RNA. RNA is the central molecule within the central dogma of molecular biology and is involved in most essential processes within a cell [1]. In many cases, only the compact three-dimensional structure of the respective RNA is able to fulfill its function. In this context, RNA tertiary contacts such as kissing loops and pseudoknots are essential to stabilize three-dimensional folding [2]. We will produce a tertiary contact consisting of a kissing loop and a GAAA tetraloop that occurs in eukaryotic ribosomal RNA [3,4]. The RNA sequence is integrated into a vector plasmid. Subsequently, the plasmid is amplified in E. coli. After subsequent plasmid purification steps, the RNA sequence will be transcribed in vitro [5,6]. For the RNA to be used in Förster resonance energy transfer (FRET) experiments at the single-molecule level, fluorescent dyes must be coupled to the RNA molecule [7].
Recently, deep neural network architectures designed to work on graph-structured data have been attracting notice and are being implemented in various domains and applications. Learning representations (feature embeddings) from graph data is picking up pace in research, while constructing graphs from datasets remains a challenge. The ability to map the data to lower dimensions makes the task easier and simplifies many downstream operations. The graph neural network (GNN) is one of the novel neural network models catching attention, as it performs strongly in various applications like recommender systems, social networks, chemical synthesis, and many more. This thesis discusses a unique approach to a fundamental task on graphs: node classification. The feature embedding for a node is aggregated by applying a recurrent neural network (RNN); a GNN model is then trained to classify a node with the help of the aggregated features, and Q-learning assists in optimizing the shape of the neural networks. This thesis starts with the working principles of the feedforward neural network, recurrent units like the simple RNN, long short-term memory (LSTM), and gated recurrent unit (GRU), followed by the concepts of reinforcement learning (RL) and the Q-learning algorithm. An overview of the fundamentals of graphs, followed by the GNN architecture and workflow, is discussed subsequently. Some basic GNN models are discussed briefly before the thesis approaches the technical implementation details, the output of the model, and a comparison with a few other models such as GraphSAGE and the graph attention network (GAT).
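The neighborhood aggregation at the heart of such GNN layers can be sketched in a few lines. This is a generic, dependency-free illustration of one GraphSAGE-style mean-aggregation step on a hypothetical toy graph, not the thesis's RNN-based aggregator; real layers add learned weights and nonlinearities:

```python
# One message-passing step: each node's new embedding is its own feature
# vector concatenated with the mean of its neighbors' feature vectors.

def mean_aggregate(features, adjacency):
    """features: node id -> feature vector; adjacency: node id -> neighbor ids."""
    aggregated = {}
    for node, neighbors in adjacency.items():
        if neighbors:
            dim = len(features[node])
            mean = [sum(features[n][d] for n in neighbors) / len(neighbors)
                    for d in range(dim)]
        else:
            mean = [0.0] * len(features[node])
        # Concatenate self features with the neighborhood mean
        aggregated[node] = features[node] + mean
    return aggregated

features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adjacency = {0: [1, 2], 1: [0], 2: [0]}
out = mean_aggregate(features, adjacency)
# Node 0 keeps [1.0, 0.0] and appends the mean of nodes 1 and 2: [0.5, 1.0]
```

A classifier head on top of such aggregated embeddings then performs the node classification task described above; stacking several steps lets information flow from multi-hop neighborhoods.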
The games industry has grown significantly over the last 30 years. Projects are getting bigger and more expensive, making it essential to plan, structure and track them more efficiently.
The growth of projects has increased the administrative workload for producers, project managers and leads, who have to monitor and control progress in order to keep a constant overview of the project. This is often accompanied by a lack of insight into the project and of basic communication within the team. The goal of this thesis is therefore to enhance conventional project management methods with process structures that occur frequently in game development.
This thesis first elaborates on what project management in the games industry actually is. Here, methods are considered, especially agile methods and progress-tracking practices, which were created for software development and have become a standard in game development. Subsequently, an example is used to demonstrate how process management can function within the development of video games. Based on this, an ideal is outlined, which is implemented and used in a tool at the German games studio KING Art GmbH. This ideal is compared against expert interviews in order to verify its general validity in the industry.
By integrating process structures, the administrative effort can be reduced and communication within game development simplified, while the current project status remains permanently visible. This benefits project management and leads as well as the entire team. Further application tests of this approach would have to be organized to check its scalability and to draw comparisons with other applications.
In the past few years, social media has become the most popular communication medium, replacing phone calls, text messages, television and even advertisements, and it has become the most important channel for spreading opinions. As a result of this trend, many politicians have also started to operate social media accounts (Wang, Tsai, & Chen, 2019). This study was conducted in order to understand whether there was an inter-candidate agenda-setting effect between the Facebook posts of legislative candidates and presidential candidates during the election period, and whether the legislative candidates' Facebook posts were influenced by the presidential candidates' Facebook posts. The target population of this study was the three presidential candidates in Taiwan's 2020 presidential election — Dr. Tsai Ing-Wen, Mr. Han Kuo-Yu, and Mr. James Soong — as well as the 36 legislative candidates in Taipei, Taichung, and Kaohsiung.
The study focused on Facebook posts from 1st November 2019 to 10th January 2020, the 10 weeks before voting day. Text mining and cosine similarity were used to organize the posts and to compare the similarity between them. Finally, the similarity between posts was presented as a line graph.
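The cosine-similarity comparison used in the study can be sketched as follows; the whitespace tokenisation and the two example "posts" are illustrative assumptions, and the study's actual preprocessing of Chinese-language posts would differ.

```python
# Cosine similarity between two posts represented as word-count vectors:
# the dot product of the counts, normalised by the vector lengths.
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two posts sharing two of three terms -> similarity 2/3
print(cosine_similarity("tax reform policy", "tax reform debate"))
```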
The study revealed that there was an inter-candidate agenda-setting effect between legislative candidate posts and presidential candidate posts, and that Dr. Tsai Ing-Wen, who was also the incumbent president during the campaign, was the most influential Facebook poster during the entire election.
Future research on the inter-candidate agenda-setting effect is proposed, analyzing only the similarity of posts among the candidates in order to discuss the influence of the candidates' Facebook agenda-setting during a specific election period.
This is the first study in which the Facebook posts of Taiwanese politicians are analyzed and their relationships systematically compared across multiple degrees, which opens up a whole new subject for future elections in Taiwan.
Since its foundation as an application of algebra, coding theory has been gaining importance day by day. For instance, any communication system needs the concepts of coding theory to function efficiently. In this thesis, the reader will find an introductory explanation of linear codes and binary Hamming codes, including some of the algebraic tools devised in their applications. All the described software applications are verified with SageMath 9.0 on Hochschule Mittweida's JupyterHub.
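The binary Hamming codes treated in the thesis can be illustrated in plain Python as well (the thesis itself uses SageMath); the sketch below implements the classic (7,4) code with parity bits at positions 1, 2 and 4, where the syndrome directly names the position of a single bit error.

```python
# (7,4) Hamming code sketch: encode 4 data bits with 3 parity bits,
# then locate and correct a single flipped bit via the syndrome.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # parity at positions 1, 2, 4

def hamming74_correct(codeword):
    c = list(codeword)
    s = 0
    for i, bit in enumerate(c, start=1):
        if bit:
            s ^= i                         # XOR of 1-based positions of set bits
    if s:                                  # non-zero syndrome -> flip that position
        c[s - 1] ^= 1
    return c

code = hamming74_encode([1, 0, 1, 1])
corrupted = code[:]
corrupted[2] ^= 1                          # flip one bit
assert hamming74_correct(corrupted) == code
```

A valid codeword has syndrome zero, so uncorrupted words pass through unchanged.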
As the cryptocurrency ecosystem rapidly grows, interoperability has become increasingly crucial, enabling assets and data to interact seamlessly across multiple chains. This work describes the concept and implementation of a trustless connection between the Bitcoin Lightning Network and EVM-compatible blockchains, allowing the transfer of assets between the two ecosystems. Establishing such a connection can significantly contribute to the growth of both ecosystems, as they can benefit from each other's advantages and open up new possibilities.
In this work, a transgenic zebrafish line that expresses the fluorophore dsRed under the endogenous zebrafish cochlin promoter was to be established using the CRISPR/Cas9 system. dsRed was cloned into a pBluescript vector, followed by the cloning of the cochlin locus into this vector. This bait construct was then to be microinjected into wild-type AB zebrafish embryos. The microinjection of Cas9 mRNA, single guide RNA and a bait construct was practiced with the tyrosinase gene, which was disrupted using CRISPR/Cas9.
This thesis investigates the efficacy of four machine learning algorithms, namely linear regression, decision tree, random forest and neural network, in the task of lead scoring. Specifically, the study evaluates the performance of these algorithms on datasets without sampling, with random under-sampling, and with over-sampling using SMOTE. The performance of each algorithm is measured using various performance metrics, including accuracy, AUC-ROC, specificity, sensitivity, precision, recall, F1 score, and G-mean. The results indicate that models trained on the dataset without sampling achieved higher accuracy than those trained on the dataset with either random under-sampling or over-sampling using SMOTE. However, the neural network demonstrated remarkable results on every dataset compared to the other algorithms. These findings provide valuable insights into the effectiveness of machine learning algorithms for lead scoring tasks, particularly when using different sampling techniques, and can aid lead management practitioners in selecting the most suitable algorithm and sampling technique for their needs. Furthermore, the study contributes to the literature by providing a comprehensive evaluation of the performance of machine learning algorithms for lead scoring tasks. This thesis has practical implications for businesses looking to improve their lead management practices, and future research could extend the analysis to other machine learning algorithms or more extensive datasets.
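The metrics named above all derive from the binary confusion matrix; the sketch below shows the standard formulas with illustrative counts (not results from the thesis).

```python
# Classification metrics from a binary confusion matrix (sketch):
# sensitivity (recall), specificity, precision, F1 and G-mean.
from math import sqrt

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                  # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    g_mean = sqrt(sensitivity * specificity)      # balances both class rates
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1,
            "g_mean": g_mean, "accuracy": accuracy}

print(metrics(tp=40, fp=10, tn=80, fn=20))
```

Unlike plain accuracy, the G-mean collapses to zero if either class is ignored entirely, which is why it is favoured for imbalanced data.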
How Covid-19 impacts the workplace of knowledge workers in a pandemic and post pandemic world
(2021)
The following master thesis covers the topic of the workplace. The focus lies on the corona pandemic and how the pandemic has affected and will continue to affect the workplaces of knowledge workers. To this end, the workplace as a research area is described holistically, followed by the presentation of gathered secondary data and of the in-depth interviews conducted by the author. The secondary and primary data agree that the workplace as people know it will change after the pandemic. The most likely outcome is the hybrid workplace concept, which mixes the home office, the office and, alternatively, third places. Companies have to be equipped and prepared for these changes. The meaning of the office will increase, and it has to be redesigned in order to meet the needs of the knowledge workers who will eventually return to it.
In machine learning, Learning Vector Quantization (LVQ) is well known as a supervised vector quantization method. LVQ has been studied to generate optimal reference vectors because of its simple and fast learning algorithm [2]. In many classification tasks, different variants are considered while training a model, and considering large-margin variants of LVQ helps to obtain significant results [20]. Large margin LVQ (LMLVQ) maximizes the distance between the decision hyperplane and the data points. In this thesis, a comparison of different variants of Generalized Learning Vector Quantization (GLVQ) and large-margin LVQ is presented, along with visualization, implementation and experimental results.
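The prototype-based learning underlying all these variants can be sketched with a minimal LVQ1-style update, a simplified relative of GLVQ: the winning prototype is attracted to a correctly classified sample and repelled otherwise. Data, labels and learning rate below are illustrative assumptions.

```python
# One LVQ1 update step (sketch): find the nearest prototype by squared
# Euclidean distance, then move it towards the sample if the class labels
# match, away from it if they do not.

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    dists = [sum((w_i - x_i) ** 2 for w_i, x_i in zip(w, x)) for w in prototypes]
    k = dists.index(min(dists))              # winner prototype
    sign = 1.0 if labels[k] == y else -1.0   # attract vs. repel
    prototypes[k] = [w_i + sign * lr * (x_i - w_i)
                     for w_i, x_i in zip(prototypes[k], x)]
    return k

prototypes = [[0.0, 0.0], [1.0, 1.0]]
labels = [0, 1]
winner = lvq1_step(prototypes, labels, [0.2, 0.1], 0)
print(winner, prototypes[0])
```

GLVQ replaces this heuristic with a differentiable cost over the distances to the closest correct and closest wrong prototype, optimized by gradient descent.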
With the growing market of cryptocurrencies, blockchain is becoming central to various research areas relevant from a mathematical and cryptographic point of view. Moreover, it is capable of transforming traditional methods involving centralized network operations into decentralized peer-to-peer functionalities. At the same time, it provides an alternative for digital payments in a robust and tamper-proof manner by adding the element of cryptography, consequently making the ledger verifiable by any individual who is part of the blockchain network. Furthermore, for a blockchain to be optimal and efficient, it must handle the blockchain trilemma of security, decentralization, and scalability constraints in an effective manner. Algorand, a blockchain cryptocurrency protocol intended to solve blockchain's trilemma, is studied and discussed. It is a permissionless (public) blockchain protocol and uses pure proof of stake as its consensus mechanism.
Generating electricity from wind power is one of the fastest growing methods in the world. The kinetic energy of the moving air is converted into electricity by wind turbines that are installed in places where the weather conditions are most favorable.
Wind turbines can be used individually, but are often grouped together to form wind parks, also called wind farms. Electricity generated by wind parks can be used to meet local needs or to supply an electricity distribution network for homes and businesses further away.
Energy obtained from the wind can also be converted into hydrogen and used as transport fuel, or stored for subsequent electricity generation. This form of energy reduces the impact of electricity generation on the environment, as it does not require fuel and does not produce pollutants or greenhouse gases.
Wind energy is growing significantly: since 1994 the world market has grown by around 30% per year. The installed capacity worldwide rose from 17,400 MW to 650,560 MW between 2000 and the end of 2019. In the European market, which concentrates most of the world's wind farms, Germany remains the leader with almost half of the total capacity. Spain recorded the strongest growth in the last three years, with an annual growth rate of 28%. Europe also concentrates industrial and technological activity: eight European manufacturers are among the top ten in the world, with 70% of the devices sold in 2018.
The objective of this bachelor project is the creation of a tool to support forensic investigators during IT-forensic interventions. It uses Kismet as the base program and adds functionalities to it via the plugin interface. The thesis explains how to install the plugin, how it works, and gives a recommendation on how to use it. To convey the underlying basics, an introduction to WLAN and Bluetooth is given. The tests performed with the new plugin are described, as are their results. It is then briefly discussed why the tool is applicable for locating Wi-Fi devices, especially access points, but not Bluetooth devices. Building on all this, a few ideas on how to improve the tool and what can be researched in this area are provided.
There are multiple ways to gain information about an individual and their health status, but an increasingly popular field in medicine is the analysis of human breath, which carries a lot of information about metabolic processes within the individual's body. The information in exhaled breath consists of volatile (organic) compounds (VOCs). These VOCs are products of metabolic processes within the body and thus might be indicators of diseases disturbing those processes. The compounds can be detected by mass-spectrometric (MS) or ion-mobility-spectrometric (IMS) techniques, so the analysis of these compounds is not limited to exhaled breath. The resulting data is spectral data, capturing the concentrations of the VOCs indirectly through intensities. About 3000 VOCs [1] have already been identified in human exhaled breath, and the number of research papers on VOC analysis and detection has risen almost constantly over the last decade. Furthermore, the technique for identifying VOCs can also be used to capture biomarkers from alien species within the individual's body. Extracting VOCs from an individual can be done by non- or minimally invasive techniques. However, the manual identification of VOCs and biomarkers related to a certain disease or infection is not feasible due to the complexity of the samples and the often unknown metabolic products; automated techniques are therefore needed. [1–4] To establish breath analysis as a diagnostic tool, machine learning methods could be used. Machine learning has become a popular and common technique for dealing with medical data due to the rapid analysis it enables. Taking this advantage, breath analysis using machine learning could become the model of choice for diagnosis, keeping in mind that conventional methods are laboratory based and, when trying to detect a bacterial infection, sometimes need several days to identify the organism. [5]
This thesis researches the platform YouTube and whether "being a YouTuber" qualifies as a profession, and what leads to this conclusion. The author combines existing scientific data with information provided by YouTubers who do this as a job, and uses the compilation method to merge that material into a bachelor thesis covering both the theoretical and the practical approach. The aim was to find out whether there is a recipe for success that can be followed to gain the views and clicks which are essential for the profession of a YouTuber. To do this, the author created two channels to see how the factors mentioned in this thesis are applied and whether the approach leads to success. The findings showed that although being a YouTuber can be classified as a job, it needs to be viewed differently from commonly known and socially accepted careers. Becoming a YouTuber and making money from this business therefore cannot be guaranteed.
In this work, a protocol for portable nanopore sequencing of DNA from pollen collected from honey bees, bumble bees, and wild bees was developed. DNA metabarcoding is applied to identify genera within the mixed DNA samples. The DNA extraction and ITS and ITS2 PCR parameters tested for this purpose were applied to the collected pollen samples, and the amplicons were then sequenced using the Flongle adapter from Oxford Nanopore Technologies. It is shown that the main pollinator resources at the different sites can be identified in percentage proportions. The protocol generated in this study can be used for further ecological questions.
Drought is one of the most common and dangerous threats plants have to face, costing the global agricultural sector billions of dollars every year and leading to the loss of tons of harvest. Until people drastically reduce their consumption of animal products or cellular agriculture comes of age, more and more crops will need to be produced to sustain the ever-growing human population. Even then, as more areas on earth become prone to drought due to climate change, we may still have to find or breed plant varieties better suited to grow and prosper in these changing environments.
Plants respond to drought stress with a complex interplay of hormones, transcription factors, and many other functional or regulatory proteins, and mapping out this web of agents is no trivial task. In the last two to three decades, machine learning has become immensely popular and is increasingly used to find patterns in situations that are too complex for the human mind to grasp. Even though much of the hype focuses on the latest developments in deep learning, relatively simple methods often yield superior results, especially when data is limited and expensive to gather.
This Master Thesis, conducted at the IPK in Gatersleben, develops an approach for shedding light on the phenotypic and transcriptomic processes that occur when a plant is subjected to stress. It centers around a random forest feature selection algorithm and although it is used here to illuminate drought stress response in Arabidopsis thaliana, it can be applied to all kinds of stresses in all kinds of plants.
Genetic sequence variations at the level of gene promoters influence the binding of transcription factors. In plants, this often leads to differential gene expression across natural accessions and crop cultivars. Some of these differences are propagated through molecular networks and lead to macroscopic phenotypes. However, the link between promoter sequence variation and the variation of its activity is not yet well understood. In this project, we use the power of deep learning in 728 genotypes of Arabidopsis thaliana to shed light on some aspects of that link. Convolutional neural networks were successfully implemented to predict the likelihood of a gene being expressed from its promoter sequence. These networks were also capable of highlighting known and putative new sequence motifs causal for the expression of genes. We tested our algorithms in various scenarios, including single and multiple point mutations, as well as indels on synthetic and real promoter sequences and the respective performance characteristics of the algorithm have been estimated. Finally, we showed that the decision boundary to classify genes as expressed and non-expressed depends on the sensitivity of the transcriptome profiling assay and changing it has an impact on the algorithm’s performance.
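Before a convolutional network can score a promoter, the DNA string must be turned into a numeric matrix; the standard one-hot encoding below is a sketch of that preprocessing step, and the project's exact pipeline may differ in details such as padding or strand handling.

```python
# One-hot encoding of a DNA sequence (sketch): each base becomes a
# 4-dimensional indicator vector over the alphabet A, C, G, T, yielding
# the (length x 4) matrix a convolutional network consumes.

BASES = "ACGT"

def one_hot(sequence):
    """Encode a DNA string as a list of 4-dim indicator vectors."""
    return [[1 if base == b else 0 for b in BASES]
            for base in sequence.upper()]

matrix = one_hot("ACGT")
print(matrix)
```

Convolutional filters sliding over such a matrix act as learned motif detectors, which is what allows the trained network to highlight causal sequence motifs.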
Data streams change their statistical behaviour over time. These changes can occur gradually or abruptly, for unforeseen reasons, and may affect the expected outcome. It is therefore important to detect concept drift as soon as it occurs. In this thesis we chose a distance-based methodology to detect the presence of concept drift in data streams. We used Generalized Learning Vector Quantization (GLVQ) and Generalized Matrix Learning Vector Quantization (GMLVQ) classifiers for the distance calculation between prototypes and data points. Chi-square and Kolmogorov–Smirnov tests are used to compare the distance distributions of the test and training data sets and thus indicate the presence of drift.
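The two-sample Kolmogorov–Smirnov statistic used for that comparison can be sketched as the largest gap between the two empirical distribution functions; the sample values below are illustrative, and the critical-value lookup that turns the statistic into a drift decision is omitted.

```python
# Two-sample Kolmogorov-Smirnov statistic (sketch): the maximum absolute
# difference between the empirical CDFs of two samples, e.g. the
# prototype-distance distributions of training and test data.

def ks_statistic(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        return sum(1 for v in sorted_sample if v <= x) / len(sorted_sample)

    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Shifted samples -> large statistic, hinting at drift
print(ks_statistic([1, 2, 3, 4], [3, 4, 5, 6]))
```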
Anomaly detection is a very acute technical problem for various business enterprises. In this thesis a combination of the Growing Neural Gas and Generalized Matrix Learning Vector Quantization is presented as a solution, based on collected theoretical and practical knowledge. The whole network is described and implemented, along with references and experimental results. The proposed model is carefully documented, and the remaining open research questions are stated for future investigation.
The aim of this bachelor thesis is to find out how the use of artificial intelligence, specifically in combat situations, can increase the playing time or even the replay value of games in the action role-playing genre. It focuses mainly on combat situations between a player and an artificial intelligence.
To begin with, this bachelor thesis examines the action role-playing genre in order to find a suitable definition for it. Accordingly, action role-playing games involve titles that send the player on a hero's-journey-like adventure in which they must prove their skills in combat against virtual opponents. The greatest challenge of these real-time battles comes from the required quick reflexes, skill checks and hand-eye coordination.
Next, six means of increasing the replayability of a game are explored: Experience and Nostalgia, Variety and Randomness, Goals and Completion, Difficulty, Learning, and Social Aspect. The paper then proceeds to give an explanation for the term Artificial Intelligence and examines the various methods used to create intelligent behavior as well as the general advancement of the research field. Special attention is given to the implementation methods of Finite State Machines and Behavior Trees, as they are the most widely used methods for creating behavioral patterns of virtual characters.
Finally, a study conducted as part of the bachelor thesis is described, which compares a mathematically balanced artificial intelligence with a behaviorally balanced one in terms of game performance regarding the willingness of test subjects to purchase and play through the game as well as its replay value. The thesis concludes with the findings that while the behavioral approach is more promising than the mathematical approach, a combination of the two methods ultimately leads to the best outcome. Furthermore, the study shows that the use of artificial intelligence to individualize gaming experiences is promising for the future of the gaming industry.
Pollinating insects are of vital importance for the ecosystem, and their drastic decline has severe consequences for the environment and humankind. Comprehending their interaction networks is the first step towards preserving these highly complex systems. For that purpose, the following study describes a protocol for investigating honey bee pollen samples from different agro-environmental areas by DNA extraction, PCR amplification and nanopore sequencing of the barcode regions rbcL and ITS. It was shown that the most abundant species were classified consistently by both DNA barcodes, while species richness was enhanced by single-barcode detection of less abundant species. The analysis of the different landscape variables exhibited a decline of species richness, Shannon diversity index, and species evenness with increasing organic crop area. However, sampling was only carried out in August, and further investigations are suggested to give a more complete picture of honey bee foraging throughout the seasons.
Where does the cocoa, which we consume on a regular basis, come from? Supply chains are not always transparent, much less easily comprehensible. The cocoa industry faces ongoing challenges, whether it be the chocolate manufacturers' promise to maintain a sustainable and ethical supply chain, to minimize the impact on the environment, or to maximally adhere to human rights in their production process. This paper reviews important steps which lead to compliance with UN standards and questions the role of consumers in the construct of ethical chocolate products.
In response to prevailing environmental conditions, Arabidopsis thaliana plants must increase their photosynthetic capacity to acclimate to potentially harmful high-light stress. To measure these changes in acclimation capacity, different high-throughput imaging-based methods can be used. In this master thesis we studied different Arabidopsis thaliana knockout mutants and accessions in their capacity to acclimate to potentially harmful high-light and cold-temperature conditions, using a high-throughput phenotyping system with an integrated chlorophyll fluorescence measurement system. To determine the acclimation capacity, Arabidopsis thaliana knockout mutants of genes not previously associated with high light, as well as accessions of two different haplotype groups carrying a reference or alternative allele from different countries of origin, were grown under switching high-light and temperature conditions. Photosynthetic analysis showed that knockout mutant plants differed from the wildtype in their Photosystem II operating efficiency during an increased light irradiance switch, but did not differ significantly a week later under the same circumstances. High-throughput phenotyping of haplotype accessions revealed significantly better acclimation capacity in non-photochemical quenching and steady-state photosynthetic efficiency in accessions domiciled in Russia with an altered SPPA gene during high-light and cold stress.
Sequences are an important data structure in molecular biology, but unfortunately it is difficult for most machine learning algorithms to handle them, as they rely on vectorial data. Recent approaches include methods that rely on proximity data, such as median and relational Learning Vector Quantization. However, many of them are limited in the size of the data they are able to handle. A standard method to generate vectorial features for sequence data does not exist yet. Consequently, a way to make sequence data accessible to preferably interpretable machine learning algorithms needs to be found. This thesis will therefore investigate a new approach called the Sensor Response Principle, which is being adapted to protein sequences. Accordingly, sequence similarity is measured via pairwise sequence alignments with different sequence alignment algorithms and various substitution matrices. The measurements are then used as input for learning with the Generalized Learning Vector Quantization algorithm. A special focus lies on sequence length variability as it is suspected to affect the sequence alignment score and therefore the discriminative quality of the generated feature vectors. Specific datasets were generated from the Pfam protein family database to address this question. Further, the impact of the number of references and choice of substitution matrices is examined.
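The pairwise alignment scores that feed the Sensor Response Principle can be sketched with a global (Needleman–Wunsch) dynamic program; here a simple match/mismatch scheme stands in for the substitution matrices (e.g. BLOSUM) that the thesis actually compares, and the example sequences are illustrative.

```python
# Needleman-Wunsch global alignment score (sketch): fill a DP table where
# each cell is the best of a (mis)match, a gap in one sequence, or a gap
# in the other, and return the bottom-right score.

def nw_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        score[i][0] = i * gap                  # leading gaps in seq_b
    for j in range(cols):
        score[0][j] = j * gap                  # leading gaps in seq_a
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

print(nw_score("GATTACA", "GATCA"))
```

Scoring one sequence against a fixed set of reference sequences then yields the vectorial features that GLVQ can learn from; note that such scores grow with sequence length, which is exactly the length-variability effect the thesis investigates.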
In this thesis, we focus on using machine learning to automate manual or rule-based processes for the deduplication task of the data integration process in an enterprise customer experience program. We study the underlying theoretical foundations of the most widely used machine learning algorithms, including logistic regression, random forests, extreme gradient boosting trees, support vector machines, and generalized matrix learning vector quantization. We then apply those algorithms to a real, private data set and use standard evaluation metrics for classification, such as the confusion matrix, precision and recall, the area under the precision-recall curve, and the area under the receiver operating characteristic curve, to compare their performances and results.
Differentiation is ubiquitous in mathematics and especially in machine learning, where it underlies the calculations in gradient-based models. Calculating gradients can be complex and may require handling multiple variables. Supervised Learning Vector Quantization models, which are used for classification tasks, also use the stochastic gradient descent method to optimize their cost functions. There are various methods to calculate these gradients or derivatives, namely manual differentiation, numeric differentiation, symbolic differentiation, and automatic differentiation. In this thesis, we evaluate each of these methods for calculating derivatives and compare their use for the variants of Generalized Learning Vector Quantization algorithms.
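Of the four methods, forward-mode automatic differentiation admits a particularly compact sketch via dual numbers: each value carries its derivative along through the arithmetic. The class below supports only addition and multiplication and is illustrative, not the thesis's implementation.

```python
# Forward-mode automatic differentiation with dual numbers (sketch):
# a Dual holds a value and the derivative propagated alongside it
# by the sum and product rules.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the input with derivative 1 and read the result's derivative."""
    return f(Dual(x, 1.0)).deriv

# d/dx (x^2 + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))
```

Unlike numeric differentiation, this is exact to machine precision, and unlike symbolic differentiation it needs no expression manipulation, which is why automatic differentiation dominates in gradient-based learning frameworks.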
Financial fraud can cause huge monetary losses for banks. Studies have shown that, if not mitigated, financial fraud can lead to bankruptcy for big financial institutions and even insolvency for individuals. Credit card fraud is an ever-growing type of financial fraud, and case numbers are expected to increase exponentially in the future, which is why many researchers are focusing on machine learning techniques for detecting fraud. This, however, is not a simple task, for mainly two reasons:
• varying behaviour in committing fraud
• high level of imbalance in the dataset (the majority of normal or genuine cases largely outnumbers the number of fraudulent cases)
When such an unbalanced dataset is provided as input, a predictive model usually tends to be biased towards the majority class.
In this thesis, the problem is tackled by implementing a data-level approach in which different resampling methods, such as undersampling, oversampling, and hybrid strategies, along with bagging and boosting algorithmic approaches, are applied to a highly skewed dataset with 492 identified frauds out of 284,807 transactions.
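Random undersampling, the simplest of the resampling strategies mentioned, can be sketched as follows: the majority class is cut down to the size of the minority class before training. The toy data, labels and fixed seed are illustrative assumptions.

```python
# Random undersampling (sketch): keep all minority-class samples and an
# equally sized random subset of each larger class, yielding a balanced
# training set.
import random

def random_undersample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    n_min = min(len(group) for group in by_class.values())
    balanced = []
    for y, group in by_class.items():
        for x in rng.sample(group, n_min):   # subsample without replacement
            balanced.append((x, y))
    return balanced

data = list(range(10))
labels = [0] * 8 + [1] * 2                   # 8 genuine vs. 2 fraud cases
print(random_undersample(data, labels))
```

The trade-off is discarding majority-class information, which is why hybrid strategies and oversampling are evaluated alongside it.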
Predictive modelling algorithms like Logistic Regression, Random Forest, and XGBoost have been implemented along with different resampling techniques to predict fraudulent transactions.
The performance of the predictive models was evaluated using the area under the Receiver Operating Characteristic curve (AUC-ROC), the area under the Precision-Recall curve (AUC-PR), and the precision, recall, and F1-score metrics.
Embeddings for Product Data
(2022)
The e-commerce industry has grown exponentially in the last decade, with giants like Amazon, eBay, AliExpress, and Walmart selling billions of products. Machine learning techniques can be used within the e-commerce domain to improve the overall customer journey on a platform and increase sales. Product data in particular can be used for various applications, such as product similarity, clustering, recommendation, and price estimation. For product data to be used in such applications, we have to perform feature engineering: the idea is to transform the products into feature vectors before training a machine learning model on them. In this thesis, we propose an approach to create representations for heterogeneous product data from Unite's platform in the form of structured tabular records. These tables consist of attributes holding different information, ranging from product IDs to long descriptions. Our model combines popular deep learning approaches from natural language processing to create numerical representations containing mostly non-zero elements, called dense representations, for all products. To evaluate the quality of these feature vectors, we validate how well the similarities between products are captured by the dense representations. The evaluation is divided into two categories. The first category directly compares the similarities between individual products. The second category uses the dense vectors as inputs in one of the above-mentioned applications and then evaluates their quality based on the accuracy or performance of that application. As a result, we explain the impact of the different steps within our model on the quality of the learned representations.
Aspects of Mindful Leadership Upon the Psychological Health of Employees in an Intercultural Context
(2023)
Across the globe, organizations are in the midst of rapid transformation: immigration, digitalization and the push for sustainability, to name just a few drivers. Organizational structures are being pushed towards more agility, co-opetition, integration, and tenable and resilient workplaces. The social structures of companies are being reformed, and the weight of cooperation and integration lies upon leaders and employees. But what psychological effects does this weight of integration have on the engagement of migrant and domestic employees at work? How does leadership style impact mental health and engagement in the cross-cultural workplace? Previous work has shown the importance of workplace integration; however, the impact on the mental health of domestic employees needs more attention from scholars in this new context. The objective of the research is to define the connection between mindful leadership and the psychological health of employees within a cross-cultural workplace, and to develop strategies to improve workplace engagement.
The digital transformation of higher education demands effective and efficient methods for learning support and assessment of learning processes. This paper relates learning support and assessment to each other in the context of learning management systems. It refers to previous studies carried out in multiple introductory economics courses at the University of Applied Sciences Mittweida, which examine possible connections between the use of digital tests and learning success and investigate students’ acceptance and self-perceived learning success with respect to the web-based portion of a blended course and a purely online-based course. Based on a survey (n = 71) and a quantitative analysis (n = 214) of logging and exam assessment data, the previous work shows that students approached the web-based course portion with rather reserved attitudes. Still, they perceived the individual course elements, namely videos, podcasts, interactive worksheets, online tests, and a comprehensive PDF file, to be beneficial to their learning experience. In particular, we found a positive correlation between the points students achieved in the online tests and their exam results.
Crowd-Powered Medical Diagnosis : The Potential of Crowdsourcing for Patients with Rare Diseases
(2023)
With the recent rise in medical crowdsourcing platforms, patients with chronic illnesses increasingly broadcast their medical records to obtain an explanation for their complex health conditions. By providing access to a vast pool of diverse medical knowledge, crowdsourcing platforms have the potential to change the way patients receive a medical diagnosis. We developed a conceptual model that details a set of variables. To further the understanding of crowdsourcing as an emerging phenomenon in health care, we provide a contextualization of the various factors that drive participants to exert effort. For this purpose, we used CrowdMed.com as a platform from which we gathered and examined a unique dataset that involves tasks of diagnosing rare medical conditions. By promoting crowdsourcing as a robust and non-discriminatory alternative to seeking help from traditional physicians, we contribute to the acceptance and adoption of crowdsourcing services in health economics.
Derived from the Ancient Greek word τραῦμα (engl. wound, damage), the word trauma refers to either physical or emotional wounds. Nowadays, it is mostly used in the context of psychological wounds, inflicted by an identity-shattering event – an event that causes the traumatised to no longer be able to reconcile their lived reality with the expectation of a universal human experience. The last decade, the last two years in particular, and the last two weeks ad absurdum, have scarred the global landscape of human existence beyond recognition. From Putin’s unexpected reimposition of mutually assured destruction doctrines via the global SARS-CoV-2 pandemic to the lingering threat of climate doom, people all over the globe have been faced with persistent threats to their most basic perceptions of ontological safety. This article seeks to examine the impact of the SARS-CoV-2 pandemic and to which degree it is justified to speak of a pandemic trauma. In addition, it engages with the liminality of pandemic trauma as a shared, collective and an isolated, individual experience, and potential mitigation strategies for building community resilience.
Convolutional neural networks (CNN) have been among the most powerful and popular techniques employed for image classification problems. Here, we use other signal processing techniques, namely the Fourier transform and the wavelet transform, to preprocess the images in conjunction with different classifiers such as MLP, LVQ, GLVQ and GMLVQ, and compare their performance with CNN.
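The idea of Fourier preprocessing followed by a prototype-based classifier can be sketched in a few lines. This is an illustrative reduction, not the study's pipeline: it uses a 1-D discrete Fourier transform on toy signals (images would use a 2-D transform) and a single-prototype nearest-mean classifier as a stand-in for LVQ.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitude spectrum of a 1-D signal via the discrete Fourier transform."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

def nearest_prototype(features, prototypes):
    """LVQ-style decision: return the label of the closest prototype."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(features, prototypes[label]))
```

A slowly varying signal and a rapidly alternating one separate cleanly in the magnitude spectrum, so a slightly perturbed low-frequency signal is still assigned to the low-frequency prototype.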
Purpose: The study aims to determine the incentives for German SMEs to offshore their business activities to India and China.
Design: This study is based on a quantitative approach. Both primary and secondary data are used. The data were collected from individuals working in different SMEs in Germany with relevant offshoring experience. Theories from articles and peer-reviewed journals, along with relevant books, were consulted throughout the study.
Findings: The findings suggest that the benefits and advantages of an offshoring strategy in India and China are cost efficiency and technology. Moreover, the challenges faced by firms while executing an offshoring strategy are the cultural mix, especially language and cultural barriers, security issues and loss of market performance.
Originality and Value: The study on the incentives of German SMEs to offshore business activities to India and China enables an understanding of why companies are interested in offshoring to low-cost countries for expanding their business, while evaluating the challenges, merits and demerits of offshoring.
Over the past few years, wind and solar power plants have increasingly contributed to energy production. However, due to fluctuating energy sources, the energy production data contain disruptions. Such disrupted data lead to wrong prediction performance and need to be replaced by estimated values. In this thesis, we provide a comparative study to estimate online disrupted data based on the data of similar groups of power plants. We apply three estimation techniques, namely mean, interpolation, and k-nearest neighbors (KNN), to estimate the disruptions on training data. We then apply four clustering algorithms, namely k-means, neural gas, hierarchical agglomerative clustering, and affinity propagation, with two similarity measures, Euclidean distance and dynamic time warping (DTW), to form groups of power plants and compare the results. Experimental results show that when KNN estimation is applied to the data, and neural gas and agglomerative clustering with DTW are used to cluster it, the cluster quality scores and execution time are better than for the other combinations. We therefore choose KNN estimation to reconstruct the online disrupted data within each group of similar power plants.
Studying and understanding the metabolism of plants is essential to better adapt them to future climate conditions. Computational models of plant metabolism can guide this process by providing a platform for fast and resource-saving in silico analyses. The reconstruction of these models can follow kinetic or stoichiometric approaches, with Flux Balance Analysis being one of the most common approaches for stoichiometric models. Advances in metabolic modelling over the years include the increasing number of compartments, the automation of the reconstruction process, the modelling of plant-environment interactions and genetic variants, and temporally and spatially resolved models. In addition, there is a growing focus on introducing synthetic pathways into plants to increase their agricultural potential regarding yield, growth and nutritional value. One example is the β-hydroxyaspartate cycle (BHAC) to bypass photorespiration. After its implementation in a stoichiometric C3 plant model, in silico flux analyses can help to understand the resulting metabolic changes. When compared with in vivo experiments on BHAC plants, the metabolic model can reproduce most results, with exceptions regarding growth and oxaloacetate. To evaluate whether the BHAC is suitable for establishing a synthetic C4 cycle, the pathway is implemented in a two-cell-type model that is capable of running a C4 cycle. The results show that the BHAC is only beneficial under light limitation in the bundle sheath cell. An additional engineering target for improved performance of plants is malate synthase. This work serves as the basis for further analyses combining the different factors boosting the advantages of the BHAC and for in vivo experiments in C3 and C4 plants.
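The core constraint behind stoichiometric approaches such as Flux Balance Analysis is the steady-state mass balance S·v = 0, where S is the stoichiometric matrix (rows: internal metabolites, columns: reactions) and v the flux vector. A minimal sketch, assuming a hypothetical three-reaction toy network (uptake of A, conversion A→B, export of B), checks this balance; a real FBA additionally maximizes an objective flux by linear programming.

```python
def mass_balance_residual(S, v):
    """Residual of the steady-state constraint S @ v = 0.
    S: stoichiometric matrix (list of rows), v: flux vector."""
    return [sum(S[i][j] * v[j] for j in range(len(v))) for i in range(len(S))]

# Toy network: r1: -> A, r2: A -> B, r3: B ->  (internal metabolites A, B)
S_TOY = [
    [1.0, -1.0, 0.0],   # balance of A
    [0.0, 1.0, -1.0],   # balance of B
]
```

A flux vector with equal flow through all three reactions is balanced; any mismatch between uptake and conversion leaves a nonzero residual and is therefore infeasible at steady state.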
In the past few years, generative models have become an interesting topic in the field of machine learning (ML). The Variational Autoencoder (VAE) is one of the popular frameworks of generative models, based on the work of D. P. Kingma and M. Welling [6] [7]. As an alternative to the VAE, the authors in [12] proposed and implemented an Information Theoretic Learning (ITL) based autoencoder. The VAE and the ITL autoencoder are a combination of neural networks and probabilistic graphical models (PGM) [7]. In modern statistics it is difficult to compute approximations of probability densities. In this thesis we make use of Variational Inference (VI), a technique from machine learning that approximates distributions through optimization. The closeness between the distributions is measured by information-theoretic divergence measures such as the Kullback-Leibler, Euclidean and Cauchy-Schwarz divergences. We study theoretical and experimental results of two different frameworks of generative models which generate images of MNIST handwritten characters [8] and the Yale Face Database B [3]. The results obtained show that the VAE and the ITL autoencoder are capable of capturing the underlying structure of the example datasets.
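The Kullback-Leibler term that regularizes the VAE objective has a closed form when the approximate posterior q(z|x) is a diagonal Gaussian N(μ, exp(log σ²)) and the prior is the standard normal: KL = ½ Σ (μ² + σ² − 1 − log σ²). A minimal sketch of this standard formula (not the thesis code):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL divergence KL(N(mu, exp(log_var)) || N(0, I)),
    the regularizer in the VAE objective, summed over latent dimensions."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))
```

When q equals the prior the divergence vanishes; shifting a single latent mean by 1 costs exactly 0.5 nats.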
Digital data is growing day by day, and so is the need for intelligent, automated data processing in daily life. In machine learning, a secure and accurate way to classify data is important; this holds utmost importance in certain fields, e.g. medical data analysis. Moreover, in order to avoid severe consequences, the accuracy and the reliability of the classification are equally important. If a classification is not reliable, instead of accepting the wrongly classified data point, it is better to reject it. This can be done with the help of strategies applied on top of a trained model or included directly in the objective function of the desired training model. We discuss such strategies and analyze the results on several data sets in this thesis.
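One classic post-hoc strategy of the kind described is a confidence-based reject option (Chow's rule): accept the model's top class only if its estimated posterior clears a threshold. A minimal sketch, assuming the trained model exposes class posteriors as a dictionary; the threshold value is illustrative.

```python
def classify_with_reject(posteriors, threshold=0.8):
    """Chow-style reject option on top of a trained model: return the top
    class label if its posterior clears the confidence threshold,
    otherwise reject the data point instead of risking a wrong decision."""
    label = max(posteriors, key=posteriors.get)
    return label if posteriors[label] >= threshold else "REJECT"
```

A confident prediction is accepted, while an ambiguous one near the decision boundary is rejected rather than misclassified.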
This Bachelor thesis investigates the learning rules of the Hebbian, Oja and BCM neuron models with respect to their convergence to, and the stability of, their fixed points. Existing research is presented in a structured manner using consistent notation. Hebbian learning is neither convergent nor stable. Oja learning converges to a stable fixed point, namely the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the input data. BCM learning converges to a fixed point which is stable when assuming a discrete distribution of orthogonal inputs that occur with equal probability. Hebbian learning can therefore not be used in applications where convergence to a stable fixed point is required. Furthermore, this thesis concludes that determining the fixed points of the BCM learning rule explicitly involves extensive calculation, and that other methods for verifying the stability of possible fixed points should be considered.
This scientific work deals with current opportunities in business development. The purpose of the work is the study and analysis of an organization's development strategy and its evolution. The subject of the study is the mechanism by which an organization's development strategy is formed, together with an understanding of business development and its core methodologies and branches. The thesis is based on the operations of a real engineering company, and the main part of the research could be applied in practice. The main goal of the thesis is to derive recommendations for the implementation of strategic changes in the organization's development strategy.
Simulating complex physical systems involves solving nonlinear partial differential equations (PDEs), which can be very expensive. Generative Adversarial Networks (GANs) have recently been used to generate solutions to PDE-governed complex systems without having to solve them numerically.
However, concerns have been raised that the standard GAN system cannot capture some important physical and statistical properties of a complex PDE-governed system, alongside other concerns about difficult and unstable training, the noisy appearance of generated samples, and the lack of robust assessment methods for sample quality apart from visual examination. In this thesis, a standard GAN system is trained on a data set of heat transfer images. We show that the generated data set can capture the true distribution of the training data with respect to both visual and statistical properties, specifically the vertical statistical profile. Furthermore, we construct a GAN model which can be conditioned using a variance-induced class label. We show that the variance threshold t = 0.01 yields a good conditional class label, such that the generated images achieve a 96% accuracy rate in complying with the given conditions.
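The variance-induced class label can be sketched directly: compute the pixel variance of an image and threshold it at t. This is a minimal illustration of the labelling rule only, assuming images flattened to lists of floats; the GAN itself is of course not reproduced here.

```python
def variance_label(image, threshold=0.01):
    """Conditional class label from the pixel variance of a flattened image:
    1 for high-variance images, 0 for low-variance ones."""
    n = len(image)
    mean = sum(image) / n
    var = sum((p - mean) ** 2 for p in image) / n
    return 1 if var > threshold else 0
```

A near-uniform image falls below the threshold and is labelled 0, while a strongly contrasting image is labelled 1; conditioning the GAN on this bit lets it target either regime.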
Current research in identity management is focusing on decentralized trust establishment for distributed identities. One of these decentralized trust models is Self-Sovereign Identities (SSI). With SSI each entity should be able to independently present and manage provable information about itself as well as request and review evidence from other entities. Using a distributed blockchain, information for verifying the authenticity of this evidence can be obtained from any other entity. This concept can be used not only for people, but also for authentication and authorization during the life cycle of devices in the Internet of Things (IoT). This paper presents an SSI-based concept for authentication and authorization of IoT devices among each other, intended to contribute to the change in trust on the internet. The SSI methodology employing a blockchain offers the possibility to establish mutual trust and proof of ownership without relying on any third party. The paper describes the concept, offers a reference implementation, and gives a discussion of the approach.
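The authentication flow described — resolve the counterpart's identifier on the ledger, then verify its proof of ownership — can be illustrated with a deliberately simplified sketch. All names are hypothetical, the ledger is a plain dictionary standing in for the blockchain, and the HMAC challenge response is only a stand-in for the asymmetric signatures real SSI systems use.

```python
import hashlib
import hmac

# Toy ledger mapping a device DID to its verification key. In a real SSI
# deployment this would be a distributed blockchain and the keys asymmetric.
LEDGER = {"did:example:device1": b"device1-verification-key"}

def sign_challenge(key, challenge):
    """Device side: answer a challenge with its key (HMAC stand-in)."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify_device(did, challenge, proof):
    """Verifier side: resolve the DID on the ledger and check the response,
    without involving any third party."""
    key = LEDGER.get(did)
    return key is not None and hmac.compare_digest(
        proof, sign_challenge(key, challenge))
```

Any entity holding the ledger can thus check a device's proof of ownership; an unknown DID or a forged response is rejected.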
As economies become more and more interconnected, the importance of the global logistics sector has grown accordingly. However, both structural challenges and current events have led to recent supply chain disruptions, exposing the vulnerabilities of the sector. Simultaneously, blockchain has emerged as a key innovative technology with use cases going far beyond the exchange of virtual currencies. This paper aims to analyze how the technology is transforming global logistics and its challenges. To this end, six use cases are presented to give an overview of the technological possibilities of blockchain and smart contracts. The analysis takes theoretical approaches from scientific journals and combines them with findings from real-world implementations. The paper finds that the technology can change supply chain design fundamentally, with processes and decisions being automated and power within supply chain structures shifting. However, implementations also face technological, environmental, and organizational challenges that need to be solved for widespread adoption.
More than 10 years after the invention of Bitcoin, the underlying blockchain technology is having an increasing effect on today’s society. Although one of the most popular application areas of blockchain is still the field of cryptocurrencies, the technological concepts are crossing into further application domains such as international supply chains. Fast-changing markets, high costs of time and risk management, as well as biased relationships between the actors, pose big challenges to an appropriate supply chain management. Based on a case study about sensor tracking, this paper explores the potential impact of blockchain on small and medium enterprises within an international supply chain. We will show that blockchain technology offers a high potential to reduce inequalities in power relations between the actors involved in supply chains. To achieve this, the requirements for the use of blockchain in supply chain management are analyzed by means of the conducted case study and an expert survey of the companies concerned.
Digital Power of Attorney catalyzed by Software Requirements for Blockchain-based Applications
(2022)
Blockchain technology (BT), with the so-called web3, has been at an inflection point between new sub-theme hypes and worldwide industrialization over the last three years, thanks to large companies like MicroStrategy [1] and Facebook [2] as well as several venture capital formations [3] that are already fighting over market share and community growth. Our work presents insights from a literature-based software requirement (SR) elicitation for a specific blockchain-based application: the creation, management and control of a digital Power of Attorney (POA). The POA context is not only a finance-driven use case; it is by far a heavyweight, universal legal transaction. We use a morphological box and a reduced PRIMS-P to synthesize a generic specification for further blockchain-based application development. The formulated SRs in the POA context are reflected on our core actors, which are the grantor and authorized, trusted, external entities. The proposed characteristics for relationships and effects are visualized in a reference model originally used in digital platform ecosystems [4]. This design and modelling approach facilitates the closing discussion of BT and its future e-commerce perspective.
Dynamic object roles and corresponding contexts can model complex applications at a higher level of abstraction. Such abstracted applications can be used in wide-ranging areas such as financial institutions, health care, and supply chain networks. Role management, which consists of the creation of role objects and the binding of role objects to core objects, still lacks non-intrusive logging and monitoring, auditing, and a resilient data source for role-based applications. Moreover, immutable smart contracts cause problems concerning bug fixing and maintenance without dynamic binding to new smart contract objects. An object created from a smart contract (contract class) can be transparently attached to a role object utilizing the Role Object Pattern (ROP). However, ROP itself does not contain a context definition or context-specific role assignment grouping the definition of smart contract relationships in abstracted data types. In this study, we implement an extended version of the role object pattern called the Context-based Role Object Pattern (ContextROP) in the on-chain smart contract language Solidity to solve these fundamental problems. To evaluate the proposal, we implement a use case and proceed with qualitative and quantitative analysis.
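The context-keyed role attachment at the heart of ContextROP can be sketched language-agnostically. The study targets Solidity; the Python sketch below (hypothetical class names) only illustrates the pattern: a core object holds a mapping from context to role object, so roles can be attached, looked up, and replaced at run time without touching the core.

```python
class CoreObject:
    """Core object of the Role Object Pattern; roles are grouped by context
    and can be attached or swapped dynamically (ContextROP idea)."""
    def __init__(self, name):
        self.name = name
        self._roles = {}            # context -> role object

    def add_role(self, context, role):
        self._roles[context] = role  # re-binding a context replaces the role

    def role(self, context):
        return self._roles.get(context)

class SellerRole:
    """Hypothetical role: behaviour the core object gains in a marketplace context."""
    def __init__(self, price):
        self.price = price
```

Re-binding a context to a new role object mirrors how a buggy role contract could be swapped for a fixed one while the core object's address stays stable.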
Humans started using the principles of insurance thousands of years ago when they lived in tribes in smaller villages. If one of the tribe members were injured, the others would take care of him and his family. The basic principle of insurance is several people covering each other against a particular risk. Today, most people in regions like Europe have access to insurance, while many people worldwide still have no access at all. The cost and accessibility may be improved with a blockchain-based parametric approach. The insurance process in a parametric approach is exclusively based on data, and decisions are made objectively. Blockchain is a necessary and integral part of the approach to create transparency and connect the customer’s and investor’s risk capital. The paper offers an overview of the opportunities and challenges of blockchain-based parametric insurance, a catalog of criteria for such insurance, a description of all components and their interaction for implementation on Ethereum, and a reference implementation of a train delay insurance in Germany.
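The purely data-driven settlement of a parametric policy can be illustrated in a few lines. The numbers below are illustrative assumptions, not the paper's contract terms: once the measured train delay exceeds a threshold, the payout follows mechanically from the data, with no subjective claims assessment — exactly the logic a smart contract on Ethereum would encode.

```python
def settle(delay_minutes, threshold=60, rate_per_minute=0.25):
    """Parametric train delay insurance: pay a fixed rate for every minute
    of delay beyond the threshold, based solely on the measured data."""
    if delay_minutes <= threshold:
        return 0.0
    return round((delay_minutes - threshold) * rate_per_minute, 2)
```

A 30-minute delay triggers no payout, while a 100-minute delay pays out deterministically; every node verifying the contract reaches the same result.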
The topic of soulbound, non-transferable tokens is getting lots of interest within the blockchain space lately as decentralized societies become more tangible with Web3 social media applications and DAOs. In this article, I want to outline how such tokens function, their problems for adoption and standardization, and how they differ from verifiable credentials in the SSI field. As such soulbound assets will likely rely on extended recovery and asset management schemes to become viable identities that safely gain reputation and trust, features like social recovery and contract-based accounting are incorporated. By combining those new technologies and the theoretical crypto-native identity construct, the paper will give an impression of the future user-centric data economy.
Applications and Potential Impacts of Blockchain Technology in Logistics and Supply Chain Areas
(2022)
The aim of the present thesis is to analyze the applications and potential impacts of blockchain technology in the logistics and supply chain areas. For this purpose, literature from different sources has been used to analyze and give an overview of the current status and role of blockchain technology within the logistics and supply chain areas. Different use cases as well as pilot projects from organizations all over the world, including Germany, have been included. Suggestions for further applications and implementations of blockchain technology, along with their potential impacts, have been made. Additionally, the cost of implementing blockchain-based solutions and applications has been estimated, along with recommendations and key points to be considered before deciding to implement blockchain-based solutions in any organization.
Influenza A viruses are responsible for the outbreak of epidemics as well as pandemics worldwide. The surface protein neuraminidase of this virus is responsible, among other things, for the release of virions from the cell and is thus of interest in pharmacological research. The aim of this work is to gain knowledge about evolutionary changes in sequences of influenza A neuraminidase through different methods. First, EVcouplings is used with the goal of identifying evolutionary couplings within the protein sequences, but this analysis was unsuccessful, probably due to the great length of the neuraminidase sequences. Second, the natural vector method is used for sequence embedding, in the hope of visualizing the sequential progression of the virus protein over time. Last, interpretable machine learning methods are applied to examine whether the data can be classified by year and to check whether the extracted information conforms to the results of the EVcouplings analysis. In addition to the class label year, other labels such as groups or subtypes are used in classification, with varying results. For balanced classes the machine learning models performed adequately, but this was not the case for imbalanced data. Groups and subtypes can be classified with high accuracy, which was not the case for years, continents or hosts. To identify the minimal number of features necessary for linear separation of neuraminidase group 1 subtypes, a logistic regression was performed, resulting in the identification of 15 combinations of nine amino acid frequencies. Since neither the sequence embedding nor the machine learning methods revealed neuraminidase evolution over time, further research is necessary, for example with a focus on one subtype with balanced data.
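The natural vector embedding mentioned above maps a sequence to fixed-length numerical features: for each residue type, its count, the mean of its positions, and a normalized second central moment of those positions. A minimal sketch of this standard construction (one common formulation; the thesis may use additional moments):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def natural_vector(seq, alphabet=AMINO_ACIDS):
    """Natural vector of a protein sequence: per amino acid the count,
    the mean position, and the second normalized central moment of its
    positions. Yields a fixed-length embedding of 3 * |alphabet| numbers."""
    n = len(seq)
    vec = []
    for aa in alphabet:
        positions = [i + 1 for i, c in enumerate(seq) if c == aa]
        k = len(positions)
        if k == 0:
            vec += [0, 0.0, 0.0]
            continue
        mu = sum(positions) / k
        d2 = sum((p - mu) ** 2 for p in positions) / (k * n)
        vec += [k, mu, d2]
    return vec
```

Because the vector length is independent of sequence length, neuraminidase sequences from different years become directly comparable points in one feature space.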