Fabian Lehmann

PhD Student

Humboldt-Universität zu Berlin

About Me

I am Fabian Lehmann, a PhD student in computer science in the Knowledge Management in Bioinformatics group at Humboldt-Universität zu Berlin. I am funded through FONDA, a Collaborative Research Center of the German Research Foundation (DFG).

During my bachelor's studies, I discovered my fascination with complex distributed systems. I enjoy probing and pushing the limits of such systems. In my PhD, I focus on optimizing workflow systems for analyzing huge amounts of data, concentrating in particular on scheduling. To this end, I work closely with the Earth Observation Lab at Humboldt-Universität zu Berlin to understand the requirements of practitioners.

Interests
  • Distributed systems
  • Scientific workflows
  • Workflow scheduling
Education
  • Master in Business Informatics (Wirtschaftsinformatik), 2020

    Thesis: Design and Implementation of a Processing Pipeline for High Resolution Blood Pressure Sensor Data

    Technische Universität Berlin

  • Bachelor in Business Informatics (Wirtschaftsinformatik), 2019

    Thesis: Performance Benchmarking in Continuous Integration Processes

    Technische Universität Berlin

  • Abitur, 2015

    Hannah-Arendt-Gymnasium (Berlin)

Experience

Knowledge Management in Bioinformatics (Humboldt-Universität zu Berlin)
PhD Student (Computer Science)
Nov. 2020 – Present, Berlin, Germany
In my PhD project, I focus on optimizing the execution of large scientific workflows that process hundreds of gigabytes of data.
 
DAI-Labor (Technische Universität Berlin)
Student Assistant
May 2018 – Oct. 2020, Berlin, Germany
In this student job, I performed time-series analyses as part of the DIGINET-PS project. Among other things, we predicted the occupancy of the parking spaces along Straße des 17. Juni.
 
University of Oxford
GeoTripNet - Case Study
Oct. 2019 – Mar. 2020, Oxford, England, United Kingdom
For this case study, we crawled the Google Maps reviews of all restaurants in Berlin. We then analyzed the relationships between restaurants to study gentrification in Berlin's districts. One challenge was processing, analyzing, and visualizing the large amount of data in real time.
 
Einstein Center Digital Future
Fog Computing Project
Apr. 2019 – Sept. 2020, Berlin, Germany
In this project, we analyzed bicycle rides recorded by SimRa. We set up a distributed analysis pipeline and visualized the data in an interactive web app, which allowed us to identify danger spots for cyclists in Berlin.
 
Conrad Connect
Application Systems Project
Oct. 2017 – Mar. 2018, Berlin, Germany
For Conrad Connect, we analyzed hundreds of gigabytes of IoT data. I also found security flaws on their website.
 
Reflect IT Solutions GmbH
Semester Break Job
Mar. 2016 – Apr. 2016 & Sept. 2016 – Oct. 2016, Berlin, Germany
During my semester breaks, I helped develop the backend of a software system supporting construction supervision.
 
SPP Schüttauf und Persike Planungsgesellschaft mbH
Work between Abitur and University Studies
May 2015 – Sept. 2015, Berlin, Germany
Before starting my bachelor's studies, I spent a few months supporting the construction supervision of the renovation of an 18-story building.

IT Skills

(A small selection)

Java
Python
Docker
Kubernetes
Spring Boot
LaTeX
SQL
React
JavaScript
Nextflow
Haskell
Excel

Software

Common Workflow Scheduler

Using the Common Workflow Scheduler, resource managers can provide an interface through which workflow systems can submit information about the workflow graph. This data enables the resource manager's scheduler to make better decisions.
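
The sketch below illustrates this idea from the workflow system's side; the endpoint and field names are hypothetical placeholders for such an interface, not the actual Common Workflow Scheduler API.

    # Hypothetical sketch of a workflow system registering its task graph with a
    # resource manager's scheduling interface; endpoint and field names are illustrative.
    import requests

    SCHEDULER_URL = "http://resource-manager.example:8080/v1/workflows"  # placeholder address

    def register_dag(run_id: str, tasks: dict[str, list[str]]) -> None:
        """Submit the task graph as a mapping: task name -> upstream dependencies."""
        payload = {
            "runId": run_id,
            "dag": [{"task": name, "dependsOn": deps} for name, deps in tasks.items()],
        }
        requests.post(f"{SCHEDULER_URL}/{run_id}/dag", json=payload, timeout=10).raise_for_status()

    def announce_ready_task(run_id: str, task: str, cpus: int, memory_mb: int) -> None:
        """Tell the scheduler that a task's inputs are available and what it requests."""
        payload = {"task": task, "cpus": cpus, "memoryMb": memory_mb}
        requests.post(f"{SCHEDULER_URL}/{run_id}/ready", json=payload, timeout=10).raise_for_status()

    if __name__ == "__main__":
        register_dag("run-42", {"preprocess": [], "align": ["preprocess"], "merge": ["align"]})
        announce_ready_task("run-42", "preprocess", cpus=4, memory_mb=8192)

Because the scheduler now sees which tasks become runnable once "preprocess" finishes, it can, for example, co-locate dependent tasks with their input data instead of treating every task as independent.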

Benchmark Evaluator

The Benchmark Evaluator is a plugin for the Jenkins automation server for loading and evaluating benchmark results.
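
As a rough illustration of the underlying check (not the plugin's actual API, which is configured as a Jenkins build step), evaluating benchmark results boils down to comparing the current run against a baseline and failing the build on regressions; file names and the threshold below are made up.

    # Illustrative sketch only; the real Benchmark Evaluator runs inside Jenkins.
    import json
    import sys

    def evaluate(baseline_file: str, current_file: str, max_regression: float = 0.10) -> bool:
        """Return True if no benchmark is more than max_regression slower than its baseline."""
        baseline = json.load(open(baseline_file))   # e.g. {"sort_10M": 1.92, "join_1M": 0.41}
        current = json.load(open(current_file))
        ok = True
        for name, base_time in baseline.items():
            cur_time = current.get(name)
            if cur_time is None or cur_time > base_time * (1 + max_regression):
                print(f"regression in {name}: {base_time}s -> {cur_time}s")
                ok = False
        return ok

    if __name__ == "__main__":
        sys.exit(0 if evaluate("baseline.json", "results.json") else 1)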

Publications

Workflows Community Summit 2024: Future Trends and Challenges in Scientific Workflows

Ponder: Online Prediction of Task Memory Requirements for Scientific Workflows

Scientific workflows are used to analyze large amounts of data. These workflows comprise numerous tasks, many of which are executed repeatedly, running the same custom program on different inputs. Users specify resource allocations for each task, which must be sufficient for all inputs to prevent task failures. As a result, task memory allocations tend to be overly conservative, wasting precious cluster resources, limiting overall parallelism, and increasing workflow makespan. In this paper, we first benchmark a state-of-the-art method on four real-life workflows from the nf-core workflow repository. This analysis reveals that certain assumptions underlying current prediction methods, which typically were evaluated only on simulated workflows, cannot generally be confirmed for real workflows and executions. We then present Ponder, a new online task-sizing strategy that considers and chooses between different methods to cater to different memory demand patterns. We implemented Ponder for Nextflow and made the code publicly available. In an experimental evaluation that also considers the impact of memory predictions on scheduling, Ponder improves Memory Allocation Quality on average by 71.0% and makespan by 21.8% in comparison to a state-of-the-art method. Moreover, Ponder produces 93.8% fewer task failures.
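
A much-simplified sketch of such an online sizing strategy (not the published Ponder code; the buffer, thresholds, and default below are made up): it switches between a linear model over input size and a maximum-plus-buffer heuristic, depending on how strongly peak memory correlates with input size.

    # Simplified illustration of online task memory sizing; all constants are invented.
    import statistics

    def predict_memory_mb(history: list[tuple[float, float]], input_size_mb: float,
                          buffer: float = 1.1, min_corr: float = 0.8) -> float:
        """history holds (input_size_mb, peak_memory_mb) pairs of finished task instances."""
        if not history:
            return 4096.0                            # fall back to a static default allocation
        sizes, peaks = zip(*history)
        if len(history) >= 3 and len(set(sizes)) > 1 and len(set(peaks)) > 1:
            if statistics.correlation(sizes, peaks) >= min_corr:
                # Memory scales with input size: predict with a linear model plus buffer.
                slope, intercept = statistics.linear_regression(sizes, peaks)
                return (intercept + slope * input_size_mb) * buffer
        return max(peaks) * buffer                   # pattern unclear: maximum observed plus buffer

    # A task that fails with an out-of-memory error would be retried with a larger
    # allocation, e.g. twice the allocation that just failed.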

Validity constraints for data analysis workflows

Porting a scientific data analysis workflow (DAW) to a cluster infrastructure, a new software stack, or even only a new dataset with some notably different properties is often challenging. Despite the structured definition of the steps (tasks) and their interdependencies during a complex data analysis in the DAW specification, relevant assumptions may remain unspecified and implicit. Such hidden assumptions often lead to crashing tasks without a reasonable error message, poor performance in general, non-terminating executions, or silent wrong results of the DAW, to name only a few possible consequences. Searching for the causes of such errors and drawbacks in a distributed compute cluster managed by a complex infrastructure stack, where DAWs for large datasets typically are executed, can be tedious and time-consuming. We propose validity constraints (VCs) as a new concept for DAW languages to alleviate this situation. A VC is a constraint specifying logical conditions that must be fulfilled at certain times for DAW executions to be valid. When defined together with a DAW, VCs help to improve the portability, adaptability, and reusability of DAWs by making implicit assumptions explicit. Once specified, VCs can be controlled automatically by the DAW infrastructure, and violations can lead to meaningful error messages and graceful behavior (e.g., termination or invocation of repair mechanisms). We provide a broad list of possible VCs, classify them along multiple dimensions, and compare them to similar concepts one can find in related fields. We also provide a proof-of-concept implementation for the workflow system Nextflow.
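
To make the concept concrete, here is a small standalone sketch of pre- and postcondition checks around a task. The paper's proof of concept targets Nextflow, so the names and the context structure below are illustrative only.

    # Illustrative sketch of the validity-constraint (VC) idea; names and structures
    # are made up and do not reflect the Nextflow proof-of-concept implementation.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ValidityConstraint:
        description: str                   # meaningful message shown on violation
        check: Callable[[dict], bool]      # predicate over the task's context or result

    def run_task(task: Callable[[dict], dict], context: dict,
                 pre: list[ValidityConstraint], post: list[ValidityConstraint]) -> dict:
        """Check constraints before and after the task and fail with a clear message."""
        for vc in pre:
            if not vc.check(context):
                raise ValueError(f"Validity constraint violated before task: {vc.description}")
        result = task(context)
        for vc in post:
            if not vc.check(result):
                raise ValueError(f"Validity constraint violated after task: {vc.description}")
        return result

    # Example constraints: the reference must be an uncompressed FASTA file, and the
    # task must produce at least one aligned read.
    pre = [ValidityConstraint("reference must be a .fasta file",
                              lambda c: c["reference"].endswith(".fasta"))]
    post = [ValidityConstraint("alignment produced no reads",
                               lambda r: r.get("aligned_reads", 0) > 0)]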

A qualitative assessment of using ChatGPT as large language model for scientific workflow development

Scientific workflow systems are increasingly popular for expressing and executing complex data analysis pipelines over large datasets, as they offer reproducibility, dependability, and scalability of analyses by automatic parallelization on large compute clusters. However, implementing workflows is difficult due to the involvement of many black-box tools and the deep infrastructure stack necessary for their execution. Simultaneously, user-supporting tools are rare, and the number of available examples is much lower than in classical programming languages. To address these challenges, we investigate the efficiency of large language models (LLMs), specifically ChatGPT, to support users when dealing with scientific workflows. We performed 3 user studies in 2 scientific domains to evaluate ChatGPT for comprehending, adapting, and extending workflows. Our results indicate that LLMs efficiently interpret workflows but achieve lower performance for exchanging components or purposeful workflow extensions. We characterize their limitations in these challenging scenarios and suggest future research directions. Our results show a high accuracy for comprehending and explaining scientific workflows while achieving a reduced performance for modifying and extending workflow descriptions. These findings clearly illustrate the need for further research in this area.

Lotaru: Locally Predicting Workflow Task Runtimes for Resource Management on Heterogeneous Infrastructures

Many resource management techniques for task scheduling, energy and carbon efficiency, and cost optimization in workflows rely on a-priori task runtime knowledge. Building runtime prediction models on historical data is often not feasible in practice as workflows, their input data, and the cluster infrastructure change. Online methods, on the other hand, which estimate task runtimes on specific machines while the workflow is running, have to cope with a lack of measurements during start-up. Frequently, scientific workflows are executed on heterogeneous infrastructures consisting of machines with different CPU, I/O, and memory configurations, further complicating predicting runtimes due to different task runtimes on different machine types.
This paper presents Lotaru, a method for locally predicting the runtimes of scientific workflow tasks before they are executed on heterogeneous compute clusters. Crucially, our approach does not rely on historical data and copes with a lack of training data during the start-up. To this end, we use microbenchmarks, reduce the input data to quickly profile the workflow locally, and predict a task’s runtime with a Bayesian linear regression based on the gathered data points from the local workflow execution and the microbenchmarks. Due to its Bayesian approach, Lotaru provides uncertainty estimates that can be used for advanced scheduling methods on distributed cluster infrastructures.
In our evaluation with five real-world scientific workflows, our method outperforms two state-of-the-art runtime prediction baselines and decreases the absolute prediction error by more than 12.5%. In a second set of experiments, the prediction performance of our method, using the predicted runtimes for state-of-the-art scheduling, carbon reduction, and cost prediction, enables results close to those achieved with perfect prior knowledge of runtimes.
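
A compact sketch of the prediction step (not the published Lotaru code; the feature choice and numbers are invented): Bayesian linear regression over input size and a machine's microbenchmark score yields a runtime estimate together with an uncertainty value.

    # Compact sketch of Bayesian runtime prediction in the spirit of Lotaru; the
    # training data and features below are purely illustrative.
    import numpy as np
    from sklearn.linear_model import BayesianRidge

    # Training points: [input size in GB, machine benchmark score] -> runtime in seconds,
    # gathered from downscaled local executions and microbenchmarks.
    X_train = np.array([[1.0, 1.2], [2.0, 1.2], [1.0, 0.8], [4.0, 0.8]])
    y_train = np.array([120.0, 230.0, 170.0, 640.0])

    model = BayesianRidge().fit(X_train, y_train)

    # Estimate the runtime of a 3 GB input on a machine with benchmark score 1.0;
    # the standard deviation can feed uncertainty-aware scheduling decisions.
    mean, std = model.predict(np.array([[3.0, 1.0]]), return_std=True)
    print(f"predicted runtime: {mean[0]:.0f} s (± {std[0]:.0f} s)")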

Projects

FONDA

Foundations of Workflows for Large-Scale Scientific Data Analysis

Contact