Fabian Lehmann

PhD Student

Humboldt-Universität zu Berlin

About Me

I am Fabian Lehmann, a PhD student in computer science at the Knowledge Management in Bioinformatics group at Humboldt-Universität zu Berlin. My work is funded through FONDA, a Collaborative Research Center of the German Research Foundation (DFG).

During my Bachelor's studies, I discovered my fascination for complex distributed systems. I enjoy probing and pushing the limits of such systems. In my PhD, I focus on optimizing workflow systems for the analysis of huge amounts of data, concentrating in particular on scheduling. To understand the requirements of real-world practice, I work closely with the Earth Observation Lab at Humboldt-Universität zu Berlin.

Interests
  • Distributed Systems
  • Scientific Workflows
  • Workflow Scheduling
Education
  • Master in Business Informatics (Wirtschaftsinformatik), 2020

    Thesis: Design and Implementation of a Processing Pipeline for High Resolution Blood Pressure Sensor Data

    Technische Universität Berlin

  • Bachelor in Business Informatics (Wirtschaftsinformatik), 2019

    Thesis: Performance Benchmarking in Continuous Integration Processes

    Technische Universität Berlin

  • Abitur, 2015

    Hannah-Arendt-Gymnasium (Berlin)

Experience
Knowledge Management in Bioinformatics (Humboldt-Universität zu Berlin)
PhD Student (Computer Science)
Nov 2020 – Present, Berlin, Germany
In my PhD project, I focus on optimizing the execution of large scientific workflows that process hundreds of gigabytes of data.
DAI-Labor (Technische Universität Berlin)
Student Assistant
May 2018 – Oct 2020, Berlin, Germany
In my student job, I performed time-series analyses as part of the DIGINET-PS project. Among other things, we predicted the occupancy of the parking spaces along Straße des 17. Juni.
University of Oxford
GeoTripNet – Case Study
Oct 2019 – Mar 2020, Oxford, England, United Kingdom
As part of the case study, we crawled the Google Maps ratings of all restaurants in Berlin. We then analyzed the relationships between restaurants to study gentrification in Berlin's districts. One challenge was processing, analyzing, and visualizing the large volume of data in real time.
Einstein Center Digital Future
Fog Computing Project
Apr 2019 – Sep 2020, Berlin, Germany
In this project, we analyzed bicycle rides recorded by SimRa. We set up a distributed analysis pipeline and presented the data in an interactive web app, which allowed us to identify danger spots for cyclists in Berlin.
Conrad Connect
Application Systems Project
Oct 2017 – Mar 2018, Berlin, Germany
For Conrad Connect, we analyzed hundreds of gigabytes of IoT data. I also uncovered security flaws on their website.
Reflect IT Solutions GmbH
Semester Break Job
Mar 2016 – Apr 2016 & Sep 2016 – Oct 2016, Berlin, Germany
During my semester breaks, I helped develop the backend of a software system supporting construction site supervision.
SPP Schüttauf und Persike Planungsgesellschaft mbH
Work between Abitur and University
May 2015 – Sep 2015, Berlin, Germany
Before starting my Bachelor's studies, I spent several months supporting the construction supervision of the renovation of an 18-story building.

IT Skills

(A small selection)

Java
Python
Docker
Kubernetes
Spring Boot
LaTeX
SQL
React
JavaScript
Nextflow
Haskell
Excel

Software

Common Workflow Scheduler

With the Common Workflow Scheduler, resource managers can provide an interface through which workflow systems submit information about the workflow graph. This data enables the resource manager's scheduler to make better decisions.
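The idea can be sketched as follows: the workflow system serializes its task graph and sends it to the resource manager, whose scheduler can then plan with full knowledge of the DAG. The payload layout, field names, and endpoint URL below are illustrative assumptions, not the actual Common Workflow Scheduler API.

```python
import json

def build_dag_payload(run_id, tasks):
    """Serialize a workflow DAG; `tasks` maps task name -> list of dependencies.

    Field names ("runId", "dependsOn") are made up for illustration.
    """
    return {
        "runId": run_id,
        "tasks": [{"name": n, "dependsOn": deps} for n, deps in tasks.items()],
    }

# A tiny three-task pipeline: fastqc -> align -> report.
payload = build_dag_payload("run-42", {
    "fastqc": [],
    "align": ["fastqc"],
    "report": ["align"],
})
body = json.dumps(payload)
# A workflow system would POST `body` to the resource manager, e.g.
# POST http://resource-manager/v1/workflow/run-42/dag  (hypothetical URL)
```

Given this DAG information, the resource manager's scheduler can, for example, prioritize tasks on the critical path instead of treating submissions as unrelated jobs.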

Benchmark Evaluator

The Benchmark Evaluator is a plugin for the Jenkins automation server that loads and evaluates benchmark results.
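In spirit, the evaluation step works like the following sketch: compare the current build's benchmark results against a baseline and flag regressions. The result format, baseline source, and tolerance logic are assumptions for illustration, not the plugin's actual configuration.

```python
# Minimal sketch of benchmark evaluation as a CI gate: a build is flagged
# when any benchmark regressed beyond a tolerance relative to its baseline.

def evaluate(results, baselines, tolerance=0.10):
    """Return the names of benchmarks that regressed more than `tolerance`.

    `results` and `baselines` map benchmark name -> measured value, where
    lower is better (e.g., runtime in milliseconds).
    """
    failures = []
    for name, value in results.items():
        base = baselines.get(name)
        if base is not None and value > base * (1 + tolerance):
            failures.append(name)
    return failures

failures = evaluate(
    results={"sort_ms": 130.0, "join_ms": 95.0},
    baselines={"sort_ms": 100.0, "join_ms": 100.0},
)
# A CI plugin would mark the build unstable or failed when `failures` is non-empty.
```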

Publications

Validity Constraints for Data Analysis Workflows

Porting a scientific data analysis workflow (DAW) to a cluster infrastructure, a new software stack, or even only a new dataset with some notably different properties is often challenging. Despite the structured definition of the steps (tasks) and their interdependencies during a complex data analysis in the DAW specification, relevant assumptions may remain unspecified and implicit. Such hidden assumptions often lead to crashing tasks without a reasonable error message, poor performance in general, non-terminating executions, or silent wrong results of the DAW, to name only a few possible consequences. Searching for the causes of such errors and drawbacks in a distributed compute cluster managed by a complex infrastructure stack, where DAWs for large datasets typically are executed, can be tedious and time-consuming. We propose validity constraints (VCs) as a new concept for DAW languages to alleviate this situation. A VC is a constraint specifying some logical conditions that must be fulfilled at certain times for DAW executions to be valid. When defined together with a DAW, VCs help to improve the portability, adaptability, and reusability of DAWs by making implicit assumptions explicit. Once specified, VCs can be checked automatically by the DAW infrastructure, and violations can lead to meaningful error messages and graceful behaviour (e.g., termination or invocation of repair mechanisms). We provide a broad list of possible VCs, classify them along multiple dimensions, and compare them to similar concepts one can find in related fields. We also provide a first sketch for VCs' implementation into existing DAW infrastructures.

How Workflow Engines Should Talk to Resource Managers: A Proposal for a Common Workflow Scheduling Interface

Scientific workflow management systems (SWMSs) and resource managers together ensure that tasks are scheduled on provisioned resources so that all dependencies are obeyed, and some optimization goal, such as makespan minimization, is achieved. In practice, however, there is no clear separation of scheduling responsibilities between an SWMS and a resource manager because there exists no agreed-upon separation of concerns between their different components. This has two consequences. First, the lack of a standardized API to exchange scheduling information between SWMSs and resource managers hinders portability. It incurs costly adaptations when a component should be replaced by a different one (e.g., an SWMS with another SWMS on the same resource manager). Second, due to overlapping functionalities, current installations often actually have two schedulers, both making partial scheduling decisions under incomplete information, leading to suboptimal workflow scheduling. In this paper, we propose a simple REST interface between SWMSs and resource managers, which allows any SWMS to pass dynamic workflow information to a resource manager, enabling maximally informed scheduling decisions. We provide an implementation of this API as an example, using Nextflow as an SWMS and Kubernetes as a resource manager. Our experiments with nine real-world workflows show that this strategy reduces makespan by up to 25.1% and 10.8% on average compared to the standard Nextflow/Kubernetes configuration. Furthermore, a more widespread implementation of this API would enable leaner code bases, a simpler exchange of components of workflow systems, and a unified place to implement new scheduling algorithms.

Workflows Community Summit 2022: A Roadmap Revolution
Towards Advanced Monitoring for Scientific Workflows

Scientific workflows consist of thousands of highly parallelized tasks executed in a distributed environment involving many components. Automatic tracing and investigation of the components' and tasks' performance metrics, traces, and behavior are necessary to support the end user with a level of abstraction since the large amount of data cannot be analyzed manually. The execution and monitoring of scientific workflows involves many components, the cluster infrastructure, its resource manager, the workflow, and the workflow tasks. All components in such an execution environment access different monitoring metrics and provide metrics on different abstraction levels. The combination and analysis of observed metrics from different components and their interdependencies are still widely unregarded. We specify four different monitoring layers that can serve as an architectural blueprint for the monitoring responsibilities and the interactions of components in the scientific workflow execution context. We describe the different monitoring metrics subject to the four layers and how the layers interact. Finally, we examine five state-of-the-art scientific workflow management systems (SWMS) in order to assess which steps are needed to enable our four-layer-based approach.
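The four layers named in the abstract (cluster infrastructure, resource manager, workflow, task) can be pictured as tags on collected metrics, so that cross-layer analysis becomes a join over layer and source. The record type and field names below are illustrative, not part of any published API.

```python
from dataclasses import dataclass
from enum import Enum

# The four monitoring layers from the paper; each collected metric is
# attributed to exactly one layer and one emitting component.
class Layer(Enum):
    INFRASTRUCTURE = 1
    RESOURCE_MANAGER = 2
    WORKFLOW = 3
    TASK = 4

@dataclass
class Metric:
    layer: Layer
    source: str    # emitting component, e.g. a node name or a task name
    name: str      # metric name, e.g. "cpu_usage"
    value: float

metrics = [
    Metric(Layer.INFRASTRUCTURE, "node-1", "cpu_usage", 0.83),
    Metric(Layer.TASK, "align", "peak_memory_gb", 12.4),
]

# Cross-layer analysis then filters and joins metrics by layer and source,
# e.g. correlating a task's memory peak with the load of its host node.
task_metrics = [m for m in metrics if m.layer is Layer.TASK]
```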

Reshi: Recommending Resources for Scientific Workflow Tasks on Heterogeneous Infrastructures

Scientific workflows typically comprise a multitude of different processing steps which often are executed in parallel on different partitions of the input data. These executions, in turn, must be scheduled on the compute nodes of the computational infrastructure at hand. This assignment is complicated by the facts that (a) tasks typically have highly heterogeneous resource requirements and (b) in many infrastructures, compute nodes offer highly heterogeneous resources. In consequence, predictions of the runtime of a given task on a given node, as required by many scheduling algorithms, are often rather imprecise, which can lead to sub-optimal scheduling decisions. We propose Reshi, a method for recommending task-node assignments during workflow execution that can cope with heterogeneous tasks and heterogeneous nodes. Reshi approaches the problem as a regression task, where task-node pairs are modeled as feature vectors over the results of dedicated micro benchmarks and past task executions. Based on these features, Reshi trains a regression tree model to rank and recommend nodes for each ready-to-run task, which can be used as input to a scheduler. For our evaluation, we benchmarked 27 AWS machine types using three representative workflows. We compare Reshi’s recommendations with three state-of-the-art schedulers. Our evaluation shows that Reshi outperforms HEFT by a mean makespan reduction of 7.18% and 18.01% assuming a mean task runtime prediction error of 15%.
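The recommendation step can be sketched as scoring task-node feature pairs and ranking nodes per ready-to-run task. Here a hand-written linear scoring function stands in for Reshi's trained regression tree, and all feature names are hypothetical.

```python
# Toy sketch of Reshi-style recommendation: each task-node pair is scored
# by a model over task demands and node micro-benchmark results; nodes are
# then ranked best-first for the scheduler.

def rank_nodes(task, nodes, score):
    """Return node names ordered from best to worst predicted fit."""
    return sorted(nodes, key=lambda name: score(task, nodes[name]), reverse=True)

# Hypothetical features: the task's resource demands and per-node benchmarks.
task = {"cpu": 4, "io": 0.8}
nodes = {
    "small": {"cpu_score": 2.0, "io_score": 0.5},
    "large": {"cpu_score": 8.0, "io_score": 0.9},
}

def score(task, bench):
    # Stand-in for the regression tree: weight benchmarks by task demands.
    return task["cpu"] * bench["cpu_score"] + task["io"] * bench["io_score"]

ranking = rank_nodes(task, nodes, score)  # best node first
```

A scheduler would then try to place the task on the highest-ranked node that is currently free, falling back down the ranking otherwise.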

Projects

FONDA

Foundations of Workflows for Large Natural Science Data Analysis

Contact