Fabian Lehmann

Ph.D. candidate

Humboldt University of Berlin

Biography

I am Fabian Lehmann, a Ph.D. candidate in computer science at the Knowledge Management in Bioinformatics Lab at the Humboldt University of Berlin. My research is funded through FONDA, a Collaborative Research Center of the German Research Foundation (DFG).

Since my bachelor studies, I have been fascinated by complex distributed systems; I love to understand and overcome their limits. In my Ph.D. research, I focus on workflow engines, improving the execution of distributed workflows that analyze large amounts of data. In particular, my goal is to improve scheduling and data management. To understand real-world requirements, I work closely with the Earth Observation Lab at the Humboldt University of Berlin.

Interests
  • Distributed Systems
  • Scientific Workflows
  • Workflow Scheduling
Education
  • Master of Science in Information Systems Management, 2020

    Thesis: Design and Implementation of a Processing Pipeline for High Resolution Blood Pressure Sensor Data

    Technical University of Berlin

  • Bachelor of Science in Information Systems Management, 2019

    Thesis: Performance-Benchmarking in Continuous-Integration-Processes

    Technical University of Berlin

  • Abitur (comparable to A Levels), 2015

    Hannah-Arendt-Gymnasium (Berlin)

Professional Experience

Knowledge Management in Bioinformatics Lab (Humboldt University of Berlin)
Ph.D. candidate (computer science)
Nov 2020 – Present, Berlin, Germany
In my Ph.D. studies, I focus on improving the execution of large scientific workflows processing hundreds of gigabytes of data.

DAI-Labor (Technical University of Berlin)
Student Assistant
May 2018 – Oct 2020, Berlin, Germany
In my student job, I worked with time-series data in the DIGINET-PS project; for example, we predicted parking-slot occupancy.

University of Oxford
GeoTripNet - Case Study
Oct 2019 – Mar 2020, Oxford, England, United Kingdom
For the case study, we crawled restaurant reviews on Google Maps to analyze the relations between different restaurants and to examine gentrification in Berlin's districts. One challenge was processing and analyzing the large amount of data in real time.

Einstein Center Digital Future
Fog Computing Project
Apr 2019 – Sep 2020, Berlin, Germany
This project aimed to analyze SimRa's recorded bicycle rides. To this end, we developed a distributed analysis pipeline and visualized the track information in an interactive web application. This allowed us to classify risk hotspots along the tracks of Berlin's cyclists.

Conrad Connect
Application Systems Project
Oct 2017 – Mar 2018, Berlin, Germany
For Conrad Connect, we analyzed hundreds of gigabytes of IoT data. Moreover, I uncovered security vulnerabilities in their software.

Reflect IT Solutions GmbH
Semester Term Work
Mar 2016 – Apr 2016 & Sep 2016 – Oct 2016, Berlin, Germany
During my semester term work, I helped develop the backend of a construction progress management system.

SPP Schüttauf und Persike Planungsgesellschaft mbH
Work in the gap between school and university studies
May 2015 – Sep 2015, Berlin, Germany
Before starting my bachelor studies, I worked for a few months helping to manage a large construction project, gaining experience in dealing with the different trades involved.

Computer skills

A small excerpt

Java
Python
Docker
Kubernetes
Spring Boot
LaTeX
SQL
React
JavaScript
Nextflow
Haskell
Excel

Software

Common Workflow Scheduler

Resource managers can enhance their scheduling capabilities by leveraging the Common Workflow Scheduler interface to receive workflow graph information from workflow systems. This enables the resource manager’s scheduler to make more advanced decisions.
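
To illustrate the idea, the sketch below shows how a workflow system might push its task graph to a resource manager over such an interface. The base URL, endpoint path, and payload fields are assumptions made for this example; they are not the actual Common Workflow Scheduler API.

```python
# Hypothetical sketch: a workflow system registers its DAG with a
# scheduler-aware resource manager over REST. The endpoint and payload
# layout are assumptions for illustration, not the real API.
import json
import urllib.request

SCHEDULER_URL = "http://resource-manager.example:8080"  # assumed address

def register_workflow_dag(run_id, tasks):
    """Send the task graph (tasks and their dependencies) to the scheduler."""
    payload = {
        "runId": run_id,
        "tasks": [
            {"name": t["name"], "dependsOn": t.get("dependsOn", [])}
            for t in tasks
        ],
    }
    req = urllib.request.Request(
        f"{SCHEDULER_URL}/v1/workflows/{run_id}/dag",  # illustrative path
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: a tiny three-task workflow (B and C depend on A).
dag = [
    {"name": "A"},
    {"name": "B", "dependsOn": ["A"]},
    {"name": "C", "dependsOn": ["A"]},
]
# register_workflow_dag("run-42", dag)
```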

Benchmark Evaluator

The Benchmark Evaluator is a plugin for the Jenkins automation server to load benchmark data and decide on the success of a build accordingly.
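
The underlying decision can be illustrated with a simple threshold check: compare the current benchmark results against a baseline and fail the build if any metric regresses too much. This is only a sketch of that rule with an arbitrarily chosen tolerance; it is not the plugin's actual code or configuration.

```python
# Illustrative sketch of the decision rule behind benchmark-gated builds:
# fail the build if any benchmark regresses beyond a tolerance relative
# to a baseline. Not the plugin's actual implementation.

def build_passes(baseline, current, tolerance=0.10):
    """Return True if no benchmark is more than `tolerance` slower than baseline."""
    for name, base_value in baseline.items():
        new_value = current.get(name)
        if new_value is None:
            return False  # a missing benchmark counts as a failure
        if new_value > base_value * (1.0 + tolerance):
            return False  # regression beyond the allowed tolerance
    return True

baseline = {"sort_ms": 120.0, "parse_ms": 45.0}
current = {"sort_ms": 125.0, "parse_ms": 44.0}
print(build_passes(baseline, current))  # True: within the 10% tolerance
```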

Publications

Lotaru: Locally Predicting Workflow Task Runtimes for Resource Management on Heterogeneous Infrastructures

Many resource management techniques for task scheduling, energy and carbon efficiency, and cost optimization in workflows rely on a priori task runtime knowledge. Building runtime prediction models on historical data is often not feasible in practice as workflows, their input data, and the cluster infrastructure change. Online methods, on the other hand, which estimate task runtimes on specific machines while the workflow is running, have to cope with a lack of measurements during start-up. Frequently, scientific workflows are executed on heterogeneous infrastructures consisting of machines with different CPU, I/O, and memory configurations, which further complicates runtime prediction because task runtimes differ across machine types.
This paper presents Lotaru, a method for locally predicting the runtimes of scientific workflow tasks before they are executed on heterogeneous compute clusters. Crucially, our approach does not rely on historical data and copes with a lack of training data during start-up. To this end, we use microbenchmarks, reduce the input data to quickly profile the workflow locally, and predict a task’s runtime with a Bayesian linear regression based on the gathered data points from the local workflow execution and the microbenchmarks. Due to its Bayesian approach, Lotaru provides uncertainty estimates that can be used for advanced scheduling methods on distributed cluster infrastructures.
In our evaluation with five real-world scientific workflows, our method outperforms two state-of-the-art runtime prediction baselines and decreases the absolute prediction error by more than 12.5%. In a second set of experiments, the prediction performance of our method, using the predicted runtimes for state-of-the-art scheduling, carbon reduction, and cost prediction, enables results close to those achieved with perfect prior knowledge of runtimes.
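
The sketch below illustrates the kind of prediction described above: a conjugate Bayesian linear regression fitted on a few (input size, runtime) pairs from a downscaled local run, returning a mean runtime together with an uncertainty estimate. The feature choice, prior strength, noise precision, and data values are assumptions for this example and do not reproduce Lotaru itself.

```python
# Minimal Bayesian linear regression sketch (conjugate Gaussian model),
# showing how a runtime prediction can carry an uncertainty estimate.
# Feature values, prior, and noise level are assumed for illustration;
# this is not Lotaru's actual implementation.
import numpy as np

def fit_bayesian_linear(x, y, alpha=1.0, beta=4.0):
    """Posterior over weights for y ~ N(Xw, 1/beta) with prior w ~ N(0, I/alpha)."""
    X = np.column_stack([np.ones(len(x)), x])           # add a bias term
    S_inv = alpha * np.eye(X.shape[1]) + beta * X.T @ X
    S = np.linalg.inv(S_inv)                            # posterior covariance
    m = beta * S @ X.T @ y                              # posterior mean
    return m, S, beta

def predict(x, m, S, beta):
    """Predictive mean and standard deviation for a single feature value x."""
    phi = np.concatenate([[1.0], np.atleast_1d(x)])
    mean = phi @ m
    var = 1.0 / beta + phi @ S @ phi
    return mean, np.sqrt(var)

# Toy data: input size (GB) of a downscaled local run vs. observed runtime (s).
sizes = np.array([0.5, 1.0, 2.0, 4.0])
runtimes = np.array([30.0, 55.0, 110.0, 215.0])
m, S, beta = fit_bayesian_linear(sizes, runtimes)
mean, std = predict(8.0, m, S, beta)                    # extrapolate to 8 GB
print(f"predicted runtime: {mean:.0f}s ± {std:.0f}s")
```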

Validity Constraints for Data Analysis Workflows

Porting a scientific data analysis workflow (DAW) to a cluster infrastructure, a new software stack, or even only a new dataset with some notably different properties is often challenging. Despite the structured definition of the steps (tasks) and their interdependencies during a complex data analysis in the DAW specification, relevant assumptions may remain unspecified and implicit. Such hidden assumptions often lead to crashing tasks without a reasonable error message, poor performance in general, non-terminating executions, or silent wrong results of the DAW, to name only a few possible consequences. Searching for the causes of such errors and drawbacks in a distributed compute cluster managed by a complex infrastructure stack, where DAWs for large datasets typically are executed, can be tedious and time-consuming.
We propose validity constraints (VCs) as a new concept for DAW languages to alleviate this situation. A VC is a constraint specifying some logical conditions that must be fulfilled at certain times for DAW executions to be valid. When defined together with a DAW, VCs help to improve the portability, adaptability, and reusability of DAWs by making implicit assumptions explicit. Once specified, VCs can be checked automatically by the DAW infrastructure, and violations can lead to meaningful error messages and graceful behaviour (e.g., termination or invocation of repair mechanisms). We provide a broad list of possible VCs, classify them along multiple dimensions, and compare them to similar concepts found in related fields. We also provide a first sketch of how VCs can be implemented in existing DAW infrastructures.
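
As a concrete, entirely hypothetical illustration of the concept, the sketch below shows how a workflow infrastructure could evaluate one validity constraint before submitting a task, failing early with a meaningful message instead of letting the task crash later. The constraint, file pattern, and function are invented for this example and are not the paper's proposed syntax.

```python
# Illustrative sketch only: checking a hypothetical validity constraint
# ("every input FASTQ file is non-empty and gzip-compressed") before a
# task is launched, so a violation produces a clear error message.
from pathlib import Path

def check_inputs_are_gzipped_fastq(input_dir):
    """Raise a descriptive error if the example constraint is violated."""
    violations = []
    for f in sorted(Path(input_dir).glob("*.fastq.gz")):
        if f.stat().st_size == 0:
            violations.append(f"{f.name} is empty")
            continue
        with f.open("rb") as handle:
            if handle.read(2) != b"\x1f\x8b":  # gzip magic bytes
                violations.append(f"{f.name} is not gzip-compressed")
    if violations:
        raise ValueError("Validity constraint violated: " + "; ".join(violations))

# check_inputs_are_gzipped_fastq("data/reads")  # call before task submission
```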

How Workflow Engines Should Talk to Resource Managers: A Proposal for a Common Workflow Scheduling Interface

Scientific workflow management systems (SWMSs) and resource managers together ensure that tasks are scheduled on provisioned resources so that all dependencies are obeyed, and some optimization goal, such as makespan minimization, is achieved. In practice, however, there is no clear separation of scheduling responsibilities between an SWMS and a resource manager because there exists no agreed-upon separation of concerns between their different components. This has two consequences. First, the lack of a standardized API to exchange scheduling information between SWMSs and resource managers hinders portability. It incurs costly adaptations when a component should be replaced by a different one (e.g., an SWMS with another SWMS on the same resource manager). Second, due to overlapping functionalities, current installations often actually have two schedulers, both making partial scheduling decisions under incomplete information, leading to suboptimal workflow scheduling.
In this paper, we propose a simple REST interface between SWMSs and resource managers, which allows any SWMS to pass dynamic workflow information to a resource manager, enabling maximally informed scheduling decisions. We provide an implementation of this API as an example, using Nextflow as an SWMS and Kubernetes as a resource manager. Our experiments with nine real-world workflows show that this strategy reduces makespan by up to 25.1% and 10.8% on average compared to the standard Nextflow/Kubernetes configuration. Furthermore, a more widespread implementation of this API would enable leaner code bases, a simpler exchange of components of workflow systems, and a unified place to implement new scheduling algorithms.
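
One way a resource manager can exploit the shared graph is sketched below: rank the tasks that are ready to run by the length of their longest downstream dependency chain, so tasks heading long chains start first. This is a generic workflow-aware heuristic used only to illustrate the benefit of passing DAG information; it is not the specific scheduling algorithm proposed in the paper, and the example DAG is made up.

```python
# Sketch of a workflow-aware priority: order ready tasks by the length of
# their longest downstream path. Generic heuristic for illustration only.
from functools import lru_cache

# Hypothetical DAG: task -> list of successor tasks.
successors = {
    "align": ["sort"],
    "sort": ["call_variants", "stats"],
    "call_variants": ["annotate"],
    "stats": [],
    "annotate": [],
}

@lru_cache(maxsize=None)
def downstream_rank(task):
    """Number of tasks on the longest path that still depends on `task`."""
    return 1 + max((downstream_rank(s) for s in successors[task]), default=0)

ready = ["sort", "stats"]
ready.sort(key=downstream_rank, reverse=True)
print(ready)  # 'sort' first: it heads the longer remaining chain
```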

Research Projects

FONDA

Foundations of Workflows for Large-Scale Scientific Data Analysis

Contact