Fabian Lehmann

Ph.D. candidate

Humboldt University of Berlin

Biography

I am Fabian Lehmann, a Ph.D. candidate in computer science at the Knowledge Management in Bioinformatics Lab at the Humboldt University of Berlin. My research is funded by FONDA, a Collaborative Research Center of the German Research Foundation (DFG).

Since my bachelor studies, I have been fascinated by complex distributed systems and love to understand and overcome their limits. In my Ph.D. research, I focus on workflow engines, improving the execution of distributed workflows that analyze large amounts of data. In particular, my goal is to improve scheduling and data management. To this end, I work closely with the Earth Observation Lab at the Humboldt University of Berlin to understand real-world requirements.

Interests
  • Distributed Systems
  • Scientific Workflows
  • Workflow Scheduling
Education
  • Master of Science in Information Systems Management, 2020

    Thesis: Design and Implementation of a Processing Pipeline for High Resolution Blood Pressure Sensor Data

    Technical University of Berlin

  • Bachelor of Science in Information Systems Management, 2019

    Thesis: Performance-Benchmarking in Continuous-Integration-Processes

    Technical University of Berlin

  • Abitur (comparable to A Levels), 2015

    Hannah-Arendt-Gymnasium (Berlin)

Professional Experience

  • Knowledge Management in Bioinformatics Lab (Humboldt University of Berlin)

    Ph.D. candidate (computer science), Nov 2020 – Present, Berlin, Germany

    In my Ph.D. studies, I focus on improving the execution of large scientific workflows that process hundreds of gigabytes of data.

  • DAI-Labor (Technical University of Berlin)

    Student Assistant, May 2018 – Oct 2020, Berlin, Germany

    In my student job, I worked with time-series data in the DIGINET-PS project, for example predicting parking space occupancy.

  • University of Oxford

    GeoTripNet Case Study, Oct 2019 – Mar 2020, Oxford, United Kingdom

    For the case study, we crawled restaurant reviews on Google Maps to analyze the relations between different restaurants and to examine gentrification in Berlin's districts. One challenge was processing and analyzing the large amount of data in real time.

  • Einstein Center Digital Future

    Fog Computing Project, Apr 2019 – Sep 2020, Berlin, Germany

    The project aimed to analyze bicycle rides recorded by SimRa. To this end, we developed a distributed analysis pipeline and visualized the track information on an interactive web map, which allowed us to identify risk hotspots along Berlin cyclists' routes.

  • Conrad Connect

    Application Systems Project, Oct 2017 – Mar 2018, Berlin, Germany

    For Conrad Connect, we analyzed hundreds of gigabytes of IoT data. In addition, I uncovered security vulnerabilities in their software.

  • Reflect IT Solutions GmbH

    Semester Term Work, Mar 2016 – Apr 2016 & Sep 2016 – Oct 2016, Berlin, Germany

    During my semester work terms, I helped to develop the backend of a construction progress management system.

  • SPP Schüttauf und Persike Planungsgesellschaft mbH

    Gap work between school and studies, May 2015 – Sep 2015, Berlin, Germany

    Before starting my bachelor studies, I worked for a few months helping to manage a large construction project, gaining experience in dealing with different trades.

Computer skills

A small excerpt

Java
Python
Docker
Kubernetes
Spring Boot
LaTeX
SQL
React
JavaScript
Nextflow
Haskell
Excel

Software

Benchmark Evaluator

The Benchmark Evaluator is a plugin for the Jenkins automation server to load benchmark data and decide on the success of a build accordingly.
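The decision the plugin automates can be illustrated with a small sketch: load the benchmark results of a build, compare them against a stored baseline, and fail the build if any metric regresses beyond a tolerance. The Python sketch below only illustrates this idea; the CSV layout, column names, and tolerance are hypothetical and are not the plugin's actual (Java-based) API.

# Illustrative sketch only: the real Benchmark Evaluator is a Jenkins plugin;
# the CSV layout, column names, and tolerance below are hypothetical.
import csv
import sys

TOLERANCE = 0.05  # tolerate up to a 5% slowdown against the baseline

def load_runtimes(path: str) -> dict[str, float]:
    # Read benchmark names and runtimes (seconds) from a CSV file.
    with open(path, newline="") as f:
        return {row["benchmark"]: float(row["runtime_s"]) for row in csv.DictReader(f)}

def build_succeeds(results_path: str, baseline_path: str) -> bool:
    results = load_runtimes(results_path)
    baseline = load_runtimes(baseline_path)
    for name, base_runtime in baseline.items():
        current = results.get(name)
        # A missing benchmark or a regression beyond the tolerance fails the build.
        if current is None or current > base_runtime * (1 + TOLERANCE):
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if build_succeeds("results.csv", "baseline.csv") else 1)

In a build pipeline, such a script would run after the benchmarks, and its exit code would decide the build result.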

Publications

Reshi: Recommending Resources for Scientific Workflow Tasks on Heterogeneous Infrastructures

Scientific workflows typically comprise a multitude of different processing steps which often are executed in parallel on different partitions of the input data. These executions, in turn, must be scheduled on the compute nodes of the computational infrastructure at hand. This assignment is complicated by the facts that (a) tasks typically have highly heterogeneous resource requirements and (b) in many infrastructures, compute nodes offer highly heterogeneous resources. In consequence, predictions of the runtime of a given task on a given node, as required by many scheduling algorithms, are often rather imprecise, which can lead to sub-optimal scheduling decisions. We propose Reshi, a method for recommending task-node assignments during workflow execution that can cope with heterogeneous tasks and heterogeneous nodes. Reshi approaches the problem as a regression task, where task-node pairs are modeled as feature vectors over the results of dedicated micro benchmarks and past task executions. Based on these features, Reshi trains a regression tree model to rank and recommend nodes for each ready-to-run task, which can be used as input to a scheduler. For our evaluation, we benchmarked 27 AWS machine types using three representative workflows. We compare Reshi’s recommendations with three state-of-the-art schedulers. Our evaluation shows that Reshi outperforms HEFT by a mean makespan reduction of 7.18% and 18.01% assuming a mean task runtime prediction error of 15%.
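The recommendation step described above can be sketched in a few lines: fit a regression tree on feature vectors built from micro-benchmark scores and past task executions, then rank candidate nodes for a ready-to-run task by their predicted runtime. The following Python sketch only illustrates this idea with made-up features and numbers; it is not the implementation evaluated in the paper.

# Toy illustration of regression-tree-based node ranking; features and numbers
# are invented for the example, not taken from Reshi's implementation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Each row describes one past task-node execution:
# [task input size (GB), node CPU benchmark score, node I/O benchmark score]
X_train = np.array([
    [10.0, 1.0, 0.6],
    [10.0, 0.5, 0.9],
    [50.0, 1.0, 0.6],
    [50.0, 0.5, 0.9],
])
y_train = np.array([120.0, 200.0, 310.0, 240.0])  # observed runtimes in seconds

model = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)

# Rank candidate nodes for a ready-to-run task by the predicted runtime.
task_input_gb = 25.0
nodes = {"node-a": [1.0, 0.6], "node-b": [0.5, 0.9]}
ranking = sorted(nodes, key=lambda n: model.predict([[task_input_gb, *nodes[n]]])[0])
print("recommended node order:", ranking)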

Lotaru: Locally Estimating Runtimes of Scientific Workflow Tasks in Heterogeneous Clusters

Many scientific workflow scheduling algorithms need to be informed about task runtimes a-priori to conduct efficient scheduling. In heterogeneous cluster infrastructures, this problem becomes aggravated because these runtimes are required for each task-node pair. Using historical data is often not feasible as logs are typically not retained indefinitely and workloads as well as infrastructures change. In contrast, online methods, which predict task runtimes on specific nodes while the workflow is running, have to cope with the lack of example runs, especially during the start-up. In this paper, we present Lotaru, a novel online method for locally estimating task runtimes in scientific workflows on heterogeneous clusters. Lotaru first profiles all nodes of a cluster with a set of short-running and uniform microbenchmarks. Next, it runs the workflow to be scheduled on the user’s local machine with drastically reduced data to determine important task characteristics. Based on these measurements, Lotaru learns a Bayesian linear regression model to predict a task’s runtime given the input size and finally adjusts the predicted runtime specifically for each task-node pair in the cluster based on the micro-benchmark results. Due to its Bayesian approach, Lotaru can also compute robust uncertainty estimates and provides them as an input for advanced scheduling methods. Our evaluation with five real-world scientific workflows and different datasets shows that Lotaru significantly outperforms the baselines in terms of prediction errors for homogeneous and heterogeneous clusters.
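The runtime model at the core of Lotaru can likewise be sketched briefly: a Bayesian linear regression from input size to runtime that also yields an uncertainty estimate. The sketch below uses made-up numbers and omits both the downscaled local profiling runs and the per-node adjustment via micro-benchmark results, so it is an illustration rather than the paper's implementation.

# Illustrative only: Bayesian linear regression from input size to runtime with
# an uncertainty estimate; numbers are invented, and Lotaru's per-node
# adjustment via micro-benchmarks is not shown.
import numpy as np
from sklearn.linear_model import BayesianRidge

# Runtimes of one task, measured locally on drastically reduced input data.
input_gb = np.array([[0.1], [0.2], [0.5], [1.0]])
runtime_s = np.array([4.0, 7.5, 19.0, 41.0])

model = BayesianRidge().fit(input_gb, runtime_s)

# Predict the runtime for the full-size input; the standard deviation can serve
# as an uncertainty estimate for a scheduler.
mean, std = model.predict(np.array([[20.0]]), return_std=True)
print(f"predicted runtime: {mean[0]:.0f} s ± {std[0]:.0f} s")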

Geoflow - Novel Workflow Implementations To Facilitate Big EO Data Workflows in Nextflow

Research Projects

FONDA

Foundations of Workflows for Large-Scale Scientific Data Analysis

Contact