Biography

I am Fabian Lehmann, a PhD candidate in computer science at the Knowledge Management in Bioinformatics Lab at the Humboldt-Universität zu Berlin. My research is funded through FONDA, a Collaborative Research Center of the German Research Foundation (DFG).

Since my bachelor's studies, I have been fascinated by complex distributed systems; I love to understand and push their limits. In my PhD research, I focus on workflow engines, improving the execution of distributed workflows that analyze large amounts of data. In particular, my goal is to improve scheduling and data management. To ground this work in real-world requirements, I work closely with the Earth Observation Lab at the Humboldt-Universität zu Berlin.

Interests
  • Distributed Systems
  • Scientific Workflows
  • Workflow Scheduling
Education
  • PhD candidate (Dr.-Ing.) — Thesis submitted (not yet defended; degree pending), 2026

    Thesis: Adaptive Scheduling of Dynamic Workflows

    Humboldt-Universität zu Berlin

  • Master of Science in Information Systems Management, 2020

    Thesis: Design and Implementation of a Processing Pipeline for High Resolution Blood Pressure Sensor Data

    Technical University of Berlin

  • Bachelor of Science in Information Systems Management, 2019

    Thesis: Performance-Benchmarking in Continuous-Integration-Processes

    Technical University of Berlin

  • Abitur (comparable to A Levels), 2015

    Hannah-Arendt-Gymnasium (Berlin)

Professional Experience

Knowledge Management in Bioinformatics Lab (Humboldt-Universität zu Berlin)
PhD candidate (computer science)
Nov 2020 – Present, Berlin, Germany
In my PhD studies, I focus on improving the execution of large scientific workflows that process hundreds of gigabytes of data.

DAI-Labor (Technical University of Berlin)
Student Assistant
May 2018 – Oct 2020, Berlin, Germany
In this student job, I worked with time-series data in the DIGINET-PS project; for example, we predicted parking space occupancy.

University of Oxford
GeoTripNet – Case Study
Oct 2019 – Mar 2020, Oxford, England, United Kingdom
For this case study, we crawled restaurant reviews on Google Maps to analyze the relations between different restaurants and to examine gentrification in Berlin's districts. One challenge was processing and analyzing the large amount of data in real time.

Einstein Center Digital Future
Fog Computing Project
Apr 2019 – Sep 2020, Berlin, Germany
The project aimed to analyze bicycle rides recorded with SimRa. We developed a distributed analysis pipeline and visualized the track information in an interactive web application, which allowed us to classify risk hotspots along the tracks of Berlin's cyclists.

Conrad Connect
Application Systems Project
Oct 2017 – Mar 2018, Berlin, Germany
For Conrad Connect, we analyzed hundreds of gigabytes of IoT data. Moreover, I uncovered security vulnerabilities in their software.

Reflect IT Solutions GmbH
Semester break work
Mar 2016 – Apr 2016 & Sep 2016 – Oct 2016, Berlin, Germany
During my semester breaks, I helped to develop the backend of a construction progress management system.

SPP Schüttauf und Persike Planungsgesellschaft mbH
Gap work between school and studies
May 2015 – Sep 2015, Berlin, Germany
Before starting my bachelor's studies, I worked for a few months helping to manage a large construction project, gaining experience in dealing with different trades.

Computer skills

A small excerpt

Java
Python
Docker
Kubernetes
Spring Boot
LaTeX
SQL
React
JavaScript
Nextflow
Haskell
Excel

Software

Common Workflow Scheduler

Resource managers can enhance their scheduling capabilities by leveraging the Common Workflow Scheduler interface to receive workflow graph information from workflow systems. This enables the resource manager’s scheduler to make more advanced decisions.
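To make the idea concrete, here is a rough sketch of what such an exchange between a workflow system and a resource manager could look like. The type and method names (TaskNode, WorkflowAwareScheduler, registerWorkflow, and so on) are hypothetical illustrations of the concept, not the actual Common Workflow Scheduler API.

    // Hypothetical sketch: a workflow system hands its task graph to the resource
    // manager so the scheduler can plan with knowledge of upcoming tasks.
    // Names and signatures are illustrative, not the real Common Workflow Scheduler API.
    import java.util.List;
    import java.util.Map;

    /** Information a workflow system could pass along for each task in the graph. */
    record TaskNode(String taskId, List<String> dependsOn, long estimatedInputBytes) {}

    interface WorkflowAwareScheduler {

        // Register the full DAG of a workflow run, so future tasks and dependencies are known.
        void registerWorkflow(String runId, List<TaskNode> dag);

        // Announce that a task became ready; the scheduler places it using the graph information.
        void submitReadyTask(String runId, String taskId, Map<String, String> resourceRequests);

        // Report a finished task, so the scheduler updates its view of the remaining graph.
        void reportTaskFinished(String runId, String taskId, boolean success);
    }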

Benchmark Evaluator

The Benchmark Evaluator is a plugin for the Jenkins automation server to load benchmark data and decide on the success of a build accordingly.
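As a rough illustration of the kind of decision such a plugin makes, the sketch below fails a build when any benchmark measurement exceeds a threshold. The file name, format, and threshold are made up for the example and do not reflect the plugin's actual configuration or code.

    // Illustrative sketch only: read one benchmark measurement per line and fail the
    // build if any value exceeds a threshold. Not the Benchmark Evaluator's actual code.
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class BenchmarkGate {

        // Returns true if every measured runtime (in ms) stays at or below the threshold.
        static boolean buildPasses(Path resultsFile, double thresholdMs) throws Exception {
            return Files.readAllLines(resultsFile).stream()
                    .map(String::trim)
                    .filter(line -> !line.isEmpty())
                    .mapToDouble(Double::parseDouble)
                    .allMatch(ms -> ms <= thresholdMs);
        }

        public static void main(String[] args) throws Exception {
            if (!buildPasses(Path.of("benchmark-results.txt"), 250.0)) {
                System.err.println("Benchmark regression detected, failing the build.");
                System.exit(1); // a non-zero exit code marks the CI step as failed
            }
        }
    }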

Publications

Differences in Workflow Systems: A Use-Case Driven Comparison

Scientific Workflow Management Systems (SWMS) are software systems designed to enable the scalable, distributed, and reproducible execution of complex data analysis workflows on large datasets. Due to the importance of such analyses, a plethora of different systems have been built over the last decades. Although all of them share the same core functions of allowing workflow specification, controlling task dependencies, and steering the correct execution of workflows on a given computational infrastructure, they differ notably in the specific implementations of these functionalities and often offer additional features that can benefit workflow developers. The differences are often subtle, yet impactful, and often lack proper documentation leading to unpleasant surprises when trying to port a workflow developed for one SWMS to another or when re-implementing a stand-alone application with an SWMS. In this chapter, we want to highlight some of the main differences between workflow systems by comparing the properties and features of four SWMSs: Nextflow, Airflow, Argo, and Snakemake. The comparison is conducted using SWMSs to reimplement a complex workflow from the remote sensing domain that analyzes satellite images to study the development of vegetation over the years on the island of Crete. We find and describe important distinctions between these systems in numerous aspects, including file handling, scheduling, parallelization strategies, and language elements. We believe that awareness of these differences and the difficulties they might incur in a specific setting is important for making informed decisions when choosing an SWMS for a new research project.

Optimizing Workflow Execution by Cost-effective I/O Monitoring, Bottleneck Analysis, and Proactive Resource Assignment

Modern workflow systems typically rely on static resource allocation that does not change during the execution and is often based on rough user resource estimates. As a result, resource allocation and scheduling may underprovision, slowing down the execution, or overprovision, wasting valuable resources.
Leveraging detailed knowledge regarding a task’s behavior, we can optimize runtime, resource consumption, and execution reliability. For that purpose, an innovative approach to modeling tasks, workflows, and their execution is used as part of the scheduling. Contrary to the typical workflow execution, this modeling is done on sub-task granularity to be able to proactively allocate parts of the resources just in time.
Since the accuracy of the model and the resulting predictions cannot be free of errors/noise, the decision-making for resource allocation and scheduling is repeated continuously during execution using updated execution metrics from monitoring data of the task. After the execution of a task, monitoring data from executions is used to refine the model and reduce the error in future predictions. This, in turn, enables a more efficient execution of the task in the current workflow execution and potentially for other workflows running on different infrastructures using this task.
This chapter presents a comprehensive workflow optimization approach with a focus on data intensive workflows. It integrates past work by the authors on specific aspects of the methodology with novel ideas that will be explored further in future research.
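To give a flavor of the feedback-loop idea (predict, monitor, refine), the toy sketch below adjusts a memory estimate for a task from observed peak usage. It is a deliberately simplified illustration of the general principle, not the modeling approach developed in the chapter; all class and parameter names are invented for the example.

    // Toy illustration of feedback-driven resource assignment: predict a task's memory
    // need, observe the actual usage, and refine the estimate for future instances.
    public class AdaptiveMemoryEstimator {

        private double estimateMb;          // current prediction for this task type
        private final double learningRate;  // how strongly new observations adjust the estimate
        private final double safetyFactor;  // overprovisioning margin to reduce failure risk

        public AdaptiveMemoryEstimator(double initialEstimateMb, double learningRate, double safetyFactor) {
            this.estimateMb = initialEstimateMb;
            this.learningRate = learningRate;
            this.safetyFactor = safetyFactor;
        }

        // Resource request used when the next task instance is scheduled.
        public double nextRequestMb() {
            return estimateMb * safetyFactor;
        }

        // Update the estimate with monitoring data from a finished task instance.
        public void observeUsage(double observedPeakMb) {
            estimateMb += learningRate * (observedPeakMb - estimateMb);
        }

        public static void main(String[] args) {
            AdaptiveMemoryEstimator est = new AdaptiveMemoryEstimator(4096, 0.5, 1.2);
            double[] observedPeaks = {2500, 2700, 2600}; // monitoring data from past instances
            for (double peak : observedPeaks) {
                System.out.printf("request %.0f MB, observed peak %.0f MB%n", est.nextRequestMb(), peak);
                est.observeUsage(peak);
            }
            System.out.printf("refined request for next instance: %.0f MB%n", est.nextRequestMb());
        }
    }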

Portable and Scalable Workflows for Earth Observation Data Analysis with Nextflow

The amount of satellite data in Earth Observation (EO) is rapidly increasing, providing scientists with new opportunities to study how climate, weather events, and direct anthropogenic factors affect the Earth’s surface. However, the urgency for more accurate and holistic studies drives growth in the size of analyzed data sets (i.e., higher spatial resolution, larger study areas, increasing temporal depth) and complexity of analysis, leading to more complex and more data-intensive workflows. Analyzing such large data sets requires distributed computing resources, which are hard to program and often require specialized expertise to achieve satisfying runtimes. As a result, Earth Observation data to date are often underused.
Recently, scientific workflow management systems (SWMS) emerged as a new programming paradigm for complex analysis pipelines over large data sets executed on distributed compute resources. They promise simple development, improved portability across systems, automatic scalability on different infrastructures, easier reuse of workflows, and reproducibility of analysis results. As such, using SWMS for EO analysis can boost the more efficient use of EO data and facilitate the dissemination of standardized data pre-processing and processing pipelines.
In this book chapter, we delve into the application of SWMS for Earth Observation. Specifically, we describe three research projects in which we used Nextflow, a popular open source scientific workflow engine, for programming portable and scalable data analysis pipelines. To this end, we describe SWMS in general and specifically Nextflow regarding their suitability for EO data analysis, and give practical examples to highlight advantages and challenges when using SWMS for analyzing large sets of satellite images.

Research Projects

FONDA

Foundations of Workflows for Large-Scale Scientific Data Analysis

Contact