

I currently focus on online algorithms, in particular scheduling under random-order arrival.
Below are the papers I have worked on.
In my field, conference publications are valued more highly than journal publications.
The author order is alphabetical.
Online Makespan Minimization and List Update are two of the most basic online problems. Both have been studied since the sixties and arise in scheduling and data management. Online problems model interactive settings where online algorithms make decisions without knowing the whole input in advance. This prevents optimal performance on general instances. Instead, online algorithms are judged by their competitive ratio, the maximum ratio by which their performance strays from optimal due to not knowing the future.

The first online problem, List Update, studies the linear linked list. Lists are widely used data structures for managing requests to a small number of items. Frequently requested items are served faster at the front of the list; moving them there requires paid exchanges. The standard model underestimates the cost of paid exchanges compared to practical list implementations. This led to the P^d model, which charges a general exchange cost of d.

In the Makespan Minimization problem, one is tasked to assign jobs to identical and parallel machines. Preemption is not allowed. The objective is to minimize the time it takes to process all jobs, called the makespan. The dual problem, Machine Covering, asks to maximize the time for which all machines are busy.

Classical analyses are, on the one hand, too optimistic in that they neglect errors in the input. They are, on the other hand, too pessimistic in that they present online algorithms with worst-case inputs arranged in worst-case orders. Considering uniformly random orders instead gives the recently popular random-order model. A stronger model of ours considers nearly worst-case orders. To model errors in the input, budgeted uncertainty assumptions are used.

This thesis presents the currently best upper and lower bounds for five online problems. For the List Update problem in the P^d model, the main result is a 2.2442-competitive online algorithm and a lower bound of 2.
For Makespan Minimization in the random-order model, a 1.8478-competitive algorithm is provided, which beats prior adversarial lower bounds already on 24 machines [145]. In another random-order model, the input length n is known in advance. Here, a 1.535-competitive online algorithm beats even the pessimistic adversarial lower bounds for randomized algorithms. For Makespan Minimization under budgeted uncertainty, an upper bound of 2.9052 is complemented by a lower bound of 2. For Machine Covering in the random-order model, an improved algorithm is Õ(∜m)-competitive, while a lower bound of Ω̃(log m) is provided.
PhD-Thesis, Technical University of Munich (TUM)
Download: Tum-Mediathek, from this website.
This paper studies Makespan Minimization in the secretary model. Formally, jobs, specified by their processing times, are presented in a uniformly random order. An online algorithm has to assign each job permanently and irrevocably to one of m parallel and identical machines such that the expected time it takes to process them all, the makespan, is minimized. We give two deterministic algorithms. First, a straightforward adaptation of the semi-online strategy LightLoad provides a very simple algorithm retaining its competitive ratio of 1.75. A new and sophisticated algorithm is 1.535-competitive. These competitive ratios are not only obtained in expectation but, in fact, hold for all but a very tiny fraction of job orders. Classically, online makespan minimization only considers the worst-case order. Here, no competitive ratio below 1.885 for deterministic algorithms and 1.581 using randomization is possible. The best randomized algorithm so far is 1.916-competitive. Our results show that classical worst-case orders are quite rare and pessimistic for many applications. They also demonstrate the power of randomization when compared to much stronger deterministic reordering models. We complement our results by providing the first lower bounds. A competitive ratio obtained on nearly all possible job orders must be at least 1.257. This implies a lower bound of 1.043 for both deterministic and randomized algorithms in the general model.
Susanne Albers, Maximilian Janke
The 41st Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS) 2021
Download: arXiv, from this website.
In the Online Machine Covering problem, jobs, defined by their sizes, arrive one by one and have to be assigned to m parallel and identical machines, with the goal of maximizing the load of the least-loaded machine. In this work, we study the Machine Covering problem in the recently popular random-order model. Here, no extra resources are present; instead, the adversary is weakened in that it can only decide upon the input set while jobs are revealed uniformly at random. This model is particularly relevant to Machine Covering, where lower bounds are usually associated with highly structured input sequences. We first analyze Graham's Greedy strategy in this context and establish that its competitive ratio decreases slightly to Θ(m/log(m)), which is asymptotically tight. Then, as our main result, we present an improved O(∜m·log(m))-competitive algorithm for the problem. This result is achieved by exploiting the extra information coming from the random order of the jobs, using sampling techniques to devise an improved mechanism to distinguish jobs that are relatively large from small ones. We complement this result with a first lower bound showing that no algorithm can have a competitive ratio of O(log(m)/log log(m)) in the random-order model. This lower bound is achieved by studying a novel variant of the Secretary problem, which could be of independent interest.
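Graham's Greedy strategy mentioned above can be sketched in a few lines; this is a minimal illustration of the baseline (assign every job to a currently least-loaded machine), not the improved O(∜m·log(m))-competitive algorithm from the paper:

```python
import heapq

def greedy_cover(jobs, m):
    """Greedy for Machine Covering: each arriving job goes to a
    currently least-loaded machine; the objective is the minimum
    machine load at the end."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return min(loads)

# On the order [1, 1, 2] with 2 machines, Greedy ends with loads
# {3, 1}, so the cover is 1, while the optimum {2, 1+1} covers 2.
print(greedy_cover([1, 1, 2], 2))
```

Already on this tiny instance the arrival order hurts Greedy, which hints at why its adversarial ratio is as large as Θ(m) but improves under random order.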
Susanne Albers, Waldo Gálvez, Maximilian Janke
The 32nd International Symposium on Algorithms and Computation (ISAAC) 2021
We study Online Makespan Minimization with uncertain job processing times. Jobs are assigned to m parallel and identical machines. Preemption is not allowed. Each job has a regular processing time, while up to Γ jobs fail and require additional processing time. The goal is to minimize the makespan, the time it takes to process all jobs if these Γ failing jobs are chosen in the worst possible way. This models real-world applications where acts of nature beyond control have to be accounted for. So far, Makespan Minimization with Budgeted Uncertainty has only been studied as an offline problem. We are the first to provide a comprehensive analysis of the corresponding online problem. We provide a lower bound of 2 for general deterministic algorithms, showing that the problem is more difficult than its special case, classical Online Makespan Minimization. We further analyze Graham's Greedy strategy and show that it is precisely (3 − 2/m)-competitive. We finally provide a more sophisticated deterministic algorithm whose competitive ratio approaches 2.9052.
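For a fixed assignment, the worst-case makespan under budgeted uncertainty is easy to evaluate: failing a job only raises the load of its own machine, so an adversary maximizing the maximum load gains nothing by spreading failures and instead concentrates the Γ largest extra times on a single machine. A minimal sketch under that observation (the data layout and function name are illustrative, not from the paper):

```python
def worst_case_makespan(machines, gamma):
    """Makespan when up to `gamma` jobs fail adversarially.
    `machines` is a list of machines; each machine is a list of
    (regular_time, extra_time) pairs for the jobs assigned to it.
    For each machine, the adversary's best move is to fail the
    `gamma` jobs with the largest extra times on that machine."""
    best = 0.0
    for jobs in machines:
        base = sum(r for r, _ in jobs)
        extras = sorted((e for _, e in jobs), reverse=True)[:gamma]
        best = max(best, base + sum(extras))
    return best

# Two machines, budget Γ = 1: failing the (2,3)-job on machine 0
# yields load 2 + 1 + 3 = 6, worse than anything on machine 1.
print(worst_case_makespan([[(2, 3), (1, 1)], [(4, 0)]], 1))
```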
Susanne Albers, Maximilian Janke
The 17th Algorithm and Data Structures Symposium (WADS) 2021
Download: Conference, arXiv, from this website.
Makespan minimization on identical machines is a fundamental problem in online scheduling. The goal is to assign a sequence of jobs to m identical parallel machines so as to minimize the maximum completion time of any job. Already in the 1960s, Graham showed that Greedy is (2-1/m)-competitive. The best deterministic online algorithm currently known achieves a competitive ratio of 1.9201. No deterministic online strategy can obtain a competitiveness smaller than 1.88. In this paper, we study online makespan minimization in the popular random-order model, where the jobs of a given input arrive as a random permutation. It is known that Greedy does not attain a competitive factor asymptotically smaller than 2 in this setting. We present the first improved performance guarantees. Specifically, we develop a deterministic online algorithm that achieves a competitive ratio of 1.8478. The result relies on a new analysis approach. We identify a set of properties that a random permutation of the input jobs satisfies with high probability. Then we conduct a worst-case analysis of our algorithm for the respective class of permutations. The analysis implies that the stated competitiveness holds not only in expectation but with high probability. Moreover, it provides mathematical evidence that job sequences leading to higher performance ratios are extremely rare, pathological inputs. We complement these results with lower bounds for the random-order model. We show that no deterministic online algorithm can achieve a competitive ratio smaller than 4/3. Moreover, no deterministic online algorithm can attain a competitiveness smaller than 3/2 with high probability.
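Graham's (2-1/m)-competitive Greedy strategy from the abstract above fits in a few lines; this is a minimal sketch of that classical baseline, not the 1.8478-competitive algorithm of the paper:

```python
import heapq

def greedy_makespan(jobs, m):
    """Graham's Greedy list scheduling: assign each arriving job to a
    currently least-loaded machine; (2 - 1/m)-competitive in the
    adversarial model."""
    loads = [0.0] * m          # min-heap of current machine loads
    heapq.heapify(loads)
    makespan = 0.0
    for p in jobs:
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
        makespan = max(makespan, least + p)
    return makespan

# On the order [1, 1, 1, 3] with 2 machines, Greedy reaches
# makespan 4 while the optimum is 3, illustrating its gap.
print(greedy_makespan([1, 1, 1, 3], 2))
```

Under a uniformly random arrival order, bad orders like the one above (all small jobs before the large one) become unlikely, which is exactly the effect the random-order analysis exploits.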
Susanne Albers, Maximilian Janke
The 47th International Colloquium on Automata, Languages and Programming (ICALP) 2020
Download: Journal, Conference, arXiv, from this website.
We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator and by Reingold, Westbrook and Sleator. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i−1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again the lower bounds hold for large d.
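To illustrate the P^d cost model (this is the classical deterministic Move-To-Front rule as an example, not the paper's randomized algorithm; all names are illustrative):

```python
def mtf_cost_pd(initial_list, requests, d):
    """Move-To-Front in the P^d model: accessing the item at position
    i (1-based) costs i in the full cost model; bringing it to the
    front takes i - 1 adjacent paid exchanges, each costing d."""
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)           # 0-based position of the item
        total += i + 1             # access cost (full cost model)
        total += d * i             # i paid swaps to reach the front
        lst.insert(0, lst.pop(i))  # move the accessed item to front
    return total

# List ['a', 'b'], requests 'b', 'b', d = 2: the first access costs
# 2 + 2*1 = 4 and moves 'b' to the front; the second costs 1.
print(mtf_cost_pd(['a', 'b'], ['b', 'b'], 2))
```

For large d the swap cost d·i dominates the access cost i + 1, which is why exchange costs, underestimated in the standard model, become the central quantity in P^d.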
Susanne Albers, Maximilian Janke
The 37th International Symposium on Theoretical Aspects of Computer Science (STACS) 2020
Download: Conference, from this website.