
NWO-JSPS joint seminar: Computations on Networks with a Tree-Structure: From Theory to Practice

Sep 10 - Sep 14



Dynamic programming based on tree decompositions is one of the most successful approaches for solving important hard problems on networks, with applications across several areas (including operations research and computational biology). In theory, Bodlaender’s algorithm and Courcelle’s theorem yield, for a large collection of problems, algorithms that run in time linear in the number of vertices, assuming the graph or network has the appropriate structure (i.e., the treewidth is bounded); a condition that appears to hold for many networks from quite varied applications. However, while these results and the corresponding algorithms play a central role in many theoretical / foundational studies, and are seen as cornerstones in the field of parameterized algorithms (“fixed-parameter tractability”), their direct practical use is under debate: while asymptotically optimal, the constant factors hidden in the big-Oh notation are huge. Thus, a worthy topic of study is how the theoretical results can be turned into useful practical ones. This calls for the development of new algorithmic insights and algorithm engineering studies, alongside further insights into the structure of networks that admit this type of dynamic programming algorithm. This seminar brings together experts from Japan and the Netherlands to initiate new studies towards significant progress both in practice and in theory. In particular, the seminar focuses on the following research questions:

  1. Parallel computations and positive instance driven computation of treewidth
  2. Practical FPT algorithms for computing treewidth and lower bounds
  3. Computation of centrality measures of networks



Hans Bodlaender Utrecht University / TU Eindhoven
Bart Jansen TU Eindhoven
Erik Jan van Leeuwen Utrecht University
Jesper Nederlof TU Eindhoven
Yota Otachi Kumamoto University
Tom van der Zanden Utrecht University



Link to Time Schedule



Remy Belmonte (University of Electro-Communications, Japan)

Parameterized (Approximate) Defective Coloring
In Defective Coloring we are given a graph G=(V,E) and two integers chi_d, Delta^* and are asked whether we can partition V into chi_d color classes, so that each class induces a graph of maximum degree at most Delta^*. We investigate the complexity of this generalization of Coloring with respect to several well-studied graph parameters, and show that the problem is W[1]-hard parameterized by treewidth, pathwidth, tree-depth, or feedback vertex set, if chi_d=2. As expected, this hardness can be extended to larger values of chi_d for most of these parameters, with one surprising exception: we show that the problem is FPT parameterized by feedback vertex set for any chi_d != 2, and hence 2-coloring is the only hard case for this parameter. In addition to the above, we give an ETH-based lower bound for treewidth and pathwidth, showing that no algorithm can solve the problem in time n^{o(pw)}, essentially matching the complexity of an algorithm obtained with standard techniques. We complement these results by considering the problem’s approximability and show that, with respect to Delta^*, the problem admits an algorithm which for any epsilon>0 runs in time (tw/epsilon)^{O(tw)} and returns a solution with exactly the desired number of colors that approximates the optimal Delta^* within a factor (1+epsilon). We also give a (tw)^{O(tw)} algorithm which achieves the desired Delta^* exactly while 2-approximating the minimum value of chi_d. We show that this is close to optimal, by establishing that no FPT algorithm can (under standard assumptions) achieve better than a 3/2-approximation of chi_d, even when an extra constant additive error is also allowed.
(joint work with Michael Lampis and Valia Mitsou)
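To make the problem statement concrete, the following toy sketch (our own illustration, not part of the talk; all names are invented) decides Defective Coloring by brute force on very small graphs:

```python
from itertools import product

def is_defective_coloring(n, edges, coloring, delta_star):
    """Check that every colour class induces a graph of maximum degree <= delta_star."""
    deg = [0] * n
    for u, v in edges:
        if coloring[u] == coloring[v]:  # monochromatic edge: counts towards the defect
            deg[u] += 1
            deg[v] += 1
    return max(deg) <= delta_star

def defective_coloring_exists(n, edges, chi_d, delta_star):
    """Brute force over all chi_d^n colourings; exponential, tiny graphs only."""
    return any(is_defective_coloring(n, edges, c, delta_star)
               for c in product(range(chi_d), repeat=n))

# The 5-cycle needs 3 colours as a proper coloring (delta_star = 0),
# but 2 colour classes suffice if each class may induce maximum degree 1.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```

On the 5-cycle this confirms the gap between proper coloring and defective coloring: two classes suffice when Delta^* = 1, but not when Delta^* = 0.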


Hans Bodlaender (Utrecht & Eindhoven, Netherlands)

Treewidth and reduction algorithms
Graph reduction is a useful technique to obtain fast algorithms for recognizing graphs of bounded treewidth, and solving problems on graphs of bounded treewidth. In this talk, a number of results are surveyed, including techniques to compute treewidth through reduction, and deriving reduction rules for finite index properties and applications to kernelization. The talk ends with a recent application: given a knot diagram of treewidth two, we can test in linear time whether it represents the unknot.
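As a hedged illustration of the reduction paradigm (not the specific rule systems from the talk; function names are our own), the classical rules recognizing graphs of treewidth at most two can be sketched as follows: a simple graph has treewidth at most two exactly when repeatedly removing vertices of degree at most one and bypassing vertices of degree two reduces it to the empty graph.

```python
def has_treewidth_at_most_2(n, edges):
    """Reduction-rule recognizer: repeatedly delete vertices of degree <= 1
    and bypass degree-2 vertices (connecting their two neighbours directly).
    The graph has treewidth <= 2 iff it reduces to the empty graph."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            d = len(adj[v])
            if d <= 1:
                for u in adj[v]:          # detach v from its (single) neighbour
                    adj[u].discard(v)
                adj[v].clear()
                alive.discard(v)
                changed = True
            elif d == 2:
                u, w = adj[v]             # bypass v: connect its neighbours
                adj[u].discard(v)
                adj[w].discard(v)
                adj[u].add(w)
                adj[w].add(u)
                adj[v].clear()
                alive.discard(v)
                changed = True
    return not alive
```

For example, the 4-cycle reduces to a triangle and then to nothing, while K4 (treewidth three) offers no applicable rule.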


Huib Donkers (TU Eindhoven)

F-minor-free deletion parameterised by a treewidth modulator only admits a polynomial Turing kernel for restricted F
For a fixed family of graphs F, the F-minor-free deletion problem takes as input a graph G and integer k and the objective is to determine whether there exists a set S ⊆ V(G) of size at most k such that G-S does not contain any graph of F as a minor. Many classical graph theoretic problems can be expressed as an F-minor-free deletion problem by carefully choosing F. For example when F contains a single K₃ graph, F-minor-free deletion is equivalent to the Feedback Vertex Set problem.
For a parameterised problem with parameter l, a Turing kernel is a polynomial time algorithm that can solve the problem when given access to an oracle that can solve small instances of the original problem. If the size of these queries can be bounded by a polynomial of l then we say this is a polynomial Turing kernel.
We consider the F-minor-free deletion problem parameterised by deletion distance to treewidth t, where t is the minimum treewidth of all graphs in F. We show that F-minor-free deletion admits a polynomial Turing kernel if F contains a P₃-subgraph-free graph, but not for any other F unless certain complexity-theoretic assumptions fail. The same can be said for F-subgraph-free deletion, where the objective is to find a vertex set S of size at most k such that G-S does not contain any graph of F as a subgraph.


Alex Grigoriev (Maastricht University)

A survey on possible and impossible attempts to solve the treewidth problem via ILPs
We survey a number of integer programming formulations for the pathwidth and treewidth problems. The attempts to find good formulations for these problems span a period of 15 years, yet without any true success. Nevertheless, some formulations provide potentially useful frameworks for attacking these notorious problems. Some others are just curious and interesting fruits of mathematical imagination.


Yoichi Iwata (National Institute of Informatics, Japan)

On the Power of Tree-Depth for Fully Polynomial FPT Algorithms
There are many classical problems in P whose time complexities have not been improved over the past decades.
Recent studies of “Hardness in P” have revealed that, for several such problems, the current fastest algorithm is the best possible under some complexity assumptions. To bypass this difficulty, the concept of “FPT inside P” has been introduced. For a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in k^{O(1)}n^{c’} time for a parameter k and a constant c'<c. In this paper, we investigate the complexity of graph problems in P parameterized by tree-depth, a graph parameter related to tree-width. We show that a simple divide-and-conquer method can solve many graph problems, including Weighted Matching, Negative Cycle Detection, Minimum Weight Cycle, Replacement Paths, and 2-hop Cover, in O(td m) time or O(td (m + n log n)) time, where td is the tree-depth of the input graph. Because any graph of tree-width tw has tree-depth at most (tw+1) log_2 n, our algorithms also run in O(tw m log n) time or O(tw (m + n log n) log n) time. These results match or improve the previous best algorithms parameterized by tree-width. In particular, we resolve the open problem of a fully polynomial FPT algorithm for Weighted Matching parameterized by tree-width, posed by Fomin et al. (SODA 2017).
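As a toy illustration of the divide-and-conquer principle behind these results (our own sketch, not code from the paper), the tree-depth recursion can be evaluated on paths: pick a root vertex, recurse on the remaining pieces, and add one. Splitting in the middle shows why paths, and more generally bounded-treewidth graphs, have logarithmic tree-depth.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def treedepth_path(n):
    """Tree-depth of the n-vertex path via the standard recursion:
    remove one vertex (the root of an elimination tree), recurse on
    the two remaining subpaths, and add 1. Picking the middle vertex
    is optimal, giving the closed form ceil(log2(n + 1))."""
    if n == 0:
        return 0
    return 1 + min(max(treedepth_path(i), treedepth_path(n - 1 - i))
                   for i in range(n))
```

The resulting values grow logarithmically, matching the bound used in the abstract that a graph of tree-width tw has tree-depth at most (tw+1) log_2 n.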


Bart Jansen (TU Eindhoven)

Lower Bounds for Dynamic Programming on Planar Graphs of Bounded Cutwidth
Many combinatorial problems can be solved in time O^*(c^tw) on graphs of treewidth tw, for a problem-specific constant c. In several cases, matching upper and lower bounds on c are known based on the Strong Exponential Time Hypothesis (SETH). In this paper we investigate the complexity of solving problems on graphs of bounded cutwidth, a graph parameter that takes larger values than treewidth. We strengthen earlier treewidth-based lower bounds to show that, assuming SETH, Independent Set cannot be solved in O^*((2-eps)^cutw) time, and Dominating Set cannot be solved in O^*((3-eps)^cutw) time. By designing a new crossover gadget, we extend these lower bounds even to planar graphs of bounded cutwidth or treewidth. Hence planarity does not help when solving Independent Set or Dominating Set on graphs of bounded width. This sharply contrasts the fact that in many settings, planarity allows problems to be solved much more efficiently.
(joint work with Bas van Geffen, Arnoud de Kroon, and Rolf Morel)


Mark Jones

Constructing tree-like phylogenetic networks
A Phylogenetic Tree depicts the evolutionary history of a set of species. A number of problems in phylogenetics involve the reconciliation of conflicting phylogenetic trees into a Phylogenetic Network, a directed acyclic graph in which hybridization events (vertices of in-degree at least 2) can be used to explain the differences between trees. These problems often aim to find a suitable network with the fewest hybridization events (i.e., as close to being a tree as possible).
While Hybridization Number (and the related measure Network Level) can be viewed as a restricted version of treewidth, the traditional approaches of dynamic programming on a tree decomposition do not directly apply to these problems, since the network is not known at the outset.
In this talk I will discuss some recent results in this area, and raise some open questions concerning the potential for meta-theorems on these sorts of problems.


Steven Kelk (Maastricht University)

Treewidth as a general instrument for computing (and capturing) phylogenetic dissimilarity…?
Phylogenetics is the branch of computational biology concerned with the construction of evolutionary trees (or, more generally, networks) which plausibly represent the evolutionary history of a set of species X. For biological or methodological reasons it is not uncommon to obtain several, topologically distinct evolutionary trees on a given dataset. It is natural then to compare these trees and to ask: how different are they really? Many dissimilarity/distance measures have been proposed for this purpose, but often these measures are NP-hard to compute. However, it has been observed that if two or more trees are merged into an auxiliary graph structure known as the “display graph”, the treewidth of the display graph is often bounded as a function of more commonly encountered phylogenetic dissimilarity measures. This raises a number of questions. First, can the bounded treewidth of the display graph be exploited algorithmically to obtain FPT algorithms for NP-hard phylogenetic dissimilarity measures? Second, can the treewidth of the display graph itself function as a measure of phylogenetic dissimilarity, and if so how does it behave under the action of FPT kernelization rules typically used to reduce phylogenetic trees in size? In this talk I give an overview of recent results in this area, obtained in conjunction with various co-authors. The talk addresses both theoretical issues (relating to grid minors, forbidden minors, brambles, MSO, etc.) and practical issues (the prospects for rolling out treewidth-based dynamic programs in practice). The talk assumes no prior knowledge of phylogenetics or computational biology.


Sándor Kisfaludi-Bak

A framework for ETH-tight algorithms in geometric intersection graphs
We describe a treewidth-based framework for subexponential algorithms that applies to many classical NP-complete graph problems in geometric intersection graphs. The framework is applicable in Euclidean space of dimension at least 2, where the intersection graph is defined by similarly sized fat objects. We get the “square root phenomenon” running time exp(O(n^{1-1/d})) for example to the following problems:
Hamiltonian Cycle, Independent Set, Steiner Tree, and (Connected) Dominating Set. Our framework makes no restrictions on the density of the objects defining the intersection graph, and it can often be applied to the graphs themselves, without having the geometric representation. We will also introduce some basic intuition behind the lower bound framework, and see some exciting new developments in hyperbolic space.


Yasuaki Kobayashi (Kyoto University, Japan)

Towards Subexponential Parameterized Algorithms for Graph Drawing Problems
We consider two graph drawing problems: the two-layer crossing minimization problem and the one-page crossing minimization problem. The common objective of those problems is to decide whether there is a (problem specific) drawing with at most k crossings. It is known that a typical win-win approach can be applied to those problems:
we can prove that every feasible instance has pathwidth/treewidth at most f(k) and there are explicit dynamic programming algorithms on bounded width decompositions to compute an optimal drawing. Fortunately, we can prove that the function f(k) is O(\sqrt{k}), but the running time of the known algorithms still has 2^{O(k \log k)} factors.
In this talk, I will present an overview of the current state of those problems, and the goal of this talk is to discuss the subexponential fixed-parameter tractability of those problems. (This work is joint with Hiromu Ohtsuka and Hisao Tamaki.)


Yusuke Kobayashi (Kyoto University, Japan)

Linear min-max relation between the treewidth of H-minor-free graphs and its largest grid minor
A key theorem in algorithmic graph-minor theory is a min-max relation between the treewidth of a graph and its largest grid minor.
This min-max relation is a keystone of the Graph Minor Theory of Robertson and Seymour.
Demaine and Hajiaghayi proved a remarkable linear min-max relation for graphs excluding any fixed minor $H$: every $H$-minor-free graph of treewidth at least $c_H \cdot r$ has an $r \times r$-grid minor for some constant $c_H$.
However, as they pointed out, a major issue with this theorem is that their proof heavily depends on Graph Minor Theory, most of which lacks explicit bounds and is believed to have very large bounds.

Motivated by this problem, we give another (relatively short and simple) proof of this result without using the machinery of Graph Minor Theory. Hence we give an explicit bound for $c_H$, which is an exponential function of a polynomial in $|H|$.


Sudeshna Kolay

Bijective metric embedding into bounded treewidth graphs
The metric embedding problem takes as input two metric spaces (X,D_X) and (Y,D_Y) and a positive integer d. The objective is to determine whether there is an embedding F: X \rightarrow Y such that for each pair x,y\in X, D_X(x,y) \leq D_Y(F(x),F(y)) \leq dD_X(x,y). In other words, for each pair of elements in X the mapped shortest distance is not contracted, nor stretched by more than d times the shortest distance in D_X. The embedding F is then said to have distortion d. In this talk, we will concentrate on finding d-distortion embeddings between two unweighted graph metric spaces. Moreover, the embedding should be a bijective function and the graph underlying the output unweighted metric (Y,D_Y) has treewidth t, for a universal constant t. For this special case, we look at an FPT algorithm parameterized by the distortion d and the maximum degree \Delta of the graph Y.
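To make the distortion condition concrete, here is a small sketch (invented names, not part of the talk; it assumes both graphs are connected) that verifies whether a given bijection between two unweighted graphs has distortion d:

```python
from collections import deque

def bfs_distances(adj):
    """All-pairs shortest-path distances of a connected unweighted graph via BFS."""
    n = len(adj)
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] is None:
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist

def distortion_ok(adj_x, adj_y, f, d):
    """Check D_X(x,y) <= D_Y(f(x),f(y)) <= d * D_X(x,y) for a bijection f
    given as a list mapping each vertex of X to a vertex of Y."""
    dx, dy = bfs_distances(adj_x), bfs_distances(adj_y)
    return all(dx[x][y] <= dy[f[x]][f[y]] <= d * dx[x][y]
               for x in range(len(adj_x)) for y in range(len(adj_x)) if x != y)

c4 = [[1, 3], [0, 2], [1, 3], [0, 2]]   # cycle 0-1-2-3-0
p4 = [[1], [0, 2], [1, 3], [2]]         # path 0-1-2-3
```

For example, mapping the 4-cycle onto the 4-vertex path by the identity has distortion 3: the cycle edge between the path’s endpoints is stretched to distance 3, and no pair is contracted.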


Arie Koster (Aachen, Germany)

Graphs with low treewidth
In this talk, we discuss two approaches for determining graphs with low treewidth. On the one hand, we evaluate the performance of reduction rules for graphs of treewidth at most four. On the other hand, we discuss preliminary results on the question “how many edges have to be removed from a graph such that the remaining graph has low treewidth?”.


Jesper Nederlof (TU Eindhoven)

Understanding the algorithmic value of graph decompositions via matrix rank (a survey)
We survey a number of recent results that relate the fine-grained complexity of several problems parameterized by (path/tree/cut)-width to the rank of certain matrices.


Yoshio Okamoto (The University of Electro-Communications, Japan)

Balanced Line Separators of Unit Disk Graphs
We prove a geometric version of the graph separator theorem for the unit disk intersection graph:
for any set of n unit disks in the plane there exists a line l such that l intersects at most O(\sqrt{(m+n) log n}) disks and each of the halfplanes determined by l contains at most 2n/3 unit disks from the set, where m is the number of intersecting pairs of disks.
We also show that an axis-parallel line intersecting O(\sqrt{m+n}) disks exists, but each halfplane may contain up to 4n/5 disks.
We give an almost tight lower bound (up to sublogarithmic factors) for our approach, and also show that no line-separator of sublinear size in n exists when we look at disks of arbitrary radii, even when m=0.
Proofs are constructive and suggest simple algorithms that run in linear time.
Experimental evaluation has also been conducted, which shows that for random instances our method outperforms the method by Fox and Pach (whose separator has size O(\sqrt{m})).


Astrid Pieterse (TU Eindhoven)

Polynomial kernels for hitting forbidden minors using constant treedepth modulators
We investigate polynomial-time preprocessing for the problem of hitting forbidden minors in a graph, using the framework of kernelization. For a fixed finite set of graphs F, the F-Deletion problem is the following: given a graph G and integer k, is it possible to delete k vertices from G to ensure the resulting graph does not contain any graph from F as a minor? Earlier work by Fomin, Lokshtanov, Misra, and Saurabh [FOCS’12] showed that when F contains a planar graph, an instance (G,k) can be reduced in polynomial time to an equivalent one of size k^{O(1)}. In this work we focus on structural measures of the complexity of an instance, with the aim of giving nontrivial preprocessing guarantees for instances whose solutions are large. Motivated by several impossibility results, we parameterize the F-Deletion problem by the size of a vertex modulator whose removal results in a graph of constant treedepth η.
We prove that for each set F of connected graphs and constant η, the F-Deletion problem parameterized by the size of a treedepth-η modulator has a polynomial kernel. Our kernelization is fully explicit and does not depend on protrusion reduction or well-quasi-ordering, which are sources of algorithmic non-constructivity in earlier works on F-Deletion. Our main technical contribution is to analyze how models of a forbidden minor in a graph G with modulator X interact with the various connected components of G-X. By bounding the number of different types of behavior that can occur by a polynomial in |X|, we obtain a polynomial kernel using a recursive preprocessing strategy. Our results extend earlier work for specific instances of F-Deletion such as Vertex Cover and Feedback Vertex Set. They also generalize earlier preprocessing results for F-Deletion parameterized by a vertex cover, which is a treedepth-one modulator.


Johan van Rooij (Utrecht University)

Fast joins on tree decompositions revisited: state changes with integrated Fourier and Möbius transforms
We consider several known techniques for fast joins on tree decompositions for the case where joins can be expressed as a table of state-combinations.
In this setting, dynamic programming tables are indexed by a state-colouring, representing the state of a vertex in a corresponding partial solution.
The table of state-combinations then defines a join operation by specifying the resulting state when two colours are combined on a vertex, one from the partial solution of the left child node and one from the partial solution of the right child node.
That is, we consider:
– Generalised subset convolution using state changes (van Rooij, Bodlaender, Rossmanith, ESA 2009).
– Using fast Fourier transforms (Cygan and Pilipczuk, Exact and Approximate Bandwidth, ICALP 2009).
– Rank-k Fourier transforms (Cygan et al., Solving connectivity problems parameterized by treewidth in single exponential time, FOCS 2011; deep in the appendix of the arXiv version of that paper).
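For orientation, the baseline these techniques improve upon can be sketched as a naive join that enumerates all pairs of table entries (a toy counting-variant illustration with invented names, not the speaker’s algorithm):

```python
def naive_join(left, right, combine):
    """Join two DP tables indexed by state-colourings (tuples over {0..r-1}).
    combine[(a, b)] gives the resulting state when colours a and b meet on a
    vertex; a missing entry means the combination is forbidden. This runs
    over all r^{2k} pairs of entries; the transform-based joins in the talk
    avoid this blow-up via pointwise products in a transformed domain."""
    result = {}
    for cl, vl in left.items():
        for cr, vr in right.items():
            states = [combine.get((a, b)) for a, b in zip(cl, cr)]
            if None in states:
                continue  # some vertex received a forbidden colour combination
            key = tuple(states)
            result[key] = result.get(key, 0) + vl * vr  # counting variant
    return result

# Toy example with Independent Set-style states on 2 vertices:
# 0 = "not in solution", 1 = "in solution"; a vertex may be 'in'
# on at most one side, so (1, 1) is forbidden.
combine = {(0, 0): 0, (0, 1): 1, (1, 0): 1}
left = {(0, 0): 1, (1, 0): 1}
right = {(0, 0): 1, (0, 1): 1}
```

The point of the transforms surveyed above is to replace this pairwise enumeration with componentwise operations after a change of basis, which is where the polynomial-factor savings arise.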

Taking the merits of these three approaches in a single algorithm, we obtain improvements in the polynomial factors of several join operations, including those for the [\sigma,\rho]-domination problems.
That is:
– We show that the join operation for the decision version of a [\sigma,\rho]-domination problem involving r-states can be computed in O(r^{k+1} k^2 log(r)) time, improving the previously fastest O(r^{k+1} (rk)^{2(r-2)}) algorithm (van Rooij, Bodlaender, Rossmanith, ESA 2009), saving many polynomial factors in r and k.
Consequently we also improve the algorithms for the maximisation, minimisation and counting variants for these problems.
– We improve the running time of the generalised convolution variant of Cygan and Pilipczuk (Exact and Approximate Bandwidth, ICALP 2009) from O(r^{k+2} k^3 log(r)) to O(r^{k+2} k^2 log(r)), saving a linear factor in the treewidth k.
In all our results, we assume a word-RAM model with word size O(k log(r)), as this is required to address tables of size O(r^k).
We use Fourier and Möbius transforms throughout this work, all based on modular arithmetic, to avoid effects of (or analysis of) rounding errors compared to working in the complex plane.

Finally, time permitting, we consider when we can, and when we cannot, create a join from another efficiently computable join in the following way.
Consider the table of state-combinations for the efficiently computable join.
We ask when a join can be created by removing entries (we call this filtering) from this table of state-combinations.
For example: one can obtain the fast subset convolution algorithm by filtering the ‘cover twice’ entry from the covering product.
We give a general linear algebra approach that characterises which joins can and cannot be filtered from another join in this way.


Hisao Tamaki (Meiji University, Japan)

Listing minimal separators for treewidth computation
We are interested in practically efficient algorithms for treewidth computation:
given a graph G and a positive integer k, we are to decide if the treewidth of G is at most k and, if so, construct a tree-decomposition of that width.
We revisit the standard two-stage approach as an alternative to the positive-instance-driven (PID) dynamic programming I introduced earlier.
In the first stage, we list all minimal separators of G with cardinality at most k.
In the second stage, we decide the “feasibility” of each connected set C with neighborhood being a minimal separator. Here, a connected set C with N(C) being a minimal separator is defined to be feasible if |N(C)| <= k and there is a tree-decomposition of G[C \cup N(C)] of width at most k which has a bag containing N(C).

For the second stage, I have designed a variant of the dynamic programming algorithm due to Bouchitté and Todinca which, unlike their original formulation, does not use an explicit list of potential maximal cliques.
This variant performs extremely well. For example, a previously unsolved benchmark instance “myciel7” (191 vertices, 2060 edges, treewidth 66) can be solved in 20 seconds on a typical desktop if the list of all minimal separators of cardinality at most 66 is provided (there are 316,296 of them). Note that the PID algorithm takes about 7 minutes to solve “myciel6”, from the same family, with 96 vertices.

My work on listing minimal separators is in progress and I hope to provide detailed experimental results in the talk.


Tom van der Zanden (Utrecht University)

Efficiently Computing the Shapley Value of Connectivity Games in Low-Treewidth Graphs
Game-theoretic centrality measures are a powerful tool to identify key players in covert networks (that model, e.g., the interactions between suspected terrorists or criminals). Unfortunately, such measures are often NP-hard to compute and thus intractable, even for small graphs. We show that, for three connectivity games, the Shapley Value can be efficiently computed if the underlying graph has low treewidth. Using this method, we are able to compute the Shapley Value for several graphs for which this was previously infeasible (including, notably, the 69-vertex graph of the terrorists involved in the 9-11 attacks studied in previous work on Shapley Value-based centrality).
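For contrast with the treewidth-based method, the following sketch (invented names, not the talk’s algorithm) computes Shapley values by brute-force enumeration of all orderings; this factorial blow-up is exactly what exploiting low treewidth avoids:

```python
from itertools import permutations

def shapley_exact(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all |players|! orderings. Tractable only for tiny games."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for perm in perms:
        coalition = set()
        prev = value(frozenset(coalition))
        for p in perm:
            coalition.add(p)
            cur = value(frozenset(coalition))
            phi[p] += cur - prev  # marginal contribution of p in this ordering
            prev = cur
    return {p: s / len(perms) for p, s in phi.items()}

# Toy connectivity game on the path 0-1-2: a coalition has value 1
# iff it induces a connected subgraph on at least two vertices.
def path_value(s):
    return 1.0 if s in ({0, 1}, {1, 2}, {0, 1, 2}) else 0.0
```

On this toy game the middle vertex, which every larger connected coalition must use, receives Shapley value 2/3, while the endpoints get 1/6 each, illustrating how connectivity games single out structurally central players.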

Tom van der Zanden (Utrecht University)

Computing Treewidth on the GPU
We present a parallel algorithm for computing the treewidth of a graph on a GPU. We implement this algorithm in OpenCL, and experimentally evaluate its performance. Our algorithm is based on an O*(2^n)-time algorithm that explores the elimination orderings of the graph using a Held-Karp-like dynamic programming approach. We use Bloom filters to detect duplicate solutions.
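The elimination-ordering dynamic program can be sketched sequentially as follows (a minimal CPU-side Python illustration under invented names, without the GPU, OpenCL, or Bloom-filter machinery): over subsets S of already-eliminated vertices, TW(S) is the minimum over the last-eliminated vertex v of the maximum of TW(S - v) and the number of vertices outside S connected to v through S.

```python
from functools import lru_cache

def treewidth(n, edges):
    """Exact treewidth via the O*(2^n) Held-Karp-style DP over
    subsets of eliminated vertices, encoded as bitmasks."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def q(s_mask, v):
        # Count vertices outside s_mask+{v} reachable from v through s_mask.
        seen, stack, out = {v}, [v], set()
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in seen:
                    continue
                seen.add(w)
                if s_mask >> w & 1:
                    stack.append(w)  # eliminated vertex: keep exploring through it
                else:
                    out.add(w)       # reachable vertex outside the prefix
        return len(out)

    @lru_cache(maxsize=None)
    def tw(s_mask):
        if s_mask == 0:
            return -1  # empty elimination prefix
        return min(max(tw(s_mask & ~(1 << v)), q(s_mask & ~(1 << v), v))
                   for v in range(n) if s_mask >> v & 1)

    return tw((1 << n) - 1)
```

The GPU algorithm of the talk parallelizes the exploration of these subsets and uses Bloom filters instead of an exact table to discard duplicates.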

GPU programming presents unique challenges and constraints, such as constraints on the use of memory and the need to limit branch divergence. We experiment with various optimizations to see if it is possible to work around these issues. We achieve a very large speed up (up to 77x) compared to running the same algorithm on the CPU.



This workshop is by invitation only. If you are not on the invitation list and would like to participate, please send an email to Hans Bodlaender.


Practical Information



Eindhoven, Netherlands