Workshop on

Algorithms in Complex Systems

September 24-26, 2007

EURANDOM, Eindhoven, The Netherlands

ABSTRACTS


Jeremie Bigot (Université Paul Sabatier)

Homeomorphic smoothing splines: a new tool for monotonizing an unconstrained estimator in nonparametric regression

In this talk, we focus on nonparametric estimation of a monotone regression function using general spline smoothing techniques. We propose to construct a new class of monotone splines using tools that have recently been developed in the context of image warping to compute smooth and bijective deformations. Our idea is to adapt these tools to construct a monotone spline that can be used to monotonize any unconstrained estimator of a regression function. Under mild conditions, this monotone smoother inherits the convergence properties of the unconstrained estimator. Our estimator is defined as the solution of an ordinary differential equation governed by an appropriate time-dependent vector field depending on the unconstrained estimate. Unlike some classical spline smoothing techniques under shape constraints, this method does not require quadratic programming under linear constraints and therefore has a low computational cost. We illustrate its finite sample performance via a simulation study, and we also compare it with some other constrained estimators. An illustration with real data is given.
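
As a hedged sketch of the flow construction described above (the particular vector field built from the unconstrained estimate is not specified in the abstract), the time-one map of an ODE flow is automatically monotone in one dimension:

    \[
      \frac{\partial \phi(x,t)}{\partial t} \;=\; v\bigl(t,\phi(x,t)\bigr),
      \qquad \phi(x,0) = x, \qquad
      \hat m_{\mathrm{mon}}(x) \;:=\; \phi(x,1).
    \]
    % If v(t, .) is Lipschitz in space, trajectories of the ODE cannot cross,
    % so x -> phi(x,1) is strictly increasing (a homeomorphism onto its image).
    % The abstract indicates that v is built from the unconstrained estimate,
    % so that phi(.,1) monotonizes it; the precise construction is given in
    % the talk, not here.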


Gilles Blanchard (Fraunhofer FIRST.IDA)
Joint work with: Francois Fleuret, Etienne Roquain

Adaptive procedures for FDR control in multiple testing

Multiple testing is a classical statistical topic that has enjoyed a tremendous surge of interest in the past ten years, due to the growing range of applications that demand powerful and reliable procedures in this area. For example, in bioinformatics it is often the case that multiple testing procedures are needed to process very high-dimensional data for which only a small number of sample points is available.

In their seminal 1995 work, Benjamini and Hochberg first introduced the false discovery rate (FDR), a notion of type I error control that is particularly well suited to screening processes where a very large number of hypotheses has to be tested. It has since been recognized as a de facto standard.

We first review existing so-called "step-up" testing procedures with FDR control valid under several types of dependency assumptions on the joint test statistics, and show that we can recover (and extend) them by considering a very simple set-output point of view along with what we call a "self-consistency condition", which is sufficient to ensure FDR control. We then proceed to consider adaptive procedures, where estimating the total proportion of true null hypotheses can lead to improved power. In this regard we introduce an algorithm that is almost always more powerful than an adaptive procedure proposed by Benjamini, Krieger and Yekutieli (2006).
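
For readers unfamiliar with the baseline, the following is a minimal sketch of the classical Benjamini-Hochberg step-up procedure at a target FDR level alpha; the self-consistent and adaptive procedures discussed above are not reproduced here.

    import numpy as np

    def benjamini_hochberg(pvalues, alpha=0.05):
        """Classical BH step-up: reject the k smallest p-values, where k is the
        largest index with p_(k) <= k * alpha / m (and k = 0 if no such index)."""
        p = np.asarray(pvalues, dtype=float)
        m = p.size
        order = np.argsort(p)
        thresholds = alpha * np.arange(1, m + 1) / m
        below = np.nonzero(p[order] <= thresholds)[0]
        rejected = np.zeros(m, dtype=bool)
        if below.size > 0:
            k = below[-1] + 1          # number of rejections
            rejected[order[:k]] = True
        return rejected

    # Example: 5 strong effects hidden among 100 hypotheses.
    rng = np.random.default_rng(0)
    pvals = np.concatenate([rng.uniform(size=95), rng.uniform(0, 1e-4, size=5)])
    print(benjamini_hochberg(pvals, alpha=0.05).sum(), "rejections")

Under independence (or positive dependence) of the test statistics, this procedure controls the FDR at level alpha times the proportion of true null hypotheses, which is the slack the adaptive procedures above try to exploit.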


Bernhard Burgeth (Saarland University)

Singular Diffusion Equations: Minimally Stochastic Solution Schemes

Total variation (TV) and balanced forward-backward (BFB) diffusion are popular examples of singular diffusion processes: finite extinction time, the experimentally observed tendency to create piecewise constant regions, and the absence of parameters make them very interesting image processing tools. However, their appropriate numerical treatment is still a challenge. In this contribution a minimally stochastic approach to the underlying singular equations is presented. It relies on analytical solutions for two-pixel signals and on stochastic rounding. This introduces regularisation via integer arithmetic and does not require any limits on the diffusivity. Experiments demonstrate the favourable performance of the proposed probabilistic method.
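
As a hedged illustration of the two ingredients named above (the closed-form two-pixel solution and stochastic rounding), here is one plausible reading for the TV case; the actual scheme of the talk may differ in its details.

    import numpy as np

    def two_pixel_tv(u, v, t):
        """Closed-form two-pixel TV flow after time t: the two values move
        towards each other at unit speed and merge at their mean after time
        |u - v| / 2 (finite extinction time)."""
        d = v - u
        if t >= abs(d) / 2.0:
            mean = 0.5 * (u + v)
            return mean, mean
        s = np.sign(d)
        return u + s * t, v - s * t

    def stochastic_round(x, rng):
        """Round to a neighbouring integer with probability given by the
        fractional part, so that the rounding is unbiased on average."""
        lo = np.floor(x)
        return lo + (rng.random() < (x - lo))

    # One regularised step on an integer-valued pixel pair.
    rng = np.random.default_rng(1)
    a, b = two_pixel_tv(10, 17, t=2.0)
    print(stochastic_round(a, rng), stochastic_round(b, rng))

Keeping the evolving values on an integer grid via unbiased rounding is, as the abstract indicates, what regularises the scheme through integer arithmetic without bounding the diffusivity.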


Laurent Demaret (IBB, GSF)
Joint work with Mattia Fedrigo, Felix Friedrich and Hartmut Führ

Wedgelet Partitions and Image Processing

In many applications of image processing it is crucial to have efficient tools at one's disposal for the extraction, analysis and representation of the geometric content of natural images. The latter can be modelled by classes of bivariate functions that are regular on a finite number of regions separated by smooth boundaries.

It is by now a well-established fact that the usual two-dimensional tensor product wavelet bases are not optimal for approximating such classes. In the last ten years, several methods have been suggested as a remedy. Among them, wedgelet representations over quadtree structures are a contour-based approach that allows an efficient digital implementation while capturing primarily the geometric features of natural images. We discuss some algorithmic aspects due to the discrete nature of the method, leading to a fast computation of optimal solutions. As a possible application we present a new scheme for digital image compression based on these methods. The main ingredient in the design of an efficient coding scheme is to take into account the spatial redundancies between neighbouring atoms of the representation, relative to the properties of the target regularity class.
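
The sketch below is a heavily simplified, hypothetical stand-in for the wedgelet optimisation: it uses piecewise constant leaves instead of genuine wedgelet atoms (which additionally split each square by a straight edge) and minimises squared error plus a penalty gamma per leaf by the usual bottom-up quadtree recursion.

    import numpy as np

    def best_quadtree(img, gamma):
        """Minimise  sum of squared errors + gamma * (number of leaves)  over all
        dyadic quadtree partitions of a square image with power-of-two side.
        Returns (cost, tree), where tree is ('leaf', mean) or ('split', children)."""
        n = img.shape[0]
        leaf_cost = float(np.sum((img - img.mean()) ** 2)) + gamma
        if n == 1:
            return leaf_cost, ('leaf', float(img.mean()))
        h = n // 2
        children = [best_quadtree(img[i:i + h, j:j + h], gamma)
                    for i in (0, h) for j in (0, h)]
        split_cost = sum(c for c, _ in children)
        if split_cost < leaf_cost:
            return split_cost, ('split', [t for _, t in children])
        return leaf_cost, ('leaf', float(img.mean()))

    # Toy example: a 4x4 image made of two constant regions.
    img = np.zeros((4, 4)); img[:, 2:] = 1.0
    cost, tree = best_quadtree(img, gamma=0.1)
    print(cost, tree)

Replacing the leaf model by the best wedge split of each square, and coding the resulting tree while exploiting redundancies between neighbouring atoms, is roughly the direction of the compression scheme mentioned above.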


Holger Dette (Ruhr-Universität Bochum)

A simple nonparametric estimator of a monotone regression function


Roberto Fernández (CNRS-University Rouen)
Joint work with L.R. Fontes and E.L. Neves (U. São Paulo)

An alternative modeling for biological signaling networks

Biological signaling networks transmit and process extra-cellular information, triggering complex transformations that lead to different cellular responses. The overall cellular behavior is well captured by differential equations; the challenge is to produce mathematically tractable microscopic models leading to these equations. The usual modeling approach, based on chemical kinetics, is hampered by the large number of assumptions needed. As an alternative, we propose a spin-flip dynamics defined by asymmetric mean-field interactions. This dynamics yields density-profile processes whose trajectories converge almost surely to the solutions of dynamical systems with possibly complex behavior.
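
The following is a generic, hypothetical illustration of the mechanism (not the interactions of the talk): N binary units flip at rates that depend only on the current empirical density m, and for large N the density trajectory tracks the solution of an ordinary differential equation.

    import numpy as np

    # Hypothetical mean-field rates: switch on at rate b(m), off at rate d(m).
    b = lambda m: 0.5 + m
    d = lambda m: 1.0

    def simulate(N=5000, T=5.0, dt=0.001, seed=0):
        """Discrete-time approximation of the spin-flip dynamics; returns the
        trajectory of the empirical density m(t)."""
        rng = np.random.default_rng(seed)
        x = np.zeros(N, dtype=bool)
        traj = []
        for _ in range(int(T / dt)):
            m = x.mean()
            flip_on = (~x) & (rng.random(N) < b(m) * dt)
            flip_off = x & (rng.random(N) < d(m) * dt)
            x = x ^ flip_on ^ flip_off
            traj.append(m)
        return np.array(traj)

    # Euler solution of the limiting ODE  dm/dt = (1 - m) b(m) - m d(m).
    m, ode = 0.0, []
    for _ in range(5000):
        m += 0.001 * ((1 - m) * b(m) - m * d(m))
        ode.append(m)

    print(abs(simulate()[-1] - ode[-1]))   # discrepancy shrinks as N grows

The asymmetric interactions and the almost-sure convergence statement of the talk concern a richer class of models than this toy example.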


Antonio Galves (Universidade de S. Paulo)

Stochastic chains with variable length memory and the algorithm Context

Stochastic chains with variable length memory define an interesting family of stochastic chains of infinite order on a finite alphabet. The idea is that for each past, only a finite suffix of the past, called "context", is enough to predict the next symbol. The set of contexts can be represented by a rooted tree with finite labeled branches. The law of the chain is characterized by its tree of contexts and by an associated family of transition probabilities indexed by the tree.
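
A minimal, hedged illustration of this representation (the tree and the transition probabilities below are hypothetical): to predict the next symbol, one looks up the suffix of the past that belongs to the context tree.

    # Hypothetical context tree over the alphabet {0, 1}: each context (a finite
    # suffix of the past, most recent symbol last) is mapped to the law of the
    # next symbol.  No context is a proper suffix of another one.
    contexts = {
        (1,):   {0: 0.2, 1: 0.8},
        (0, 0): {0: 0.9, 1: 0.1},
        (1, 0): {0: 0.4, 1: 0.6},
    }

    def context_of(past):
        """Return the suffix of `past` that is a context (scanning from the shortest)."""
        for k in range(1, len(past) + 1):
            suffix = tuple(past[-k:])
            if suffix in contexts:
                return suffix
        raise ValueError("past not covered by the context tree")

    past = [0, 1, 1, 0]
    print(context_of(past), contexts[context_of(past)])   # (1, 0) and its next-symbol law

The algorithm Context discussed below estimates both the set of contexts and these transition probabilities from a sample of the chain.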

These models were first introduced in the information theory literature by Rissanen (1983) as a universal tool to perform data compression. Recently, they have been used to model scientific data in areas as different as biology, linguistics and music. Originally called "finite memory sources" or "tree machines", these models became quite popular in the statistics literature under the name of "Variable Length Markov Chains" coined by Bühlmann and Wyner (1999).

In my talk I will present some of the basic ideas, problems and examples of application in the field. I will focus on the algorithm Context, which estimates the tree of contexts and the associated family of transition probabilities defining the chain.


Arne Kovac (University of Bristol)

Shape constraints and algorithms

We consider several problems in the areas of nonparametric regression and image analysis under shape constraints. The task is always to produce simple functions with a small number of local extreme values, while multiresolution criteria ensure good approximation of the data by the fitted functions. These strategies easily lead to minimisation problems that can be very difficult to solve, so the design of efficient algorithms is crucial. One of the problems that we study concerns online data, where fast processing is particularly important.
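
As a hedged illustration of the kind of multiresolution criterion alluded to above (the interval family and the threshold are standard choices, not necessarily those of the talk): a candidate fit is acceptable if its residuals look like noise on every interval simultaneously.

    import numpy as np

    def multiresolution_ok(y, fit, sigma):
        """Accept `fit` if, for every interval [i, j], the absolute sum of the
        residuals is at most sigma * sqrt(2 * log(n)) * sqrt(j - i + 1).
        Brute-force O(n^2) check for clarity."""
        r = np.asarray(y, dtype=float) - np.asarray(fit, dtype=float)
        n = r.size
        base = sigma * np.sqrt(2.0 * np.log(n))
        cs = np.concatenate([[0.0], np.cumsum(r)])
        for i in range(n):
            for j in range(i, n):
                if abs(cs[j + 1] - cs[i]) > base * np.sqrt(j - i + 1):
                    return False
        return True

    rng = np.random.default_rng(2)
    x = np.linspace(0, 3, 200)
    y = np.sin(x) + 0.1 * rng.standard_normal(200)
    print(multiresolution_ok(y, np.sin(x), sigma=0.1),       # residuals are pure noise
          multiresolution_ok(y, np.zeros(200), sigma=0.1))   # fails: systematic residuals

Minimising the number of local extreme values subject to such a constraint is exactly the kind of combinatorial problem for which efficient algorithms are needed.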


Marloes Maathuis (Zürich)

Computation of the MLE for bivariate interval censored data

I will discuss the new R-package 'MLEcens', which computes the nonparametric maximum likelihood estimator (MLE) for the distribution function of bivariate interval censored data. The computation of the MLE consists of two steps: a parameter reduction step and an optimization step. I will discuss algorithms for both steps. I will also illustrate the R-package using several examples.
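
The following is a hedged sketch of the optimisation step only, using the classical EM/self-consistency iteration for the mixing weights; the clique matrix produced by the reduction step is assumed given, and the package itself may use different (and faster) algorithms.

    import numpy as np

    def em_weights(A, n_iter=2000):
        """EM / self-consistency iteration for the masses p_1..p_k placed on the
        regions found in the reduction step.  A[i, j] = 1 if region j is contained
        in the observation rectangle of case i.  Maximises
        sum_i log(sum_j A[i, j] p_j)  over the probability simplex."""
        A = np.asarray(A, dtype=float)
        n, k = A.shape
        p = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            denom = A @ p                        # likelihood contribution per case
            p = p * (A.T @ (1.0 / denom)) / n    # self-consistency update
        return p

    # Toy clique matrix: 4 observation rectangles, 2 candidate regions.
    A = np.array([[1, 0], [1, 0], [1, 1], [0, 1]])
    print(em_weights(A))   # approaches (2/3, 1/3)

The parameter reduction step restricts the support of the MLE to finitely many such regions (the maximal intersections of the observation rectangles), which is what makes the optimisation finite-dimensional.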


Eric Moulines (ENST, Paris)

MCMC, SMC, ... What next?

The Monte Carlo method was initially developed for scientific computing in statistical physics during the early days of computers. Due to the rapid progress in computer technology and the need to handle large datasets and complex systems, the past two decades have witnessed a strong surge of interest in Monte Carlo methods from the scientific community. Researchers ranging from computational biologists to signal & image processing engineers and financial econometricians now view Monte Carlo techniques as essential tools for inference. Besides the popular Markov chain Monte Carlo strategies and their adaptive variants, various sequential Monte Carlo strategies have recently appeared on the scene, resulting in a wealth of novel and effective inferential and optimization tools. In this talk, we will present what we believe to be the "state of the art" in Monte Carlo simulation for inference and will try to identify the next challenges.
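
As a point of reference for the MCMC side (a textbook sketch, not anything specific to this talk), a random-walk Metropolis sampler for a one-dimensional unnormalised target density looks as follows.

    import numpy as np

    def random_walk_metropolis(log_target, x0, n_steps=10000, step=0.5, seed=0):
        """Random-walk Metropolis: propose x + step * N(0,1) and accept with
        probability min(1, target(proposal) / target(current))."""
        rng = np.random.default_rng(seed)
        x, lp = x0, log_target(x0)
        chain = []
        for _ in range(n_steps):
            prop = x + step * rng.standard_normal()
            lp_prop = log_target(prop)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
            chain.append(x)
        return np.array(chain)

    # Sample a standard normal target and inspect the first two moments.
    chain = random_walk_metropolis(lambda x: -0.5 * x * x, x0=0.0)
    print(chain.mean(), chain.var())

Adaptive MCMC tunes quantities such as the proposal step size on the fly, while sequential Monte Carlo propagates a whole population of weighted samples instead of a single chain.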


Klaus-Robert Müller and Mikio Braun (Machine Learning Group, Technical University of Berlin, and Fraunhofer Institut FIRST, Intelligent Data Analysis Group (IDA))
Joint work with Claudia Sanelli and Joachim M. Buhmann

Denoising and Dimension Reduction in Feature Space

The talk presents recent work that interestingly complements our understanding of the VC picture in kernel-based learning.

Our finding is that the relevant information of a supervised learning problem is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved using only linear discriminant functions, but this transformation is also performed in a manner which makes economical use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data for supervised learning problems. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to aid in model selection, and (3) to denoise in feature space in order to yield better classification results.
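
The snippet below is a hedged illustration on a toy problem, not the authors' estimator of the relevant dimension: it probes how much label information the leading kernel PCA components carry by fitting a linear classifier on an increasing number of them.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_moons(n_samples=600, noise=0.15, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Project onto the leading kernel PCA components, then fit a linear classifier.
    for k in (2, 5, 20):
        kpca = KernelPCA(n_components=k, kernel='rbf', gamma=2.0)
        Ztr = kpca.fit_transform(Xtr)
        Zte = kpca.transform(Xte)
        acc = LogisticRegression(max_iter=1000).fit(Ztr, ytr).score(Zte, yte)
        print(k, "components: test accuracy", round(acc, 3))

When the kernel matches the problem, the accuracy curve flattens after a small number of components, which is the kind of behaviour the dimension estimate of the talk makes precise.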

We complement our theoretical findings by reporting on applications of our method to data from gene finding and brain computer interfacing.


Joerg Polzehl (WIAS Berlin)

Structural adaptive smoothing: Images, fMRI and DWI

The talk presents a class of structural adaptive smoothing methods developed at WIAS. The main focus will be on the Propagation-Separation (PS) approach proposed in Polzehl and Spokoiny (2006). The method makes it possible to simultaneously identify regions of homogeneity with respect to a prescribed model (structural assumption) and to use this information to improve local estimates. This is achieved by an iterative procedure. The name Propagation-Separation refers to the two main properties of the algorithms. In case of homogeneity, that is, if the prescribed model holds with the same parameters within a large region, the algorithm essentially delivers a series of nonadaptive estimates with decreasing variance and propagates to the best estimate from this series. Separation means that, as soon as significant differences are detected between the estimates at two design points i and j, observations at j will no longer be used to estimate the parameter at i. We establish some theoretical, non-asymptotic results on properties of the new algorithm. We show how this approach can be adjusted to different imaging modalities, ranging from denoising of grey-value and color images to the analysis of data from functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI) experiments.
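
The following is a schematic, simplified sketch of one propagation-separation iteration in the most basic setting, a local-constant (mean) model on a 1D grid; the kernel shapes, the value of lambda and the bandwidth schedule are assumptions for illustration, not the tuned defaults of the WIAS implementations.

    import numpy as np

    def ps_iteration(y, theta, ni, h, lam, sigma2):
        """One propagation-separation step for a local-constant model in 1D.
        theta: current estimates, ni: current sums of weights.  Weights combine a
        location kernel with a statistical penalty comparing current estimates."""
        n = y.size
        x = np.arange(n)
        new_theta, new_ni = np.empty(n), np.empty(n)
        for i in range(n):
            loc = np.maximum(0.0, 1.0 - (np.abs(x - i) / h) ** 2)
            stat = ni[i] * (theta[i] - theta) ** 2 / (lam * sigma2)
            w = loc * np.maximum(0.0, 1.0 - stat)   # separation: w -> 0 for dissimilar estimates
            new_theta[i] = np.sum(w * y) / np.sum(w)
            new_ni[i] = np.sum(w)
        return new_theta, new_ni

    # Piecewise constant signal with Gaussian noise; the bandwidth grows over iterations.
    rng = np.random.default_rng(3)
    y = np.concatenate([np.zeros(100), np.ones(100)]) + 0.2 * rng.standard_normal(200)
    theta, ni = y.copy(), np.ones(200)
    for h in (2, 4, 8, 16, 32):
        theta, ni = ps_iteration(y, theta, ni, h, lam=4.0, sigma2=0.04)

Within each constant piece the weights stay close to nonadaptive kernel weights (propagation), while across the jump the statistical penalty drives them to zero (separation).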


Martin Welk (Saarland University)

Locally Analytic Schemes for Diffusion Filtering of Images

Nonlinear diffusion filtering has proven its value as a versatile tool for structure-preserving image denoising. Among the most interesting methods of this class are tensor-driven anisotropic diffusion as well as singular isotropic diffusion filters like total variation flow. For different reasons, devising good numerical algorithms for these filters is challenging.

A spatial discretisation transforms nonlinear diffusion partial differential equations into systems of ordinary differential equations. Their investigation yields insights into the properties of diffusion-based algorithms, but it also leads to the design of new algorithms that have favourable stability properties and are at the same time simple to implement. Moreover, interesting links to wavelet-based denoising methods are established in this way.
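
For orientation, here is a generic semi-discretisation of 1D nonlinear diffusion into a system of ordinary differential equations, integrated by a plain explicit Euler step; the diffusivity and parameters are assumptions, and the locally analytic schemes of the talk replace this naive time integration.

    import numpy as np

    def nonlinear_diffusion_1d(u, n_steps=100, tau=0.1, lam=0.1):
        """Semi-discrete 1D nonlinear diffusion
           du_i/dt = g(|u_{i+1}-u_i|)(u_{i+1}-u_i) - g(|u_i-u_{i-1}|)(u_i-u_{i-1})
        with a Perona-Malik type diffusivity g, explicit Euler time stepping and
        no-flux (reflecting) boundary conditions."""
        g = lambda s: 1.0 / (1.0 + (s / lam) ** 2)
        u = np.asarray(u, dtype=float).copy()
        for _ in range(n_steps):
            d = np.diff(u)                                # forward differences
            flux = g(np.abs(d)) * d
            flux = np.concatenate([[0.0], flux, [0.0]])   # no flux across the ends
            u += tau * (flux[1:] - flux[:-1])
        return u

    # Noisy step edge: the jump near index 64 is largely preserved,
    # while the small fluctuations on both sides are flattened.
    rng = np.random.default_rng(4)
    signal = np.concatenate([np.zeros(64), np.ones(64)]) + 0.02 * rng.standard_normal(128)
    print(np.round(nonlinear_diffusion_1d(signal)[60:68], 2))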

The talk focusses on the construction and properties of locally (semi-)analytic schemes for nonlinear isotropic and anisotropic diffusion on 2D images, with extensions to the 3D case.


Gerhard Winkler (GSF/IBB)

Complexity Penalised M-Estimation: Fast Exact Computation

We sketch the ideas behind very fast algorithms for the exact computation of estimators for time series, based on complexity-penalised log-likelihood or M-functions. The algorithms apply to a wide range of functionals with morphological constraints, in particular to Potts or Blake-Zisserman functionals. The latter are the discrete versions of the celebrated Mumford-Shah functionals. All such functionals contain model parameters. Our algorithms apply to the optimisation not only for each parameter separately, but even for all parameters simultaneously. This allows the models to be examined in the sense of a family approach. The algorithms are accompanied by a series of illustrative examples from molecular biology.
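
For the simplest member of this family, the following is a hedged sketch of the exact dynamic programme for the one-dimensional Potts functional with squared loss and a penalty gamma per segment; the simultaneous treatment of all penalty values mentioned above is not reproduced here.

    import numpy as np

    def potts_1d(y, gamma):
        """Exact minimiser of  sum of squared errors + gamma * (number of segments)
        over all piecewise constant fits of y, by O(n^2) dynamic programming."""
        y = np.asarray(y, dtype=float)
        n = y.size
        s1 = np.concatenate([[0.0], np.cumsum(y)])        # prefix sums
        s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])   # prefix sums of squares
        def seg_cost(l, r):   # squared error of the best constant on y[l:r]
            return s2[r] - s2[l] - (s1[r] - s1[l]) ** 2 / (r - l)
        B = np.full(n + 1, np.inf); B[0] = 0.0
        last = np.zeros(n + 1, dtype=int)
        for r in range(1, n + 1):
            for l in range(r):
                c = B[l] + gamma + seg_cost(l, r)
                if c < B[r]:
                    B[r], last[r] = c, l
        bounds, r = [], n          # backtrack the optimal segment boundaries
        while r > 0:
            bounds.append((last[r], r)); r = last[r]
        return B[n], bounds[::-1]

    rng = np.random.default_rng(5)
    y = np.concatenate([np.zeros(50), 2 * np.ones(50)]) + 0.3 * rng.standard_normal(100)
    print(potts_1d(y, gamma=4.0)[1])   # typically recovers the two true segments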

We embed the report in some general remarks concerning optimisation for complex systems. The case of two-dimensional data is covered by the lecture of L. Demaret at this meeting.