"Image Analysis and Inverse Problems"
December 11-13, 2006
EURANDOM, Eindhoven, The Netherlands
Frank Bauer (Fuzzy Logic Laboratorium Linz-Hagenberg, University of Linz)
Comparing Different Methods for Choosing the Regularization Parameter
In order to solve linear ill-posed problems one needs to stabilize, i.e. regularize, them. This means balancing a priori knowledge about the solution which one does not really possess (mainly that the solution is not too big) against the instabilities caused by the noise, of which one likewise has only a rough idea. Methods for solving such problems include Spectral Cut-Off and Tikhonov regularization, which require the choice of a so-called regularization parameter. Depending on the chosen underlying noise model, quite a number of different parameter choice methods have been developed in recent years. This talk will present some of them (old and new ones) in an algorithmic way and give a thorough evaluation of their quality and numerical stability.
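As a concrete illustration of one classical parameter choice rule among those compared, here is a small sketch of Tikhonov regularization with Morozov's discrepancy principle on a toy diagonal problem. The operator, noise level and grid of candidate alpha values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularised solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_principle(A, y, delta, alphas):
    """Morozov's discrepancy principle: the smallest alpha on the grid whose
    residual ||A x_alpha - y|| reaches the noise level delta."""
    for alpha in sorted(alphas):
        if np.linalg.norm(A @ tikhonov(A, y, alpha) - y) >= delta:
            return alpha
    return max(alphas)

# Diagonal toy problem with polynomially decaying singular values.
rng = np.random.default_rng(0)
k = np.arange(1, 51)
A = np.diag(1.0 / k**2)
x_true = 1.0 / k
noise = 1e-3 * rng.standard_normal(50)
y = A @ x_true + noise
delta = np.linalg.norm(noise)

alpha = discrepancy_principle(A, y, delta, np.logspace(-8, 0, 60))
x_reg = tikhonov(A, y, alpha)
```

The rule returns the smallest alpha whose residual reaches the noise level, trading fit to the noisy data against stability of the reconstruction.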
Bernhard Burgeth (Saarland University)
Joint work with S. Didas, J. Weickert, and L. Florack
Processing of Matrix Fields with Nonlinear PDEs: A Generic Framework
Matrix-valued functions, so-called matrix fields, are encountered in civil engineering to describe the anisotropic behaviour of physical quantities. Stress and diffusion tensors are important examples. The output of diffusion tensor magnetic resonance imaging (DT-MRI) consists of fields of symmetric 3x3 matrices. In the medical sciences this image acquisition technique has become an indispensable diagnostic tool. Evidently there is an increasing demand for image processing tools for the filtering and analysis of such matrix-valued functions. In the standard scalar setting, nonlinear parabolic partial differential equations describing diffusion processes are employed to filter and to denoise greyvalue images. The most prominent examples are the PDEs of Perona-Malik, of TV-diffusion, and the morphological PDEs of dilation and erosion. In this talk we introduce a generic framework that allows us to derive the matrix-valued counterparts of such PDEs and demonstrate their usefulness in processing tensor fields. We will present numerical schemes that solve these equations successfully in the matrix-valued setting. Numerical experiments on both synthetic and real-world data substantiate the effectiveness of our matrix-valued nonlinear diffusion filters.
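For reference, the scalar baseline that the framework generalizes: an explicit finite-difference scheme for Perona-Malik diffusion on a greyvalue image. The time step, contrast parameter and the periodic boundary treatment are illustrative choices, not those of the talk.

```python
import numpy as np

def perona_malik(u, n_steps=20, tau=0.2, lam=0.1):
    """Explicit scheme for Perona-Malik diffusion u_t = div( g(|grad u|) grad u )
    with diffusivity g(s) = exp(-(s/lam)^2); periodic boundaries via np.roll."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / lam) ** 2)
    for _ in range(n_steps):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # small differences diffuse (g ~ 1); large edge differences are kept (g ~ 0)
        u += tau * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy step image this smooths the flat regions while preserving the edge, which is the behaviour the matrix-valued counterparts are designed to inherit.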
Remco Duits (Technische Universiteit Eindhoven)
Contour Enhancement and Completion via Left Invariant Second Order Stochastic Evolution Equations on the 2D-Euclidean Motion Group
Given an image f we construct a local orientation score U_f, which is a complex-valued function on the Euclidean motion group SE(2) (the group of translations and rotations of the plane R^2). The transformation f -> U_f is a wavelet transform constructed from an oriented wavelet and a representation of SE(2), which boils down to convolutions with the oriented wavelet rotated over multiple angles. Under conditions on the wavelet the transform is unitary, inducing well-posed image reconstruction by the adjoint. This allows us to relate operators on images to operators on (single-scale) orientation scores in a robust way. To obtain Euclidean-invariant image processing, the operator on the orientation score must be left invariant. Therefore we consider left-invariant scale spaces on the Euclidean motion group generated by a quadratic form on left-invariant vector fields of SE(2). These scale spaces correspond to well-known stochastic processes on SE(2) for contour completion and contour enhancement. The linear scale spaces on SE(2) are given by group convolution with the corresponding Green's functions, which were hitherto unknown. We derive the exact Green's functions and suitable approximations, which we use together with the framework of invertible orientation scores for automatic contour enhancement and completion. We also consider non-linear adaptive scale spaces on SE(2) and their practical value in coherence-enhancing diffusion.
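The first step of the construction, computing an orientation score by correlating the image with rotated copies of an oriented wavelet, can be sketched as follows. The Gabor-like wavelet and its parameters here are stand-in assumptions; the talk relies on specially designed proper wavelets that make the transform unitary and hence invertible.

```python
import numpy as np

def orientation_score(image, n_theta=8, freq=0.2, sigma=3.0):
    """Orientation score U(x, theta): correlate the image with an oriented
    (Gabor-like) wavelet rotated over n_theta angles in [0, pi).
    Returns a complex array of shape (n_theta, ny, nx)."""
    ny, nx = image.shape
    yy, xx = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    F = np.fft.fft2(image)
    scores = np.empty((n_theta, ny, nx), dtype=complex)
    for k in range(n_theta):
        theta = k * np.pi / n_theta
        # rotated coordinates: xr along the line direction, yr across it
        xr = np.cos(theta) * xx + np.sin(theta) * yy
        yr = -np.sin(theta) * xx + np.cos(theta) * yy
        psi = (np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
               * np.exp(1j * 2 * np.pi * freq * yr))  # oscillation across the line
        K = np.fft.fft2(np.fft.ifftshift(psi))
        scores[k] = np.fft.ifft2(F * np.conj(K))      # correlation via FFT
    return scores
```

For a horizontal line, the magnitude of the score is largest at the matching orientation channel, which is exactly what makes the score useful for orientation-selective processing.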
Michael Felsberg (Linköping University)
The Monogenic Framework: Fusing Phase-Based Image Processing, Structure Tensor, and Scale-Space
In many applications phase-based image processing is an interesting alternative to intensity-based algorithms. The basic idea is to move from intensity as the primary source of information to phase as the information-bearing part. This can be compared to phase modulation and, by considering the time derivative instead, to FM modulation. The problem is, however, to estimate phase from an intensity observation, i.e., to demodulate the image. One particularly coherent approach to this problem is the monogenic signal, which estimates the phase from a signal and its Riesz transform. Similar to energy-based 1D demodulation techniques, a 2D energy tensor is defined which allows phase-based image analysis by means of a quadratic operator. The energy tensor contains the well-known structure tensor, which only responds to locally odd signals, as one component. The other component corresponds to an operator responding to even signals only, which in combination with the former allows one to extract the phase from images. Interpreting the Riesz transform as the image flow, one obtains a new linear scale space from the continuity equation, the Poisson scale space. The interesting property of the latter in combination with the Riesz transform, called the monogenic scale space, is the close connection to field theory. This connection allows one to interpret the monogenic scale space as a potential field emerging from a minimal set of sources located in the negative half-space. Thus each image becomes the restriction of this potential field to some plane. The same is true for the phase and the logarithm of the amplitude: they themselves become components of a potential field. As a result of this theory, new image processing algorithms can be developed.
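A minimal sketch of the phase estimation step: the Riesz transform pair computed in the Fourier domain, combined with the signal into the local amplitude and phase of the monogenic signal. Any bandpass or Poisson prefiltering, as in the monogenic scale space, is omitted here.

```python
import numpy as np

def riesz_transform(f):
    """Riesz transform pair (R1 f, R2 f), computed in the Fourier domain
    via the multiplier -i * w_j / |w| (which vanishes at DC)."""
    ny, nx = f.shape
    wy = np.fft.fftfreq(ny)[:, None]
    wx = np.fft.fftfreq(nx)[None, :]
    norm = np.hypot(wx, wy)
    norm[0, 0] = 1.0                       # avoid division by zero at DC
    F = np.fft.fft2(f)
    r1 = np.real(np.fft.ifft2(-1j * wx / norm * F))
    r2 = np.real(np.fft.ifft2(-1j * wy / norm * F))
    return r1, r2

def monogenic_phase_amplitude(f):
    """Local amplitude and phase of the monogenic signal (f, R1 f, R2 f)."""
    r1, r2 = riesz_transform(f)
    q = np.hypot(r1, r2)                   # magnitude of the Riesz part
    amplitude = np.sqrt(f ** 2 + q ** 2)
    phase = np.arctan2(q, f)               # local phase in [0, pi]
    return amplitude, phase
```

For a plane wave the amplitude comes out constant while the phase carries the oscillation, illustrating the split of intensity into an energetic and an information-bearing part.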
Michal Haindl (Institute of Information Theory and Automation, Prague)
Texture analysis and synthesis using MRF models
We will discuss MRF model-based applications for texture segmentation, compression and modelling. The most advanced physically correct representation of real-world materials for virtual reality applications is the Bidirectional Texture Function (BTF), which describes rough texture appearance variations under varying illumination and viewing conditions. Such a function consists of thousands of measurements (images) per material sample. The resulting BTF size rules out direct rendering in graphical applications, and some compression of these huge BTF data spaces is obviously inevitable. We will discuss our MRF-based solution for BTF modelling, which allows an efficient extreme compression with the possibility of fast synthesis implemented directly inside the graphics card. Simultaneously, this approach can be used to reconstruct missing parts of the BTF measurement space. In the second part we will discuss efficient MRF approaches to unsupervised multi-spectral texture / image segmentation.
Bart Janssen (Technische Universiteit Eindhoven)
Linear Image Reconstruction from Multi-Scale Interest Points
Exploration of the information content of multi-scale features present in the scale space of images has led to the development of several reconstruction algorithms. These algorithms aim for a reconstruction, from the typically sparse set of features, that is visually close to the image from which the features were extracted. Degrees of freedom that are not fixed by the constraints are disambiguated with the help of a so-called prior (i.e., a user-defined model). We explore several linear reconstruction schemes on both the bounded and the unbounded domain. As an example we propose specific priors and apply them to reconstruction from the singular points of a scale space image. We also briefly show the application of these methods to conventional image processing tasks.
Arne Kovac (University of Bristol)
Total variation and curves
We discuss the approximation of data from one- and two-dimensional curves using total variation-based techniques. Our aim will be to minimise complexity among all functions which satisfy a criterion for approximation. Complexity will be measured by the number of local extreme values or variational properties of the functions. Our criteria for approximation will be based on a multiscale analysis of the residuals.
Arjan Kuijper (Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences)
PDE-based topological image segmentation
Minimizing the p-norm of the gradient of an image under suitable boundary conditions gives PDEs that are well-known for p = 1, 2, namely Total Variation evolution and diffusion by the Laplacian (also known as Gaussian scale space), respectively. Without fixing p, one obtains a framework related to the p-Laplace equation. The partial differential equation describing the evolution can be simplified using gauge coordinates (directional derivatives), yielding an expression in the two second order gauge derivatives and the norm of the gradient. Ignoring the latter, one obtains a series of PDEs that form a weighted average of the second order derivatives, with Mean Curvature Motion as a specific case.
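In formulas, using gauge coordinates with v along the isophote and w along the gradient, the evolution reads:

```latex
u_t \;=\; \operatorname{div}\!\bigl(|\nabla u|^{p-2}\,\nabla u\bigr)
    \;=\; |\nabla u|^{p-2}\bigl(u_{vv} + (p-1)\,u_{ww}\bigr).
```

Ignoring the factor |\nabla u|^{p-2} yields the weighted average u_t = u_{vv} + (p-1) u_{ww}: for p = 1 this is Mean Curvature Motion (u_t = u_{vv}), and for p = 2 linear diffusion (u_t = u_{vv} + u_{ww} = \Delta u).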
Both methods have the Gaussian scale space in common. Using singularity theory, one can exploit properties of the heat equation - the role of scale - in the full scale space and obtain a framework for topological image segmentation.
Maria Kulikova (INRIA)
Recognition of forms for classification of tree species
We consider the problem of tree species classification from high-resolution aerial images based on shape modelling. We use the notion of shape space proposed by Klassen et al., which provides a shape description invariant to translation, rotation and scaling. Shape features are extracted using a geodesic distance in the shape space. We then perform classification using an SVM approach. We show that the shape descriptors improve the performance of the classifier relative to a classification based only on radiometric and textural descriptors.
Florent Lafarge (INRIA)
3D city modeling using RJMCMC sampler
We present a 3D building reconstruction method from satellite images based on a structural approach. It consists of reconstructing buildings by assembling simple urban structures extracted from a grammar of 3D parametric models. Such an approach is particularly well adapted to data of average quality, such as satellite images. The method rests on a density formulation defined in a Bayesian framework. The configuration which maximizes this density is found using an RJMCMC sampler, which is efficient for recognising multiple parametric objects.
Monika Meise & Rahel Stichtenoth (Universität Duisburg-Essen)
Multiresolution based Smoothing
The white noise multiresolution criterion can be used in various smoothing methods to guarantee the data closeness of an estimate. In this talk we give a short explanation of the MR criterion and show how it can be applied to different one- and two-dimensional smoothing methods.
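A minimal sketch of how such a criterion can be checked, restricted to dyadic intervals and with an illustrative, generous threshold; the actual criterion uses a richer interval collection and a calibrated bound.

```python
import numpy as np

def mr_criterion(residuals, sigma, tau=5.0):
    """Check a multiresolution criterion on all dyadic intervals I:
    |sum_{i in I} r_i| / sqrt(|I|) must stay below sigma * sqrt(tau * log n)
    (an illustrative threshold). Returns True if the residuals look like
    white noise at every inspected scale."""
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    bound = sigma * np.sqrt(tau * np.log(n))
    length = 1
    while length <= n:
        for start in range(0, n - length + 1, length):
            if abs(r[start:start + length].sum()) / np.sqrt(length) > bound:
                return False
        length *= 2
    return True
```

Pure noise residuals pass the check, while a systematic local bias in the fit is detected at some scale, which is how the criterion enforces data closeness of a smooth estimate.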
Patrick Perez (IRISA/INRIA Rennes)
Tracking, mixtures and particles
In this presentation I will discuss sequential state estimation problems where the filtering distribution is a mixture. Two generic problems, which are of particular interest for visual tracking, fall into this category: (1) sequential estimation of multi-modal filtering distributions, and (2) tracking with auxiliary discrete state variables. We shall see that standard filters can easily be extended to handle these problems. In particular, popular sequential Monte Carlo techniques (particle filters) can readily be mobilized. This yields a clustered particle filter in (1) and interacting particle filters with no sampling of the auxiliary variable in (2). In both cases, experimental illustration will be provided in the context of colour-based visual tracking.
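For readers unfamiliar with the baseline, here is a bootstrap particle filter for a scalar random-walk state observed in Gaussian noise; the clustered and interacting variants discussed in the talk elaborate on this propagate-weight-resample loop. The model and parameters are illustrative.

```python
import numpy as np

def particle_filter(observations, n_particles=2000, q=0.5, r=0.5, rng=None):
    """Bootstrap particle filter for the model
        x_t = x_{t-1} + N(0, q^2)   (random-walk state)
        y_t = x_t + N(0, r^2)       (noisy observation).
    Returns the sequence of posterior-mean estimates of x_t."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in observations:
        # propagate particles through the state transition
        particles = particles + q * rng.standard_normal(n_particles)
        # weight by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # multinomial resampling
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

The filter estimate is noticeably closer to the true state than the raw observations, which is the basic benefit that the mixture extensions preserve.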
Håvard Rue (Norwegian University of Science and Technology)
Approximate Bayesian inference for latent MRF and Gaussian models
In this talk I will discuss techniques for approximate Bayesian inference in latent MRF and Gaussian models. The task is to approximate the posterior marginals of the hyperparameters and the latent field. For the MRF case we build our approximations making use of exact results for small lattices (smallest dimension less than 20). For the latent Gaussian case we use integrated nested Laplace approximations. The approximations are very precise and (relatively) quick to compute, and they indicate that inference based on Markov chain Monte Carlo is not needed for such models.
Tomasz Schreiber (Nicolaus Copernicus University)
Joint research with Marie-Colette van Lieshout and Rafal Kluszczynski
Polygonal Markov fields for image segmentation
Polygonal Markov fields, originally introduced by Arak and Surgailis, are continuum ensembles of polygonal contours in the plane, enjoying a number of interesting properties including consistency, isometry invariance, a two-dimensional germ-Markov property and the availability of exact formulae for various numerical characteristics. With their Gibbs-style construction, the polygonal Markov fields share many features with the two-dimensional Ising model and, more generally, with lattice-indexed Markov fields. This seems to make them particularly well suited for image segmentation purposes, where in principle they should do the same work the lattice-indexed Markov fields are designed for, while being completely free of lattice artifacts due to their continuum nature. The major obstacle to these applications has so far been the lack of efficient simulation algorithms. The first Metropolis-Hastings sampler was developed by Clifford and Nicholls. In this talk I will present a completely new simulation algorithm for polygonal fields, introducing moves of a global nature aimed at good mixing rates and based on so-called disagreement loops and a dynamic representation of polygonal fields. The algorithm was developed by the author in cooperation with Marie-Colette van Lieshout and Rafal Kluszczynski, and encouraging results have been obtained in test applications.
Vladimir Spokoiny (Weierstrass Institute and Humboldt University Berlin)
Structural adaptive smoothing by Propagation-Separation methods with applications to medical imaging
The talk discusses a novel method of structure-adaptive image denoising which allows for optimal noise reduction while preserving the important structure in the image. Its performance is illustrated by applications to MRI, CT and PET images and to the analysis of fMRI and dMRI experiments.
Joachim Weickert (Saarland University)
Integrodifferential Equations for Wavelet Shrinkage
The relations between wavelet shrinkage and nonlinear diffusion for discontinuity-preserving signal denoising are fairly well-understood for single-scale wavelet shrinkage, but not for the practically relevant multiscale case. In this talk we show that 1-D multiscale continuous wavelet shrinkage can be linked to novel integrodifferential equations. They differ from nonlinear diffusion filtering and corresponding regularisation methods by the fact that they involve smoothed derivative operators and perform a weighted averaging over all scales. Moreover, by expressing the convolution-based smoothed derivative operators by power series of differential operators, we show that multiscale wavelet shrinkage can also be regarded as averaging over pseudodifferential equations. Joint work with Stephan Didas.
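As a reference point for the shrinkage side of this correspondence, here is a minimal discrete multiscale Haar soft-shrinkage denoiser. It is an illustrative stand-in only; the talk concerns continuous wavelet shrinkage and its integrodifferential formulation.

```python
import numpy as np

def haar_step(s):
    """One level of the orthonormal Haar transform: approximation a, detail d."""
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return a, d

def inverse_haar_step(a, d):
    s = np.empty(2 * len(a))
    s[0::2] = (a + d) / np.sqrt(2.0)
    s[1::2] = (a - d) / np.sqrt(2.0)
    return s

def soft_shrink(x, t):
    """Soft shrinkage of wavelet coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrinkage(signal, levels=3, threshold=0.1):
    """Multiscale shrinkage: transform, soft-threshold the details, invert.
    The signal length must be divisible by 2**levels."""
    a, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(soft_shrink(d, threshold))
    for d in reversed(details):
        a = inverse_haar_step(a, d)
    return a
```

Soft shrinkage applied level by level is the discrete analogue of the weighted averaging over scales that the integrodifferential equations make explicit.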
Roland Wilson (University of Warwick)
A Tool for Modelling Arbitrary Densities: Multiresolution Gaussian Mixture Models
Gaussian mixtures have been in use for many years because they offer a general way of approximating arbitrary probability densities, in particular those with long tails or multiple modes. While this makes them attractive, they are beset by difficulties of identification and computation: how many components are required and how might they be estimated from a given data set? A variety of techniques have been proposed, ranging in complexity from EM algorithms to Reversible Jump MCMC. The approach I am proposing takes a different line of attack, using a recursive greedy algorithm to approximate the density to a given level of accuracy, which can be specified by the user. A simple MCMC algorithm is used at each step of the process to determine whether a given subset of the data should be split, in a Bayesian formalism. The technique has been used in a variety of applications in computer vision, from segmentation to motion modelling and manifold learning. It has been extended to networks of models, in a fashion which shares some features with self-organising maps. In the talk, I shall give an outline of the key features of MGM and discuss the issue of model selection, as well as presenting preliminary results on real data.
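For contrast with the recursive greedy scheme described here, the classical EM baseline for a one-dimensional mixture; the quantile-based initialisation is a hypothetical choice for this toy sketch, not part of the talk.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """Plain EM for a k-component 1-D Gaussian mixture -- the classical
    baseline, not the recursive greedy MGM construction of the talk."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    sigma = np.full(k, x.std() / k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities (the constant 1/sqrt(2 pi), common to all
        # components, cancels after row normalisation)
        dens = pi / sigma * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

On well-separated modes EM recovers the components reliably; the identification difficulties mentioned above arise when the number of components is unknown or the modes overlap.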
Gerhard Winkler (GSF - Forschungszentrum für Umwelt und Gesundheit)
The Family of Mumford-Shah Functionals in One Dimension
We deal with variational approaches to the segmentation of time series or signals into smooth pieces, which allows for sharp breaks in the estimated signals. The estimates are minimal points of special functionals and are intended to be faithful representations of the signals behind the recorded data.
We study the functionals and estimators as functions of the model parameters and the sampling rate. Depending on the parameters, we obtain in continuous time the Mumford-Shah and the complexity-penalised functionals. In discrete time, the functionals are of Blake-Zisserman type.
Our aim is to illustrate that the functionals as well as the estimates depend continuously on the parameters, including the sampling rate. We restrict ourselves to dimension one in order to give a transparent treatment.
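In discrete time with a pure complexity penalty, the minimiser can be computed exactly by dynamic programming; here is a small sketch for the quadratic-cost Potts case, with an illustrative jump penalty gamma.

```python
import numpy as np

def potts_segmentation(y, gamma):
    """Exact minimiser of the discrete complexity-penalised (Potts) functional
        sum_i (y_i - u_i)^2 + gamma * #(jumps of u)
    over piecewise constant u, via O(n^2) dynamic programming."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))  # prefix sums of squares

    def seg_cost(l, r):
        """Squared error of the best constant fit on y[l:r]."""
        return s2[r] - s2[l] - (s[r] - s[l]) ** 2 / (r - l)

    best = np.full(n + 1, np.inf)
    best[0] = -gamma                 # so the first segment carries no jump penalty
    last = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(r):
            c = best[l] + gamma + seg_cost(l, r)
            if c < best[r]:
                best[r], last[r] = c, l
    # backtrack the optimal partition, filling each segment with its mean
    u, r = np.empty(n), n
    while r > 0:
        l = last[r]
        u[l:r] = (s[r] - s[l]) / (r - l)
        r = l
    return u
```

The penalty gamma plays the role of the model parameter whose continuous influence on the estimate is the subject of the talk: small gamma produces many segments, large gamma few.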