11th International Workshop on Rare Event Simulation (RESIM 2016)
RESIM 2016 will be the 11th
workshop in a series of successful events held on the topic of rare event
simulation since 1997.
Indicate your interest in presenting a paper by submitting a title and a 1-page abstract to Mrs. Patty Koorn (firstname.lastname@example.org) by January 17, 2016.
Authors will be notified of the acceptance / rejection
of their paper by February 1, 2016.
All materials that should be included in the compendium (short or full paper, or abstract supplemented by viewgraphs; maximum 12 pages) are due by February 29, 2016.
Tuesday March 29
Wednesday March 30
Thursday March 31
Friday April 1
Rare event problems and simulation in the lognormal left tail
Sums of lognormal random variables come up in a number of problems in, for example, insurance, finance and wireless communication. The distribution of such sums is not explicitly available, so one has to resort to approximations and/or numerical methods. We look at such problems with emphasis on the tails and simulation. The starting point for the right tail is i.i.d. subexponential theory/algorithms, but we also sketch the extension to dependence. The left tail is of particular importance in finance when considering portfolios with long positions, and light-tailed theory suggests using saddlepoint approximations.
Forward Flux Sampling with Incomplete Progress Coordinate Information
It has been observed that adequate statistics for calculations using the Forward Flux Sampling (FFS) splitting method are difficult to achieve when studying nucleation phenomena, due to the presence of hidden variables not captured by the typically used progress coordinates. We demonstrate that the Rosenbluth variant of FFS ameliorates these problems, giving examples from a study of the hard sphere fluid-to-crystal nucleation process. We discuss updates to the open source freshs.org rare event harness system.
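The splitting idea behind FFS can be made concrete on a toy model. The following is a minimal sketch (not the authors' nucleation setup, nor the freshs.org code): the system is a downward-biased ±1 random walk, the rare event is reaching a high level before absorption at 0, and the interface positions and trial counts are arbitrary choices of this illustration:

```python
import random

def run_to_interface(x, lam_next, p_up=0.45):
    """Propagate a biased +-1 walk until it reaches lam_next (success)
    or falls back to the absorbing state 0 (failure)."""
    while 0 < x < lam_next:
        x += 1 if random.random() < p_up else -1
    return x >= lam_next, x

def ffs_probability(interfaces, n_trials=2000, p_up=0.45):
    """Direct FFS: estimate P(reach the last interface before 0), starting
    from the first interface, as a product of interface-crossing fractions."""
    states = [interfaces[0]] * n_trials   # configurations stored at the current interface
    prob = 1.0
    for lam_next in interfaces[1:]:
        successes = []
        for _ in range(n_trials):
            ok, x_end = run_to_interface(random.choice(states), lam_next, p_up=p_up)
            if ok:
                successes.append(x_end)
        if not successes:
            return 0.0                    # all trial runs failed at this interface
        prob *= len(successes) / n_trials
        states = successes
    return prob
```

For this walk the exact answer is the gambler's-ruin probability ((q/p) - 1)/((q/p)^L - 1) with q = 1 - p_up; with p_up = 0.45 and interfaces [1, 4, 7, 10] it is about 0.035, which a run of the sketch should reproduce to within a few percent.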
Estimating power spill probabilities in energy systems with wind generation and storage
The intermittent nature of wind energy makes its integration into the energy system a highly challenging task. However, the challenges can be alleviated by incorporating energy storage devices into the system. Our study focuses on minimizing large power spills in a stand-alone energy system. For this, we assess a single domestic energy system comprising a wind turbine and a battery. The aim is to find the best operation mode of the battery such that the occurrence of large power spills is minimized.
Biased sampling of unbiased dynamical trajectories in complex molecular systems
The transition path sampling (TPS) methodology has enabled the sampling of rare events in complex systems using unbiased dynamical trajectories. The method works by restricting the trajectory space to paths that visit specific stable states. In the last decade the TPS framework has been extended in various ways: on the one hand by transition interface sampling (TIS) (related to the forward flux method), which samples kinetic properties more efficiently; on the other hand by allowing transitions between the multiple (meta)stable states that are often present in complex systems. Multiple state TIS can be recast into a single replica exchange method that, in principle, is able to explore the entire trajectory space starting from a single stable minimum. In this presentation, I will give an overview of the TPS methodology, discuss the link with splitting methods, and illustrate the application of path sampling to rare events in molecular systems, such as protein folding, aggregation and association.
How exponential tilting facilitates posterior simulation in the Bayesian Lasso model
In this talk we propose a perfect simulation method to sample from the posterior density of the Bayesian Lasso model.
Generalized Adaptive Multilevel Splitting: unbiasedness and applications
The Adaptive Multilevel Splitting (AMS) algorithm is a versatile and very efficient approach to simulate rare events, such as transitions between metastable states of stochastic processes. The main ingredient is as follows: n replicas interact (mutation-selection) through the definition of a sequence of (random) levels, computed adaptively using a reaction coordinate mapping. Obtaining reliable quantitative (e.g. reaction rates) and qualitative (e.g. reactive trajectories) information requires rigorous mathematical results. I would like to present the Generalized Adaptive Multilevel Splitting framework recently introduced in a preprint, which encompasses the AMS algorithm for simulating paths of Markov chains; this framework also suggests several natural new algorithmic variants, and motivates new applications.
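To make the mutation-selection mechanism concrete, here is a minimal AMS sketch (a toy illustration of the basic algorithm, not the generalized framework of the talk): replicas are trajectories of a downward-biased ±1 walk started at 1 and absorbed at 0, the reaction coordinate is the running maximum, and at each iteration every replica at the lowest level is killed and rebranched from a survivor:

```python
import random

def trajectory(start=1, p_up=0.45):
    """Sample a +-1 walk from `start` until absorption at 0; return the path."""
    path = [start]
    while path[-1] > 0:
        path.append(path[-1] + (1 if random.random() < p_up else -1))
    return path

def ams_estimate(n_rep=200, level=10, p_up=0.45):
    """AMS estimate of P(max of the path >= level before absorption at 0)."""
    reps = [trajectory(p_up=p_up) for _ in range(n_rep)]
    weight = 1.0
    while True:
        scores = [max(p) for p in reps]
        z = min(scores)                        # current adaptive level
        if z >= level:
            break                              # every replica has reached the target
        killed = [i for i, s in enumerate(scores) if s <= z]
        if len(killed) == n_rep:               # extinction: all replicas stuck at z
            return 0.0
        weight *= 1.0 - len(killed) / n_rep    # selection factor
        survivors = [i for i in range(n_rep) if scores[i] > z]
        for i in killed:
            # Mutation: copy a surviving path up to its first excursion above z,
            # then continue it with fresh randomness.
            src = reps[random.choice(survivors)]
            k = next(j for j, x in enumerate(src) if x > z)
            reps[i] = src[:k + 1] + trajectory(start=src[k], p_up=p_up)[1:]
    return weight
```

For this walk the target probability is again given by the gambler's-ruin formula (about 0.035 for level 10 and p_up = 0.45); note that killing all replicas tied at the minimal level, as above, is one of the tie-handling choices covered by the generalized framework.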
Analysis of a state-independent change of measure for the G|G|1 tandem queue
In (Parekh and Walrand, 1989) a method is introduced to efficiently estimate the probability of a rare event in a single queue or network of queues. The event they consider is that the total number of customers in the system reaches some level N in a busy cycle. Parekh and Walrand introduce a simple change of measure, which is state-independent, in order to estimate this probability efficiently using simulation. However, they do not provide any proof of the efficiency of their method. For the single queue (with multiple servers) it has been shown in (Sadowsky, 1991) that the change of measure as proposed by Parekh and Walrand is asymptotically efficient under some mild conditions.
In this work we study the state-independent change of measure of the G|G|1 tandem queue, along the lines of Parekh and Walrand, and we provide necessary conditions for asymptotic efficiency. To the best of our knowledge, no results on asymptotic efficiency for the G|G|1 tandem queue had been obtained previously. Based on the results for the M|M|1 tandem queue, it is expected that this state-independent change of measure is the only state-independent change of measure for the G|G|1 tandem queue that can possibly be asymptotically efficient. We show that, under some conditions, it is indeed the only exponential state-independent change of measure that can be asymptotically efficient. However, we have also identified conditions for the two-node G|G|1 tandem queue under which the Parekh and Walrand change of measure is still not asymptotically efficient.
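For the single M|M|1 queue, the Parekh-Walrand change of measure amounts to swapping the arrival and service rates. A minimal sketch of the resulting importance-sampling estimator for the busy-cycle overflow probability (a standard textbook construction, shown here for orientation, not the G|G|1 analysis of this talk) reads:

```python
import random

def mm1_overflow_is(lam, mu, N, n_runs=10000):
    """Importance-sampling estimate of the probability that an M|M|1 queue
    (arrival rate lam < service rate mu) reaches level N within a busy cycle,
    simulating under the Parekh-Walrand swapped rates lam <-> mu."""
    p_arr = lam / (lam + mu)        # embedded-chain arrival probability, original measure
    p_arr_is = mu / (lam + mu)      # arrival probability under the swapped measure
    total = 0.0
    for _ in range(n_runs):
        q, lr = 1, 1.0              # a busy cycle starts with one customer
        while 0 < q < N:
            if random.random() < p_arr_is:
                q += 1
                lr *= p_arr / p_arr_is
            else:
                q -= 1
                lr *= (1 - p_arr) / (1 - p_arr_is)
        if q == N:
            total += lr             # cycles that empty first contribute zero
    return total / n_runs
```

For lam = 1, mu = 2 the exact probability of reaching N = 20 is 1/(2^20 - 1) ≈ 9.5e-7; under the swapped measure the likelihood ratio is constant on all successful paths, which is precisely why this change of measure performs so well in the single-queue case.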
Rare event analysis and efficient simulation for a multi-dimensional ruin problem
We look at large deviations of multivariate stochastic processes in continuous time; in particular, we consider the event that both components of a bivariate stochastic process (A_t, B_t), t ≥ 0, ever simultaneously exceed some large level. A leading example is that of two Markov fluid queues driven by the same background process ever reaching some large level u. Exact analysis being prohibitive, we resort to asymptotic techniques and efficient simulation.
On the convergence of Adaptive Multilevel Splitting
Adaptive Multilevel Splitting (AMS) is a simple algorithm to sample rare paths from a Markov process, typically a diffusion. This algorithm is now well known and used in the Molecular Dynamics community. This talk will present recent results about its convergence, which are related to a Fleming-Viot type interacting particle system, built upon a special stochastic process known in the literature as the stochastic wave.
Importance sampling of heavy-tailed stochastic perpetuities
Latent heat and the Fourier law
We present computer simulations run with a stochastic cellular automaton which describes 1D particle systems connected to reservoirs which keep two different densities at the endpoints. We fix the parameters so that there is a phase transition (of the van der Waals type) and observe that if the densities at the boundaries are metastable then, after a transient, the system reaches an apparently stationary regime where the current flows from the reservoir with smaller density to the one with larger density.
Subsolutions for the design and analysis of rare event simulation schemes
The two main methods used to estimate probabilities of rare
events or to approximate expected values that are largely determined by rare
events are importance sampling and splitting. Each of the two methods can be
associated with a function. In the case of splitting the function determines
the splitting thresholds, and in the case of importance sampling it determines a
state-dependent change of measure. In the light-tailed setting, good
qualitative behavior (e.g., sub-exponential growth in the number of particles for
splitting) can be shown to hold if and only if the function is a subsolution of
an associated partial differential equation (a Hamilton-Jacobi-Bellman
equation). Moreover, when the subsolution property holds, the performance of the
associated numerical method can be characterized in terms of the value of the
subsolution at a certain point.
Introduction to Rare-Event Simulation
In this tutorial talk, we first give a brief overview of the difficulties
encountered in rare-event simulation, then summarize the main ideas and methods
that can substantially reduce the computing effort required to obtain reliable estimates.
Dynamical sampling methods with applications
Cross-entropy minimization and importance sampling of rare events
Cross-entropy (CE) minimization is a versatile Monte Carlo method for
combinatorial optimization and sampling of rare events, going back to work by
Reuven Rubinstein and co-workers. I will report on recent algorithmic extensions
of the CE method to diffusions that can be used to design efficient importance
sampling strategies for computing the rare events statistics of equilibrated
systems. The approach is based on a Legendre-type duality between path
functionals of diffusion processes associated with certain sampling and control
problems that can be reformulated in terms of CE minimization. The method will
be illustrated with several numerical examples and discussed along with
algorithmic issues and possible extensions of the method to high-dimensional systems.
Efficient importance sampling for computing credit value adjustment of interest rate portfolios
Prior to the financial crisis of 2008, the credit risk in derivatives was not appropriately accounted for. Since then there has been an influx of value adjustments to derivative pricing, collected under the name of XVA. The counterparty credit risk is this: a bank B trading derivatives with a counterparty C is subject to the default risk of C. If C defaults at time T and the value of the derivative at that time is V(T) > 0 for the bank, then most likely B will not collect the full value of the derivative. The loss is called the Loss Given Default (LGD). On the other hand, if the value is negative, then the defaulting C will terminate the contract and use the money in the default. Therefore, at default, there is an asymmetry in the value, and the loss is LGD*max(V(T),0).
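The positive-part asymmetry in the last formula is easy to check with a toy Monte Carlo in which the exposure V(T) is, purely for illustration, taken standard normal (an assumption of this sketch, not of the paper); the exact answer is then LGD/sqrt(2π):

```python
import math, random

def expected_loss(lgd=0.6, n=200000):
    """Toy Monte Carlo for E[LGD * max(V(T), 0)] with V(T) ~ N(0, 1).
    Exact value: lgd / sqrt(2*pi), since E[max(Z, 0)] = 1/sqrt(2*pi)."""
    acc = 0.0
    for _ in range(n):
        acc += max(random.gauss(0.0, 1.0), 0.0)   # only positive exposure is lost
    return lgd * acc / n
```

With lgd = 0.6 the exact value is about 0.239; a plain Monte Carlo run of this size should land within about half a percent of it.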
Model selection for paleo-climatic time series: stable and fractional noise
Dynamical systems of the reaction-diffusion type with small noise have been instrumental in explaining basic features of the dynamics of paleo-climate data. For instance, a spectral analysis of Greenland ice time series performed at the end of the 1990s, representing average temperatures during the last ice age, suggests an α-stable noise component with α ≈ 1.75. On the other hand, strong memory effects in the dynamics of global average temperatures are attributed to the global cryosphere. We model the time series as a dynamical system perturbed by α-stable and fractional Gaussian noise, and develop an efficient testing method for the best fitting stability index α and Hurst coefficient, respectively. The method is based on the observed power variations of the residuals of the time series. Their asymptotic behavior in the case of α-stable noise is described by α-stable processes, while in the fractional Gaussian case normal asymptotic behavior is observed for suitably renormalized approximations of the quadratic variation.
Splitting for Rare Event Estimation for Problems with Rest Points
Francois Le Gland
Marginalization for Rare Event Simulation in Switching Diffusions
The problem addressed here is estimating the small probability that the continuous component X(t) of a switching diffusion (a special class of Markov process (X(t), θ(t)) with a hybrid continuous/discrete state space) hits a critical region before some terminal time. In full generality, splitting methods provide an alternative to importance sampling, with the interesting feature that trajectories are simulated under the original model, so that not only can the probability of the rare but critical event be estimated, but typical critical trajectories are also exhibited. Special issues are addressed here which are specific to the hybrid nature of the state space.
Rare event simulation related to financial risks: efficient estimation and sensitivity analysis
In this paper, we develop our previously proposed reversible shaking transformation methods to estimate the rare event statistics arising in different financial risk settings which are embedded within a unified framework of isonormal Gaussian process. Namely, we combine splitting methods with both the Interacting Particle System (IPS) technique and ergodic transformations using Parallel-One-Path (POP) estimators. We also propose an adaptive version of the POP method and prove its convergence. We demonstrate the application of our methods in various examples which cover usual semi-martingale stochastic models (not necessarily Markovian) driven by Brownian motion and also models driven by fractional Brownian motion (non semi-martingale) to address various financial risks. Interestingly, owing to the Gaussian process framework, our methods are also able to efficiently handle the important problem of sensitivities of rare event statistics with respect to the model parameters.
On estimation of some rare event probabilities related to Gaussian queues
We consider the probability that the normalized cumulative workload grows at least as fast as the length T of the considered interval in the case of Gaussian input traffic. Such a probability is, in general, not available in explicit form.
Sequential Importance Sampling for Counting Hamiltonian Cycles on Random Graphs
In this paper we consider counting the number of Hamiltonian cycles in random graphs. For fixed graphs such a problem is commonly known as a #P-complete counting problem, as it extends its associated NP-complete decision problem. Randomized approximation schemes can produce accurate estimates; the interesting question is then to obtain these good estimates in fast running times. Here we encounter concepts similar to those in the rare-event simulation literature, namely polynomial running times of the algorithms to obtain a certain performance guarantee. In our paper we propose a sequential importance sampling algorithm that is based on a simple one-step-look-ahead approach (OSLA). In this method, nodes are added one by one at random to the incomplete cycle until either a complete Hamiltonian cycle is obtained, or the procedure gets stuck. We also consider an enhanced extension by calling an oracle that knows after each iteration how to continue to obtain a feasible solution. We call this the n-step-look-ahead method (nSLA). We apply our algorithms to both directed and undirected random graphs.
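The one-step-look-ahead construction can be sketched as follows (a minimal illustration on a fixed graph, not the paper's random-graph experiments): the estimator here counts directed Hamiltonian cycles through a fixed start node, and the importance weight of a completed cycle is the product of the number of admissible choices at each step, i.e. the inverse of its sampling probability:

```python
import random

def osla_estimate(adj, n_samples=10000):
    """One-step-look-ahead SIS estimate of the number of directed Hamiltonian
    cycles through a fixed start node; `adj` maps node -> set of neighbours."""
    nodes = list(adj)
    start = nodes[0]
    total = 0.0
    for _ in range(n_samples):
        path, visited, weight = [start], {start}, 1.0
        while len(path) < len(nodes):
            choices = [v for v in adj[path[-1]] if v not in visited]
            if not choices:          # the partial cycle got stuck
                weight = 0.0
                break
            weight *= len(choices)   # inverse of the uniform choice probability
            path.append(random.choice(choices))
            visited.add(path[-1])
        if weight and start in adj[path[-1]]:
            total += weight          # the cycle closes: count it with its weight
    return total / n_samples
```

On the complete graph K4 every sample closes with weight 3·2·1 = 6, so the estimator returns exactly 6, the number of directed Hamiltonian cycles through a fixed node; on sparser graphs the samples carry varying weights and the estimate is only correct on average.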
Irreversible Langevin Samplers and Variance Reduction: A Large Deviations Approach and Diffusion on Graphs
Monte Carlo methods are very popular methods to sample
from high-dimensional target distributions, which very often are of Gibbs type.
Markov processes that have the target distribution as their invariant measure
are used to approximate the equilibrium dynamics. In this talk, we explore
performance criteria based on the related large deviations theory for random
measures and we focus on the diffusion setting. We find that large deviations
theory can not only adequately characterize the efficiency of the
approximations, but it can also be used as a vehicle to design Markov processes,
whose time average optimally (in the sense of variance reduction) approximates
the quantities of interest. We quantify the effect that added irreversibility
has on the speed of convergence to a target Gibbs measure and on the asymptotic variance of the resulting estimators.
Permutation Monte Carlo: applications, analysis and extension
Permutation Monte Carlo (PMC) is a conditional Monte Carlo method developed in the 90s to compute the probability u that a set of nodes becomes disconnected in a graph with independently failing links. It makes use of latent variables representing the repair times of links and looks at the order in which the links are repaired. The probability u can be computed conditional on this order. The turnip method is a variant removing from consideration at each step the failures that have not yet occurred and have no impact on the system failure. These methods have been proved to give estimators with bounded relative error (BRE), which means that their standard deviation divided by the mean u remains bounded, when the link unreliabilities, and hence u, tend to zero.
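A minimal PMC sketch for two-terminal network unreliability, assuming for simplicity that all links share the same repair rate (the turnip refinement is omitted): sample a uniform repair order, find the repair that first connects the terminals, and average the exact hypoexponential tail probability that this repair happens after the observation time:

```python
import math, random

def connected(n_nodes, up_edges, s, t):
    """Depth-first search: are s and t joined by the edges in up_edges?"""
    adj = {v: [] for v in range(n_nodes)}
    for a, b in up_edges:
        adj[a].append(b)
        adj[b].append(a)
    stack, seen = [s], {s}
    while stack:
        v = stack.pop()
        if v == t:
            return True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def hypoexp_survival(rates, t):
    """P(E_1 + ... + E_k > t) for independent E_i ~ Exp(rates[i]),
    valid for pairwise distinct rates."""
    total = 0.0
    for i, li in enumerate(rates):
        c = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                c *= lj / (lj - li)
        total += c * math.exp(-li * t)
    return total

def pmc_unreliability(n_nodes, edges, s, t, lam=1.0, t_obs=1.0, n_runs=5000):
    """PMC estimate of P(s and t disconnected at t_obs) when each edge,
    independently, is up with probability 1 - exp(-lam * t_obs); assumes
    the full graph does connect s and t."""
    m = len(edges)
    total = 0.0
    for _ in range(n_runs):
        order = random.sample(edges, m)          # uniform repair order (equal rates)
        up, stage_rates = [], []
        for i, e in enumerate(order):
            stage_rates.append((m - i) * lam)    # total repair rate before repair i
            up.append(e)
            if connected(n_nodes, up, s, t):
                break                            # this repair is the critical one
        total += hypoexp_survival(stage_rates, t_obs)
    return total / n_runs
```

For a two-link parallel network with lam = t_obs = 1 the first repair always connects the terminals, so every run returns exp(-2), which is exactly P(both links still down); the conditional step removes all sampling noise in this degenerate case.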
Pieter ten Wolde
Simulating rare events in cellular systems
Rare events play a central role in cellular systems. Cell differentiation during, e.g., embryonic development is driven by a sequence of rare switching events, and the response of cellular systems to changes in their environment also often involves rare switching events. Yet, simulating these events is a major computational challenge, because in conventional techniques most of the CPU time is wasted on the uneventful waiting time. In the past years, we have developed a new class of techniques, called Forward Flux Sampling (FFS), which makes it possible to simulate rare events efficiently in both equilibrium and non-equilibrium systems. More recently, we have extended these techniques so that they can also handle rare events in non-stationary (NS) systems. This NS-FFS technique can be used to study how sensitively systems respond to transient fluctuations, or how efficiently they can be switched under the influence of an external signal, e.g. during cell differentiation. We apply NS-FFS to study the flipping of the phage lambda switch by an external signal. We show that there exists an optimal pulse that maximizes the switching efficiency.
Restart vs Splitting: A Comparative Study
In this paper a comparative study is made for both transient and steady-state simulation. Both the original methods and their truncated versions with different depths of truncation are compared. The models used for the study include several Jackson and non-Jackson networks and also a general reliability model with different types of components. The main conclusion of the study is that RESTART always behaves significantly better than Splitting. Also, a small additional gain is achieved with RESTART with truncation in models where many thresholds can be set.
Optimal importance function for RESTART simulation of a two-queue tandem Jackson network
RESTART is a widely applicable accelerated simulation technique that allows
the evaluation of extremely low probabilities. In this method, a number of
simulation retrials are performed when the process enters importance regions,
i.e. regions of the state space where the chance of occurrence of a rare event
of interest is higher. In previous papers the importance of a state, which is a
measure of the chance of occurrence of the rare event when the state occurs, was
defined. The importance regions should be defined by means of the importance of
the states but, due to the difficulty of knowing them, they are defined by means
of another function of the states called the importance function. The choice by
the user of an importance function which reflects the importance of the states
is crucial for the effective application of RESTART. The paper presents exact formulas for the importance of the states in a two-queue tandem Jackson network. This allows using the optimal importance function in this network and gaining insight into how to choose the importance function in similar networks.
Rare event simulation and splitting: a Point Process interpretation with application for discontinuous random variables
We introduce here a new approach to splitting methods within a Poisson process framework. Indeed, for every real-valued random variable Y with continuous cdf (for instance the output of a complex numerical code with random input: Y = g(X)), we show that one can build a sequence of simulations whose distribution is related to a Poisson process with intensity measure -log(P[Y > y]). On the one hand, the number of samples (i.e. simulations) required to get a realisation of Y above a given threshold q then follows a Poisson law with parameter -log(P[Y > q]); this is to be compared with the usual geometric law with parameter p obtained with naive Monte Carlo. On the other hand, simulating several processes in parallel allows one to derive a Minimum Variance Unbiased Estimator (MVUE) for the probability p of exceeding the threshold q (it is also possible to build a quantile estimator given the probability p). Furthermore, this framework allows us to handle possible discontinuities in the cdf of Y and to derive a corrected MVUE for this case without knowing in advance whether Y is actually discontinuous or not.
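The Poisson law of the number of states below a threshold is easy to verify in a case where the conditional simulation is exact. For Y ~ Exp(1), memorylessness gives the conditional draw of Y given Y > y as y plus a fresh Exp(1), and the intensity measure -log P[Y > y] equals y, so the count of states below q should be Poisson with mean q. A minimal sketch of this check (an illustration only, not the paper's general MVUE construction):

```python
import random

def moves_below_threshold(q):
    """Simulate the increasing sequence Y1 < Y2 < ..., each drawn from the
    law of Y given Y > previous value (exact here by memorylessness of
    Exp(1)), and count the states that fall below the threshold q."""
    y, count = random.expovariate(1.0), 0
    while y <= q:
        count += 1
        y += random.expovariate(1.0)   # conditional draw given Y > y
    return count

def mean_moves(q, n_runs=20000):
    """Empirical mean of the count; the theory predicts Poisson(q), so ~ q."""
    return sum(moves_below_threshold(q) for _ in range(n_runs)) / n_runs
```

With q = 5 the naive Monte Carlo waiting time would be geometric with mean e^5 ≈ 148 samples, while the increasing sequence needs on average only 5 moves, illustrating the gain claimed above.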
Understanding umbrella sampling approaches to rare event simulation
I will discuss an ensemble sampling scheme based on a decomposition of the target average of interest into subproblems that are each individually easier to solve and can be solved in parallel. The most basic version of the scheme computes averages with respect to a given density and is a generalization of the Umbrella Sampling method for the calculation of free energies. We have developed a careful understanding of the accuracy of the scheme that is sufficiently detailed to explain the success of umbrella sampling in practice and to suggest improvements including adaptivity. For equilibrium versions of the scheme we have developed error bounds that reveal that the existing understanding of umbrella sampling is incomplete and leads to a number of erroneous conclusions about the scheme. Our bounds are motivated by new perturbation bounds for Markov chains that we recently established and that are substantially more detailed than existing perturbation bounds for Markov chains. They demonstrate, for example, that equilibrium umbrella sampling is robust in the sense that in limits in which the straightforward approach to sampling from a density becomes exponentially expensive, the cost to achieve a fixed accuracy with umbrella sampling can increase only polynomially. I will also discuss extensions of the stratification philosophy to the calculation of dynamic averages with respect to a given Markov process. The scheme is capable of computing very general dynamic averages and offers a natural way to parallelize in both time and space.
Numerical Results for the Automated Rare Event Simulation of Stochastic Petri Nets
Rare-event simulation is an efficient method to estimate measures of interest
that depend on hitting or visiting a rare set of states of a discrete-event
system. It is especially useful when state-level numerical methods are
infeasible because numerical restrictions inhibit the use of (semi-)Markovian
analytical techniques, e.g., due to the state space being too large to store
completely. If the system can instead be expressed using a high-level
description language such as stochastic Petri nets, then (rare-event) simulation
can be performed on the higher level. Rare-event simulation methods based on
multilevel splitting or importance sampling have been presented in the
literature and software tool implementations are available for some of them.
However, they require the user to specify paths to the rare set, or a distance or importance function.
Eurandom, Mathematics and Computer Science Dept, TU Eindhoven,
Den Dolech 2, 5612 AZ EINDHOVEN, The Netherlands
Eurandom is located on the campus of Eindhoven University of Technology (4th floor). The university is located at 10 minutes walking distance from Eindhoven main railway station (take the exit on the north side and walk towards the tall building on the right).
Registration is closed.
For invited participants, we will take care of accommodation. Other attendees will have to make their own arrangements.
We have a preferred hotel, which can be booked at special rates. Please email Patty Koorn for instructions on how to make use of this special offer.
For other hotels around the university, please see: Hotels (please note: prices listed are "best available").
More hotel options can be found on the webpages of the Tourist Information Eindhoven, Postbus 7, 5600 AA Eindhoven.
For those arriving by plane, there is a convenient direct train connection between Amsterdam Schiphol airport and Eindhoven. This trip takes about one and a half hours. For more detailed information, please consult the NS travel information pages or see the Eurandom web page location.
Many low-cost carriers also fly to Eindhoven Airport. There is a bus connection (bus route 401) from the airport to Eindhoven central railway station. For details on departure times, consult http://www.9292ov.nl
The University can be reached easily by car from the highways leading to Eindhoven (for details, see our route descriptions).
● Conference facilities: Conference room, Metaforum Building MF11&12
The meeting room is equipped with a data projector, an overhead projector, a projection screen and a blackboard. Please note that speakers and participants giving an oral presentation are kindly requested to bring their own laptop or their presentation on a memory stick.
● Conference Secretariat
Upon arrival, participants should register with the workshop officer, and collect their name badges. The workshop officer will be present for the duration of the conference, taking care of the administrative aspects and the day-to-day running of the conference: registration, issuing certificates and receipts, etc.
Should you need to cancel your participation, please contact Patty Koorn, the Workshop Officer.
There is no registration fee, but should you need to cancel your participation after January 2, 2016, we will be obliged to charge a no-show fee of 30 euro.
Mrs. Patty Koorn, Workshop Officer, Eurandom/TU Eindhoven, email@example.com
The organisers acknowledge the financial support/sponsorship of: