Workshop on Mathematical Methodologies for Operational Risk

April 16-17-18, 2007

EURANDOM, Eindhoven, The Netherlands


New and Emerging Science and Technology (NEST)



Carina Andersson (Lund University)

An industrial case study of a selection method for software reliability growth models

An iterative case study on quality monitoring, conducted in an industrial environment, is presented. Empirical data from a large telecommunications company were collected and analyzed. The organization develops consumer products with a substantial proportion of their functionality implemented in software. The development process is characterized by iterative development, with component releases in small iterations, and a product-line architecture. In this highly iterative project, everything seems to happen at the same time: analysis, design, and testing.
The analyzed data serve as feedback to the project staff to facilitate identification of software process improvement. The analysis is guided with the purpose of generalizing findings obtained from other research studies. Fault distributions are examined, in terms of detection phase, location of faults, and fault density. In addition, the data have been used for defect prediction.
We present a replication of a method for selecting software reliability growth models (SRGMs) to decide whether to stop testing and release software. We applied the selection method in an empirical study conducted in a different development environment than the original study. Replicating a study shows whether its results remain valid in another context, outside the specific environment in which the original study was conducted. The results of the replication study show that, with the changed values of stability and curve fit, the selection method worked well on the empirical system test data available, i.e., the method was applicable in an environment different from the original one.

Pauline Barrieu (London School of Economics)

About financial risk measures

We present a brief introduction to financial risk measurement, with emphasis on the recent notion of convex risk measures. We also look at various possible applications of risk measures in the pricing, hedging and optimal design of financial contracts.
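As a concrete illustration (not taken from the talk), the entropic risk measure rho(X) = (1/theta) * log E[exp(-theta*X)] is a standard example of a convex risk measure; a minimal sketch with a hypothetical two-outcome position:

```python
import math

def entropic_risk(position, probs, theta=1.0):
    """Entropic risk measure rho(X) = (1/theta) * log E[exp(-theta * X)];
    `position` lists the possible outcomes of the financial position X,
    `probs` their probabilities. A standard example of a convex risk measure."""
    ev = sum(p * math.exp(-theta * x) for x, p in zip(position, probs))
    return math.log(ev) / theta

# Two opposite positions and their 50/50 blend: convexity of rho rewards
# diversification, rho(0.5*X + 0.5*Y) <= 0.5*rho(X) + 0.5*rho(Y).
probs = [0.5, 0.5]
rho_x = entropic_risk([1.0, -1.0], probs)
rho_y = entropic_risk([-1.0, 1.0], probs)
rho_blend = entropic_risk([0.0, 0.0], probs)
```

The measure is also cash-invariant: adding a sure amount m to the position reduces the risk by exactly m.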

Tim Bedford (University of Strathclyde)

Measuring societal risk

This talk discusses some of the approaches to quantifying the risk of industrial activities. The usual approach, as used for example in the Netherlands, is to plot the annual frequency of accidents causing at least N fatalities as a function of N, on log-log scales. However, different curves, possibly arising from different proposed risk reduction measures, are not naturally ordered. We discuss the use of ideas from multicriteria decision analysis that enable such an ordering to be given, and a theoretically appropriate handling of epistemic uncertainty to be made. We finally discuss different ways of dealing with the time-dependent nature of the unwanted consequences.
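The FN-curve construction described above can be sketched in a few lines; the scenario set and its frequencies below are hypothetical:

```python
def fn_curve(scenarios, n_values):
    """F(N) = total annual frequency of accidents causing at least N
    fatalities, computed from (annual_frequency, fatalities) scenario pairs.
    Plotted against N on log-log scales, this is the FN curve."""
    return [sum(f for f, n in scenarios if n >= N) for N in n_values]

# hypothetical risk profile: frequent small accidents, rare large ones
scenarios = [(1e-2, 1), (1e-3, 10), (1e-5, 100)]
curve = fn_curve(scenarios, [1, 10, 100])
```

Two such curves arising from different risk-reduction measures can cross, which is exactly why no natural ordering exists and why the multicriteria ideas in the talk are needed.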

Fevzi Belli (University of Paderborn)

Interactive Systems and Their Vulnerabilities - A Holistic Approach to Modelling and Testing

Michel Chaudron (Eindhoven University of Technology)
Joint work with Christian Lange

Effects of Defects in Software Models

The Unified Modeling Language (UML) is the de facto standard for modelling software systems. UML offers a large number of diagram types that can be used with varying degrees of rigour. We present the results of a survey into the use of the UML in industry. This yields insight into the common level of quality of UML models. The results of this survey show that industrial UML models that are used as a basis for implementation and maintenance contain large numbers of defects. Subsequently, we study to what extent implementers detect defects in UML models and to what extent defects cause different interpretations by different readers. We performed two controlled experiments with a large group of students (111) and a group of industrial practitioners (48). The experiments' results show that defects often remain undetected and cause misinterpretations. We present a classification of defect types based on a ranking of detection rate and risk of misinterpretation.

Additionally, we observed effects of using domain knowledge to compensate for defects. The results are generalizable to industrial UML users and can be used to improve quality assurance techniques for UML-based development.

Julien Chiquet (Université de Technologie de Compiègne)

Modelling degradation processes through a piecewise deterministic Markov process

In many industrial applications, structures may suffer degradation induced by their operating conditions. The mechanisms that cause structural failures are complex due to their inter-dependencies and their different physical time-scales.

In this talk, we present a stochastic model based upon a piecewise deterministic Markov process (PDMP) to describe the time evolution of a degradation process that increases randomly until it reaches a fixed boundary, which corresponds to failure of the structure.

Thanks to Markov renewal theory, the transition function of the PDMP can be computed, which yields a closed-form solution for the reliability of the system. We also present the estimation results required to apply this model to real data sets. A numerical application to fatigue crack growth, a real degradation mechanism involved in various engineering fields, is provided.
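A Monte Carlo sketch of such a process, with hypothetical parameters (not the authors' crack-growth model): degradation grows deterministically between random regime changes, and the structure fails when a fixed boundary is crossed:

```python
import math
import random

def failure_time(a0=1.0, boundary=10.0, rates=(0.05, 0.2), jump_rate=1.0, rng=random):
    """One PDMP path: the degradation level grows deterministically
    (da/dt = c * a) between regime changes; the growth rate c is redrawn
    at the jump times of a Poisson process. Returns the time at which the
    degradation crosses the failure boundary."""
    t, a = 0.0, a0
    c = rng.choice(rates)
    while True:
        dt_jump = rng.expovariate(jump_rate)
        dt_fail = math.log(boundary / a) / c  # time to reach boundary at rate c
        if dt_fail <= dt_jump:
            return t + dt_fail
        t += dt_jump
        a *= math.exp(c * dt_jump)
        c = rng.choice(rates)

def reliability(t, n=2000, seed=42):
    """Monte Carlo estimate of R(t) = P(failure time > t)."""
    rng = random.Random(seed)
    return sum(failure_time(rng=rng) > t for _ in range(n)) / n
```

The closed-form solution in the talk replaces exactly this Monte Carlo step; simulation remains useful as a check and for more complicated degradation laws.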

Frank Coolen (Durham University)

Imprecise probability and risk assessment

The theory of imprecise probability generalizes classical probability theory by quantifying uncertainty via lower and upper probabilities. In this talk, a very brief introduction to this topic will be given, followed by a few basic examples of inference in risk and reliability using lower and upper probabilities. This will include an example of nonparametric predictive inference on failures caused by failure modes that have not yet been observed. There are many related challenges for research and application, some of which will be briefly discussed.
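By definition, lower and upper probabilities can be obtained as the minimum and maximum over a set of candidate distributions (a credal set); a toy sketch with a hypothetical two-element credal set, not the talk's nonparametric predictive inference:

```python
def lower_upper(event, credal_set):
    """Lower and upper probability of an event (a set of outcomes) as the
    minimum and maximum over a credal set of probability mass functions,
    each given as a dict mapping outcome -> probability."""
    values = [sum(pmf[x] for x in event) for pmf in credal_set]
    return min(values), max(values)

# hypothetical credal set: two candidate models for a component's behaviour
credal = [{"ok": 0.9, "fail": 0.1}, {"ok": 0.7, "fail": 0.3}]
low, up = lower_upper({"fail"}, credal)
```

A wide interval [low, up] then expresses how little the available evidence pins down the failure probability.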

Michael Grottke (Duke University)

Achieving high availability via software rejuvenation and multiple levels of recovery

The classic strategy for combating software faults is finding and removing them. However, for some types of software bugs this is not the only possible approach. In this talk, we clarify the terms "Bohrbug", "Heisenbug" and "Mandelbug". After discussing how this fault classification can help explain the effectiveness of various software recovery techniques, we present a model for a system with multiple levels of recovery. From this model, a closed-form expression for system availability can be derived. Extending our fault classification, we show how "aging-related bugs" are related to the other fault types and discuss approaches to modeling software aging and software rejuvenation.
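A minimal version of such a closed-form availability expression, assuming a simple alternating renewal model in which each failure triggers recovery level i with probability q_i (all numbers below are hypothetical, not from the talk):

```python
def availability(mtbf, recovery_probs, recovery_times):
    """Steady-state availability of an alternating renewal process where each
    failure triggers recovery level i with probability q_i and mean recovery
    duration m_i (same time unit as mtbf): uptime / (uptime + mean recovery)."""
    mean_recovery = sum(q * m for q, m in zip(recovery_probs, recovery_times))
    return mtbf / (mtbf + mean_recovery)

# hypothetical levels: micro-restart, process restart, full node reboot (hours)
a = availability(1000.0, [0.9, 0.09, 0.01], [0.1, 10.0, 120.0])
```

Cheap low-level recoveries that succeed most of the time keep the mean outage short even when the deepest level is very slow, which is the intuition behind multi-level recovery.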

Dimitrios Konstantinides (University of the Aegean)

Risk Models with Extremal Subexponentiality

Martin Newby (City University London)
Joint work with Colin Barker (City University London)

Optimal Non-Periodic Inspection for a Multivariate Degradation Model

We address the problem of determining an inspection and maintenance strategy for a system whose state is described by a multivariate stochastic process. We relax and extend the usual approaches. The system state is a multivariate stochastic process, decisions are based on a performance measure defined by the values of a functional on the process, and the replacement decision is based on crossings of critical levels. The critical levels are defined for the performance measure itself and also in terms of the probability of never returning to a satisfactory level of performance. The introduction of last exit times allows us to deal with non-negative transient processes which eventually escape to infinity. The last exit time is not a stopping time, and so we introduce the probability of never returning to a satisfactory level of performance as a natural measure of the system's state. By controlling the probability of non-return, the model gives a guaranteed level of reliability throughout the life of the project. The inspection times are determined by a deterministic function of the system state. A non-periodic policy is developed by evaluating the expected lifetime costs, and the optimal policy is obtained by an optimal choice of the inspection function. In the particular case studied here, the underlying process is a multivariate Wiener process, the performance measure is its norm, and the last exit time from a critical set, rather than the first hitting time, determines the policy.
Keywords: Wiener process; regenerative process; renewal-reward; dynamic programming; statistical testing; health monitoring
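The distinction between first hitting and last exit can be illustrated on a discretised path; here the performance measure is the Euclidean norm of a simulated three-dimensional Wiener process (transient, so it eventually leaves any bounded set for good), with illustrative level and step sizes:

```python
import math
import random

def simulate_norm_path(dim=3, n_steps=5000, dt=0.01, seed=1):
    """Norm of a simulated dim-dimensional Wiener process on a time grid; in
    three or more dimensions the norm process is transient."""
    rng = random.Random(seed)
    x = [0.0] * dim
    path = []
    for _ in range(n_steps):
        x = [xi + rng.gauss(0.0, math.sqrt(dt)) for xi in x]
        path.append(math.sqrt(sum(xi * xi for xi in x)))
    return path

def first_hit_and_last_exit(path, level, dt=0.01):
    """First time the performance measure reaches the critical level, and the
    last time it is at or below that level within the simulated horizon."""
    first = next((i * dt for i, v in enumerate(path) if v >= level), None)
    last = max((i * dt for i, v in enumerate(path) if v <= level), default=None)
    return first, last
```

On a typical path the last exit time is strictly later than the first hitting time: the process dips back into the satisfactory region before escaping, which is why a policy based on first hitting alone can be too conservative.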

Fabrizio Ruggeri (CNR-IMATI)
Joint work with Refik Soyer and Tom Mazzuchi (George Washington University)

Some Bayesian models for software reliability

We present ongoing research on Bayesian models for software reliability. In particular, we consider a self-exciting process which allows for the treatment of new bugs introduced during the debugging phase. The other model, motivated by the same interest as the first, deals with exponential interarrival times whose parameters depend on an underlying Markov chain.

Duane Steffey (Statistical and Data Sciences, Exponent Inc.)

Sequential Test Designs for Estimation of Extreme Quantiles

Although reliability testing often concerns the estimation of central values (e.g., mean time to failure), interest sometimes focuses on determining the level at which a response will occur with small, but non-negligible, probability. Applying a method developed in health risk assessment for low-dose extrapolation, we designed an adaptive testing strategy that uses available test resources to obtain the most efficient estimates of extreme values. Observations are taken at continuously updated estimates of lower and upper percentiles. This approach yields more accurate estimates than traditional testing at the center of the distribution. We have successfully implemented the testing algorithm in recent projects involving batteries and squibs.
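The talk's specific adaptive design is not reproduced here; as a generic sketch of "testing at continuously updated level estimates", a Robbins-Monro recursion targets the level at which a response occurs with a given probability, shown on a toy logistic response model:

```python
import math
import random

def adaptive_quantile(p, response, x0=0.0, c=8.0, n=100000, seed=7):
    """Robbins-Monro recursion: test at the current level estimate, move the
    level up when no response occurs (level too low) and down otherwise, with
    step sizes c/k. Converges to the level x_p where P(response at x_p) = p.
    A generic stochastic-approximation sketch, not the talk's method."""
    rng = random.Random(seed)
    x = x0
    for k in range(1, n + 1):
        y = response(x, rng)  # True if a response occurred at level x
        x += (c / k) * (p - (1.0 if y else 0.0))
    return x

def logistic_response(x, rng):
    """Toy response model: P(response at level x) = 1 / (1 + exp(-x)),
    i.e. a standard logistic tolerance distribution."""
    return rng.random() <= 1.0 / (1.0 + math.exp(-x))

x90 = adaptive_quantile(0.9, logistic_response)
```

For truly extreme percentiles the convergence is much slower, which is exactly why carefully designed strategies such as the one in the talk are needed.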

Florentina Suter (University of Bucharest)

Risk assessment using software reliability simulation

Software risk assessment takes into consideration many aspects of the software product. One of these aspects, with critical influence on risk, is software reliability. For this reason, estimating software reliability measures is very important in the process of software risk assessment. In order to characterize as realistically as possible the evolution of software in time, software reliability models should take into account the structure of the software. Such models are the component-based models, in which software is not a black box but consists of several interconnected components. For models of this type, mathematical tractability is difficult to obtain because of their complexity, and simulation is a more flexible alternative. In this paper we use discrete-event simulation applied to a component-based software model in order to simulate the software failure process and to estimate software reliability.
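A minimal sketch of the component-based view (architecture and failure probabilities below are hypothetical): each run walks the component transition graph and fails if any visited component fails, and reliability is estimated by repeated simulation:

```python
import random

def simulate_run(transitions, fail_prob, start=0, rng=random):
    """One run of the system: walk the component transition graph from the
    start component; each visited component fails with its own probability,
    and the run succeeds if a terminal component is reached without failure."""
    state = start
    while True:
        if rng.random() < fail_prob[state]:
            return False
        nxt = transitions.get(state)
        if nxt is None:  # no outgoing transitions: terminal component
            return True
        states, weights = zip(*nxt)
        state = rng.choices(states, weights)[0]

def estimate_reliability(transitions, fail_prob, n=10000, seed=3):
    rng = random.Random(seed)
    return sum(simulate_run(transitions, fail_prob, rng=rng) for _ in range(n)) / n

# hypothetical three-component pipeline 0 -> 1 -> 2 (terminal)
transitions = {0: [(1, 1.0)], 1: [(2, 1.0)]}
fail_prob = {0: 0.01, 1: 0.02, 2: 0.03}
r = estimate_reliability(transitions, fail_prob)
```

With branching transition probabilities and per-component failure behaviour, the same loop covers architectures for which closed-form expressions become unwieldy.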

Simon Wilson (Trinity College)

Decision theory approaches to the optimal time to test

Testing of software prior to release is an important stage of the software development process. In this talk we look at a decision-theoretic solution to the problem of deciding the optimal length of the testing period. We make use of a well-known error detection model and a sensible utility function. Several testing plans are described. A study comparing the plans shows the relative performance of each under a variety of assumptions about the quality of the software to be tested.

Michael Wiper (Universidad Carlos III de Madrid)

Bayesian software reliability models using metrics information 

We wish to predict the number of faults, N, and the time to next failure, T, of a piece of software. We assume that software metrics data are available, so that E[N] can be estimated via a (Poisson or negative binomial) regression model, and that T given N follows a standard fault-based reliability model, such as Jelinski-Moranda, or a generalized order statistics model. Given metrics and software failure time data, we show that this model can be fitted using a fully integrated Bayesian approach.
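A generative sketch of the model structure described above, with hypothetical regression coefficients: metrics determine E[N] through a log-linear Poisson model, and interfailure times given N follow Jelinski-Moranda:

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson sample via Knuth's method; adequate for small means."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def jelinski_moranda_times(n_faults, phi, rng):
    """Jelinski-Moranda: the i-th interfailure time is exponential with
    rate phi * (number of faults still remaining)."""
    times = []
    for remaining in range(n_faults, 0, -1):
        times.append(rng.expovariate(phi * remaining))
    return times

def simulate_project(metric, b0=-1.0, b1=0.5, phi=0.5, seed=0):
    """Hypothetical coefficients b0, b1: E[N] = exp(b0 + b1 * metric);
    draw N, then the interfailure times given N."""
    rng = random.Random(seed)
    n = poisson_draw(math.exp(b0 + b1 * metric), rng)
    return n, jelinski_moranda_times(n, phi, rng)
```

The Bayesian fit in the talk goes the other way: given observed metrics and failure times, it infers the regression coefficients and the per-fault rate jointly.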

Jeannette Woerner (Universität Göttingen)

Analysis of market microstructure: risk introduced by model misspecification

The estimation of the volatility of log-returns is a crucial step in the framework of forecasting and risk assessment for financial data.
Intuitively a good choice is to use the realized volatility, i.e. the quadratic variation with the highest available sampling frequency.
However, empirical studies have shown that for sampling intervals below about 15 minutes the realized volatility increases instead of settling down to a limit, the integrated volatility. One possibility to explain this behaviour is to introduce the concept of market microstructure, or market friction, by adding an iid noise component to the Brownian-motion-based model. Another possibility is to use models based on fractional Brownian motion with Hurst parameter H < 0.5.

We compare both concepts and show that analysis in terms of power variation gives some evidence for the latter approach. Furthermore, we provide examples based on stock and index data and quantify the level of risk introduced by model misspecification.
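The empirical effect can be reproduced under the first (additive-noise) explanation with a simulated path: the efficient log-price is Brownian, the observation adds iid microstructure noise, and realized volatility at the finest frequency is inflated by the noise while coarser sampling stays close to the integrated volatility. All parameters are illustrative:

```python
import math
import random

def simulate_noisy_path(n=23400, sigma=0.2, noise_sd=0.002, seed=11):
    """Efficient log-price: Brownian motion with volatility sigma over one
    unit of time (dt = 1/n, e.g. one-second ticks over a trading day);
    the observed price adds iid microstructure noise."""
    rng = random.Random(seed)
    dt = 1.0 / n
    x, obs = 0.0, []
    for _ in range(n + 1):
        obs.append(x + rng.gauss(0.0, noise_sd))
        x += rng.gauss(0.0, sigma * math.sqrt(dt))
    return obs

def realized_vol(prices, step):
    """Sum of squared returns, sampling every `step`-th observation."""
    sub = prices[::step]
    return sum((b - a) ** 2 for a, b in zip(sub, sub[1:]))

prices = simulate_noisy_path()
rv_fine = realized_vol(prices, 1)      # tick-by-tick: inflated by the noise
rv_coarse = realized_vol(prices, 900)  # "15-minute" sampling: close to sigma^2
```

Approximately, E[RV at step s] = sigma^2 + 2 * (n/s) * noise_sd^2, so the noise term explodes as the sampling interval shrinks; the fractional-Brownian-motion explanation produces a similar signature through the path's roughness instead.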

David Wooff (Durham University)

Managing software testing risk via a Bayesian graphical approach

The Bayesian graphical model approach to software testing offers a coherent and holistic methodology for handling software testing problems. The outputs of the model relate naturally to, and are driven by, utility-type scales and consequently form the appropriate mechanism for handling risk for various, possibly competing, stakeholders. In this talk, we discuss the metrics offered by the approach and how they may be used to drive automatic test design over the collections of Bayesian belief networks which represent the key observables of the software being tested. We similarly consider the test-retest problem, and show how outputs from the approach form the natural metric for managing the software testing problem.

Henry Wynn (London School of Economics)
Joint work with Eduardo Saenz de Cabezon (University of La Rioja)

Algebraic methods for system reliability bounds

Computational algebraic geometry is being used, under the heading of "algebraic statistics", particularly to study contingency tables and the design of experiments. But the ideas have considerable application to reliability (Giglio and Wynn, Ann. Statist., 1992). We extend that work using the idea of a free resolution. In a well-defined sense the methods yield the sharpest upper and lower bounds of inclusion-exclusion (IE), or generalised Bonferroni, type. The special IE formulae are given by the generalised Hilbert series, whose terms depend on the multi-graded Betti numbers; in the so-called minimal free resolution case these give tighter bounds than any other resolution, and in particular than the classical IE formula, which corresponds to the Taylor resolution.
An advantage of the algebraic methods is that they apply naturally to multi-state coherent systems because of the monomial ideal framework.
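For orientation, the classical inclusion-exclusion truncations (the Bonferroni bounds, corresponding to the Taylor resolution above) can be sketched directly; the algebraic machinery in the talk produces tighter, Betti-number-based versions of the same alternating-bound idea. The failure probabilities below are hypothetical:

```python
import math
from itertools import combinations

def ie_truncations(n, inter_prob, depth):
    """Partial sums of inclusion-exclusion for P(A_1 ∪ ... ∪ A_n), where
    inter_prob(S) = P(intersection of A_i for i in S). Stopping after an odd
    number of terms gives an upper bound, after an even number a lower bound."""
    bounds, total = [], 0.0
    for k in range(1, depth + 1):
        term = sum(inter_prob(S) for S in combinations(range(n), k))
        total += term if k % 2 == 1 else -term
        bounds.append(total)
    return bounds

# independent component failure events with hypothetical probabilities
p = [0.1, 0.2, 0.3]

def indep(S):
    return math.prod(p[i] for i in S)

bounds = ie_truncations(3, indep, 3)
```

Each extra truncation level costs combinatorially more intersection terms, which is why sharper bounds at a fixed truncation depth, as delivered by the minimal free resolution, matter for large systems.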

Last updated 24-02-09

This page is maintained by Lucienne Coolen