Workshop on

Risk Measures & Risk Management for High-Frequency Data

March 6-8, 2006

EURANDOM, Eindhoven, The Netherlands

ABSTRACTS

W. Breymann, Zürcher Hochschule Winterthur

Intraday Diversified World Stock Index: Dynamics, Return Distributions, Dependence Structure


M. Dacorogna, Converium, Zürich

Multivariate Extremes, Aggregation and Risk Estimation


D. Konstantinides, University of the Aegean

Large Deviations and Ruin Probabilities for Solutions to Stochastic Recurrence Equations with Heavy-Tailed Innovations


V. Fasen, Munich University of Technology

Asymptotic results for sample ACF and extremes of generalized  Ornstein-Uhlenbeck processes

We consider a stationary generalized Ornstein-Uhlenbeck process whose marginal distributions are, under weak regularity conditions, regularly varying. We show that this continuous-time process is regularly varying in the sense of Hult and Lindskog (2005). Regular variation plays a crucial role in establishing the large sample path behavior of a variety of statistics of generalized Ornstein-Uhlenbeck processes. A complete analysis of the weak limit behavior, such as the extremal behavior and the limit behavior of the sample ACF, is given by means of a point process analysis. Generalized Ornstein-Uhlenbeck processes exhibit clusters of extremes. The behavior of the sample ACF depends on the existence of moments of the stationary distribution. We demonstrate the theory in full detail for COGARCH(1,1) processes.
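
For reference, the process in question is constructed from a bivariate Lévy process (\xi, \eta); in the standard notation (which is generic and not specific to this talk's regularity conditions) it reads

V_t = e^{-\xi_t} \Big( V_0 + \int_0^t e^{\xi_{s-}} \, \mathrm{d}\eta_s \Big), \qquad t \ge 0,

and is strictly stationary for a suitable choice of V_0 under standard conditions.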


J. Grammig, University of Tübingen

Time and the Price Impact of a Trade - A Structural Approach

This paper revisits the role of time in measuring the informational content of trades. Using a VAR methodology and NYSE data, Dufour and Engle (2000) showed that the durations between trades carry informational content with respect to the price impact of a trade. This paper draws on their work, but addresses the issue within the framework of a structural model. For that purpose we extend Madhavan/Richardson/Roomans' microstructure model to account for time-varying trade intensities. We estimate the model for a cross section of stocks traded on one of the largest European stock markets, the Xetra system operated by the German stock exchange. Our results provide contrasting evidence regarding the informational content of time. Although we also find that ''time matters'', in that the informational content of a trade increases with the duration since the last trade, the nature of that informational content is quite different. While Dufour and Engle's results provided evidence for the hypothesis that ''no trade means no information'', which is in line with the Easley/O'Hara (1992) microstructure model, our results suggest that in an automated order book market with no dedicated market makers present, the impact of time on the price impact of trades is more in accord with the predictions of the Admati/Pfleiderer (1988) model.


I. Grosse, Leibniz Institute of Plant Genetics and Crop Plant Research, Gatersleben

Data integration, analysis, and modelling in computational biology

Hundreds of databases with terabytes of valuable information, as well as thousands of computer programmes for data analysis and modelling, are currently being used by computational biologists worldwide. A major problem of the integrative analysis and modelling of the available masses of data with the armada of existing programmes is that a large fraction of the data is mutually contradictory and that analysis programmes often produce contradictory results even when applied to the same data. The goal of the Plant Data Warehouse project initiated at the Bioinformatics Centre Gatersleben-Halle is the development of a flexible software platform for data integration, analysis, and modelling.

The integrated data include molecular, genotypic, phenotypic, taxonomic, geographic, and environmental data as well as data on plant genetic resources, and the integrated programmes allow the analysis and modelling of DNA and protein sequences, molecular markers, expression data, genotype-phenotype relationships, and environmental data such as weather data sampled in one-minute intervals at the Leibniz Institute of Plant Genetics and Crop Plant Research. The central entry point for accessing the integrated data, programmes, and analysis and modelling results is the Plant Bioinformatics Portal publicly available at http://portal.bic-gh.de/.


L. Gulyas, AITIA International

Statistical Challenges in Agent-Based Computational Modeling

Agent-based modeling (ABM) is a new branch of computer simulation, especially suited for the modeling of complex social systems. Its main tenet is to model the individual, together with its imperfections (e.g., limited cognitive and computational abilities), its idiosyncrasies, and unique interactions. Thus, the approach builds the model from ‘the bottom-up’, focusing mostly on micro rules and seeking the understanding of the emergence of macro behavior. In this talk I will provide an overview of the various issues and challenges of ABM related to statistics.

ABM was made possible by the increased performance of today's computers. However, this very same development is starting to cause problems of its own. Most agent-based simulations (ABS) are stochastic, so the validity of their (computational) results must be established by statistical analysis of a number of sampled runs. In addition, like most computer models, ABSs have numerous parameters that create an enormous parameter space against which the robustness of their findings must be established. Agent-based modellers are therefore facing the challenge of 'dimension collapse'. On the other hand, significant interest has arisen recently in the empirical fitting and validation of agent-based models. Since an important characteristic of ABMs is their involvement at several levels (i.e., both the micro and the macro level), fitting and validation methods create hard-to-meet data requirements. Finally, ABM's special consideration of agent interactions calls for empirical data on the interactions in the modelled system -- another problem that is often hard to solve.
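
As a minimal illustration of the replication issue mentioned above (a toy model with hypothetical parameters, not taken from the talk), the following sketch runs a stochastic agent-based simulation many times and summarizes a macro-level statistic with a confidence interval across runs:

import random
import statistics

def run_simulation(n_agents=100, n_steps=50, p_interact=0.1, seed=None):
    # Toy stochastic ABM: each agent holds a binary opinion and, with
    # probability p_interact per step, copies the opinion of a random partner.
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < p_interact:
                opinions[i] = opinions[rng.randrange(n_agents)]
    return sum(opinions) / n_agents  # macro statistic: share holding opinion 1

# A single run is not meaningful; average over many sampled runs instead.
results = [run_simulation(seed=s) for s in range(200)]
mean = statistics.mean(results)
half_width = 1.96 * statistics.stdev(results) / len(results) ** 0.5
print(f"macro share: {mean:.3f} +/- {half_width:.3f} (approx. 95% CI over 200 runs)")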


T. Hayashi, Columbia University

Nonsynchronously observed diffusions and covariance estimation

We consider the problem of estimating the (integrated) covariance/correlation of two diffusion-type processes when they are observed at discrete times in a nonsynchronous manner. In our preceding work in 2003, we proposed a new estimation procedure that is free of any `synchronization' processing of the original data and showed consistency of the resulting estimators as the mesh size shrinks to zero ([1]).
In the talk, we will present advances in the theory ([2], [3]). In particular, we will discuss asymptotic normality of the covariance estimator, jointly with the realized volatilities, and, as its direct application, asymptotic normality of the correlation estimators, accompanied by some examples. We will also attempt to discuss the results in a general setup where the processes are continuous semimartingales and the observation times are stopping times.

[1] Hayashi, T. and Yoshida, N. On Covariance Estimation of Non-synchronously Observed Diffusion Processes. Bernoulli 11(2), 359-379 (2005).

[2] Hayashi, T. and Yoshida, N. Asymptotic Normality of A Covariance Estimator for Nonsynchronously Observed Diffusion Processes. Preprint (2004).

[3] Hayashi, T. and Yoshida, N. Estimating Correlations with Nonsynchronous Observations in Continuous Diffusion Models. Preprint (2005).
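
The estimator of [1] admits a very compact description: it sums products of increments over all pairs of observation intervals of the two processes that overlap, with no prior synchronization of the sampling grids. A minimal (quadratic-time) sketch, not the authors' code:

import numpy as np

def hayashi_yoshida(t_x, x, t_y, y):
    # Hayashi-Yoshida estimator of integrated covariance from
    # nonsynchronous observations.
    # t_x, t_y: increasing arrays of observation times; x, y: observed values.
    dx, dy = np.diff(x), np.diff(y)
    cov = 0.0
    for i in range(len(dx)):
        for j in range(len(dy)):
            # add dX_i * dY_j whenever the intervals (t_x[i], t_x[i+1]]
            # and (t_y[j], t_y[j+1]] overlap
            if max(t_x[i], t_y[j]) < min(t_x[i + 1], t_y[j + 1]):
                cov += dx[i] * dy[j]
    return cov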


S. Haug, Technische Universität München

An exponential continuous time GARCH(p,q) process

We introduce an exponential continuous time GARCH(p,q) process. It is defined in such a way that it is a continuous time extension of the discrete time EGARCH(p,q) process. We investigate stationarity and moment properties of the new model. One feature of the EGARCH(p,q) process is its ability to model the leverage effect. It is shown that this is also true for our continuous time extension. To incorporate a long memory effect we extend the model to a fractionally integrated exponential continuous time GARCH(p,q) process.
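
For orientation, the discrete-time EGARCH model being extended can be written, in one common parametrization (index conventions for p and q vary), as

\log \sigma_n^2 = \mu + \sum_{i=1}^{q} \beta_i \log \sigma_{n-i}^2 + \sum_{k=1}^{p} \alpha_k \, g(Z_{n-k}), \qquad g(z) = \theta z + \gamma \big( |z| - \mathbb{E}|z| \big),

where the asymmetry term \theta z is what produces the leverage effect.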


A. Lindner, Technische Universität München

On continuous time GARCH processes of higher order

A continuous time GARCH model of order (p,q) is introduced, which is driven by a single Lévy process. It extends many of the features of discrete time GARCH(p,q) processes to a continuous time setting. When p=q=1, the process thus defined reduces to the COGARCH(1,1) process of Klueppelberg et al. (2004). We give sufficient conditions for the existence of stationary solutions and show that the volatility process has the same autocorrelation structure as a continuous time ARMA process. The autocorrelation of the squared increments of the process is also investigated, and conditions ensuring a positive volatility are discussed. The talk is based on joint work with Peter Brockwell and Erdenebaatar Chadraa (2005).
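
For p = q = 1, the process referred to above, the COGARCH(1,1) of Klueppelberg et al. (2004), can be written in one common representation as

\mathrm{d}G_t = \sigma_{t-} \, \mathrm{d}L_t, \qquad \mathrm{d}\sigma_t^2 = \big( \beta - \eta \, \sigma_{t-}^2 \big) \, \mathrm{d}t + \varphi \, \sigma_{t-}^2 \, \mathrm{d}[L, L]^{\mathrm{d}}_t,

with parameters \beta, \eta, \varphi > 0 and [L, L]^{\mathrm{d}} the pure-jump part of the quadratic variation of the driving Lévy process L.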


A. Lunde, Aarhus School of Business

Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise


L. Mancini, Universität Zürich
Joint work with Y. Aït-Sahalia

Out of Sample Forecasts of Quadratic Variation


T. Mikosch, University of Copenhagen

A Poisson cluster model for high frequency arrivals


S. Mittnik, University of Munich
Joint work with Fulvio Corsi (University of Lugano) / Uta Kretschmer (University of Bonn) / Christian Pigorsch (University of Munich)

The Volatility of Realized Volatility

Using unobservable conditional variance as a measure, latent-variable approaches, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application for S&P500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement of the model's fit as well as predictive performance. Furthermore, the distributional assumption for residuals plays a crucial role in density forecasting.
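
A minimal sketch of the first step described above, constructing an "observable" daily realized volatility series from intraday returns (variable names are hypothetical; the paper then fits ARFIMA-type models with non-Gaussian, heteroskedastic residuals to such a series):

import numpy as np

def daily_realized_volatility(prices, day_labels):
    # prices: intraday transaction (or mid-quote) prices, in time order
    # day_labels: array of the same length identifying the trading day
    log_p = np.log(np.asarray(prices, dtype=float))
    day_labels = np.asarray(day_labels)
    rv = []
    for day in np.unique(day_labels):
        r = np.diff(log_p[day_labels == day])  # intraday log returns
        rv.append(np.sqrt(np.sum(r ** 2)))     # realized volatility for that day
    return np.array(rv)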


P. Mykland, University of Chicago

A Gaussian Calculus for Inference from High Frequency Data

In the econometric literature on high frequency data, it is often assumed that one can carry out inference conditionally on the underlying volatility processes. In other words, conditionally Gaussian systems are considered. This is often referred to as the assumption of "no leverage effect". This is often a reasonable thing to do, as general estimators and results can often be conjectured from considering the conditionally Gaussian case. The purpose of this paper is to give some more structure to the things one can do with the Gaussian assumption. We shall argue in the following that there is a whole treasure chest of tools that can be brought to bear on high frequency data problems in this case. We shall in particular consider approximations involving locally constant volatility processes, and develop a general theory for this approximation. As applications of the theory, we propose an improved estimator of quarticity, an ANOVA for processes with multiple regressors, and an estimator for error bars on the Hayashi-Yoshida estimator of quadratic covariation.
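
For context, the baseline quarticity estimator (standard in this literature, not the improved one proposed in the talk) uses n equispaced returns \Delta X_i over [0, 1]:

\widehat{Q} = \frac{n}{3} \sum_{i=1}^{n} (\Delta X_i)^4 \;\longrightarrow\; \int_0^1 \sigma_t^4 \, \mathrm{d}t \quad \text{in probability},

and the integrated quarticity is precisely the quantity needed to set error bars on realized-variance-type estimators.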


W. Polasek, Institut für Höhere Studien (IHS)

Irregularly Spaced AR and ARCH (ISAR-ARCH) Models

Irregularly spaced time series are a rather common phenomenon in finance. We adopt a simple AR(p) model and extend its lag pattern to a continuous linear lag response function, which is estimated from the irregularly (i.e. unequally) spaced time series. This model is called the linear ISAR(p) model, and the estimation procedure is worked out with the help of the Gibbs and Metropolis samplers. Extending the same concept to cope with volatility, we extend the approach to ARCH(q) models using the same basic data transformations. Again, these ISAR(p)-ISARCH(q) models are estimated by MCMC methods (Metropolis sampler). Three types of lag response functions are used in the modeling process, and model choice is carried out with the help of the marginal and predictive likelihoods and conditional predictive ordinate (CPO) plots. The ISAR(p)-ISARCH(q) models are demonstrated on simulated data and finally applied to high frequency foreign exchange rate data from Oct. 1, 1992 to Sept. 30, 1993, distributed by the Olsen company in Zurich. The negative correlation at small time intervals could be confirmed in this way. Possible multivariate extensions are also discussed.

Keywords: AR-ARCH(p,q) models, conditional Weibull duration models, irregularly spaced time series, Metropolis-within-Gibbs, predictive distribution.
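
A schematic sketch of the ISAR(p) regression step (the exponential lag response used here is purely illustrative; the talk compares three parametric forms and estimates by MCMC rather than least squares):

import numpy as np

def isar_regressors(times, values, p, lag_response):
    # Design matrix for an irregularly spaced AR(p): each of the p most recent
    # past observations enters with a weight given by a continuous lag response
    # function evaluated at the elapsed time since that observation.
    X, y = [], []
    for i in range(p, len(values)):
        gaps = times[i] - times[i - p:i]          # elapsed times to the p predecessors
        X.append(lag_response(gaps) * values[i - p:i])
        y.append(values[i])
    return np.array(X), np.array(y)

# Illustrative use with an exponential lag response and simulated data
times = np.cumsum(np.random.exponential(1.0, size=200))
values = np.random.randn(200)
X, y = isar_regressors(times, values, p=2, lag_response=lambda d: np.exp(-0.5 * d))
beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS stand-in for the MCMC estimation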


E. Renault, University of North Carolina at Chapel Hill

Causality Effects in Return Volatility Measures with Random Times


J. Schmiegel, University of Aarhus

Time change and universality: Heavy-tailed distributions in turbulence and finance


R. Stresing, Universität Oldenburg

Analysis of financial data on different timescales

A general non-parametric method, utilizing a Fokker-Planck equation, is presented to describe the statistics of the log return distribution of financial data as a function of the timescale for medium and small timescales. For very small timescales of less than one minute, a new non-parametric approach is presented which is based on a measure that quantifies the distance between a considered distribution and a reference distribution. The existence of a small timescale regime is demonstrated, which exhibits different properties compared to timescales larger than one minute. This regime seems to be universal for individual stocks. It is shown that the existence of a small timescale regime does not depend on the particular choice of the distance measure or the reference distribution. The existence of such a regime has important implications for risk analysis, in particular for the probability of extreme events.
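
A minimal version of the small-timescale analysis could compare the empirical log-return distribution at timescale tau with a reference distribution via some distance measure; the sketch below uses a Kullback-Leibler divergence against a Gaussian reference, both of which are assumptions for illustration rather than the authors' choices:

import numpy as np

def distance_to_gaussian(prices, tau, n_bins=50):
    # Distance (Kullback-Leibler divergence) between the histogram of log
    # returns at timescale tau (in ticks) and a Gaussian reference with the
    # same mean and standard deviation.
    log_p = np.log(np.asarray(prices, dtype=float))
    r = log_p[tau:] - log_p[:-tau]                    # log returns at scale tau
    hist, edges = np.histogram(r, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    mu, sigma = r.mean(), r.std()
    gauss = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    mask = (hist > 0) & (gauss > 0)
    return float(np.sum(hist[mask] * np.log(hist[mask] / gauss[mask])) * width)

# distance as a function of the timescale, e.g. tau = 1 ... 120 ticks:
# distances = [distance_to_gaussian(prices, tau) for tau in range(1, 121)]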


J. Woerner, University of Göttingen

On the fine structure of price processes


N. Yoshida, University of Tokyo

Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations

We prove a certain polynomial type large deviation inequality for a statistical random field. From this result, it is possible to obtain weak convergence of the statistical random field and asymptotic properties of statistics related to it. We apply these results to quasi-likelihood analysis for sampled stochastic differential equations. It turns out that Yury Kutoyants' scheme of applying Ibragimov-Hasminskii's theory to stochastic processes such as semimartingales was correct after all.

