BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Eurandom - ECPv4.9.12//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Eurandom
X-ORIGINAL-URL:https://www.eurandom.tue.nl
X-WR-CALDESC:Events for Eurandom
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20110101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20111010
DTEND;VALUE=DATE:20111013
DTSTAMP:20191209T055838Z
CREATED:20190826T101900Z
LAST-MODIFIED:20190826T124245Z
UID:2836-1318204800-1318463999@www.eurandom.tue.nl
SUMMARY:YES V: Adaptation in Nonparametric Statistics
DESCRIPTION:Summary\nThe quality of statistical inference essentially depends on how complex we assume the underlying statistical model to be: generally\, the richer the model\, the worse the quality of statistical inferences. On the other hand\, if the proposed model is too simple\, it may not be able to provide a reasonable fit to the data. In an adaptive setup\, instead of one particular model one deals with a family of models\, often ordered or embedded from simple to complex. Depending on the statistical problem at hand (for instance\, regression function estimation\, hypothesis testing\, confidence sets)\, the general problem of adaptation is\, loosely formulated\, to design a so-called adaptive method for solving this statistical problem that performs in the multiple-model situation as well as in a single model\, or\, if this is not possible\, with the smallest loss of quality. \nIn the last two decades\, several adaptive methods (optimal in one sense or another) have been developed: cross-validation\, the blockwise method\, Lepski's method\, wavelet thresholding\, penalized estimators\, etc. Most adaptive methods address estimation problems. \nMuch less is known about adaptive confidence sets: how to define and construct an optimal adaptive confidence set seems to be a subtle issue. Recently\, this topic has attracted increasing interest in the statistical community. \nTo compare different statistical procedures\, a machinery for optimality considerations within the minimax framework was built up in the 1980s. In the meantime\, a new approach to optimality has been developed\, namely the oracle approach. The interplay between these two approaches\, minimax and oracle\, is one of the main topics of this workshop. \nThe present workshop is directed at statisticians\, in particular Ph.D. students\, postdocs and junior researchers\, who are interested in the subject of adaptation in nonparametric models. 
\nSponsors\n \nOrganisers\n\n\n\nAngelika Rohde\nUniversität Hamburg\n\n\nEduard Belitser\nTU Eindhoven\n\n\nGeurt Jongbloed\nTU Delft\n\n\n\n \nSpeakers\n\n\n\nAlexander Goldenshluger\nUniversity of Haifa\n\n\nRichard Nickl\nUniversity of Cambridge\n\n\nLaurent Cavalier\nUniversity Aix-Marseille I\n\n\nEduard Belitser\nTU Eindhoven\n\n\n\nProgramme\nMonday October 10 \n\n\n\n9:30-10:30\nRegistration\n\n\n\n10:30-10:45\nWelcome\nConnie Cantrijn\n\n\n10:45-11:30\nAlexander Goldenshluger\nIntroduction to adaptive nonparametric estimation by selection of estimators\n\n\n11:35-12:20\nLaurent Cavalier\nInverse problems in statistics\n\n\n12:30-13:30\nLUNCH\n\n\n\n13:30-14:15\nEduard Belitser\nOracle approach\, interplay with minimax adaptation\n\n\n14:20-14:40\nChristoph Breunig\nAdaptive estimation of functionals in nonparametric instrumental regression\n\n\n14:45-15:05\nMaik Schwarz\nAdaptive estimation in a Gaussian sequence model\n\n\n15:15-15:45\nCoffee/tea break\n\n\n\n15:45-16:30\nLaurent Cavalier\nOracle inequalities in inverse problems\n\n\n16:35-16:55\nItai Dattner\nOn deconvolution of distribution functions\n\n\n\nTuesday October 11 \n\n\n\n9:00-9:45\nRichard Nickl\nConfidence sets in nonparametric statistics\n\n\n9:50-10:10\nJakob Söhl\nConfidence sets in nonparametric calibration of exponential Lévy models\n\n\n10:15-10:45\nCoffee/tea break\n\n\n\n10:45-11:30\nAlexander Goldenshluger\nGeneral procedure for selecting linear estimators\n\n\n11:35-11:55\nClaudia Strauch\nSharp adaptive drift estimation in multidimensional diffusion models\n\n\n12:00-12:20\nRudolf Schenk\nAdaptive local functional linear regression\n\n\n12:35-13:30\nLUNCH\n\n\n\n13:30-14:15\nEduard Belitser\nLower bounds\, Bayesian model selection and adaptive confidence sets\n\n\n14:20-14:40\nCatia Scricciolo\nAdaptive Bayesian density estimation using Dirichlet process Gaussian mixture priors\n\n\n14:45-15:05\nBotond Szabo\nUnderstanding the asymptotic behaviour of the empirical Bayes 
method\n\n\n15:15-15:45\nCoffee/tea break\n\n\n\n15:45-16:30\nRichard Nickl\nAdaptive confidence sets I - confidence bands\n\n\n16:35-16:55\nAdam Bull\nHonest adaptive confidence bands\n\n\n18:30 -\nConference dinner\n\n\n\n \nWednesday October 12 \n\n\n\n9:00-9:45\nAlexander Goldenshluger\nAggregation of estimators\n\n\n9:50-10:35\nLaurent Cavalier\nRisk hull method\n\n\n10:45-11:15\nCoffee/tea break\n\n\n\n11:15-12:00\nRichard Nickl\nAdaptive confidence sets II\n\n\n12:00-12:30\nDiscussion\n\n\n\n12:30\nClosing\n\n\n\nAbstracts\nEduard Belitser (TU Eindhoven) \nTalk I: Oracle approach\, interplay with minimax adaptation \nA classical approach to optimality considerations in adaptation problems is via the minimax framework. There is a way to look at the adaptation problem from another perspective\, namely\, by using the so-called oracle approach. We introduce the notions of oracle inequality\, oracle risk\, oracle estimator and describe the oracle approach in a general setting. We also discuss the interplay between the oracle and minimax frameworks. The main message here is that\, loosely speaking\, an oracle inequality result implies adaptive minimaxity results over all functional classes which are covered by the (not too rich) family of estimators\, acting in the oracle inequality. We also describe two Bayesian approaches to the adaptation problem: pure Bayes and empirical Bayes approaches. \nTalk II: Bayesian adaptation\, oracle posterior rates \nWe introduce an oracle optimality framework for the Bayesian approach. A statistical model and a family of priors determine the corresponding family of posterior rates. The oracle prior corresponds to the best posterior rate (called the posterior oracle rate). Our goal is to design an adaptive prior to mimic the performance of the oracle prior. We apply the Bayesian oracle approach to the problem of projection estimation of a signal observed in the Gaussian white noise model. 
A proposed family of priors models the projection estimation oracle in the sense that the family of resulting posterior rates essentially coincides with the family of risks of the projection estimators. Under an appropriate hierarchical prior\, we study the performance of the resulting (appropriately adjusted by the empirical Bayes approach) posterior distribution and establish that the posterior concentrates about the true signal with the oracle projection convergence rate. \nTalk III: Lower bounds\, Bayesian model selection and adaptive confidence sets \nWe complement the upper bound results on the posterior rate from the second talk by a lower bound result for the oracle posterior rate. When applying the Bayesian approach to adaptation problems\, besides the original statistical inference problem\, one can consider an attendant problem of a data-based choice of the structural parameter that indexes the model. One can thus regard this attendant problem as the model selection problem. We study implications of the results from the second talk for the model selection problem\, namely\, we propose a Bayes model selector and assess its quality in terms of the so-called false selection probability. At the end we touch upon the problem of construction of adaptive confidence sets by using a Bayesian approach. \n\nChristoph Breunig (Universität Mannheim) \nAdaptive estimation of functionals in nonparametric instrumental regression \nWe consider the problem of estimating the value l(g) of a linear functional\, where the structural function g models a nonparametric relationship in the presence of instrumental variables. We propose a plug-in estimator which is based on a dimension reduction technique and additional thresholding. It is shown that this estimator is consistent and can attain the minimax optimal rate of convergence under additional regularity conditions. 
This\, however\, requires an optimal choice of the dimension parameter m depending on certain characteristics of the structural function g and the joint distribution of the regressor and the instrument\, which are unknown in practice. We propose a fully data-driven choice of m which combines model selection and Lepski’s method. We show that the adaptive estimator attains the optimal rate of convergence up to a logarithmic factor. The theory in this paper is illustrated by considering classical smoothness assumptions and we discuss examples such as pointwise estimation or estimation of averages of the structural function g. \nPRESENTATION \n\nLaurent Cavalier (University Aix-Marseille I) \nTalk I: Inverse problems \nThere exist many fields where inverse problems appear. Some examples are: astronomy (blurred images of the Hubble satellite)\, econometrics (instrumental variables)\, financial mathematics (model calibration of the volatility)\, medical image processing (X-ray tomography) and quantum physics (quantum homodyne tomography). These are problems where we have indirect observations of an object (a function) that we want to reconstruct\, through a linear operator $A$. One needs regularization methods in order to get a stable and accurate reconstruction. We present the framework of statistical inverse problems where the data are corrupted by some stochastic error. This white noise model may be discretized in the spectral domain using Singular Value Decomposition (SVD)\, when the operator $A$ is compact. Several examples of inverse problems where the SVD is known are presented (circular deconvolution\, tomography). We explain some basic issues regarding nonparametric statistics applied to inverse problems. Standard regularization methods are presented (projection\, Landweber\, Tikhonov\,...). 
\nTalk II: Adaptation and oracle inequalities in inverse problems \nSeveral classical statistical approaches\, like minimax risk and optimal rates of convergence\, are presented. Optimal rates of convergence are given for estimating functions in the Sobolev and analytic classes of functions. The notion of optimal rate of convergence leads to some optimal choice of the tuning parameter. However\, these optimal parameters are unachievable since they depend on the unknown smoothness of the function. This leads to more recent concepts like adaptive estimation and oracle inequalities. A data-driven selection procedure of the regularization parameter based on Unbiased Risk Estimation (URE) is presented. Oracle inequalities are obtained for this specific data-driven selection procedure. \nTalk III: Risk hull method \nWe consider the Gaussian white noise model in inverse problems where $A$ is a known compact operator with singular values converging to zero with polynomial decay. The unknown function $f$ is recovered by a projection method using the SVD of $A$\, a method also called truncated SVD or spectral cut-off. The bandwidth choice $N$ of this projection regularization is governed by a data-driven procedure which is based on the principle of risk hull minimization (RHM). This new method may be presented as a penalized empirical risk minimization with a penalty slightly stronger than the usual URE (or Akaike) penalty. We provide oracle inequalities for the mean square risk of this method and we show\, in particular\, that in numerical simulations\, this approach may substantially improve the classical method of unbiased risk estimation. \nPRESENTATION 1 PRESENTATION 2 PRESENTATION 3 \n\nAdam Bull (Cambridge University) \nHonest adaptive confidence bands \nConfidence bands are confidence sets for an unknown function\, containing all functions within some sup-norm distance of an estimator. 
We consider the problem of constructing adaptive confidence bands\, whose width contracts at an optimal rate over a range of Hölder classes. While adaptive estimators exist\, in general adaptive confidence bands do not\, and to proceed we must place further assumptions on the unknown function. We discuss previous approaches to this issue\, and show it is necessary to restrict to fundamentally smaller classes of functions. We then consider the self-similar functions\, whose Hölder norm is similar at large and small scales. We show that such functions may be considered typical functions of a given Hölder class\, and that the assumption of self-similarity is both necessary and sufficient for the construction of adaptive bands. Finally\, we show that this assumption allows us to resolve the problem of undersmoothing\, creating bands which are honest simultaneously for functions of any Hölder norm. \n\nItai Dattner (Eurandom) \nOn deconvolution of distribution functions \nIt is well known that rates of convergence of estimators in deconvolution problems are affected by the smoothness of the error density and the density to be estimated. However\, the problem of distribution deconvolution is more delicate than previously considered. We derive different rates of convergence with respect to the tail behavior of the error characteristic function. We present order-optimal deconvolution estimators\, both for known and unknown error distribution. An adaptive estimator which achieves the optimal rates within a logarithmic factor is developed. Simulation studies comparing the adaptive estimator to other methods are presented and support the superiority of our method. An example with real data is also discussed. Based on joint works with Alexander Goldenshluger and Benjamin Reiser. 
\n\nAlexander Goldenshluger (University of Haifa) \nTalk I: Introduction to adaptive nonparametric estimation by selection of estimators \nIn the first talk we survey the problem of adaptive nonparametric estimation of a univariate regression function. The objective is to construct an optimal in the minimax sense estimator that does not require any prior information on the smoothness of the regression function. Our discussion will concentrate on the methods based on selection of linear estimators from a given collection. These methods originate in the works by Oleg Lepski in the early 1990s. We present corresponding minimax and adaptive minimax results and discuss different extensions. \nTalk II: General procedure for selecting linear estimators \nThe second talk deals with adaptive estimation of multivariate functions from noisy observations. We present a general selection procedure and derive oracle inequalities for the risk of the selected estimator. The proposed selection procedure leads to adaptive minimax estimators in a wide variety of estimation settings. In particular\, the resulting estimators can adapt both to unknown smoothness and structure of the function to be estimated. \nTalk III: Aggregation of estimators \nThe third talk is about the aggregation problem. The goal is\, based on the noisy observations\, to select an estimator from a fixed collection of arbitrary estimators so that the accuracy of the selected estimator is as close as possible to the accuracy of the best estimator in the collection. We present an aggregation scheme that applies to families of arbitrary estimators; it is easily extended to different models and global accuracy measures. We derive oracle inequalities and show that they cannot be improved in the minimax sense. 
\nPRESENTATION \n\nRichard Nickl (University of Cambridge) \nTalk I: Confidence Sets in Nonparametric Statistics \nWe shall give a broad introduction to the confidence problem in nonparametric statistics\, starting with classical results due to Kolmogorov\, Smirnov\, Bickel and Rosenblatt. The focus will be on nonparametric regression\, density and distribution function estimation\, and we will explain some of the mathematical machinery that is needed in the theory\, mostly from empirical process theory. \nTalk II: Adaptive Confidence Sets I -- Confidence Bands \nWe shall discuss the important topic of nonparametric confidence bands\, and explain the theory of adaptation in this case\, where the size of the confidence set is measured in uniform norm. We shall review the classical 'negative' results due to Low and some very recent developments in this field that introduce a 'separation' approach to confidence sets. \nTalk III: Adaptive Confidence Sets II \nWe shall consider the existence of adaptive confidence sets in the most commonly used loss-function in nonparametrics\, $L^2$-loss (MISE). We shall review some nice results from the last decade\, discuss why the situation is substantially different than it is for confidence bands\, and then give a unified explanation of when adaptive confidence sets exist by linking the 'geometry' of the problem with nonparametric testing theory. \n\nRudolf Schenk (Université catholique de Louvain)\n(joint work with Jan Johannes) \nAdaptive local functional linear regression \nWe consider the estimation of the value of a linear functional of the slope parameter in functional linear regression\, where scalar responses are modeled in dependence of random functions. Johannes and Schenk [2010] propose a plug-in estimator which is based on dimension reduction and additional thresholding and show that this estimator can attain the minimax optimal rate of convergence up to a constant. 
However\, this estimation procedure requires an optimal choice of the dimension parameter with regard to certain characteristics of the slope function and the covariance operator of the regressor. As these are unknown in practice\, we investigate a fully data-driven choice of the dimension parameter using a variation of the classical model selection approach. The construction of the proposed estimator involves both an estimated penalized contrast function and an estimated collection of models. We show that this adaptive procedure attains the lower bound for the minimax risk up to a logarithmic factor over a wide range of classes of slope functions and covariance operators. In particular\, our theory covers point-wise estimation as well as the estimation of local averages of the slope parameter. \n\nMaik Schwarz (Université catholique de Louvain)\n(joint work with Jan Johannes) \nAdaptive Estimation in a Gaussian Sequence Model \nAbstract \n\nCatia Scricciolo (Bocconi University\, Italy) \nAdaptive Bayesian density estimation using Dirichlet process Gaussian mixture priors \nWe consider Bayesian nonparametric estimation of smooth densities using infinite Gaussian mixtures. The posterior distribution corresponding to a Dirichlet process Gaussian mixture prior is shown to shrink around the data-generating distribution at a minimax optimal rate\, up to a logarithmic factor\, for any smoothness degree of the sampling density. Thus\, the corresponding Bayes’ estimator is fully rate adaptive. \n\nJakob Söhl (Humboldt-Universität zu Berlin) \nConfidence sets in nonparametric calibration of exponential Lévy models. \nIn this talk we consider statistical inference for exponential Lévy models. We consider Lévy processes with a jump component of finite intensity and absolutely continuous jump distribution. 
In the estimation method the exponent of the Lévy-Khintchine representation is estimated first and then the diffusion coefficient\, the drift and the Lévy measure are estimated. The estimators are based on a cut-off scheme in the spectral domain. To analyze the asymptotic distribution of the estimators we simplify the observation scheme and work with continuous observations given by the Gaussian white noise model. We show that the estimators of the diffusion coefficient\, the drift and the jump intensity are asymptotically normally distributed. We also derive asymptotic normality for the pointwise estimation of the Lévy density and study the joint distribution of these estimators. Together with the choice of undersmoothing cut-off values\, these results on the asymptotic normality of the estimators allow us to construct confidence intervals and confidence sets. \nPRESENTATION \n\nClaudia Strauch (Universität Hamburg) \nSharp adaptive drift estimation in multidimensional diffusion models \nWe consider the problem of adaptively estimating the drift function of a multivariate ergodic diffusion. Exact adaptive estimation procedures are proposed\, both for global and pointwise estimation. The sharp results in particular reflect the influence of the diffusion matrix on the problem of drift estimation. We briefly discuss the problem under specific additional constraints like a single index structure and indicate its behavior in higher dimensions. \n\nBotond Szabo (TU/e)\n(joint work with Aad van der Vaart and Harry van Zanten) \nUnderstanding the asymptotic behaviour of the empirical Bayes method \nIn recent years there has been a huge increase in the use of Bayesian methods in high-dimensional or nonparametric statistical problems. One very popular adaptive Bayesian technique is the empirical Bayes method. The empirical Bayes method is widely used in practice\, for example in ecology\, genomic data analysis\, high-dimensional classification\, revenue sharing and quality assurance. 
Although it has a wide area of applications\, the technique itself does not have a full theoretical underpinning. In my talk I will aim to contribute to the fundamental understanding of this widely used method. In Bayesian nonparametrics it is well known that the performance of a statistical procedure depends crucially on the choice of the prior distribution. Wrong choices can result in a posterior distribution that does not concentrate around the "true" parameter\, or that does contract\, but at a sub-optimal rate. A common approach that helps to avoid this problem is to work with a whole family of prior distributions\, indexed by one or more scaling parameters. Popular adaptive methods for choosing the appropriate values of these hyperparameters are full\, hierarchical Bayes procedures and empirical Bayes methods. We study the latter approach in the context of the Gaussian white noise model and compare its performance to an oracle procedure that uses the optimal\, deterministic scaling that yields the minimax rate of convergence. We prove that in some cases the empirical Bayes method matches the performance of the oracle\, while in other cases it gives a significantly worse contraction rate than the oracle. \nPRESENTATION \n
URL:https://www.eurandom.tue.nl/event/yes-v-adaptation-in-nonparametric-statistics/
END:VEVENT
END:VCALENDAR