BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Eurandom - ECPv4.9.10//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Eurandom
X-ORIGINAL-URL:https://www.eurandom.tue.nl
X-WR-CALDESC:Events for Eurandom
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20190101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20191209
DTEND;VALUE=DATE:20191214
DTSTAMP:20191018T092701Z
CREATED:20190516T110056Z
LAST-MODIFIED:20191016T133946Z
UID:2685-1575849600-1576281599@www.eurandom.tue.nl
SUMMARY:Workshop "Heavy Tails"
DESCRIPTION:Summary\nThe goal of the workshop is to bring together researchers from probability\, statistics\, and various application areas such as computer science\, operations research\, physics\, engineering and finance\, and to learn from each other about the latest developments in theory [covering both stochastic processes and spatial models]\, statistical and simulation algorithms\, and applications. \nSponsors\n \n \nOrganizers\n\n\n\nRemco van der Hofstad\nTU Eindhoven\n\n\nAdam Wierman\nCaltech\n\n\nBert Zwart\nCWI / TU Eindhoven\n\n\n\nSpeakers\n\n\n\nBojan Basrak\nUniversity of Zagreb\n\n\nAyan Bhattacharya\nWroclaw University\n\n\nJose Blanchet\nStanford University\n\n\nAlessandra Cipriani\nTU Delft\n\n\nAaron Clauset\nUniversity of Colorado\n\n\nClaudia Klüppelberg\nTU München\n\n\nAnja Janssen\nKTH\n\n\nDmitri Krioukov\nNortheastern University\n\n\nDaniel Lacker\nColumbia University\n\n\nMarie-Colette van Lieshout\nCWI Amsterdam\n\n\nNelly Litvak\nUniversity of Twente\n\n\nThomas Mikosch\nUniversity of Copenhagen\n\n\nSid Resnick\nCornell University\n\n\nChang-Han Rhee\nNorthwestern University\n\n\nGennady Samorodnitsky\nCornell University\n\n\nJohan Segers\nUCLouvain\n\n\nFiona Sloothaak\nTU Eindhoven\n\n\nClara Stegehuis\nUniversity of Twente\n\n\nCaspar de Vries\nErasmus University Rotterdam\n\n\nNassim Taleb\nNew York University\n\n\nOlivier Wintenberger\nSorbonne Université\n\n\n\nProgramme\nThe workshop will begin:\nMonday December 9\, 10.00\nExpected closing:\nFriday December 13\, 16.00 \n \nAbstracts\nAnja Janssen \nA k-means clustering procedure for extremes\nDimension reduction has become an important topic in statistics and has more recently also been applied in the context of extreme value theory.\nIn this talk\, we start by giving an overview of some approaches that have been pursued in this context so far and continue by discussing how the standard assumption of multivariate regular variation can be used to construct simple and efficient 
 ways to model and describe dependency structures of multivariate extremes. In particular\, we introduce a k-means clustering procedure on the empirical spectral measure that allows for a comprehensive description of "extremal prototypes". We illustrate our method with several data examples.\n(joint work with Phyllis Wan from Erasmus University Rotterdam) \n \nDmitri Krioukov \nPower Loss with Power Laws\nOne common task in network/data science is to make reliable inferences from data\, which is always finite. Perhaps the simplest example: Given a real-world network adjacency matrix\, is the network sparse or dense? It appears not to be widely recognized in network science that this question cannot have a rigorous answer. It is not surprising then that the question of whether a given network is power-law or not has not been rigorously addressed at all\, even though this question is so foundational in the history of network science. \nWe review the state of the art in extreme value statistics\, where power laws are understood as regularly varying distributions that properly formalize the idea in network science that "power laws are straight lines in the loglog scale". There exists a multitude of power-law exponent estimators whose consistent behavior in application to any regularly varying data had been proven long before network science was born. In application to real-world networks these estimators tell us what we already know -- that many of these networks are scale-free. Yet applied to any data these estimators always report some estimates\, and the nature of the infinite-dimensional space of regularly varying distributions is such that these estimates cannot be translated to any rigorous guarantees or hypothesis testing methodologies that would be able to tell whether the data comes from a regularly varying distribution or not. 
 This situation is conceptually no different from the impossibility of telling whether a given finite data set is sparse or dense\, or whether it comes from a finite- or infinite-variance distribution\, or whether it shows that the system has a phase transition. All these questions can be rigorously answered only in the infinite data size limit\, which is never achieved in reality. An interesting big open problem in data science is how and why we tend to make correct inferences about finite data using tools and concepts that are known to work properly only at infinity and whose convergence speed is unknown. \n \nMarie-Colette van Lieshout \nNearest-neighbour Markov point processes on graphs with Euclidean edges\nWe define nearest-neighbour point processes on graphs with Euclidean edges and linear networks. They can be seen as analogues of renewal processes on the real line. We show that the Delaunay neighbourhood relation on a tree satisfies the Baddeley–Møller consistency conditions and provide a characterisation of Markov functions with respect to this relation. We show that a modified relation defined in terms of the local geometry of the graph satisfies the consistency conditions for all graphs with Euclidean edges that do not contain triangles. \n \nGennady Samorodnitsky \nRisk forecasting in the context of time series\nWe propose an approach for forecasting the risk contained in future observations in a time series. We take into account both the shape parameter and the extremal index of the data. This significantly improves the quality of risk forecasting over methods that are designed for i.i.d. 
 observations and over the return level approach.\nWe prove functional joint asymptotic normality of the common estimators of the shape parameter and the extremal index\, based on which the statistical properties of the proposed forecasting procedure can be analyzed.\n(joint work with Xiaoyang Lu) \n \nJohan Segers \nOne- versus multi-component regular variation\nOne-component regular variation refers to the weak convergence of a properly rescaled random vector conditionally on the event that a single given variable exceeds a high threshold. Although the weak limit depends on the variable concerned by the conditioning event\, the various limits are connected through an identity that resembles the time-change formula for regularly varying stationary time series. The formula is most easily understood through a single multi-component regular variation property concerning some (but not necessarily all) variables simultaneously. \nThe theory is illustrated for max-linear models\, in particular recursive max-linear models on acyclic graphs\, and for Markov trees. In the latter case\, the one-component limiting distributions take the form of a collection of coupled multiplicative random walks generated by independent increments indexed on the edges of the tree. Changing the conditioning variable then amounts to changing the directions of certain edges and transforming their increment distributions in a specific way. \nReference:\nSegers\, J. (2019). "One- versus multi-component regular variation and extremes of Markov trees"\, https://arxiv.org/abs/1902.02226. \n \nRegistration\nPlease use this link to register \n \nMore information to follow soon! \n \n \n
URL:https://www.eurandom.tue.nl/event/workshop-heavy-tails-2/
LOCATION:MF 11-12 (4th floor MetaForum Building\, TU/e)
CATEGORIES:Workshop
END:VEVENT
END:VCALENDAR