KTH Mathematics   


Grandell

Paul Embrechts

Professor of Mathematics at the ETH Zurich specialising in actuarial mathematics and quantitative risk management

Statistics and Quantitative Risk Management

Abstract: In Quantitative Risk Management, the aggregation of risks across different risk classes is of great importance. The underlying mathematical problems are very much related to the theory of risk measures (coherence, convexity, distortion, ...). Also related are the concepts of diversification and concentration. In this talk I will review some of the applied issues and discuss the theory necessary to understand the scope and limitations of the various approaches available in the literature.
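As a rough numerical illustration of the aggregation and diversification question described above (a sketch only; the Pareto loss distributions, confidence level and sample size are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical loss portfolios (heavy-tailed Pareto losses, chosen
# purely for illustration).
n = 100_000
L1 = rng.pareto(3.0, n)   # losses of risk class 1
L2 = rng.pareto(3.0, n)   # losses of risk class 2

def var(losses, alpha=0.99):
    """Value-at-Risk: the alpha-quantile of the empirical loss distribution."""
    return np.quantile(losses, alpha)

def es(losses, alpha=0.99):
    """Expected Shortfall: mean loss beyond VaR (a coherent risk measure)."""
    v = var(losses, alpha)
    return losses[losses >= v].mean()

# Diversification benefit: stand-alone risks summed minus risk of the aggregate.
div_var = var(L1) + var(L2) - var(L1 + L2)
div_es = es(L1) + es(L2) - es(L1 + L2)
print(f"VaR diversification benefit: {div_var:.3f}")
print(f"ES  diversification benefit: {div_es:.3f}")
```

Expected Shortfall is subadditive, so its diversification benefit is non-negative for any dependence structure; VaR carries no such guarantee, which is one of the theoretical points at issue when comparing risk measures.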

Thomas Mikosch

Professor at the Laboratory of Actuarial Mathematics, University of Copenhagen

Point process techniques in non-life insurance models

Point processes are rather natural objects for modelling in non-life insurance. Due to the discrete structure of claim arrivals, claim sizes, delay in reporting and claim settlement, marked point processes, in particular Poisson random measures, are nicely suited for describing the complicated dynamics in an insurance business in a mathematically tractable way. The point process approach has been advocated by Norberg for many years, in particular in his influential article in ASTIN Bulletin in 1993. The aim of the talk is to recall some of the basic theory on point processes, to apply it in some simple non-life situations and to show how the method can be made to work for the building of reserves.
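A minimal simulation sketch of the marked-point-process view of claims described above (the intensity, claim-size and delay distributions are illustrative assumptions, not taken from the talk): each arrival carries a mark consisting of a claim size and a reporting delay, and the claims incurred but not yet reported (IBNR) at a given time drive reserving.

```python
import numpy as np

rng = np.random.default_rng(1)

# Homogeneous Poisson arrivals on [0, horizon] with i.i.d. marks.
lam, horizon = 10.0, 5.0                            # intensity per year, years
n = rng.poisson(lam * horizon)                      # number of claims
arrival = np.sort(rng.uniform(0.0, horizon, n))     # arrival times, given n
size = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # claim sizes (marks)
delay = rng.exponential(0.5, size=n)                # reporting delays (marks)

# IBNR at time t: claims that have occurred but are not yet reported.
t = horizon
ibnr = (arrival <= t) & (arrival + delay > t)
print(f"{n} claims occurred, {ibnr.sum()} still unreported at t = {t}")
```

Conditioning on the claim count and drawing arrival times uniformly uses the order-statistics property of the homogeneous Poisson process; the (arrival, size, delay) triples form the points of a Poisson random measure.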

The talk is based on my lecture notes for the Master students of Actuarial Mathematics at the University of Copenhagen.

Gunnar Andersson

Executive vice president Folksam, CEO KPs pensionsstiftelse

Biometric assumptions in life insurance with focus on the Swedish market

Within life insurance there is a long tradition of dealing with biometric risks of different types. We will here discuss mainly mortality and disability risks. Typically, the point at which one first calculates the different risks, in order to establish the cost of a specific life insurance product, lies many years before a possible claim is registered. Thus, time is an important issue, since a continuous stream of medical achievements during a life cycle has a major impact on the risks under consideration.

We will consider the assessment of different types of risks, namely mortality and disability risks. Crude failure rates of the biological systems we are monitoring are recorded and then smoothed with different smoothing techniques.

Different smoothing techniques have different drawbacks. For instance, not including time trends in mortality models has proven to be less efficient when modelling mortality risks. As a consequence, the estimates have become less accurate, with various financial effects as a result.
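The smoothing of crude rates mentioned above can be illustrated with a toy example (the Gompertz mortality law, exposure level and ages are assumptions for illustration, not the speaker's method): crude age-specific death rates computed from deaths and exposure are graduated by a log-linear least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Crude death rates by age from simulated deaths and exposure, smoothed by
# fitting a Gompertz law log m(x) = a + b*x by least squares.
ages = np.arange(40, 91)
true_m = 0.0001 * np.exp(0.095 * (ages - 40))     # assumed underlying law
exposure = np.full(ages.shape, 100_000.0)         # person-years per age
deaths = rng.poisson(true_m * exposure)           # observed death counts
crude = np.clip(deaths / exposure, 1e-6, None)    # crude rates (avoid log 0)

b, a = np.polyfit(ages, np.log(crude), 1)         # Gompertz fit on log scale
smoothed = np.exp(a + b * ages)
print(f"fitted log-slope b = {b:.4f} (true value 0.095)")
```

A static fit like this is exactly the kind of model the abstract warns about: it ignores the time trend in mortality, so in practice one would add a calendar-time term (as in, for example, Lee-Carter-type models).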

When it comes to disability assumptions, the smoothing technique in place has largely been adapted to the legislation underlying each product sold. This is an old technique, and one that is less appropriate today. We will discuss how it can be adjusted to a more modern situation.

Søren Asmussen

Professor of Applied Probability at Aarhus University

Limit theorems for failure recovery in computing and data transmission

Abstract: A task such as the execution of a computer program or the transmission of a file has an ideal execution time T with distribution F. However, failures may cause the actual execution time X to be different.

Various schemes for failure recovery have been considered, among which the most notable are REPLACE, RESUME, RESTART and checkpointing. The first two are fairly easy to study, while RESTART has long resisted detailed analysis. Here the task has to start over if a failure occurs before completion; say the failure time is U with distribution G, and multiple failures may occur.

Based upon Cramér-Lundberg theory, we show that the RESTART total time X has an exponential tail if F has finite support. Otherwise, X is always heavy-tailed, and the tail behaviour is quantified under various assumptions on F and G.
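The RESTART scheme is simple to simulate, which makes the dichotomy above easy to see empirically. The sketch below (with assumed exponential failure times and illustrative parameters, not the talk's exact setting) compares a bounded task, whose total time has an exponential tail, with an exponentially distributed task, whose total time is heavy-tailed:

```python
import numpy as np

rng = np.random.default_rng(3)

def restart_time(task, mean_fail, rng):
    """Total time under RESTART: the task restarts from scratch on every
    failure before completion (failure times assumed exponential)."""
    total = 0.0
    while True:
        u = rng.exponential(mean_fail)    # time until the next failure
        if u >= task:                     # task completes before failing
            return total + task
        total += u                        # wasted work; start over

# Finite-support F (task time fixed at 1) versus unbounded F (exponential).
bounded = [restart_time(1.0, 2.0, rng) for _ in range(20_000)]
unbounded = [restart_time(rng.exponential(1.0), 2.0, rng) for _ in range(20_000)]
print(f"bounded mean {np.mean(bounded):.2f}, "
      f"unbounded mean {np.mean(unbounded):.2f}, "
      f"unbounded max {np.max(unbounded):.1f}")
```

With these parameters the unbounded case has a regularly varying (Pareto-like) tail, so its sample maximum is dominated by a few extreme restart cascades, whereas the bounded case stays well behaved.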

Two related settings are also studied. In parallel computing, the total task time becomes the maximum of independent copies of X. Classical extreme value theory combined with the RESTART results immediately gives the order of the total task time in a simple i.i.d. setting, but we also look into some non-classical triangular array extreme value problems. In checkpointing, the task is split into subtasks, each of which behaves as in the RESTART setting. The tail of the total task time is given for a variety of specific checkpointing models.

Hanspeter Schmidli

Professor for Stochastics/Actuarial Mathematics at the Institute of Mathematics of the University of Cologne

Cox Risk Processes and Ruin

Abstract: More than 100 years ago Filip Lundberg introduced the classical risk model and estimated the probability of ruin. Later, Harald Cramér generalised Lundberg's results. These results apply to the small claims case. This model still serves as a skeleton for more modern models in actuarial science. A first generalisation goes back to Hans Ammeter, where the claim intensity becomes random in a period of fixed length. The intensities in different periods are independent. Another generalisation goes back to Tomas Björk and Jan Grandell. In their model the length of the periods is also random, and length and intensity may depend on each other. More generally, one can model the intensity as a general positive random process and the claim number process, conditioned on the intensity, as an inhomogeneous Poisson process with the given intensity. In this talk we will review some of these models and give asymptotic results for the ruin probabilities in both the small and the large claim case.
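An Ammeter-type model of the kind described above is straightforward to explore by Monte Carlo. The sketch below estimates a finite-horizon ruin probability under illustrative assumptions (gamma-distributed intensity per unit-length period, exponential claim sizes, and the chosen premium rate and initial capital are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

def ruined(u, c, periods, rng):
    """True if the surplus u + c*t - S(t) drops below 0 within the horizon.
    Each period of length 1 draws a fresh claim intensity; given the
    intensity, claims form a Poisson process with exponential sizes."""
    surplus = u
    for _ in range(periods):
        lam = rng.gamma(2.0, 1.0)              # random intensity this period
        n = rng.poisson(lam)                   # number of claims in the period
        times = np.sort(rng.uniform(0.0, 1.0, n))
        sizes = rng.exponential(1.0, n)
        t_prev = 0.0
        for t, x in zip(times, sizes):
            surplus += c * (t - t_prev) - x    # premiums earned, claim paid
            t_prev = t
            if surplus < 0:
                return True                    # ruin at a claim epoch
        surplus += c * (1.0 - t_prev)          # premiums until period end
    return False

# E[intensity] = 2, E[claim] = 1, premium c = 3: a 50% safety loading.
psi = np.mean([ruined(5.0, 3.0, 50, rng) for _ in range(5_000)])
print(f"estimated finite-horizon ruin probability: {psi:.3f}")
```

Since the surplus only decreases at claim epochs, checking for ruin at those epochs suffices; the mixing over the random intensity is what makes the aggregate claim process over-dispersed relative to the classical Poisson model.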

Tomas Björk

Professor of Mathematical Finance at Stockholm School of Economics

Time Inconsistent Stochastic Control

Abstract: In this talk we will present some recent work on non-classical stochastic control problems which, in different ways, are "time inconsistent" in the sense that they cannot be treated by dynamic programming. We present a game-theoretic approach to such problems and derive extended versions of the Hamilton-Jacobi-Bellman equation, in terms of systems of PDEs, for the determination of the associated subgame perfect Nash equilibrium strategies. We also present applications from finance.

Joint work with Agatha Murgoci.

webmaster: Filip Lindskog
Updated: 13/02-2008