Universität Bielefeld

Winter semester 2017/18

Tuesday, 17.10.2017, 12-13 h - Room: W9-109

Dr. Takis Besbeas
School of Mathematics, Statistics & Actuarial Science, University of Kent

Exact integrated population modeling

Integrated population modelling (IPM) is the current state-of-the-art approach for estimating population dynamics when data at both the population and the individual level are available on members of the same wild animal population. Several methods can be used to fit the models, including methods based on the Kalman filter (Besbeas et al., 2002) and Bayesian techniques (Brooks et al., 2004). IPM remains an active area of research, with recent papers providing procedures for determining goodness of fit (Besbeas & Morgan, 2014), distinguishing between process and observation variances (Besbeas & Morgan, 2017) and model selection (Besbeas et al., 2015). We provide a brief review of IPM in the first part of the talk. One advantage of the methodology is the estimation of important demographic parameters for which there is no direct survey information. Recently, Abadi et al. (2010) and Schaub & Fletcher (2015) estimated immigration into a single population using IPM based on a Bayesian approach. It is these papers which motivate the second part of the talk. For the application considered, which is typical for short-lived bird species, we show that the proposed population models can be reduced to equivalent scalar state-space models (SSMs), with a consequent simplicity of exposition. Furthermore, we show how to use efficient hidden Markov modelling machinery to fit the models by maximum likelihood, which provides the first exact IPM analysis of its kind and avoids the complications of prior selection and sensitivity. We illustrate the theory using both real and simulated data, and discuss extension to higher SSM dimensions.
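The Kalman-filter route to fitting such models (Besbeas et al., 2002) rests on the prediction-error decomposition of the likelihood. Below is a minimal sketch for a scalar linear Gaussian state-space model with simulated data; the function, parameter values and data are illustrative assumptions, not material from the talk.

```python
import numpy as np

def kalman_loglik(y, phi, sigma_proc, sigma_obs, x0=0.0, p0=1.0):
    """Log-likelihood of the scalar model x_t = phi*x_{t-1} + eta_t,
    y_t = x_t + eps_t, via the Kalman filter prediction-error decomposition."""
    x, p = x0, p0
    ll = 0.0
    for yt in y:
        x_pred = phi * x                       # state prediction
        p_pred = phi ** 2 * p + sigma_proc ** 2
        v = yt - x_pred                        # prediction error
        f = p_pred + sigma_obs ** 2            # prediction-error variance
        ll += -0.5 * (np.log(2 * np.pi * f) + v ** 2 / f)
        k = p_pred / f                         # Kalman gain
        x = x_pred + k * v                     # filtered state
        p = (1 - k) * p_pred
    return ll

# simulate a short series and evaluate the likelihood at the true parameters
rng = np.random.default_rng(1)
n = 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 0.5)
y = x + rng.normal(0, 1.0, n)
ll = kalman_loglik(y, 0.8, 0.5, 1.0)
print(ll)
```

In an IPM the same decomposition would be evaluated jointly with the individual-level likelihood, and maximizing over the parameters numerically yields the maximum likelihood estimates.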


Tuesday, 07.11.2017, 12-13 h - Room: W9-109

Niels Aka
DIW Berlin

Using Model Confidence Sets for Forecasting and Impulse Response Estimation and the Value of Model Averaging

This talk will outline the model confidence set (MCS) procedure and its value for forecasting and impulse response estimation. Model confidence sets quantify the uncertainty surrounding model choice and therefore usually include more than one model in finite samples. In practice, it may be unclear how to formally proceed with a particular analysis that takes model uncertainty into account. We suggest using all of the models in the estimated MCS in any subsequent analysis by averaging either across models or across the final quantities of interest (e.g. forecasts). We employ different weighting schemes for averaging, in particular equal weights and weights given by jackknife model averaging. Weighted averaging is compared to classical model selection, shrinkage estimation and simple jackknife model averaging by computing multi-step-ahead forecasts and impulse responses. In Monte Carlo simulations, applying jackknife model averaging to model confidence sets turns out to be beneficial in small samples and robust in larger samples, where using the Schwarz criterion has great merits. In an empirical exercise, the procedures are applied and compared using 143 U.S. macroeconomic time series.
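The averaging step can be sketched as follows: fit several candidate models, keep those a model confidence set would retain, and combine their forecasts with equal weights. The AR-forecast helper and the retained lag orders below are illustrative assumptions (the actual MCS is obtained by the sequential testing procedure of Hansen et al., not hard-coded):

```python
import numpy as np

def ar_forecast(y, p, h=1):
    """OLS-fit an AR(p) with intercept and iterate an h-step-ahead forecast."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    hist = list(y)
    for _ in range(h):
        x = np.r_[1.0, hist[-1:-p - 1:-1]]   # [1, y_{t-1}, ..., y_{t-p}]
        hist.append(x @ beta)
    return hist[-1]

# simulated AR(1) data
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + rng.normal()

mcs_lags = [1, 2, 3]            # lag orders retained by a (hypothetical) MCS
fcasts = [ar_forecast(y, p) for p in mcs_lags]
avg_forecast = np.mean(fcasts)  # equal-weight model averaging
print(avg_forecast)
```

Jackknife model averaging would replace the equal weights with weights chosen to minimize a leave-one-out cross-validation criterion.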


Tuesday, 14.11.2017, 12-13 h - Room: W9-109

Jan Lipovsek
Universität Bielefeld

The "hot hand" in basketball - NBA field-goal data analysed with hidden Markov and state-space models

For more than 30 years, researchers have debated the validity of the widespread belief that a streak of hits by a basketball player signals a temporarily elevated level of performance, referred to as the "hot hand". The main argument against the "hot hand" is that hitting streaks can be interpreted as chance products of a constant Bernoulli process. The simple Bernoulli model, although it has so far not been rejected by classical tests, has repeatedly been criticised as implausible for field goals in basketball. This master's thesis presents two alternative models that can in principle capture the "hot hand" phenomenon: a hidden Markov model (HMM) with two states and a model with a continuous state space (SSM). The models were fitted to the field-goal data of the last five NBA seasons by numerical maximum likelihood estimation. The more than 44,000 cases in which a player attempted more than ten field goals in a game were treated as samples of a time series. In addition, individual models were estimated for five of the best NBA players. Both the HMM and the SSM are presented with and without two composite covariates containing information on shooting distance and other factors affecting shot difficulty. The resulting HMMs and SSMs provide no evidence of a "hot hand" effect. Nevertheless, both models fit considerably better (in terms of AIC) than reference models assuming independence. The SSM proves to be the most adequate model for the data examined. The fluctuation of a player's performance within a game, however, does not appear to be decisive for the model parameters, as other factors, such as an increased hit probability after offensive rebounds, carry more weight. To advance the "hot hand" debate, the SSM would need to be augmented with a better model of shot difficulty, in particular of defensive behaviour.
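A two-state Bernoulli HMM of the kind described can be evaluated with the standard forward algorithm. A minimal sketch with made-up shot data and hypothetical parameters (the thesis fits these, plus covariates, by numerical maximum likelihood over entire seasons of data):

```python
import numpy as np

def bernoulli_hmm_loglik(shots, p_hit, trans, init):
    """Forward-algorithm log-likelihood of a two-state Bernoulli HMM:
    state 0 = 'cold', state 1 = 'hot', each with its own hit probability."""
    # per-shot emission probabilities under each state, shape (T, 2)
    probs = np.where(np.array(shots)[:, None] == 1, p_hit, 1 - p_hit)
    alpha = init * probs[0]
    c = alpha.sum()
    ll = np.log(c)
    alpha /= c                      # scale to avoid numerical underflow
    for t in range(1, len(shots)):
        alpha = (alpha @ trans) * probs[t]
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c
    return ll

shots = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]       # 1 = hit, 0 = miss (made up)
p_hit = np.array([0.35, 0.60])               # hypothetical hit probabilities
trans = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothetical transition matrix
init = np.array([0.5, 0.5])
ll = bernoulli_hmm_loglik(shots, p_hit, trans, init)
print(ll)
```

A "hot hand" would show up as a fitted hot-state hit probability clearly above the cold-state one together with persistent state dynamics; comparing the maximized likelihood against a constant-Bernoulli fit (e.g. via AIC) mirrors the model comparison in the thesis.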


Tuesday, 28.11.2017, 12-13 h - Room: W9-109

Dr. Yuanyuan Li
Universität Bielefeld

Long VAR approximation in I(2) context: theory and simulations

In this paper we extend the asymptotic theory for long VAR approximations to I(2) processes. The analysis is mainly performed in the framework of a triangular representation admitting an infinite-order autoregressive representation subject to summability conditions on the autoregressive coefficients. The results, however, also have implications for more general data-generating processes. Results similar to those in the I(1) case are obtained, including the consistency of the estimated coefficients as well as their asymptotic distributions for properly chosen lag lengths. Based on these results, tests for linear restrictions on the coefficients can be derived. The results are also the starting point for the derivation of rank tests and the asymptotic distributions of reduced-rank estimators. Furthermore, a detailed simulation study examines the finite-sample properties of rank testing procedures for specifying the integer parameters (the two cointegration ranks involved) in long VAR approximations for I(2) processes.
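The basic ingredients, an I(2) data-generating process and an autoregression whose lag length grows slowly with the sample size, can be sketched as follows. The growth rate and all numbers are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
eps = rng.normal(size=T)
y = np.cumsum(np.cumsum(eps))    # an I(2) process: (1 - L)^2 y_t = eps_t

p = int(T ** (1 / 3))            # lag length growing slowly with T (illustrative)
Y = y[p:]
X = np.column_stack([y[p - i - 1 : T - i - 1] for i in range(p)])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
# since (1 - L)^2 y = eps, i.e. y_t = 2 y_{t-1} - y_{t-2} + eps_t, the first
# two coefficients should come out roughly 2 and -1, the remaining ones near 0
print(np.round(beta, 2))
```

The theory in the paper concerns exactly this setting: consistency and asymptotic distributions of such long-autoregression estimates when the true process is I(2) with an infinite-order AR representation.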


Tuesday, 12.12.2017, 12-13 h - Room: W9-109

Prof. Bernard Hanzon
Department of Mathematics, University College Cork

On global optimization of the likelihood function for linear time series models

This talk is based on joint work with Wolfgang Scherrer (TU Vienna). System identification algorithms for linear Gaussian time series models are well established. However, as far as we know, the problem of finding the maximum likelihood estimator, i.e. the global optimum of the likelihood function, is still not fully solved. Here we describe a geometrical approach to this problem. We use an innovations form of the model in state-space form, whose parameters are assumed to be time-invariant; however, no assumptions on stability or minimum-phase properties are made. First the likelihood function is optimized partially with respect to some of the parameters. What results is a criterion function that depends only on the row space of a finite reachability matrix. The closure of this family turns out to be a compact differentiable manifold without boundary, for which we obtain an explicit finite atlas. The idea is then to construct an extension of the likelihood function to this compact space. If the extension is continuous, it attains a maximum; if it is also Lipschitz with a known Lipschitz constant, the maximum value can be found (at least in principle) with arbitrary precision by using appropriate subdivisions of the compact space. However, we encounter some "obstacles" when trying to work out this approach. One of them is the surprising (to us at least) result that the maximum likelihood problem for our model class is ill-posed, as the supremum of the likelihood is plus infinity! We propose a relaxation of the problem to overcome the obstacles encountered and will discuss its effects.
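The subdivision idea, namely evaluating the function on a compact set and discarding regions whose Lipschitz upper bound cannot beat the incumbent, can be illustrated in one dimension. This is a toy sketch only; the talk works on a compact manifold of row spaces, not on an interval:

```python
def lipschitz_maximize(f, a, b, L, tol=1e-3):
    """Locate the global maximum of f on [a, b] to within roughly tol,
    assuming f is Lipschitz with constant L: subdivide only those intervals
    whose upper bound (midpoint value + L * half-width) could still beat
    the best value found so far."""
    best_x, best_f = a, f(a)
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        fmid = f(mid)
        if fmid > best_f:
            best_x, best_f = mid, fmid
        # prune intervals that cannot contain a meaningfully better maximum
        if fmid + L * half > best_f + tol and half > tol / (2 * L):
            stack.append((lo, mid))
            stack.append((mid, hi))
    return best_x, best_f

# toy criterion with its maximum at x = 0.7; |f'| <= 2.6 on [0, 2], so L = 4 works
x_star, f_star = lipschitz_maximize(lambda x: -(x - 0.7) ** 2, 0.0, 2.0, L=4.0)
print(x_star, f_star)
```

The guarantee is the same in principle on the compact manifold: a continuous Lipschitz extension plus a finite atlas makes such a bounding-and-subdividing search possible, which is exactly where the ill-posedness obstacle (an infinite supremum) intervenes.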


Tuesday, 09.01.2018, 12-13 h - Room: W9-109

Martina Zaharieva
Westfälische Wilhelms-Universität Münster

Bayesian semiparametric multivariate stochastic volatility with an application to international volatility co-movements

The talk is based on joint work with M. Trede and B. Wilfling (University of Münster). In this paper, we establish a Cholesky-type multivariate stochastic volatility estimation framework in which we let the innovation vector follow a Dirichlet process mixture, enabling us to model highly flexible return distributions. The Cholesky decomposition allows parallel univariate process modelling and creates potential for estimating high-dimensional specifications. We use Markov chain Monte Carlo methods for posterior simulation and predictive density computation. We apply our framework to a five-dimensional stock-return data set and analyse international volatility co-movements among the largest stock markets.
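The role of the Cholesky decomposition, turning a correlated return vector into orthogonal innovations that can then be modelled univariately, can be sketched for a fixed covariance. In the actual stochastic volatility framework the covariance is time-varying; the mixing matrix and sample size here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 1000, 3
# correlated 'returns' generated from a fixed lower-triangular mixing matrix
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.3, 0.4, 1.0]])
y = rng.normal(size=(T, k)) @ A.T

# Cholesky factor of the sample covariance: y_t = L e_t with uncorrelated e_t
S = np.cov(y, rowvar=False)
L = np.linalg.cholesky(S)
e = np.linalg.solve(L, y.T).T    # orthogonalised innovations

corr = np.corrcoef(e, rowvar=False)
print(np.round(corr, 2))         # ≈ identity matrix
```

Because the transformed innovations are uncorrelated, each component can be given its own univariate volatility process, which is what makes the parallel, potentially high-dimensional estimation feasible.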


Tuesday, 30.01.2018, 12-13 h - Room: W9-109

Current research areas at ZeSt
