Universität Bielefeld

Summer Semester 2010

Tuesday, May 4, 2010, 11 am-12 pm - Room: W9-109

Prof. Dr. Renate Meyer
University of Auckland

Some adaptive MCMC algorithms

Different strategies have been proposed to improve the mixing of Markov chain Monte Carlo algorithms. These are mainly concerned with customizing the proposal density of the Metropolis-Hastings algorithm to the specific target density. Various Monte Carlo algorithms have been suggested that use previously sampled states to define a proposal density and thus adapt as they run, hence the name 'adaptive' Monte Carlo.
In the first part of this talk, we look at the crucial problem in applications of the Gibbs sampler: sampling efficiently from an arbitrary univariate full conditional distribution. We propose an alternative algorithm, called ARMS2, to the widely used adaptive rejection sampling technique ARS by Gilks and Wild (1992, JRSSC 42, 337-48) for generating a sample from univariate log-concave densities. Whereas ARS is based on sampling from piecewise exponentials, the new algorithm uses truncated normal distributions and makes use of a clever auxiliary variable technique (Damien and Walker, 2001, JCGS 10, 206-15).
Next we propose a general class of adaptive Metropolis-Hastings algorithms based on Metropolis-Hastings-within-Gibbs sampling. For the case of a one-dimensional target distribution, we present two novel algorithms using mixtures of triangular and trapezoidal densities. These can also be seen as improved versions of the all-purpose adaptive rejection Metropolis sampling algorithm (Gilks et al., 1995, JRSSC 44, 455-72) for sampling from non-log-concave univariate densities. Using a variety of examples, we demonstrate their properties and efficiency and point out their advantages over ARMS and other adaptive alternatives such as the Normal Kernel Coupler.
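The adaptive schemes above all build on the basic Metropolis-Hastings accept/reject step. As a point of reference, here is a minimal non-adaptive random-walk Metropolis sketch for a univariate target; the target density, step size, and iteration count are illustrative assumptions, not the ARMS2 or triangular/trapezoidal proposals of the talk:

```python
import math
import random

def metropolis(log_target, x0, n_iter=5000, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler for a 1-D target.

    log_target: log of the (unnormalized) target density.
    This is the non-adaptive baseline that schemes such as ARMS
    or ARMS2 aim to improve on by tailoring the proposal.
    """
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)  # symmetric random-walk proposal
        lpy = log_target(y)
        # accept with probability min(1, target(y)/target(x))
        if rng.random() < math.exp(min(0.0, lpy - lp)):
            x, lp = y, lpy
        samples.append(x)
    return samples

# Example: standard normal target (up to a constant)
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(draws) / len(draws)
```

An adaptive variant would replace the fixed `step` (or the whole proposal) with one tuned from the sampled history, which is exactly where the mixtures of triangular and trapezoidal densities enter.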


Tuesday, May 18, 2010, 11 am-12 pm - Room: W9-109

Prof. Pin T. Ng, Ph.D.
Northern Arizona University

The Role of Splines in Quantile Regression


Tuesday, June 1, 2010, 11 am-12 pm - Room: W9-109

Prof. Hoben Thomas, Ph.D.
Penn State University

Extensions of Classical Test Theory

A strong assumption in classical reliability theory is that an individual's latent true scores match on two testing occasions. Such an individual is called stable; individuals are unstable otherwise. SURT (stable, unstable reliability theory) assumes a population of stable and unstable individuals. A mixture model enables stable individuals to be identified probabilistically. The probability that an individual is stable is used to form a weighted correlation, r_w, which is the reliability under SURT. r_w is often much higher than the conventional test-retest reliability r, but SURT does not force this: the difference between r_w and r depends on the data. Should all individuals be stable, r = r_w. The classical model is a special case of SURT. Criterion prediction from a classical test theory perspective, whether in correlational validity, regression, or errors-in-variables frameworks, may be improved using the individual probability weights. Real examples are provided, including a dramatic SURT failure. Estimation tools are available in the R package mixtools. Viewed from a SURT perspective, many reliability estimates may be striking underestimates, because the classical assumption that all individuals are stable appears almost never to hold.
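Once per-individual stability probabilities are in hand, the weighted correlation is a direct computation. The sketch below (variable names and data are mine, not from the talk) weights each pair of test scores by its estimated stability probability:

```python
import math

def weighted_correlation(x, y, w):
    """Pearson correlation of (x, y) with per-individual weights w.

    Under SURT, w[i] would be the posterior probability that
    individual i is stable (e.g. from a fitted mixture model);
    with all weights equal this reduces to the ordinary
    test-retest correlation r.
    """
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)
```

Downweighting individuals who are probably unstable is what lets r_w exceed the conventional r when the data contain such individuals, while uniform weights recover the classical special case.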


Tuesday, June 15, 2010, 10-11 am - Room: W9-109

Prof. Dr. Friedrich Leisch
Ludwig-Maximilians-Universität

Variable Selection and Simultaneous Inference in Finite Mixtures of Regression Models

A general framework for simultaneous inference in finite mixtures of generalized linear regression models is presented. The asymptotic normality of the maximum likelihood estimator of all model parameters of interest is used to derive confidence regions and p-values based on a maximum norm for the multivariate t-statistic. This allows all regression coefficients to be tested simultaneously for whether they are zero and hence can be omitted from the model. Another application is testing for constant effects across mixture components. Size and power of the new methods are evaluated using artificial data. A real-world data set on the productivity of PhD students is used to demonstrate the application of the procedures. All methods have been implemented in the R environment for statistical computing and graphics using the package flexmix and are freely available from CRAN.
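The talk derives critical values from the maximum norm of a multivariate t-statistic, which accounts for the correlation between estimates. As a deliberately simplified stand-in, the sketch below tests all coefficients against zero with the conservative Bonferroni bound under a normal approximation; coefficient values and standard errors are illustrative:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simultaneous_zero_tests(coefs, ses, alpha=0.05):
    """Simultaneously test H0: coef_j = 0 for all j.

    Conservative Bonferroni version of the max-norm idea: it
    ignores the correlation between estimates, so the talk's
    multivariate-t critical values would be sharper.
    """
    k = len(coefs)
    z = [abs(c) / s for c, s in zip(coefs, ses)]
    # two-sided Bonferroni-adjusted p-values, capped at 1
    p_adj = [min(1.0, 2.0 * k * (1.0 - normal_cdf(zi))) for zi in z]
    return [(zi, pi, pi < alpha) for zi, pi in zip(z, p_adj)]

# e.g. one clearly nonzero and one negligible coefficient
results = simultaneous_zero_tests([5.0, 0.1], [1.0, 1.0])
```

Coefficients whose adjusted p-value stays above alpha are candidates for omission from the model, which is the variable-selection use of the framework.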


Tuesday, June 29, 2010, 11 am-12 pm - Room: W9-109

Miguel Karlo R. de Jesus
Universität Bielefeld

Data Pre-cleaning through Exponential Smoothing as Antecedent to a Robustified MIDAS Regression

Classical regression analysis entails the use of explanatory and response variables with the same sampling frequency. However, there are instances where the regressors are observed more frequently than the regressand, producing a mixed process (e.g. regressing quarterly GDP on monthly indicators). To handle this problem, Ghysels, Santa-Clara and Valkanov introduced the Mixed Data Sampling (MIDAS) regression model. Here, the high-frequency data are projected linearly onto the response, where the projection is characterized by a high-frequency lag polynomial.
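The high-frequency lag polynomial is commonly parameterized parsimoniously, for instance with exponential Almon weights. The sketch below illustrates that projection step; the parameterization, parameter values, and data are illustrative assumptions, since the abstract does not fix a particular polynomial:

```python
import math

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights, a common MIDAS
    parameterization (illustrative; not prescribed by the talk)."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(1, n_lags + 1)]
    s = sum(raw)
    return [r / s for r in raw]

def midas_regressor(high_freq, weights):
    """Collapse the last len(weights) high-frequency observations
    into one low-frequency regressor value via the lag polynomial."""
    lags = high_freq[-len(weights):][::-1]  # most recent observation first
    return sum(w * x for w, x in zip(weights, lags))

# three monthly lags feeding one quarterly regressor value
w = exp_almon_weights(0.1, -0.05, 3)
x_q = midas_regressor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], w)
```

Because the weights depend on only two parameters regardless of the number of lags, the quarterly regression stays parsimonious even with many monthly lags.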
The contribution to this relatively new area is a robustifying procedure that "pre-cleans" the data before the MIDAS regression. In this step, outliers are identified and their effect downweighted, to improve parameter estimation and forecasting in the presence of extreme observations. The data are "pre-cleaned" through robust exponential smoothing, as proposed by Gelper, Fried and Croux.
Through simulations, the classical MIDAS model is compared to four new robustified models with different "pre-cleaning" schemes: non-robust and robust univariate exponential smoothing, and non-robust and robust multivariate exponential smoothing. In addition, a new method is proposed that enables the application of multivariate exponential smoothing to mixed-frequency data processes.
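As a rough illustration of the pre-cleaning idea (not the exact Gelper-Fried-Croux recursion; the smoothing parameter, clipping constant, and fixed scale here are assumptions), each observation can be clipped against its current smoothed level before entering the update:

```python
def precleaned_exponential_smoothing(series, alpha=0.3, k=2.0, scale=1.0):
    """Simple exponential smoothing with Huber-type outlier clipping.

    Each observation is compared with the current smoothed level;
    values deviating by more than k * scale are clipped to that
    band ("pre-cleaned") before the level update, limiting the
    influence of extreme observations on the smoothed series.
    A genuinely robust version would also estimate `scale`
    recursively rather than keep it fixed.
    """
    level = series[0]
    cleaned = [level]
    for y in series[1:]:
        lo, hi = level - k * scale, level + k * scale
        y_clean = min(max(y, lo), hi)        # clip to the plausible band
        level = alpha * y_clean + (1 - alpha) * level
        cleaned.append(y_clean)
    return cleaned

# an outlier is pulled back toward the local level before smoothing
clean = precleaned_exponential_smoothing([1.0, 1.1, 0.9, 10.0, 1.0])
```

The MIDAS regression is then run on the cleaned series, so the lag-polynomial estimates are no longer driven by a few extreme high-frequency observations.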


Marc Vierhaus
Universität Bielefeld

Separating Retest and Age Effects in the Longitudinal Assessment of Internalizing Behavior in Childhood and Adolescence

Textbooks consistently cite retest effects as a serious shortcoming of longitudinal studies; they refer to the fact that participants' response behavior can change merely because at least one prior assessment has already taken place. Theoretically, this influence is undisputed, but empirically it has hardly been examined. The study presented here addresses the question of how the assessment design (longitudinal versus cross-sectional) influences the measurement of internalizing behavior. To this end, two cohorts (432 second-graders and 366 fourth-graders) were surveyed once a year over a period of three years in a cohort-sequential design using the Youth Self-Report. The data from both cohorts were combined and analyzed on the basis of latent growth curve models (LGM). In parallel, corresponding data were collected in a cross-sectional study covering the same age range (second to seventh grade). A comparison of the two resulting trajectories suggests that, for girls, the assessment design does indeed influence the estimated trajectory of internalizing behavior.
