May 9, 2010

No Office Hours this Week

I will not be holding office hours during finals week (May 10 and May 12).

Best of luck with all your end-of-semester activities!

April 27, 2010

More HW hints, Ch4 Exercise 16

The idea is this: how well is each of these three models able to accommodate the outliers present in the data? One way to assess this is by considering the posterior distribution of the width of an interval in the *likelihood.* Some models will shift the mean up to reach the outlier, while others will have tails heavy enough to accommodate it through the variance.

So for each model's population-level sigma parameter (i.e., sigma[k] = 1/sqrt(tau0[k]) for k=1,2,3), we consider the posterior distribution of the width of a 95% interval in the likelihood that model induces for the data.

To get this posterior, we use the CODA samples of sigma[k]. For each Gibbs draw, compute the .025 and .975 quantiles of the associated likelihood (normal, t, or DE) and take their difference; this is a single posterior sample of the likelihood interval width. Repeat for all the sigma[k] Gibbs draws and you will have a histogram of the posterior of this quantile difference under the k-th model. Then you can compare across models.
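To make the recipe concrete, here is a rough R sketch. All names are made up: `sigma.k` stands in for the Gibbs draws of one model's sigma, and the DE width assumes a Laplace likelihood with scale b.

```r
# Width of a central 95% likelihood interval as a function of the scale draw.
# These closed forms follow from the quantile functions of each likelihood.
width.normal <- function(sigma) (qnorm(0.975) - qnorm(0.025)) * sigma
width.t      <- function(sigma, nu) (qt(0.975, nu) - qt(0.025, nu)) * sigma
width.de     <- function(b) 2 * log(20) * b  # Laplace: 1 - 0.5*exp(-q/b) = 0.975

sigma.k <- rgamma(1000, 2, 2)  # stand-in for the CODA draws of sigma[k]
w <- width.normal(sigma.k)     # one interval width per posterior draw
hist(w, freq = FALSE)          # the posterior of the interval width
```

Repeating this with `width.t` and `width.de` (matching each model's likelihood and degrees of freedom) gives the three posteriors to compare.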

Materials for Joint Longitudinal-Survival lecture

Here are the materials I will be using today:

April 22, 2010

MT2 Key

Here's a key for the second midterm.

April 21, 2010

HW Hint, Ch4 Ex 16

Several students have asked about how to do the comparison across models for the stack loss data, versus the data in Example 4.7, where each model has a single parameter, theta.

In the stack loss models, you have some regression parameters (betas) and also some fitted medians (mu_j) for each year. I would say, take a look at the betas, say beta_1, in each model. Also look at a couple of the mu's, say mu_21 because we know that is an outlying year, and one other mu_j from a non-outlying year for comparison.
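As a sketch of how the bookkeeping might look (all names hypothetical; in practice the draws come from your CODA output rather than rnorm):

```r
# Posterior summaries for a few nodes from one model's draws.
summarize <- function(draws) c(mean = mean(draws),
                               quantile(draws, c(0.025, 0.5, 0.975)))

post1 <- matrix(rnorm(3000), 1000, 3,
                dimnames = list(NULL, c("beta.1", "mu.21", "mu.5")))
tab <- t(sapply(colnames(post1), function(nd) summarize(post1[, nd])))
round(tab, 2)  # one row per node; build one such table per model
```

Lining up one such table per model makes the across-model comparison easy to eyeball.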

April 20, 2010

Here is a grade list for the points accumulated through the second exam, sorted by the last 4 digits of your student ID. The last column gives your weighted grade not including the final project, which is worth 30%. Because there was some extra credit available on the second exam, some of the percentages may be > 1.

UPDATE: A couple grades changed, so the file is updated accordingly.

Here's the distribution (stem and leaf) out of 35 possible (or 42 with extra credit):

```
2 | 14
2 | 6
3 | 011122223344
3 | 5579
```

I'll post a solution soon.

April 8, 2010

MCMC Sampling Stuff

I love reading Andrew Gelman's blog; it's a great mixture of serious statistical thought, practical data analysis questions, random personal musings, and links to the rest of the stats blogosphere. Gelman is a very smart, famous Bayesian and advisor to our own Cavan Reilly.

He recently wrote two posts about MCMC sampling that you might find interesting/useful.

One is some second-hand skepticism about a recent paper (published in the first post-Brad issue of Bayesian Analysis) on a new Metropolis sampler that doesn't require tuning.

The other is an extended reaction to a talk by Charlie Geyer (from the UofM Stats Department) about convergence issues.

Bug fixes

Hey guys: a couple of bug fixes. First, there was a mistake in my "outline" of the Metropolis code: I had the inequality in the acceptance-ratio logic going the wrong direction. The corrected code follows:

```
## Sample Metropolis-Hastings algorithm
# function to compute log(h(theta))
log.h <- function(theta, data){

}

met <- function(data, initials, NREP){
  # Constants
  n <- dim(data)[1]
  p <- length(initials)
  # Empty matrix to hold samples
  chain <- matrix(NA, NREP + 1, p)
  # Initialize the chain
  chain[1,] <- initials
  for (i in 2:(NREP + 1)){
    # Parameters of the proposal
    meen <-
    var <-
    # Draw a proposal value (rnorm takes the SD, not the variance)
    thetastar <- rnorm(1, meen, sqrt(var))
    # Compute the log acceptance ratio; note the trailing operator so
    # R knows the expression continues on the next line
    logr <- log.h(thetastar, data) + dnorm(chain[i-1,], meen, sqrt(var), log=T) -
      log.h(chain[i-1,], data) - dnorm(thetastar, meen, sqrt(var), log=T)
    # Decide whether to accept or reject
    if (logr > 0){
      chain[i,] <- thetastar
    } else if (logr > log(runif(1))) {
      chain[i,] <- thetastar
    } else {
      chain[i,] <- chain[i-1,]
    }
  }
  return(chain[-1,])
}
```
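To see the template in action, here is a filled-in toy version (my own example, not part of the homework) that targets a N(0,1) density with a symmetric random-walk proposal, in which case the two dnorm terms cancel:

```r
set.seed(1)
log.h <- function(theta) -0.5 * theta^2  # log target up to a constant: N(0,1)

met.toy <- function(init, NREP, step = 1){
  chain <- numeric(NREP + 1)
  chain[1] <- init
  for (i in 2:(NREP + 1)){
    thetastar <- rnorm(1, chain[i-1], step)       # symmetric proposal
    logr <- log.h(thetastar) - log.h(chain[i-1])  # proposal terms cancel
    chain[i] <- if (log(runif(1)) < logr) thetastar else chain[i-1]
  }
  chain[-1]
}

draws <- met.toy(0, 5000)
c(mean(draws), sd(draws))  # should be near 0 and 1
```

In your homework, log.h will of course be the problem-specific log posterior kernel, and the proposal need not be symmetric.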

Second, it appears that in the GeoBUGS extra credit opportunity, the maps won't load because GeoBUGS doesn't like negative x values. One solution is simply to add a large constant to all of the longitude values in the polygon file produced by R so that they are positive.
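For example (column names here are made up; use whatever your polygon data frame actually calls its coordinates):

```r
# Shift longitudes by a large constant so all x values are positive
# before writing the GeoBUGS polygon file; just remember the offset.
poly <- data.frame(id = rep(1, 3),
                   x  = c(-93.2, -93.1, -93.0),  # longitudes (negative)
                   y  = c(44.90, 45.00, 44.95))  # latitudes
poly$x <- poly$x + 360
all(poly$x > 0)  # TRUE
```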

April 3, 2010

MT2 Extra Credit Opportunity

Hey guys,

An opportunity has arisen for you to help in the development of OpenBUGS, and Brad has decided to make it worth extra credit on problem 2 of the take-home midterm.

Extra Credit (worth up to 10% of the current total points available): re-fit your no-errors-in-covariates WinBUGS code using the new OpenBUGS. Compare your point and interval estimates of the screening effect, as well as the fitted SMR maps. Are the answers equivalent up to Monte Carlo error?

Super Bonus Extra Credit (also worth up to 10% of the current total points available): compare ESS and ES/second for WinBUGS and the new OpenBUGS (see p. 151 of the CL3 text). Is one program substantially more efficient than the other?
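For the ESS bookkeeping, `effectiveSize()` in the coda package is the standard tool; here is a crude, self-contained stand-in (simulated chain, made-up timing) just to show the shape of the calculation:

```r
set.seed(2)
x <- arima.sim(list(ar = 0.5), n = 10000)  # stand-in for an MCMC chain
# Truncated ACF estimate of the autocorrelation time
rho <- acf(x, lag.max = 50, plot = FALSE)$acf[-1]
ess <- length(x) / (1 + 2 * sum(rho[rho > 0]))
elapsed <- 2.0  # in practice, the sampler's run time from system.time()
ess / elapsed   # effective samples per second
```

For the actual comparison, replace `x` with each program's draws and `elapsed` with its measured run time.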

"The latest version of OpenBUGS is now posted at www.openbugs.info. The Windows version comes with an R-like installer. It is close to a first release version. We now have a group that has started work on the BRugs/R2WinBUGS packages.

A thorough checkout of the GeoBUGS features, distributions, and graphics would be very helpful. The only outstanding issue we know of is a discrepancy between WinBUGS 1.4.3 and OpenBUGS on the 'Shared' example (a WinBUGS/OpenBUGS comparison is posted on the wiki). We have not been able to sort this out yet, but it arose when some (appropriate) constraints were placed on starting values and appears to be a WinBUGS rather than an OpenBUGS issue. If you can encourage your students to make the testing reproducible, that would also be good, as it would be good to incorporate as much of it as is feasible in our routine testing in the future."

March 30, 2010

MT2 is Posted

Hey folks, the second midterm is posted on the class web page under "Spring 2010 version of Midterm 2 (take-home)."

These are due, on paper or by email, no later than 11:55pm on April 13, 2010. Send me your .doc (no .docx files!) or .pdf. Remember: don't turn in bare results! Some kind of intelligent comment is necessary for every result you present.

Good luck!

March 29, 2010

Hints for sampling algorithms

Some students have asked for help getting started with the sampling algorithm code, so here is a little "pseudo code" to get you started. You'll have to fill in the problem-specific computations where indicated. I haven't debugged this; it's just a starting point! :) Pay particular attention to the dimension of the parameter you're drawing and to the function h(), which you can either write out directly in the met() function or write as a separate function that you call within met().

```
## Sample Metropolis-Hastings algorithm
met <- function(data, initials, NREP){
  # Constants
  n <- dim(data)[1]
  p <- length(initials)
  # Empty matrix to hold samples
  chain <- matrix(NA, NREP + 1, p)
  # Initialize the chain
  chain[1,] <- initials
  for (i in 2:(NREP + 1)){
    # Parameters of the proposal
    meen <-
    var <-
    # Draw a proposal value
    thetastar <- rnorm(1, meen, var)
    # Compute the rejection ratio
    logr <- h(thetastar) + dnorm(chain[i-1,], meen, var) -
      h(chain[i-1,]) - dnorm(thetastar, meen, var)
    # Decide whether to accept or reject
    if (logr > 0){
      chain[i,] <- thetastar
    } else if (logr < log(runif(1))) {
      chain[i,] <- thetastar
    } else {
      chain[i,] <- chain[i-1,]
    }
  }
  return(chain[-1,])
}
```

March 28, 2010

Spatial Talk April 30

Hey guys, thought you might want to mark your calendars for Friday, April 30, 2010 at 10:00 am, Moos 2-530. That's when my academic grandfather (i.e., Brad's advisor) Alan Gelfand will be speaking on Process Modeling for Space-time Extremes. You guys haven't done much spatial yet, but you may appreciate that he helped popularize the Gibbs sampler for Bayesian analysis in a 1990 JASA article with Smith.

March 18, 2010

No class or office hours

Brad and I will be at ENAR in New Orleans next week, so no class on Tuesday, March 23 and no office hours for me Monday, March 22 or Wednesday, March 24. I will hold a make-up office hour on Friday, March 26 at 9:30.

March 2, 2010

Some people are having trouble with WinBUGS model code files saved from the code website.

This has to do with the way newlines are encoded on different operating systems and in different text editors. My advice is to check these files for correct line breaks using the most basic text editor you can find, i.e., one with no formatting like Notepad in Windows or gedit in Linux, etc.
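If you want to check programmatically rather than by eye, here is one way in R (self-contained; it writes a throwaway CRLF file just to demonstrate):

```r
# Count carriage returns in a file: nonzero means Windows-style CRLF endings.
f <- tempfile()
writeLines("model { y ~ dnorm(mu, tau) }", f, sep = "\r\n")  # simulate CRLF
raw  <- readBin(f, what = "raw", n = file.info(f)$size)
n.cr <- sum(raw == charToRaw("\r"))
n.cr  # > 0 here, since we wrote CRLF endings
```

Point it at your saved model file instead of the temp file to diagnose yours.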

February 22, 2010

Posterior Density Plots from BRugs

If you're using BRugs to fit your models (which I think has many advantages over using WinBUGS directly), here's how to make a density plot:

If you set the coda option to TRUE, the output object from BRugsFit will be of type mcmc.list. This is a list in which each element is one chain of MCMC samples. Within each chain, you have a matrix with one node in each column. To plot the histogram for both chains for a single node, use code like this:

```
myfit <- BRugsFit(..., coda = TRUE)
Z13.samples <- as.matrix(myfit)[, "Z[13]"]  # stack all chains, keep the 'Z[13]' column
hist(Z13.samples, freq = FALSE)
lines(density(Z13.samples))
```