Best of luck with all your end-of-semester activities!

The idea is this: how well are each of these three models able to accommodate the outliers present in the data? One way to assess this is by considering the posterior distribution of the width of an interval in the *likelihood.* Some models will shift the mean up to reach the outlier, while others will have sufficiently heavy tails to accommodate it in the variance.

So for each model's population-level sigma parameter (i.e., 1/sqrt(tau0[k]) for k = 1, 2, 3), we consider the posterior distribution of the width of a 95% interval in the likelihood that model induces for the data.

To get this posterior, we use the CODA samples of sigma[k]. For each Gibbs draw, compute the .025 and .975 quantiles of the associated likelihood (normal, t, or DE), take their difference, and call this a single posterior sample from the likelihood interval width. Repeat this procedure for all the sigma[k] Gibbs draws and you will have a histogram for the posterior of this quantile difference in the k^th model. Then you can compare across models.
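As a concrete sketch of the interval-width calculation (the vectors sig1, sig2, sig3 below are simulated stand-ins for your actual CODA draws of sigma[k], and the degrees of freedom nu is hypothetical):

```r
# Hypothetical stand-ins for the Gibbs draws of sigma[k] in each model;
# in practice these come from your CODA output as 1/sqrt(tau0[k]) draws
set.seed(1)
sig1 <- sqrt(1 / rgamma(1000, 3, 2))   # normal likelihood
sig2 <- sqrt(1 / rgamma(1000, 3, 2))   # t likelihood (assume nu = 4 df)
sig3 <- sqrt(1 / rgamma(1000, 3, 2))   # double-exponential likelihood
nu <- 4

# Width of the central 95% likelihood interval, one value per Gibbs draw
width.norm <- (qnorm(.975) - qnorm(.025)) * sig1
width.t    <- (qt(.975, nu) - qt(.025, nu)) * sig2
# Laplace(0, sigma) has CDF 1 - 0.5*exp(-x/sigma) for x >= 0, so its
# .975 quantile is sigma*log(20) and the interval width is 2*sigma*log(20)
width.de   <- 2 * log(20) * sig3

# Posterior histogram of the width in, say, the normal model
hist(width.norm, freq = FALSE, main = "95% likelihood interval width")
```

Repeating the histogram for each model and overlaying or juxtaposing them makes the cross-model comparison immediate.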

- Simulated data
- R code for processing data
- WinBUGS model code
- WinBUGS output (just in case!)
- Presentation

In the stack loss models, you have some regression parameters (betas) and also some fitted medians (mu_j) for each year. I would say, take a look at the betas, say beta_1, in each model. Also look at a couple of the mu's, say mu_21 because we know that is an outlying year, and one other mu_j in a non-outlying year for comparison.
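One way to pull those summaries out of a model's CODA output (the object name fit1 and the non-outlying year mu[5] here are hypothetical; substitute whatever your monitored nodes are actually called):

```r
# Hypothetical CODA matrix for one model, one column per monitored node
set.seed(3)
fit1 <- matrix(rnorm(3000), nrow = 1000, ncol = 3,
               dimnames = list(NULL, c("beta[1]", "mu[21]", "mu[5]")))

# Posterior medians and 95% intervals for beta[1], the outlying year
# mu[21], and a non-outlying year mu[5]; repeat for each of the models
apply(fit1[, c("beta[1]", "mu[21]", "mu[5]")], 2,
      quantile, probs = c(.025, .5, .975))
```

Laying the three models' tables side by side makes it easy to see which fits shift the outlying year's mean and which absorb it in the variance.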

UPDATE: A couple of grades changed, so the file is updated accordingly.

Here's the distribution (stem and leaf) out of 35 possible (or 42 with extra credit):

2 | 14
2 | 6
3 | 011122223344
3 | 5579

I'll post a solution soon.

He recently wrote two posts about MCMC sampling that you might find interesting/useful.

One is some second-hand skepticism about a recent paper (published in the first post-Brad issue of Bayesian Analysis) on a new Metropolis sampler that doesn't require tuning.

The other is an extended reaction to a talk by Charlie Geyer (from the UofM Stats Department) about convergence issues.

## Sample Metropolis-Hastings algorithm

# function to compute log(h(theta))
log.h <- function(theta, data){}

met <- function(data, initials, NREP){
  # Constants
  n <- dim(data)[1]
  p <- length(initials)
  # Empty matrix to hold samples
  chain <- matrix(NA, NREP+1, p)
  # Initialize the chain
  chain[1,] <- initials
  for (i in 2:(NREP+1)){
    # Parameters of the proposal (fill these in)
    meen <-
    var <-
    # Draw a proposal value
    thetastar <- rnorm(1, meen, sqrt(var))
    # Compute the log rejection ratio
    logr <- log.h(thetastar, data) + dnorm(chain[i-1,], meen, sqrt(var), log=T) -
            log.h(chain[i-1,], data) - dnorm(thetastar, meen, sqrt(var), log=T)
    # Decide whether to accept or reject
    if (logr > 0){
      chain[i,] <- thetastar
    } else if (logr > log(runif(1))) {
      chain[i,] <- thetastar
    } else {
      chain[i,] <- chain[i-1,]
    }
  }
  return(chain[-1,])
}

Second, it appears that in the GeoBUGS extra credit opportunity, the maps aren't going to load because GeoBUGS doesn't like negative x values. One solution is simply to add a large constant to all of the longitude values in the polygon file produced by R so that they are all positive.
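A minimal sketch of that fix (the matrix `poly` and the constant 360 are purely illustrative; apply the same shift to whatever structure your polygon coordinates actually sit in before writing the GeoBUGS file):

```r
# Illustrative polygon coordinates (longitude, latitude); the negative
# x values are what GeoBUGS chokes on
poly <- cbind(long = c(-93.2, -93.1, -93.0), lat = c(44.9, 45.0, 44.95))

# Shift every longitude by a large constant so all x values are positive
shift <- 360
poly[, "long"] <- poly[, "long"] + shift
all(poly[, "long"] > 0)   # TRUE
```

The shift is cosmetic: relative positions, and hence the map shapes, are unchanged.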

Here's an opportunity for you to help in the development of OpenBUGS; Brad has decided to make it worth extra credit on problem 2 of the take-home midterm.

**Extra Credit** (worth up to 10% of the current total points available): re-fit your no-errors-in-covariates WinBUGS code using new OpenBUGS. Compare your point & interval estimates of the screening effect, as well as the fitted SMR maps. Are the answers equivalent up to Monte Carlo error?

**Super Bonus Extra Credit**: (also worth up to 10% of the current total points available): compare ESS and ES/second for WinBUGS and new OpenBUGS (see p.151 of the CL3 text). Is one program substantially more efficient than the other?
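A sketch of the ESS and ES/second comparison using the coda package (the sample matrices and run times below are stand-ins for your actual WinBUGS and OpenBUGS output and the timings each program reports):

```r
# Hypothetical draws and run times for the two programs
library(coda)
set.seed(2)
win.samples  <- mcmc(matrix(rnorm(2000), ncol = 2))  # stand-in WinBUGS draws
open.samples <- mcmc(matrix(rnorm(2000), ncol = 2))  # stand-in OpenBUGS draws
win.time  <- 12.3   # run times in seconds (hypothetical)
open.time <- 10.1

ess.win  <- effectiveSize(win.samples)
ess.open <- effectiveSize(open.samples)
ess.win / win.time     # effective samples per second, WinBUGS
ess.open / open.time   # effective samples per second, OpenBUGS
```

ESS alone rewards low autocorrelation; dividing by wall-clock time is what makes the comparison fair when the programs run at different speeds.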

For more information, see the following motivating email from Neal Thomas:

"The latest version of OpenBUGS is now posted at www.openbugs.info. The Windows version comes with an R-like installer. It is close to a first release version. We now have a group that has started work on the BRugs/R2WinBUGS packages.

A thorough checkout of the GeoBUGS features, distributions and graphics, would be very helpful. The only outstanding issue we know of is a discrepancy between WinBUGS 1.4.3 and OpenBUGS on the 'Shared' example (a WinBUGS/OpenBUGS comparison is posted on the wiki). We have not been able to sort this out yet, but it arose when some (appropriate) constraints were placed on starting values and appears to be a WinBUGS rather than OpenBUGS issue. If you can encourage your students to make the testing reproducible, that would also be good, as it would be good to incorporate as much of it as is feasible in our routine testing in the future."

These are due on paper or by email no later than 11:55pm on April 13, 2010. Send me your .doc (no .docx files!!) or .pdf. Remember: don't turn in bare results! Some kind of intelligent comment is necessary for every result that you present.

Good luck!

## Sample Metropolis-Hastings algorithm

met <- function(data, initials, NREP){
  # Constants
  n <- dim(data)[1]
  p <- length(initials)
  # Empty matrix to hold samples
  chain <- matrix(NA, NREP+1, p)
  # Initialize the chain
  chain[1,] <- initials
  for (i in 2:(NREP+1)){
    # Parameters of the proposal (fill these in)
    meen <-
    var <-
    # Draw a proposal value
    thetastar <- rnorm(1, meen, sqrt(var))
    # Compute the log rejection ratio; h() should return log(h(theta))
    logr <- h(thetastar) + dnorm(chain[i-1,], meen, sqrt(var), log=T) -
            h(chain[i-1,]) - dnorm(thetastar, meen, sqrt(var), log=T)
    # Decide whether to accept or reject
    if (logr > 0){
      chain[i,] <- thetastar
    } else if (logr > log(runif(1))) {
      chain[i,] <- thetastar
    } else {
      chain[i,] <- chain[i-1,]
    }
  }
  return(chain[-1,])
}


This has to do with the way newlines are encoded on different operating systems and in different text editors. My advice is to check these files for correct line breaks using the most basic text editor you can find, i.e., one with no formatting like Notepad in Windows or gedit in Linux, etc.
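If you'd rather check from within R itself, one way is to look at the file's raw bytes (the temporary file below just simulates a Windows-edited file):

```r
# Windows editors write \r\n (CR LF) line endings; Unix editors write \n.
# Reading the raw bytes shows which one a file actually contains.
f <- tempfile()
writeLines(c("model {", "}"), f, sep = "\r\n")  # simulate a Windows-style file
bytes <- readBin(f, what = "raw", n = file.info(f)$size)
any(bytes == as.raw(0x0d))   # TRUE: the file contains carriage returns
```

If this returns TRUE for a file that a Unix-side program is mis-reading, converting the line endings (e.g., resaving from a plain text editor) should fix it.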

If you set the coda option to TRUE, the output object from BRugsFit will be of class mcmc.list. This is a list in which each element is one chain of MCMC samples. Within each chain, you have a matrix with one node in each column. To plot the histogram for both chains for a single node, use code like this:

myfit <- BRugsFit(..., coda=T)
Z13.samples <- unlist(myfit[, 'Z[13]', ])  # the 'Z[13]' samples from all chains
hist(Z13.samples, freq=F)
lines(density(Z13.samples))