I thought I knew something, until...

I thought I knew at least a little something about research methodology. That is, until I read this blog post from the Social Science Statistics Blog and the accompanying article by Phil Schrodt on the seven deadly sins of contemporary quantitative analysis.

I read Schrodt's article, but I can really only say I understood about 25% of it, and that might be an overestimation. It was good to read nonetheless, because it highlighted some of the empty assumptions I've held about data analysis. I like to keep my skeptical wits about me, and this article really helped. I also appreciate his suggestion to keep it simple and avoid fancy methods unless you know they're the most appropriate.

I'm also more interested than ever in learning about Bayesian analysis. I don't know enough about it yet to understand when and where it's the right way to go. Let me know if you have any cool insights about it.

Anyway, leave a comment on this post if you read the essay. I'd like to hear your favorite parts.

3 Comments

The comments on positivism, in light of explanation vs. prediction, on the blog were fine, but I couldn't get the link to the original article ("Seven deadly sins...") to come up. Is anyone else having problems reading it?

Bayesian regression is a type of model that allows you to 'flex' some of the traditional assumptions that lock in OLS regressions. Because it is not stuck in linear (my language), it works especially well with some types of economics, like fisheries, where you are working with biotic systems that need to 'inform' X and Y.

I should not have used the word 'linear' in the last sentence, since with Bayesian equations you can raise X to the second or third power, or even do a natural-log transformation, to use it as a non-linear regression.
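To make that concrete, here's a toy sketch (mine, with made-up numbers, in Python) of a Bayesian regression whose design matrix includes an x-squared term and a natural-log transform. With a Gaussian prior on the coefficients and the noise variance assumed known, the posterior works out in closed form:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1.0, 10.0, size=50)
    y = 2.0 + 0.5 * x**2 + 3.0 * np.log(x) + rng.normal(0.0, 1.0, size=50)

    # Design matrix: intercept, x, x^2, log(x). Non-linear in x,
    # but still linear in the coefficients.
    X = np.column_stack([np.ones_like(x), x, x**2, np.log(x)])

    sigma2 = 1.0   # noise variance, assumed known for simplicity
    tau2 = 10.0    # prior variance: beta ~ N(0, tau2 * I)

    # Closed-form Gaussian posterior over the coefficients
    Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2)
    mu = Sigma @ X.T @ y / sigma2
    print("posterior coefficient means:", mu.round(2))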

The 'flex' of Bayesian regressions (and I am no expert) is that when a few parameters would cause OLS assumptions to crash (or 2SLS, where the issue isn't error-term correlation but rather massive data panels dropping out parameters), it approximates across the data gaps. It is inherently a probability model, and as you get output (with safeguards for multicollinearity, is my guess), it plows those estimates back in, from what I understand, for the next run.
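That 'plows those estimates back in' step is, as I understand it, just sequential updating: each run's posterior becomes the next run's prior. A toy normal-normal example with invented numbers:

    obs_var = 4.0                        # assumed known variance of each season's estimate
    prior_mean, prior_var = 0.0, 100.0   # vague starting prior

    for season, y in enumerate([12.0, 9.5, 11.2], start=1):
        # Conjugate normal-normal update: precision-weighted blend of prior and data
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        post_mean = post_var * (prior_mean / prior_var + y / obs_var)
        print(f"season {season}: mean {post_mean:.2f}, variance {post_var:.2f}")
        prior_mean, prior_var = post_mean, post_var  # posterior feeds the next run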

I mentioned it is helpful for examining (salmon) fisheries because, when you look at aggregate returns and catch at a large statewide level, you have unknown biotic dynamics from a couple of years earlier determining today's harvests: spawning regimes may run 3-7 years out for several species that compete for much of the same habitat, each on its own particular breeding timeline. Following each season with a Bayesian model, you can end up with a time series that plugs approximations into the data gaps, and can then build a model that, for lack of a better word, 'detects' for future regression runs (tracking the fish runs!).
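Here's one made-up version of what 'plugging in the data gaps' can look like: if yearly returns are treated as a Gaussian random walk (my assumption, purely for illustration), a missing year has a closed-form conditional distribution given its neighbors, so the gap gets an estimate with honest uncertainty attached:

    import numpy as np

    sigma = 1.5                                           # assumed step std dev of the walk
    returns = np.array([10.0, 11.2, np.nan, 13.1, 12.4])  # one missing season

    i = int(np.flatnonzero(np.isnan(returns))[0])
    # For a random walk, the missing y[i] given its neighbors is
    # N((y[i-1] + y[i+1]) / 2, sigma^2 / 2)
    gap_mean = (returns[i - 1] + returns[i + 1]) / 2.0
    gap_std = sigma / np.sqrt(2.0)
    print(f"imputed value at position {i}: {gap_mean:.2f} +/- {gap_std:.2f}")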

It can also be helpful, though, for looking at large panels of non-resource, non-biotic data, such as national inflation, which will have plenty of gaps (or where reporting is imperfect), and with a time series it can 'flex' (again, my word).

No expert here, but I learned from my advisor in grad school (a fisheries economist) who wrote his 900-page PhD dissertation on Bayesian methods, and I have been to a national fisheries conference where presentations from the field showed Monte Carlo (Bayesian) as the regression of choice for their applications. It all goes back to picking the model whose assumptions your data won't violate, and in some instances I'm sure Bayesian specialty models come up short.
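And since 'Monte Carlo' came up: in the Bayesian world that usually means Markov chain Monte Carlo, i.e., drawing samples from a posterior you can't write down in closed form. A bare-bones Metropolis sampler for the mean of some synthetic data (again, my own toy, not anything from those talks):

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(5.0, 2.0, size=30)             # synthetic observations
    obs_std, prior_mean, prior_std = 2.0, 0.0, 10.0

    def log_post(mu):
        # log prior + log likelihood, additive constants dropped
        log_prior = -0.5 * ((mu - prior_mean) / prior_std) ** 2
        log_lik = -0.5 * np.sum(((data - mu) / obs_std) ** 2)
        return log_prior + log_lik

    mu, draws = 0.0, []
    for _ in range(5000):
        proposal = mu + rng.normal(0.0, 0.5)         # random-walk proposal
        if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
            mu = proposal                            # accept; otherwise keep current mu
        draws.append(mu)
    print("posterior mean estimate:", round(np.mean(draws[1000:]), 2))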

About this Entry

This page contains a single entry by Neil Linscheid published on November 15, 2010 12:50 PM.
