
November 14, 2013

GENMOD vs. other procs


In simple linear models, PROC GENMOD gives different coefficient standard errors, and it also seems to be working with different degrees of freedom than the other procs. This appears to be because GENMOD estimates the scale (error variance) by maximum likelihood, dividing by n rather than the bias-corrected n - p, and reports Wald chi-square tests, while REG, GLM, and MIXED use the bias-corrected residual variance and t tests with n - p degrees of freedom.

Results from a class data set:

Model    Estimate    SE          Test stat     Pr > |test stat|
REG      -9.34937    6.05465     t = -1.54     0.1375
GENMOD   -9.3494     5.7854      X^2 = 2.61    0.1061
MIXED    -9.3494     6.0547      t = -1.54     0.1375
GLM      -9.3493685  6.05465219  t = -1.54     0.1375
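
For reference, here is a minimal sketch of the four calls. The dataset and variable names (classdata, y, x) are placeholders, not the actual class data set:

   /* One-predictor linear model fit four ways; names are hypothetical */
   proc reg data=classdata;
      model y = x;                 /* OLS: t tests with n - p df */
   run;

   proc glm data=classdata;
      model y = x / solution;      /* same estimates and t tests as REG */
   run;

   proc mixed data=classdata;
      model y = x / solution;      /* no random effects, so it matches REG/GLM */
   run;

   proc genmod data=classdata;
      model y = x / dist=normal link=identity;  /* ML scale estimate, Wald chi-square tests */
   run;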

August 23, 2013

Baseline-adjusted change score

This is generally a bad idea

Y1 - Y0 = B1 + B2X + B3Y0

Why might you do this?

When can it bias results? See Glymour et al. 2005.


  • Regression to the mean:

  • Horse racing:

Note that if the predictor is instrumental (randomly assigned) then I believe the biases mentioned by Glymour et al will not apply.

Change vs. Baseline-adjusted

Y1 = B1 + B2X + B3Y0 is the baseline-adjusted model
Y1 - Y0 = B1 + B2X is the change model according to PH?
Y1 - B3Y0 = B1 + B2X is the change model according to JW?

If change is the concept of interest then it is convenient to model change directly. Does forcing B3 = 1 make sense? If the constraint is roughly right it will add precision.

In a very simple model using Box Lunch data I found that the baseline-adjusted and baseline-adjusted change models gave identical results for the treatment coefficient. That is expected: subtracting Y0 from both sides only lowers the Y0 coefficient by 1, so the coefficient and standard error for X are unchanged.
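
A minimal sketch of that check; dataset and variable names are placeholders, not the actual Box Lunch variables:

   /* Baseline-adjusted model: y1 on x and y0 */
   proc reg data=mydata;
      model y1 = x y0;
   run;

   /* Baseline-adjusted change model: (y1 - y0) on x and y0 */
   data mydata2;
      set mydata;
      change = y1 - y0;
   run;

   proc reg data=mydata2;
      model change = x y0;
   run;

   /* The coefficient and SE for x are identical in the two fits; */
   /* the y0 coefficient differs by exactly 1.                    */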

August 14, 2013

Non-significant findings

Something to think about when trying to publish non-significant results:

Assuming no measurement/model error, statistically significant findings are caused either by a real association or by random chance. Among truly null relationships, the share flagged by chance is predetermined (alpha). Statistically non-significant findings may be due to a real lack of association or to random chance; among real effects, the share missed is 1 minus the power, so it seems to me the power should be reported.

I hear a lot of complaints about publication bias in favor of statistically significant results, and I believe it exists. Still, if you are trying to publish non-significant results then you should have to persuade the reader that you would have detected an effect had it been there. That may be the case if you are reporting the main outcome of a trial. If you are digging deep in the data then I think the likelihood that poor measurement is responsible grows.

I started an analysis using a measure that I made up partially to fill time during a data collection visit. I looked for a treatment effect and did not find one. If I had confidence that I was measuring what I wanted to measure then I would pursue that analysis, but I don't have it. If I had seen evidence for the hypothesis I proposed then I would have had more confidence.

What should I make of my internal 'interest bias'? It's terrible in one sense, but practical in another. Terribly practical.

May 8, 2013

Latent class analysis vs. principal components analysis


And throw trees in here as well

LCA


PCA


TREES
Throw in a set of 'predictor' variables and an outcome variable. The software runs through each of the variables with different dichotomous cutpoints and chooses the variable and cutpoint that create the largest difference in mean outcome between the two resulting branches. This process is repeated (independently) within each branch, and continues until you run out of variables or the branch size reaches a specified minimum. Conditional tree analysis also generates a p-value for each split.
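
As a rough sketch, a standard (non-conditional) tree can be grown in SAS with PROC HPSPLIT; conditional inference trees with per-split p-values are what R's ctree (party/partykit) produces. Dataset and variable names below are hypothetical:

   /* Recursive partitioning of a continuous outcome */
   proc hpsplit data=mydata maxdepth=4 leafsize=20 seed=1234;
      class catvar;                 /* categorical predictors go on CLASS */
      model y = x1 x2 x3 x4 catvar;
      output out=leaf_assignments;  /* node membership for each observation */
   run;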

May 3, 2013

Social networks

Seminar today was David Shoham (Loyola): Insights into childhood obesity using social network analysis and agent-based models

Reasons for correlation within social networks


  • Exogenous / Shared environment

  • Correlation / Homophily

  • Endogenous / Contagion



Adjusting for baseline removes some degree of homophily.

I spoke to him briefly afterwards, and he's interested in the Box Lunch data. Unlike the data he used, it comes from a trial, and it is among adults. I don't know whether our network data are adequate.

[Downtown obscured by snow]

April 11, 2013

ICC - the Intra-class correlation coefficient


Intra-class correlation coefficient
A measure of how similar members of groups are compared to how similar they are to others outside their groups. How much birds of a feather flock together.

Think of a study looking at students from two schools.
Scenario 1: one all-boys school and one all-girls school. If you know the school, you know the sex of the child. The ICC is 1.
Scenario 2: Both schools have exactly 40% boys, 60% girls. Knowing the school tells you nothing about the sex. The ICC is 0.

These extreme examples obscure something: the difference between the groups only partially drives the ICC. The variability within the groups matters as well. Two other pairs of schools, each differing by 10 percentage points (e.g. 40% vs. 50% girls, and 10% vs. 20% girls), will have different ICCs.
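
For a continuous outcome, one common way to get the ICC is from a random-intercept model; a minimal sketch, with hypothetical dataset and variable names:

   /* Between-school and within-school variance components.   */
   /* ICC = between / (between + within), read from the       */
   /* 'Covariance Parameter Estimates' table.                 */
   proc mixed data=students covtest;
      class school;
      model y = ;                          /* intercept-only model */
      random intercept / subject=school;
   run;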

October 29, 2010

What does the 95% confidence interval mean?

My take on interpreting the 95% CI:

Predictor: Riding a fixed gear bicycle
Outcome: Low-rise jeans
Result: Riding a fixie is associated with a 50% (95% CI: 25%-75%) higher probability of wearing low-rise jeans
Bias/confounding: None, of course

Here is a generally agreeable statement:
If the study were repeated an infinite number of times, 95% of those studies' confidence intervals would contain the true risk difference.

Here is what SOME call a crime against inference:
There is a 95% probability that the true risk difference is between 25% and 75%.


Imagine I flip a coin; it lands on the ground and I step on it before you can see how it landed.
Q: What is the probability that it is heads?
A1: 50% - Infidel, you have committed a crime against inference and are condemned to death*.
A2: 0% or 100%: Congratulations, you can join the League of Pedantic Professors

The source of confusion? The LPP uses a technical definition of probability that involves repeated observations. Meanwhile in the real world, 99% of us say that while the coin IS heads up (100%) or tails up (0%), because we don't know which we still say it has a 50% probability of being either. [Insert obligatory Schrodinger's cat here.]

Unless I am wrong, and I'm never wrong**, there is no problem with the stronger statement because the kind of people that interpret the CI that way are the kind of people that say 50%.

* At the age of 80 years, 95% CI: 50-120
** Inconceivable!

May 26, 2010

Very large sample sizes = statistical significance


"Everything in my data set is associated, because the sample size is so large." I have heard students suggest that you can find statistically significant associations between any two variables if the sample size is extremely large. I see a misunderstanding in that statement.

Assume the null hypothesis is true. You CHOOSE your type I/false positive error rate, typically 0.05, and sample size does not play a role.

Two truly uncorrelated variables are no more likely to show a statistically significant association in a huge sample than in a small one. What larger sample sizes do is let you 'find' smaller and smaller correlations. This has one practical implication and one causal implication:

Practically speaking, at some point a correlation can be so small that it makes no difference. For example: if parking your bicycle near the smoking area increases your chance of getting lung cancer by .00001%, then a study of a billion bicyclists might detect the increased risk - but it would not make sense to advise against parking there.

Causally speaking, the smaller a correlation, the more likely it is to be spurious. Weak confounders and residual confounding will be hard to eliminate. (If that is the only problem, a randomized trial may not be affected.)
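
A quick way to see how sample size changes what is detectable; the correlation of 0.02 and the sample sizes are arbitrary illustrations:

   /* Power to detect a tiny correlation at alpha = 0.05,          */
   /* as the number of pairs grows. Values are illustrative only.  */
   proc power;
      onecorr
         corr   = 0.02
         npairs = 1000 10000 100000
         power  = .;
   run;

   /* Alpha stays at 0.05 no matter the n; what grows with n is the */
   /* power to flag ever-smaller correlations as 'significant'.     */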

March 26, 2010

Stratify despite no interaction?

Q: I have a statistically significant association between y and x, adjusting for z. If I add an x*z interaction term the p-value is well above 0.10, but if I choose to stratify, one stratum has a statistically significant association and one does not. How can this be?

A: First I thought this could be figured out by plotting the data, suspecting the ranges didn't overlap. The scatterplots were similar. Then I looked at the regression coefficients and confidence intervals. The coefficients were almost identical (and very small in clinical terms). One group had a slightly wider CI.

Figure A: The slopes are similar, though not identical

Figure B: Add the CIs and one band includes a flat (horizontal) line, so that slope is not statistically significant

Figure C: Overlay the two slopes and you clearly cannot say they are different, so an interaction term will be non-significant

[Figure: interaction.bmp]

Conclusion: This could be put in the "Beware of p-values" category. There is probably an inverse but weak association between X and Y. At an intuitive level there is no difference between the two groups, so stratifying the data is not justified.
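
A sketch of the two analyses described in the question; the dataset and variable names are placeholders, and z is assumed to be categorical:

   /* Pooled model with the interaction term */
   proc glm data=mydata;
      class z;
      model y = x z x*z / solution;   /* the x*z p-value tests whether the slopes differ */
   run;

   /* Stratified models: one slope per level of z */
   proc sort data=mydata;
      by z;
   run;
   proc reg data=mydata;
      by z;
      model y = x;
   run;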

March 20, 2010

MAGNITUDE, direction and statistical significance

From a response to a question:

In what I took as Epi 2 we were drilled on always reporting the direction, statistical significance and MAGNITUDE of associations. There are two ways to get the magnitude:

If you run a simple correlation it is the correlation coefficient. SAS output gives that, the sample size and the p-value. The r and the n determine the p, so I suppose you can use any two to approximate the third. If you report a "significant" finding it could be

A) r=0.9 n=10 p=0.05 or
B) r=0.1, n=10,000 p=0.04

In either case they are "associated", but in the first case the association "explains" 81 percent of the variation (r squared = 0.81) and in the second it "explains" only 1 percent.
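
The listed p-values are only illustrative; since r and n determine p, a short data step can compute the exact value for any pair:

   /* Two-sided p-value for a correlation r based on n pairs */
   data r_n_p;
      input r n;
      t = r * sqrt(n - 2) / sqrt(1 - r**2);   /* t statistic with n-2 df */
      p = 2 * (1 - probt(abs(t), n - 2));     /* two-sided p-value */
      datalines;
   0.9 10
   0.1 10000
   ;
   run;

   proc print data=r_n_p;
   run;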

This is worse when findings are described as 'not significant'. Association A could be 'not STATISTICALLY significant' if the p-value is 0.0501, but I would certainly say it is CAUSALLY significant. (Meaning that there is an association, confounded or not.)

Another expression of magnitude is the regression coefficient. If changing caloric intake explains 100% of weight loss that's great, but it may or may not be useful in obesity prevention. If you have to cut 5,000 kcal per day to lose 1 pound a year then it is a worthless public health strategy.

March 16, 2010

Hierarchical group means


When analyzing randomized groups by their individual measurements the degree of imprecision can/should be used to "shrink" the estimated group mean towards the overall mean, as shown below.

Overall mean is given by the yellow RR crossing shape.

Note that taking a simple group mean and taking a mixed-model hierarchical group mean give a different ranking of the black and red groups. This is because the red dots are closer together (more precise).

[Figure: Shrinkage.JPG]
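
A minimal sketch of where the shrunken group means come from; dataset and variable names are hypothetical:

   /* Raw group means, for comparison */
   proc means data=groupdata mean;
      class group;
      var y;
   run;

   /* Random-intercept model: each group's predicted mean is the      */
   /* overall (fixed) intercept plus its estimated random deviation,  */
   /* which is shrunken toward zero in proportion to its imprecision. */
   proc mixed data=groupdata;
      class group;
      model y = / solution;
      random intercept / subject=group solution;
   run;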

March 9, 2010

Choosing statistical tests

This table is one that I think every public health student should be able to generate after taking statistics. Unfortunately it is not presented this way.

                          Dependent / outcome
Independent / predictor   Dichotomous            Categorical   Continuous
Dichotomous               Chi-square             Chi-square    t-test
Categorical               Chi-square             Chi-square    ANOVA
Continuous                Logistic regression    ???           Correlation or regression

Caveat: This ignores study design. In real analyses it is common to adjust for covariates, use repeated measures or do something else that requires different tests. Nevertheless, it shows the basic test underlying what you will use.
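
For reference, the basic SAS procedures behind each cell; dataset and variable names are placeholders:

   /* Dichotomous or categorical predictor, dichotomous or categorical outcome: chi-square */
   proc freq data=mydata;
      tables exposure*outcome / chisq;
   run;

   /* Dichotomous predictor, continuous outcome: t-test */
   proc ttest data=mydata;
      class exposure;
      var y;
   run;

   /* Categorical predictor, continuous outcome: ANOVA */
   proc glm data=mydata;
      class groupvar;
      model y = groupvar;
   run;

   /* Continuous predictor, dichotomous outcome: logistic regression */
   proc logistic data=mydata descending;
      model outcome = x;
   run;

   /* Continuous predictor and outcome: correlation or regression */
   proc corr data=mydata;
      var x y;
   run;
   proc reg data=mydata;
      model y = x;
   run;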

November 30, 2009

Comparing treatment groups at baseline


Is anything more aggravating than a statement that treatment group differences were 'not statistically significant' at baseline? This can mask two fundamental misunderstandings: of the meaning of statistical significance, or of the purpose of comparing groups.
