## ERROR

| Decision | Reality: Hnull = TRUE | Reality: Hnull = FALSE |
| --- | --- | --- |
| Reject Hnull | Type I error (alpha) | Correct (1 - beta) |
| Fail to reject Hnull | Correct (1 - alpha) | Type II error (beta) |
- The probability of making a Type I error is your alpha level.
NOTE: if you decrease your probability of making a Type I error, you increase your probability of making a Type II error, and vice versa, but it is not a linear relationship.
NOTE: the significance level should probably be no worse than one in twenty (alpha = .05, i.e., .95 confidence).
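The trade-off in the note above can be sketched numerically. This is a hedged example with assumed values (a one-sided z-test, a true effect of 0.5 SD, n = 30 — none of these come from the lab data): as alpha shrinks, the critical value rises, so beta grows.

```r
# Assumed setup: one-sided z-test, true effect = 0.5 SD, n = 30.
effect <- 0.5
n <- 30
alphas <- c(0.10, 0.05, 0.01)
crit  <- qnorm(1 - alphas)                  # critical z for each alpha
power <- 1 - pnorm(crit - effect * sqrt(n)) # P(reject | Hnull false)
beta  <- 1 - power                          # type II error rate
round(data.frame(alpha = alphas, beta = beta), 3)
# beta rises as alpha falls -- but not linearly
```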

R version 2.7.2 (2008-08-25)
Copyright (C) 2008 The R Foundation for Statistical Computing
ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> attach(football)
NOTE: Comment the new code for each lab.
NOTE: At n = 30 the sampling distribution of the mean tends to become approximately normal, but not always.
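The note about n = 30 can be illustrated with a quick simulation. This is a sketch using an assumed skewed population (exponential, not the football data): means of larger samples are noticeably less skewed.

```r
# Sketch: sample means from a skewed (exponential) population look
# roughly normal once n is around 30. Compare skewness of the two
# sampling distributions.
set.seed(1)
means_n5  <- replicate(5000, mean(rexp(5)))
means_n30 <- replicate(5000, mean(rexp(30)))
# simple moment-based skewness
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3
round(c(n5 = skew(means_n5), n30 = skew(means_n30)), 2)
# the n = 30 distribution is much closer to symmetric (skewness near 0)
```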

## ERROR

See table and notes above.

## What is the effect and is it useful?

NOTE: APA says that you must include an effect size.
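One common effect size for a one-sample comparison is Cohen's d. A sketch using the values that appear later in this lab (sample mean 59.16, hypothesized mean 76, sample SD 13.92305 — the hypothesized mean of 76 is taken from the t-tests below):

```r
# Cohen's d for a one-sample comparison:
# (sample mean - hypothesized mean) / sample SD
m   <- 59.16
mu0 <- 76
s   <- 13.92305
d <- (m - mu0) / s
round(d, 2)   # about -1.21, conventionally a large effect
```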

## Estimates

Point estimate: the best single-number guess. To estimate the population mean, we use the sample mean.
> mean(WGR)
[1] 59.16
> mean(BGR)
[1] 44.04
NOTE: remember, the t-test is about inferring that the populations differ in the way we see the samples differ.
Interval estimate: an interval where we are "pretty sure" that the population characteristic lies.
- We want the interval to be useful, and we want to be confident in it. Sampling error and the degree of confidence affect the width of the interval.
Point estimate - error and point estimate + error form the bounds of our interval.
point est. - error <= parameter <= point est. + error
P(-1.96 <= Z <= 1.96) = .95
Z is the observation minus the mean, divided by the standard deviation of the distribution.
P(mu-hat - 1.96(sigma-hat/root-n) <= mu <= mu-hat + 1.96(sigma-hat/root-n)) = .95
mu-hat=59.16
sigma-hat=13.92
n=50
> sd(WGR)
[1] 13.92305
> err<-1.96*sd(WGR)/sqrt(50)
> mean(WGR)-err
[1] 55.30073
> mean(WGR)+err
[1] 63.01927
Our .95 confidence interval is therefore 55.30 to 63.02.
> t.test(WGR, mu=76,alt="two.sided")

One Sample t-test

data: WGR
t = -8.5525, df = 49, p-value = 2.766e-11
alternative hypothesis: true mean is not equal to 76
95 percent confidence interval:
55.20311 63.11689
sample estimates:
mean of x
59.16

See above "95 percent confidence interval"
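The interval can also be pulled out of the `t.test` result directly via its `conf.int` component. A self-contained sketch (simulated stand-in data, since `WGR` comes from the attached football file and isn't reproducible here):

```r
# t.test() returns a list; $conf.int holds the two interval bounds,
# with the confidence level attached as an attribute.
set.seed(1)
x <- rnorm(50, mean = 59, sd = 14)   # stand-in for WGR
ci <- t.test(x, mu = 76)$conf.int
ci                       # lower and upper bounds
attr(ci, "conf.level")   # 0.95 by default
```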
> t.test(WGR,mu=76,alt="two.sided",conf.level=.99)

One Sample t-test

data: WGR
t = -8.5525, df = 49, p-value = 2.766e-11
alternative hypothesis: true mean is not equal to 76
99 percent confidence interval:
53.88313 64.43687
sample estimates:
mean of x
59.16

Our alpha = .01 (99 percent) confidence interval is 53.88 to 64.44.
When you give an interval estimate, you are either right or wrong: the parameter either is in the interval or it isn't. The 95% describes the procedure, not any single interval.
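This point can be checked by simulation. A sketch with simulated data (assumed population values loosely modeled on this lab's sample, not the real football data): draw many samples from a population with a known mean and count how many 95% t-intervals actually capture it.

```r
# About 95% of the intervals produced by the procedure contain the
# true mean -- but each individual interval is simply right or wrong.
set.seed(1)
mu <- 59   # assumed "true" population mean
n  <- 50
covered <- replicate(2000, {
  x  <- rnorm(n, mean = mu, sd = 14)
  ci <- t.test(x)$conf.int
  ci[1] <= mu && mu <= ci[2]
})
mean(covered)   # close to 0.95
```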
The usual confidence intervals correspond to two-sided tests. For a one-sided test you have to compute a one-sided (asymmetric) confidence interval.
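R's `t.test` produces such a one-sided interval automatically when you pass a one-sided alternative. A sketch with simulated stand-in data (again, `WGR` itself isn't reproducible here):

```r
# With alternative = "less", the interval is one-sided:
# the lower bound is -Inf and only the upper bound is finite.
set.seed(1)
x <- rnorm(50, mean = 59, sd = 14)   # stand-in for WGR
t.test(x, mu = 76, alternative = "less")$conf.int
# [1] -Inf and a finite upper bound
```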
> help(t.test)
If one did a one-sided test, one would not provide this type of effect size.