Recently in Chris's Posts--Do not post your entries here! Category

http://www.snopes.com/science/train.asp

The article above examines the claim that placing a penny on train tracks can derail a train. We will evaluate this claim using two of the six principles of scientific thinking.

First, we will evaluate the "extraordinary claims" principle: is the evidence as strong as the claim? In this case, no evidence supports the claim, and no experiment has actually tested it. Essentially, one person said it had happened in the past and people began to believe it. Tragically, people have died placing pennies on tracks, either to flatten the pennies or to try to derail a train.

The second principle we will evaluate is replicability. A proper experiment could easily test whether the result can be duplicated, yet to date no scientific study has reproduced the findings behind this urban legend.

An alternative explanation for a train derailing after someone placed a penny on the tracks is the age and condition of the track itself. A person could have laid a penny on a set of tracks that happened to be old or poorly maintained. After finding out that a train had derailed on that track, the person could have claimed the penny was solely responsible. This explanation ignores the reliability of the tracks, as well as the possibility that conductor error played a part in the derailment.

The principle most useful for evaluating this claim is replicability, because we could design an experiment to test it directly. We could pick 10 sets of train tracks at random and place pennies along them, then run trains down each track at the same speed for the same distance. Based on how many trains derailed, we could judge whether the claim is true or false.

Blue Ribbon Posts


This is where I'll post links to some of the best posts of the week. Good posts should be visually interesting, engaging, and scientifically sound. They should also make connections beyond what was covered in class by discussing additional research in the field, relating the material to real-life situations, etc. As we progress, the standard for excellent posts will rise, so feel free to use these posts as a model but also strive to go above and beyond them.

Writing 1

Lisa Hostetler

Matthew Barg

Lynzi Daly

Writing 2

Hannah Weiger

Brandon Budnicki

Connor Chapman

Writing 3

Anna Shrifteylik

Lisa Hostetler

Ngoc Nguyen

Hey all, I've read a lot of interesting blogs so far and it has made me eager to jump in with my own contributions. Since I'm not being graded, I'll eschew the requirement for links and pictures and focus on content for now.

One post I found particularly interesting was Brad's post on the facial feedback study. Since I'd like to see some of you start responding to one another's posts, I'll start off with a response of my own. Brad raised the issue that perhaps the study was not as valid as it seemed, because a lot of people might simply laugh at anything. This actually raises a very fundamental question in experimental research: How can we tell if differences between groups are due to our manipulation rather than pre-existing differences between individuals? Put differently, what if one group (the smiling condition) was simply stacked with people who will laugh at anything, leading us to falsely conclude that they were laughing because of their facial muscles?

To solve this, experimenters must use statistical methods that compare variance within groups to variance between groups (in addition to random assignment). If people are all over the board on their humor ratings and we only observe a tiny difference between groups, then it is quite likely that the observed difference is due to random chance. On the other hand, if people within each group have fairly similar scores but there is a huge difference between the scores of different groups, then we can be more comfortable saying the experimental manipulation (in this case, the pens in participants' mouths) had something to do with it. The larger our sample, the more power we have to conclude that what we observed was a real effect of the manipulation. Small samples, such as those we had in class, are more prone to chance error.
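To make that logic concrete, here is a minimal sketch of the within- versus between-group comparison, using entirely made-up humor ratings (not our class data). It computes the same kind of variance ratio that formal tests like ANOVA are built on:

```python
import statistics

# Hypothetical humor ratings (0-10) for two conditions; invented for illustration.
smile = [7, 8, 7, 9, 8, 7, 8]   # pen held in teeth (engages smiling muscles)
pout  = [5, 4, 5, 6, 5, 4, 5]   # pen held in lips (prevents smiling)

grand_mean = statistics.mean(smile + pout)
n = len(smile)  # participants per group

# Between-group variability: how far each group's mean sits from the grand mean.
between = n * sum((statistics.mean(g) - grand_mean) ** 2 for g in (smile, pout))

# Within-group variability: how spread out people are inside their own group.
within = sum((x - statistics.mean(g)) ** 2 for g in (smile, pout) for x in g)

# Ratio of between- to within-group variance (per degree of freedom).
# A large ratio suggests the group difference is unlikely to be chance alone.
f_ratio = (between / 1) / (within / (2 * n - 2))
print(round(f_ratio, 1))  # prints 54.5 for these made-up numbers
```

With these invented ratings the groups differ far more than individuals within each group do, so the ratio comes out large; if the within-group spread were big relative to the group difference, the ratio would shrink toward 1 and chance would be a plausible explanation.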

Although we did not employ the statistical methods necessary to determine the significance of our results, those of you who take Research Methods will learn how to. In the meantime, thanks to Brad for critically examining some research we are all familiar with. I hope this post is an adequate explanation of how we could overcome the problem you raised. :)

P.S. For next time, everyone try to embed your links like I did above. The code can be found in the step-by-step guide posted in the Blog Instructions category.
