The New York Times recently published an article on the results of a 30-year observational study that found mammograms don't work as well as the public believes. While the results showed that mammogram screening did lead to increased detection of early-stage breast cancers, "the number of cancers diagnosed at the advanced stage was essentially unchanged." If mammograms really were effective at finding deadly cancers sooner, then cases of advanced cancer should have declined; however, that was not the case.
The Times article details why observational studies are usually hard to trust; it mentions issues with observational data that we have discussed in class, such as confounding variables and the lack of randomization. Interestingly, even after acknowledging these common observational pitfalls, the article still advocates for the study's conclusions, because experimental and longitudinal studies have reached similar findings yet have been ignored for the past decade.
The Times article states, "It is normally troubling to see an observational study posing questions asked and answered by higher science. But in this case the research may help society to emerge from a fog that has clouded not just the approach to data on screening mammography, but also the approach to health care in the United States. In a system drowning in costs, and at enormous expense, we have systematically ignored virtually identical data challenging the effectiveness of...cancer screening...and more."
Overall, the article makes a good point: when the results of experimental methods and trials are ignored, observational methods may help break socially accepted "fact." Although experimental methods use randomization to control for issues such as confounding, and in turn provide more reliable and generalizable results, observational research may serve as a good supplement by offering a less technically laden, more understandable methodology and approach.