November 2012 Archives

Fox News Poll of Obama


Screen Shot 2012-11-29 at 11.37.35 AM.png

The above survey was conducted roughly twice per month from January 26, 2009 to October 29, 2012 by Fox News under the direction of Anderson Robbins Research and Shaw & Company Research. President Obama's job approval rating is charted in comparison to former President George W. Bush's. The poll is based on landline and cell phone interviews with approximately 900 randomly chosen registered voters nationwide. The sampling error is plus or minus three percentage points, with minor weights applied to age, race, and gender but no weighting for political party affiliation.
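For what it's worth, the quoted sampling error matches the standard 95% margin-of-error formula for a proportion at this sample size; a quick sketch (my own check, not part of the article):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion.
    p=0.5 is the worst case, which pollsters typically quote."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(900), 3))   # 0.033 -> roughly +/- 3 points
print(round(margin_of_error(1200), 3))  # 0.028
```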

Screen Shot 2012-11-29 at 11.46.35 AM.png

Screen Shot 2012-11-29 at 11.49.18 AM.png

Screen Shot 2012-11-29 at 11.51.00 AM.png

This screenshot compares Republican opinion to overall opinion. As of the latest October poll, Republican approval is at a near all-time low, with roughly 90% disapproving, whereas Democratic approval stands at 91%, clearly demonstrating very strong Republican opposition to the Democratic President. Independents as of October were split, with 42% approving and 52% disapproving.

The poll graphic is undated, so viewers must assume the article has not been updated since the recent presidential election, a time when presidential approval may fluctuate greatly and not represent everyday views. The chart is purely visual, without any writing or explanation, and the sample, though random, is not necessarily reliable: with fewer than 1,200 participants, it is arguably too small a participation pool. Phone interviews also exclude registered voters who have neither a landline nor a mobile phone, removing opinions that do indeed matter when it comes to 'overall approval' and making the poll hard to generalize.

YouTube & News


Through PRSA, I discovered a Pew Research study about the relationship between viral news searches and YouTube. Digital journalism is accessible worldwide and available in real time; it also provides a dialogue platform for worldwide exchange of views and opinions, along with links to personal footage expanding on the events and issues covered. Internet users are incorporating opinion into news sharing, and journalists are incorporating viewer activity, creating viral attention.

The Pew Center examined fifteen months' worth of the world's most popular news videos on YouTube (January 2011-March 2012). The sample of about 260 videos was built by identifying and tracking the five most-viewed videos each week in YouTube's 'News & Politics' category. Pew analyzed the nature of the videos, the topics viewed most often, and who produced and posted them.
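The sample size is roughly what that tracking scheme implies; as a back-of-the-envelope check (the exact week count is my own assumption):

```python
# Jan 2011 through Mar 2012 is about 15 months, i.e. roughly 65 weeks
weeks = 65
top_per_week = 5

slots = weeks * top_per_week
print(slots)  # 325 weekly chart slots

# A popular video can stay in the top five for several weeks, so the
# number of unique videos is smaller -- Pew's final sample was ~260.
```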

The key findings of Pew's study:
-The most popular videos covered natural disasters and political upheaval with intense visuals.
-Entertaining, ephemeral videos are more popular than information-based videos.
-Citizens supply and produce the most footage.
-Most viral videos mix raw and edited footage and are fairly subjective, with no single on-screen personality dominating.
-YouTube news videos are brief, generally about two minutes long, though lengths vary greatly by poster and topic.

In conclusion, people tend to search YouTube for videos covering current news because of its real-time updates and functionality. People look for brief, concise news on the web, where we spend much of the day for both personal and work purposes (71% of Americans have used YouTube). Viewers also prefer platforms where they determine their own news agenda and consumption, without advertisements and biased information; YouTube (though owned by Google) is not sponsored the way some stations, such as Fox, are. As to whether YouTube is a legitimate journalistic source, that remains unconfirmed given the lack of regulation, ethical standards, and the prevalence of copyright violations. YouTube is looking toward a future of partnerships with Reuters and the like, which I am curious to see. This is the era of technology, and it will only continue to progress with big players such as YouTube.

Link Found Between Child Prodigies & Autism


Ohio State University's Department of Psychology recently published study findings in its 'Research and Innovation Communications' and the journal 'Intelligence', reporting that child prodigies share commonalities with autism. The study was conducted by an associate psychology professor at the university and a Yale student who founded a neurological non-profit organization.

Three of the eight children studied had autism, and half had a first- or second-degree relative with an autism diagnosis, though it is not specified whether those with an autistic relative overlap with the three diagnosed children or form an additional half on top of them. The study also does not specify the severity of autism in each child.

The prodigy group of eight was tightly controlled; the comparison group of 174 adults, randomly contacted by mail, was not. All "child prodigies" were identified via the internet, television specials, and referrals. The prodigy group contained one art prodigy, one math prodigy, four musical prodigies, one music/gourmet-cooking prodigy, and one music/art prodigy; six were male and two were female. "Child prodigy" here appears to be defined as someone "at least younger than 18 years, who is performing at the level of a highly trained adult in a very demanding field of endeavour" (Wikipedia) with an elevated score on the Stanford-Binet intelligence test.

Each child was individually tested by researchers over the course of two or three days. In addition to the Stanford-Binet, researchers administered the Autism-Spectrum Quotient assessment. All eight children scored in the top one percent on the working-memory subtest.
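For context, on a Stanford-Binet-style scale (mean 100, SD 15), the top one percent corresponds to a score around 135; a quick sketch of that cutoff (the scale parameters are standard, but the calculation is my own illustration, not from the study):

```python
from statistics import NormalDist

# IQ-style scale: mean 100, standard deviation 15
iq = NormalDist(mu=100, sigma=15)

cutoff = iq.inv_cdf(0.99)  # score at the 99th percentile
print(round(cutoff, 1))    # about 134.9
```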

The problematic issue with this study is the tight control over the prodigy group versus the near-total lack of control over the randomly mailed comparison group. Ages were also not given for those tested (though presumably all under 18), nor for the adults, who could range anywhere from 18 upward. Having two groups qualifies it as a valid design, but the wide variation in control (from loose to overly confined) and the missing information invite skepticism and unreliability. The study layout, the environment (a mailed flyer at home versus an interview room), and the number and presence of researchers may also have influenced the child prodigies.

Screen Shot 2012-11-26 at 8.21.08 PM.png
Screen Shot 2012-11-26 at 8.21.14 PM.png

The study still proves nothing more than its title claims: there are a few links between autism and child prodigies. The two groups simply share similar traits at this point, and the findings cannot be considered proof until further, more extensive research is done and verified.

NYTimes: Vitamin D & Type 1 Diabetes via SpringerLink


Screen Shot 2012-11-26 at 7.25.01 PM.png

The above article was published recently by The New York Times about the correlation between low vitamin D levels and increased risk of Type 1 diabetes. The aim was to test the hypothesis that vitamin D deficiency is associated with Type 1 diabetes, specifically in the active-duty military personnel studied.

The study was not a random sample: every participant was a military member, selected on the basis of vitamin D level, an assortment of low and high. Between 2002 and 2008, the 'nested case-control' study matched and analyzed blood samples from 1,000 subjects. The following image explains and also supports the reliability of the testing, as the serum reading dates were consistent across all participants' samples.
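In a nested case-control design, each case is matched to controls drawn from the same cohort, often on the timing of the blood draw. A toy sketch of date-based matching (the function name and the 30-day window are my own illustration, not the study's actual protocol):

```python
from datetime import date, timedelta

def match_controls(case_date, candidate_dates, window_days=30):
    """Keep candidate control draw dates within +/- window_days of the case's draw date."""
    window = timedelta(days=window_days)
    return [d for d in candidate_dates if abs(d - case_date) <= window]

case = date(2005, 6, 1)
pool = [date(2005, 5, 20), date(2005, 9, 1), date(2005, 6, 15)]
print(match_controls(case, pool))  # keeps the May 20 and June 15 draws
```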

Screen Shot 2012-11-26 at 7.33.20 PM.png

The results support the researchers' hypothesis, though they cannot prove that vitamin D deficiency causes diabetes. The results are reported as serum concentrations, which gives the high and low readings meaning relative to one another (i.e., 17-23 nanograms vs. 40+).

Screen Shot 2012-11-26 at 7.39.23 PM.png

Cars influence voting?

A recent survey conducted via Facebook asked 600+ people identifying themselves as Democrat/liberal or Republican/conservative what they drive. Respondents selected the type of car they drive, and the published results were based on each respondent's identified party and car selection, categorized into car type:

Primary findings show that 29% of Republicans/conservatives drive a pickup truck, while 27% of Democrats/liberals drive compact cars. The two parties like SUVs and crossovers about equally: 20 percent for Republicans/conservatives vs. 18 percent for Democrats/liberals. Members of both parties also dislike hybrid/alternative-fuel vehicles at similar rates: hybrids came in last place for Democrats and second-to-last place for Republicans, just above the van/minivan segment. Still, at 6 percent among Democrats versus 3 percent among Republicans, the hybrid segment is roughly twice as popular with Democrats.

Though this is just a summary of the published article, the survey seems too simple (only one multiple-choice question plus the account member's 'Political View'). The independent variable is party affiliation; the dependent variable is car type. The survey does not reveal the respondents' ages or their likelihood of actually going to the polls if over 18, and it excludes drivers without Facebook accounts. The selection could have been random, but it is not truly reliable: it is not a representative population, just a poll of one social network's users. I do not know how the question was worded, but claiming that members of both parties 'dislike' hybrids is a negative assumption; the cars may not be disliked but simply not owned, due to external factors such as cost, availability, practicality, and size.

Boynton survey


Screen Shot 2012-11-08 at 8.24.43 AM.png

The above image is an e-mail I received from Boynton asking for survey participation. In order to reduce [nonresponse] error, they explained the source, purpose, and brevity of the survey, provided an incentive, notified me in advance with no expiration date, and assured my anonymity and confidentiality.

It is obvious that her survey is intended to gather personal and political values and beliefs of University-associated females regarding abortion, healthcare, and sex. She did a good job on the survey layout and design (for the most part) concerning sensitive topics and personal, typically unshared information.

Beyond this, I had a few concerns with question design. A few examples:
Screen Shot 2012-11-08 at 8.20.56 AM.png
I consider this question to be leading. It assumes the participant has been in a sexual or romantic relationship, which perhaps not all have, and it is also somewhat irrelevant for participants who haven't had a "relationship" recently or currently. What is considered "recent"? And what is considered a "relationship"?

Screen Shot 2012-11-08 at 8.17.25 AM.png
The two follow-up questions here are unclear. Perhaps my screenshot should have included questions 25a-c to support my qualms; the section was unclear in that I did not know whether the questions about male and then female partners concerned only the past 12 months. The questions varied between 3-month and 12-month windows and were a bit jumbled and confusing regarding the time span.

Screen Shot 2012-11-08 at 8.12.52 AM.png
This multiple-choice nominal measurement is visually unclear to me (the category labels sit on top), and it needs either a separate category or an explanation box for someone like me: I learned about some of the topics from religious leaders, but I attended a private Catholic K-8 school, where my teachers counted as religious leaders only within the school, not in the outside world, since they are not religious experts.

Screen Shot 2012-11-08 at 8.09.48 AM.png
This question almost deterred me from completing the survey because I began to wonder if all survey questions would be so lengthy and wordy.

Questions about changing attitudes should have been included, too. I may have learned about such topics in a conservative, religious manner, but my views have changed under external pressures since eighth grade.

This survey definitely has external validity as field research. However, with topics so sensitive, the researcher must consider participants' self-awareness and how much they are willing to admit in a survey.

Hybrid Car Peer Survey via Facebook


Recently, a friend of mine asked her Facebook friends to take an online survey for one of her classes. She is a Journalism and Psychology student at UW Madison and used Madison's Qualtrics survey platform.

Concerning demographics, she asked:
-Age (fill in the blank)
-Gender (choose Male or Female)
-Zip code (fill in the blank)
-Occupation (fill in the blank)
-Race/Ethnicity (6 options and Other)
-Highest level of Ed. (choose GED, Some college, 2-year Associate's, 4-year Bachelor's, Master's, Doctoral, or Professional)
-Household size (fill in the blank)

Then came more fill-in-the-blank questions about previous car ownership, car brand values, car technology, and attitudes toward hybrid cars, followed by specific questions concerning Chevrolet and its hybrid Volt. She used a mixture of open-ended and multiple-choice questions.

The survey seems intended to elicit qualitative answers, though the questions were quite broad and most were answered in three or fewer words. Understandably, an online survey cannot pose follow-up or discussion questions for richer qualitative data. This research was evaluative, summative, primary, and quantitative, but somewhat humanistic. Its content validity is strong but cannot be guaranteed, as the instrument was not reviewed or approved by experts or panels.

My answers were descriptive, as her survey was description-oriented. I provided her with my demographics and a list of descriptive words about my attitude toward, and knowledge of, the Chevy Volt and hybrid cars in general. It is "what?" research rather than "why?" research, though I am sure she will conduct follow-up research to answer the "why's" behind participants' "what's".

We were not informed of the purpose of the survey, but being told it would be simple, quick, and painless probably increased friends' and acquaintances' willingness to participate. That said, there is probably some public resistance, along with other negative aspects. The downsides of her approach are the lack of in-depth questions, the brevity of the survey, and the lack of control. However, it was free to conduct, with immediate delivery and rapid data processing, and could reach a large population (though not necessarily a diverse one or an ideal sample in size or quality). It is a good method for beginning research analysis.

SJMC Research


The following is an e-mail I received from an SJMC graduate student:

We have a new study waiting for your participation. The study, titled "Thoughts and Feelings Toward Current Brands and Current Issues" (conducted by Whitney Walther and Dr. Heather LaMarre), is currently recruiting 150 participants. This study involves two online surveys.

The online research study was conducted by a grad student/TA for a school/senior project, though neither specific purposes nor motives were listed. It was a nonrepresentative study of convenience, using other SJMC students to whom she had easy access and who she knew would probably be willing to participate. The survey did not contain many open-ended questions and could be considered flawed: communication was nonverbal and impersonal, perhaps not all participants were 100% honest, and no cause-effect relationship was established to produce completely accurate results. Though conceived as an external field study, it cannot be presumed to represent the "real world".

The research could lead to qualitative findings, or to more qualitative questions for future surveys or experiments. A survey was the right instrument for this particular student because she wanted to describe levels of beliefs, awareness, and perceptions, and to draw comparisons. She was not aiming to make predictions or determine a cause for an effect, in which case a survey would not have been appropriate. I, too, would have conducted a survey in this scenario, in order to compile various findings and make some sort of educated generalization.

About this Archive

This page is an archive of entries from November 2012 listed from newest to oldest.

October 2012 is the previous archive.

December 2012 is the next archive.

Find recent content on the main index or look in the archives to find all content.