By swan1703 on December 2, 2012 10:50 AM
In a recent email, Professor Ball invited us to critique a survey put together by a fellow J-school student. I figured I would take this opportunity as a way to demonstrate how developed my research skills have become over the course of this semester.
The survey was very short and consisted of nine open-ended questions. The responses will be hard to code and analyze on a mass scale, but that's OK, as she wants our opinions to help guide her in developing a questionnaire for a future study. All of the questions referenced bottled water, and it is clear the student is trying to gauge bottled water use and perceptions of its use among college students.
It leaves me wondering what her research questions and hypotheses are. Obviously that is privileged information as participants only get a snippet of that kind of information when they get briefed before taking surveys. Yet I am very interested in what they would be as I am helping to formulate an ad campaign about bottled water in another class.
That ad campaign is actually going to expose some disturbing realities behind bottled water. For example, one fact we uncovered is that 40% of bottled water is actually taken from municipal water sources, also known as "tap water," and bottled water can be distributed even if it doesn't meet the quality standards of tap water.
I wonder if this student should expose respondents to some of these facts and have them re-take the survey? I guess that would depend on his/her research objectives. I would do exactly that, however, if I wanted to measure the effects of my group's ad campaign.
Most of us here in the Journalism School are very familiar with Stephen Colbert. He and counterpart, Jon Stewart, have essentially created a brand new way of bringing the news to the world. From a strategic communicator's standpoint, they've hit the jackpot as they've gotten people to listen. So it's only appropriate that I include a blog partially devoted to at least one of them.
Years ago, Colbert coined a new word: "truthiness." He described it as truth that comes from the gut, rather than a book. The word turned some heads in academia, and ultimately found its way into the dictionary. Merriam-Webster now defines it as "the quality of preferring concepts or facts one wishes to be true, rather than concepts or facts known to be true." Unfortunately, for many that means irrationally denying cold, hard researched facts in favor of their own intuition or gut feelings.
As future professionals who will be relying on research to influence and make multi-million dollar decisions, we must be very wary of our own feelings of truthiness. Just because we may believe that the best Unique Selling Proposition for a car might be the leather seats doesn't mean we should focus an ad on the seats. If research shows that the target audience truly cares about how fast a car can go, then that is what the ad should focus on.
The same can go for a PR message. Just because we believe that a company's mission statement is right doesn't mean that is the mission statement we should present to the whole world. We must present a message that research shows is appealing to a mass audience, one that will present the company as friendly to its customers.
Only in the most obvious of circumstances should we be relying on our intuition. So long as we have the time and money needed to research the data required to make decisions, then we should be relying on research more than anything.
This Gallup poll measured how much the average consumer spent per week in the weeks leading up to Thanksgiving, and per day in the three days before it. The data is gathered using Gallup's familiar random-digit dialing, and they reached 2,000 respondents for the first poll and 1,500 for the second.
All the data is, in Gallup's words, "self-reported," and my only question is why did they do primary research for this with a phone survey? Shouldn't they have done some secondary research and measured sales from retailers and other businesses?
Gallup does admit the "data aligns with chain store sales," which does offer some reliability to the poll. However, I feel as though a phone survey simply is not a very appropriate way of measuring consumer spending.
For starters, I don't think the sample is right. The average American doesn't pay enough attention to their budget to accurately answer Gallup's questions. Businesses, on the other hand, pay professionals to track every penny in sales. They could offer exact figures for how much consumers have spent in their stores, which could then be averaged and fairly generalized to the overall consumer population. Businesses make a much more valid sample.
Furthermore, the polls actually measured a dip in consumer spending compared to previous years, yet other Gallup polls (including some mentioned in my blogs) have actually measured increasing consumer confidence and outlook on things like jobs and the economy. If consumers feel that spending is OK again, then why are they spending less this year? The data gathered may have been consistent with chain store data, but it is not consistent with other related data.
Lastly, Gallup's summary cites the rise of Cyber Monday and the upcoming "fiscal cliff" as reasons that consumers didn't spend as much this time around. Yet these are two contradictory explanations. If consumers were truly worried about a "fiscal cliff," would they not spend less on Cyber Monday as well? And if they were saving for Cyber Monday, wouldn't they not be worried about the "fiscal cliff?"
I feel as though this was a poor poll, and the researchers should have been gathering their data from secondary sources to measure consumer spending. It just goes to show that sometimes the answer doesn't lie in your own data, it lies in someone else's.
By swan1703 on November 30, 2012 6:12 PM
The above ad is for the video game Battlefield 3 (BF3). It was run heavily leading up to its November 2011 release. The spot made multiple appearances during male-focused TV broadcasts such as sports, but it also made just as many rounds on the web.
One of those places it appeared was on the web's most popular video site, YouTube. EA bought up the whole website for what seemed like the entire month of October to promote this video game.
The thing about YouTube, however, is that it allows viewers to skip ads after five seconds. I've noticed this isn't the case for some ads now, as more and more ads seem to make you sit through the full 15 or 30 seconds. However, a year ago you could skip virtually any ad at any time, this BF3 spot included.
In viewing these ads, I noticed that Electronic Arts (EA), the publisher and promoter of Battlefield 3, found a way around the ad skip. The first five seconds of the ad (the part viewers can't skip) consisted only of a very intense animation of the game's cover, backed by an equally intense audio track. The segment of the above video from 1:00-1:05 is the exact five-second intro they would use. For most YouTube viewers, this is the only five seconds of the ad they would ever see before skipping it. These five seconds did exactly what an ad is supposed to do: get the advertiser's message out.
As an advertising major, I pay close attention to these kinds of things, especially when it comes to digital, as the book on how to advertise on digital is still being written (and whoever writes it gets rich). As far as ads on the fourth-biggest website in the world go, BF3 is the only one I've seen make use of those first five crucial seconds. The rest just run whatever 30-second spots they've made for TV, and none get their message out before the first five seconds are up.
What I would like to do is devise a research experiment to answer the research question: How effective is EA's 5-second intro strategy on brand recall? In the interest of keeping things somewhat brief, I'm not going to go too far into specifics with regards to pre/post-testing and how to analyze the results, but I will go over the basic outline of the study.
My hypothesis is that there is a positive relationship between exposure to the 5-second intro strategy and ad recall. I believe that viewers exposed to the intro are much more likely to be able to recall an ad than viewers who are not.
I would develop a series of ads for a series of products that participants would be tested on for recall. Each ad would also have a 5-second intro developed for it. I would then gather five groups of about 20 participants each, and label them A, B, C, D, and E.
Group A would be pre-tested on their knowledge of the selected products. I would then instruct them to spend one hour browsing YouTube videos. I would ensure Group A is only exposed to ads that employ the 5-second intro strategy and allow the participants the option of skipping them. They would then be post-tested on the same products to measure recall.
Group B would be pre-tested as well, and then set loose on YouTube for the same amount of time. This group would only be exposed to ads without the 5-second intro strategy, and would be allowed to skip them. They would then be post-tested.
Group C would not be pre-tested in order to ensure the pre-test isn't influencing the results of the research. They would get the 5-second intros like Group A, and then be post-tested.
Group D would skip the pre-test too, then get ads without the intros like Group B. They would then be post-tested.
Group E would be the control group. They would only be post-tested, and not be pre-tested or exposed to any variables.
It's important to keep in mind that anyone who buys a video ad on YouTube also gets a banner ad next to the video. This study is meant only to measure the effects of the 5-second intro strategy. As a result, this banner ad would be removed so as not to influence the results of the study.
The rest of the site would have to be dumbed down as well. The comments section would be removed, and the recommended videos section would recommend the same video for all participants, regardless of what video they just watched. All this to ensure no other variables can manipulate the results.
In-depth questionnaires/surveys would be developed for the pre/post-tests and they would collect both quantitative and qualitative data. The data would then be appropriately coded and examined to either validate or reject my hypothesis.
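For what it's worth, the five-group layout above is essentially a classic Solomon four-group design with an extra no-exposure control tacked on. Here's a minimal sketch of the design in Python; the dictionary keys and flag names are my own shorthand, not data from any real study:

```python
# Hypothetical layout of the five-group recall study described above.
# Every group is post-tested; the three flags capture how groups differ.
design = {
    "A": {"pre_test": True,  "sees_ads": True,  "five_sec_intro": True},
    "B": {"pre_test": True,  "sees_ads": True,  "five_sec_intro": False},
    "C": {"pre_test": False, "sees_ads": True,  "five_sec_intro": True},
    "D": {"pre_test": False, "sees_ads": True,  "five_sec_intro": False},
    "E": {"pre_test": False, "sees_ads": False, "five_sec_intro": False},  # control
}

# Comparing A vs. B (and C vs. D) isolates the effect of the intro;
# comparing A/B vs. C/D isolates any pre-test sensitization;
# E provides a no-exposure baseline for recall.
for group, conditions in design.items():
    print(group, conditions)
```

The reason for Groups C and D is exactly what the design above makes visible: if A and B recall ads better than C and D across the board, the pre-test itself is priming participants, and that effect can be subtracted out.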
I'm very confident that my hypothesis would be supported, and the ad industry would hail me as a hero for recognizing such an ingenious ad strategy. I would go down in history with ad greats like Bill Bernbach and Ridley Scott, and forever be showered in riches... only to wake up from what surely would have been such a very good dream.
I was surprised to read this blogger's piece about big data in the workplace. It seems to me that there is no centralized database for sharing data within companies.
The common practice for researchers gathering data is to pick out only the relevant data they need to accomplish their research objectives. Be it gaining consumer insight, measuring the competition, or increasing sales, the researchers take data relevant to their goals and throw what they don't need away.
Unfortunately, it's as the old saying goes, "one man's trash is another man's treasure." In the world of research, this trashed data can benefit researchers and analysts in other parts of the company. Instead of being able to use the already-gathered data of their coworkers, they must take the time and resources to gather their own.
The author proposes the creation of a "customer-centric" and "cross-departmental collaboration" of research-sharing that would ensure no data would ever go to waste. He calls for the creation of the Customer Experience Officer (CEO) to facilitate these new models of information-sharing within big business.
The reason I found this all so very surprising is because I learned in another research class that most major companies have their own libraries. To my understanding, these libraries are havens for company records and all sorts of other information pertaining to the company. I guess, however, that only data deemed relevant to the current researcher makes it into the library. If that's the case, then I think companies need to adopt this data-sharing model immediately.
Playing on the spirit of presenting research in a fun and entertaining manner using infographics like the ones we made in class, I've decided to blog on this infographic which presents a variety of random facts about the NFL.
When it comes to information, the NFL has no shortage of it. They record data and statistics with such meticulous detail that a person suffering from OCD would seem completely normal in comparison. Every kick, catch, pass, and run gets jotted down for the record books.
When it comes to presenting such data, the NFL takes a note from our Moodle reading, "Research With Legs." During broadcasts they show only statistics that are relevant to the current game, often with footage of past feats. For example, when Drew Brees was getting ready to break Johnny Unitas' consecutive-games-with-a-passing-TD record earlier this season, they showed the number of games Johnny Unitas went with passing TD's as well as some video of Unitas playing the game. They did not show irrelevant stats like the longest field goal or longest kick return.
On the web, however, journalists, analysts, and bloggers all struggle to present information about the NFL in an intriguing light. Unless there's some information to put out with direct relevance to current events regarding the game, viewers are not likely to stop and view it. So many must resort to eye-catching infographics like the one linked to above. The information contained in this one is completely random, yet the interesting graphic helps keep the attention of the viewer.
This Pew Research poll surveyed 1,000 Americans on how they used their cell phones while shopping for the 2011 holiday season. According to it, 52% of Americans used their cell phones in some way to help make purchasing decisions while in a store.
38% of respondents phoned a friend for purchase advice, while 24% looked up product reviews and 25% checked for better prices in other stores. 33% of respondents did at least one of the latter two actions.
The data reflects the digital divide in that younger and urban/suburban respondents were much more likely to use their cell phones while shopping than their older and rural counterparts.
What I don't understand is, why did Pew need to run a survey to determine this information? With all this talk of digital/text analytics, and cookies tracking our every (albeit electronic) move, couldn't they just have used one of the analytics tools we've learned about in class to gather this information? Perhaps they would not be able to determine if shoppers actually called their friends for advice, but they certainly could have determined whether they checked product reviews and other prices. Our phones record our geographical location at all times, so they could determine if they were in a store or not. Of course, the practice of digital analytics is still in its infancy, so maybe we'll just have to wait a few decades for such detailed research.
One last thing I must add is about the effects of cell phones on consumers and the strategic communicators who try to influence them. This poll sheds much light on the growing prevalence of mobile in consumers' everyday lives, and I believe that we, as strategic communicators, need to be aware of this. Mobile is the future now. No longer will a "cool" or "hip" Super Bowl ad result in a direct purchase. Consumers now have the ability to bargain, and it's right in the palm of their hands. What we must remember is that the tool they use to bargain with (their cell phone) is just another channel of communication. It is a channel which, like all channels, we can manipulate to our benefit. We just need to figure out how.
Last September, I watched Stephen Colbert do a piece on a Fox News poll concerning the election. Unfortunately Fox News seems to have taken down the poll (likely because it is quite a stab at their credibility) and the only remnants remaining of it on the internet are from the above blogs.
In an attempt to determine who would win the swing state of Ohio between Obama and Romney, Fox launched a poll among its own viewers. It determined that Romney would lead Obama in Ohio 90% to 10%, effectively giving Romney the White House.
If this isn't an example of sampling error, I don't know what is. The sample is a nonrepresentative, convenience/haphazard sample, as Fox News obviously took what was available. It would be fair if they generalized these results to their own viewers, but they generalized them to the entire US population. So not only did they discredit their entire poll, but they unethically presented the results of research.
Luckily, political pundit and media critic Stephen Colbert stepped in and called b***s*** on the poll, saving America, yet again, from the lamestream media (his words, not mine).
According to a recent Gallup poll, Americans' perception of the job climate in the US has changed. Unlike climate change pertaining to the environment, job climate change has actually changed for the better.
24% of Americans say it is now a good time to find a job, which is nearly three times as many as a year ago (8%).
The samples can be fairly and accurately generalized to the populations they are representing. Both polls were phone polls that used random-digit dialing to contact their participants. The sample sizes were plenty large enough, as well, at 1,015 for the first and 527 for the second.
The poll is also valid in terms of question wording. The polls measured the perceptions of participants by asking one simple question: thinking of the job situation today, would you say that it is now a good time or a bad time to find a quality job? The question is understandable, and not leading or loaded, indicating the poll measured what it is supposed to measure.
The poll is also reliable in that the results measured consistently with other polls. These polls are conducted year-over-year, and there are (sometimes drastic) changes in results, which could lead researchers to believe that the results are unreliable. However, Gallup compares their results to other polls relating to the economy in order to show reliability.
These two polls are the Economic Confidence Index and the Job Creation Index. The first measures Americans' confidence in the economy and the second measures how many newly-hired Americans there have been in the past year. The results of all polls are correlated (especially Job Creation vs Job Climate Perception) and add to each other's reliability.
Gallup admits a possible sampling error of +/- 3 percentage points at a 95% confidence level. In its summary, Gallup theorizes that the more positive perception of jobs and the economy could result from improved unemployment rates in past months. Of course, to establish causality in that matter, Gallup would need to launch a research experiment.
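That ±3-point figure checks out against the textbook margin-of-error formula for a simple random sample. Here's a quick sketch in Python; the function name and the worst-case p = 0.5 assumption are mine, not Gallup's:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample at roughly 95% confidence.

    Uses the conservative worst case p = 0.5, which maximizes p * (1 - p).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Gallup's larger sample of 1,015 respondents
print(f"n=1015: +/- {margin_of_error(1015) * 100:.1f} points")  # about +/- 3.1

# The smaller sample of 527 carries a wider margin
print(f"n=527:  +/- {margin_of_error(527) * 100:.1f} points")   # about +/- 4.3
```

Note that pollsters typically also adjust for weighting and design effects, which is likely why a round ±3 gets reported rather than the raw 3.1.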
I don't understand the practice of text (or digital) analytics as much as I would like, since we haven't spent too much time on it, but I wanted to blog about this article because it relates very much to what our guest lecturer from Carmichael Lynch was talking about. It was written about a year ago by Tom H.C. Anderson, and he addresses some problems relating to text analytics that our lecturer also spoke of in class.
He describes current text analytics as being "pure play" and not specifically tailored for market research, as most have been developed for the defense, intelligence, and financial industries. This can lead to errors in analysis of social media, blogs, and other texts on the web in relation to a market researcher's research goals. Our lecturer mentioned that a tweet could mention a business, but it's hard to determine whether the tweet was positive, negative, or completely irrelevant without actually reading the tweet (something that's impossible to do at scale, as a researcher could have thousands or even millions of tweets to review).
The author calls for market researchers to develop their own text analytics software, made to meet the needs of the market researcher and to accurately analyze the wealth of qualitative data on the world wide web. He is also in the business of developing his own software for his company, Anderson Analytics. The software is called OdinText (linked to above), and promises to properly analyze text data in order to meet marketing and communication needs.
I feel as though text/digital analytics is the future of strategic communications research, and developing the proper tools for practicing it is vital to that future. Our lecturer mentioned that 23% of all US advertising is now digital, and the author states that 85% of all information in the world is available via text. The world is "digitizing" (trademark), and in order for communications professionals to do their jobs, we must "digitize" right along with it.
By swan1703 on November 28, 2012 3:07 PM
*** Use this link if the embedded video didn't work ***
The above video is a comic strip created by grad students from Georgia State University. It shows the differences between quantitative and qualitative research in a more engaging way than your average textbook or PowerPoint presentation.
Qualitative and quantitative research are portrayed as two separate superheroes named Captain Quan T. Tative and Dr. Qual I. Tative (or Quan and Qual for short). They fight villains (aka research problems) with their special research powers. These powers, as you can guess, are the powers of quantitative and qualitative research.
The villain (or research problem) in this episode is not the consumer. Rather, it is an ethical dilemma posed by the diabolical Dr. D Plagiarism (Dr. DP), who is looking to plagiarize the research of other people and pass it off as his own. The evil doctor is seen in the library stealing the research of James Stelheimer, Ph.D., compelling our heroes to act.
The heroes jump into action, detailing exactly how they will use their respective skills to bring down Dr. DP, and it echoes exactly what we learned in class. Quan says he will gather data ABOUT Dr. DP, while Qual says he will gather data to figure out HOW to destroy him. Quan will be measuring the "what" and "how much" surrounding the problem, and Qual will gain a deeper understanding of the "why" and "how."
Like any good researcher, the heroes employ a team of cohorts to do all their research for them (I'll bet they're interns or recent college grads). Quan's team comes up with a variety of quantitative data including his height/weight, hair type, and location. They devise a plan to ambush him with their "villain profiler" (a survey I assume) to come up with more data about his past offenses so they can determine a fair punishment.
Qual's team plans to "interrogate" Dr. DP, by doing an in-depth personal interview (gimminy-gillickers!). They plan to measure his actions, thoughts, beliefs, motives, and attitudes in order to come up with an appropriate plan of action.
As is the way of superhero stories with more than one protagonist, our heroes butt heads over whose methods are the best. Coincidentally (or not), this mirrors the research world as researchers are constantly balancing and choosing between the two methods based on things like access to research resources, budget, time constraints, and even personal preferences. Luckily, our interns (I mean cohorts) step in, reminding our heroes that their work is "all for the good of research!"
The heroes overcome their differences and unite into an Avengers/Justice League hybrid team (I'm a superhero nerd, it's best to google those if you don't know what they are). They call themselves The Mixed Methods Research Heroes and set out to stop Dr. DP together. They lock him up, while also reminding viewers that while both methods of research are good, they're best if they're utilized together.
Now, if you'll excuse me, I'm going to go write a sequel to this featuring Qual and Quan's arch-nemesis, Sampling Error.
I know this article doesn't have much to do with communication research, but it is research. Plus, it's about cats, and I love cats too much to ignore them.
A recent University of Georgia study determined that our beloved feline critters are really nothing more than murdering psychopaths. They attached "kitty cams" to 60 pet cats in Georgia and monitored their activity at night.
They found that the cats killed an average of 2.1 animals per week, and it was apparently just for sport as they would only eat their kills 30% of the time. 21% of the time, they would bring their kill home (I can only imagine they thought it would be some kind of trophy).
I can't help but think there has been some bias introduced to this study. The kitty cams were outfitted "with LED lights." I'm familiar with night optics as I've spent some time in the military, and LED lights are not night optics, they are flashlights. It is possible that the light given off by the camera (while small) offered an unfair hunting advantage to the cats by helping them see, and by freezing their prey (as animals tend to freeze when suddenly immersed in light). It's possible these cats caught more prey than their non-recorded counterparts, thus their average kills per week is too high.
Also, it is possible that the critter cams may have been bulky enough to inhibit the cats in some way (they basically look like shock collars), and perhaps may have negatively affected their hunting. It is just as possible that the cats caught fewer critters than their unrecorded counterparts, and their average kills per week is actually too low.
The news article makes no mention of possible errors like this in the research, and does not link to an academic report that would discuss such errors. Of course, this is the norm with news organizations. They tend to present things as absolute, gospel-like fact regardless of possible errors. Then, when they're proven wrong, they ignore it and pretend like it never happened. All in the name of credibility, I guess.
Nevertheless, there is only one thing I can say with certainty from this research, and that is that my cats will never go outside again.
The Army Times, a newspaper not owned by the US Army, but rather by the same publicly-traded media company that owns USA Today, recently conducted a Presidential poll among members of the military. It found that Romney is leading Obama 2-1.
The newspaper conducted the poll via email, contacting only subscribers of Military Times newspapers (Army Times, Air Force Times, etc.). Most subscribers of these newspapers tend to be senior-enlisted/senior-officers, thus the poll is "skewed slightly toward servicemembers who have made the military their lifelong career." The respondents were also overwhelmingly white (80%) and male (91%). In describing this measured military-demographic, the Army Times appropriately labelled it the "professional core of the military."
In total, 3,100 servicemembers responded to the poll. Of those, 66% supported Romney, while 26% supported Obama. On issues facing the nation, the economy is at the top of servicemembers' minds, with 66% of them rating it the number-one issue in the election. In contrast, only 1% rate the war in Afghanistan as the nation's biggest issue, compared with 16% in 2008. One Army captain cited the troops' salaries and their ability to get a job should they separate from the service as the main factors behind their concerns about the economy.
The online version of the poll does not list a sampling error, or confidence level. However, I read the print version (which compelled me to blog about this particular poll in the first place), and they were listed there.
Furthermore, the poll established validity in that it did properly measure members of the military. The newspaper can distinguish between its military and civilian subscribers by sending emails only to subscribers with addresses ending in @us.army.mil, @us.navy.mil, etc. To get an email address like that, you must be in the military.
Lastly, the poll established reliability as it showed the same Conservative slant in the politics of servicemembers that polls have shown in the past. According to UNC-Chapel Hill Professor of Military History, Richard Kohn, "the poll really tracks with the traditional [conservative] views of the military."
7-Eleven is, once again, holding its 7-Election Presidential Election Poll. Just like in years past, 7-Eleven customers can "vote" for either Obama or Romney by going into a 7-Eleven and buying a cup of coffee. To vote for Obama, they fill up the blue cup labeled Obama. To vote for Romney, they fill up the red cup labeled Romney. The results are updated daily and posted to the election section of the company's website. As of Oct. 26, 2012, Obama is leading with 59% of the vote.
If this isn't an example of poor sampling, I don't know what is. There is no way this so-called "poll" is generalizable to the overall population of the United States. Sixteen states currently have no 7-Eleven, meaning the votes of citizens in those states are not measured at all. Furthermore, 7-Elevens are commonly found in low-income, urban areas, automatically excluding rural citizens from its measures. Lastly, within that already narrowed sample of low-income, urban citizens living in the 34 states lucky enough to have a 7-Eleven, only customers who buy coffee get their votes counted. And some votes get counted more than once, since the poll allows a customer to "vote" repeatedly by buying more than one cup of coffee.
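To see just how badly a convenience sample like this can miss, here's a toy simulation (every number in it is invented purely for illustration): imagine an electorate split roughly 50/50 overall, but where the urban shoppers a store-based poll can actually reach lean noticeably toward one candidate. Polling only them recovers the shoppers' preference, not the nation's:

```python
import random

random.seed(0)

# Invented numbers for illustration only: a toy electorate where urban
# voters favor candidate X and rural voters oppose, netting out to ~50/50.
def make_voter():
    urban = random.random() < 0.5
    supports_x = random.random() < (0.59 if urban else 0.41)
    return urban, supports_x

voters = [make_voter() for _ in range(100_000)]

# Ground truth across the whole electorate
true_support = sum(s for _, s in voters) / len(voters)

# A store-based convenience sample only reaches urban shoppers
urban_sample = [s for u, s in voters if u][:3000]
biased_estimate = sum(urban_sample) / len(urban_sample)

print(f"true national support:   {true_support:.1%}")    # close to 50%
print(f"convenience-sample poll: {biased_estimate:.1%}") # close to 59%
```

The convenience poll isn't "wrong" about the people it reached; it's wrong the moment its result is generalized to a population those people don't represent, which is exactly the sin here.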
The results are published in a somewhat fair manner. The map on their website only shows results from states that have 7-Elevens, and if the viewer knows the nature of 7-Eleven locations, then they will know exactly what kind of voter the poll is measuring (low-income, urban). However, their website refers to the results as "national results," misleading viewers into thinking that the poll could accurately predict the forthcoming election. We can only hope that no voter would take this poll seriously.
Perhaps I shouldn't be too hard on 7-Eleven, though. After all, this is simply one company out of thousands cashing in on the free hype and publicity of the Presidential election to market their products. And I'm willing to bet they're selling a lot of coffee because of it.
When it comes time to make decisions regarding ad campaigns, advertisers tend to rely on the most demanding of the epistemological methods of knowing: empiricism (observation and research). Before they spend millions of dollars of their clients' (or their own) money, they prefer to test ads to see how they'll resonate with audiences. They do this to appease their worried clients and bosses, who don't want to be held responsible for multi-million-dollar campaigns that are deemed failures. Ads that test successfully go out, while the ones that don't are sent back to the creative (and now disappointed) minds who dreamed them up in the first place.
Advertisers don't always rely on the painstaking process of research, however. Sometimes, they rely purely on intuition (their gut instincts). The advertisers of Allstate's in-house marketing arm relied on exactly that for their "Mayhem" ad campaign. Allstate's Senior Vice President of Marketing, Lisa Cochrane, talked about the wildly successful campaign at a meeting for the Association of National Advertisers, "I knew that 'Mayhem' was the right idea at the right time. I could feel it." The company did absolutely no surveys, focus groups, or experiments to justify running the campaign. They just did it, and it worked.