November 2012 Archives

Ad Recall on YouTube

| 0 Comments

The above ad is for the video game Battlefield 3 (BF3). It ran heavily leading up to the game's November 2011 release. The spot made multiple appearances during male-focused TV broadcasts such as sports, but it made just as many rounds on the web.

One of those places it appeared was the web's most popular video site, YouTube. The game's publisher bought up the whole site for what seemed like the entire month of October to promote the game.

The thing about YouTube, however, is that it allows viewers to skip ads after five seconds. I've noticed this isn't the case for some ads now, as more and more seem to make you sit through the full 15 or 30 seconds. A year ago, though, you could skip virtually any ad at any time, this BF3 spot included.

In viewing these ads, I noticed that Electronic Arts (EA), the publisher and promoter of Battlefield 3, found a way around the ad skip. The first 5 seconds of the ad (the part viewers can't skip) consisted only of a very intense animation of the game's cover, backed by equally intense audio. The segment of the above video from 1:00 to 1:05 is the exact five-second intro they used. For most YouTube viewers, these were the only five seconds of the ad they would ever see before skipping it, and they did exactly what an ad is supposed to do: get the advertiser's message out.

As an advertising major, I pay close attention to these kinds of things, especially in digital, where the book on how to advertise is still being written (and whoever writes it gets rich). As far as ads on the fourth-biggest website in the world go, BF3 is the only one I've seen make use of those first five crucial seconds. The rest just run whatever 30-second spots they made for TV, and none get their message out before the first five seconds are up.

What I would like to do is devise a research experiment to answer the question: what effect does EA's 5-second intro strategy have on ad recall? In the interest of keeping things somewhat brief, I won't go too far into the specifics of pre/post-testing or how to analyze the results, but I will go over the basic outline of the study.

My hypothesis is that there is a positive relationship between the 5-second intro strategy and ad recall: viewers exposed to the intro are much more likely to recall an ad than viewers who are not.

I would develop a series of ads for a series of products that participants would be tested on for recall. Each ad would also have a 5-second intro developed for it. I would then gather five groups of about 20 participants each, and label them A, B, C, D, and E.

Group A would be pre-tested on their knowledge of the selected products. I would then instruct them to spend one hour browsing YouTube videos, ensuring Group A is exposed only to ads that employ the 5-second intro strategy, with the option of skipping them. They would then be post-tested on the same products to measure recall.

Group B would be pre-tested as well, and then set loose on YouTube for the same amount of time. This group would only be exposed to ads without the 5-second intro strategy, and would be allowed to skip them. They would then be post-tested.

Group C would not be pre-tested in order to ensure the pre-test isn't influencing the results of the research. They would get the 5-second intros like Group A, and then be post-tested.

Group D would skip the pre-test too, then get ads without the intros like Group B. They would then be post-tested.

Group E would be the control group. They would only be post-tested, and not be pre-tested or exposed to any variables.

It's important to keep in mind that anyone who buys a video ad on YouTube also gets a banner ad next to the video. This study is meant only to measure the effects of the 5-second intro strategy, so the banner ad would be removed so as not to influence the results.

The rest of the site would have to be stripped down as well. The comments section would be removed, and the recommended-videos section would recommend the same video to all participants, regardless of what video they just watched. All this to ensure no other variables can influence the results.

In-depth questionnaires/surveys would be developed for the pre/post-tests, collecting both quantitative and qualitative data. The data would then be appropriately coded and examined to either support or reject my hypothesis.
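For what it's worth, this A-through-E setup is essentially a Solomon four-group design with an added posttest-only control, which would let me separate the intro's effect from any pre-test sensitization. Here's a minimal analysis sketch in Python with invented recall scores (no real data exists here); the numbers are mine, purely for illustration:

```python
# Hypothetical post-test recall scores (ads recalled out of 10).
# All numbers are invented for illustration.
from scipy import stats

group_a = [7, 8, 6, 9, 7, 8, 7, 6, 8, 9]   # pre-tested, saw 5-second intros
group_b = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]   # pre-tested, no intros
group_c = [7, 7, 8, 6, 9, 8, 7, 8, 6, 7]   # no pre-test, saw intros
group_d = [5, 4, 4, 3, 5, 6, 4, 5, 3, 4]   # no pre-test, no intros

# Primary comparison: intro groups (A + C) vs. no-intro groups (B + D).
t, p = stats.ttest_ind(group_a + group_c, group_b + group_d)
print(f"intro vs. no intro: t = {t:.2f}, p = {p:.4f}")

# Secondary check: A vs. C (and B vs. D) reveals whether the pre-test
# itself sensitized participants -- the point of the Solomon-style design.
t2, p2 = stats.ttest_ind(group_a, group_c)
print(f"pre-test effect among intro groups: t = {t2:.2f}, p = {p2:.4f}")
```

A small p-value on the primary comparison would support my hypothesis that the 5-second intro improves recall.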

I'm very confident that my hypothesis would be supported, and the ad industry would hail me as a hero for recognizing such an ingenious ad strategy. I would go down in history with ad greats like Bill Bernbach and Ridley Scott, and forever be showered in riches... only to wake up from what surely would have been such a very good dream.

Big Data and Big Business

| 0 Comments

http://blog.vovici.com/blog/bid/111478/Big-Data-and-the-Executive-Level-It-s-Not-What-You-Think

I was surprised to read this blogger's piece about big data in the workplace. Apparently, most companies have no centralized database for sharing data internally.

The common practice for researchers gathering data is to pick out only the data they need to accomplish their research objectives. Be it gaining consumer insight, measuring the competition, or increasing sales, researchers take the data relevant to their goals and throw away what they don't need.

Unfortunately, it's as the old saying goes: "one man's trash is another man's treasure." In the world of research, this trashed data could benefit researchers and analysts in other parts of the company. Instead of being able to use their coworkers' already-gathered data, they must spend the time and resources to gather their own.

The author proposes a "customer-centric" model of "cross-departmental collaboration" in research-sharing that would ensure no data ever goes to waste. He calls for the creation of the Customer Experience Officer (CEO) to facilitate these new models of information-sharing within big business.

The reason I found this all so surprising is that I learned in another research class that most major companies have their own libraries. To my understanding, these libraries are havens for company records and all sorts of other information pertaining to the company. I guess, however, that only data deemed relevant by the original researcher makes it into the library. If that's the case, then companies need to adopt this data-sharing model immediately.

NFL Fun Facts

| 0 Comments

http://dailyinfographic.com/nfl-fun-facts-infographic

In the spirit of presenting research in a fun and entertaining manner, using infographics like the ones we made in class, I've decided to blog about this infographic, which presents a variety of random facts about the NFL.

When it comes to information, the NFL has no shortage of it. They record data and statistics in such meticulous detail that a person suffering from OCD would seem completely normal in comparison. Every kick, catch, pass, and run gets jotted down for the record books.

When it comes to presenting such data, the NFL takes a cue from our Moodle reading, "Research With Legs." During broadcasts, they show only statistics relevant to the current game, often with footage of past feats. For example, when Drew Brees was getting ready to break Johnny Unitas' record for consecutive games with a passing touchdown earlier this season, broadcasts showed the length of Unitas' streak along with video of him playing. They did not show irrelevant stats like the longest field goal or longest kick return.

On the web, however, journalists, analysts, and bloggers all struggle to present information about the NFL in an intriguing light. Unless the information has direct relevance to current events in the game, viewers are not likely to stop and take it in. So many resort to eye-catching infographics like the one linked to above. The information in this one is completely random, yet the interesting graphic keeps the viewer's attention.

Cell Phones: A Consumer's Best Friend

| 0 Comments

http://pewresearch.org/databank/dailynumber/?NumberID=1638

This Pew Research poll surveyed 1,000 Americans on how they used their cell phones while shopping during the 2011 holiday season. According to the poll, 52% of Americans used their cell phones in some way to help make purchasing decisions while in a store.

38% of respondents phoned a friend for purchase advice, while 24% looked up product reviews and 25% checked for better prices at other stores. A third (33%) did at least one of the latter two. (The individual figures imply an overlap of roughly 24% + 25% − 33% = 16% who did both.)

The data also reflects the digital divide: younger and urban/suburban respondents were much more likely to use their cell phones while shopping than their older, rural counterparts.

What I don't understand is why Pew needed to run a survey to determine this at all. With all this talk of digital/text analytics, and cookies tracking our every (albeit electronic) move, couldn't they have just used one of the analytics tools we've learned about in class to gather this information? Perhaps they couldn't determine whether shoppers actually called friends for advice, but they certainly could have determined whether shoppers checked product reviews and other prices. Our phones record our geographic location at all times, so they could tell whether we were in a store or not. Of course, the practice of digital analytics is still in its infancy, so maybe we'll just have to wait a few decades for such detailed research.

One last thing I must add is about the effects of cell phones on consumers and the strategic communicators who try to influence them. This poll sheds much light on the growing prevalence of mobile in consumers' everyday lives, and I believe that we, as strategic communicators, need to be aware of this. Mobile is the future now. No longer will a "cool" or "hip" Super Bowl ad result in a direct purchase. Consumers now have the ability to bargain, and it's right in the palm of their hands. What we must remember is that the tool they use to bargain with (their cell phone) is just another channel of communication. It is a channel which, like all channels, we can use to our benefit. We just need to figure out how.

Erroneous Sampling

| 0 Comments

http://www.politicususa.com/90-delusional-fox-news-viewers-romney-beat-obama-ohio.html

http://www.mediaite.com/tv/colbert-thrilled-that-fox-news-blogger-dean-chambers-unskewed-polls-showing-obama-in-lead/

Last September, I watched Stephen Colbert do a piece on a Fox News poll concerning the election. Unfortunately, Fox News seems to have taken down the poll (likely because it is quite a stab at their credibility), and the only remnants of it on the internet are the above blogs.

In an attempt to determine who would win the swing state of Ohio between Obama and Romney, Fox launched a poll among its own viewers. It determined that Romney would lead Obama in Ohio 90% to 10%, effectively giving Romney the White House.

If this isn't an example of sampling error, I don't know what is. The sample is a nonrepresentative convenience/haphazard sample, as Fox News simply took whoever was available. It would be fair to generalize these results to their own viewers, but they generalized them to the entire US population. So not only did they discredit their entire poll, they also presented the results of research unethically.
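To illustrate the point (with my own toy numbers, not Fox's actual data), here's a quick simulation showing how a convenience sample drawn from a skewed audience can wildly misestimate the population:

```python
import random

random.seed(1)

# A hypothetical population split evenly between two candidates.
population = ["Romney"] * 50_000 + ["Obama"] * 50_000

# A convenience sample doesn't draw from the population at random;
# it draws from whoever is handy -- here, an audience assumed (for
# illustration only) to be 90% one candidate's supporters.
audience = ["Romney"] * 9_000 + ["Obama"] * 1_000

true_share = population.count("Romney") / len(population)
poll = random.sample(audience, 1_000)
poll_share = poll.count("Romney") / len(poll)

print(f"Population: {true_share:.0%} Romney")          # 50%
print(f"Convenience sample: {poll_share:.0%} Romney")  # ~90%
```

The poll is perfectly accurate about the audience it sampled; it only becomes nonsense when generalized to everyone else.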

Luckily, political pundit and media critic Stephen Colbert stepped in and called b***s*** on the poll, saving America, yet again, from the lamestream media (his words, not mine).

Job Climate Change

| 0 Comments

http://www.gallup.com/poll/158972/americans-best-job-climate-financial-crisis.aspx

According to a recent Gallup poll, Americans' perception of the job climate in the US has changed. Unlike climate change of the environmental kind, this change has actually been for the better.

24% of Americans say it is now a good time to find a job, three times as many as a year ago (8%).

http://sas-origin.onstreammedia.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/e8rsfztivueard2jklb5yg.gif

Among Americans who are employed, or unemployed but looking for work, 26% feel the job climate is good, versus 7% last year.

http://sas-origin.onstreammedia.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/v6l6riiiyeimzwet0kyvjw.gif

The results can be fairly and accurately generalized to the populations the samples represent. Both polls were phone polls that used random-digit dialing to contact participants, and the sample sizes were plenty large enough at 1,015 for the first and 527 for the second.

The poll is also valid in terms of question wording. It measured participants' perceptions by asking one simple question: thinking of the job situation today, would you say that it is now a good time or a bad time to find a quality job? The question is understandable, and neither leading nor loaded, indicating the poll measured what it was supposed to measure.

The poll is also reliable in that its results are consistent with other polls. These polls are conducted year over year, and there are (sometimes drastic) changes in the results, which could lead researchers to believe they are unreliable. However, Gallup compares its results to other polls relating to the economy in order to demonstrate reliability.

http://sas-origin.onstreammedia.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/ssit2-aae0qcbnv5mholya.gif

Those two polls are the Economic Confidence Index and the Job Creation Index. The first measures Americans' confidence in the economy, and the second measures how many Americans have been newly hired in the past year. The results of all the polls are correlated (especially Job Creation vs. Job Climate Perception), and each adds to the others' reliability.

Gallup reports a possible sampling error of ±3 percentage points at a 95% confidence level. In its summary, Gallup theorizes that the more positive perception of jobs and the economy could result from improved unemployment figures in recent months. Of course, to establish causality there, Gallup would need to run an actual experiment.
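As a sanity check, the standard margin-of-error formula for a proportion reproduces Gallup's ±3 points for the larger sample. This is a quick sketch assuming simple random sampling; Gallup's actual weighting and design effects may shift the numbers slightly:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z * sqrt(p * (1 - p) / n)

for n in (1015, 527):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 1015: +/- 3.1%  (matches the stated +/- 3 points)
# n = 527:  +/- 4.3%
```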

Text Analytics

| 0 Comments

http://www.tomhcanderson.com/wp-content/uploads/2011/09/tomhcandersonanalyticsodintextanalyticsama.pdf

http://blog.odintext.com/

I don't understand the practice of text (or digital) analytics as well as I would like, since we haven't spent much time on it, but I wanted to blog about this article because it relates closely to what our guest lecturer from Carmichael Lynch talked about. It was written about a year ago by Tom H.C. Anderson, and it addresses some of the same problems with text analytics that our lecturer spoke of in class.

He describes current text analytics tools as "pure play" and not specifically tailored for market research, as most have been developed for the defense, intelligence, and financial industries. This can lead to errors when analyzing social media, blogs, and other web text against a market researcher's goals. Our lecturer mentioned that a tweet could mention a business, but it's hard to determine whether the tweet was positive, negative, or completely irrelevant without actually reading it (something that's impossible at scale, as a researcher could have thousands or even millions of tweets to review).
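As a rough illustration of the problem (my own toy example, not how any real product works), even a simple keyword-based tagger shows how easily automated sentiment goes wrong:

```python
# A deliberately naive keyword-based sentiment tagger.
# The keyword lists and example tweets are invented for illustration.
POSITIVE = {"love", "great", "awesome"}
NEGATIVE = {"hate", "terrible", "awful"}

def tag(tweet: str) -> str:
    words = set(tweet.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "irrelevant/unknown"

print(tag("I love my new phone"))                      # positive -- correct
print(tag("I'd love it if my phone actually worked"))  # positive -- wrong (it's a complaint)
print(tag("Phone store on 5th is closed today"))       # irrelevant -- correct, but by luck
```

A human reads the sarcasm in the second tweet instantly; a generic tool built for scanning intelligence reports does not.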

The author calls for market researchers to develop their own text analytics software, made to meet the needs of the market researcher and to accurately analyze the wealth of qualitative data on the web. He is also in the business of developing such software himself at his company, Anderson Analytics. The software is called OdinText (linked above), and it promises to properly analyze text data to meet marketing and communication needs.

I feel as though text/digital analytics is the future of strategic communications research, and developing the proper tools for practicing it is vital to that future. Our lecturer mentioned that 23% of all US advertising is now digital, and the author states that 85% of all the world's information is available as text. The world is "digitizing" (trademark), and in order for communications professionals to do their jobs, we must "digitize" right along with it.

A Box Office Smash (That's Actually Educational)

| 0 Comments

*** Use this link if the embedded video didn't work ***
http://www.youtube.com/watch?v=vmulkCjHqqw

The above video is an animated comic strip created by grad students at Georgia State University. It shows the differences between quantitative and qualitative research in a more engaging way than your average textbook or PowerPoint presentation.

Qualitative and quantitative research are portrayed as two separate superheroes named Captain Quan T. Tative and Dr. Qual I. Tative (or Quan and Qual for short). They fight villains (aka research problems) with their special research powers. These powers, as you can guess, are the powers of quantitative and qualitative research.

The villain (or research problem) in this episode is not the consumer. Rather, it is an ethical dilemma posed by the diabolical Dr. D Plagiarism (Dr. DP), who is looking to plagiarize other people's research and pass it off as his own. The evil doctor is seen in the library stealing the research of James Stelheimer, Ph.D., compelling our heroes to act.

The heroes jump into action, detailing exactly how they will use their respective skills to bring down Dr. DP, and it echoes what we learned in class. Quan says he will gather data ABOUT Dr. DP, while Qual says he will gather data to figure out HOW to destroy him. Quan will be measuring the "what," "why," and "how much" surrounding the problem, and Qual will gain a deeper understanding of the "why" and "how."

Like any good researcher, the heroes employ a team of cohorts to do all their research for them (I'll bet they're interns or recent college grads). Quan's team comes up with a variety of quantitative data including his height/weight, hair type, and location. They devise a plan to ambush him with their "villain profiler" (a survey I assume) to come up with more data about his past offenses so they can determine a fair punishment.

Qual's team plans to "interrogate" Dr. DP, by doing an in-depth personal interview (gimminy-gillickers!). They plan to measure his actions, thoughts, beliefs, motives, and attitudes in order to come up with an appropriate plan of action.

As is the way of superhero stories with more than one protagonist, our heroes butt heads over whose methods are the best. Coincidentally (or not), this mirrors the research world as researchers are constantly balancing and choosing between the two methods based on things like access to research resources, budget, time constraints, and even personal preferences. Luckily, our interns (I mean cohorts) step in, reminding our heroes that their work is "all for the good of research!"

The heroes overcome their differences and unite into an Avengers/Justice League hybrid team (I'm a superhero nerd, it's best to google those if you don't know what they are). They call themselves The Mixed Methods Research Heroes and set out to stop Dr. DP together. They lock him up, while also reminding viewers that while both methods of research are good, they're best if they're utilized together.

Now, if you'll excuse me, I'm going to go write a sequel featuring Qual and Quan's arch-nemesis, Sampling Error.

Not Lolcats

| 0 Comments

http://www.huffingtonpost.com/2012/08/08/kitty-cam-uga-research-national-geographic-killing_n_1757070.html

I know this article doesn't have much to do with communication research, but it is research. Plus, it's about cats, and I love cats too much to ignore them.

A recent University of Georgia study determined that our beloved feline critters are really nothing more than murdering psychopaths. Researchers attached "kitty cams" to 60 pet cats in Georgia and monitored their activity at night.

They found that the cats killed an average of 2.1 animals per week, apparently just for sport, as they ate their kills only 30% of the time. 21% of the time, they brought the kill home (I can only imagine they thought it would make some kind of trophy).

I can't help but think some bias was introduced into this study. The kitty cams were outfitted "with LED lights." I'm familiar with night optics from my time in the military, and LED lights are not night optics; they are flashlights. It's possible the light given off by the camera (while small) gave the cats an unfair hunting advantage, both by helping them see and by freezing their prey (animals tend to freeze when suddenly immersed in light). If these cats caught more prey than their unrecorded counterparts, the measured average kills per week would be too high.

It's also possible the kitty cams were bulky enough to inhibit the cats in some way (they basically look like shock collars) and negatively affected their hunting. In that case, the cats would have caught fewer critters than their unrecorded counterparts, and the average kills per week would actually be too low.

The news article makes no mention of possible errors like these, and does not link to an academic report that would discuss them. Of course, this is the norm with news organizations: they tend to present findings as absolute, gospel-like fact regardless of possible errors. Then, when they're proven wrong, they ignore it and pretend it never happened. All in the name of credibility, I guess.

Nevertheless, there is only one thing I can say with certainty from this research, and that is that my cats will never go outside again.

Army Times Presidential Election Poll

| 0 Comments

http://www.armytimes.com/news/2012/10/military-times-poll-romney-bests-obama-2-1-100712/

The Army Times, a newspaper owned not by the US Army but by the same publicly-traded media company that owns USA Today, recently conducted a presidential poll among members of the military. It found Romney leading Obama 2 to 1.

The newspaper conducted the poll via email, contacting only subscribers of Military Times newspapers (Army Times, Air Force Times, etc.). Most subscribers of these papers are senior enlisted members or senior officers, so the poll is "skewed slightly toward servicemembers who have made the military their lifelong career." The respondents were also overwhelmingly white (80%) and male (91%). In describing this demographic, the Army Times appropriately labeled it the "professional core of the military."

In total, 3,100 servicemembers responded to the poll. Of those, 66% support Romney, while 26% support Obama. Among issues facing the nation, the economy is at the top of servicemembers' minds, with 66% rating it the number-one issue of the election. In contrast, only 1% rate the war in Afghanistan as the nation's biggest issue, compared with 16% in 2008. One Army captain cited troops' salaries and their ability to get a job should they leave the service as the main drivers of concern about the economy.

The online version of the poll does not list a sampling error or confidence level. However, I read the print version (which is what compelled me to blog about this particular poll in the first place), and they were listed there.

Furthermore, the poll established validity in that it properly measured members of the military. The newspaper can distinguish its military subscribers from its civilian ones by sending emails only to addresses ending in @us.army.mil, @us.navy.mil, etc. To get an email address like that, you must be in the military.
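A minimal sketch of that screening step (my own illustration; only the @us.army.mil and @us.navy.mil domains come from the article, the third is an assumption):

```python
# Screen a subscriber list down to military email addresses.
# Domain list is illustrative, not the paper's actual configuration.
MILITARY_DOMAINS = ("@us.army.mil", "@us.navy.mil", "@us.af.mil")

def is_military(email: str) -> bool:
    return email.lower().endswith(MILITARY_DOMAINS)

subscribers = ["jane.doe@us.army.mil", "john.q@gmail.com", "a.sailor@us.navy.mil"]
print([e for e in subscribers if is_military(e)])
# ['jane.doe@us.army.mil', 'a.sailor@us.navy.mil']
```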

Lastly, the poll established reliability in that it showed the same conservative slant in servicemembers' politics that past polls have shown. According to Richard Kohn, professor of military history at UNC-Chapel Hill, "the poll really tracks with the traditional [conservative] views of the military."
