October 2012 Archives

Social Networks in Market Research


How do individuals act on social networks? What drives them to interact with brands within their networks?

These questions and more are addressed in Mark Earls' new book, "I'll Have What She's Having". In the book, Earls and his colleagues dive deep into what drives human social behavior. Two of the group's key findings are especially notable:

1. "Brands and marketing content are not important on their own. What matters most is what people (e.g. staff, customers and non-customers) do with them and how they interact with other people in their networks. The scale and structure of social networks will influence how your brand is adopted and evolves as a social entity."

This point was extremely interesting to me. For all the work that brands such as Nike, McDonald's, and Anheuser-Busch have put into their social media, the results will ultimately be decided by what users do with the content. In other words, it is not about what a company does in building a social media presence; it is about cultivating enough interest to spark user interaction. Interaction is what drives a media campaign, not passive viewing. Media organizations must create an environment for interaction rather than forcing content on users.

2. "We are more likely to be influenced by the actions of others in our network. Thus to understand the spread of ideas and innovation we need to pay more attention to the characteristics of our social networks."

Brands should look more closely at one person's ability to influence another rather than marketing equally to everyone. Creating "brand leaders" when developing campaigns and targeting very specific groups could pay off, as those leaders will then interact within their social circles to spread the word about a product.

Survey on Football and Referees



This past week, I was approached by a University of Minnesota student who asked me to take a survey for a research class. The survey was on football and attitudes toward penalties and referees in games. The survey's methods had both strengths and flaws that led me to look at it with a critical eye, given what I have learned in the Journalism 3251 class.

Starting with the negatives, the researcher's method of selecting participants was seriously flawed. The researcher would not be able to generalize the results because the sample is not random. It was a convenience sample: the researcher began by standing at a corner and handing surveys to the first students who walked by. As we learned in class, this is not a random sample. The results can't be generalized to the population, because students were selected based on their proximity to the researcher. All of these students could hold similar opinions, as they may all have come from one class. The researcher then began asking passersby whether they were football fans, and if so, inviting them to take the survey. This is also deeply flawed; football fans will carry an inherent bias against the NFL replacement referees after following the first few weeks of the season.

On the positive side, the researcher's questionnaire stayed objective and clear. None of the questions led respondents one way or the other. All questions used general Likert scales, apart from basic demographic items. The scales effectively gauged respondents' knowledge of football and their opinions, and included a "no confidence" option for those unfamiliar with the sport. The questions asked about opinions on NFL referees and fines handed out in the NFL. The survey did a fine job gathering insights, but those insights will not be generalizable.

Twitter Teaming Up with Nielsen?



Where is the world of research heading going into the future? How integrated will social media become with research analytics?

These two questions perhaps gained some clarity at the beginning of the month with the announcement of a newly formed partnership between Twitter and Nielsen. The partnership could have huge ramifications for the media world. It seeks to place research polls and "Twitter surveys" in the promoted-tweets section of the Twitter website. The deal benefits both companies: Twitter gains ad-slot revenue by mediating research information, and Nielsen builds new structures for conducting research on social media.

While the partnership is a win-win for the two companies, is it a win for the public? I question whether Twitter users will actively participate in the surveys. There is little incentive to do so beyond pure interest. Even then, respondents may have a concentrated interest in the studies and therefore sway the results. That would also rob the surveys of generalizability, as only the most opinionated would respond.

To be successful, I believe the surveys and polls not only have to provide an incentive for completion, but also must account for who their respondents are. The surveys will fail if they don't generate interest from the wider public. While the partnership between Nielsen and Twitter has the potential to build a strong future in the research world, I will need to see the initial results before declaring this a major direction for research going forward.

Communication Research within the Journalism School


Recently, I took an online survey through the Journalism School on politics and brands. The survey ranged from questions on respondents' feelings toward common brands such as Coca-Cola, to their opinions on "Super PACs," to their overall political opinions about particular candidates. The survey gave me an interesting perspective as a participant in a large project.

The survey had both major pros and some slight cons. To begin with the pros, I felt the survey had finely worded questions that provided clear directions and unity of content. None of the questions led the participant toward one answer or another. The survey made effective use of Likert and semantic differential scales in assessing attitudes toward candidates. The questions flowed well, with videos and articles supplementing them. I felt pleased with my work as a participant by the end of the survey.

A number of issues arose as I took the survey. Many questions were repeated from one section to the next when looking at political candidates. While I understand that the researchers sought unity in their work, the continued repetition irritated me and made me lose interest at times. Additionally, some questions seemed simply unnecessary. Why might it be important to know whether I associate Coca-Cola with the term "lonely"? Questions like that left my head spinning.

Overall, the survey gave me valuable experience in the research field. It was fun to apply my thinking from the JOUR 3251 class to the survey, seeing what the researchers did successfully and what they could improve upon.

Color and Brands in Design Research


Does color help to influence consumer choice? Do consumers remember one brand over another simply because of the packaging color?

These questions have come to light recently with Cadbury, the chocolate company that has lasted over 100 years. The company is attempting to trademark the purple color of its packaging against others in the industry. With the increased competition among brands in today's market, tradition and familiarity carry significant influence over a consumer's purchasing decisions. Stuart Chapman notes in his article "Why colour matters in design research": "colour is consumers' number one means of identifying brands. It's for this reason that it can be extremely damaging when a brand's colours are 'borrowed' by a competitor. Consumers typically shop on autopilot - so when rival packs mimic a brand's 'trade dress', this often leads to shopper confusion."

Cadbury builds tradition and familiarity with its iconic purple wrapper, so any copycat brand poses a threat. Research has shown that color can be used to convey a brand's values. Cadbury is attempting to protect its future by acting on that research and trademarking purple. The issue, however, is that other brands may follow suit and trademark their own colors to match the findings. A line must be drawn somewhere. M&M's are symbolized by their brown wrapper; McDonald's is known for its golden arches on red. Brands should take advantage of their colors, but trademarking them may be going to the extreme.

Article: http://www.research-live.com/comment/why-colour-matters-in-design-research/4008429.article

Social Networking and the News



According to a new poll by Pew, more Americans are beginning to rely on social media sites such as Facebook and Twitter for their news. The large bump in reliance on social media can be attributed to the rise of smartphones and tablets: 39% of Americans in the poll said they get news on their cell phones or tablets. More people report seeing headlines for news articles on social media sites than ever before. Trends in the study also suggest that the rise of digital news may erode television news audiences going forward. Pew sampled 3,003 adults through simple random sampling, with results reported at a 95% confidence level.

The poll highlights a trend in today's media that communication researchers will need to note. The traditional media, namely print and television, are losing ground in the digital age. While this has long been predicted, little concrete evidence has suggested the demise of television news. Even if the end of papers and television is far in the distance, it is important for researchers to note where their audiences consume the most media. While television remains the top source for news, the internet and social media will continue to draw bigger audiences, especially as mobile phones take off. Researchers and companies must recognize this shift in order to target audiences more specifically. Perhaps an article distributed through Yahoo or Reddit is more effective than a PR plug on a local news program, thanks to sharing and retweeting on social media. The world is adapting; researchers and companies must adapt with the changing culture.

Another concept worth thinking about is the use of social media on mobile phones to communicate news. Mobile phones are a huge subject of research going forward. Everyone has a phone ready in their pocket at all times, and social media provides a unique medium on mobile phones to integrate news in a more viral and ubiquitous way.

Article - http://www.people-press.org/2012/09/27/about-the-media-consumption-survey-data/

The Ethical Questions Raised by Tracking Services


When should advertisers be able to collect information about consumers? Can consumers be tracked on every site they visit, regardless of their intentions there?

These questions are being raised consistently in America today. In an article titled "US consumers expect do-not-track to stop all data collection," the issue of "do-not-track" is confronted. In a survey of 1,200 American citizens, 60 percent said they would expect do-not-track to mean that all data collection on the web or through mobile devices is stopped. This is the stance of many American advocacy groups that wish to protect the interests of the people. Advertising professionals, by contrast, believe that do-not-track should apply only to behaviorally driven targeting. Only 14 percent of those polled agreed with that definition.

The FTC in March had this to say about do-not-track: "An effective do-not-track system should go beyond simply opting consumers out of receiving targeted advertisements; it should opt them out of collection of behavioral data for all purposes other than those that would be consistent with the context of the interaction (e.g., preventing click-fraud or collecting de-identified data for analytics purposes)."
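For concreteness, the "do-not-track" preference under debate is technically very simple: it is an HTTP request header that the browser sends along with each request. The minimal Python sketch below builds (but does not send) a request carrying the signal; the URL is just a placeholder:

```python
import urllib.request

# The do-not-track preference travels as the "DNT" request header.
# "DNT: 1" means the user opts out of tracking; whether a site honors it,
# and what "honoring" covers, is exactly the policy question at issue.
req = urllib.request.Request(
    "http://example.com/",  # placeholder URL; no request is actually sent
    headers={"DNT": "1"},
)

# urllib normalizes header names to capitalized form ("Dnt").
print(req.get_header("Dnt"))  # prints "1"
```

The simplicity of the mechanism is part of the controversy: the header states a preference, but enforcement is left entirely to the receiving site.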

All things considered, the definition of do-not-track is foggy at best. In communication research, behavioral data can shed new light on consumer preferences and needs. Targeted advertising and data collection can be crucial in determining the best methods to market a product and in seeing what types of products may be successful. The question is: how much is too much? What crosses the line when tracking individuals? I believe this topic relates closely to the class lecture on ethics. While information on purchase decisions and behaviors can yield great data for targeting consumer groups, collecting information from unknowing individuals on a consistent basis can cross an ethical threshold. Businesses must weigh their study methods against ethical considerations. The do-not-track issue will stay in the American spotlight as long as companies test the ethical standards our nation has built.


Article: http://www.research-live.com/news/legal/us-consumers-expect-do-not-track-to-stop-all-data-collection/4008453.article

Research Methods Used for Political Polling


Yesterday, as I was looking over the news, I came across an article noting that Governor Mitt Romney now holds a four-point lead in the presidential race, according to a post-debate survey conducted by Pew. The poll is the first of many to be conducted by the Pew Research Center. Pew's survey showed that viewers of the first presidential debate "overwhelmingly believe Romney won the debate: 66 percent said the republican candidate performed better, compared with only 20 percent saying Obama performed better".

After reading over the article, I found the survey methods at the bottom of the page to be extremely interesting. One has to question where all of Pew's numbers come from. According to the article, Pew conducted live phone interviews from October 4th through 7th with 1,201 registered voters and 1,112 likely voters nationwide. The margin of error is 3.3 percentage points for registered voters and 3.4 percentage points for likely voters.
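As a rough sanity check on where figures like these come from, the textbook margin of error for a simple random sample can be computed in a few lines of Python. This is a sketch using the conservative p = 0.5 assumption; Pew's published margins (3.3 and 3.4 points) are slightly larger than the textbook values because pollsters also adjust for survey design effects such as weighting:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample at 95% confidence.

    p = 0.5 is the conservative choice: it maximizes the standard error,
    so the resulting margin holds for any observed proportion.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes reported in the article
for label, n in [("registered voters", 1201), ("likely voters", 1112)]:
    print(f"{label}: n={n}, textbook MoE = {margin_of_error(n) * 100:.1f} points")
```

Running this gives roughly 2.8 and 2.9 points respectively, which shows how the reported 3.3- and 3.4-point margins sit a bit above the simple-random-sampling baseline.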

Phone surveys are quick and efficient, but they leave many questions. Was the population Pew interviewed representative of the entire nation? There is little information on where the sample came from; perhaps a majority of respondents were from a Republican-leaning state. What was the method of the phone survey? Did numbers come from random-digit dialing or from phone lists? How did Pew verify that interviewees were registered voters? All of these questions come with a phone survey. Perhaps another format, such as an in-person interview, would reduce the number of questions this survey seems to leave unanswered.

Link to the Survey:

About this Archive

This page is an archive of entries from October 2012 listed from newest to oldest.
