December 2012 Archives

Research misused in drug ads


Direct-to-consumer pharmaceutical ads have come under much scrutiny in recent years as the number of these ads has increased while government regulation has grown lax. Perhaps the most common complaint about pharmaceutical ads is that they play up a drug's potential benefits while the negative side effects are rushed through and obscured by background noise.

However, the misrepresentation of research statistics is another big issue common to drug advertisements. Let's look at some examples.

1. Dacogen
Dacogen is used to treat certain rare blood cell disorders and cancers. In a patient brochure, manufacturer Eisai claimed that 38% of study patients responded positively to the drug. In a November 2009 letter, the FDA called that claim misleading because the figure was "...taken from a small subgroup of patients who responded well to the drug. Including all the patients in the study, the response rate was a mere 20%."
When reviewing statistical results, it is always important to consider the population from which the sample came and to compare the results against additional trials or related prior research. Here, results from one favorable subgroup were presented as if they generalized to the whole study population.
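To see how much this framing matters, here is a minimal sketch in Python. The counts are invented purely to reproduce the percentages quoted in the FDA letter (the excerpt doesn't give the actual trial numbers): the same data yield 38% or 20% depending on whether you report the best subgroup or the whole sample.

```python
# Hypothetical counts, invented to match the percentages in the FDA letter
# (38% in a favorable subgroup vs. 20% overall); real trial data differ.
groups = {
    "favorable subgroup": {"responders": 8, "patients": 21},   # 8/21  ~ 38%
    "everyone else":      {"responders": 12, "patients": 79},  # 12/79 ~ 15%
}

def response_rate(responders, patients):
    """Proportion of patients who responded to the drug."""
    return responders / patients

# The brochure's trick: report only the most favorable subgroup's rate.
for name, g in groups.items():
    print(f"{name}: {response_rate(**g):.0%}")

# The honest figure pools every patient in the study.
total_responders = sum(g["responders"] for g in groups.values())
total_patients = sum(g["patients"] for g in groups.values())
print(f"overall: {response_rate(total_responders, total_patients):.0%}")  # 20%
```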

2. Kaletra
Kaletra is an AIDS drug from Abbott Laboratories. The company came under FDA scrutiny after a testimonial DVD featuring Magic Johnson suggested that the drug could help most HIV patients manage their illness. In a July 2009 letter, the FDA warned the company against such claims: in a clinical trial, the drug had proven ineffective for 37% of patients.
This example is a reminder of the importance of knowing a research study's methodology. While the drug was ineffective for nearly 40% of participants in one study, the drug company overstated its effectiveness and generalizability by omitting critical information, such as the overall sample size, the population from which the sample was drawn, and the full statistical results.

How social media is replacing focus groups


WATCH: New York Times video, "Social Media as Focus Group"

It's no surprise that companies and retailers everywhere are investing more resources in tracking information posted on social media sites. What is surprising is just how useful this data can be: for a very low cost, companies can track valuable information like what consumers "like" and what they're searching for.

As the above Times video shows, companies can combine this information with personal user information like name, age, gender, location, photos, etc. to create a more unfiltered profile of a consumer than would typically be gathered in a focus group setting.

Using social media as a sort of replacement for focus groups has proven particularly rewarding for companies because they get uncensored feedback. These unfiltered comments eliminate some concerns typical of traditional focus groups, such as respondents not having enough opportunity to express their opinions, or voicing opinions that are not their own in an effort to appease the moderator or to avoid opposing other members of the group. Social media also lets companies gather a lot of information from younger consumers, an age group that typically doesn't participate in focus groups.

As social media usage increases, market researchers will undoubtedly continue to exploit the information posted on these sites. It will be interesting to watch how the development of social media encroaches on traditional research methods.

Poor sampling = costly outcome for Australian university


Since I studied abroad in Australia last spring, I still regularly check into Australian news; this morning I found an article that is particularly pertinent to our class's focus on survey data.

It all began when the Australian Department of Transport and Main Roads commissioned a survey of bus stops at James Cook University to gauge how many people were using the buses along the route. The results showed that the stops were grossly underused, and services to them were cut to save the money spent maintaining the routes.

However, Sunbus, the company that ran the survey on behalf of the department, failed to communicate with university officials when scheduling it. As it turns out, the survey gathered data from a non-representative sample: the sampling time frame happened to fall during exam week, a time, as we all know, that significantly reduces traffic on campus since students tend to be either at home or in the library studying.

This real-world example of misleading survey data should be taken as a lesson in the importance of representative sampling and effective communication in research. When conducting surveys, it is highly important to select a sampling time frame that reflects "normal" conditions. It is also important to consider whether a single sampling window is adequate; in this instance, if surveys had been conducted at several different times, the data from the exam period would likely have been recognized as abnormal, or as outliers, rather than as the norm.
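To make the point concrete, here is a minimal simulation in Python with invented rider counts (the article doesn't publish the real figures): a survey run only during exam week drastically understates typical usage, while repeating the survey across several weeks makes the exam-week numbers stand out as outliers.

```python
import random

random.seed(42)  # reproducible illustration

# Invented numbers: ~200 riders/day in a typical week vs. ~40/day
# during exam week. The real counts weren't published.
def weekly_counts(mean_riders, n_days=5):
    """Simulate daily rider counts for one survey week."""
    return [max(0, round(random.gauss(mean_riders, 15))) for _ in range(n_days)]

def avg(xs):
    return sum(xs) / len(xs)

normal_weeks = [weekly_counts(200) for _ in range(3)]
exam_week = weekly_counts(40)

# Surveying only during exam week badly understates typical usage...
print("exam-week-only estimate:", round(avg(exam_week)))

# ...while several sampling windows expose exam week as the outlier.
for i, week in enumerate(normal_weeks + [exam_week], start=1):
    print(f"week {i} average: {round(avg(week))}")
```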

Just as it is important to select representative sampling time frames and intervals, it is equally important for clients to communicate effectively with everyone involved in a survey. Had the research company spoken to university officials, it would have learned the exam dates and could have chosen a time that more accurately reflected day-to-day bus usage. This one oversight wasted government money and could affect all the students and teachers who rely on campus bus stops for safe transportation.

For full news coverage, click here.

Multicultural sampling


A recent survey of 106 marketers conducted by the Association of National Advertisers found that new media is a rapidly growing channel for reaching multicultural consumers. Whereas stratified sampling has previously been the primary method of reaching cultural minority groups, recent data suggest that new technology is enabling marketers to reach these groups even more effectively.

2010's top 3 most popular methods of reaching multicultural consumers through new media:
1. The company's website (75%)
2. Online ads (72%)
3. Search-engine marketing (71%)

It's interesting to note that the use of other new media to reach specialty populations is growing rapidly: 32% of respondents said they used location-based apps in 2012 to reach multicultural segments (compared with 2% in 2010), the use of blogs increased from 27% in 2010 to 44% in 2012, and 64% reported using mobile marketing (up from 59% in 2010).

These trends indicate that Internet and GPS technologies will continue to make sampling special populations easier and more efficient. GPS technologies are particularly useful because people who live in the same geographic area tend to resemble one another in many ways, particularly in socioeconomic status and culture. Blogs and mobile marketing are also proving useful for reaching smaller populations because they create a sort of "specialty environment" where members of a group congregate, either on the same blog or through similar applications on their mobile devices.

These findings are particularly important to market research as a way to increase response rates; the more effectively marketers target their populations, the more likely participants are to respond, since the information is truly relevant to them.

A "juicy" blog


Brain Juicer, a market research company that thrives on creativity and prides itself on its innovative methods, keeps a blog about all things human behavior and behavioral research.

The blog mostly features market research experiments by Brain Juicer and other companies, but it also occasionally throws in industry-related cartoons and ads.

One of the entries I found most interesting and relevant to our own course material had to do with the effectiveness of click-through advertisements. Citing an article originally posted on AdAge, the Brain Juicer post "I belong to the Blank Generation" details an experiment that measured the number of click-throughs on six blank ads and then compared those numbers to the click-throughs on actual branded ads.

What the researchers ultimately found was that the click-through rates for the blank ads did not differ significantly from those of actual branded ads. This finding raised the question: are click-through rates a reliable metric of online behavior?

To make sure the results were accurate, the researchers used various methods to detect any potential click fraud; these methods included tracking "...hovers, interactions, 'mouse downs,' heat maps--everything. (Heat maps detect click fraud because bots tend to click on the same spot every time.)"

The results suggested that roughly 4 clicks in every 10,000 impressions are unintentional. The research also indicates that online noise confounds the reliability of click-through rates as a metric: the extra noise users encounter online leads to mistaken clicks, distorting the picture of intended behavior and rendering the behavioral signal almost indistinguishable from the surrounding noise.
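To show what "did not vary significantly" could look like in practice, here is a minimal sketch of a two-proportion z-test in Python. The click and impression counts are invented (the post doesn't give the experiment's raw numbers) and are chosen to sit near the roughly 4-in-10,000 accidental-click baseline.

```python
from math import sqrt, erfc

# Invented counts for illustration; the AdAge experiment's raw numbers
# aren't given in the post.
blank   = {"clicks": 42, "impressions": 100000}   # 0.042% CTR
branded = {"clicks": 48, "impressions": 100000}   # 0.048% CTR

def two_proportion_z_test(a, b):
    """Two-sided two-proportion z-test on click-through rates."""
    p1 = a["clicks"] / a["impressions"]
    p2 = b["clicks"] / b["impressions"]
    pooled = (a["clicks"] + b["clicks"]) / (a["impressions"] + b["impressions"])
    se = sqrt(pooled * (1 - pooled) * (1 / a["impressions"] + 1 / b["impressions"]))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p from the normal CDF
    return z, p_value

z, p = two_proportion_z_test(blank, branded)
print(f"z = {z:.2f}, p = {p:.2f}")  # p well above 0.05: no real difference
```

With counts like these, the test returns a large p-value, which is exactly the pattern the researchers reported: blank and branded ads drawing statistically indistinguishable click-through rates.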

Brain Juicer sums it up best: "We are great believers in focusing on behaviour, and that changing behaviour should be a research outcome. But - especially online - there is an awful lot of tempting behaviour to measure, and it's easy to be seduced by that. 'If you can't measure it, you can't manage it,' the gurus tell us, and they sound very pragmatic. But it doesn't make 'If you can measure it, you can manage it' any truer. A click seems concrete, but may be as insubstantial as... a blank advert."

These findings further reinforce the importance of working backwards in research; it's more important to focus on data application than on data acquisition. Just because data can be measured doesn't mean it's reliable, or even applicable to any business objective.
