Research misused in drug ads

Direct-to-consumer pharmaceutical ads have come under much scrutiny in recent years as the number of these ads has increased while government regulations have relaxed. Perhaps the most common complaint about pharmaceutical ads is that they play up a drug's potential benefits while the negative side effects are rushed through and obscured by background noise.

However, the misrepresentation of research statistics is another big issue common to drug advertisements. Let's look at some examples.

1. Dacogen
Used to treat certain rare blood cell disorders and cancers, Dacogen was promoted in a patient brochure by manufacturer Eisai with the claim that 38% of study patients responded positively to the drug. In a November 2009 letter, the FDA called the claim misleading because the figure was "...taken from a small subgroup of patients who responded well to the drug. Including all the patients in the study, the response rate was a mere 20%."
When reviewing statistical results, it is always important to consider the population from which the sample came and to compare the results against additional trials or previous related research. Here, the brochure overstated the generalizability of the results by presenting one subgroup's response rate as if it could be expected for the study population as a whole.
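To see how a subgroup figure can drift from the overall rate, here is a minimal Python sketch; the counts are invented purely to reproduce the 38%-versus-20% arithmetic the FDA described:

```python
# Illustrative sketch (hypothetical counts): how quoting a favorable
# subgroup inflates an apparent response rate. The FDA letter cited a
# 38% subgroup figure versus a 20% overall rate; the counts below are
# invented solely to reproduce that arithmetic.

subgroups = {
    "favorable_subgroup": {"responders": 19, "n": 50},   # 38%
    "rest_of_sample":     {"responders": 11, "n": 100},  # 11%
}

overall_responders = sum(g["responders"] for g in subgroups.values())
overall_n = sum(g["n"] for g in subgroups.values())

for name, g in subgroups.items():
    print(f"{name}: {g['responders'] / g['n']:.0%} ({g['responders']}/{g['n']})")
print(f"overall: {overall_responders / overall_n:.0%} ({overall_responders}/{overall_n})")
```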

2. Kaletra
Kaletra is an AIDS drug from Abbott Laboratories. The company came under FDA scrutiny after a testimonial DVD featuring Magic Johnson suggested that the drug could help most HIV patients manage their illness. In a July 2009 letter, the FDA warned the company against such claims because, in a clinical trial, the drug was shown to be ineffective for 37% of patients.
This example is a reminder of the importance of knowing a research study's methodology. While the drug was ineffective for nearly 40% of participants in one study, the drug company overstated the drug's effectiveness and generalizability by omitting critical information, such as the overall sample size, the population from which the sample was drawn, and subsequent statistical results.
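Sample size is one of those critical omissions: the same point estimate can be shaky or solid depending on how many patients stand behind it. A minimal sketch, taking the roughly 63% effectiveness implied above and two hypothetical sample sizes:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# 63% "effective" rate with hypothetical sample sizes: the point
# estimate is identical, but the uncertainty is not.
for n in (30, 300):
    successes = round(0.63 * n)
    lo, hi = wald_ci(successes, n)
    print(f"n={n}: {successes/n:.0%} effective, 95% CI ({lo:.0%}, {hi:.0%})")
```

With 30 patients the 95% interval spans roughly 35 percentage points; with 300 it narrows to about 10, which is exactly why omitting the sample size leaves readers unable to judge the claim.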

How social media is replacing focus groups

WATCH
New York Times Video: Social Media as Focus Group

It's no surprise that companies and retailers everywhere are investing more resources in tracking information posted on social media sites. What is surprising is just how useful this data can be; for a very low cost, companies can track valuable information like what consumers "like" and what they're searching for.

As the above Times video shows, companies can combine this information with personal user information like name, age, gender, location, photos, etc. to create a less filtered profile of a consumer than would typically be gathered in a focus group setting.

Using social media as a sort of replacement for focus groups has proven particularly rewarding for companies because they get uncensored feedback. These unfiltered comments eliminate some concerns typical of traditional focus groups, such as respondents not having enough opportunity to express their opinions, or voicing opinions that are not their own in an effort to appease the moderator or avoid conflict with other members of the group. Social media also lets companies gather a lot of information from younger consumers, an age group that typically doesn't participate in focus groups.
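As a toy illustration of what mining that unfiltered feedback can look like, here is a minimal Python sketch; the posts, keywords, and age cutoff are all invented for the example:

```python
from collections import Counter

# Hypothetical scraped posts (invented for illustration); a real pipeline
# would pull these from a platform API, subject to its terms of service.
posts = [
    {"age": 19, "text": "love the new flavor, hate the price"},
    {"age": 23, "text": "price is fine but the packaging is awful"},
    {"age": 45, "text": "love the packaging"},
]

keywords = ("love", "hate", "price", "packaging")
mentions = Counter(
    kw for post in posts for kw in keywords if kw in post["text"].lower()
)
under_25 = sum(post["age"] < 25 for post in posts)
print(mentions, f"| posts from under-25s: {under_25}/{len(posts)}")
```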

As social media usage increases, market researchers will undoubtedly continue to exploit the information posted on these sites. It will be interesting to watch how the development of social media encroaches on traditional research methods.

Poor sampling = costly outcome for Australian university

Since I studied abroad in Australia last spring, I still regularly check Australian news; this morning I found an article particularly pertinent to our class's focus on survey data.

It all began when the Australian Department of Transport and Main Roads commissioned a survey of bus stops serving James Cook University to gauge how many people were using the buses along the route. The results showed that the stops were being grossly underused, which led to a decision to cut services to those stops to conserve the financial resources used to maintain the routes.

However, Sunbus, the company that ran the survey on behalf of the Department, failed to communicate with university officials when deciding when to conduct it. As it turns out, the survey gathered data from a non-representative sample: the sampling time frame happened to fall during exam week at the university, a time, as we all know, when campus traffic drops significantly because students tend to be either at home or in the library studying.

This real-world example of misleading survey data is a lesson in the importance of representative sampling and effective communication in research. When conducting surveys, it is crucial to select a sampling time frame that reflects "normal" conditions. It is also important to consider whether a single sampling period is adequate; in this instance, if surveys had been conducted at several different time intervals, the data from the exam period would likely have been recognized as an outlier rather than the norm.
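A minimal sketch of that multi-wave logic, with invented boarding counts: given several survey waves to compare, a robust (median-based) screen flags the exam-week count immediately:

```python
import statistics

# Hypothetical weekly boardings at one stop across several survey waves;
# the last wave falls in exam week. Numbers are invented for illustration.
waves = {"week 1": 410, "week 2": 388, "week 3": 402, "exam week": 95}

counts = list(waves.values())
median = statistics.median(counts)
mad = statistics.median(abs(c - median) for c in counts)  # robust spread

for label, count in waves.items():
    # Flag waves far from the median relative to the MAD (robust z-score).
    score = abs(count - median) / mad if mad else 0.0
    flag = "  <-- likely outlier" if score > 3 else ""
    print(f"{label}: {count}{flag}")
```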

Just as it is important to select representative sampling time frames and intervals, it is equally important for clients to communicate effectively with all parties involved in a survey. Had the research company spoken to university officials, it would have learned the exam dates and could have chosen a time that more accurately reflected day-to-day bus usage. This one oversight wasted government money and could affect all the students and teachers who rely on campus bus stops for safe transportation.

For full news coverage, click here.

Multicultural sampling

A recent survey of 106 marketers conducted by the Association of National Advertisers found that new media are a rapidly growing channel for reaching multicultural consumers. Whereas stratified sampling has previously been a primary method of reaching cultural minority groups, recent data suggest that new technology is enabling marketers to reach these groups even more effectively.

2010's top 3 most popular methods of reaching multicultural consumers through new media:
1. The company's website (75%)
2. Online ads (72%)
3. Search-engine marketing (71%)

It's interesting to note that the use of other new media to reach specialty populations is growing rapidly: 32% of respondents said they used location-based apps in 2012 to reach multicultural segments (up from 2% in 2010), the use of blogs increased from 27% in 2010 to 44% in 2012, and 64% reported using mobile marketing (59% in 2010).
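Laid out as percentage-point changes, the growth in those three channels is easier to compare; a quick sketch using the figures above:

```python
# Percentage-point changes, 2010 -> 2012, for the channels cited above.
channels = {
    "location-based apps": (2, 32),
    "blogs":               (27, 44),
    "mobile marketing":    (59, 64),
}

for name, (pct_2010, pct_2012) in channels.items():
    print(f"{name}: {pct_2012 - pct_2010:+d} points "
          f"({pct_2010}% -> {pct_2012}%)")
```

Location-based apps grew 30 points from a 2% base, by far the steepest climb of the three.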

These trends suggest that Internet and GPS technologies will continue to make special-population sampling easier and more efficient. GPS technologies are particularly useful because people who reside in the same geographic area tend to resemble each other in many ways, particularly in socio-economic status and culture. Blogs and mobile marketing are also proving useful for reaching smaller populations, as they create a sort of "specialty environment" where members of a group congregate, either on the same blog or through similar applications on their mobile devices.

These findings matter to market research because better targeting raises response rates: the more precisely marketers reach their intended populations, the more likely participants are to respond, since the information is truly relevant to them.

A "juicy" blog

Brain Juicer, a market research company that thrives on creativity and prides itself on its innovative methods, keeps a blog about all things human behavior and behavioral research.

The blog is mostly loaded with market research experiments by Brain Juicer and other companies, but it also occasionally throws in industry-related cartoons and ads.

One of the entries I found most interesting and relevant to our course material had to do with the effectiveness of click-through advertisements. Citing an article originally posted on AdAge, the Brain Juicer post "I belong to the Blank Generation" details an experiment that measured the number of click-throughs on six blank ads and compared it to the click-through numbers for other, branded ads.

What the researchers ultimately found was that the click-through rates for the blank ads did not differ significantly from those for actual branded ads. This finding raised the question: are click-throughs a reliable metric of online behavior?
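The statistical comparison behind "did not differ significantly" is essentially a two-proportion test. A minimal sketch with invented click counts (the post doesn't give the experiment's raw numbers):

```python
import math

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """z statistic for H0: the two click-through rates are equal."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (clicks_a / n_a - clicks_b / n_b) / se

# Hypothetical counts in the spirit of the experiment: blank and branded
# ads each served 100,000 times with nearly identical click totals.
z = two_proportion_z(clicks_a=82, n_a=100_000, clicks_b=90, n_b=100_000)
print(f"z = {z:.2f}  (|z| < 1.96 -> no significant difference at the 5% level)")
```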

To make sure the results were accurate, the researchers used various methods to detect any potential click fraud; these methods included tracking "...hovers, interactions, 'mouse downs,' heat maps--everything. (Heat maps detect click fraud because bots tend to click on the same spot every time.)"
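The parenthetical about heat maps is easy to caricature in code: a bot hammers the same pixel while human clicks scatter. A toy sketch with invented coordinates:

```python
from collections import Counter

# Hypothetical click coordinates on one ad; a bot tends to hit the exact
# same pixel repeatedly, while human clicks scatter.
clicks = [(45, 12), (44, 13), (45, 12), (45, 12), (45, 12),
          (102, 56), (45, 12), (98, 61), (45, 12), (45, 12)]

by_spot = Counter(clicks)
total = len(clicks)
for spot, count in by_spot.most_common():
    if count / total > 0.5:  # crude threshold for a suspiciously hot pixel
        print(f"suspect bot traffic: {count}/{total} clicks at {spot}")
```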

The results suggested that roughly 4 clicks in every 10,000 impressions are unintentional. The research also indicates that online noise confounds the reliability of click-through rates as a metric: the clutter users encounter online leads to mistaken clicks, distorting the picture of intended behavior and leaving behavioral data almost indistinguishable from the surrounding noise.

Brain Juicer sums it up best: "We are great believers in focusing on behaviour, and that changing behaviour should be a research outcome. But - especially online - there is an awful lot of tempting behaviour to measure, and it's easy to be seduced by that. 'If you can't measure it, you can't manage it,' the gurus tell us, and they sound very pragmatic. But it doesn't make 'If you can measure it, you can manage it' any truer. A click seems concrete, but may be as insubstantial as... a blank advert."

These findings further reinforce the importance of working backwards in research: it's more important to focus on how data will be applied than on how much data can be acquired. Just because there is data to measure doesn't mean it's reliable, or even applicable to any business objective.

Market research predictions for 2013

Greenbook webinar: 2013 market research predictions

Greenbook recently hosted a webinar on upcoming industry trends for 2013.

Highlights include:

-Marketers are trending toward "do-it-yourself" research (like Survey Monkey)
-Analytics are becoming marketers' primary focus and are increasingly integrated
-Qualitative and quantitative research are becoming less distinct and will continue to merge, with qualitative items appearing in quantitative methods (like surveys) and vice versa
-Mobile surveys will continue to gain popularity for consumer-oriented research and, as a result, will lead to shorter surveys and more streamlined research objectives across all mediums/projects
-On the other hand, it will become increasingly important to weigh results against other survey sources, since mobile and social media research can produce results biased toward specific demographic and behavioral audiences (a weighting sketch follows this list)
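One standard corrective for that kind of demographic skew is post-stratification weighting. A minimal sketch with invented shares, where an over-represented younger segment pulls the raw estimate well away from the population-weighted one:

```python
# Minimal post-stratification sketch (hypothetical figures): re-weight a
# mobile-heavy sample so an age group over-represented online does not
# dominate the estimate.

sample_share = {"18-29": 0.55, "30+": 0.45}      # share of survey respondents
population_share = {"18-29": 0.20, "30+": 0.80}  # share of target population
approval_in_sample = {"18-29": 0.70, "30+": 0.40}

raw = sum(sample_share[g] * approval_in_sample[g] for g in sample_share)
weighted = sum(population_share[g] * approval_in_sample[g] for g in sample_share)
print(f"raw estimate: {raw:.0%}, weighted estimate: {weighted:.0%}")
```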

'Tis the season for...surveys?

Starting a couple of weeks before Thanksgiving, I noticed a huge increase in the number of email surveys I receive from companies I've previously shopped with online. Most of the surveys ask me to rate the company's website and, if applicable, its in-store customer service. Since this is the biggest time of year for retailers, that really isn't too surprising; companies want both to remind consumers that they exist and, more importantly, to revamp the customer experience where necessary to ensure that customers spend their money with them this holiday season.
A few of the companies have offered incentives for taking the survey, like "exclusive" money-saving deals, while others have promised entry into a prize drawing. I've found it particularly interesting that at the end of many of these surveys, the company takes the opportunity to thank me and ask me to provide friends' or family members' email addresses, which would increase my chances in the drawing. It seems that this holiday season, companies are using email surveys not only to gather customer feedback but also to grow their customer base by incentivizing existing customers to recommend others for a potentially bigger payoff.

mQuest: mobile survey app

With mobile usage ever increasing, it was only a matter of time before mobile survey apps became a new trend. Here, we look at the pros and cons of mQuest, one of the leading mobile survey apps.

Pros:
-The survey can be corrected in real time, even while it is ongoing, to fix any errors or confusing items that are discovered.
-Surveys can be supplemented with embedded visuals, audio, and videos.
-Surveys can be translated into multiple languages, allowing for diverse samples of people to participate.
-Built-in "plausibility checks" to catch invalid inputs (see the sketch after these lists).
-Fast (responses are saved as the survey is taken) and FREE.
-Survey questions can be made in many different formats.
-Works for both Android and Apple devices.

Cons:
-Limited number of questions that one can ask on a mobile device.
-Questions on a mobile device cannot be too complex, since people are on the go.
-Not everyone owns or has access to a mobile device.
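To make the "plausibility checks" item above concrete, here is a minimal sketch of the kind of validation a mobile survey tool might run as answers come in; the field names and ranges are hypothetical, not mQuest's actual API:

```python
# A minimal sketch of an answer "plausibility check" run as responses
# are entered (field names and ranges are hypothetical).

RULES = {
    "age": lambda v: 13 <= v <= 110,
    "hours_of_sleep": lambda v: 0 <= v <= 24,
}

def check(answers: dict) -> list[str]:
    """Return the fields whose values fail their plausibility rule."""
    return [field for field, ok in RULES.items()
            if field in answers and not ok(answers[field])]

print(check({"age": 250, "hours_of_sleep": 8}))  # -> ['age']
```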

Overall, mQuest seems to have more advantages than disadvantages; however, researchers must make sure that mobile surveys actually make sense for the data they are trying to collect. Just because mobile technology is a growing medium doesn't mean it is the right medium for every research study.

mQuest's website

Observational study questions mammograms

The New York Times recently published an article on the results of a 30-year observational study which found that mammograms don't work as well as the public believes. While the results showed that mammogram screening did lead to increased detection of early-stage breast cancers, "the number of cancers diagnosed at the advanced stage was essentially unchanged." If mammograms really were effective at finding deadly cancers sooner, then cases of advanced cancer should have been reduced; that was not the case.
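The article's logic is simple arithmetic: every extra early-stage detection that effective screening produces should eventually show up as one fewer advanced-stage diagnosis. A sketch with illustrative per-100,000 figures, invented here to mirror the pattern the study describes:

```python
# The article's logic in numbers (all figures hypothetical): if screening
# truly catches deadly cancers sooner, extra early-stage detections should
# be matched by a drop in advanced-stage diagnoses.

per_100k_before = {"early": 112, "advanced": 102}
per_100k_after  = {"early": 234, "advanced": 94}  # big early rise, tiny advanced drop

early_rise = per_100k_after["early"] - per_100k_before["early"]
advanced_drop = per_100k_before["advanced"] - per_100k_after["advanced"]
print(f"early-stage rise: +{early_rise} per 100k")
print(f"advanced-stage drop: -{advanced_drop} per 100k")
print(f"unmatched detections (likely overdiagnosis): {early_rise - advanced_drop}")
```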

The Times article details why observational studies are usually hard to trust for reliable results; it mentions issues with observational data that we have discussed in class, such as confounding variables and the lack of randomization. The interesting part is that although the Times raises these common objections, the article advocates for the study's results, since experimental and longitudinal studies have found similar results, yet those have been ignored for the past decade.

The Times article states, "It is normally troubling to see an observational study posing questions asked and answered by higher science. But in this case the research may help society to emerge from a fog that has clouded not just the approach to data on screening mammography, but also the approach to health care in the United States. In a system drowning in costs, and at enormous expense, we have systematically ignored virtually identical data challenging the effectiveness of...cancer screening...and more."

Overall, the article makes a good point: when the results of experimental methods and trials are ignored, observational methods may be able to help break socially accepted "fact." Although experimental methods control for confounds through randomization, in turn providing more reliable and generalizable results, observational research may prove a good supplement by offering a less scientifically laden, more understandable methodology and approach.

Reaching the hard-to-reach: Sleep Innovations

Sleep Innovations, a mattress and specialty sleep product company, recently faced a challenge when it came time to gather consumer feedback to guide new product brainstorming.

As a sleep product company, it was hard to engage specific users: there is no directory of people organized by mattress brand, nor a list of consumers who have bought new mattresses in a given time period. However, to grow the company and make new products, this is exactly what Sleep Innovations needed to do; it needed to reach a hard-to-reach segment to uncover new user insights into consumers' attitudes toward, and usage of, sleep products.

Knowing that this segment would be hard to reach, Sleep Innovations created online surveys to pre-identify the brand's purchasers and purchase intenders. This extra work up front in pre-identifying the sample ended up saving time and money in the execution of the company's actual "Sleep Talkers" panel community.
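A screener of this kind boils down to a simple filter over responses. A minimal sketch; the survey fields are hypothetical, not Sleep Innovations' actual questionnaire:

```python
# A minimal screener sketch (hypothetical fields): keep only respondents
# who are recent purchasers or stated purchase intenders.

respondents = [
    {"id": 1, "bought_mattress_last_2y": True,  "plans_to_buy": False},
    {"id": 2, "bought_mattress_last_2y": False, "plans_to_buy": True},
    {"id": 3, "bought_mattress_last_2y": False, "plans_to_buy": False},
]

panel = [r for r in respondents
         if r["bought_mattress_last_2y"] or r["plans_to_buy"]]
print([r["id"] for r in panel])  # -> [1, 2]
```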

The panel community has been successful in gaining consumer insights: the pre-selected consumers find the information relevant, are encouraged to take photos of their "sleep experience" and post on community discussion boards, and are invited to take part in more traditional research methods, like concept testing and new product screening.

Sleep Innovations even developed an additional online survey template that could track responses over time. In one instance, this survey surfaced an important insight: one of its mattress brands was not selling well in a certain region because the brand's image was perceived as too upscale for its low price. Sleep Innovations raised the brand's prices and watched sales increase.

Overall, Sleep Innovations successfully reached its "hard-to-reach" population by implementing a step-by-step approach: using custom screening to pre-identify the users for whom the information would be most relevant and interesting, prioritizing the actual research activities over the company's own product development goals, establishing clear research objectives for the panel to get the most directed feedback, and using visuals to gather feedback.

Check out the whole case study here.