Mental Health Study

There was a blog post written about a mental health study of college students. It found that many students have mental health issues and that, of those who do, about 64% end up dropping out of college. The post discusses how mental health should be properly treated and addressed to avoid these unfortunate outcomes; the first suggestion was to train staff and faculty to support students dealing with these conditions.

The actual study write-up gave a lot of interesting insights and statistical information on mental health: who it affects most, and which types of conditions typically affect those groups.

From what I read, the study raised no ethical concerns. It did, however, present some bias: people had to be willing to participate despite these conditions, which are often sensitive and hard to discuss, so the participants had to be relatively open about the subject for the data to be reliable and valid.

U of M Health Behaviors of College Women Survey

Today I received an email from the University asking me to complete a survey on the health behavior of college women.
The start of the survey had the necessary consent information and background about the survey. However, some of the questions that followed had issues.
The first was the question "Were you born in the US?", with the answers Yes or No. The question after that is the one that really raised concern with me. It was,
"What generation are you?"
With the answers:

1st generation (I immigrated to the United States)
2nd generation (my parents immigrated to the United States)
3rd generation (my grandparents immigrated to the United States)

The issue here is that the categories are not exhaustive. I, like many others I am sure, do not fall under these categories, and, therefore, cannot answer properly.

This question I also found confusing:
"Please rate your reaction to each statement on a zero-to-10 scale on which a 10 means you think the statement is a very strong and important concern."

This was confusing since they did not fully explain the scale: they gave a meaning for 10 but never defined the low end, 0, or any of the points in between.

There were also many very personal and controversial issues asked on the survey, some of which I don't think matter in the grand scheme of health behavior. I was expecting to be asked about diseases or medical conditions I have or have experienced.

This survey did use repeated questions throughout to check the consistency (and thus reliability) of responses. I was asked multiple times whether I had certain tests done or not.
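
As an aside, this kind of repeated-question check is easy to compute once responses are collected. Here is a minimal sketch; the question IDs, pairs, and answers are made up for illustration and are not from the actual survey:

```python
# Minimal sketch of a consistency check between repeated survey questions.
# The question IDs, pairs, and answers here are hypothetical.

def consistency_rate(responses, repeated_pairs):
    """Fraction of repeated question pairs a respondent answered the same way."""
    matches = sum(1 for a, b in repeated_pairs if responses[a] == responses[b])
    return matches / len(repeated_pairs)

# Hypothetical repeated items: q3/q17 and q8/q22 ask the same thing twice.
pairs = [("q3", "q17"), ("q8", "q22")]

respondent = {"q3": "yes", "q17": "yes", "q8": "no", "q22": "yes"}
rate = consistency_rate(respondent, pairs)  # 1 of 2 pairs match -> 0.5
```

A researcher could flag respondents with a low rate as inconsistent and weigh their answers accordingly.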

The survey was long and focused heavily on women's issues like pregnancy and abortion, which was not what I expected from the title and introduction. I did not like that and found it slightly deceiving, which raises some ethical concerns as well.

EPA and unethical human testing

The EPA is being sued for unethical human testing at its site on UNC's campus, which may have been happening for about 8 years. The suit, brought by the American Tradition Institute, alleges a failure to protect human test subjects from harmful risks. The testing center is on UNC's campus but is not actually affiliated with the University; the EPA only leases lab space there. Though the testing center is run by the government, UNC's Biomedical Institutional Review Board must review the experiments before they are run, which places partial blame on the review board and UNC as well. The experiment that set off the investigation involved participants inhaling large amounts of an airborne pollutant in a glass chamber. The tests were also performed on unhealthy (obese and asthmatic) people, since they would be more susceptible to the effects.

Obviously, this experiment (and the others in question) is extremely unethical and should not have been passed by the review board. It would not, under any school of thought or ethical theory, pass as ethical, especially because the rewards in no way outweigh the risks. I can see how the EPA may have argued its case to the review board: that a few volunteers, hopefully briefed on what they were getting into, could donate themselves for the betterment of the population as a whole. This, however, still should have been caught and questioned by the review board, and the EPA forced to find another way of testing air pollution and its effect on human breathing. Also, according to scientific reports and the EPA's own testimony to Congress, the pollutant they tested on humans, PM2.5, is one of the most toxic substances on earth.

None of the current ethical codes would approve of this type of experiment. The only component the EPA actually followed in this case is informed consent, and even that is questionable, since the EPA has been lying about the experiments and their conduct in the first place. Who knows whether the test subjects actually knew what they were participating in? And who is to say they were not misled into believing they weren't being exposed to toxic or harmful substances? This report reflects poorly not only on the government, the EPA, and UNC, but on research as a whole. Since the EPA is a government agency, it is hard to say whether there will be a truly fair trial. I'm also sure university experiments and experimental facilities will be under even stricter guidelines and more intense review after this awful occurrence.

Surveying Latino Voters

This particular article discusses the predicted voting habits of Latino voters in this year's presidential election. I found a few issues with the article's wording and reliability, especially in the behavior vs. response department, and the data seemed to be presented so as to appear more optimistic.

A quote from the article that I found misleading: "A survey released this week indicates 87 percent of Latinos nationwide are 'almost certain' they will vote next Tuesday."
I found this misleading because the title of the article is "Survey: 87 percent of Latino voters will go to polls." This was a manipulation by the author to make readers more shocked and intrigued. Since the actual response was "almost certain," that really does not tell us much. I could tell you that I am almost certain I am going to eat at a restaurant tonight when, in reality, there may only be a 30% chance. This response falls into a grey area, as people will have different perceptions of what "almost certain" actually means and how sure they should be to mark that answer. Also, even if 87% are almost certain they will vote, their behavior on election day can prove very different. Something else may come up, they may forget, or they may run out of time to register. Plenty of things could keep that 87% of Latinos polled from actually going.

Another problem is that there are a lot of illegal immigrants from Latin American countries here, who most likely were not included in this survey, as they cannot vote. Therefore, the actual percentage may be skewed, since a significant portion of the total Latin American population is left out of the data from the start.

Another issue I found was in this statement: "The poll shows that this year we can anticipate record participation among Latino voters," impreMedia CEO Monica Lozano said. This could be misleading in that our country has more Latinos this year than in previous election years, so record participation in absolute numbers is almost expected. That is by no means a bad thing, but the data is misleading unless she means the percentage of Latino voters who participate, in which case the speaker should really specify.

This survey seems to have content validity in that it measures what it should measure: the outlook for Latino voters. But how reliable and repeatable is this data? It is hard to say, as political surveys often change from day to day, and it is hard to actually predict voter behavior before the election.

Get paid to take surveys!

Recently, a short paper survey was sent to my apartment. With it was a letter saying that if you filled out the survey and sent it back, they would send you $5. As many college students would do for the cash, I filled it out and sent it back. It was a brief survey about the ages of the people living in the home, TV viewing habits, and radio and newspaper habits.
About a week after I sent in the survey and received my $5 in return, I got a phone call from the company asking me to complete a somewhat longer phone survey; in return, they would send me $10. Again, I completed the survey. This one was quite a bit longer. The interviewer asked how much I listen to the radio, how many times in the past week, what stations, and for how long during different parts of the day. He also asked similar questions about newspapers, television, and internet use. Most of the questions were based on my personal habits and what services or products I read, view, or purchase on a regular basis. Toward the end of the survey I was also asked a few questions about who I lived with, my roommate's habits, and our background, i.e., whether we were enrolled in a university, working, etc.
The surveyor was very kind and tried to be engaging, but, quite frankly, if I were not getting paid and already committed to the survey, I would have hung up. It lasted almost half an hour, and I had other things I could have been doing. He thanked me many times for the good data I was providing and for being kind throughout the survey. In my opinion, though, the survey would have gone much quicker if I could have completed it on paper or electronically; I saw no reason it was important to use the phone for the second portion. The questions were all very straightforward, and many were yes/no or multiple choice. The reliability may have been increased by the human contact and the multiple contacts from the company, but people can still lie whether they are completing a phone or email survey, though it is harder not to respond when talking to an actual surveyor.
In conclusion, I understand that it is difficult to collect large amounts of valid data, especially when people live busy lives and a company wants them to complete a half-hour phone survey. The payment does help, and it did not sway my responses, as they were simply about media usage.

How do you feel about the new iPad launch?

This particular survey asked respondents how they felt about the timing of the release of the new Apple iPad 4 and iPad mini, and compared the two to other similar devices such as the Amazon Kindle. The survey had a sample size of 2,000 online consumers. It asked questions about people's general opinion of Apple, the timing of its current product releases, and buying intentions toward these new products. It found that many people were angry that there were only a few months between the release dates of the iPad 3 and iPad 4 and, additionally, the iPad mini. The respondents were adults 18 and older who had elected to participate in Toluna online surveys. The company weighted the percentage of previous iPad owners in the group to resemble that of iPad owners nationwide.
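
For context, the weighting the company describes (matching the sample's share of iPad owners to the national share) works roughly like this sketch; the percentages below are made up for illustration, not taken from the survey:

```python
# Rough sketch of post-stratification weighting: reweight respondents so the
# sample's share of iPad owners matches a national figure. All numbers here
# are hypothetical.

def poststratification_weights(sample_owner_share, population_owner_share):
    """Weight per respondent in each stratum: population share / sample share."""
    return {
        "owner": population_owner_share / sample_owner_share,
        "non_owner": (1 - population_owner_share) / (1 - sample_owner_share),
    }

# Suppose 40% of the 2,000 respondents own an iPad, but only 25% do nationwide.
w = poststratification_weights(0.40, 0.25)
# Owners get down-weighted (0.625) and non-owners up-weighted (1.25), so the
# weighted owner share in the sample equals the national 25%.
```

Each respondent's answers are then multiplied by their stratum's weight when the results are tallied.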
There are a few trouble areas I see with this particular survey. The first is the respondents: since people had already elected to be a part of these online surveys, their participation is voluntary, and the results will thus show some self-selection bias. Voluntary survey takers also often resemble one another, so it is possible the sample is not representative of the nation as a whole, even though the company claims to have matched the proportion of iPad owners in its sample.
The second issue with this survey, I believe, is the behavior vs. response issue we have previously discussed in class. In a couple of questions, respondents are asked whether they would buy the new iPad, iPad mini, or another device. Depending on their anger with Apple or their general mood at the time, they may say they will not purchase any items; conversely, if a respondent just received a raise at work, they may claim they will buy all the new products. So, again, this response bias and buyer-behavior issue might ultimately impact the reliability of the survey.
Also, as with all online surveys, you cannot be 100% sure that people are who they say they are. For example, a person may claim to be a 30-year-old iPad owner but actually be a 40-year-old non-owner who is not interested in any of the other products. This will also skew results that have been adjusted for age, race, sex, and education, as the responses would not necessarily line up and you may end up with outlying data.

Pictures are worth a thousand words

This is actually an article I wrote for my internship this summer. It discusses why a company should be interested in using Instagram, since a lot of the work I did dealt with social media use in companies, and why pictures are so important to a website and its success.

Pictures spark more interest in consumers, as they add a face to your otherwise faceless brand. They create a bridge between brand and human. This often builds trust in your brand, as a person is more likely to trust a person (or company) with whom they feel personally connected than one that is just a name on the web. Pictures get more "likes" on Facebook than text-only posts do and are often found to be more memorable.

This dips into the realm of research because you must look into what your audience yearns for most: text, pictures, or videos. Many studies have already examined what type of content sparks the most response, and what kinds of responses. In today's world it isn't just about knowing what your audience desires; you have to be the best provider of that product or service. Pictures, and Instagram, elicit emotion and help researchers understand what makes their audiences tick. How to bring out these emotions is often a mystery, but pictures are an easy way to judge reactions without having to actually be present to survey viewers or consumers. For example, if you release a sneak peek of an advertisement for a new soup with a picture of three adorable children, and another with an image of a middle-aged person alone, you will most likely get more likes on one than the other, and that one is the winner with your target market.

Pictures are another qualitative component that can drastically improve research responses. You can also gain quantitative data from sites like Facebook or Twitter: you can compare the numbers of likes, comments, or shares on Facebook and track retweets and favorites on Twitter. The combination of both types of data from one source, pictures, is incredibly efficient and valuable to brands everywhere.
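
That quantitative comparison can be done in a few lines. Here is a sketch; the post data is entirely made up for illustration:

```python
# Sketch of comparing average engagement for picture posts vs. text-only
# posts, the way a brand might. The posts and numbers below are hypothetical.

def average_engagement(posts, post_type):
    """Mean of likes + comments + shares across posts of the given type."""
    selected = [p for p in posts if p["type"] == post_type]
    total = sum(p["likes"] + p["comments"] + p["shares"] for p in selected)
    return total / len(selected)

posts = [
    {"type": "picture", "likes": 120, "comments": 14, "shares": 9},
    {"type": "picture", "likes": 95, "comments": 10, "shares": 6},
    {"type": "text", "likes": 40, "comments": 5, "shares": 2},
    {"type": "text", "likes": 55, "comments": 7, "shares": 3},
]

picture_avg = average_engagement(posts, "picture")  # (143 + 111) / 2 = 127.0
text_avg = average_engagement(posts, "text")        # (47 + 65) / 2 = 56.0
```

In this made-up data the picture posts average more than twice the engagement of the text posts, which is the kind of finding the studies above report.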

Ethical Considerations

This is an article on ethics out of Australia, which, I believe, goes to show how universal the issue of ethics really is. It also demonstrates how the same ethical issues are being deliberated globally: many countries appear to face the same questions when researching particular groups, in this case children under the age of 18.
The issues found here are consent and privacy. The article asks: is a person aged 18 years and one day really that much more mature and able to consent for themselves than a person aged 17 years and 10 months? I found this quite interesting, as maturity levels at this age vary greatly, and the distinction really should not make a difference, provided the research itself is ethical. The other issue, privacy, is touchy because, again, maturity levels vary greatly and parents are not involved in all the personal behaviors of their children that may be surveyed, so why would they be willing to give consent for their children to disclose that information to a surveyor? The ethical concerns in this article go far beyond parental consent and privacy, though, and stretch into public vs. private information (such as when using the internet or mobile devices) and being overheard in public spaces.
I believe this article is very thought-provoking and gives good insight into the perspectives that should be used to discuss ethical situations. As we know, ethics is a big factor in most research projects, as researchers must be careful not to overstep their boundaries and fall into the potentially detrimental ethical situations that could arise from mishandling participants or their information.

Social branding and consumer emotion

Branding today has been drastically modified by social media and other interactive technologies, like smartphones. Brands now need to be concerned not only with the content they produce, but also with what users are saying to other users or potential users. Certain types of content stand out more to the human brain than others, and companies need to learn exactly what that content is so they can get their brand out there and acknowledged in today's terms.
Dr. Paul MacLean classified the brain into three sections: the reptilian brain (decision-making), the limbic brain (the emotional center), and the neocortex (rational thinking). This can hopefully help marketers understand which content taps into which part of the brain when consumers view brand messages. Also, according to this study, 95% of our decision-making is not influenced by rational, conscious thinking, making it more difficult to market directly to consumers. Instead, brands must tap into emotions, instincts, and trust to get their messages across.
This article reminded me a lot of the qualitative research we have discussed in class. Qualitative research taps into emotion and gives further insight into what the brands would need in order to really drive a message home to its consumers. Also, when we discussed focus groups we talked about how many decisions are unconscious choices and it is hard to be aware of those in the presence of others, or difficult to explain why exactly you felt a certain way or chose a particular item over another.
Just as people are often spontaneous in their actions and choices, the article suggests that brands should be the same. This makes a lot of sense to me, because it would make the brand appear more human and less scripted. A spontaneous online 30%-off sale for Facebook fans would not only gain interest and probably fans, but also make the brand more interesting and well liked by consumers. If a brand does not announce every sale, promotion, or action before it happens, and customers know this, it keeps customers engaged and interested. I know from personal experience that I am more likely to pay attention to the brands I follow on Facebook or Twitter that do this, so that I don't miss out on what they offer. As a result, I am more aware of and favorable toward those particular brands and more engaged in their activities. Although I am aware of this because of the major I have chosen, many people are not, and would not be entirely aware of why they are so in love with a particular brand, which again plays into unconscious decision-making.
Also, there is a lot to be said for being a social brand and being involved with your consumer base. Consumers are more likely to trust a brand that listens to their feedback and changes based upon it than one that tweets out 50 messages a day but never responds to negative or constructive feedback.

Differences in Work and Communication Styles

This survey discusses the generational differences in the workplace between workers 55 and older and those 25-34. It is becoming increasingly common to have younger people in professional and managerial positions. Though this does not bother older workers, it often changes how they work, as the younger generation of workers often thinks, communicates, and approaches projects in slightly different ways.

Though the content and construct validity appear to be OK, this particular study seems to be missing some vital information. One issue is coverage: what happened to workers between the ages of 35 and 54? I'm sure many workers in this age bracket also have opinions, so the study comes across as not very reliable when it skips an entire 20 years' worth of workers.

The major differences I can see from this study are in how workers believe work should be done, i.e., at home or in the office, and how the two groups approach projects. The younger generation would prefer to develop a detailed plan, while the older generation would prefer to jump right in. This could stem from differences in educational background or just personal preferences that happen to correlate this way. There is no doubt that, with advances in technology, the modern job market and workplace are shifting, so being aware of the differences between age groups is vital to a successful work environment.