(What follows might be too long for this venue, but I really wanted to think these ideas through. If you make it to the end, I would appreciate any and all comments!)
The Adrianson article has been pivotal in my search for epistemological perspective. What is my worldview, and how does it influence my approach to research? Granted, it's just one article and I'm destined to read many more, but it has compelled me to examine my entrenched assumptions about what constitutes "good" research. In a nutshell, I've long held that credible research tests hypotheses through objective, rigorous procedures of structured data collection and statistical analysis. Perhaps this reflects where and when I did my Master's research in the late '80s, or maybe it's the fact that I live with a polymer research chemist! Of course, the most likely explanation is the pervasiveness of scientific certainty in our culture. As Longo explains, scientific and technical knowledge is indeed "the dominant way of knowing." Even the election pollsters are given that dash of credibility when Peter Jennings announces the "scientific" poll results after each debate. My assumptions squarely in hand, I've approached our qualitative readings with a skeptical eye. Sure, these studies explore the communication complexities of power, expectation, and marginalized groups, but how can they ever be taken seriously in a world dominated by the scientific method? Where were the stringent rules of research, the generalizability to a wider population, and so on? Could this mean I leaned toward the post-positivist perspective? Then I read Adrianson.
In class we raised the question of worldview and "what constitutes knowledge" in the Adrianson article. While Adrianson doesn't openly declare a post-positivist perspective, her methodology is primarily concerned with isolating discrete variables, examining numeric observations, and describing causal relationships. Even though she incorporates qualitative elements, her specific methods of data collection reflect an objective, hypothesis-driven orientation toward knowledge development. (Question 1: Is this study considered mixed-method research, even though Adrianson's feet seem firmly planted in the quantitative realm with only a slight raise of her toe to the qualitative?) This study should have been right up my (potentially) post-positivist alley! But it didn't address my questions of usefulness. How might these results lead to improved communication processes? This perspective, I am learning, falls into the pragmatic view of knowledge.
The pragmatic perspective is still very much evolving and is pulled in multiple directions by differing viewpoints, which I won't trouble you with now. However, there are some overriding themes that give clarity to the perspective. Pragmatism focuses on the practical and social value of knowledge and views its acquisition as a purposeful response to life's demands. The belief is that through new insights we are better able to cope with the challenges life presents. As I understand it, rather than repetitively applying theory and proving or disproving hypotheses, the pragmatic researcher distances herself from assumption and remains open to discovering future possibilities. The focus is on the usefulness of knowledge. (Question 2: Are any of you also exploring the pragmatic perspective? I'd enjoy discussing it further and have some illuminating articles I'd be glad to share.)
This perspective helps to explain my frustration with Adrianson's detached, insubstantial explanations. She attempts to examine the influence of social context, yet the narrowly categorized elements are too isolated from the intricate complexities of human interaction. For example, she conjectures that the lack of feedback support in the Eudora mail system "may have influenced" the participants' motivation for the discussion. This is an interesting point; however, without more substantial analysis of the participants' perspectives, she can't address the myriad other variables that may also have influenced motivation. An even greater influence might have been the contrived nature of the situation or the length of time the discussion took.
I recognize that research as a whole is an incremental process and that we move our knowledge forward in small, often humble steps. We might not be able to "use" Adrianson's results in a practical sense, but in the tradition of empirical science, they will no doubt contribute to future research hypotheses. But I struggle with what we lose when we isolate and quantify variables in an effort to produce "credible" research. If we infer only from statistically significant results, does that mean the insignificant data has no value? In a process as complex and context driven as communication, don't even the small, unique nuances contribute to our formation of meaning? My concern is that we narrow the potential for discovery when we frame our research in hypotheses and presuppositions. We risk looking only in one direction and not seeing the possibilities waving at us from the data that doesn't qualify as a 'result.' Does this sound like pragmatism to any of you?
Rhetoric and Technical Writing scholars often stress the validity and rigor of the qualitative studies that our discipline relies so heavily upon. However, there also seems to be a sizable contingent that argues for scientific and quasi-scientific studies because quantifying such information provides the discipline with more “hard,” irrefutable data. Some also argue that aligning our work more closely with the hard sciences gives our discipline more credibility. Do you think this is so? Is this a credibility that Rhetoric/PTW wants or needs? Is it something that we should call for within the discipline?
My first question is a request for clarification about statistics from someone who's never taken a statistics course. I understand the fundamentals and the reasoning behind statistics – you don't have to convince me of that – this is really just a nit-picking question that has always nagged at me, and I wanted to get an answer. What I'm wondering is what level of reliability statistics based on a sample can ever really achieve – how close to 100% reliability can one get? Also, from the perspective of a statistician, is there actually (or should there be) an unstated assumption that every statistic is prefaced by the statement, "It is likely that..."?
The reason I ask is that in any given random sample, no matter how much precaution is taken to prevent otherwise, there is always the possibility, however slight, of drawing a biased, inaccurate, extreme-case sample. I know that this possibility is minimized by a greater sample size – the larger the sample, the less likely a chance happening becomes. So, for example, if you're flipping a coin repeatedly, the presumption is that 50% of the time you'll get heads and 50% of the time you'll get tails. However, if you only flip the coin four times, you might get heads all four times, which would betray the 50/50 assumption (this would, of course, be attributed to the sample size, which in this case was too small). But if you flip the coin 1,000 times, it's MUCH less likely you'll get heads every time – you're very likely to land at or very close to the 50/50 split – yet the possibility of getting heads every time does still exist. It isn't truly impossible: the probability is (1/2)^1000, vanishingly small but strictly greater than zero. As quantity goes up, the likelihood of a particular extreme outcome goes down, but the outside possibility remains. Think about the man who was struck by lightning eight times, Roy Sullivan. The likelihood of any one person being struck by lightning is very small (something around 1 in 700,000, I believe), and the likelihood of any one person being struck multiple times is smaller still – but, however small that likelihood is, there was someone who got struck repeatedly, and he was even struck eight times (what's the likelihood of that?). To further illustrate: if I were to survey voters for the upcoming election and I had a reasonable sample size, the sample was random, etc., it is very likely that I'll get usable, reliable results.
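For anyone who wants to see the coin-flip arithmetic above worked out, here is a minimal sketch in plain Python (the function name `p_all_heads` is just my own label for the calculation):

```python
from fractions import Fraction

def p_all_heads(n):
    """Probability that a fair coin lands heads on every one of n flips."""
    return Fraction(1, 2) ** n

# Four flips: an all-heads run is rare but entirely plausible.
print(p_all_heads(4))             # 1/16, i.e. 6.25%

# A thousand flips: the probability is still strictly positive,
# but astronomically small (on the order of 1 in 10^301).
print(float(p_all_heads(1000)))   # about 9.3e-302
```

So the intuition holds: the extreme sample never becomes impossible, it just becomes so improbable that no one will ever observe it.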
But isn't there a VERY slight chance that everyone I surveyed just happened to be only Bush supporters or only Kerry supporters? And still there's all the gray area between the extreme and the likely result – perhaps only a slightly more disproportionate showing of Bush or Kerry supporters than is actually the case nationally (I presume this is where margin of error comes into play). How does statistics account for the most extreme and unlikely samples, which, should they occur, would utterly destroy the reliability of a survey? I suppose one would just redo a survey if the results seemed way off expectations, but does statistics ever address this issue? Then again, maybe I'm just guilty here of not applying what Charney says Robert Slavin calls "educated common sense."
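To put rough numbers on the polling example above, here is a sketch of the standard 95% margin-of-error approximation for a proportion from a simple random sample (the 1,000-respondent poll and the evenly split electorate are my assumed illustration, not figures from any reading):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for proportion p in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll of an evenly split electorate:
print(round(margin_of_error(0.5, 1000), 3))   # about 0.031, i.e. +/- 3 points

# Chance that all 1,000 respondents back the same one of two equally
# popular candidates: 2 * (1/2)**1000 -- possible in principle, never in practice.
print(2 * 0.5 ** 1000)
```

This seems to be how statistics "addresses" the issue: the extreme sample is folded into the probability statement itself, which is exactly why every poll result implicitly begins with "It is likely that..."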
As someone who is new to designing a formal research methodology, such as an effective survey, I find all of the necessary considerations a little overwhelming. Subtle differences or changes in samples, or in how questions are phrased, can easily throw off your results. And there can always be considerations that one, as the researcher, never thought to consider. It's impossible, of course – or at the very least, highly unrealistic – to always be aware of everything. Furthermore, the world doesn't exist in a closed, controlled system, so chasing an awareness of every possible factor could very well be detrimental to one's study. I'm sure it's common for new researchers to be overly cautious about particular aspects of survey research. This isn't necessarily a bad thing, until it gets in the way of conducting effective research. There are so many considerations that need to be taken into account to obtain results that are reliable and that pertain to the question being addressed. How would you advise a research novice on how not to become overwhelmed by all of the possible considerations, or too focused on the minutiae of conducting surveys, as they set out to conduct their first survey? One consideration – do people often conduct pilot surveys, or pretests, to see whether a survey is effective in uncovering or providing the types of information the researcher hopes to receive?
Regarding the "Story of Subject Naught" article – the researchers said that they validated participants as Latino based on U.S. census data approximating location and race. How reliable is this method of validation? One can take many measures intended to weed out undesired participants, but this doesn't provide a verifiable check in the system. So it would seem suspect to use this information to generalize about the larger Latino community, since there's no way to be certain that participants are who they say they are – especially since the survey offers a financial incentive. Maybe an undesired participant (e.g., non-Latino, female, heterosexual) came across the study accidentally and posted it to a general chat board telling everyone they could make $20. Over the Internet, how can you be certain about the validity of the data pertaining to any of your subjects? I guess I find myself having a lot of reservations about using data from an Internet survey because there seem to be so many uncontrollable factors. Is this simply built into a study's assumptions – that there may be some invalid participation, but the vast majority will participate honestly?