1) It was interesting for me to note the similarities in the methods and heuristics of analysis apparent in Janel’s discourse analysis of the televised classroom and in Berkenkotter and Nyssa’s genre-linguistic analysis of the Zoo magazine. Both relied on bracketing their terms so that the data could be dissected and the parts stored for more granular examination. These two are, I think, good examples of how a mixed-methods (qualitative and quantitative) approach is needed in some contexts. The problems that must be addressed in these situations are not problems of theory or methodology, but problems of definition, data, criteria, comparison, etc.
2) It seems to me that genre analysis of the sort that Berkenkotter writes about, and the presentation of the research that we saw in class, is very similar to visual rhetoric. Selma brought up this point in our discussion, and I thought I would expand on it here. Berkenkotter writes in her essay, “The classroom is both institutional setting and activity system with its contextual cues, language, and bodily orientation, gestures, physical configuration of desks, microscopes, blackboards, computers and so forth. These are some of the tools, artifacts, speech genres, and settings that are constitutive of the institutionalized practices that students of all ages come to learn” (Berkenkotter and Thein, 186). The five principles of genre knowledge (dynamism, situatedness, form and content, duality of structure, and community ownership) could also be applied to visual rhetoric. In the zoo analysis, the researchers look at visual and verbal representations of the different communities characterized in the texts and the communities those texts wish to address.
Questions and Comments for Week 7
1. Audience and user research: Though Harrison explains her research is “old hat,” I was surprised to read the concerns in Allen and Southard’s and in Grice’s articles. Audience analysis seems so intuitive to techcomm, particularly within the context of rhetoric here at the University of Minnesota. All the concerns about the “whole” user experience seem equally apparent. Are we at the university just ahead of the game? Are these concerns serious ones?
2. Participatory research: What struck me as most interesting in Kemmis and McTaggart’s discussion about participatory research was the almost reversed status it has here compared to its status in our other readings so far. Whereas participatory research has so far been considered an aspect of qualitative research (particularly in our discussions of feminist and human subject research), here the authors bracket qualitative research (along with quantitative research) as a means to conduct participatory research. This seems to confirm my suspicions that methodology and methods are fluid terms. Other thoughts?
I want to comment on my second question. Janel’s discussion of discourse analysis (and then Carol’s discussion of genre analysis) has been helpful in clarifying my confusions with method and methodology as created through the Kemmis and McTaggart discussion. I will now retract my belief that methodology and method can be fluid terms. Even though the hierarchy in which participatory research and qualitative/quantitative research are placed seems to be reversed, a re-reading of some of our earlier readings in the context of these discussions from Week 7 and Week 8 has made me realize that participatory research is really always methodology, not method.
The key to clearing my confusion was Janel’s (and Carol’s) reference to method as “tool,” versus Creswell’s more general “techniques and procedures” (Research Design, p. 5). Creswell’s definition did nothing to help clarify a sentence in Lay’s chapter on feminist theory: “Dawn Currie and Hamida Kazi (1987) suggest that feminist researchers can competently use traditional social science research methods but also should adopt a ‘participatory model’ that ‘requires that the research question be of concern and of interest to the subjects’ (p. 81)” (Gurak and Lay, p. 167). Because participatory research is cited in the same sentence as methods, it was easy for me to confuse the two. The word “model” might have suggested that participatory research is part of the plan of action, but methods have models too. After all, to conduct her discourse analysis, Janel used someone else’s procedural model to design her own. In the context of Creswell’s definitions, this reference to participatory research seems to fit both methodology and method. However, considering method as tool makes it harder to class participatory research as method. Janel’s tool of discourse analysis had specific steps one had to follow to come to an outcome. What steps does one take in participatory research? It’s apparent from Kemmis and McTaggart’s article that the steps one takes in conducting research vary with context (critical action research uses different approaches from classroom action research, pp. 338-339). In essence, it seems harder to categorize participatory research as a tool. It seems to be more a framework that governs the choice of tool, e.g., using interviews instead of surveys, or discourse analysis instead of experimental analysis.
At any rate, to sum up, the last two classes have been valuable because they presented the nitty-gritty of research, how one employs a tool with all its specific procedures (e.g. coding then quantifying the codes) and helped me think through why I was confusing method and methodology with reference to participatory research.
Stemming mostly from my own experience and the sources of knowledge I find compelling, I keep coming back, particularly this week, to the place of theory and theory-building in research, wondering at what point an ethnographer engages theory (if one does at all). Emerson et al. posit a method of “inscription/translation/textuality” in which meaning emerges for the participant-observer through the act of writing about a culture from “within” that culture. Thus meaning is not applied to the subjects of study but emerges from the situation and process of study. How does an ethnographer position herself theoretically, then? For example, though suspicious of ethnography, Herndl is completely at home with his use of theory and seems to wish that ethnographies of writing would do the same.
When I think of my own research and the questions and issues that excite me, I realize that whether or not I make explicit use of Foucault, Althusser, Judith Butler, and even Lacan, my work will be influenced by their ideas of power, interpellation, gender, and subjectivity because my worldview is greatly shaped by these ideas. How then does the ethnographer navigate this terrain? Clearly going into a setting with an aim to “find Lacan” in the activity and culture of others is problematic, but then so is pretending that my own observations and analyses will not be influenced by aspects of his work. Besides the adoption of a “reflexive praxis” (which is a nebulous and highly interpretable category), what can an ethnographic researcher do?
And how does one, then, represent this use of theory in the final narrative? Obviously the incorporation of theory could be alienating to the participants of that “culture” of study, thus distancing them from the final result of the research. This issue of audience and respecting the participants’ wish to learn from and understand the final narrative is an extremely complicated one, when I think about the ultimate goal of such work – academic advancement for the researcher (call me jaded, perhaps). I wonder if Thein shared the final version of her study with teachers and administrators of Thompson High School. Would or did they find her discussion of control and power disturbing or inappropriate? Though I found her discussion to represent my own experience as a high school student, what would the actual teachers think of it?
Lastly, I am perhaps the last one to criticize an ethnographic method as lacking rigor, but I do wonder about the line between representation-as-research and representation-as-art. The “experiential style,” which at times postpones the writing of field notes to the end of the “immersion,” reminds me (CAUTION: pretentious reference to follow) of Wordsworth’s idea of poetry as emotion recollected later in tranquility. How is this model any different from travel writing or creative nonfiction? At least the “participating-to-write style” keeps the researcher actively engaged in the meaning-making process of writing, though clearly, as Emerson, Fretz, and Shaw point out, the situation can make jotting difficult.
According to Allen and Southard, “Because research discussions necessarily (even if tentatively) evaluate findings, participants might use the information provided in the research report to reconsider their behaviors or responses (e.g., strategies, organizational goals, responsibilities, and assessments—to name just a very few possible illuminations)” (139).
As a researcher I agree with Allen and Southard’s idealistic hope for audiences’ potential use of research to improve and change. However, past experience has taught me that some audiences will not appreciate research observations.
What can a researcher do when there is potential for audience dissatisfaction with the research results? Do unfavorable research observations devalue the potential of the overall usability of that research?
As I read Theresa Harrison’s “Framework for the Study of Writing in Organizational Contexts,” I was reminded of materials I have been reading about the “new literacies” needed for learning and communicating in our increasingly electronic culture. Many writers emphasize the need to teach students to evaluate situations and determine appropriate communication principles and techniques. Communication situations are growing more varied and are, even now, too numerous and changeable to prepare students for every particular situation they might encounter. Thus, some theorists who are attempting to identify future literacy needs seem to support the idea that a communication context cannot be known prior to entering it.
Allen and Southard’s chapter on audiences ended with a warning that not considering audiences and their needs when doing technical communication research would lead readers to “shrug and lament” about the status of TC research. We talked about apologia in TC research earlier in the term.
I thought it might be interesting to find some way to involve the public in submitting questions/issues for research--throughout the university, not just in TC. I realize that if the public took interest in TC research the way they do in medical research, it would bring both benefits and problems. Increased interest may bring more money and visibility, but it may also bring pressures to do certain kinds of research.
My third thought was about Janel's presentation and our discussion of assigning motives to our research subjects. It is true that we cannot know, but I think as meaning makers, humans always try to figure that out. I guess that is what motivates us to do research in the first place.
I'm pondering the final project, and wondering if anyone else has an interest in my research area, which is digital intellectual property. If so, please feel free to comment or drop me a line at kenne329[AT]tc[DOT]umn[DOT]edu. Thanks!
The question I have stems from the concern of a relationship between audience and researcher becoming “an entanglement” that can be “controversial and even damaging” (Allen and Southard 137). My intended research on environmental sustainability in the watershed where I grew up has the potential to lead towards this “entanglement.” In looking at potential conflicts in my case, I have outlined five examples:
1. I grew up next to (DNR Fisheries Section).
2. I am related to (operators of Duschee Hills Dairy).
3. I have had previous public disputes with (Minnesota Trout Association and Trout Unlimited).
4. An operator of (Reiland Farms) was (my 4-H dairy judging coach).
5. Key political figure (Greg Davids) defeated (my uncle).
I use the examples in a “fill-in-the-blank” template to show that many more potential conflicts could be identified (neighbors, relatives, political alliances, relations to businesses and organizations, etc.). Allen and Southard argue, in the tradition of feminist methodology, that there is an ethical imperative to reveal the “forces” acting upon my research to my audience (138). I agree. The complication I see arising from a theoretical standpoint is that this “revealing” is not merely an ethical question but one that impacts my ethos. I personally have no problem intertwining the two, but the question I have is: does this pose a problem to the feminist technique? If my ethos is enhanced by my audience’s perception that I am a local “good old boy,” does this jeopardize the merit of the feminist technique? In academic circles, we can probably come up with elaborate explanations to say why this is not the case. If my audience is the coffee conversationalists of Southeast Minnesota, I think it does.
(What follows might be too long for this venue, but I really wanted to think these ideas through. If you make it to the end, I would appreciate any and all comments!)
The Adrianson article has been pivotal in my search for epistemological perspective. What is my worldview and how does it influence my approach to research? Granted, it's just one article and I'm destined to read many more, but it has compelled me to examine my entrenched assumptions about what constitutes "good" research. In a nutshell, I've long held that credible research tests hypotheses through objective, rigorous procedures of structured data collection and statistical analyses. Perhaps this reflects where and when I did my Master's research in the late '80s, or maybe it's the fact that I live with a polymer research chemist! Of course, the most likely explanation is the pervasiveness of scientific certainty in our culture. As Longo explains, scientific and technical knowledge is indeed "the dominant way of knowing." Even the election pollsters are given that dash of credibility when Peter Jennings announces the "scientific" poll results after each debate. My assumptions squarely in hand, I've approached our qualitative readings with a skeptical eye. Sure, these studies explore the communication complexities of power, expectation, and marginalized groups, but how can they ever be taken seriously in a world dominated by the scientific method? Where were the stringent rules of research, the generalizability to a wider population, etc.? Could this mean I leaned to the post-positivist perspective? Then I read Adrianson.
In class we raised the question of worldview and 'what constitutes knowledge' in the Adrianson article. While Adrianson doesn't openly declare a post-positivist perspective, her methodology is primarily concerned with isolating discrete variables, examining numeric observations, and describing causal relationships. Even though she incorporates qualitative elements, the specific methods of data collection reflect an objective, hypothesis-driven orientation toward knowledge development. (Question 1: Is this study considered mixed-method research, even though Adrianson's feet seem firmly planted in the quantitative realm with only a slight raise of her toe to the qualitative?) This study should have been right up my (potentially) post-positivist alley! But it didn't address my questions of usefulness. How might these results lead to improved communication processes? This perspective, I am learning, falls into the pragmatic view of knowledge.
The pragmatic perspective is still very much evolving and is pulled in multiple directions by differing viewpoints, which I won't trouble you with now. However, there are some overriding themes that give clarity to the perspective. Pragmatism focuses on the practical and social value of knowledge and views its acquisition as a purposeful response to life's demands. The belief is that through new insights we are better able to cope with the challenges life presents. As I understand it, rather than repetitively applying theory and proving or disproving hypotheses, the pragmatic researcher distances herself from assumption and remains open to discovering future possibilities. The focus is on the usefulness of knowledge. (Question 2: Are any of you also exploring the pragmatic perspective? I'd enjoy discussing it further and have some illuminating articles I'd be glad to share.)
This perspective helps to explain my frustration with Adrianson's detached, insubstantial explanations. She attempts to examine the influence of social context, yet the narrowly categorized elements are too isolated from the intricate complexities of human interaction. For example, she conjectures that the lack of feedback support in the Eudora mail-system "may have influenced" the participants' motivation for the discussion. This is an interesting point; however, without more substantial analyses of the participants' perspectives, she can't address the myriad of other variables that may have also influenced motivation. An even greater influence might have been the contrived nature of the situation or the length of time it took.
I recognize that research as a whole is an incremental process and that we move our knowledge forward in small, often humble steps. We might not be able to "use" Adrianson's results in a practical sense, but in the tradition of empirical science, they will no doubt contribute to future research hypotheses. But I struggle with what we lose when we isolate and quantify variables in an effort to produce "credible" research. If we infer only from statistically significant results, does that mean the insignificant data has no value? In a process as complex and context driven as communication, don't even the small, unique nuances contribute to our formation of meaning? My concern is that we narrow the potential for discovery when we frame our research in hypotheses and presuppositions. We risk looking only in one direction and not seeing the possibilities waving at us from the data that doesn't qualify as a 'result.' Does this sound like pragmatism to any of you?
Rhetoric and Technical Writing scholars often stress the validity and rigor of the qualitative studies that our discipline relies so heavily upon. However, there also seems to be a sizable contingent that argues for scientific and quasi-scientific studies because quantifying such information provides the discipline with more “hard,” irrefutable data. Some also argue that aligning our work more closely with the hard sciences gives our discipline more credibility. Do you think this is so? Is this a credibility that Rhetoric/PTW wants or needs? Is it something that we should call for within the discipline?
Question 1: If your major emphasis in training is on qualitative research, how do you know when you read a quantitative article that the statistical work is accurate, and that the results are correct?
Question 2: Does a survey provide the researcher with information that fits their way of thinking rather than truly reflecting the respondent’s way of perceiving?
The readings for this week bring my thoughts back to the benefits and hazards of quantitative work. I have read and heard people use the terms fast, efficient, exact, calculated, convenient, easy, and generalizable when talking about surveys and questionnaires as a means of doing research. There certainly are many benefits to surveys, and there is a specific purpose they fulfill; however, this does not mean they are infallible. It is imperative to have a good understanding of the theory driving the questionnaire, of whether that theory is best represented by the type of answers a survey can produce, and of what type of analysis will be used to explain the survey's answers. Otherwise, much like a comment made in the MINTS reading, we may get inaccurate results that lead to incorrect recommendations.
Chapter 5, on surveys and questionnaires (Murphy), starts with the premise that theory is extremely important to research design and to the use of survey research. In keeping with this premise, it is important to recognize that theory is not just part of the introduction and discussion sections but needs to be integrated throughout the entire process. Murphy may have dropped the ball, however, by failing to explicitly connect the use of theory in the section discussing “question design” (pp. 102-105). If survey is the method, then the theory being used should be evident in the types of questions asked about the phenomenon being examined. In particular, on page 105, in the section on the ‘demarcation of educational information,’ there is no mention that theory could explain and substantiate the lines differentiating the parceling of this category.
Chapter 6, on experimental and quasi-experimental research, spoke of traditional pre-post testing, and I thought I would share a method that was new to me this year, which I utilized in a program evaluation of a national financial program. Here is a brief section of what I wrote to explain the post-then-pre test method to the board of directors.
The students were asked the knowledge, confidence, and behavior questions in a way that is described as “post-then-pre” test method (Rockwell & Kohn, 1989) in an attempt to measure behavior change more accurately. This method (post-then-pre test method) has been found to be more reliable in measuring changes after studying specific content than the more traditional pre-test/post-test method (pre-test given before studying subject matter with a post-test given at the end of the presentation of the subject matter) (Howard & Dailey, 1979; Howard, Ralph, Bulanick, Maxwell, Nance, & Gerber, 1979; Linn & Sinde, 1977). In the post-then-pre method, the students are first asked about what they learned from studying the curriculum content and after that questioning, they are then asked what their level of knowledge, confidence, or behavior was prior to studying the content of the curriculum. The primary reason for the increased reliability of the answers is that students often do not know what they do or do not know before studying the material; asking them first about what they learned serves as a foundation to indicate what it is they actually did not know or do prior to studying the content.
Thought I would share from one of the books I have read….
e-Research: Methods, Strategies, and Issues
By Terry Anderson & Heather Kanuka (2003)
Commercial e-survey products:
www.postmasterdirect.com (opt-in system)
Register with major search engines:
Paid Banner Advertising:
My first question pertains to getting some clarification about statistics for someone who’s never taken a statistics course. I understand the fundamentals and the reasoning behind statistics – you don’t have to convince me of that – this is really just a nit-picking question that has always nagged at me, and I wanted to get an answer. What I’m wondering is what level of reliability statistics based on a sample can ever really achieve – how close to 100% reliability can one get? Also, from the perspective of a statistician, is there actually (or should there be) an unstated assumption that every statistic is prefaced by the statement, “It is likely that...”?
The reason I ask is that in any given random sample, no matter how much precaution is taken to prevent otherwise, there is always the possibility, however slight, of getting a biased, inaccurate, extreme-case sample. I know that this possibility is minimized by a greater sample size: the larger the sample, the less likely a chance fluke becomes. So, for example, if you’re flipping a coin repeatedly, the presumption is that 50% of the time you’ll get heads and 50% of the time you’ll get tails. However, if you only flip the coin four times, you might get heads all four times, which would betray the 50/50 assumption (this would, of course, be attributed to the small sample size, which in this case was too small). But if you flip the coin 1000 times, it’s MUCH less likely you’ll get heads every time; you’re very likely to get, or be very close to, the 50/50 split, but the possibility of getting heads every time does still exist (or maybe it’s truly impossible to get heads every time in 1000 flips? Statistically, it would seem the possibility exists, just distantly). As quantity goes up, the likelihood of a particular extreme outcome goes down, but the outside possibility still exists. Think about Roy Sullivan, the man who was hit by lightning eight times. The likelihood of any one person being struck by lightning is very small (something around 1:700,000, I believe), and the likelihood of being struck multiple times is even smaller. But however small that likelihood is, someone was struck multiple times, and even eight times (what’s the likelihood of that?). To further illustrate: if I were to do a survey of voters for the upcoming election and I had a reasonable sample size, the sample was random, etc., it is very likely that I’ll get usable, reliable results.
But isn’t there a VERY slight chance that everyone I surveyed just happened to be only Bush supporters or only Kerry supporters? And there’s still all the gray area between the extreme and the likely result: perhaps only a slightly disproportionate showing of Bush or Kerry supporters compared to the actual national picture (I presume this is where margin of error comes into account). How does statistics account for the most extreme and unlikely samples, which, should they occur, would utterly destroy the reliability of a survey? I suppose one would just redo a survey if the results seemed way off expectations, but does statistics ever address this issue? Then again, maybe I’m just guilty here of not applying what Charney says Robert Slavin calls “educated common sense.”
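The intuitions above can actually be checked with a few lines of Python (a sketch for illustration only; the 4-flip, 1000-flip, and 1000-voter figures are just the hypothetical numbers from the discussion): the chance of an all-heads run shrinks exponentially with the number of flips, never quite reaching zero, while a poll's margin of error shrinks with the square root of the sample size.

```python
import math

def p_all_heads(n):
    """Probability that a fair coin comes up heads on every one of n flips."""
    return 0.5 ** n

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of size n (the normal approximation pollsters use)."""
    return z * math.sqrt(p * (1 - p) / n)

print(p_all_heads(4))              # 0.0625 -- quite plausible in a tiny sample
print(p_all_heads(1000))           # on the order of 1e-301 -- vanishing, but not zero
print(margin_of_error(0.5, 1000))  # roughly 0.031, i.e. about +/- 3 points
```

So an all-heads run of 1000 flips is possible in principle but has probability around 10^-301; statistics doesn't rule out the freak sample, it just quantifies how improbable it is, which is why a poll of 1000 voters is reported with a margin of error (about three points) rather than a guarantee.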
As someone who is new to designing a formal research methodology, such as how to design an effective survey, I find all of the necessary considerations a little overwhelming. Subtle differences or changes in samples, or in how questions are phrased, can easily throw off your results. And there can always be considerations that one, as the researcher, never thought to consider. It’s impossible, of course – or at the very least, highly unrealistic – to always be aware of everything. Furthermore, the world doesn’t exist in a closed, controlled system, so an awareness of every possible factor could very well be detrimental to one’s study. I’m sure it’s common for new researchers to be overly cautious and overly focused on particular aspects of survey research. This isn’t necessarily a bad thing, until it gets in the way of conducting effective research. There are so many considerations that must be taken into account in order to receive results that are reliable and pertain to the question being addressed. How would you advise a research novice on how not to become overwhelmed by all of the possible considerations and/or become too focused on the minutiae of conducting surveys as they set out to conduct their first survey? One consideration: do people often conduct pre-survey surveys, or test surveys, to see if a survey is effective in uncovering or providing the types of information that a researcher is hoping to receive?
Regarding the “Story of Subject Naught” article – the researchers said that they validated participants as Latino based on U.S. census data approximating location and race. How reliable is this method of validation? One can take many measures that are intended to weed out undesired participants, but this doesn’t provide a verifiable check in the system. So, it would seem suspect to use this information to generalize about the larger Latino community since there’s no way to be certain that participants are who they say they are – especially since the survey contains a financial incentive. Maybe an undesired participant (e.g. non-Latino, female, heterosexual) came across the study accidentally and posted it to a general chat board telling everyone they could make $20. Over the Internet, how can you be certain about the validity of the data pertaining to any of your subjects? I guess I find myself having a lot of reservations about using data from an Internet survey because there seem to be so many uncontrollable factors. Is this just worked into the assumptions in a study that there may be some invalid participation, but the vast majority will participate honestly?
I found Dr. Lay’s chapter very enlightening on what constitutes feminist research—or, as she points out, a feminist perspective on research. I always bristle at blanket generalizations of any group, especially of a group of which I’m a member, but while reading the chapter I realized that one could easily replace the references to “men” with references to “the traditional power structure,” which has been male-dominated for most of recorded history (at least in the sense that they—we, I guess—got to do the recording). Looking at the feminist approach as a perspective, rather than a method, implies that the principles can inform any critical analysis, whether gender-based or not. My discussion question asked how this could be done. It seems that only minor adaptations would be necessary, and the results would be much richer for it.
BTW: For a 19th century perspective on qualitative and quantitative analysis (actually poetry and math), see this excerpt: http://www.tc.umn.edu/~jone0850/value_of_reason.htm
According to cultural studies, truth is unstable. And because truth is unstable, Thralls and Blyler state, lasting and concrete answers to researchers' questions are not possible (p. 206). Researchers, they say, must constantly "rehistoricize or relink" what they examine as a result of shifting significances. If this is so, is it possible that feminist criticism (as a perspective) could one day be irrelevant? Or if not irrelevant, rehistoricized in a way that would be unrecognizable today?
I was thinking of this in light of Durack's piece. Will it ever be possible to level the playing field of technical communication so that gender is no longer a divisive issue? If culturally, we are able to redefine what is "technology" and "work" and "the workplace" to be more inclusive, would feminist perspective no longer be necessary? Or will gender, by its very nature, always be a valid variable?
One of the interesting topics we discussed yesterday was the distinction among method (a system of collecting data), methodology (a particular way of interpreting data), research goals and objectives, theory, and perspective. That feminist theory is a perspective rather than a method seems to make sense, especially in light of Dr. Lay's discussion of how feminist research "informs" a method.
Also interesting was the discussion of Dickson regarding what constitutes data. For example, we talked about how Dickson, though her data isn't "traditional," uses data that appears to help her defend her perspective.
Given the above, and given our discussions over the past few weeks, the notions of what constitutes "research" seem to have become more tenuous--more ambiguous and less well defined, at least in the realm of qualitative research.
Furthermore, as I mention in my question below, there seems to be some thought that non-traditional "things" are less likely to establish themselves within a dominant cultural frame. Therefore, my questions for this week are as follows:
1. Though qualitative research has certainly established itself as legitimate in arts and humanities research, do you think it will ever establish itself as credible on the same level as qualitative research in social science and natural science disciplines? Does qualitative research need to do so? Will we forever be justifying (even within larger academic communities, institution-wide) what we do as research as being on par with the research being done in social science or natural science disciplines?
2. In my Communication in Human Organizations course, we discussed feminist organizational structure—companies organized around principles of egalitarian, participatory, nonhierarchical, collectivist management philosophy. However, one of the criticisms of feminist organizational structures is that they cannot sustain themselves because of pressure from dominant cultural expectations; furthermore, feminist organizations may not be seen as “profitable” by more traditional organizations (e.g., banks who loan them money) and may therefore have a more difficult time succeeding. Because the characteristics of feminist organizational structure seem consistent with those discussed by Lay regarding feminist research perspectives, are scholars who approach research from a feminist perspective faced with similar criticisms (e.g., publishability/profitability)?
1. In what specific ways can we as researchers avoid falling into the role of paternalistic, patronizing expert toward our research participants? And how do we do emancipatory research that includes our participants as subject experts but is still research that is structured and valid?
I am passing on a fairly good -- sophisticated, brief, and (best of all) free -- online guide to critical/cultural theory created by Dino Felluga at Purdue.
He does an excellent job of tracing the development of such schools of thought as postmodernism and new historicism (often cultural studies in disguise) as well as offering brief introductions to some of the major figures.
As a recent refugee from the hard sciences, where, at least in my department, researchers paid lip service to postpositivism and then went on their positivist way, I was struck again this week with the theoretical difficulties in elucidating a reasonable role for the researcher. Are the objective “facts”—whether historical or physical—“out there” and as researchers do we believe it is possible to apprehend these facts in the course of research? In thinking about these questions as I read, I found that all the authors seemed to want it both ways. Schiappa and Kynell and Seeley, postpositivist (and therefore modernist) in their belief that historical facts exist “out there,” although our interpretation of those facts will always be situated and incomplete, seem to want postmodern cred on the academic street as well. This is analogous to my positivist physics professors’ lip service to postpositivism. Meanwhile, the postmodernists (Poulakos and Tierney) deny that objective facts exist, period… except when they need them to prove their point.
On the whole, the positions of Schiappa and Kynell and Seeley (which I acknowledge I’ve reduced simplistically to the ill-defined moniker “modernist”) seem more balanced. My problem is not with their belief in the existence of objective facts, nor in how they situate the researcher of these facts, but instead in their claim that both modern and postmodern inquiry can occur separately yet in parallel in the same research program. On page 196, Schiappa states this claim succinctly: “I am not suggesting that historical reconstruction should be done to the exclusion of rational reconstruction. With Rorty, I believe that both ought to be done, but done ‘separately’ (1984, p. 49)” (Schiappa 196). Unlike Schiappa, Kynell and Seeley do not explicitly attempt to combine modern and postmodern agendas. However, their advice to TC researchers, following the examples of historians and social scientists, to eclectically and pragmatically “borrow” (73) methodologies does not problematize the mixing of multiple modern and postmodern theoretical perspectives.
Both Schiappa’s and Kynell and Seeley’s unproblematized approaches to multitheoretical research seem to me to be a mistaken attempt to reconcile the hip theoretical frameworks of their postmodern critics with their own modernist stances. The TC researcher or rhetorician attempting to investigate historical events will primarily be doing so through texts—the trade journals, historical records, and textbooks Kynell and Seeley discuss. Poststructuralism (which comprises one part of the postmodernist program) denies the existence of a “center” in any text. Knowledge is not merely mutable or imperfect to researchers working within this theoretical paradigm; true knowledge simply does not exist. Reconciling these two ontologies within the same research program commits a logical fallacy, because the ontological categories of each are mutually exclusive. You simply cannot have a coherent research program that simultaneously admits and denies the existence of any objective reality.
In spite of my critique of “the modernists” for their greediness for analytical riches, I think the postmodernists’ appropriation of the factual assets of the modernists deserves more censure. Poulakos condemns Schiappa for being “under the illusion that a value-neutral description of the facts, prior to their interpretation or analysis, is possible” (p. 220, though the quote is from Hayden White). Yet Poulakos then describes at length how his own corpus search for declensions of the Greek rhetorike yields pre-4th century instances of the word, thus refuting Schiappa’s argument. Poulakos believes in this case that his “facts” (pre-4th century instances of rhetorike) about the Greek corpus are objective and his data reproducible using the computer equipment he specifies. If his data were indeed correct (which, as we learned in class, they unfortunately were not), Poulakos would make a finer social scientist than postmodernist.
Tierney makes an error that I’m considering here to be analogous to Poulakos’ and symptomatic of what I called in class postmodernist cake (or fact) eating. As Kenny pointed out in class and in his post, Tierney deploys his postmodernist stance in contradictory fashion when it comes to Lewis’s sexual identity. While acknowledging on page 294 that “gay” means something different in the 20th century than in the 19th (and it is debatable whether 20th century “gay” even existed in 1809), Tierney repeatedly applies the description “gay” to Lewis (pp. 294, 309, 313, 314). Tierney’s use of “gay” as a stable, empirical category strikes me not as a deconstruction and decolonization of essentialized categories, but as an artifact of modernist thinking. To make his postmodern argument, Tierney must find recourse in an “objective” truth (or at least an objective sexual category). However emancipatory or destabilizing his intentions are, Tierney finds it necessary at times to behave like a modernist.