September 22, 2004

Charlotte Week 2 Questions

(Excuse duplicate postings, but my post didn't appear to me, so I'm assuming others didn't see it either. :( )
My questions targeted the practicality of mixing methods, especially triangulating methods. As I asked in class, I wondered whether triangulating methods really just means using three different methods and creating three different studies (that is, work-load wise, do you use a combination of all three, or do you essentially do three separate studies?).

My thoughts on triangulating are in the extended entry.

I've played with the idea of triangulating methods for my thesis. I'm currently in the prospectus phase of my thesis; my goal is to have the prospectus done by the end of this class. As a topic, I'm looking at studying AIDS collaboratories. Collaboratories are "online" laboratories (online in the sense that the data is exchanged online but, to my understanding, not stored there) that allow people in different locations, who might normally not be able to collaborate in a very productive way (sending findings via mail, re-entering data, translating, etc.), to exchange and compile data. In AIDS research, for example, researchers in the United States working under the umbrella of a pharmaceutical company's money could collaborate with researchers in, say, Madagascar, where the AIDS/non-AIDS ratio is 1:8. In this case, data can be collected much faster in an "affected" population.
My concern here is naturally not only with the results of this collaboration, such as the "data mix," but also with what this could mean for scientists' individual careers, as indicated by resulting reports, scientific papers, patents, and notoriety in general. Finally, what do the scientists themselves think? Do they believe they are participating in "equal" collaboration? Does one kind of data rank higher in importance/relevance than the other (e.g., information regarding sexual behavior vs. drug research), and how does the weight of each kind of data affect the resulting benefit (such as free or reduced-price AIDS therapy for the affected population or country participating, or monetary rewards for the researchers developing the drug)?
Naturally, there are far too many questions to fit within the scope of one thesis, which is why this could easily become a dissertation.
If I take a starting research question of "How do scientists weigh the importance of each other's work in the X online collaboratory?", I could easily follow up with questions related to data flow/mix, analysis of resulting reports, and interviews/observation with the scientists involved.
In this example, it would be useful to do a quantitative analysis of how much data comes from each side and the type of data collected. However, it would also be useful to do a qualitative textual analysis of resulting reports, such as published scientific journal papers and tracing which information comes from each location. Finally, it would be helpful to note the attitudes of the scientists involved by observing and interviewing (potentially on-site)--an ethnography.
Is it wise to try to triangulate for a thesis? I've heard from some people in our department that it's too much for a thesis, but if the work is largely background (i.e. only the salient information makes it into the thesis), wouldn't triangulating work?

Posted by tsch0070 at 12:55 PM

September 21, 2004

Paula's Wk 2 Questions

Paula Lentz
RHET 8011 Week 2
Discussion Questions

1. Do the additional complexities involved in conducting research over the internet mean that IRB approval is more difficult to get?

2. As we learned in the readings, IRB approval is denied when research projects are structured inappropriately. In these instances researchers can simply restructure their designs to ensure that human subjects are treated fairly. However, are there certain topics in technical communication that would not earn IRB approval? Can you think of what these topics would be? And if so, does this mean that there is certain knowledge that is just “off limits”—knowledge that will never be discovered or constructed?

3. One of the main issues in this week's reading is the legitimacy of qualitative research approaches compared to quantitative research. Do you think researchers who engage in qualitative methodologies do the field a disservice by trying to address such issues as validity? For instance, Creswell states that validity, in qualitative research, is more accurately thought of in terms of credibility. If this is so, why even try to borrow the term "validity" from the quantitative realm and force a qualitative definition on it? Does doing so raise the suspicions of quantitative researchers who may see this as a means of legitimizing a "weaker" methodology? Would a better strategy for promoting qualitative research be to establish it as a distinct methodology with its own standards for rigor rather than to try to relate it to quantitative research by adapting its (quantitative) terminology to qualitative strategies?

Posted by lent0064 at 10:43 AM

September 20, 2004

Walt's comments on week 2

The main discussions in class in week 2 (qualitative/quantitative research and the IRB) showed our need as researchers to understand not only our fields of inquiry but also the nature of research itself. In the first discussion, we questioned the division between qualitative and quantitative research: is it actually a division, or is it a spectrum? The idea of a spectrum could apply to the second discussion as well.

The positivist in us all (a brash generalization, I admit) would like a simple distinction between qualitative and quantitative research or between research that should have IRB approval and that for which the process is ridiculous. The authors try to create this distinction by contrasting extreme examples. For example, Shea shocks the reader with a description of Nazi experiments, and then points out the ridiculousness of making historians submit to extensive IRB reviews. Historians (here’s another generalization) study the words of dead people, so it’s hard to imagine the harm they might do to their subjects.

Nevertheless, as scholars of rhetoric, we fall somewhere between these extremes. We certainly don't expose our subjects to mortal danger, but neither is our research entirely innocuous. Most of our subjects (again I generalize) are living, and as such have certain rights (for example, to expect privacy or profit) that our research could potentially compromise. The value of these rights must be balanced against the value of the knowledge our research will create, and we cannot credibly say that a self-evaluation would be unprejudiced.

Lee-Ann commented that she views the IRB process as a heuristic that helps her define her research. She is able to do this because she understands the process, not only how it restricts research but also how it helps. Only by thoroughly understanding the issues involved can we succeed in our research now and effect change over the long term, as more social scientists join IRBs.

Posted by jone0850 at 11:40 AM

September 17, 2004

Skeptical, Amazed, and Delighted

After two weeks of extensive reading and discussion in research methodology, I find myself struggling with the breadth of approaches one can take. It's exciting to know that so much along the methodological spectrum is open to STC research, and that the clarity of the research question, rather than the expectations of a 'community,' illuminates which method to use. While I am inspired by the possibilities inherent in the interpretive, holistic nature of qualitative methods, I feel a deeply rooted tug back to the quantitative end of the spectrum. The issue of audience (specifically scientific, technical or business audiences) and my concern over which research methods they value pulls me toward the perceived safety of reliability, validity and generalizability. Doesn't research make a more useful contribution (pragmaticism!) if it is reliable and can be generalized beyond the specific sample or situation studied? Perhaps my struggle with the 'usefulness' of wholly qualitative research stems from the emphasis on quantitative method and statistical accuracy during my Master's thesis in 1990, or perhaps it stems from the fact that I am married to a Ph.D. polymer chemist whose career has been steeped in the scientific method! No matter the reason, I am beginning to realize that I am at once skeptical, amazed, and delighted by the research possibilities in STC. I look forward to discovering the true richness of it all!

Posted by coggi007 at 9:55 PM

September 14, 2004

Week 2 Questions/Reflections - Mike

Question/Reflection #1
In regard to fears surrounding IRBs and the implication that they limit academic freedom, there seems to be some misunderstanding between the two sides, or maybe better stated, an incomplete understanding of the context from which each side is coming. IRBs, as I understand them, are really set up to protect human subjects, and from the point of view of IRB advocates, it seems perfectly logical to subject all human-subject research proposals to some level of scrutiny in order to provide that protection and regulation. I can see from the critics' side, though, why a regulating force could seem too authoritative and limiting to their work. However, I believe that the issue people have with IRBs is not fear of or opposition to the purpose of the IRB, per se; it is rather with the overall idea of a regulatory board in general. Once in place, the power could be abused, as power so often is, to control research, and cases of abuse will likely arise eventually. This is similar to fears surrounding the Patriot Act – civil liberties are surrendered, supposedly in the name of safety and good intention (I will leave my deeply held opposition to and criticisms of the Patriot Act aside and simply note the similar fear of abuse of authority).

But it is important that the critics also step back and look first at the purpose and intention of IRBs, and second at the academic community and peers doing the regulating, and grant at least some leeway and benefit of the doubt that the power will generally be used correctly and as it is intended to be used – because that overall intention is a good one, and I think it furthers the common goals and integrity of the academic community as a whole. What is needed, I think, to assuage the fears of the critics is a well-defined system of checks against the IRB system – a place to petition, and an authority that can overrule an IRB in cases where abuse of authority is suspected. The AAUP report discusses this need in its section on "Opportunity to Appeal an IRB Decision." There seems to be agreement that such a body needs to be put into place, and some procedures do exist in some circumstances, but overall the report's assessment seems to be that appeal opportunities are, as yet, poorly defined, not adequately in place, and, as currently conceived, place too much burden on the organizations housing the IRBs.

It would seem to me that the answer may lie in a joint commission of some federal committee in conjunction with professional organizations – such as the American Historical Association for historians – which, by peer review and approval by a certain majority among those most familiar with the subject being studied, could possibly overrule IRBs at a national level. The American Bar Association does engage in at least some of this type of self-regulation. Such a system would redistribute some of the power away from federal authorities. Of course, this would likely require some acquiescence and troubling politics at the university level. So what is to be done about an appeals process? Issues will inevitably arise, and this seems to be a major one.

Question/Reflection #2
In the AAUP report, there is a reference to a discussion that occurred in the Journalism School at UNC about concerns that IRB review of information gathering for news articles violates freedom-of-the-press rights under the First Amendment. It is stated, "The faculty, the students, and the IRB agreed that most news stories do not contribute to generalizable knowledge and therefore are not subject to IRB." This issue of journalists being exempt from IRB regulation because of First Amendment rights came up elsewhere as well. I would agree that the work of journalists should not be subject to IRB review; however, I would strongly disagree that news stories do not contribute to general knowledge. In fact, there is much general knowledge that is really only received and/or documented through journalistic reporting (the reasons for this are plenty, and I will not try to list them here). This then raises a fuzzy area for me about where to draw the line with IRB review. Perhaps journalists are unique, or perhaps there are studies in other disciplines that should receive some greater exemption as well. And what about the journalist from the Shea article who, while working, can interview all the subjects he wants, but once he's at school is regulated? There is clearly some inconsistency here, and how are we to resolve it? Can IRB review go too far in the sense of where its authority ventures?

Posted by bank0019 at 3:09 PM

Week 2 Questions: Endless Ineffable Narrative?

In their Introduction, Denzin and Lincoln describe how montage may be used as an alternative to validation in research projects: "Readers & audiences are then invited to explore competing visions of the context, to become immersed in and merge with new realities to comprehend." If everyone brings their own paradigm and interacts with the text uniquely, how is intersubjectivity between researchers and their subjects of inquiry achieved? How are research results to be communicated from one researcher to another, including among members of a research team? Given the assumed inherent ineffability of each person's lived experience (whether researcher/performance artist or participant/unemancipated other), will the same research outcomes be apprehended by all parties? How will consensus be reached even about questions of when the research program is complete?

Denzin and Lincoln anticipate my reservations on page 17 in their discussion of criticisms of the rhetorical turn. “Realists and postpositivists within the interpretive qualitative research tradition criticize poststructuralists for taking the textual, narrative turn. These critics contend that such work is navel gazing. It produces conditions ‘for a dialogue of the deaf between itself and the community’ (Silverman, 1997).” To me, more dangerous still is the implication that the “navel-gazing” of qualitative researchers may be self-propagating through the ranks of the academy. As new graduate students enter the field and form their own continuously interactive, intersubjective relationships with faculty members, each student is schooled in the particular subjectivities of their advisor. Disciplines fragment into ever-smaller shards of specialization and “navel-gazing” as each research group develops its own incommunicable webs of context. Disclaimer: I’m playing devil’s advocate here and do not buy into postpositivist worst-case scenarios.

Posted by nyssa003 at 2:50 PM

Influencing the IRB and Research Journals

1) As Barbara already noted, many of the specific examples show the frustration, irritation, and inconvenience that researchers feel about having to get approval from the IRB. These researchers believe that their research will have no risks for the subjects and that IRBs and the public should trust them. The researchers do not grant that same level of trust to the IRBs—accusing them of controlling the agenda of knowledge or stifling truth. I tend to agree with Hicks (from Duke University), who was quoted as saying that "there's a way to do just about any research and follow the guidelines." Wilmouth (the researcher who interviewed the old man) could have been someone with nefarious intentions; his subject was vulnerable. I am not willing to trust researchers, even myself, to protect subjects all on their own.

IRBs generally include representatives from throughout the organization, so these individuals should be able to carry the concerns of their field to the IRB and influence decisions about research and requirements. What message would we as researchers communicate to our IRB representative? What conversations do we think IRB members should be having that they are not?

2) The idea of keeping a research journal (Breuch, Olson, and Frantz) is interesting and would give a richer description of the process. However, I have a hard time believing that researchers would follow through with keeping notes on their attitudes in addition to their field notes, one reason being the time such journaling would take. Given the resistance there is to getting approval, are researchers—are you—willing to record all attitudes that come up during a research project?

Posted by renda003 at 1:17 PM

September 13, 2004

Crisis of representation

Discussion Question

What would we find if we departed from objectivity-through-generalization and investigated qualitative research in a specific context to see if it holds up to the tests of validity, reliability, and objectivity - three concepts that Denzin and Lincoln suggest represent a crisis of representation in the field of qualitative research? Let's consider a qualitatively ethnographic research field like journalism - specifically, war correspondents covering Iraq.

Journalists dispatched to foreign lands to cover wars can be thought of as field-worker ethnographers who have remarkable claims to authority. War correspondents from the NY Times, the BBC, and the Washington Post, for example, are regularly relied upon for trustworthy, legitimate, and impartial information on the day-to-day developments in Iraq. The world of lived experience is captured by the journalist/ethnographer in a way that empirical science and reason never could capture it. The montage created by the political bricoleur-cum-journalist supersedes nearly all of empirical science; thus, in this case, the journalist/ethnographer's political, rhetorical, and social authority far surpasses that of the empiricist.

The vibrancy of our free democracy and the direction of public policy depend on the journalist/ethnographer recording their personal experiences within any given social context - capturing the lived experience of a soldier at war for example. Does the journalist/ethnographer, then, answer the representation and legitimation crisis of qualitative field research?

We should also examine how technology adds to the complexity of this issue. With every new technology - near-instantaneous electronic communication, internet web sites, personal web-logs, satellite telephones, and the like - the validity, reliability, and objectivity of the journalist/ethnographer become more immediate. The "reality" or "lived experience" of the journalist/ethnographer can be validated reliably and objectively by, say, a live video feed direct from the battlefield, where the journalist narrates the action while the camera broadcasts the images. The real-time imagery and narratives are instantly available for the world to see, and their legitimacy and fairness can be judged immediately. There is no time for "filework" or field-note review; the documentation is broadcast as it is discovered by the journalist, and thus the political interpretation is subject only to the location of the researcher and the specific images framed by the camera. What could the positivist or empiricist researcher add to make such an ethnographic scenario significantly more valid, reliable, objective, or vivid? Is there another, or better, example that responds to the crisis of representation in qualitative research?

Anthony

Posted by arrig002 at 10:46 PM

Arguments for qualitative and quantitative methods

Here are my questions and thoughts on the readings for this week. I'm sure I'm not the only one who found the Charney article and the Janesick chapter in Denzin and Lincoln the most thought-provoking of the readings.

1) I enjoyed both articles, but I found myself wondering if Janesick isn't doing exactly what Charney criticizes researchers in the humanities and social sciences for doing. Despite this criticism of Janesick's view of quantitative research, I believe her description of qualitative research is well made, and the dance metaphor is effective. I found her description of qualitative research more complete and nuanced than Creswell's. I am not so sure of her final reasons for optimism about the growth of qualitative research (p. 75). Some of these seem a bit naïve and convoluted, e.g., her belief that reduced governmental funding for research will force people to turn to qualitative research (presumably equated with cheaper research). This assumption seems to underestimate the complexity of the funding machinery; in times of budget cuts, we seem to hear more about the arts and humanities being cut than the quantitative research fields.

2) I had the hardest time with the Charney article. Again, as with the Janesick article, I concur with her basic theses—that all types of research should be considered in the social sciences and humanities if we want to answer questions in the best way, and that the researchers must not be confused with the research (objective methods do not require that the researchers themselves be objective). I appreciate her call to rethink and overhaul current ways of thinking within the humanities:
"is it fair to hold scientists responsible because we did not appreciate the rhetoric of their discourse better than they did? We are supposedly the ones skilled in discourse analysis and steeped in rhetorical theory. But if we now dismiss objective methods as irrelevant or as opposed to the social functioning of scientific disciplines we will again be misconstruing the case" (p. 581).
However, I believe that in her criticism of critics of quantitative research, Charney falters by giving too much credit to scientists. It is only in passing that she acknowledges that most scientists "are not as self-conscious of their methods as they should be" (p. 591), suggesting that understanding scientific practice can excuse ideology. She wishes to separate ideology from practice, but I wonder how possible this really is: to do one type of research, be it quantitative or qualitative, is to buy into its ideological myth. Even she admits that scientists buy into this myth: "many scientists say and even believe, that their discourse is free of argument and interpretation" (p. 580). She's asking humanists and social scientists to change their ideologies and perspectives, but doesn't make a strong enough case for scientists to do the same.

3) I found all the readings on the IRBs and their role in social sciences illuminating. I’m considering doing interviews for my dissertation and it’s good to get a sense of what I might need to think about when working with IRBs. I did notice that most of the articles date from the 1990s and 2000. I’m interested to know where the debate is now in 2004. I am sure Lee-Ann will expand on this.

4) In Creswell I was confused by what exactly an instrument was in survey research. An example would be great.

--Salma

Posted by mona0046 at 9:52 PM

Heather

Our first week's readings were a good foundation for me to understand a discipline of which I am not a member. There was a feeling of relief to see another field grapple within itself about its identity and boundaries. This seems constant in my field of family science, and especially for me as an individual beginning to look at a segment of families and technology...talk about interdisciplinary! Family science is a meshing of disciplines (historically), technology is being researched from multiple disciplines, and the convergence of studying the two together is sometimes "gooey".

Just some of the questions that came to me ..

If you did open-ended interviews from a phenomenological perspective, could you also develop a grounded theory from those same interviews?
How does a researcher reconcile their personal philosophy of research with their field's paradigm when the two are vastly different?

This is my first time blogging and I am amazed at how time consuming it is to read everyone's postings. However, I wouldn't want to miss anything either.

Posted by habe0076 at 6:58 PM

Week 2: Questions of Ethical Methods (Kenny)

One of Janesick's major discussions of ethical concerns emerges from her rejection of objectivist assumptions and her call for the researcher's analysis of his/her own ideological assumptions and complex subject positions (56ff). This move obviously constructs ethical qualitative research as an anti-objectivist production of local, particularized knowledge. Charney, in both her defense of quantitative methods, which by the end of the article she conflates with objectivity (589), and her critique of qualitative methods, agrees with Janesick's assumptions without sharing her enthusiasm. Charney questions the use of a research method that unapologetically favors un-generalizable results (588; Janesick praises this aspect of new qualitative research, 70).

Do we take Janesick's and others' position that there is what we might call a tyranny of generalizability in academic research methods? If so, does that mean we should only value subjectivist, qualitative research? Or do we agree with Charney's point and find useful only studies that seek to redefine and increase objectivity and generalizability? For example, what does one usefully draw from ethnographic studies of workplace/industrial/scientific settings if not some sense of, or wish for, generalizability? Could the communal critique/public scrutiny that Charney sees as invaluable to the objectivity of quantitative research methods be understood as an example of what Janesick calls methodolatry (Charney 577; Janesick 64)? What does Charney's discussion imply about historical methods, which, through interpretation, retrace and retell one representation of the past? Is this merely an ideological impasse that we, as grad students, will simply have to maneuver around by acquiescing?

Considering that IRBs are concerned with the oversight of research that posits generalizable results (Shea), what part do they play in this debate? Most journalists see their work as irrelevant to the debate and are often free from IRB governance, yet many ethnographers make the same argument. Should generalizability be a marker of IRB involvement? Can't journalists interact with human subjects in a way that most would consider unethical? And if the exemption is merely a matter of First Amendment rights, couldn't that be the case for other academic disciplines?

How much of the onus should reside with the actual academic field in whose name this research is conducted? Can the disciplines ethically "regulate" themselves, given that the CCC statement and the article by Kastman-Breuch et al. imply that many people in our field still have questions about ethical practices?

Finally, is there, in fact, such a thing as an inherently ethical or unethical research methodology? Or, as the old adage about the devil being in the details would have it, is ethical practice a question of how a researcher conceives, creates, proceeds with, and "writes up" a research study?

Posted by fount012 at 6:49 PM

Week 2 Questions: Qualifying the IRB

My questions focus on contemplating the place of the IRB within the Academy:

  • The IRB seems to position itself as the keeper of the University's ethos. Is such a thing really feasible? Do all colleges and/or departments impart a uniform ethos that can be centrally governed by the University?
  • Does the IRB procedure subvert the spirit of the tenure system? Does it place the University in a position of power it wasn't originally accorded?
  • If the answer is yes, is this the price we have to pay to preserve the Academy within a capitalist, litigious society? Does any of this have to do with protecting human subjects so much as with avoiding litigation?

Posted by kenne329 at 2:47 PM