In "Responding to Student Writing," Nancy Sommers examines what she calls the "most widely used," yet "least understood" method for responding to student writing. Sommers and her colleagues, Lil Brannon and Cyril Knoblauch, researched the commenting styles of 35 teachers at New York University and the University of Oklahoma. Their findings show that the teachers' comments were "arbitrary and idiosyncratic" and that they did little to encourage better writing.
One of the problems Sommers identifies is that teachers often provide only vague, "rubber-stamped" advice that doesn't vary according to purpose or to the student's stage in the drafting process. Many students receive mixed messages from their instructors as well. For instance, in the sample paragraphs Sommers cites, the teachers have marked many sentence-level errors while at the same time suggesting the student writer rethink the entire paragraph.
Sommers recommends that, for teachers' comments to help students become better writers, those comments must be thoughtful and specific to the student's own text. She also recommends that teachers comment differently on early drafts than on later ones, so that students see that revising their content must come before editing and polishing.
Though I don't doubt Sommers' conclusion that we need to offer our students better writing advice, advice specific to the text and to the stage in the writing process, I do have concerns about how she reached that conclusion. Comparing teacher comments with a computer's comments and concluding that the computer offers better advice in a more reasonable way seems highly problematic to me.
As essays are added to standardized tests required to receive AP/IB credit, graduate from high school, and get into college, machine scoring is becoming more and more attractive. It is an efficient, cost-effective way to handle the paper load. ETS claims that its machine scoring of papers is highly sophisticated and reliable. Yet how can a computer replace a human reader? How can it comment on the "big-picture" concerns that Sommers wants us to encourage students to wrestle with? And how can we expect students to produce authentic writing when they know it will be graded by a formula programmed into a machine?

Posted by gust0124 at April 18, 2005 2:51 PM