I’m spending this quarter reading senior theses: five drafts each of 13 theses. None of these students are working for me—I’m just running the class in which they are trying to convert what they’ve done for the past year into something resembling a thesis. About half the class had not written anything on their projects before taking this senior-thesis seminar—a serious dereliction of duty on the part of their faculty supervisors, who should have been requiring a draft at the end of each quarter of work.
I meet with each student weekly (for about half an hour, though I seem to run over more often than not) in addition to the 1 ¾ hour weekly class meeting and all the reading and scrawling on drafts. I’m currently spending over 14 hours a week on this 2-unit course (my light teaching quarter, as I had 8 units in the Fall and 7 in the Winter), and the amount of time will probably go up as the drafts get longer and more complete.
I know that a lot of MOOC-proponents are pushing automatic grading of papers as a cost-effective way to handle classes with over 1000 students. Quite frankly, the idea appalls me—I can’t see any way that computer programs could provide anything like useful feedback to students on any sort of writing above the 1st-grade level. Even spelling checkers (which I insist on students using) do a terrible job, and what passes for grammar checking is ludicrous nonsense. And spelling and grammar are just the minor surface problems, where the computer has some hope of providing non-negative advice. But the feedback I’m providing covers lots of other things like the structure of the document, audience assessment, ordering of ideas, flow of sentences within a paragraph, proper topic sentences, design of graphical representation of data, feedback on citations, even suggestions on experiments to try—none of which would be remotely feasible with the very best of artificial intelligence available in the next 10 years.
Providing good feedback on the student theses requires a good understanding of what the students are talking about (which I have gotten mainly from hearing years of research talks by their supervisors, since none are working on subjects within my areas of expertise) plus an understanding of what makes good technical writing. Either one without the other is nearly useless, which is why students who worked on their thesis drafts as part of a tech writing course last quarter are not much better off than those who didn’t—the tech writing instructor knew none of the content, and so could not see when the ideas were in the wrong order, misstated, or otherwise badly presented. Misuse of jargon and incorrect presentation of data were also missed. The main advantage for the students who wrote a draft for the tech writing course is that they have a more complete draft to start from, with a few of the surface errors already removed.
If even expert tech writing instructors with decades of experience can’t produce good enough feedback on student writing, what hope is there that automated programs can do anything useful?
5. I don’t know a single instructor of writing who enjoys grading.
6. At the same time, the only way, and I mean the only way, to develop a relationship with one’s students is to read and respond to their work. Automated grading is supposed to “free” the instructor for other tasks, except there is no more important task. Grading writing, while time-consuming and occasionally unpleasant, is simply the price of doing business.
7. The only motivations for even experimenting [with], let alone embracing, automated grading of student writing are business-related.
12. The second most misguided statement in the New York Times article covering the EdX announcement is this from Anant Agarwal: “There is a huge value in learning with instant feedback. Students are telling us they learn much better with instant feedback.” This statement is misguided because instant feedback immediately followed by additional student attempts is actually antithetical to everything we know about the writing process. Good writing is almost always the product of reflection and revision. The feedback must be processed, and only then can it be implemented. Writing is not a video game.
14. The most misguided statement in the Times article is from Daphne Koller, the founder of Coursera: “It allows students to get immediate feedback on their work, so that learning turns into a game, with students naturally gravitating toward resubmitting the work until they get it right.”
15. I’m sorry, that’s not misguided, it’s just silly.
…22. The purpose of writing is to communicate with an audience. In good conscience, we cannot ask students to write something that will not be read. If we cross this threshold, we may as well simply give up on education. I know that I won’t be involved. Let the software “talk” to software. Leave me out of it.
I pulled out the points that resonated most for me, but I recommend reading the whole of John Warner’s post.