Gas station without pumps

2012 May 31

Programming and writing: two fundamentals

Filed under: Uncategorized — gasstationwithoutpumps @ 14:15

In a comment on Mark Guzdial’s blog (Women leave academia more than men, but greater need to change in computing « Computing Education Blog),  Laurissa commented

I’ve always been afraid that technology and computer education will follow in the footsteps of writing instruction. Communication and computer literacy are both necessary skills in today’s culture and job economy. How can we expect students to learn either if universities require one (maybe 2) intro level courses during freshman year then never teach students how to build upon those skills, show them how how valuable those skills, and actually give them opportunities to apply their skills AFTER they leave those initial classes. Computing–in the same way as communication–can’t the be taught in isolation from other disciplines. They’re life-long learning skills that students NEED if they’re going to succeed once they leave the university.

Although she is an English major who took some computer science and I am a computer science PhD who taught tech writing for over a decade, we have similar views on the similarity of writing and computer programming.  Both are essential skills that require clarity of thought and are expensive to teach well.  Both are being short-changed in colleges and universities, because they require labor-intensive feedback to the students from highly skilled practitioners.

There is a strong temptation to throw the problem over the fence to a small group of experts (writing instructors or computer science lecturers) teaching first-year classes.  That happened to writing instruction in most universities over the past two decades, with the result that students in most majors write very few papers after their freshman year, and almost never get detailed feedback on them. It is happening in computer science also, except that the freshman CS courses already provide no feedback on programming style other than whether things compile and work on a few test cases.  (That’s like checking English papers for word count, word length, and sentence length, but not for content—sort of what scoring of SAT essays is like.)
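To make the comparison concrete, here is a minimal sketch (not from any actual course; the function names and test cases are hypothetical) of what test-case-only autograding amounts to: the submission is judged solely on I/O behavior, so a clear solution and an opaque one earn identical scores.

```python
# Minimal sketch of test-case-only "autograding": the student's code is
# judged solely on I/O behavior, with no feedback on style or clarity.

def autograde(student_fn, test_cases):
    """Run student_fn on each (args, expected) pair; report pass/fail counts only."""
    passed = 0
    for args, expected in test_cases:
        try:
            result = student_fn(*args)
        except Exception:
            result = None  # a crash simply counts as a failed test
        if result == expected:
            passed += 1
    return passed, len(test_cases)

# Hypothetical student submission: correct output, but the autograder
# cannot distinguish it from a clearly written, well-commented solution.
def student_sort(lst):
    return sorted(lst)

score = autograde(student_sort, [(([3, 1, 2],), [1, 2, 3]),
                                 (([],), [])])
print(score)  # (2, 2): full marks, zero comments on style
```

Nothing in this loop looks at the source text at all, which is the point of the SAT-essay analogy above.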


  1. Wait a second! I provide extensive feedback to students on programming style in the freshman CS course, and so do most other professors that I know. Even back when I was a TA for one of those huge intro courses at a big university, we provided feedback – we TAs were all trained in how to do this and given a rubric for each assignment.

    Comment by Bonnie — 2012 May 31 @ 15:28 | Reply

  2. As a professor who has managed large classes with a small army of TAs (e.g., 15 at once), my bet is that only a few TAs gave comments as well as you did, Bonnie. Good comments come from a rich background of knowledge and experience. Few of our TAs had that background, and their comments showed it.

    Comment by Mark Guzdial — 2012 June 1 @ 06:20 | Reply

    • That is why we were given grading rubrics and examples of good style by the professor running the course. We also had a weekly TA meeting with him and with the TA coordinator to go over these things. I think we did a good job, and that approach has always stuck with me.

      Comment by Bonnie — 2012 June 1 @ 09:40 | Reply

  3. I believe that some faculty and TAs at some schools give extensive feedback, and some give very little. The push to have enormous online classes in CS is almost certain to be accompanied by further erosion of this sort of feedback.

    The students I’ve had in my first-year grad class come from all over the country, and their experiences with feedback on their programs have varied enormously. Of course, it is possible that they had been given feedback and not bothered to read it, but these were diligent students, and I find it more likely that many had been given little feedback, and that of low quality.

    Comment by gasstationwithoutpumps — 2012 June 1 @ 09:59 | Reply

  4. Well, for an English major and writing instructor, I did a poor job re-reading my comment on Dr. Guzdial’s blog before I submitted it, now didn’t I? Rather embarrassing. I need to realize my fingers can’t always keep up with my brain when I’m typing.

    Leaving quality comments on student work is so important, but it is time-intensive and mentally exhausting–especially when you’re trying to make sure students aren’t discouraged. I haven’t taught programming, but I’m guessing it shares a few similarities with teaching writing. It’s one thing to point out a mistake and another thing to explain WHY it’s a problem in a way that motivates students to make a conscious effort not to make the same type of mistake moving forward.

    There’s also a similar concern among faculty in English departments about online classes “eroding” instructor feedback–which is one of the reasons why online writing courses are met with such strong resistance.

    Comment by Laurissa — 2012 June 2 @ 06:21 | Reply

  5. Note: Robert Talbert has just pointed out that “Most Udacity courses these days just use the final exam [for providing certification].” Although Udacity is teaming up with Pearson to provide proctored exams (for a price, of course), the reliance on testing as the only measure of student performance means (to me) that their certification will tell me nothing about student programming ability, which is not really measurable on a short-duration test.

    Comment by gasstationwithoutpumps — 2012 June 2 @ 10:35 | Reply

  6. I think that autograding is slightly better than checking metrics to grade essays. It’s more like a rather limited form of software testing. Even though it’s limited, though, it probably compares favourably to how a lot of software gets tested in the real world. I do agree that autograding does not evaluate clarity of thought, nor taste, which are both important.

    I think that there are differences between grading writing and grading code. The key difference is that code actually has to work, so evaluating it can be much less subjective than evaluating writing.

    I wonder if students are more likely to read comments from a computer than from their instructor or TA. I believe that it’s well-known that a lot of instructor feedback gets ignored. I provide feedback (on writing, not code), but it’s more for my own use in establishing a mark. Part of the issue, I think, is that people are fairly sensitive about being criticized by other people, and that sensitivity may not carry over to being criticized by machines.

    I do think that we can do better than autograding-by-testcase. However, I don’t know how to evaluate the quality of autogenerated comments.

    Comment by plam — 2012 June 4 @ 19:57 | Reply

    • Autograding programs by I/O behavior is a marginally adequate compromise. Rarely do these autograders provide detailed feedback. It is not much like QA testing in industry, where the programmer is given all the tests that failed and the QA staff continually look for new test cases to stress the code.

      I agree, though, that autograding I/O behavior is better than trying to autograde writing (of programs or of English).

      Comment by gasstationwithoutpumps — 2012 June 4 @ 20:43 | Reply
