Gas station without pumps

2011 March 24

Why sweat the details?

Filed under: Uncategorized — gasstationwithoutpumps @ 11:06

Mark Guzdial on his Computing Education Blog asks some interesting questions today in his post Why we teach how linked lists are made, and how memory representations work.

The basic question is a simple one that applies to almost any field: why teach the low-level details? When and to whom?  He offers two of the standard arguments as straw men:

One school of thought goes something like this, “We have to teach how data structures are really implemented, because unless you realize how the runtime behavior differs between a singly-linked list and a doubly-linked list and a tree, you really can’t make informed choices between them.”  In other words, we teach the implementation to inform the use.

An opposing school of thought says, “It’s ridiculous to have anyone implement yet another of these complex data structures!  How many linked list implementations do we need?  Instead, we should focus on how to use them well.”  That school of thought sees data structures as a programming or even vocational benefit.

He points to another article (The dimensions of variation in the teaching of data structures, doi:10.1145/1007996.1008023; sorry that it is hidden behind a paywall) that describes five "categories of instructor rationale for the purpose of teaching data structures:"

  • Developing transferable thinking
  • Improving students’ programming skills
  • Knowing “what’s under the hood”
  • Knowledge of software libraries
  • Component thinking

They further categorize four of these approaches along two dimensions: abstract/concrete and computer science/object engineering.  It's all a very nice classification that lets one see why different sides of the argument about what to teach talk past each other so much.  It does, however, ignore the computer-engineering rationale for teaching low-level programming, which is twofold: to understand what needs to be built in hardware for the software to exist, and to build software interfaces to the hardware (device drivers cannot ignore low-level details; they are the code that allows computer scientists to ignore those details).

Guzdial wants to present a different view:

It’s only through dealing with implementation structures that I (and my students) have seen their misconceptions highlighted

Mark sees the value of teaching lower-level details in the diagnostic information it provides about student understanding. Very high-level constructs are complex things, and failure to use them correctly can stem from any of several different misunderstandings. Using large library packages can also produce correct or nearly correct programs even when a deep misunderstanding persists. Lower-level details (like the meaning of next and previous in a doubly-linked list) are more revealing of confusions (like conflating a variable, an object, and a pointer) that can cause students serious problems in later programming.
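As a concrete illustration (my sketch, not an example from Guzdial's post), here is the kind of doubly-linked-list insertion where those pointer updates expose confusion: getting it right requires seeing that next and prev hold references to node objects, not copies of them.

```python
class Node:
    """A doubly-linked list node.  The next and prev fields hold
    references to other Node objects (or None), not copies."""
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

def insert_after(node, value):
    """Insert a new node holding value immediately after node.
    All four pointer updates are needed; forgetting one (or doing
    them in the wrong order) is the classic error that reveals
    confusion between a variable, an object, and a pointer."""
    new = Node(value)
    new.prev = node
    new.next = node.next
    if node.next is not None:
        node.next.prev = new
    node.next = new
    return new
```

A student who debugs a forward traversal into working can still have b.prev pointing at the wrong node; walking the list backward is the test that exposes the missing update.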

I liked one of his commenter’s (drj11) responses:

But the real take home message for me was a much higher level one: some datastructures and algorithms are too clever and useful to discover by oneself (Graham scan, the FFT algorithm for multiplying large numbers, red–black trees are three that spring to mind). And the practical application of this is that sometimes for some problems it may be useful to go and do some research to find “an algorithm”.

That is, the value of going into more detail than the student “needs” may be to point out the limits of their understanding, so that they know when to do further reading, look for an existing program, or hand off a problem to experts.

I run into pedagogical questions in my bioinformatics classes all the time about how much detail to present in algorithms and how much implementation to require of students.  There isn't time to go to full implementation on everything; if we did, we'd not cover nearly enough of the important concepts.  But if I give just a high-level view of everything, then students end up without enough depth of understanding to implement anything new.  Given that I'm teaching grad students and seniors in bioinformatics, they must be able to do new things, not just use the existing tools (that is a different class).

My approach for the Bioinformatics: Models and Algorithms class is to use a mix of different levels: some stuff taught down to very fine details but not practiced (like the algorithm for adding probabilities represented in log space), some stuff taught in fine detail and implemented by the students (like affine-gap alignment), some stuff taught in moderate detail (like hidden Markov models and Baum-Welch), and some stuff flown over at 1000 feet (like stochastic context-free grammars and phylogenetic trees). Obviously, different teachers might choose different topics to go into depth on or to fly over: that choice is a personal one based on what algorithms the instructor is most comfortable diving into detail on.
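The log-space addition I teach in fine detail is the standard log-sum-exp identity: log(p + q) = a + log1p(exp(b − a)), where a is the larger of log p and log q. A minimal sketch (my own illustration, with a hypothetical function name):

```python
import math

def log_sum(log_p, log_q):
    """Return log(p + q) given log_p = log(p) and log_q = log(q),
    without ever forming p or q (which could underflow to 0)."""
    if log_p < log_q:
        log_p, log_q = log_q, log_p      # make log_p the larger
    if log_q == float("-inf"):           # adding probability 0
        return log_p
    # Factor out the larger term: log(p+q) = log_p + log(1 + q/p).
    return log_p + math.log1p(math.exp(log_q - log_p))
```

Factoring out the larger term keeps the exponential's argument nonpositive, so it never overflows, and log1p preserves precision when q/p is tiny.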

Affine-gap alignment is one of those algorithms that is very good for exposing student misunderstandings, as doing it properly requires converting a high-level recursive definition into a low-level dynamic programming implementation. There are several students every year who have trouble with the edge conditions and at least one or two who don’t seem to be able to internalize that the traceback has to use exactly the same tests as the forward recurrence, no matter how many examples I work with them or what explanations I use. Every year I try a somewhat different approach to teaching the traceback algorithm, swearing that this year I’ll get through to everyone.  I’ve even tried showing them the most common error and helping them see why it produces the wrong results—that reduced the number making that particular mistake to about 10–15% of the class, which is still too many.

Simpler alignment algorithms (like linear gaps or arbitrary gap costs), which are more often used as exercises in textbooks, do not reveal the misunderstandings so clearly, as it is often possible to debug those programs into doing the right thing in most cases.  Even the affine-gap algorithm requires some carefully constructed test cases to reveal some of the subtler bugs in edge conditions or traceback.
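To make the forward recurrence concrete, here is a minimal sketch of a Gotoh-style affine-gap global alignment score, with three matrices for the three ways a cell's alignment can end (match/mismatch, gap in one sequence, gap in the other). This is my own illustration with arbitrary scoring parameters, not the version I hand out in class, and it computes only the score, omitting the traceback:

```python
NEG = float("-inf")

def affine_score(x, y, match=1, mismatch=-1, gap_open=-2, gap_ext=-1):
    """Global alignment score of x and y with affine gap costs:
    a gap of length k costs gap_open + k * gap_ext."""
    n, m = len(x), len(y)
    # M: alignment ends with x[i-1] aligned to y[j-1]
    # X: ends with x[i-1] aligned to a gap; Y: ends with a gap in x
    M = [[NEG] * (m + 1) for _ in range(n + 1)]
    X = [[NEG] * (m + 1) for _ in range(n + 1)]
    Y = [[NEG] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0.0
    # Edge conditions: first row/column can only be one long gap.
    for i in range(1, n + 1):
        X[i][0] = gap_open + i * gap_ext
    for j in range(1, m + 1):
        Y[0][j] = gap_open + j * gap_ext
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if x[i - 1] == y[j - 1] else mismatch
            M[i][j] = s + max(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1])
            X[i][j] = max(M[i-1][j] + gap_open + gap_ext,   # open a gap
                          X[i-1][j] + gap_ext)              # extend it
            Y[i][j] = max(M[i][j-1] + gap_open + gap_ext,
                          Y[i][j-1] + gap_ext)
    return max(M[n][m], X[n][m], Y[n][m])
```

A traceback would have to repeat exactly these max comparisons in reverse (deciding at each cell which of the three matrices the value came from), which is where the errors described above arise: students who use different tests in the traceback than in the forward pass get alignments that do not match the reported score.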

Bottom line: I think that there are many different reasons for teaching low-level details, and Guzdial's and drj11's are as good as many others.
