One of the first presentations on Standards-Based Grading (SBG) that I read was Dan Meyer’s post, which seems to have inspired many others as well. I’ve also been reading Sean Cornally’s blog Think Thank Thunk; Cornally is passionately committed to SBG, and just as passionately opposed to including homework in assessment.

While I like some of the ideas of SBG, I’m still having trouble applying it to my own teaching.

The thing I like most about SBG is that it requires the teacher to articulate precisely what they want the students to learn—much more precisely than state standards or textbook designers need to. This is a valuable but very difficult task. The hard part is not listing topics, but coming up with meaningful assessments that test whether a student has mastered a concept.

I can easily list topics that students need to have some understanding of, and I can create assignments which show mastery of several of the crucial skills and ideas, but I have a hard time pinpointing exactly what I want students to know in a testable way. For example, one topic I teach is hidden Markov models (HMMs), which are a major tool in bioinformatics for protein and DNA sequence analysis. (For RNA sequence analysis, HMMs don’t capture enough of the information, but understanding HMMs helps with understanding Stochastic Context-Free Grammars, which are useful.) So I know I need to present HMMs, give the students ways of thinking about them, and walk them through the derivation of the forward-backward algorithm. But what can I assess them on? There isn’t time in the course for them to implement the forward-backward algorithm and test it (there’s barely time for them to do the much simpler dynamic programming of the Smith-Waterman algorithm for sequence-sequence alignment). What smaller assessment can I give that is not completely bogus?
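To give a sense of the scale involved, here is a minimal sketch of just the forward pass of the forward-backward algorithm on a toy two-state HMM (the fair/biased-coin model and all names here are my illustration, not anything from the actual course assignments):

```python
# Forward algorithm for a toy HMM (illustrative sketch only).
# init[i]     = P(start in state i)
# trans[i][j] = P(next state j | current state i)
# emit[i][s]  = P(emit symbol s | state i)

def forward(seq, init, trans, emit):
    """Return P(seq) under the HMM, summing over all hidden state paths."""
    n_states = len(init)
    # alpha[i] = P(prefix observed so far, current state = i)
    alpha = [init[i] * emit[i][seq[0]] for i in range(n_states)]
    for sym in seq[1:]:
        alpha = [
            emit[j][sym] * sum(alpha[i] * trans[i][j] for i in range(n_states))
            for j in range(n_states)
        ]
    return sum(alpha)

# Toy model: state 0 is a fair coin, state 1 a biased coin.
init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [{'H': 0.5, 'T': 0.5}, {'H': 0.9, 'T': 0.1}]
prob = forward("HTH", init, trans, emit)
```

Even this fragment—before adding the backward pass, scaling to avoid underflow, and real sequence I/O—hints at why a full implement-and-test assignment doesn’t fit in the time available.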

One assumption of SBG is that assessment is cheap. That is, that it does not take up too much student or teacher time, and that reassessment of a concept can be done easily. For some subjects (like arithmetic and algebra), there are readily available test generators that can create new instances of closely related problems at the push of a button. For those subjects, assessment is indeed cheap. But for other subjects, it may take weeks to craft a reasonable assessment, making reassessing difficult.

Just last fall, I redid the 6 programming assignments in my core bioinformatics class (each of which had been tweaked several times already). The change was prompted by a change in the programming language the assignments were to be implemented in, from Perl to Python. In redoing the assignments, I implemented solutions for each assignment, tweaked the assignment to make it better at probing student understanding of the concepts, and reimplemented the solutions. I also standardized the I/O specifications, so that I could more easily evaluate the student programs by automatically comparing their output to the output of my programs on input that the students had not seen. This effort, which did not involve creating new assignments, just tweaking 6 existing ones, took 2–3 weeks full time. These assignments are expected to take students about 10 hours each—much of the tweaking was to try to keep the assignments from getting too big. Creating a completely new assignment, which I do about every other year for this course, takes about 40 hours of effort. So at 40 hours of teacher time and 10 hours of student time for a new assessment, reassessing in this class is **not** cheap.

Part of my problem is that I’m not interested in whether the students know factoids about biology, programming, statistics, or bioinformatics. I don’t really care whether they can apply formulas to isolated problems or emulate a simple algorithm on a toy problem. What I want them to do is to put together all the material they have learned in other classes and create programs that use statistics to answer biological problems. I’m interested in their synthesis of the pieces, not a reductionist analysis of whether they have acquired the pieces separately.

This seems to me the biggest weakness of SBG: it is based on a reductionist approach to learning that is proper in the beginning stages of learning a subject, but which is not appropriate in later stages where the focus is on synthesis of the concepts. Even in classes like Algebra 2, which lend themselves well to listing specific topics and techniques students must master, and for which assessments can be cheaply produced, how do teachers assess the ability of students to put the concepts together? SBG seems pretty good for ensuring that students have all the tools in their toolbox, but how do we teach students how to choose the right tools? How do we assess their ability to solve a problem using multiple tools, choosing the best tools and applying them correctly? (Note: although I don’t teach high-school math, I have coached middle-school math teams and may coach a high-school one next year. For competition math, learning to choose among an array of tools and apply them in clever ways is the core pedagogic goal.)

I’ve yet to come up with any small assessments that tell me useful things about the students’ ability to synthesize concepts. I’ve pretty much given up on giving tests in my courses, relying instead on programming assignments, week-long essay assignments, and quarter-long research projects. Anything shorter just doesn’t seem to measure what I’m interested in. But these big assignments invariably cover multiple skills, and don’t lend themselves to the easy diagnostics of SBG.

[…] second post on SBG looked at the unspoken assumption that assessment is cheap, something that is not the case in many […]

Pingback by Sustained performance and standards-based grading « Gas station without pumps — 2010 August 29 @ 09:16

[…] fashionable is Standards-Based Grading, which is good for a reductionist analysis of topics, but not so strong on synthesis. SBG also has trouble measuring sustained […]

Pingback by Experience Points for classes « Gas station without pumps — 2010 October 23 @ 22:51

“This seems to me the biggest weakness of SBG: it is based on a reductionist approach to learning that is proper in the beginning stages of learning a subject, but which is not appropriate in later stages where the focus is on synthesis of the concepts. … SBG seems pretty good for ensuring that students have all the tools in their toolbox, but how do we teach students how to choose the right tools?”

Make synthesis a standard (a heavily weighted one, if you want to!). ANY skill or piece of content can be made into a standard; that’s the beauty. I don’t have a standard on fitting lines to data, because my juniors/seniors should be fine with the mechanics of that, but I do have a standard on algebraic models, which requires students to evaluate the data and pick the best model. Poor integration of concepts? Poor standard score.

Comment by josh g — 2010 October 25 @ 11:49

[…] The computation parts of the course are pretty simple to implement because the assessment for them is cheap. The Conceptual questions for Linear Algebra were more of a challenge, but I could still create a […]

Pingback by SBG Reflections « Solvable by Radicals — 2011 June 16 @ 13:36