Gas station without pumps

2016 July 6

Outcomes assessment

Filed under: Uncategorized — gasstationwithoutpumps @ 21:47

In Confessions of a Community College Dean: The Whole and the Sum of Its Parts, “Dean Dad” wrote about outcomes assessment:

The first was a discussion on campus of the difference between the “course mapping” version of outcomes assessment and the “capstone” version.  Briefly, the first version implies locating each desired student outcome in a given class—“written communication” in “English composition,” say—and then demonstrating that each class achieves its role.  The idea is that if a student fulfills the various distribution requirements, and each requirement is tied to a given outcome, then the student will have achieved the outcomes by the end of the degree.

Except that it doesn’t always work that way.  Those of us who teach (or taught) in disciplines outside of English have had the repeated experience of getting horrible papers from students who passed—and even did well in—freshman comp.  For whatever reason, the skill that the requirement was supposed to impart somehow didn’t carry over.  Given that the purpose of “general education” is precisely to carry over, the ubiquity of that experience suggests a flaw in the model.  The whole doesn’t necessarily equal the sum of the parts.

In a “capstone” model, students in an end-of-sequence course do work that gets assessed against the desired overall outcomes.  Can the student in the 200 level history class write a paper showing reasonable command of sources?  The capstone approach recognizes that the point of an education isn’t the serial checking of boxes, but the acquisition and refinement of skills and knowledge that can transfer beyond their original source.  

I have certainly experienced the phenomenon of students doing well in freshman writing courses, but being unable to write reasonably in upper-division and graduate engineering courses—indeed, that is why I insisted on UCSC’s computer engineering curriculum requiring a tech writing course 30 years ago (and teaching it for about 14 years). I continue to teach writing-intensive courses—my current main course, Applied Electronics for Bioengineers, requires about 5–10 pages of writing from each pair of partners each week (though that load will drop by half next year, when the course is split into two quarters). The writing level of students increases noticeably during the quarter, though a number of students continue to have problems with organization, with paragraph structure, with grammatical sentences, and with punctuation (particularly commas).

But evaluating writing just once in a capstone course is no solution—that just invites a lowering of standards so that the failure rate is not too high. Nor can one guarantee that capstones will necessarily be a good check of all the desired outcomes. Indeed, one of the capstone options for the bioengineering degree at UCSC does not involve major writing—the results are presented orally, and oral presentations are frequent in the course.

I recently wrote an evaluation of the “Program Learning Outcomes” (PLOs) for the bioinformatics program at UCSC (and refused to write one for the bioengineering program—it was hard enough getting the info needed for the bioinformatics assessment).  The assessment falls more in the “various distribution requirements” camp than in the “capstone” camp.  We did not have much trouble showing that the PLOs were assessed thoroughly, largely because the PLOs were chosen to be things that the faculty really cared about and included in their course designs, rather than “wouldn’t it be nice if students magically acquired this” outcomes.

Here is the report (minus any attachments):
Program Learning Outcome Assessment
Bioinformatics BS program
Spring 2016
The bioinformatics program was asked to assess at least one of our Program Learning Outcomes (PLOs):

A bioinformatics student completing the program should

  • have a detailed knowledge of statistics, computer science, biochemistry, and genetics;
  • be able to find and use information from a variety of sources, including books, journal articles, and online encyclopedias;
  • be able to design and conduct computational experiments, as well as to analyze and interpret data;
  • be able to apply their knowledge to write programs for answering research questions in biology;
  • be able to communicate problems, experiments, and design solutions in writing, orally, and as posters; and
  • be able to apply ethical reasoning to make decisions about engineering methods and solutions in a global, economic, environmental, and societal context.

Because the program graduates so few students, it did not seem very productive to examine the output of just the most recent graduating class—we would need a decade’s worth of output to get statistical significance, and any information about changes in the curriculum would be lost in viewing such a long time scale. Instead, we asked the faculty in our department who teach the required courses of the major how they assess the students for the objectives that they cover.
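To give a rough sense of why a small program needs many cohorts, a standard normal-approximation power calculation shows the scale involved. The pass rates below (60% versus 80%) are hypothetical numbers chosen only for illustration, not figures from the program:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group to detect a difference
    between two proportions (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Hypothetical: detecting a change from a 60% to an 80% outcome rate
# at 80% power needs roughly 79 students in each group.  A program
# graduating ~10 students a year would need about 8 years of output
# per group—hence "a decade's worth" before curriculum changes show up.
n = n_per_group(0.60, 0.80)
```

Even a fairly large effect thus takes years of graduates to detect, which is why surveying the faculty who teach the required courses is a more informative assessment strategy for a small program.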

A Google form was used to collect the information.  The faculty were prompted:

Please complete a separate form for each course that you teach that is a required or elective course for the Bioinformatics major (select from list below).  Only courses that are required (or are part of a small list of constrained choices) matter here, since we are looking for guarantees that all students are meeting the PLOs, not that there is some elective path that would cover them.

Please provide a sentence or two describing how your course(s) provide evidence that the student has met the outcome.  Be explicit (what assignment in what course provides the evidence)!  Your course does not have to provide evidence for all the PLOs—one or two PLOs supported strongly in a course is more convincing.

Responses were collected for 7 courses: BME 80G Bioethics, BME 110 Computational Biology Tools, BME 130 Genomes, BME 185 Technical Writing for Bioengineers, BME 205 Bioinformatics models & algorithms, BME 211 Computational Systems Biology, and BME 230/L Computational Genomics.  Each of these courses is required, except BME 185 (bioinformatics students may take CMPE 185) and only one of BME 211 and 230/L is required. Our hope is that all the PLOs are assessed thoroughly in these courses, so that we do not need to rely on courses outside our control for outcome assessment.

The responses to the questions are attached as a CSV file and will be summarized here [I’m not including the attachments in this blog version]. Despite the prompt, faculty did not always explain exactly how the outcome was assessed.

detailed knowledge of statistics, computer science, biochemistry, and genetics

All courses except BME 80G (bioethics) test some aspect of this objective. Most of the assignments in most of the courses depend heavily on this content knowledge, and the faculty are convinced that this objective is being adequately addressed.  Note that we did not include the courses from other departments that actually teach the fundamental material—just the courses within our department that rely on it.

able to find and use information from a variety of sources, including books, journal articles, and online encyclopedias;

All the courses rely on students gathering information from a variety of sources, with different levels of search and different levels of interpretation needed in each course. All courses have at least one assignment that assesses students’ ability to use information from a variety of sources, and most have several.  Again, because of the pervasive nature of the objective in all our courses,  the faculty have no concern that the outcome is being inadequately assessed.

able to design and conduct computational experiments, as well as to analyze and interpret data;

All the courses except Bioethics require some data analysis, and several require computational experiments, but only BME 211 and 230/L have the students doing extensive design of the experiments.

able to apply their knowledge to write programs for answering research questions in biology;

BME 80G (Bioethics) and BME 110 (Bioinformatics tools) do not require programming, and BME 185 (Technical writing) has minimal programming, but all the other courses require writing computer programs, and the programming tasks are all directly related to research questions.  In BME 211 and BME 230/L the questions are genuine open research questions, not just classroom exercises.

able to communicate problems, experiments, and design solutions in writing, orally, and as posters; and

All courses except BME 110 (Bioinformatics tools) require written reports, and several of the courses require oral presentation. Only BME 185 (Technical writing) requires poster presentation, so we may want to institute a poster requirement in one of the other courses, to provide more practice at this form of professional communication, as posters are particularly important at bioinformatics conferences.

able to apply ethical reasoning to make decisions about engineering methods and solutions in a global, economic, environmental, and societal context.

BME 80G (Bioethics) is specifically focused on this PLO and covers it thoroughly, with all the assessments in the course testing students’ ability to apply ethical reasoning.  There is also coverage of research and engineering ethics in BME 185 (Technical Writing).  Although most of the courses do not teach ethics, the writing assessment in each of the courses holds students to high standards of research citation and written acknowledgement of collaboration.

Overall, the faculty feel that the PLOs are more than adequately assessed by the existing courses, even without looking at assessments in obviously relevant courses for the objectives from outside the department (such as AMS 132 for statistical reasoning). Because so many of the objectives are repeatedly assessed in multiple courses, they see no point to collecting portfolios of student work to assess the objectives in yet another process.

Poster presentation and ethical reasoning are the only outcomes assessed in a single course each, and since practical research ethics is assessed in almost every course, poster presentation remains the one skill that might need reinforcement in future improvements to the curriculum.

2014 December 27

We create a problem when we pass the incompetent

Filed under: Uncategorized — gasstationwithoutpumps @ 22:55

I finished my grading earlier this week, and I was a little distressed at how many students did not pass my graduate bioinformatics class (19% of the students in the class did not pass this fall, about equally divided between the seniors and the first-year grads—note that “passing” for a grad student is B– or better, while for an undergrad it is C or better). Some students were simply unprepared for the level of computer programming the course requires and were not able to get up to speed quickly enough.  They made substantial improvement during the quarter and should do fine next time around, particularly if they continue to practice their programming skills. Others have a history of failing courses and may or may not make the effort needed to develop their programming skills before their next attempt.

I don’t like to have students fail my courses (particularly not repeatedly, as some have done), but I can’t bring myself to pass students who have not come close to doing the required work. When I pass a student in a course, it means that I’m certifying that they are at least marginally competent in the skills that the course covers (most of my courses are about developing skills, not learning information).  I’ll give the students all the help and feedback I can to develop those skills, but I grade them on what they achieve, not on how much work they put in, what excuses they have, nor how many times they’ve attempted the course.

I often feel alone in holding the line on quality—I’m afraid that there are not enough faculty willing to fail students who don’t meet the requirements of the courses they are teaching.  Those teachers are just kicking the problem of inadequately prepared students on to the next teacher, or to the employer of the student who graduates without the skills a college graduate should have.

In The Academe Blog, in the click-bait-titled post “Nude Adult Models, William Bennett, Common Core, Rotten Teachers, Apples, Robert Frost,” Ulf Kirchdorfer wrote:

The reality is that many teachers, whether prompted by supervisors or of their own volition, continue to pass students so that we have many that reach college with the most basic of literacy skills, in English, math, science, the foreign languages.

Tired of listening to some of my colleagues complain of college students being unable to write, I went to look at learning outcomes designed for students in secondary education, and sure enough, as I had suspected, even a junior high, or middle-school, student should be able to write a formulaic, basic five-paragraph theme.

Guess what. Many college students, even graduating ones, are unable to do so.

While I don’t often agree with Ulf (who often takes extreme positions just for the fun of argument), I have to agree with him that many of my students are not writing at what I would consider a college level for senior thesis proposals, even though they have had three writing courses (including a tech writing course) as prerequisites to the senior thesis.  And it isn’t just writing coherent papers in English that is a problem, as evidenced by the failure rate in my bioinformatics course due to inadequate programming skills (despite several prerequisite programming courses).

In an article about Linda B. Nilson’s “spec” grading system, which attempts to fix some of the problems with current grading practices, she is quoted:

“Most students (today) have never failed at anything,” Nilson noted, since their generation grew up receiving inflated grades and trophies for mere participation in sports. “If they don’t fail now, they’re going to have a really hard life.”

It doesn’t do anyone any favors to pass students who do not meet the minimum competency expected—the students are deluded into thinking they are much more competent than they are (so that they don’t take the necessary actions to remediate their problems); future teachers are forced to either reteach what the students should already have learned (which means that the students who had the prerequisites get shortchanged) or lose a big chunk of the class; the university degree loses its value as a marker of competence; and employers ratchet up credentials needed for employment (as the degrees mean less, higher degrees are asked for).

There is pressure on faculty to raise pass rates and pass students who don’t have adequate preparation.  The University administration wants to increase the 4-year graduation rate while taking in more students from much weaker high schools. I worry that the administration is pushing for higher graduation rates without considering the problems caused by pressuring faculty to pass students who are not competent. The reputation of the university is based on the competence of its alumni—pumping out unqualified students would fairly quickly dissipate the university’s good name.

Four-year graduation is not very common in engineering fields—even good students who start with every advantage (like several AP courses in high school with good AP scores) have a hard time packing everything into 4 years. Minor changes to course schedules can throw off even the best-laid plans, so an extra quarter or two is a completely routine occurrence. And that’s for the top students.  Students coming in with weak math preparation find it almost impossible to finish in 4 years, because they have to redo high school math (precalculus), causing delays in their starting physics and engineering classes. If they ever fail a course, they may end up a full year behind, because the tightening of instructional funding has resulted in many courses only being offered once a year.  There is a lot of pressure on faculty to pass kids who clearly are not meeting standards, so that their graduation is not delayed—as if the diploma were all that mattered, not the education it is supposed to represent.

There are things that administrators can do to reduce the pressure on faculty.  For example, they could stop pushing 4-year graduation rates, and pay more attention to the 5-year rates. The extra time would allow students with a weaker high school background to catch up.  (But our governor wants to reduce college to 3 years, which can only work if we either fail a lot of students or lower standards enormously—guess which he wants. Hint: he favors online education.) Students who need remedial work should be given extra support and extra time to get up to the level needed for college, not passed through college with only high school education.

Or they could stop admitting students to engineering programs who haven’t mastered high school math and high school English.  This could be difficult to do, as high school grades are so inflated that “A” really does mean “Average” now, and the standardized tests only cover the first two years of high school math and that superficially (my son, as a sixth grader, with no education in high school math, got a 720 on the SAT math section).  It is hard for admissions officers to tell whether a student is capable of college-level writing or college-level math if all the information they get is only checking 8th-grade-level performance.

Or administrators could encourage more transfer students from community colleges, where they may have taken several years to recover from inadequate high school education and get to the point where they can handle the proper expectations of college courses.  (That would help with the attrition due to freshman partying also.)

Or administrators could pay for enough tenured faculty to teach courses with high standards, without the pressure that untenured and contingent faculty feel to keep a high pass rate in order to get “good” teaching evaluations and retain their jobs.

Realistically, I don’t expect administrators to do any of those reasonable things, so it is up to the faculty to hold onto academic standards, despite pressure from administrators to raise the 4-year graduation rate.

2013 August 4

What teachers need to look for

Filed under: Uncategorized — gasstationwithoutpumps @ 13:29

Grant Wiggins, in his post Better seeing what we don’t see as we teach, gives some good advice to teachers about observing their students.  It is wrapped in too much sports metaphor for my taste, which makes the condensed summary a bit hard to understand:

  • Look beyond the “yesses” and head nods. 
  • Look “off the ball.”
  • Spot the “first foul.”
  • Listen for the ‘dog that does not bark’.
  • Look for what the quiz does not show.
  • Who are my “starters”?
  • Feedback on your feedback.
  • What notes do they take?
  • Call a time-out.
  • Assess formatively every few minutes.
  • Ask for feedback.

There was no pithy quote to pull out to summarize the post.  The basic idea is that most teachers do not pay enough attention to the students who are not actively providing feedback, and that it requires concerted, conscious effort to become aware of the students who are either not participating or who are giving the feedback that they think you want, rather than honest assessment of their understanding.  For the past few years I’ve been working on improving my awareness of the students having trouble with the material and involving them more in the class, but I still have a ways to go.

For me, the biggest change I could make is to ask for formative assessment from the whole class every few minutes. (Grant suggests roughly 10-minute intervals.) Although the logistics of clickers, cards, or hand signs is pretty simple for getting feedback on multiple choice questions, I find it difficult to come up with multiple choice questions that tell me anything useful—coming up with them on the fly during a class seems particularly challenging.

The classes I teach are unusual enough that there isn’t a big body of predigested teacher support material for me to lean on—I have to come up with everything myself.  This doesn’t cause me any difficulty for lectures—I know the material well enough that I generally only need a couple of words of notes to remind me what to talk about, but building assessments takes me a long time.  I may spend a week or more full-time devising and writing a programming assignment or a lab assignment, and I’ve mostly given up on writing timed tests (though I used one in the Applied Circuits class and will do so again next year).

I particularly find it difficult to come up with small questions, of the sort that students can reasonably answer in a minute or two, which is what I’d need for in-class feedback of the type Grant Wiggins suggests.  The homework and lab assignments are almost always of the size that it takes 3–10 hours to do them.  I’ll be trying in the coming year to devise at least one tiny question for each class session—which will probably take me longer than the class sessions themselves.  I’ll record the questions in this blog after I’ve used them in class, so that I’ll be able to use them in classes in future.

2013 March 26

Petition for human readers

Filed under: Uncategorized — gasstationwithoutpumps @ 15:32

I just found out about a petition against the machine scoring of essays at Human Readers (thanks to a blog post on the AAUP blog):

We call for schools, colleges, and educational assessment programs to stop using computer scoring of student essays written during high-stakes tests.

Every year hundreds of thousands of students write essays for large-scale standardized tests. The scores are used in life-changing decisions. Students are accepted into, placed within, and rejected from educational programs. Graduates are hired or not hired. Teachers are qualified, evaluated, promoted, and fired. Learning institutions are compared, accredited, and punished. Yet in a major disservice to all involved, more and more of these essays are scored not by human readers but by machines.

Let’s face the realities of automatic essay scoring. Computers cannot “read.” They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others. Independent and industry studies show that by its nature computerized essay rating is

  • trivial, rating essays only on surface features such as word size, topic vocabulary, and essay length
  • reductive, handling extended prose written only at a grade-school level
  • inaccurate, missing much error in student writing and finding much error where it does not exist
  • undiagnostic, correlating hardly at all with subsequent writing performance
  • unfair, discriminating against minority groups and second-language writers
  • secretive, with testing companies blocking independent research into their products

The basic premise of the petition is good: computers can’t score those aspects of writing that people actually care about, and so should not be used for scoring any essays that matter. Of course, some of the alternatives are just about as bad (like the MOOCs that use peer grading by incompetent “peers”), but I signed the petition anyway.

The bottom line is that assessment of writing is difficult, and so is expensive. If you can’t afford to pay for proper evaluation by well-trained human readers, then you can’t afford to use essays as part of an assessment. There are no shortcuts.

2012 January 18

Teaching is not quite like programming

Filed under: Uncategorized — gasstationwithoutpumps @ 23:39

In one of the comments at Kitchen Table Math, the sequel, SteveH compared teaching students to programming.  In particular, he was railing against holistic evaluations:

If students can do a complex task, that really only means that they can do that one problem, no matter how much understanding is applied. If they can’t, it’s very difficult to find the gap or problem in understanding. It reminds me of unit testing versus system testing in programming. Testing doesn’t start at the “authentic” system level. You might define tests that work, but you will never properly exercise the code.

For any complex system, you have to validate the parts before you put them together to test and validate the whole. For education, you have to validate that students have mastered basic skills before letting them loose on complex “authentic” or real world problems.
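SteveH’s unit-versus-system-testing distinction can be made concrete with a toy example (the functions here are invented purely for illustration). A failing unit test points directly at the broken part; a failing system test only says that something, somewhere, is wrong:

```python
import unittest

def mean(xs):
    # a small, independently testable "unit"
    return sum(xs) / len(xs)

def summarize(xs):
    # the "system" level: composes several units into one result
    return {"n": len(xs), "mean": mean(xs), "max": max(xs)}

class UnitTests(unittest.TestCase):
    # exercising one part in isolation localizes any failure
    def test_mean(self):
        self.assertEqual(mean([2, 4, 6]), 4)

class SystemTests(unittest.TestCase):
    # a pass here does not say which part works; a failure here
    # does not say which part broke
    def test_summarize(self):
        self.assertEqual(summarize([2, 4, 6]),
                         {"n": 3, "mean": 4, "max": 6})

# run with: python -m unittest <thisfile>
```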

This analogy seemed pretty reasonable at first, so I thought about how it interacts with my own assessment methods.  I’m usually dealing with students at a fairly advanced level, and there is no way that I could assess them on all the basic skills they are supposed to have acquired over the preceding 16 years of their education.  The new skills I want them to develop are layered on top of the writing, math, biology, and programming skills and knowledge they are supposed to already have.  I can test some of the new skills incrementally (and I do to some extent, having usually 6 programs of gradually increasing difficulty, plus a couple of papers), but most of the problems I uncover tend to be underlying problems in their previous training.

I disagree with him that evaluating a complex task results in only one bit of information (they can or can’t do it).  The complex task produces a complex output that can be debugged—though not always easily. So my teaching and assessment is often more like debugging someone else’s poorly tested and undocumented library.  I can’t write unit tests for everything in the library—I can’t even figure out what is in the library, but I can look for errors in the output and try to trace them back to their sources. That is why it takes me forever to grade programs or papers—I’m not doing a pass/no-pass assessment, but attempting to debug the underlying thought processes of the person. Like all debugging tasks, it is difficult and the first guess at the problem is usually wrong.

I would love it if the students I taught had all been excellently educated and I knew precisely what their skills were (like tested and documented libraries).  But that is a fantasy world.  Every one of my students comes in with a different skill set, often with great strengths in some areas and unexpected holes in others. Part of my job is to try to identify some of those holes and point them out to the students, so that they can patch them up. Another part is to help them build new capabilities on top of what they already have.

