Gas station without pumps

2014 September 1

Where PhDs get their Bachelors’ degrees

Last year I wrote about a study that looked at where CS PhD students got their bachelor’s degrees. Now Reed College has extended that question to other fields as well: Doctoral Degree Productivity.  Their point was to show how high Reed ranked on the standard they chose: the number of students who went on to get PhDs divided by the number of students getting bachelor’s degrees.  I quote the tables and accompanying text below, but I take no credit or blame for the data—this is directly from Reed’s site:

Undergraduate Origins of Doctoral Degrees

Percentage ranking of doctorates, by academic field, conferred upon graduates of listed institutions.

Rank All Disciplines Science and Math Social Sciences Humanities and Arts
1 Calif. Inst. of Tech. Calif. Inst. of Tech. Swarthmore New England Conserv. of Music
2 Harvey Mudd Harvey Mudd Grinnell Curtis Institute of Music
3 Swarthmore Reed Reed Juilliard
4 Reed MIT Bryn Mawr Cleveland Inst. of Music
5 Carleton NM Institute Mining & Tech. Spelman St. John’s College
6 MIT Carleton Oberlin Reed
7 Grinnell Wabash Wesleyan Hellenic College-Holy Cross Greek Orthodox Sch. of Theology
8 Princeton Rice St. Joseph Seminary Swarthmore
9 Harvard Univ. of Chicago Harvard Oberlin
10 Oberlin Grinnell Pomona Amherst

Percentage Ranking by Specific Fields of Study

Rank Life Sciences Physical Sciences Psychology Other Social Sciences* Humanities
1 Calif. Inst. of Tech. Calif. Inst. of Tech. Univ. Puerto Rico – Aguadilla Swarthmore St. John’s, MD
2 Reed Harvey Mudd Wellesley Reed Reed
3 Swarthmore Reed Vassar Harvard Amherst
4 Carleton MIT Hendrix Grinnell Swarthmore
5 Grinnell NM Institute Mining/Tech. Pontifical Coll. Josephinum Univ. of Chicago Carleton
6 Harvey Mudd Carleton Grinnell Bryn Mawr Yale
7 Univ. of Chicago Wabash Swarthmore Thomas More College of Lib. Arts Thomas More College of Lib. Arts
8 Haverford Rice Barnard Oberlin Bryn Mawr
9 MIT Univ. of Chicago St. Joseph Seminary Coll. Bard College at Simon’s Rock St. John’s, NM
10 Earlham Grinnell Pomona Wesleyan Wesleyan
11 Harvard Haverford Reed Amherst Princeton
12 Cornell Univ. Swarthmore Wesleyan Pomona Bard College at Simon’s Rock

*Does not include psychology, education, or communications and librarianship.

Source: National Science Foundation and Integrated Postsecondary Education Data System. The listing shows the top institutions in the nation ranked by estimated percentage of graduates who went on to earn a doctoral degree in selected disciplines between 2001-2010.

All the schools listed are private schools except Univ. Puerto Rico—Aguadilla and NM Institute Mining/Tech., but seeing dominance by expensive private schools is not very surprising—grad school is expensive, and students who can afford expensive private schools are more likely to be able to afford expensive grad school and are less likely to need to work immediately after getting their B.S. or B.A. A PhD is not a working-class degree—it prepares one for only a small number of jobs, mainly in academia or national labs, so for many it is just an elite status symbol.  What is more surprising is how poorly the Ivy League schools do on this list—perhaps those who get their elite status conferred by their bachelor’s institution see no need to continue on to get higher degrees.

Reed does not report numbers directly comparable with the ones in the Computing Research Association report, which reports only on computer science PhDs, where

Only one institution (MIT) had an annual average production of 15 or more undergraduates.   Three other institutions (Berkeley, CMU, and Cornell) had an average production of more than 10 but less than 15.  Together, these four baccalaureate institutions accounted for over 10% of all Ph.D.’s awarded to domestic students.   The next 10% of all Ph.D.’s in that period came from only eight other baccalaureate institutions (Harvard, Brigham Young, Stanford, UT Austin, UIUC, Princeton, University of Michigan, and UCLA). 

Note that five of the top producers of bachelor’s in CS who went on to get PhDs were public schools.  The CRA does not report PhD/BS numbers for individual institutions, probably because the numbers are too small to be meaningful for most colleges—you have to aggregate either across many colleges or across many fields before the denominators are big enough to avoid just reporting noise.  Reed did the aggregating across fields, while the CRA report aggregated across colleges, finding that research universities sent about 2.5% of their CS graduates on to get PhDs, 4-year colleges about 0.9%, and master’s-granting institutions about 0.6%.  They did have one finding that supports Reed’s analysis:

The top 25 liberal arts colleges (using the U.S. News and World Reports ranking) collectively enroll slightly less than 50,000 students per year in all majors and were the origins of 190 Ph.D. degrees between 2000 and 2010, collectively ranking ahead of any single research university.

Reed’s findings are also consistent with the NSF report that put the “Oberlin 50” colleges highest at over 5% of their science and engineering graduates going on to get PhDs, compared to about 3% for research universities.  The NSF report supports somewhat the analysis that socio-economic status is important in determining who goes on to grad school—private research universities match the Oberlin 50, but public research universities have only about half as large a fraction of their graduates go on to grad school.

I found out about this site from The Colleges Where PhD’s Get Their Start, which has a copy of the tables that probably came from an earlier, buggy  version of the site, because Lynn O’Shaughnessy wrote

I bet most families assume that attending a public flagship university or a nationally known private research university is the best ticket to graduate school. If you look at the following lists of the most successful PhD feeder schools for different majors, you will see a somewhat different story. Not a single public university makes any of the lists. The entire Cal State system, however, is considered the No. 1 producer of humanities PhD’s.

I could believe that the Cal State system had the largest raw numbers of students going on to get PhDs in humanities, as it is a huge 4-year college system, enrolling about 438,000 students and awarding about 76,000 bachelor’s degrees per year. Are there any other colleges in the US graduating so many BS or BA students per year? But the fact remains that Cal State is not the flagship university of California, and the University of California probably has a much higher percentage of its alumni go on to get PhDs.

In fact, one of the big problems with these lists is the question of scale—most of the colleges that come up high on Reed’s lists (which means high on NSF’s lists) do so by having very small denominators—they don’t graduate many students, though a high percentage of those go on to get PhDs.  In terms of raw numbers of students who go on to get PhDs, the public research universities produce many more than the private research universities, and the liberal arts schools are just a drop in the bucket. Of the top 25 schools in terms of raw numbers who go on to get PhDs in science and engineering, 19 are public research universities and 6 are private research universities—of the top 50 only 17 are private research universities.

When you are looking for a cohort of similarly minded students, you get slightly higher enrichment at some very selective private schools, but there are actually more peers at a large public research university—if you can find them.

2014 February 2

CS is not a foreign language

Filed under: Uncategorized — gasstationwithoutpumps @ 07:40

The Computer Science Teachers Association recently had a blog post about a recent development in high school CS teaching, Why Counting CS as a Foreign Language Credit is a Bad Idea.

When these policy makers look at schools, they see that computer science is not part of the “common core” of prescribed learning for students. And then they hear that Texas has just passed legislation to enable students to count a computer science course as a foreign language credit and it seems like a great idea.

But all we have to do is to look at Texas to see how this idea could, at the implementation level, turn out to be an unfortunate choice for computer science education. Here are the unintended consequences

  1. If a course counts as a foreign language course, it will be suggested that a new course must be created.
  2. If a new course is created, chances are that it won’t fit well into any of the already existing course pathways for college-prep or CTE.
  3. This new course will be added to the current confusing array of “computing” courses which students and their parents already find difficult to navigate.
  4. There will be pressure brought to ensure that that course focuses somehow on a “language”. For the last ten years we have been trying to help people understand that computer science is more than programming. Programming/coding is to computer science as the multiplication table is to mathematics, a critical tool but certainly not the entire discipline.
  5. If this new course is going to be a “language” course, we have to pick a language (just one). And so the programming language wars begin.

I agree that computer science should count as a math or science credit, not a foreign language credit.  CS and foreign language use different mental skills. For one thing, the “grammar” taught in programming languages is deliberately very much simpler than natural language grammars—so much so that there is little transference between learning one and learning the other.  Also, foreign language courses include a lot of memory work (particularly vocabulary, but also conjugation and declension patterns in many languages), while beginning computer science courses are more about learning how to design and debug—that is, to develop problem-solving skills, not memory skills.

Foreign language instruction is also very important, and replacing it with computer science would not be helpful for the cultural understanding and ease of relations between different countries that is an important reason for teaching foreign languages. Decreasing foreign language to make room for computer science is short-sighted.

The most sensible classification for CS (if it needs to be classified in the narrow categories that guide secondary school administrators) is with math.  The reasons for learning CS are much the same as the reasons for learning algebra, both in terms of the underlying set of mental skills that one hopes would transfer to other fields (but usually don’t) and the usefulness as a base for future study in STEM fields.

I think that the claim “Programming/coding is to computer science as the multiplication table is to mathematics” is a bad analogy.  Closer would be “Programming/coding is to computer science as algebra is to mathematics”.  Programming is a much larger and more complicated skill than multiplication, and it underlies most of the rest of computer science. Multiplication is primarily a memory skill (multiplication tables and a simple algorithm), while programming is primarily a problem-solving skill.

I also agree with the concern that making CS a “foreign language” skill will put even more pressure on high schools to adopt a common programming language.  The College Board AP test has already blessed Java, which I think is a poor choice for a first teaching language (though I know that many computer science professors like it as a first language, for many of the same reasons I dislike it).  I prefer Scratch followed by Python, as I’ve explained before on this blog.

Standardizing the way computer science is taught would be a moderately bad thing, as there is no “one true path” that produces great programmers and computer scientists. I see value in having a variety of different pedagogical approaches to teaching programming (and computer science), as the first few programming languages one learns tend to color the way one thinks about programs for several years.  I believe that a diversity of different approaches to programming is important to the health of computer science both as an academic pursuit and as an industry—and different initial programming experiences are as important to that diversity as different people are.

Note: I tried to post a comment on the original post, but I kept getting

Server error!

The server encountered an internal error and was unable to complete your request.

Error message:
Premature end of script headers: mt-comments.cgi

If you think this is a server error, please contact the webmaster.

Error 500

You’d think that the Association for Computing Machinery could keep a blog running, but I guess their disdain for mere “programming” extends to the programmers who set up their website.

2014 January 1

Technical entitlement—is it a thing?

Filed under: Uncategorized — gasstationwithoutpumps @ 18:49

I learned a new buzzword yesterday: “technical entitlement”.  I encountered the phrase on the blog post On Technical Entitlement, though apparently Tess Rinearson originally wrote it in June 2012 and also published it elsewhere.

I’m the granddaughter of a software engineer and the daughter of an entrepreneur. I could use a computer just about as soon as I could sit up. When I was 11, I made my first website and within a year I was selling code. I took six semesters of computer science in high school, and I had two internships behind me when I started my freshman year of college.

Despite what it may seem, I’m not trying to brag—seriously. I’m just trying to prove a point: I should not be intimidated by technical entitlement.

And yet I am. I am very intimidated by the technically entitled.

You know the type. The one who was soldering when she was 6. The one who raises his hand to answer every question—and occasionally tries to correct the professor. The one who scoffs at anyone who had a score below the median on that data structures exam (“idiots!”). The one who introduces himself by sharing his StackOverflow score.

That’s technical entitlement.

“Technical entitlement” seems to be the flip side of “imposter syndrome”. In imposter syndrome, competent people question their own competence—sometimes giving up when things get a little difficult, even though an outside observer sees no reason for quitting.  “Technical entitlement” seems to be blaming those who have both competence and confidence—as if it were somehow deeply unfair that some people learned things before others did.

Certainly some things are unfair—as an engineering professor I’ve been able to provide opportunities for my son to  learn computer science and computer engineering that would not be available to a parent who knew nothing about those fields.  And some of the characteristics she lists would apply to my son—I can see him correcting his professors, and although he’d never introduce himself by sharing his StackOverflow score, he did include it in some of his college essays, as evidence that he was knowledgeable and interested in sharing what he had learned.

But Tess Rinearson goes on to say

It starts with a strong background in tech, often at a very young age. With some extreme confidence and perhaps a bit of obliviousness, this blooms into technical entitlement, an attitude characterized by showmanship and competitiveness.

While my son has confidence in his abilities and “perhaps a bit of obliviousness”, neither showmanship nor competitiveness is a big factor in his behavior.  I think that Ms. Rinearson has confused a personality trait and stereotypical US male behavior (showmanship) with early technical education. I see the arrogance as a bad thing, but the early technical education (which she herself had) as a good thing.

The rest of her post goes on to talk about ways that Amy Quispe and Jessica Lawrence managed to increase participation (particularly by women) in tech events.  But the analysis there really addresses imposter syndrome more than it does “technical entitlement”.  She quotes Jessica Lawrence: “There is,” she said, “an under-confidence problem.”  But Ms. Rinearson then says

Sound familiar? Yep, it’s exactly the kind of self-doubt that can arise when there are so many technically entitled people around.

Somehow blaming “technically entitled people” for the under-confidence of others seems to be imposing blame where none is warranted.

Now imagine someone starting out as a college student taking their first CS course. Imagine how the technical elite make them feel.

I can understand someone being intimidated when entering a new field if they are surrounded by people more skilled in the field—but that is hardly the fault of those who are skilled.  Newcomers anywhere are going to feel out of place, even when people are trying to welcome them. The “technical elite” are not making the newcomers feel intimidated.

If Ms. Renearson’s point is that some of the tech communities are not sufficiently welcoming of newcomers, I agree.  I’ve seen snarky comments in places like Stack Overflow that offered gratuitous insults rather than assistance.

But Ms. Rinearson seems to assume that anyone who is more experienced than she is automatically trying to put her down, and that this is the way that everyone should be expected to feel.  When one starts with that assumption, there is no remedy—no matter what those more experienced or more skilled do, they will be seen as threatening.

Perhaps she has not identified those who should be getting blamed precisely enough.  I don’t think that it is “The one who was soldering when she was 6” who is a problem, but those who refuse to give children an opportunity to learn (no public school in my county teaches computer science, except one lottery-entry charter) or who force students who’ve been programming for 6 years into the same classes as those who have never programmed, as many college CS programs do, providing no way for more advanced students to skip prerequisites.

Unfortunately, identifying the problem as being “technical entitlement” makes the problem worse not better, as it encourages public schools to suppress the teaching of technical subjects, rather than expanding them.

If she means to attack the arrogant culture of “brogrammers”, mean-spirited pranks, and other unpleasant culture that has emerged, then I support her, as I’m not happy with some of the culture I see either.  But don’t blame it on the kids who learned tech early, nor on the parents who taught them—the late-comers are more likely to be the arrogant bastards, since that arrogance is mainly a defense mechanism for incompetents.  The competent tech people are much more likely to be eager to share their enthusiasm with newcomers and help them join in the fun.

2013 November 12

Grading programming assignments

Filed under: Uncategorized — gasstationwithoutpumps @ 09:48

In Critiquing code, I mentioned that I spend a lot of time reading students’ code, particularly the comments, when grading programming assignments.  For the assignment I graded this weekend, I did not even run the code—I graded the students entirely on the write-up and the source code.

The assignment is one that involves writing a simulation for a couple of generative null models, and gathering statistics to estimate the p-value and E-value of an observed event.  Running their code would not tell me much, other than detecting gross failures, like crashing code.  The errors that occur are subtler ones that involve off-by-one errors, incorrect probability distributions for the codon generator, or not correctly defining the event that they are counting the occurrence of.  These errors can sometimes be detected by looking at the distribution of the results (particularly large shifts in the mean or variance), but not from looking at a small sample.  The students had available plots of results from my simulations, so they could tell whether their simulation was providing similar results.
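The core of such a simulation is small. Here is a minimal sketch of estimating a p-value by simulation; the null model shown (counting “ATG” triples in a uniform random 300-base sequence) is a made-up stand-in of my own, not one of the course’s actual models:

```python
import random

def empirical_p_value(observed, simulate_statistic, trials=10_000):
    """Estimate P(statistic >= observed) under a generative null model.

    simulate_statistic: zero-argument function returning one draw of the
    test statistic under the null.  The +1 correction keeps the estimate
    strictly positive, since a finite number of trials never justifies
    reporting a p-value of exactly zero.
    """
    hits = sum(simulate_statistic() >= observed for _ in range(trials))
    return (hits + 1) / (trials + 1)

rng = random.Random(0)  # seeded so runs are reproducible

def atg_count():
    """One null draw: occurrences of 'ATG' in a uniform random 300-base sequence."""
    seq = ''.join(rng.choice('ACGT') for _ in range(300))
    return seq.count('ATG')

p = empirical_p_value(10, atg_count, trials=2000)
```

The E-value of the event is then just this p-value scaled by the number of opportunities for the event to occur (for example, the number of sequences scanned). The subtle bugs live in exactly these few lines: an off-by-one in the sequence length or a biased generator shifts the whole simulated distribution.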

So I read the write-ups carefully, to see if the students all understand p-value and E-value (this year, sadly, several still seem confused—I’ll have to try again on improving their understanding), to check whether the distributions the students plotted matched the expected results from my simulations and previous years’ students, and to see whether the students explained how they extracted p-values from the simulations (only a couple of students explained their method—most seem to have run a script that was available to them without even reading what the script did, much less explaining how it worked).

Whenever I saw a discrepancy in the results, I tried looking for the bug in the student code.  In well-written, well-documented code, it generally was fairly easy to find a bug that would explain the discrepancy.  In poorly written, poorly documented code, it was often impossible to figure out what the student was trying to do, much less where the code deviated from the intent. Even when the results appeared to be correct, I looked for subtle errors in the code (like off-by-one errors on length, which would have been too small to appear as a change in the results).

There was only one program so badly written that I gave up trying to figure it out—the student had clearly been taught to do everything with classes, but did not understand the point of classes, so he turned every function into its own class with a __call__ method.  His classes mostly did not have data in the objects, but kept the data in namespaces passed as arguments.  The factoring into classes or functions bore little resemblance to any sensible decomposition of the problem, but had arbitrary and complex interfaces. The result looked like deliberately obfuscated code, though (from previous programs by this person) I think it represented a serious misunderstanding of an objects-first approach to teaching programming, rather than deliberate obfuscation.  I instructed the student to redo the code without using any classes at all—not an approach I usually take with students, but the misuse of classes was so bad that I think that starting over with more fundamental programming concepts is essential.
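A minimal reconstruction of that anti-pattern—a toy example of my own, far tamer than the actual submission: a stateless class whose only purpose is to be called once, fishing its data out of a passed-in namespace, next to the plain function it should have been:

```python
# Anti-pattern: a "class" with no state, used purely as a callable,
# with its inputs hidden inside a namespace dictionary.
class ReverseComplement:
    def __call__(self, namespace):
        seq = namespace['seq']
        return seq.translate(str.maketrans('ACGT', 'TGCA'))[::-1]

# The sensible decomposition: a plain function with explicit parameters.
def reverse_complement(seq):
    """Return the reverse complement of a DNA string (A<->T, C<->G)."""
    return seq.translate(str.maketrans('ACGT', 'TGCA'))[::-1]

# Both compute the same thing, but the class version hides the interface:
assert ReverseComplement()({'seq': 'ATGC'}) == reverse_complement('ATGC') == 'GCAT'
```

Classes earn their keep when objects carry state between calls; wrapping a stateless computation this way only obscures what the code needs and what it returns.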

Some of the students are now getting fairly good at documenting their code—I don’t seem to have any superstar programmers this year, but there are a couple of competent ones who are likely to do good work on their theses (assuming they don’t discard the documentation habits I’m enforcing in this course).  Some of the students who started out as very poor programmers are beginning to understand what it takes to write readable, debuggable code, and their programs have improved substantially. Many of the students have figured out how to separate their I/O, their command line parsing, and their computation into clean separate parts of their code, without sacrificing much efficiency. Even several who were writing very confusing code at the beginning of the course have improved their decomposition of problems, simplified their style, and improved readability of their code enormously.

In one comment on Critiquing code, “Al” commented “I would guess it has more to do with grit and the ability for a student to stick with a tough problem to find a solution.” I rejected that interpretation in my reply: “some of the worst programmers are putting in the most effort—it is just not well-directed effort. The assignments are supposed to be fairly short, and with clean design they can be, but the weaker programmers write much longer and more convoluted code that takes much longer to debug. So ‘grit’ gets them through the assignments, but does not make them into good programmers. Perhaps with much more time and ‘grit’, the really diligent students could throw out their first solutions and re-implement with cleaner designs, but I’ve rarely seen that level of dedication (or students with that much time).”

Now I’m not so sure that my reply was quite right. Some of the biggest improvements seem to be coming from students who are working very hard at understanding what makes a program good—when I complain about the vagueness of their variable descriptions or their docstrings, they improve them in the next assignment, as well as redoing the programs that elicited the feedback. But some of the students are falling behind—neither redoing assignments which they got “redo” on, nor keeping up with the newer assignments.  So there may be some merit to the “grit” theory about who does well—it isn’t predictive for any single assignment, but it may help distinguish those who improve during the course from those who stay at roughly the same level as they entered the course.

2013 November 8

Critiquing code

Filed under: Uncategorized — gasstationwithoutpumps @ 22:57

In The Female Perspective of Computer Science: Why Arts and Social Science Needs Code: Testimonials, Gail Carmichael continues her “Why are we learning this?” guide for arts and social science students with “a set of testimonials from people in the field that learned to code.” I’ve pulled out a little piece of one testimonial here:

Emily Daniels, Software Developer and Research Analyst, Applied Research and Innovation at Algonquin College

As an artist you probably already have a thick skin developed by years of crits where others continually tear down your work and expect you to pick up the pieces. This will prepare you for similar responses to your programs and is also immensely useful in software development. It seems from my experience that most computer studies programs don’t spend nearly enough time preparing people to respond well to negative or constructive feedback of their work. [emphasis added] It would benefit a lot of developers to be able to take criticism in stride like an artist can, so if you can, you are ahead of the game.

I think that this is an important critique of many engineering programs, and of computer science programs in particular. Many students finish CS programs with high grades but are still unable to write good programs.  A big part of the problem is that no one has ever looked at their programs—certainly not critically, with an eye toward making the students better programmers through pointing out things that they have still not mastered. I think that CS may need to have more of the sort of criticism that a good studio art class or writing circle has: strong feedback about what needs improvement tempered with some praise for what is good. (I’m not a proponent of the “3 good for 1 bad” school of ego-stroking—that approach provides a lot of emotional support but very little improvement in performance.)

I try to provide strong feedback in my first-year grad course in bioinformatics, where I require eight programming assignments and two writing assignments.  The prior programming experience of my students varies, from students who’ve had just two introductory programming courses to students who have had BS degrees in computer science and 25 years as programmers in industry.  As a general rule, more experience in programming results in more competence at the programming assignments, and those with CS degrees do better than those without, but the differences are not as large as one would expect. I’ve had students who had earned straight As in several previous programming classes but who could not produce adequate programs even for the “warm-up” assignment in three tries (with extensive feedback on the first two).  I’ve also had students who had only one or two prior programming classes work very hard and produce adequate (though not stellar) programs, even on the more difficult assignments near the end of the course.

For many of the students in my course, I am the first person to read their code, no matter how many previous programming courses they have had.  I’m often also the first person to give them feedback on their variable names, comments, and docstrings—the things that the compiler ignores but which are crucial for anyone trying to understand the code.  Many of the students have no idea how to write a program that can be read by someone else (or even by themselves in six months), because they have never written anything but throwaway code, and no one has taught them the difference between good programming practice and throwaway programming.

What are the differences between the really good programmers, the adequate programmers, and the poor programmers?  Is there any way to predict who will rise to the challenge and who will fall flat?

The really good programmers seem to be the smartest (they also do very well on the written work that does not include programming), and they are very good at decomposing problems into sensible subproblems that can be clearly described and independently implemented and tested.  The cleanness of their problem decomposition leads to clean data structures and simple functions that are easy to document.  I believe that many of them write their docstrings (defining what the functions are supposed to do) before writing the code that implements it.  Most of the top programmers have had a lot of previous programming experience, but not everyone with a lot of experience turns out to be a good programmer.  I don’t think I end up teaching the good programmers very much about programming—mainly I reinforce habits that they might otherwise have been tempted to let slide, since no one else seemed to care.

The adequate programmers produce fairly reasonable problem decompositions, slightly awkward data structures, and code that is almost right.  They often debug their programs into existence, messing up on subtle boundary conditions.  Their variable names tend to be vague, giving some indication of what is in the variable, but not a precise indication of its meaning.  They often make mistakes that result from interpreting a variable one way in one part of the code, but slightly differently in a different part of the code.  Their documentation seems to be an attempt at explanation after their code is finished and more or less working—it provides them no help during the time they spend debugging, which is most of their time. Their programs are often several times longer than the programs by the good programmers, because they add a lot of unnecessary special-case code to compensate for awkward program decomposition or data structures.

Many of the adequate programmers can become good programmers, with some help in learning how to decompose problems more cleanly.  One of the best ways I can think of for getting them to improve is to have them write their docstrings before they code the functions, to provide terse but fairly complete external descriptions of those functions.  If they can’t come up with terse, clean descriptions, then they’ve probably done the decomposition wrong, and a little more time spent on thinking about the problem at a high level will probably save them a lot of debugging time later on.  Getting them to focus on the precise meaning of their variables and data structures and getting them to think about edge conditions probably has a lasting impact on their programming ability. I see enormous improvements in some of these students over the 10 weeks I have them in class, and I like to think that the huge amount of time I spend on providing feedback has a lasting effect, not just a do-it-for-the-class-but-never-again effect.
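As an illustration of the docstring-first habit—using a hypothetical k-mer-counting function of my own invention, not an actual assignment—the docstring below pins down the contract, the edge conditions, and the precise meaning of each name before any implementation exists:

```python
def kmer_counts(seq, k):
    """Return a dict mapping each length-k substring (kmer) of seq to the
    number of times it occurs, counting overlapping occurrences.

    Requires 1 <= k <= len(seq); a seq of length n yields n - k + 1 kmers.
    """
    counts = {}
    for start in range(len(seq) - k + 1):
        kmer = seq[start:start + k]   # kmer is the length-k string, not a length
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts
```

Writing those two docstring sentences first forces the decisions (overlapping or not? what happens at the ends? what exactly is returned?) that otherwise surface later as bugs during debugging.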

The poor programmers decompose the problem in random ways, often with highly inappropriate data structures (like copying an I/O format as an internal representation).  The awkward decomposition results in functions that cannot be tersely described, since the entire environment in which the function is embedded has to be just-so for the function to have any meaning at all.  Their variables usually have meaningless names (like “flag”, “i”, or “args”) or highly misleading names (like “kmer” for the length of a string, rather than for a length-k string).  Their code is often poorly tested, crashing on standard use cases or obvious boundary conditions.  In many cases it looks like the students have tried to “evolve” their code—making random mutations to the code in the hope of increasing its fitness.  As in biology, most mutations are deleterious, so this approach to programming only works if you make millions of tries and use very strict selection criteria.

I think that many of the poor programmers who have had several programming courses have only worked on “scaffolded” assignments, where they fill in the blanks on programs that someone else has decomposed for them.  They have never decomposed a problem into parts themselves, and have no idea how to go about doing it.

I don’t know how to convert the poor programmers into adequate programmers—I can point out their problems to them and suggest better decompositions or better data structures, but I don’t know whether they will learn from that. Many can take specific suggestions (about variable names or data structures) and implement the changes, but there does not seem to be much transference to the next problem, where they again do random decompositions of the problem and use meaningless variable names.  In many cases, I think that the muddiness of their code reflects muddiness of their thinking (their writing in English is often similarly disordered and confused).  If they are new to programming, there is hope that with practice they will learn to think more precisely, but if they have been programming for a while and are still flailing, I have no idea how to help them.  Luckily I don’t get many really poor programmers—most tend to drop after the first couple of assignments, realizing that their usual programming style is not going to get them through the course.

I have had students switch from being poor programmers to adequate programmers (in one case after failing the course three times and succeeding on the fourth try), but I don’t think I can take any credit for the improvement—I didn’t do anything differently on the fourth try than on the previous three. I’ve also had one student fail four times, so it isn’t just that I give up and pass students who aren’t doing the work.

I don’t generally fail students until near the end of the course—when earlier work is not up to passing quality, the grade I give is “REDO”.  I expect students to redo the work, fixing the problems that I’ve identified, and resubmit it.  Many students do learn from the redone assignments, and start turning in adequate work the first time. For many of the weaker students, it may take more than one round of “redo” before their work is of high enough quality. A few never seem to get it, and make the same types of mistakes in assignment after assignment.

After about three assignments, I can tell pretty much how well the students will do for the rest of the course.  The top programmers on the first three assignments will continue to do well, often improving their coding in minor ways as they pick up the little bits of feedback I can provide them.  The adequate programmers who are striving to become good ones and the adequate programmers who are content to stay at their current skill levels are also evident.  I’d like to spend the most time on feedback for the adequate programmers striving to become better—they are the ones most likely to benefit (they are also usually the majority of the class).  In practice, though, I spend most of my grading time on the bottom of the class, trying to figure out what is going on in really unreadable code.  I am sometimes tempted to triage the grading, with little time spent on the good programmers or the hopeless ones, and if the class were much larger I’d have to do that, but so far I’ve been trying to provide useful feedback to everyone.

I have yet to find any good way to predict who will do well before I’ve read the first two programs.  Number of CS courses, grades, or years of programming experience are only weak predictors.  I suspect that I might get more useful predictions from SAT scores or IQ tests measuring general intelligence, but I don’t have access to that information.
