One of the memes I’ve seen running through physics teacher blogs lately is teaching physics with computational modeling. The *Matter and Interactions* text is often mentioned (for example, by John Burk in “My favorite texts: Matter and Interactions”). I’ve previously posted wondering whether it would be a good way for my son (and me) to learn physics, since he already loves to program in Python.

Today, Mark Guzdial announced on his blog results from a new Ph.D. thesis addressing a related question: whether students in general learned more physics from a computational modeling physics class or a traditional one. The results are summarized in the blog title: Adding computational modeling in Python doesn’t lead to better Physics learning: Caballero thesis, part 1.

The details of the experiment are not in the blog post, so I don’t know such important things as how well the classes were matched for student demographics and teacher competence. Usually in education research the matching of student demographics is relatively easy, but controlling for things like teacher experience and competence is very hard. Since the goal was to measure how good a particular curriculum is (not how good a particular teacher or student is), controlling for teaching skill is essential, but nearly impossible.

Here’s the dilemma: a physics course based on computational modeling requires a different set of knowledge and skills than a traditional physics course. The teacher not only has to know physics and how to teach it, but also has to know how to program and how to teach programming. Thus the computational-modeling curriculum inherently requires more of the teacher.

Matching skill at teaching physics (say, by taking physics teachers with comparable results in traditional classes) is not enough. One approach that could be tried is to have the same teacher teach parallel courses, one using a computational-modeling approach and the other a traditional approach. Of course, the teacher then tends to do better with whichever method they prefer, and the results generally just confirm whatever bias the teacher had initially.

The usual approach in such cases is to do many replicates of the experiment, with different teachers and different groups of students, in the hopes that the errors due to uncontrollable variables will be independent and average out. It is rare, though, that a PhD student can set up and fund such a huge study, and even rarer that a curriculum seller would do so (why risk showing your product is not as good as you think, when you only need a tiny, easily-manipulated study to sell things as research-based?). So, without even having seen the thesis I already have some doubts about how well the results generalize.

Comparing the computational modeling course to a traditional course has another problem: the students have all the prereqs for a traditional course (calculus, in this case), but have no prior experience in programming. This is like comparing a calculus-based course with an algebra-based course using students who have never taken calculus, then complaining that the students in the calculus-based course learned less physics because the teacher had to spend so much time on calculus fundamentals.

OK, so I’ve dumped on the research methods of the education experimenters without even having seen the thesis, but not told you even the little that Mark Guzdial summarized from Caballero’s thesis. Basically, students were tested on the Force Concept Inventory (FCI) after taking either a traditional physics class or a class using *Matter and Interactions*, and the students in the traditional course did a lot better. An analysis was done of what was taught in the courses, and the reason for the difference made sense: the traditional course spent a lot more time on the topics tested on the exam, while the computational modeling course spent a lot of time on teaching how to write simulations, which was not tested by the FCI.
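For readers who haven’t seen such a course, the simulations students spend time writing are typically short momentum-update loops. A minimal sketch of the idea in plain Python (a hypothetical simplification on my part; *Matter and Interactions* actually uses VPython, which adds 3D visualization on top of the same loop):

```python
def simulate_spring(m=1.0, k=1.0, x0=1.0, dt=0.001, t_end=3.14159):
    """Euler-Cromer integration of a mass on a spring.

    This is the momentum-principle update pattern taught in
    computational-modeling courses: compute the net force, update
    the momentum, then update the position, and repeat.
    """
    x, p = x0, 0.0          # position (m), momentum (kg*m/s)
    t = 0.0
    while t < t_end:
        F = -k * x          # net force: Hooke's law spring
        p += F * dt         # momentum update: dp = F * dt
        x += (p / m) * dt   # position update: dx = (p/m) * dt
        t += dt
    return x

if __name__ == "__main__":
    # With m = k = 1, the period is 2*pi, so after pi seconds the
    # mass should be near -x0.
    print(simulate_spring())
```

The point of such exercises is that the same three-line update loop works for any force law, which is the generality the M&I approach is trying to teach and the FCI does not test.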

So far as I can tell, the students were not assessed afterwards for their programming ability, though Mark Guzdial expresses some doubt that they achieved much skill there either.

Of course, nothing in this study addresses my personal question: “Would the *Matter and Interactions* book be a good one for my son and me to invest some time in?” Neither of us is anywhere near typical of the students in physics classes.

One study that would be more relevant for me (though perhaps not of general enough interest for anyone to actually do it) would be to repeat Caballero’s experiment but with classes consisting entirely of computer science majors who had already had at least a year of programming classes. That would address the question of whether computational modeling is a better way to teach physics if the students already know how to program. Just as physics has math prerequisites, so that physics teachers don’t have to spend all their time teaching algebra and calculus, perhaps physics needs to have computational prerequisites as well.

Of course, it could just be that the computational modeling people are fooling themselves, and that even with students who already know how to program, less physics is learned that way. That seems unlikely to me, but not entirely impossible.

This isn’t the same as the general field of “modeling” instruction, is it?

http://modeling.asu.edu/

(It seems like it’s more specifically about using computational modeling & computers in physics instruction.)

Comment by leelabug — 2011 July 29 @ 10:53

I believe that the Modeling Method and *Matter and Interactions* have the same pedagogical approach, but the ASU stuff is for algebra-based high-school physics, and *Matter and Interactions* is for calculus-based college physics. I don’t know why the traditional approach worked better for the calculus-based physics and modeling worked better for high-school physics. It may be experimental artifact, or it may be a real difference.

Comment by gasstationwithoutpumps — 2011 July 29 @ 13:51

It would be interesting to test a combined computation and mathematics class also. My son is taking the AoPS intro to Python class this session, and has programmed a quadratic equation solver and a standard deviation calculator, among other things. I feel that he has a deeper understanding of the underlying mathematics as a result; for example, for the quadratic equation he had to figure out what combinations of a, b, c would test the logic for no real roots, one real root, and getting larger vs. smaller absolute values of x.
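The discriminant logic the exercise above exercises might look something like this (a hypothetical reconstruction; the actual AoPS assignment details aren’t in the comment):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x**2 + b*x + c = 0 as a list."""
    disc = b * b - 4 * a * c   # the discriminant decides the three cases
    if disc < 0:
        return []                       # no real roots
    if disc == 0:
        return [-b / (2 * a)]           # one repeated real root
    r = math.sqrt(disc)
    return [(-b - r) / (2 * a), (-b + r) / (2 * a)]  # two real roots

# Test inputs chosen to hit each branch, as the comment describes:
print(solve_quadratic(1, 0, 1))    # disc < 0: no real roots -> []
print(solve_quadratic(1, -2, 1))   # disc == 0: one root -> [1.0]
print(solve_quadratic(1, 0, -4))   # disc > 0: two roots -> [-2.0, 2.0]
```

Choosing coefficients that land in each branch is exactly the kind of reasoning about the discriminant that the programming exercise forces, which is the mathematical payoff being described.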

BTW, the Python instructor with AoPS mentioned that in addition to the Game Programming class already scheduled for fall, they are planning on running a new Data Structures class taught in Python in the spring.

Comment by Yves — 2011 July 29 @ 11:17

My son is already fairly proficient with Python, so the first programming course they offered was not of much interest to him, but a data structures course would be valuable for him, as he has not had much experience with structured types. We’ll watch for that course, since he likes the AoPS format. (Originally, I’d planned for him to take data structures at the university, which might still be a good choice at some point, as he hasn’t learned Java yet.) For this fall, he’ll be taking the AoPS Calculus class.

Comment by gasstationwithoutpumps — 2011 July 29 @ 13:29

There are two questions here: 1) Can your son benefit from M&I? and 2) How effective is M&I compared to the traditional curriculum?

Based only on your son’s lab report from 6th(?) grade that you shared a while ago, I imagine that if he were to take the FCI, he would already score very well, so I would not worry about him failing to develop into a Newtonian thinker in almost any decent physics course. The FCI is really designed to be a measure of conceptual understanding of mechanics, and what makes it so vexing is that even though a student can get an ‘A’ on a traditional college-level mechanics final exam with very difficult problems requiring calculus and lots of tricks, they can still utterly bomb this “simple” multiple-choice test that contains questions like “if a small car collides with a large truck, the force that the truck exerts on the car will be (a) bigger than (b) smaller than (c) the same as the force the car exerts on the truck.”

I’m not trying to denigrate the FCI: it is a great test, but it isn’t testing any of the “larger” ideas about computational modeling that M&I is trying to teach. One thought I had is that if somehow we could do a better job of teaching introductory physics in high school, and get students to adopt a Newtonian worldview before they get to college-level physics, then the successes of the M&I approach might be more apparent. I also think we need to come up with ways of assessing “computational thinking” and of measuring its value to physics problem solving.

I really appreciate all your thoughts about how to design a good experiment in educational research. It’s something I’ve been thinking about a lot as I’m beginning to try to follow up on some of Danny’s work with my high school students.

But regardless of whether M&I is a superior way to educate college physics students, it would seem to me that well prepared students, especially those with an interest in programming, would be well served to take a look at its approach.

Comment by John Burk — 2011 July 30 @ 08:41

I agree that what the FCI is testing is not a good fit for what M&I is teaching. Is there a more appropriate test that could be used, but which would be accepted by both traditional and computational modeling advocates as really testing the physics and not some other construct (like “computational thinking”)?

No physics teacher would want their students coming out of the class doing badly on the FCI, even if their real goal is a much higher level of skill. Making good performance on the FCI a prereq for college-level physics makes some sense, but someone has to get the students to that point. How is that best achieved?

I’m still confused about why one experiment (mentioned on the ASU website) with computational modeling for high-school physics got better results on FCI and this new (not yet published) experiment got worse results. What is the important difference in the experiments? Is either result replicable? Is some uncontrolled variable far more important than traditional vs. computational?

Comment by gasstationwithoutpumps — 2011 July 30 @ 08:57

As far as I know, there is no good test of physics that might show whether or not understanding computational thinking leads to a deeper understanding of physics. It’s something that I’ve been discussing with Danny and some of the members of his research group.

You’re right, no physics teacher would want bad FCI scores. And though I don’t have all the research, there’s a bunch of stuff out there about how reformed instruction methods (like peer instruction) and whole curriculum redesigns (like Workshop Physics, which also included instructional redesign) do achieve higher FCI scores than traditional lecture. There’s also a lot of evidence that in high school, students who study physics using the modeling approach outperform peers in traditional instruction. If I can find the time, I’ll try to dig up some links.

I’m not familiar with the experiment you mentioned on the ASU website, and couldn’t find it with a quick search.

Comment by John Burk — 2011 July 30 @ 09:09

I’ve not followed the links all the way, but http://modeling.asu.edu/modeling/Mod_Instr-effective.htm claims to point to research showing the effectiveness of modeling-based instruction.

Comment by gasstationwithoutpumps — 2011 July 30 @ 09:46