Gas station without pumps

2013 July 3

In defense of programming for physics and math teachers

Filed under: Uncategorized — gasstationwithoutpumps @ 09:14

In response to a comment I made on his blog, Mark Guzdial wrote

I am in complete agreement that computing should really be taught within teachers’ disciplines, such as math or physics. Computing is a literacy. We write and do mathematics in science. We should also do computing in science.

Current constraints make that hard to get to.

  • Why should mathematics or physics teachers want to use computing? It’s harder (in the sense that it’s something new to learn and use), and it doesn’t help them with their job. Remember the posts I did on Danny Caballero’s dissertation? Computing does lead to mathematics and physics learning, but learning different from what currently gets tested on standardized tests. Why should the people who make up those tests change? To draw more people into computing? Recall how much luck we had getting CS into the new science education frameworks.
  • Who would pay for it? We can get Google to pay for more high school teachers to learn CS — that leads to more computer scientists that they might hire. We can get NSF’s CISE directorate to pay for CS10K — that leads to more CS workers and researchers. Who pays for math and physics teachers to learn computing, especially when learning computing doesn’t help them with their jobs?
  • Finally, in most states, computer science is classified as a business topic. Here in Georgia, the Department of Education announced that only business teachers could teach computer science. The No Child Left Behind (NCLB) Act requires teachers to be “highly qualified” in a subject to teach it, so if CS is classified as business, it makes sense (to administrators who don’t understand CS) that only business teachers are highly qualified to teach it. Barbara Ericson fought hard to get that changed, since some of our best CS teachers are former math and science teachers (who date back to before CS became classified as business). I don’t know whether, in other states, math and physics teachers are disallowed from teaching CS.

It’s a big, complicated, and not always rational system.

That the system is big and irrational is not news to anyone, and the Georgia Department of Education may be about as silly as Departments of Education get.  I have no idea how to fix dysfunctional government bureaucracies, though, so I won’t comment further on that point.

But I disagree on a couple of things:

  • Learning to use programming effectively can help physics and math teachers do their jobs better.
  • Companies like Google and federal agencies like NSF will pay for teachers, not just straight CS teachers, to learn computational methods.

For the first point, I’m going to have an uphill battle convincing Mark, because he has on his side a carefully done research study (Danny Caballero’s PhD dissertation), and I don’t have four years of my life to spend working full time on the question.

I read Mark’s posts about Caballero’s dissertation; I even wrote about them when they first came out (and started another draft post, but abandoned it).  I agree that Caballero’s results are not encouraging, but I don’t believe that a single experiment at two sites decides the issue for all time. Caballero showed that a couple of mechanics courses that taught physics using Matter and Interactions did not spend enough time on the concepts of the Force Concept Inventory (a small but important subset of the concepts of a first physics course), and so students did not learn as much on those topics as in a traditional class. He also showed that students made typical programming errors reflecting poor understanding of both physics and programming, and that students had less favorable attitudes toward computational modeling at the end of the course than at the beginning. The programming errors Caballero found were also typical of the errors seen after a first programming course—if we can’t teach students to avoid those errors when an entire course is focused on programming, it is not surprising that a physics course in which programming is a small add-on also produced students who can’t program well.

Caballero’s thesis study was pretty convincing that those implementations of the intro physics course using computational approaches were not very successful at teaching the concepts of the Force Concept Inventory. I’m not convinced that the problems are inherent to using computational approaches to teach physics, though—just that these courses had not yet been optimized.  It is indeed possible that Mark’s conclusion (computing doesn’t help teach physics or math) is true, but I think that is too big a generalization from Caballero’s results.
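For readers who haven’t seen the Matter and Interactions approach, its computational core is a short momentum-update loop (the course uses VPython; the plain-Python mass-on-a-spring below is my own illustration, not code from the courses Caballero studied):

```python
# Sketch of a Matter-and-Interactions-style update loop: repeatedly apply
# the Momentum Principle (dp = F_net * dt), then move the object.
m = 0.1            # mass in kg (illustrative value)
k = 4.0            # spring constant in N/m (illustrative value)
x, p = 0.05, 0.0   # initial position (m) and momentum (kg*m/s)
dt = 0.001         # time step in seconds
t = 0.0

while t < 2.0:
    F = -k * x            # net force on the mass (Hooke's law)
    p = p + F * dt        # Momentum Principle: p_final = p_initial + F*dt
    x = x + (p / m) * dt  # position update using velocity p/m
    t += dt

# The loop should track the analytic solution A*cos(sqrt(k/m)*t).
print(f"x(2 s) = {x:.4f} m")
```

Writing and modifying loops like this is what “computational modeling” means in these courses, so the real question is whether the time spent on the loop pays off in physics understanding.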

Note that an earlier paper on which Caballero was an author showed that the M&I students had better gains than students in a traditional course on the BEMA test (the Brief Electricity and Magnetism Assessment—electricity and magnetism, rather than the mechanics topics of the FCI). So even Caballero’s results are not as uniformly negative as Guzdial paints them.

Personally, I liked the Matter and Interactions book, and I think that its approach helped me and my son learn physics better than we otherwise would have, but we’re hardly the typical audience for a first calculus-based physics course, so I don’t want to generalize too much from our experience either—Caballero’s results (positive and negative) come from thousands of typical students, not two very unusual ones.

There are currently teachers in both physics and math looking at programming as a way both to motivate students and to teach physics and math better.  The spread of the ideas in the community is slow, because the teachers are getting little support, either from fellow math and physics teachers or from the computer science community.  People like Mark say “some of our best CS teachers are former math and science teachers”, but also say “it doesn’t help them with their job.”

Teaching physics and math teachers to program can help them do their jobs better—even if they don’t teach programming to students! There are other ways that programming helps them—for example, Matt Greenwolfe spent a lot of time programming Scribbler 2 robots to be better physics lab tools than the usual constant-velocity carts. Other physics teachers are doing simulations, writing video analysis programs (I contributed a little to Doug Brown’s Tracker program), improving data logging and analysis programs, and so forth.  A lot of math teachers are using GeoGebra to make interactive geometry applets (and, more rarely, to have students do some programming in GeoGebra).
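As a small (entirely invented) example of the data-analysis side: a teacher who can program can turn the position-vs-time table that a video-analysis tool like Tracker exports into a velocity estimate with a few lines of Python:

```python
# Fit a constant-velocity model x = v*t + x0 to position-vs-time data,
# the kind of table a video-analysis tool exports. The data are made up.
import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])        # time in seconds
x = np.array([0.02, 0.13, 0.24, 0.36, 0.45, 0.57])  # position in meters

v, x0 = np.polyfit(t, x, 1)     # least-squares slope and intercept
residuals = x - (v * t + x0)    # how far each point is from the fit
print(f"v = {v:.3f} m/s, x0 = {x0:.3f} m, RMS residual = {residuals.std():.4f} m")
```

Nothing deep is happening here, but a teacher comfortable writing this sort of thing can adapt it to whatever lab the class is actually doing.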

As for my second point, there are already many corporate and federal programs to try various ways of improving STEM teaching (the CS portion of that is actually tiny).  To convince them to spend some of that money on teaching math and physics teachers to program, we may need some better use cases than the intro mechanics courses that Caballero studied—or we may just need to re-examine those courses after the instructors have done some optimization based on the feedback from Caballero’s study.

2012 May 29

Acronyms for physics modeling instruction

Filed under: home school — gasstationwithoutpumps @ 10:33

In reading one of Kelly O’Shea’s posts, Extra Tests, Bundled Objectives, and Changes for Next Year, I was struck by the number of unexplained acronyms.  Looking through posts by other Modeling Instruction advocates, I noticed that they all used the same acronyms: the acronyms seem to be a standard part of the training in Modeling Instruction—a secret code that lets people know you are part of the fraternity (or sorority, in Kelly’s case). Do you have to learn the secret handshake as well?

I wonder how much the acronym shorthand helps the Modeling Instruction teachers talk to each other and how much of a barrier the arcane lore is for other teachers to pick up the methods of Modeling Instruction.  Do other physics teachers use these acronyms?

I attempted to translate the acronyms, based on the usage in Kelly’s post.  In a few cases, I had to go elsewhere to find other uses, as I couldn’t guess from just Kelly’s usage.

  • CVPM: Constant Velocity Particle Model
  • BFPM: Balanced Force Particle Model
  • N3L: Newton’s Third Law (or Newton’s Three Laws?)
  • FBD: Free Body Diagram
  • CAPM: Constant Acceleration Particle Model
  • UBFPM: Unbalanced Force Particle Model
  • MTM: Momentum Model
  • COMM: Center of Mass Model
  • PMPM: Projectile Motion Particle Model
  • ETM: Energy Transfer Model
  • CFPM: Centripetal Force Particle Model (I guessed this wrong the first time—I thought the C was for “constant”.)
  • UCM: Uniform Circular Motion
  • MTET: Momentum and Energy Transfer (more commonly called “collisions”, I believe)

Elsewhere I’ve also seen

  • COEM: Conservation of Energy and Momentum
  • COAM: Conservation of Angular Momentum

Incidentally, the Matter and Interactions text, which is sometimes cited as ideal for Modeling Instruction of calculus-based physics, does not use these acronyms, preferring more English-like terms such as “Momentum Principle”.

2012 May 22

Analysis of wrong answers on FCI

Filed under: Uncategorized — gasstationwithoutpumps @ 16:35

Are All Wrong FCI Answers Equivalent? is a conference paper that does a cluster analysis of answers to the Force Concept Inventory, getting 7 different groups of students.  They followed that with a hidden Markov analysis of pre-test and post-test results to see which transitions were most probable.
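I don’t know the details of their hidden Markov machinery, but the basic object they are after is a transition matrix between pre-test and post-test clusters, and the maximum-likelihood estimate of such a matrix is just normalized counting. A sketch (the cluster labels below are invented, not their data):

```python
# Estimate pre->post transition probabilities between student clusters by
# counting; a simple stand-in for the paper's hidden Markov analysis.
import numpy as np

n_clusters = 7
# One (pre, post) cluster label per student; these labels are invented.
pre  = np.array([0, 1, 1, 2, 4, 6, 3, 0, 1, 5])
post = np.array([0, 0, 1, 0, 2, 6, 1, 0, 0, 4])

counts = np.zeros((n_clusters, n_clusters))
for a, b in zip(pre, post):
    counts[a, b] += 1

# Row i of the result gives P(post cluster = j | pre cluster = i).
row_sums = counts.sum(axis=1, keepdims=True)
transitions = counts / np.maximum(row_sums, 1)  # avoid dividing by empty rows
print(np.round(transitions, 2))
```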

The data set was of respectable size (2275 students), but they did the clustering on only 4 questions (the 4 questions that came out as the first factor in their latent class factor analysis).  With 5 possible answers to each question, there are only 5^4 = 625 possible answer patterns.  They did not explain how they clustered into 7 groups, though they did describe why they chose 7: it was the smallest number of groups for which the deviations of observed values from the values predicted by the model were not significantly different from chance (though they never gave p values, so I don’t know what criterion they used).
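To make that 625 concrete: with 4 questions and 5 choices each, the “feature space” is tiny, and clustering amounts to grouping discrete answer patterns. Counting the patterns that actually occur takes only a few lines (the responses below are invented):

```python
# With 4 questions x 5 choices there are only 5**4 = 625 possible answer
# patterns, so the clustering operates on a small discrete space.
from collections import Counter
from itertools import product

choices = "ABCDE"
all_patterns = list(product(choices, repeat=4))
print(len(all_patterns))  # 625

# Invented student responses to the 4 analyzed questions.
responses = [("A", "B", "B", "D"), ("A", "B", "B", "D"), ("C", "B", "A", "D"),
             ("A", "E", "B", "D"), ("A", "B", "B", "D")]
for pattern, n in Counter(responses).most_common():
    print("".join(pattern), n)
```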

They had a theory about how the students were thinking to explain the patterns of results they saw, but it is not clear that this theory has any predictive value, nor that a different FCI data set would produce the same 7 clusters.

I believe that analyzing the types of wrong answers will reveal more information about students than using a single-bit right/wrong value for each question, but I’m not convinced that their analysis is robust enough to base any pedagogic decisions on.

They hypothesized 5 possible schemata for responses to a question:

  • N (Newtonian): a correct answer
  • D1 (Dominance): larger masses exert larger forces
  • D2 (Dominance): objects that initiate movement exert larger forces
  • PO (Physical Obstacles): physical motion is determined by obstacles in the path of moving objects
  • NF (Net Force): an incorrect understanding of net force, in which the net force is the sum of scalars rather than of vectors (though they expressed it differently)

It is not clear whether each of the questions had answers corresponding to all 5 possible schemata. Indeed, for each of the 4 questions analyzed there seemed to be only 2 probable answers (at least for 5 of their 7 classes of students—they gave up on analyzing classes C6 and C7, other than saying that those classes used PO schemata more than the other groups did). Having only 2 common answers means that each question was carrying only about one bit of information, not the \log_2 5 \approx 2.32 bits that a very carefully crafted question might. If there are only 2 common answers, a right one and a wrong one, then analyzing the pattern of wrong answers is not going to add much to the analysis.
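Those bit counts are just Shannon entropies: a question whose 5 answers were all used equally often would carry \log_2 5 \approx 2.32 bits, while one whose responses pile up on 2 answers carries at most 1 bit. A quick check (the distributions here are invented):

```python
# Shannon entropy of an answer distribution, in bits. A 5-choice question
# carries log2(5) ~ 2.32 bits only if all 5 answers are equally likely.
from math import log2

def entropy_bits(probabilities):
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.2] * 5))                     # 2.32 bits: all answers used
print(entropy_bits([0.5, 0.5, 0, 0, 0]))           # 1.00 bit: 2 common answers
print(entropy_bits([0.6, 0.3, 0.05, 0.03, 0.02]))  # ~1.44 bits: skewed 5-way
```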

Question Q4 distinguished only between N and D1; Q15 between N and either D2 or NF (either schema would produce the same wrong answer); Q16 between N or NF and D2 or NF; and Q28 between N and D1. They did not show that the rare answers to the questions reflected other schemata, and I don’t have a copy of the FCI to do that analysis myself.

I’m a little confused about how both answer A and answer C of Q16 could result from the same NF schema, though it is clearly a problem that the right answer to Q16 could result from the wrong schema.

I found their partial reporting of results (such as only the maximum-likelihood answer for each question for each group) rather frustrating, as it was not enough to do a more careful analysis of the results. Since the most likely answer to each question was the same for C3 as for C1 (the correct answer), it looks like students landed in C3 because they got one or two of the questions wrong (but not Q15, which would have put them in group C2). Group C4 got Q16 right but the others wrong, and group C5 got them all wrong.

Overall, I think that they made a valiant effort, but analyzed too few questions to reach any conclusions—not about the 7 clusters, not about their 5 schemata, and not about pre-test/post-test transitions.

Talking to Global Physics Department

Filed under: Uncategorized — gasstationwithoutpumps @ 13:29

Wednesday evening (23 May 2012, 6:30 p.m. PDT, 9:30 p.m. EDT), I’ll be having a video conversation with the Global Physics Department (GPD) about various things (homeschooling physics, making physics lab equipment at home, PC board design, bioinformatics, …).

I’ve prepared a handout of links (mostly to various of my blog posts) for things I think we might touch on in the discussion: Links for Global Physics Department.

I believe that GPD welcomes new members, but I’m not sure exactly how one goes about joining.  I believe that all you have to do is visit the link to the meeting, and wait for Elluminate Live! to start up.  (There may be a slow download the first time you do it, so you might want to try ahead of the meeting.)

2012 April 25

Photoelectric effect

Filed under: home school — gasstationwithoutpumps @ 16:30

Brian Frank has just posted an exploratory exercise on his blog Teach. Brian. Teach.: Photoelectric Effect.  This exercise relies on a simulation from the University of Colorado at Boulder.

The simulation is of a standard phototube experiment.  A phototube is a vacuum tube diode, in which the cathode is illuminated by a light source.  The photons excite electrons in the cathode, raising some of them to high enough energy levels to become unbound from the atoms and leave the cathode.  The electric field accelerates them toward the anode (or repels them, if the diode is biased backward).  The energy of the electrons is basically the energy of the photons minus the energy needed to raise the electrons from the ground state to the unbound state.  (At very high illumination levels, you can have one photon exciting the electron out of the ground state and another raising it to the unbound state, but I don’t think that effect is being simulated.)

At forward voltages, the current is determined by the illumination, independent of the bias—essentially all the released electrons go to the anode. At reverse biases, only the higher-energy electrons have enough speed to make it to the anode. The energy of the highest-energy electrons can be estimated from the reverse-bias voltage at which the current drops to zero.
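The textbook relation behind that last statement is e V_{stop} = hc/\lambda - \phi, so the stopping voltage directly measures the maximum electron energy. A quick computation (the zinc work function here is a typical textbook value, not something taken from the simulation, which seems to model more than this simple relation):

```python
# Stopping potential for a phototube: the reverse bias at which even the
# fastest photoelectrons fail to reach the anode, e*V_stop = h*c/lambda - phi.
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s
e = 1.602e-19  # elementary charge, C

phi_zinc_eV = 4.3  # zinc work function in eV (typical textbook value)

def stopping_voltage(wavelength_nm, work_function_eV):
    photon_eV = h * c / (wavelength_nm * 1e-9) / e  # photon energy in eV
    return max(photon_eV - work_function_eV, 0.0)   # no emission below phi

print(stopping_voltage(135, phi_zinc_eV))  # ~4.9 V for 135 nm light on zinc
print(stopping_voltage(300, phi_zinc_eV))  # 0 V: photon energy below phi
```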

The simulation seems pretty good, but I don’t know exactly what effects they are modeling.  For the zinc target with high forward bias, there is a current peak around 135 nm, but from the spectral lines at NIST, I would have expected a peak around 127 nm.  I don’t know if the problem is a limitation of the simulation or a limitation of my understanding.

I know that my understanding of quantum effects is very limited, and the simplistic view of the photoelectric effect given in Wikipedia does not cover some of the phenomena being simulated here.  But since I don’t know exactly what phenomena are being simulated, I have no way of predicting the behavior.

I find it frustrating to do the sort of discovery experiment that Brian is proposing using a simulation.  If I knew precisely what was being simulated, there would not be much discovery, but trying to reverse-engineer a simulation from its behavior seems to me a rather irritating and frustrating exercise. I not only have to guess at what physics is important, but also at what physics the writer of the simulator thought was worth including, and what simplifying assumptions he made.  (For example, does the simulation include the absorption of the glass or quartz tube holding the vacuum?)

I suppose I could read the source code (PhET provides it) or read the 17 “Teaching ideas” on the web page for the simulation. The teaching ideas look like a wide range of lesson plans for labs, demos, and homework questions.  I looked at one of the “advanced” ones, but it seemed to use only the Wikipedia-level model, which does not explain a drop in current at shorter wavelengths.

I’d much rather have real experiments than simulated ones—even if the crudeness of my measurement tools limits the quality of the data I can collect.  The value of simulations is more in writing them and seeing that they predict the behavior you observe than in running someone else’s black-box model.
