Gas station without pumps

2013 July 28

MOOC roundup

Filed under: Uncategorized — gasstationwithoutpumps @ 10:02

I’ve been collecting Massive Open Online Course (MOOC) blog posts for a while now, with the intent of doing a careful response to each.  There have gotten to be so many that a careful response to each is no longer feasible. At best, I’ll do a short summary or critique of each one.  If the number of links here is overwhelming (as it was for me in writing this post), read the summaries to pick out a few that seem likely to be worth your time.  But do try to read the ones I’ve marked with a green check.  I think that those were unusually valuable.

From the Becker-Posner Blog:

  1. This is basically a factual description of MOOCs saying who they think the students will be and the difficulty of making MOOCs pay, but without much thought on the consequences of MOOCs for education.
  2. Predictions that MOOCs, because they are cheaper than building classrooms, will become a major part of the higher education landscape worldwide.  This is the standard wishful thinking of people who want to defund education.

From the Academe blog, which is published by the American Association of University Professors, but which does not reflect official AAUP positions (in fact, it has published some posts from harsh critics of the AAUP). Most of the posts here are not the MOOC boosterism we see from administrators and non-matriculated students, but rather the faculty skepticism about the value of MOOCs, both pedagogically and for the future of the universities.

  1. A somewhat optimistic post about using MOOCs to do freshman composition courses—one of the courses for which many faculty feel MOOCs are completely unsuitable. The post seems to think that peer grading might actually work, something I’m very dubious about—the value of a composition class depends mainly on the quality and quantity of feedback from instructors whom you trust enough to accept uncomfortable advice from.
  2. A rather long post that focuses mainly on how MOOCs are going to be monetized and the negative effects of MOOCs on public education.
  3. MOOCs seen as an outgrowth of the “culture of celebrity”:

    The MOOCs are centered around supposed “superteachers” from the most elite universities. It’s no accident that Harvard, MIT and Stanford are among those schools most often mentioned in discussions of MOOCs. These provide, supposedly, the “best” education in the world. Why not take advantage of technology to spread that wealth around?

    First of all, it is the student who makes the education; no teacher or school “gives” it. It’s not that Harvard professors are elite educators, but that Harvard students are elite students. This is what makes the school so successful. Put a Harvard professor in front of most community-college classrooms and that professor will, most likely, fail. The name “Harvard” dresses up the professors, but it does not make them master teachers.

  4. The Campaign for the Future of Higher Education points out that online courses are not really cheaper than traditional courses, at least not when they are taught in an effective way.
  5. A response to an interview with Daphne Koller, pointing out that MOOCs don’t address how courses really get created and improved.

    Separating content development, course preparation, grading, and content delivery does, indeed, change teaching, but for the worse. Koller herself recognizes that universities have “played a critical role in the shaping” of “amazingly gifted scholars, researchers, and teachers.” Those scholars, researchers, and teachers are gifted precisely because of their integrated approach to teaching.

    Any good teacher will tell you that the primary work of education happens away from the classroom. MOOCs might be useful for self-enrichment courses, but as long as they deliver education in a piecemeal fashion, they will not be models of good teaching.

U to the rescue: Michael Meranze and Christopher Newfield are UC professors (UCLA and UCSB, respectively) who closely follow the politics and (de)funding of the University of California.  They have a lot to say about MOOCs and about administrative tricks that try to divert instructional resources into administrative salaries and private contractors.

  1. This post is not primarily about MOOCs, but about the failure of the UC administration to make a strong case for public funding of the University, pointing out that their appeasement policy has resulted in trumpeting as a victory funding levels that are 25% lower per student than in 1990 (counting both state funding and tuition—the per-student funding from the state has dropped much more than that).  Their key point about MOOCs:

    Explain clearly that technological improvements will not close the budgetary gap.  Unfortunately UCOP is a negative example here: it promised $500 M in savings through technology efficiencies that have yielded about a tenth of that (see links halfway down this post on the Regents’ retreat). And it is gearing up a new round of technology promises for this week’s Regents meeting, now in the form of online education.  I defy anyone to find meaningful cost savings in the gradual introduction of quality (blended) online instruction to any of higher ed’s segments. I invite you to scour the full rush transcript of the MOOC meeting at UCLA last week; or to read full scale investigations of online impacts like Taylor Walsh’s pro-tech Unlocking the Gates, which shows for example that simply posting course materials in the case of MIT’s Open Courseware program nets the university negative $4M annually (page 84); or to go to the source of the original analysis of academia’s “cost disease,” William J. Baumol and his new book The Cost Disease; or to contemplate the extent to which online providers expect not to make money by offering newly cheap university curricula but by selling referrals and other ancillary products by competing with universities.  Or picture yourself as a venture capitalist, and ask if a one-time $10M investment has ever in history closed a $2.9B gap. Technology has become a source of budgetary delusion and fake solutions and this has to stop.

  2. This post discusses in detail the privatization goals of many of the MOOC proponents, and the abysmal success rate of privatization of public functions in other fields. A quote from the post:

    If we aren’t careful, privatization history will repeat itself, and UC in its desperation will invite on board a host of outside service providers who will have a seat at the curriculum table and a claim on a piece of UC’s shrinking revenues.  (A figure I have heard is that the provider takes 50% of the first seven years’ revenues on each course it develops.)

  3. In January 2013, UCLA had a large presentation about MOOCs from various providers.  U to the rescue had a number of questions about how the MOOCs were going to produce enough money to survive past the initial capital burn.

    4. If (3) is true, what share of the development costs and course revenues will you take? One number we have heard is 50% of course tuition for 7 years. The NYT article has an even higher number.  Since you use salaried university faculty to create course content, so far, it appears, without pay, are you going to wind up in the business of selling universities their own courses back to them at a big production markup?
    5. Given substantial costs for quality on-line development and operation, how exactly will you save money for students enrolled in degree courses (who are already paying tuition)? Given enrollment problems with MOOCs that aren’t free, on what basis can you say that you will save universities money? What is your estimate of how much?

  4. The announcement of the results of an all-UC faculty discussion about the future of the University:

    The report of this all-UC faculty discussion, entitled The Uses of the Public University in 2050, details underlying principles that address the issues of teaching and learning in the twenty-first century, the role of research in a public university, the stewardship of the university, and the university’s role in creating an informed, proactive, and responsible citizenry.

  5. This post points out that MOOCs have not yet come up with a sustainable business model. One danger for public universities is in neglecting to invest in sustainable education, and being left with nothing when the MOOCs crash and burn. A quote:

    In the ramp-up period, terribly high per-MOOC costs could be justified by mass enrollments, but unfortunately from the VC [venture capital] point of view the masses take these courses for free. These production costs also collide with increasing awareness of large faculty time inputs: Duke’s Dan Ariely and Cathy Davidson report 150 hours of their time per hour of “actual MOOC.”  Prof. Davidson’s phrase in a subsequent post is “insanely labor intensive” —in exchange for a $10,000 stipend that she spent entirely on assistants. Many MOOC watchers are now concluding, as she does, that MOOCs do not have a way of making up for massive public funding cuts.

Computing Ed: Mark Guzdial is a computer-science education researcher who has been following some of the MOOC and online-degree fervor at Georgia Tech:

  1. Here Mark points out that the University of Wisconsin plan to offer a degree almost entirely by testing is doomed to be a second-class degree, because the basic educational premises are flawed.  Here is a quote from the beginning of the post:

    The announcement from U. Wisconsin (that they’ll test students to get a degree, rather than requiring any coursework at all) is showing enormous and unsupported (almost religious) faith in our ability to construct tests, especially online tests.  Building reliable and valid assessments is part of my research, and it’s really hard.

  2. This post is just a pointer to the white paper Making Sense of MOOCs, an analysis of MOOCs by Sir John Daniel, who was vice-chancellor of the UK Open University (one of the most successful online education efforts in the world) and who has written extensively on the economics of making distance education work.
  3. A brief reaction to Georgia Tech’s announcement of their Udacity-fueled online master’s degree:

    In case anyone didn’t see the various articles, Georgia Tech’s College of Computing will be offering a Udacity-based MS degree starting.  The faculty did vote on the proposal. I argued against it (based mostly on learning and diversity arguments), but lost (which led to my long winter post). Faculty in the College of Computing have been asked not to talk about the online MS degree (which seems weird to me—asking faculty not to talk about their own degree programs).  Please understand if I don’t answer questions in response to this announcement.

Inside Higher Ed

  1. Another post on financial problems with MOOCs—this one reporting on why Carnegie Mellon is not jumping on the MOOC bandwagon, and what they are doing in online education instead.
  2. This article offers another view on the Georgia Tech Udacity-fueled online Masters, looking into some of the details of the contract obtained through the Freedom of Information Act.
  3. Chris Newfield does a very close reading of the contract and provides a detailed critique of it.  It does not appear that Georgia Tech faculty had an opportunity to read the contract in detail, nor that the Georgia Tech administrators bothered to, since it appears to have some internal contradictions.  I suspect that in a year or two, only the lawyers will be making any money off of this deal.  This is the most detailed look at MOOC finances I’ve seen in public—and it does not look like they have yet figured out how to make MOOCs sustainable. Note: Sebastian Thrun has responded to this critique—I’ve checkmarked his post near the end of this list of links.


  1. Some of Moody’s financial predictions about the creditworthiness of higher-ed institutions in the face of MOOCs.  The analysis seems to start from the assumption that MOOCs will be financially successful while continuing to offer free courses (an assumption that not all followers of the MOOC phenomenon share):

    In the end, elite institutions are positioned to capitalise most effectively on the MOOC platform, by increasing their global presence and deriving greater credit benefits from new markets. Those institutions with limited brand identities, however, will have to compete more intensively to retain—or develop—a competitive edge.

  2. This piece also assumes that MOOCs will be wildly successful and essentially kill off all but the elite institutions (again, ignoring the fact that MOOCs so far have only been shown to work for autodidacts):

    The open-source educational marketplace will give everyone access to the best universities in the world. This will inevitably spell disaster for colleges and universities that are perceived as second rate. Likewise, the most popular professors will enjoy massive influence as they teach vast global courses with registrants numbering in the hundreds of thousands (even though “most popular” may well equate to most entertaining rather than to most rigorous). Meanwhile, professors who are less popular, even if they are better but more demanding instructors, will be squeezed out. Fair or not, a reduction in the number of faculty needed to teach the world’s students will result. For this reason, pursuing a Ph.D. in the liberal arts is one of the riskiest career moves one could make today. Because much of the teaching work can be scaled, automated or even duplicated by recording and replaying the same lecture over and over again on video, demand for instructors will decline.

    Of course, I question the assumption that delivering “the same lecture over and over again” constitutes “much of the teaching work”. Certainly lecturing takes up a very small part of the time I dedicate to teaching—feedback on writing, guiding students in the lab, developing and testing design labs, meeting with students one-on-one, … all take far more time than the “lectures”, most of which involve student interaction rather than one-way info dumps in any case.

  3. Although the University of California is plunging headlong into MOOC mania (driven by the Regents, the Legislature, and a few administrators), this is not the first time UC has tried to jump on an online bandwagon.  A lot of money was sunk into UC Online, a pitiful attempt to make money at online education in the previous online fad, with UC rolling out their system a couple years after the fad had ended and many other universities had quietly dropped their attempts to make money at it (except CMU, see the Inside Higher Education link above):

    The University of California is spending millions to market an ambitious array of online classes created to “knock people’s socks off” and attract tuition from students around the world. But since classes began a year ago, enrollment outside of UC is not what you’d call robust.

    One person took a class.

  4. This post contains observations by Bob Samuels after the UCLA panel promoting MOOCs (see the U to the Rescue blog posts mentioned above). He starts with a pointed observation:

    First a few ironies: the faculty presenters had to listen to four hours of non-interactive presentations before they could speak and ask questions. In other words, as the “providers” were lecturing us about how online technology allows for true interactive education to occur, they did not leave space for any interaction. Moreover, the high-tech promoters kept on having a hard time getting their PowerPoint slides to work as they criticized traditional institutions for not turning to new technologies to make education “Faster, Cheaper, and Better.” A final irony was that throughout the lectures, I noticed most of the audience, including myself, constantly checking their iPhones. Once again, as the providers were celebrating the role of new technologies in making us more focused and efficient, most of the audience was half-listening and multi-tasking.

    And he ends with another pointed remark:

    Now UC has spent $5 million on virtually one student, but we shouldn’t laugh because someone is going to have to pay for this failed experiment, and the bigger question is will UC be able to walk away from the table after it has gambled millions away on its high-tech wager?

  5. A demographic analysis of the students who took “Computational Investing, Part I” as a Coursera MOOC.  It turns out that 93% of those who completed the course already had a bachelor’s degree or more education.  It looks like the MOOCs are fairly effective as continuing education for people who have already completed college—not a bad thing, but not a replacement for college as they are so often touted.  The course completers were also 89% male, which may be normal for that subject matter, but does not suggest that the MOOCs are any good at increasing diversity.
  6. A summary of data collected from MOOCs about completion rates.  The post is an explanation and a pointer to Katy Jordan’s interactive data site: It looks like completion rates in the 2%–12% range are standard for MOOCs.
  7. Amy Bruckman wrote a few things that resonated with me:

    Amy’s Conjecture: The future of universities is in excelling at everything a MOOC is not.

    The trend over the last dozen or so years is for people who make money creating intellectual property to be compensated more and more poorly.  Fewer people are making a living as musicians.  Professional journalism is in crisis.  Small newspapers are closing, and major ones are struggling. This hasn’t happened all at once, but like a frog in a pot, raising the temperature/economic pressure a fraction of a degree per year over the long haul has dramatic consequences.  MOOCs turn education into a form of IP.  The same economic pressures are going to apply.

    If you buy that, then what’s next for universities?  There will no doubt be MOOC winners—but I suspect that just as seems to be dominating the e-commerce business, there will be advantages to size that will be hard to fight.  Margins will be tight, with a small number of big winners.

    The future of universities, then, is in everything a MOOC can not do. What is that?

    Amy’s Lemma: There are some things that will never be learned as well online.

    Her arguments, and ones like them, have convinced me that the best thing I can be doing for UC is not creating a huge MOOC course that reaches 1000s of students (I’d probably be terrible at it, anyway), but to create hands-on lab courses (like my new Applied Circuits for Bioengineers course) and high-contact, high-feedback courses (like my Bioinformatics: Tools and Algorithms course or the senior thesis seminar).  These high-contact courses are what the top universities can provide that does not have any online substitute.  If this is the future of the university, I embrace it.

  8. A surprisingly balanced view of MOOCs from Armando Fox, Academic Director, Berkeley Resource Center for Online Education:

    The experience of Berkeley faculty who have taught them is  that MOOCs work well as a supplement to traditional courses, not a replacement for them.  Courses at Berkeley and other world-class universities have shown that a blend of online and classroom instruction can increase professor and TA leverage and expand course enrollments while maintaining or increasing student satisfaction and learning outcomes.

    But in the absence of systematic research, BRCOE believes it would be a disservice to Berkeley students to consider using a two-year-old technology as a replacement for traditional instruction, which seems to be the thrust of recent media coverage and some current legislative activity. Berkeley will become a leader in online education not by charging prematurely into overzealous use of a new technology, but by doing the research to uncover its potential and position us to do the right thing by our students in the coming years—not just internally but in non-UC settings such as community colleges, high schools, and K-8 that ultimately “feed our pipeline.”

    We recognize the disconnect between BRCOE’s point of view on this matter and the sentiment of some legislators and spokespersons for the private sector. To that end, both we at Berkeley and our colleagues on other UC campuses are working hard to proactively inform the media, the private sector, and the public servants in Sacramento and elsewhere about both the opportunities and the pitfalls of online education. Our intention is to persuade them that “proceed optimistically but with care” is the right thing to do by our students, who are entrusting us with four or more years of education and career development when they arrive on campus.

  9. This post points out that the company Pearson is shifting resources from acquiring more textbook publishers (perhaps having gobbled up all the textbook publishers they could) to getting state contracts for delivering tests.  There is an implicit assumption that this near monopoly of both textbooks and testing by one company is too large a concentration of power—particularly given that the company is more interested in profit than in the quality of education.

    Yet, it is hardly coincidental that the major corporate “educational providers” are already seizing on the enormous profit potential in MOOCs. Specifically, having bought up a slew of major textbook publishers, Pearson is now shifting to buying up very specialized technology companies.

    As MOOCs are migrated into general-education or core college-level courses, Pearson and its imitators (such as Academic Partnerships) will make a push to provide the “competency measures” for those courses: that is, they will seek to adapt what has been so profitable on the K-12 level to the post-secondary level.

    A discussion about the social cost of allowing higher education to be pushed down the dreary path K-12 education has already tread is long overdue.

  10. This post was triggered by San Jose State’s partnering with Udacity to deliver a few remedial courses through MOOCs. The author praised them for their desire to reach more students, but pointed out that the freshman composition course was not a good target for MOOCs:

    Still, I think something vital is missing from its description of ideal online learning, something I find hard to imagine happening in large online courses. That something involves what occurs when a good teacher responds carefully and closely to the work of a student.

    But I worry that digitized feedback systems can only be a pale version of the focused response that a trained and attentive reader, a teacher, can offer a young writer.

    The teaching of writing has long been a textbook-driven field precisely because such readers are in short supply. The hope is that a good textbook can give an inexperienced or indifferent teacher something to lean on. But it doesn’t really work. What students need is not someone to walk them through a textbook but someone who can respond to their own work and ideas.

    This post really resonated with me, particularly given the amount of time I spend on providing feedback on student writing. I had a more direct response to the post in my blog post Teaching by hand.

  11. This is not a blog post, but the website for an organization that has put together a “college” without faculty—just a residential campus with all “teaching” handled by MOOCs.  It is an administrator’s dream—no faculty at all! It’s very easy to create a “college” this way—rent a large building for housing the “students” (like an old hotel in a no-longer-popular location), hire a couple of cooks and janitors, and put up a website!  No need to bother with expensive things like labs, classrooms, teachers, accreditation, or anything else having to do with education.  Anyone can now start their very own college!
  12. Suki Wessling writes about MOOCs from a different perspective—that of a home schooling parent who values online courses as a resource for teaching her children.

    As a homeschooler, however, I do think that MOOCs are a welcome new addition to the options for learning outside of structured environments, and I love the idea that the breadth of human knowledge is being made available to everyone, everywhere.

    But will MOOCs make the whole idea of the university education obsolete?

    She discusses what she sees MOOCs as doing well and what traditional colleges do well and summarizes her points well:

    But I believe that MOOCs will never be able to provide the benefits that an in-person degree at a good university can provide:

    • Working directly with the best thinkers in your field
    • Developing mentoring relationships with professors or more experienced students
    • Learning from, helping, and arguing with your fellow students
    • Creating the sorts of connections that in some fields are absolutely necessary for success
    • Having guidance in honing your analytical skills in ways that can’t be done alone

    What bothers me is not that people are excited about online learning (so am I) or that people think it has some benefits over traditional college (it does), but that everyone is so happily throwing the baby out with the bathwater. Traditional college is still going to be the best choice for people who should have been there to begin with.

  13. A balanced reflection on MOOCs and their possible effects on higher education by Noel Jackson, who is ambivalent about them, rather than strongly pro- or anti-MOOC.  This is a thoughtful piece, but not one that leads to any clear conclusions.  Here is a representative quote:

    While both accounts of MOOCs envision significant future consequences from their implementation, moreover, neither says very much about actually existing MOOCs. The MOOC has become a repository for utopian and dystopian narratives about the present and future directions of higher ed. As a result, this or that fact about MOOCs is often considered (or not) insofar as it confirms the prevailing theory about them.

  14. For those who believe in “following the money”, this post points to a financial analysis of the future of higher education done by Moody’s.  The bottom line seems to be that MOOCs are primarily a public relations gesture by elite universities, blunting criticism of their non-profit status while gaining positive branding.
  15. This is an answer to Chris Newfield’s analysis of the deal between Georgia Tech and Udacity, provided by Sebastian Thrun (founder of Udacity).  Personally, I don’t find Sebastian Thrun very believable here, but this may reflect my prior beliefs about the motives of Newfield and Thrun. If you read Newfield’s analysis (checkmarked above), then you should probably read Thrun’s reply.
  16. A futurist confidently predicts that half of US colleges will fail by 2030, giving the following reasoning:
    1. Overhead costs too high – Even if the buildings are paid for and all money-losing athletic programs are dropped, the costs associated with maintaining a college campus are very high. Everything from utilities, to insurance, to phone systems, to security, to maintenance and repair are all expenses that online courses do not have. Some of the less visible expenses involve the bonds and financing instruments used to cover new construction, campus projects, and revenue inconsistencies. The cost of money itself will be a huge factor.
    2. Substandard classes and teachers – Many of the exact same classes are taught in thousands of classroom simultaneously every semester. The law of averages tells us that 49.9% of these will be below average. Yet any college that is able to electronically pipe in a top 1% teacher will suddenly have a better class than 99% of all other colleges.
    3. Increasingly visible rating systems – Online rating systems will begin to torpedo tens of thousands of classes and teachers over the coming years. Bad ratings of one teacher and one class will directly affect the overall rating of the institution.
    4. Inconvenience of time and place – Yes, classrooms help focus our attention and the world runs on deadlines. But our willingness to flex schedules to meet someone else’s time and place requirements is shrinking. Especially when we have a more convenient option.
    5. Pricing competition – Students today have many options for taking free courses without credits vs. expensive classes with credits and very little in between. That, however, is about to change. Colleges focused primarily on course delivery will be facing an increasingly price sensitive consumer base.
    6. Credentialing system competition – Much like a doctor’s ability to write prescriptions, a college’s ability to grant credits has given them an unusual competitive advantage, something every startup entrepreneur is searching for. However, traditional systems for granting credits only work as long as people still have faith in the system. This “faith in the system” is about to be eroded with competing systems. Companies like Coursera, Udacity, and iTunesU are well positioned to start offering an entirely new credentialing system.
    7. Relationships formed in colleges will be replaced with other relationship-building systems – Social structures are changing and the value of relationships built in college, while often quite valuable, are equally often overrated. Just as a dating relationship today is far more likely to begin online, business and social relationships in the future will also happen in far different ways.
    8. Sudden realization that “the emperor has no clothes!” – Education, much like our money supply, is a system built on trust. We are trusting colleges to instill valuable knowledge in our students, and in doing so, create a more valuable workforce and society. But when those who find no tangible value begin to openly proclaim, “the emperor has no clothes!” colleges will find themselves in a hard-to-defend downward spiral.

    Futurists have a terrible track record for these sorts of grandiose predictions, but it is certainly the case that decisions made now about how to fund college education will have an enormous effect over the next 20 years. If states continue to defund public universities, but society still pushes college-for-all, then either colleges will have to reduce costs enormously or go out of business, because the debt load on students is getting close to or has already passed the sustainable limit. (Personally, I don’t see reducing costs without reducing quality as very likely. Although the sticker price of public universities has soared, the underlying costs have been remarkably constant—the price has gone up because students are paying a much larger portion of the cost.)

  17. A detailed reaction by Jon Beasley-Murray to two talks, one by Eric Mazur (proponent of peer instruction) and one by Daphne Koller (founder of Coursera).  Basically, Beasley-Murray sees little knowledge of pedagogy and a lot of self-serving hype in both talks.
  18. San Jose State Suspends Udacity Experiment. The attempt to teach remedial courses with Udacity at San Jose State turned out to fail miserably, with pass rates lower even than the usual remedial courses. This was a widely predicted result, particularly by people who have taught remedial courses. Fortunately, San Jose State had enough sense to stop the experiment after one round, though I understand that their president still plans to try again.

That’s it for the MOOC roundup.  I’ll try not to collect posts for so long again before summarizing them—the problem was that the list kept growing, and I kept putting off the summary, as it got more daunting with each new link added to the list.


2013 July 27

MAH wants a School Programs Coordinator

Filed under: home school — gasstationwithoutpumps @ 21:47
Tags: , , , ,

The Museum of Art and History in Santa Cruz recently posted a job announcement, Museum 2.0: Come Work With Us at MAH as School Programs Coordinator:

We are hiring for a School Programs Coordinator to wrangle the 3,500+ students and their teachers who come to the museum every year for a tour and hands-on experience in our art and history exhibitions.

Normally, I wouldn’t pay much attention to a post like that for a job well outside my field, but I’ve been following what Nina Simon has been doing at MAH, turning it from a tiny, mostly ignored, provincial museum (noted mainly for having some decent historical archives of interest to local historians) into a cultural center for downtown Santa Cruz. So I read the post and was interested to see a couple of points included:

  • Many families in our area have opted into non-traditional school and educational formats, especially homeschooling. What kinds of programs should we consider providing for these groups?
  • Not all learning happens in school. How should we think about the balance between formal programs for school groups and youth-centered programs that happen after or outside of school?

It is not very often that home schoolers get explicitly included in descriptions of jobs for School Program Coordinators.  Sometimes a staff member realizes that home school students have more time for museums and day-time activities than schooled students do and thinks up some special activities for them, but rarely are they included as an integral part of a “School Programs” job.  Because many home schoolers in other parts of the country rely heavily on informal education at museums (particularly in museum-rich environments like Washington, DC and New York, NY), it is good to see the local museum interested in increasing opportunities for local home schoolers.  (I sent a pointer to the job post to a couple of the local home school mailing lists.)

Of course, the job is going to be a challenging one and (like most non-profit jobs locally) an underpaid one.  One of the challenges is due to local demographics:

Because 30% of the students in our school district are English language learners (and the majority of those, Latino), we are seeking someone who is bilingual and able to communicate comfortably with kids and adults in Spanish.

Although Nina has done marvelous work at MAH, I don’t think she has yet been successful in integrating the Spanish-speaking community much.  On my recent visit, I don’t remember there being any Spanish labels on any of the exhibits (though that may have been my unawareness, as I was not thinking about Spanish-language access at the time).  I suspect that the school field trips may be the only time that the Spanish-speaking kids from the city visit the Museum, and I suspect that almost none of the kids from the southern end of the county (which is majority Spanish-speaking) visit the Museum at all.  About the only things I remember at MAH involving Latin American culture were mainly cultural appropriations (like Day of the Dead altars for Halloween).

Nina has done a great job of bringing in a younger group of people to the museum (essential to the future of the museum, since the traditional patrons of the museum were all much older than me), with events like these:

Remember when we lit up Abbott Square with an organ that breathed fire? With a glowing dance tower? With amazing digital and fire art? So do we. And we’re going to do it again this year when we bring back GLOW on October 18 and 19. We are raising money to make the festival even more amazing, and we’re hoping you can help. Every dollar of this campaign will go directly to artists to support their participation. When you donate, you’ll earn advance tickets to the festival and special perks… including the opportunity to shoot off a flamethrower. 

This is the last week to donate. Fuel the fire this October and click here to contribute.

The indiegogo fund-raising campaign only runs until Monday 2013 Aug 5, and they are currently below 60% of their $3000 target. They have a couple of Vimeo videos on the fund-raising site.  (The part II video showing the fire dancing and fire organ is worth watching even if you are too cheap to donate.)  I considered sticking Vimeo links in my post, but I’d really much rather force you through the indiegogo site.

I admit that I was too cheap to donate more than $25 (not enough for even a free-ticket perk, certainly not enough for the VIP roof-garden admission or the opportunity to shoot the flame-thrower), though they’ll probably get another $5–10 from me for tickets to the GLOW events themselves.  I’m hoping that a few of my readers will also contribute a few dollars.  The point of a crowdfunding campaign is to be able to get bunches of small contributors like me, without requiring the enormous staff effort that fundraising usually requires.  Word of mouth advertising is an essential part of a crowdfunding strategy.



Failed attempt at pulse oximeter

In Optical pulse monitor with little electronics  and Digital filters for pulse monitor, I developed an optical pulse monitor using an IR emitter, a phototransistor, 2 resistors, and an Arduino.  On Thursday, I decided to try to extend this to a pulse oximeter, by adding a red LED (and current-limiting resistor) as well.  Because excluding ambient light is so important, I decided to build a mount for everything out of a block of wood:

Short piece of 2×2 wood, with a 3/4″ diameter hole drilled with a Forstner bit partway through the block. Two 1/8″ holes drilled for 3mm LEDs on top, and one for a 3mm phototransistor on the bottom (lined up with the red LED). Wiring channels were cut with the same 1/8″ drill bit, and opened up with a round riffler. Electrical tape holds the LEDs and phototransistor in place (removed here to expose the diodes).

My first test with the new setup was disappointing.  The signal from the IR LED swamped the signal from the red LED, being at least 4 times as large. The RC discharge curve of the phototransistor for the IR signal was slow enough that I would have had to go to a very low sampling rate to see the red LED signal without interference from the discharge from the IR pulse.  I could reduce the IR signal to only twice the red output by increasing the IR current-limiting resistor to 1.5kΩ, and reduce the RC time constant of the phototransistor by reducing its pulldown resistor to 100kΩ.  The reduction in the output of the IR LED and the decreased sensitivity of the phototransistor made about a 17-fold reduction in the amplitude of the IR signal, and the red signal was about a thirtieth of what I’d previously been getting for the IR signal.  Since the variation in amplitude that made up my real signal was about 10 counts before, it is substantially less than 1 count now, and is too small to be detected even with the digital filters that I used.

I could probably solve this problem of a small signal by switching from the Arduino to the KL25Z, since going from a 10-bit ADC to a 16-bit ADC would allow a 64-times-larger signal-to-noise ratio (that is, +36dB), getting me back to enough signal to be detectable even with the reductions.  I’ve ordered headers from Digi-Key for the KL25Z, so next week I’ll be able to test this.
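The arithmetic behind that claim is simple enough to check (a quick sketch of the dB conversion, nothing Arduino-specific assumed):

```python
import math

# Going from the Arduino's 10-bit ADC to the KL25Z's 16-bit ADC gives
# 2**(16-10) = 64 times finer quantization of the same full-scale range.
ratio = 2 ** (16 - 10)
db = 20 * math.log10(ratio)   # amplitude ratio expressed in decibels

print(ratio, round(db, 1))    # 64, ~36.1 dB
```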

I did do something very stupid yesterday, though, in a misguided attempt to fix the problem.  I had another red LED (WP710A10ID) that was listed on the spec sheet as being much brighter than the one I’d been using (WP3A8HD), so I soldered it in.  The LED was clearly much brighter, but when I put my finger in the sensor, I got almost no red signal!  What went wrong?

A moment’s thought explained the problem to me (I just wish I had done that thinking BEFORE soldering in the LED).  Why was the new LED brighter for the same current?  It wasn’t that the LED was more efficient at generating photons, but that the wavelength of the light was shorter, and so the eye was more sensitive to it.

Spectrum of the WP3A8HD red LED that I first used. It has a peak at 700nm and dominant wavelength at 660nm. I believe that the “dominant wavelength” refers to the peak of the spectrum multiplied by the sensitivity of the human eye.  Spectrum copied from Kingbright preliminary specification for WP3A8HD.

Spectrum of the WP710A10ID brighter red LED that didn’t work for me. The peak is at 627nm and the “dominant wavelength” is 617nm. The extra brightness is coming from this shorter wavelength, where the human eye is more sensitive. Image copied from the Kingbright spec sheet.

1931 CIE luminosity curve, representing a standardized sensitivity of the human eye with bright lighting (photopic vision). The peak is at 555nm. Note that there are better estimates of human eye sensitivity now available (see the discussion of newer ones in the Wikipedia article on the Luminosity function).
Image copied from Wikipedia.

The new LED is brighter, because the human eye is more sensitive to its shorter wavelength, but the optimum sensitivity of the phototransistor is at longer wavelengths, so the phototransistor is less sensitive to the new LED than to the old one.

Typical spectral sensitivity of a silicon photodiode or phototransistor. This curve does not take into account any absorption losses in the packaging of the part, which can substantially change the response. Note that the peak sensitivity is in the infrared, around 950nm, not in the green around 555nm as with the human eye. Unfortunately, Kingbright does not publish a spectral sensitivity curve for their WP3DP3B phototransistor, so this image is a generic one copied from

This sensitivity is much better matched to the IR emitter (WP710A10F3C) than to either of the red LEDs:

Spectrum for the WP710A10F3C IR emitter, copied from the Kingbright spec sheet. The peak is at 940nm with a 50nm bandwidth. There is no “dominant wavelength”, because essentially all the emissions are outside the range of the human eye.

Furthermore, blood and flesh are more opaque at the shorter wavelength, so I had more light absorbed and less sensitivity in the detector, making for a much smaller signal.

Scott Prahl’s estimate of oxyhemoglobin and deoxyhemoglobin molar extinction coefficients, copied from
Tabulated values are available at and general discussion at
The higher the curve here the less light is transmitted. Note that 700nm has very low absorption (290), but 627nm has over twice as high an absorption (683).  Also notice that in the infrared

I had to go back to the red LED (WP3A8HD) that I started with. Here is an example of the waveform I get with that LED, dropping the sampling rate to 10Hz:

The green waveform is the voltage driving the red LED through a 100Ω resistor. The red LED is on for the 1/30th of a second that the output is low, then the IR LED is on (through a 1.5kΩ resistor) for 1/30th of a second, then both are off. The yellow trace shows the voltage at the phototransistor emitter with a 680kΩ pulldown.
This signal seems to have too little amplitude for the variation to be detected with the Arduino (the scale is 1v/division with 0v at the bottom of the grid).

I can try increasing the signal by using 2 or more red LEDs (though the amount of current needed gets large), or I could turn down the IR signal to match the red signal and use an amplifier to get a big enough signal for the Arduino to read.  Sometimes it seems like a 4.7kΩ resistor on the IR emitter matches the output, and sometimes there is still much more IR signal received, depending on which finger I use and how I hold it in the device.
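For a rough sense of the drive currents involved, here is a back-of-the-envelope sketch; the forward-voltage drops used (~1.8V for a red LED, ~1.2V for an IR emitter) are my assumed typical values, not numbers from the Kingbright data sheets:

```python
# Approximate LED current from a 5 V supply through a current-limiting
# resistor: I = (V_supply - V_forward) / R.  The forward drops here are
# assumed typical values, not data-sheet figures.
V_supply = 5.0

def led_current_mA(V_forward, R_ohms):
    return (V_supply - V_forward) / R_ohms * 1e3

print(led_current_mA(1.8, 100))    # red LED through 100 ohms: ~32 mA
print(led_current_mA(1.2, 1500))   # IR through 1.5 kohm: ~2.5 mA
print(led_current_mA(1.2, 4700))   # IR through 4.7 kohm: ~0.8 mA
```

Driving several red LEDs in parallel multiplies that ~32mA accordingly, which is why the current "gets large" quickly.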

I was thinking of playing with some amplification, but I could only get a gain of about 8, and even then I’d be risking saturation of the amplifier.  I think I’ll wait until the headers come and I can try the KL25Z board—the gain of 64 from the higher resolution ADC is likely to be more useful.  If that isn’t enough, I can try adding gain also.  I could also eliminate the “off-state” and just amplify the difference between IR illumination and red illumination.  I wonder if that will let me detect the pulse, though.

2013 July 24

Digital filters for pulse monitor

In Optical pulse monitor with little electronics, I talked a bit about an optical pulse monitor using the Arduino and just 4 components (2 resistors, an IR emitter, and a phototransistor).  Yesterday, I had gotten as far as getting good values for resistors, doing synchronous decoding, and using a very simple low-pass IIR filter to clean up the noise.  The final result still had problems with the baseline shifting (probably due to slight movements of my finger in the sensor):

(click to embiggen) Yesterday’s plot with digital low-pass filtering, using y(t) = (x(t) + 7 y(t-1) )/8.  There is not much noise, but the sharp downward transition at the start of each pulse has been rounded off by the filter, and the baseline wobbles up and down a lot, making the signal hard to process automatically.

Today I decided to brush off my digital filter knowledge, which I haven’t used much lately, and see if I could design a filter using only small integer arithmetic on the Arduino, to clean up the signal more. I decided to use a sampling rate fs = 30Hz on the Arduino, to avoid getting any beating due to 60Hz pickup (not that I’ve seen much with my current setup). The 30Hz choice was made because I do two measurements (IR on and IR off) for each sample, so my actual measurements are at 60Hz, and should be in the same place in any noise waveform that is picked up. (Europeans with 50Hz line frequency would want to use 25Hz as their sampling frequency.)
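The reasoning about line-frequency pickup can be sanity-checked numerically: with the two readings of each sample taken 1/60s apart, both land at the same phase of any 60Hz interference, so the on/off difference cancels it (a toy model of the pickup, not a simulation of the real sensor):

```python
import math

def mains_noise(t, f_line=60.0):
    """60 Hz line-frequency pickup (arbitrary amplitude)."""
    return math.sin(2 * math.pi * f_line * t)

# Each 30 Hz sample is a pair of readings (IR off, then IR on) taken
# 1/60 s apart, so both see the same value of the 60 Hz noise and the
# difference between them is free of it.
for n in range(100):
    t_off = n / 30.0
    t_on = t_off + 1 / 60.0
    diff = mains_noise(t_on) - mains_noise(t_off)
    assert abs(diff) < 1e-9   # the pickup cancels in the difference
```

The same cancellation with 50Hz mains is what motivates the 25Hz sampling rate for Europeans.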

With the 680kΩ resistor that I selected yesterday, the 30Hz sampling leaves plenty of time for the signal to charge and discharge:

The grid line in the center is at 3v. The green trace is the signal on the positive side of the IR LED, so the LED is on when the trace is low (with 32mA current through the pullup resistor). The yellow trace is the voltage at the Arduino input pin: high when light is visible, low when it is dark. This recording was made with my middle finger between the LED and the phototransistor.

I decided I wanted to replace the low-pass filter with a passband filter, centered near 1Hz (60 beats per minute), but with a range of about 0.4Hz (24 bpm) to 4Hz (240bpm). I don’t need the passband to be particularly flat, so I decided to go with a simple 2-pole, 2-zero filter (called a biquad filter). This filter has the transfer function

H(z) = \frac{b_{0} + b_{1}z^{-1} + b_{2}z^{-2}}{1+a_{1}z^{-1}+a_{2}z^{-2}}

To get the gain of the filter at a frequency f, you just compute \left| H( e^{i \omega} ) \right|, where \omega = 2 \pi f / f_{s}.  Note that the z values that correspond to sinusoids are along the unit circle, from DC at e^{0} = 1 up to the Nyquist frequency f_{s}/2 at e^{\pi} = -1.
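That evaluation is easy to carry out directly; here is a small sketch (using the floating-point coefficients from Fisher’s program quoted later in this post, so those values are the only assumption) showing the zeros at DC and Nyquist doing their job:

```python
import cmath
import math

fs = 30.0   # sampling frequency (Hz)

def biquad_gain(f, b, a):
    """|H(e^{i w})| for H(z) = (b0 + b1 z^-1 + b2 z^-2)/(a0 + a1 z^-1 + a2 z^-2)."""
    zinv = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * zinv + b[2] * zinv ** 2
    den = a[0] + a[1] * zinv + a[2] * zinv ** 2
    return abs(num / den)

b = (1, 0, -1)                          # zeros at +1 (DC) and -1 (Nyquist)
a = (1, -1.3802466192, 0.4327386423)    # denominator from Fisher's program

print(biquad_gain(0.0, b, a))     # 0: the DC zero kills any constant offset
print(biquad_gain(fs / 2, b, a))  # ~0: the Nyquist zero
print(biquad_gain(1.0, b, a))     # largest near the 1 Hz passband center
```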

The filter is implemented as a simple recurrence relation between the input x and the output y:

y(t) = b_{0} x(t) + b_{1}x(t-1) + b_{2}x(t-2) - a_{1}y(t-1) - a_{2}y(t-2)

This is known as the “direct” implementation.  It takes a bit more memory than the “canonical” implementation, but has some nice properties when used with small-word arithmetic—the intermediate values never get any further from 0 than the output and input values, so there is no overflow to worry about in intermediate computations.

I tried using an online web tool to design the filter, and I got some results, but not everything on the page was working.  One can’t very well complain to Tony Fisher about the maintenance, since he died in 2000. I tried using the tool at to look at filter gain, but it has an awkward x-axis (linear instead of logarithmic frequency) and was a bit annoying to use.  So I looked at results from Tony Fisher’s program, then used my own gnuplot script to look at the response for the filter parameters I was interested in.

The filter program gave me one obvious result (that I should not have needed a program to realize): the two zeros need to be at DC and the Nyquist frequency—that is at ±1.  That means that the numerator of the transfer function is just 1-z^{-2}, and b0=1, b1=0, and b2=–1.  The other two parameters it gave me were a2=0.4327386423 and a1=–1.3802466192.  Of course, I don’t want to use floating-point arithmetic, but small integer arithmetic, so that the only division I do is by powers of 2 (which the compiler turns into a quick shift operation).

I somewhat arbitrarily selected 32 as my power of 2 to divide by, so that my transfer function is now

H(z) = 32 \frac{1 - z^{-2}}{32+A_{1}z^{-1}+A_{2}z^{-2}}

and my recurrence relation is

y(t) = \left(32 \left( x(t) - x(t-2) \right) - A_{1} y(t-1) - A_{2} y(t-2) \right)/32

with A1 and A2 restricted to be integers.  Rounding the numbers from Fisher’s program suggested A1=-44 and A2=14, but that centered the filter at a bit higher frequency than I liked, so I tweaked the parameters and drew plots to see what the gain function looked like.  I made one serious mistake initially—I neglected to check that the two poles were both inside the unit circle (they were real-valued poles, so the check was just applying the quadratic formula).  My first design (not the one from Fisher’s program) had one pole outside the unit circle—it looked fine on the plot, but when I implemented it, the values grew until the word size was exceeded, then oscillated all over the place.  When I realized what was wrong, I checked the stability criterion and changed the A2 value to make the pole be inside the unit circle.

I eventually ended up with A1=-48 and A2=17, which centered the filter at 1, but did not have as high an upper frequency as I had originally thought I wanted:

(click to embiggen) The gain of the filter that I ended up implementing has -3dB points at about 0.43 and 2.15 Hz.
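The quadratic-formula stability check mentioned above works out as follows for the final coefficients: the poles of H(z) are the roots of 32 z^2 - 48 z + 17 (a quick check, using only numbers from the text):

```python
import math

def real_poles(a0, a1, a2):
    """Roots of a0 z^2 + a1 z + a2 by the quadratic formula
    (this design happens to have real poles)."""
    disc = a1 * a1 - 4 * a0 * a2
    assert disc >= 0, "complex poles; check magnitudes differently"
    r = math.sqrt(disc)
    return (-a1 + r) / (2 * a0), (-a1 - r) / (2 * a0)

p_hi, p_lo = real_poles(32, -48, 17)
print(p_hi, p_lo)   # ~0.927 and ~0.573

# Both magnitudes below 1: the filter is stable.
assert max(abs(p_hi), abs(p_lo)) < 1
```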

Here is the gnuplot script I used to generate the plot—it is not fully automatic (the xtics, for example, are manually set). Click it to expand.

fs = 30	# sampling frequency
A0=32.  # multiplier (use power of 2)
A1=-48. # denominator coefficients from the final design
A2=17.
j = {0,1}       # imaginary unit, used in gain() and phase() below

peak = fs/A0	# approx frequency of peak of filter

set title sprintf("Design of biquad filter, fs=%3g Hz",fs)

set key bottom center
set ylabel "gain [dB]"
unset logscale y
set yrange [-20:30]

set xlabel "frequency [Hz]"
set logscale x
set xrange [0.01:0.5*fs]

set xtics add (0.43, 2.15)
set grid xtics

biquad(zinv,b0,b1,b2,a0,a1,a2) = (b0+zinv*(b1+zinv*b2))/(a0+zinv*(a1+zinv*a2))
gain(f,b0,b1,b2,a0,a1,a2) = abs( biquad(exp(j*2*pi*f/fs),b0,b1,b2,a0,a1,a2))
phase(f,b0,b1,b2,a0,a1,a2) = imag(log( biquad(exp(j*2*pi*f/fs),b0,b1,b2,a0,a1,a2)))

plot 20*log(gain(x,A0,0,-A0,  A0,A1,A2)) \
		title sprintf("%.0f (1-z^-2)/(%.0f+ %.0f z^-1 + %.0f z^-2)", \
			A0, A0, A1, A2), \
	20*log(gain(peak,A0,0,-A0,  A0,A1,A2))-3 title "approx -3dB"

I wrote a simple Arduino program to sample the phototransistor every 1/60th of a second, alternating between IR off and IR on. After each IR-on reading, I output the time, the difference between on and off readings, and the filtered difference. (click on the code box to view it)

#include "TimerOne.h"

#define rLED 3
#define irLED 5

// #define CANONICAL   // use canonical, rather than direct implementation of IIR filter
// Direct implementation seems to avoid overflow better.
// There is probably still a bug in the canonical implementation, as it is quite unstable.

#define fs (30) // sampling frequency in Hz
#define half_period (500000L/fs)  // half the period in usec

#define multiplier  32      // power of 2 near fs
#define a1  (-48)           // -(multiplier+k)
#define a2  (17)            // k+1

volatile uint8_t first_tick;    // Is this the first tick after setup?

void setup(void)
{   Serial.begin(115200);
    pinMode(irLED,OUTPUT);
//    pinMode(rLED,OUTPUT);
//    digitalWrite(rLED,1);  // Turn RED LED off
    digitalWrite(irLED,1); // Turn IR LED off

    Serial.print("# bandpass IIR filter\n# fs=");
    Serial.print(fs);
    Serial.print(" Hz, period=");
    Serial.print(2*half_period);
    Serial.print(" usec\n#  H(z) = ");
    Serial.print(multiplier);
    Serial.print(" (1 - z^-2) / (");
    Serial.print(multiplier);
    Serial.print(" + ");
    Serial.print(a1);
    Serial.print(" z^-1 + ");
    Serial.print(a2);
    Serial.println(" z^-2)");
#ifdef CANONICAL
    Serial.println("# using canonical implementation");
#else
    Serial.println("# using direct implementation");
#endif
    Serial.println("#  microsec raw   filtered");

    first_tick = 1;
    Timer1.initialize(half_period);
    Timer1.attachInterrupt(half_period_tick);
}

#ifdef CANONICAL
// for canonical implementation
volatile int32_t w_0, w_1, w_2;
#else
// For direct implementation
volatile int32_t x_1,x_2, y_0,y_1,y_2;
#endif

void loop()
{   // all the work is done in the timer interrupt routine
}

volatile uint8_t IR_is_on=0;    // current state of IR LED
volatile uint16_t IR_off;       // reading when IR is off (stored until next tick)

void half_period_tick(void)
{   uint32_t timestamp=micros();

    uint16_t IR_read;
    IR_read = analogRead(0);
    if (!IR_is_on)
    {   IR_off=IR_read;
        digitalWrite(irLED,0); // Turn IR LED on
        IR_is_on = 1;
        return;
    }
    digitalWrite(irLED,1); // Turn IR LED off
    IR_is_on = 0;

    Serial.print(timestamp);
    Serial.print(" ");

    int16_t x_0 = IR_read-IR_off;
    Serial.print(x_0);
    Serial.print(" ");

#ifdef CANONICAL
    if (first_tick)
    {  // I'm not sure how to initialize w for the first tick
       w_2 = w_1 = multiplier*x_0/ (1+a1+a2);
       first_tick = 0;
    }
    w_0 = multiplier*x_0 - a1*w_1 - a2*w_2;
    int32_t y_0 = w_0 - w_2;
    w_2 = w_1;
    w_1 = w_0;
#else
    if (first_tick)
    {   x_2 = x_1 = x_0;
        y_2 = y_1 = 0;
        first_tick = 0;
    }
    y_0 = multiplier*(x_0-x_2) - a1*y_1 - a2*y_2;
    y_0 /= multiplier;
    x_2 = x_1;
    x_1 = x_0;
    y_2 = y_1;
    y_1 = y_0;
#endif
    Serial.println(y_0);
}

Here are a couple of examples of the input and output of the filtering:

(click to embiggen) The input signals here are fairly clean, but different runs often get quite different amounts of light through the finger, depending on which finger is used and the alignment with the phototransistor. Note that the DC offset shifts over the course of each run.


(click to embiggen) After filtering the DC offset and the baseline shift are gone. The two very different input sequences now have almost the same range. There is a large, clean downward spike at the beginning of each pulse.

Overall, I’m pretty happy with the results of doing digital filtering here. Even a crude 2-zero, 2-pole filter using just integer arithmetic does an excellent job of cleaning up the signal.

Optical pulse monitor with little electronics

In yesterday’s blog post, I talked mainly about what my son did with his time yesterday, mentioning only the small amount of debugging help I gave him.  Today I’ll post about what I did with most of my time yesterday.

This year, I am hoping to lead a 2-unit freshman design seminar for bioengineering students.  (Note: I did not say “teach” here, as I’m envisioning more of a mentoring role than a specific series of exercises.)  One thing I’m doing is trying to come up with design projects that freshmen with essentially no engineering skills can do as a team.  They may have to learn something new (I certainly hope they do!), but they should only spend a total of 60 hours on the course, including class time.  Since I want to spend some of class time on lab tours, lab safety, using the library resources, and how to work in a group effectively, there is not a lot of time left for the actual design and implementation.

One of the things I found very valuable in designing the Applied Circuits course was doing all the design labs myself, sometimes several times, in order to tweak the specs and anticipate where the students would have difficulty.  I expect to do some redesign of a couple of the circuits labs this year, but that course is scheduled for Spring (and finally got official approval this week), while the (not yet approved) freshman seminar is scheduled for Winter.  So I’m now experimenting with projects that I think may be suitable for the freshman design seminar.

These students cannot individually be expected to know anything useful, high school in California being what it is.  As a group, though, I think I can expect a fair amount of knowledge of biology, chemistry, and physics, with perhaps a sprinkling of math and computer programming.  I can’t expect any electronics knowledge, and we won’t have access to a machine shop—we may get permission for the students to use a laser cutter under supervision.  We can probably get some space in an electronics lab, but maybe not in a bio lab (the dean took away the department’s only teaching lab, with a “promise” to build a bigger one—but it is unlikely to be available for the freshmen by Winter quarter—I miss our first dean of engineering, as we seem to have had a series of incompetent deans since then).

So I’m looking for projects that can essentially be built at home with minimal tools and skills, but that are interesting enough to excite students to continue to higher levels in the program.  And I want them to be design projects, not kit-building or cookbook projects, with multiple possible solutions.

So far, there have been a couple of ideas suggested, both involving a small amount of electronics and some mechanical design:

  • An optical growth meter for continuously monitoring a liquid culture of bacteria or yeast. The electronics here is just a light source (LED or laser diode with current-limiting resistor), a phototransistor,  a current-to-voltage converter for the phototransistor (one resistor), and a data logger (like the Arduino Data Logger we use for the circuits course).  The hard part is coming up with a good way to get uniform sampling of the liquid culture while it is in an incubator on a shaker table.  There are lots of possible solutions: mounting stuff around flasks, immersed sensors, bending glass tubing so that the swirling culture is pumped through the tubes, external peristaltic pumps, … .  Design challenges include how the parts of the apparatus that touch the culture will be sterilized, how to keep things from falling apart when they are shaken for a couple of days, and so forth.   I’ve not even started trying to do a design here, though I probably should, as the mechanical design is almost all unfamiliar to me, and I’d be a good example of the cluelessness that the students would bring to the project.
  • An optical pulse sensor or pulse oximeter.  This is the project I decided to work on yesterday. The goal is to shine light through a finger and get a good pulse signal.  (I tried this last summer, making a very uncomfortable ear clip and doing a little testing before rejecting the project for the circuits course.)  If I can get good pulse signals from both red and IR light sources, I should be able to take differences or ratios and get an output proportional to blood oxygenation.

I decided yesterday to try to build a pulse monitor with almost no electronics.  In particular, I wanted to try building without an op amp or other amplifier, feeding the phototransistor signal directly into an Arduino analog in.  (I may switch to using the KL25Z for this project, as the higher resolution on the analog-to-digital converter means I could use smaller signals without amplification.)

A phototransistor is essentially a light-to-current converter.  The current through the phototransistor is essentially linear in the amount of light, over a pretty wide range. The Arduino analog inputs are voltage sensors, so we need to convert the current to a voltage.  The simplest way to do this is to put a series resistor to ground, and measure the voltage across the series resistor.  The voltage we see is then the current times the resistance.  Sizing the resistor is a design task—how big a current do we get with the intensity of light through the finger, and how much voltage do we need. The voltage needed can be estimated from the resolution of the analog-to-digital converter, but the amount of light is best measured empirically.
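A rough sizing calculation, using only numbers that appear elsewhere in these posts (the 680kΩ pulldown, the 5v supply, and the Arduino’s 10-bit ADC):

```python
# Current-to-voltage conversion with a series (pulldown) resistor: V = I*R.
R = 680e3        # pulldown resistor (ohms)
V_rail = 5.0     # Arduino supply voltage / ADC reference

# The sensor clips once the photocurrent pulls the output to the rail:
I_sat_uA = V_rail / R * 1e6
print(I_sat_uA)          # ~7.35 uA saturates the 680k resistor

# Smallest photocurrent change visible to the 10-bit ADC (1 count over 0-5 V):
I_lsb_nA = (V_rail / 1024) / R * 1e9
print(I_lsb_nA)          # ~7.2 nA of photocurrent per ADC count
```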

One problem that the pulse monitor faces is huge variations in ambient light.  Ideally the phototransistor gets light only from LED light shining through the finger, but that is a bit hard to set up while breadboarding.  Distinguishing the ambient light from the light through the finger can be difficult. Yesterday, I tried to reduce that problem by using “synchronous decoding”.  That is, I turned the LED on and off, and measured the difference between the phototransistor current with the LED on and with the LED off.  Using the Arduino to control the LED as well as to read the voltage is fairly easy—these are the sorts of tasks that are starter projects on the Arduino, so should be within the capabilities of the freshmen (with some learning on their part).

I also looked at the phototransistor output with my BitScope oscilloscope, so that I could see the waveform that the Arduino was sampling two points from.  Here is an example waveform:

The x-axis is 20ms/division, and the y-axis 1v/division, with the center line at 2v.
I put in a 50% duty cycle (20ms on, 20ms off).  The IR light is shining through my index finger.

For the above trace, I used a 680kΩ pulldown resistor to convert the current to voltage. I played a lot with different resistors yesterday, to get a feel for the tradeoffs.  Such a large resistor provides a large voltage swing for a small change in current, but the parasitic capacitance makes for rather slow RC charge/discharge curves.  Using larger resistors does not result in larger swings (unless the frequency of the input is reduced), because the RC time constant gets too large and the slowly changing signal does not have time to make a full swing.  I tried, as an experiment, adding a unity-gain buffer, so that the BitScope and Arduino inputs would not be loading the phototransistor.  This did not make much difference, so most of the parasitic capacitance is probably in the phototransistor itself.  One can get faster response for a fixed change in light only by decreasing the voltage swing, which would then require amplification to get a big enough signal to be read by the Arduino.  (It may be that the extra 6 bits of resolution on the KL25Z board would allow a resistor as low as 20kΩ and much faster response.)

Note that ambient light results in a DC shift of the waveform without a change in shape, until it gets bright enough that the current is more than 5v/680kΩ (about 7µA), at which point the signal gets clipped.  Unfortunately, ordinary room lighting is enough to saturate the sensor with this large a series resistor.  I was able to get fairly consistent readings by using the clothespin ear clip I made last summer, clamped open to make it the right size for my finger.  I did even better when I put the clip and my hand into a camera bag that kept out most of the ambient light.  Clearly, mechanical design for eliminating ambient light will be a big part of this design.

One might think that the 2v signal seen on the BitScope is easily big enough for pulse detection, but remember that this is not the signal we are interested in.  The peak-to-peak voltage is proportional to how transparent the finger is—we are interested in the variation of that amplitude with blood flow.  Here is an example plot of the sort of signal we are looking at:

The pulse here is quite visible, but it is only about a 15–30-count change in the 300-count amplitude signal. Noise from discretization (and other sources) makes the signal hard to pick out automatically.  This signal was recorded with the Arduino data logger, but only after I had modified the data-logger code to turn the IR emitter on and off and to report the differences between the readings rather than the readings themselves. Note the sharp downward transition (increased opacity due to more blood) at the beginning of each pulse.

To get a bigger, cleaner signal, I decided to do some very crude low-pass filtering on the Arduino. I used the simplest of infinite-impulse-response (IIR) filters: Y(t) = a X(t) + (1−a) Y(t−1). Because division is very slow on the Arduino, I limited myself to simple shifts for the division: a = 1/2, 1/4, or 1/8. To avoid losing even more precision, I actually output X(t) + 7 Y(t−1), then divided by 8 to get Y(t). I also used a 40 ms sampling period, with the IR emitter on for 20 ms, then off for 20 ms (the waveform shown in the oscilloscope trace above).

With digital low-pass filtering, the pulse signal is much cleaner, but the sharp downward transition at the start of each pulse has been rounded off by the filter. This data was captured not with the Arduino Data Logger but by cutting and pasting from the Arduino serial monitor, which involves simpler (hence more feasible for freshmen) programming of the Arduino.

I now have a very clean pulse signal, using just the Arduino, an IR emitter, a phototransistor, and two resistors. There is still a huge offset, as the signal is 200 counts out of 4600, and the offset fluctuates slowly.  To get a really good signal, I’d want a bandpass filter that passes 0.3 Hz to 3 Hz (roughly 18 bpm–180 bpm), but designing that digital filter would be beyond the scope of a freshman design seminar.  Even the simple IIR filter is pushing a bit here.
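One cheap trick that might help with the drifting offset (a speculative sketch, not something I have implemented or tested on the hardware): track the baseline with a second, much slower one-pole low-pass and subtract it, which acts as a crude high-pass filter:

```cpp
// Crude baseline removal: track the slow drift with a very slow
// one-pole low-pass (a = 1/64) and subtract it from the signal.
// Subtracting a low-passed copy acts as a rough high-pass filter.
struct BaselineRemover {
    long baseline;
    bool primed;
    BaselineRemover() : baseline(0), primed(false) {}

    long step(long y) {
        if (!primed) {              // start the baseline at the first sample
            baseline = y;
            primed = true;
            return 0;
        }
        baseline += (y - baseline) / 64;  // very slow tracking of the drift
        return y - baseline;              // pulse wiggles survive, offset goes
    }
};
```

With plain integer division the baseline stops moving once the error is under 64 counts, so a residual offset of up to 63 counts can remain; keeping the baseline at 64× scale (the same trick used for the low-pass filter above) would fix that.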

I’m not sure how to go from here to the pulse oximeter (using both an IR and a red LED) without fancy digital filtering.  Here is the circuit so far:

Although the 120Ω resistor allows up to 32mA, I didn’t believe that the Arduino outputs could actually sink that much current—20 mA is what the spec sheet allows. Checking with the BitScope, I see a 3840mV drop across the resistor, for 32mA. Note: I used pins D3 and D5 of the Arduino so that I could use pulse-width modulation (PWM) if I wanted to. (Schematic drawn with Digikey’s SchemeIt.)

(Update 2015-Jul-5: I just noticed that the schematic used a PNP phototransistor symbol, rather than an NPN one—I’d make my students redo their reports for a mistake that big!)

