Gas station without pumps

2015 March 28

Ideas about vibration detection

Filed under: Uncategorized — gasstationwithoutpumps @ 14:28

Yesterday, my son and I were looking at the “Quantum Bit” toys sold by ThinkGeek.  These are LED lights with plastic cases that flash briefly then fade after a second or so.  We tried reverse-engineering them from what we could see through the clear cases, then building the electronics to see if we got it right.

A rather blurry closeup of the “quantum bit” toy. The two white LEDs are triggered by the vibration switch in the middle. There are two coin cells on the back of the PC board to power the device. In addition to the vibration switch, the electronics seem to consist of 2 capacitors, 3 resistors, and a transistor.

The key to the device is a vibration switch that consists of a spring coiled around, but not touching, a metal wire.  Shaking the device causes the spring to move around and touch the wire briefly. Those brief contacts are turned into longer pulses by the electronics.

This cutaway image, copied from http://www.adafruit.com/images/970x728/1767-02.jpg , shows the working part of a typical vibration switch.

The first thing I did was to look at the flashes with a phototransistor, a 1kΩ resistor, and an oscilloscope (because I happened to have a phototransistor wired up from a previous project). I could see very rapid turn-on for the LEDs (faster than the response time of the phototransistor circuit), followed by moderate-speed exponential decay, followed by a slower, very dim fade:

This 50ms/division, 2V/division trace shows a couple of bounces of the contact, followed by the exponential decay. It took me several tries to get a clean single hit like this—most often I got a series of decaying pulses, as the spring bounced back and forth and made multiple contacts.

From the decay curve, I estimated a decay time constant of about 28msec initially, slowing down as voltage dropped. We guessed that the circuit consisted of a couple of capacitors connected by the vibration switch: one slowly charged by the coin-cell batteries in the device, the other rapidly charged from the first when the vibration switch makes contact, then slowly discharged through a resistor and the base-emitter junction of an NPN transistor. We guessed which resistor was the one that produced the decay, and (with some struggle through the distorting plastic casing) read its value as 3.3kΩ. That gave us a capacitance around 8µF (other measurements of the decay got different time estimates, so we guess that the capacitor is 4.7µF or 10µF, those being popular sizes for ceramic capacitors).
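The capacitance estimate is just a back-of-the-envelope application of τ = RC, using the values from the measurements above:

```python
# Estimate the timing capacitor from the measured RC decay: tau = R * C,
# so C = tau / R.
tau = 28e-3   # seconds, initial decay constant read off the scope trace
R = 3.3e3     # ohms, the resistor value read through the plastic case
C = tau / R
print(f"C = {C * 1e6:.1f} uF")  # about 8.5 uF, between the standard 4.7 uF and 10 uF sizes
```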

The other two resistors are 22Ω and are probably series resistors to limit current through the LEDs (that is probably not necessary, given the high internal resistance of the batteries).

We tried duplicating the circuit using a different LED and a pushbutton in place of the vibration switch:

The resistor R1 limits the current through the LED and also sets the charging time for C1—it approximates the internal resistance of a pair of coin cells. C2 and R2 provide RC decay (down to the threshold voltage for the base-emitter junction). Once the voltage has decayed enough, the base current is limited by the characteristics of the transistor rather than the resistor, and the decay slows way down.

This circuit worked fine and gave flashes of light that lasted about the same amount of time as the original device. I tried reducing R1 to 50Ω to get brighter flashes with the green and blue LEDs of an RGB LED, and the flash and fade looked a lot like the white LEDs of the toy.
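The flash duration can be sketched with the standard RC discharge formula. The supply and threshold voltages below are assumptions (two 3V lithium coin cells in series and a ~0.6V base-emitter threshold), not measured values:

```python
import math

# Time for C2 to discharge through R2 from the supply voltage down to the
# base-emitter threshold, after which the transistor starves for base current:
#   t = R * C * ln(V0 / Vth)
R2 = 3.3e3    # ohms
C2 = 8.5e-6   # farads, the capacitance estimated from the decay constant
V0 = 6.0      # volts, ASSUMED: two 3 V lithium coin cells in series
Vth = 0.6     # volts, ASSUMED: typical base-emitter turn-on threshold
t = R2 * C2 * math.log(V0 / Vth)
print(f"t = {t * 1e3:.0f} ms")  # roughly 65 ms of bright flash
```

That order of magnitude matches the fast decay visible in the oscilloscope trace, before the slow dim fade takes over.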

While we were looking at vibration switches, we decided to redesign the SparkFun “Wake on Shake” device, which uses an ATtiny2313A microprocessor and an ADXL362 accelerometer to control a pFET.  The idea is that the microprocessor samples the acceleration occasionally and turns on the pFET for 5 seconds.  They claim that the device takes only 2µA at 3.7V, which seems almost reasonable, since the ADP60-3.3 regulator takes 1µA, the ATtiny takes 2µA (assuming the microprocessor is powered down without a watchdog timer, woken by the ADXL362), and the ADXL362 takes 0.27µA when in motion-triggered wake-up mode.  But using microprocessors and accelerometers for this task seems like overkill—the whole thing should be doable by a vibration switch and a few analog parts, at considerably lower cost.

We set out to do a design with a somewhat wider voltage range and lower power budget (say 1.8V–5.5V and only 1µA) controlling a beefier pFET (lower on-resistance) at a lower parts cost. The design of the LED flashing circuit won’t work for us, because we want the pFET either all the way on or all the way off—not heating up in the linear region.  The switch and RC circuit are fine, but we can’t read the RC delay with just an NPN transistor—it doesn’t provide a sharp transition.  Instead we chose a low-power comparator, the TS881, which can operate on less than 400nA and has a supply voltage range of 0.85V to 5.5V. Initially, I just planned to use the output of the comparator to drive the pFET gate directly, but my son wanted to add the capability to have an external active-high “wake” signal that kept the output on, and that turned off quickly (not with the 5-second delay) when removed (possibly under the control of the circuit that normally gets 5 seconds of power, so that it can keep its power on until it is done).  To add this extra functionality, we put in another stage between the comparator and the pFET:

After C1 is charged by the vibration switch, it discharges through R1. The comparator output is high as long as the voltage on C1 is more than 1/3 of the supply voltage, which turns on Q2, which then turns on the pFET.

The pFET was chosen to have a very low threshold voltage (so it could be turned on even with low-voltage supplies—the pFET threshold is the main limitation on how low the voltage can go) and a low ON-resistance, so that moderately high currents could be handled. With a minimal pad layout, I calculated that the pFET could handle about 2A. With 2oz copper and a square inch of board space on the back as a heat sink, perhaps 4A. Note: if the pFET is used to power anything with significant current, one would want a much larger bypass capacitor than the 4.7µF shown here—something like a low ESR aluminum-polymer electrolytic capacitor with over 100µF of capacitance.
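A rough dissipation check lies behind those current limits. The on-resistance used here is an assumption (the post doesn’t name the pFET part number); very-low-threshold pFETs in this class typically have a few tens of milliohms:

```python
# I^2 * R dissipation in the pFET at the two current levels mentioned above.
# Rds_on is an ASSUMED value, not taken from an actual part's datasheet.
Rds_on = 0.050  # ohms, assumed on-resistance
for amps in (2.0, 4.0):
    watts = amps ** 2 * Rds_on
    print(f"{amps:.0f} A -> {watts:.2f} W in the pFET")
# 0.2 W is plausible for a minimal pad layout; 0.8 W needs the copper heat sink.
```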

The NPN transistors were chosen to be a cheap pair in a single package (reducing assembly cost slightly).

The timing capacitor was chosen to be a film capacitor, to get better precision on the capacitance and less temperature dependence. Unfortunately, this limits the amount of capacitance, unless a large through-hole capacitor is used. That in turn requires a large resistor to get the RC time constant, which puts strong constraints on how much leakage current is permitted on anything connected to the charged node.  If the 5-second on-time can be allowed a large fluctuation in duration, then C1 could be a 10µF ceramic capacitor and R1 a 470kΩ resistor, but the bypass capacitor on the other side of the switch would have to be made much larger, to ensure that C1 is fully charged in the momentary contact of the vibration switch.
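With the comparator threshold at 1/3 of the supply voltage, the on-time follows from the exponential decay as t = R1·C1·ln(3), independent of the supply voltage. Using the relaxed-tolerance values just discussed:

```python
import math

# On-time of the timer: C1 discharges through R1 until it crosses the
# comparator threshold of Vdd/3, so t_on = R1 * C1 * ln(3) regardless of Vdd.
R1 = 470e3   # ohms
C1 = 10e-6   # farads (the 10 uF ceramic option discussed above)
t_on = R1 * C1 * math.log(3)
print(f"t_on = {t_on:.2f} s")  # about 5.2 s, close to the 5-second target
```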

The voltage divider takes up to 183nA (5.5V across about 30MΩ of total divider resistance), the comparator up to 750nA, and the leakage through the NPN transistors when off about 20nA each, and about 300nA leakage through the pFET, for a power budget of about 1.3µA when the circuit is not activated (SparkFun hadn’t counted the pFET leakage, and adding that to our power budget put us over our arbitrary 1µA limit). We originally had nFETs instead of NPN transistors for the comparator output and WAKE inputs, but think that they would leak more current in the presence of noise on the WAKE line. The pullup resistor adds up to 2.5mA of current when the pFET is turned on, which is probably a bad thing if the load is small.
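Summing the quoted idle currents reproduces the budget (all values in nanoamps; the 183nA divider figure corresponds to roughly 30MΩ of total divider resistance at 5.5V):

```python
# Idle-current budget, summing the figures quoted above (all in nanoamps).
divider    = 183       # voltage divider at 5.5 V (implies ~30 Mohm total)
comparator = 750       # TS881 worst-case supply current
npn_off    = 2 * 20    # two NPN transistors, ~20 nA leakage each when off
pfet_leak  = 300       # pFET leakage, the term SparkFun's budget omitted
total = divider + comparator + npn_off + pfet_leak
print(f"total = {total / 1000:.2f} uA")  # about 1.27 uA, over the 1 uA goal
```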

If you don’t need the functionality of the WAKE line to hold the circuit on, a simpler circuit will do, without the power consumption of the pullup resistors:

Without the need for “wake” functionality, the extra OR-stage of bipolar transistors can be eliminated, if the inputs of the comparator are swapped.

If the WAKE function is supposed to do the same thing as shaking (providing about a 5-second ON time, rather than turning off quickly), then the bare circuit above can be made to have the WAKE function:

Here an nFET is put in parallel with the vibration switch, so that the WAKE signal has the same effect as vibration.  My one concern is that the leakage currents through the source of the nFET will make the RC decay computation wrong—both drain-source and gate-source leakage currents could be a problem.  Unfortunately, these are specified on data sheets at quite different operating conditions than we have here.  One could also use an optoisolator in place of the nFET, but the dark current may be enough to charge the capacitor.

Warning: none of the Shake’n’Wake circuits here have been tested (only the LED flashing circuits)—this is all paper design. I think that everything here should work (except the nFET for WAKE in the last schematic, which I suspect has too much leakage current), but it has not been built and tested. Long experience leads me not to trust paper designs, so I recommend that you not rely on these designs until you have prototyped and tested them. I probably won’t bother to, as most of the parts are surface-mount parts, and so a bit of a pain to work with.

2015 March 27

Bogus comparison of Word and LaTeX

Filed under: Uncategorized — gasstationwithoutpumps @ 09:36

An article was recently brought to my attention that claimed to compare LaTeX to Word for preparing manuscripts: PLOS ONE: An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development. The authors claim,

To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors.

It turns out to be a completely bogus study—they compared typing or typesetting tasks, not authoring tasks. There was no inserting new figures or equations into the middle of a draft, no rearranging sections, no changing citation styles—not even any writing—just copying text from an existing typeset document. It is very misleading to say that the “LaTeX users … wrote less text”, as none of the subjects were writing, just copying, which uses a very different set of skills.

I don’t think that there is much question that for simply retyping an existing document, a WYSIWYG editor like Word is better than a document compiler like LaTeX, but that has very little to do with the tasks of an author. (And even they noted that the LaTeX users enjoyed the task more than the Word users.)

For those of us who use LaTeX on a regular basis, the benefits do not come from speeding up our typing—LaTeX is a bit slower to work with than a WYSIWYG editor.  The advantages come from things like automatic renumbering of figures and references to them, floating figures that don’t require manual placement (except when there are too many figures—then having to do manual placement with LaTeX is a pain), good math handling, automatic formatting of section and chapter headings, being able to define macros for commonly used actions, and the versatility of having a programming language available. For example, I have a macro that I like to use for proper formatting of conditional probability expressions, and another that I use for references to sections, so that I can switch between “Section 3.2”, “Sec. 3.2”, and “§3.2” through an entire book with a change to just one line in the file.
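Macros of that sort might look like the following sketch (the macro names here are illustrative, not the actual macros from my files):

```latex
% Illustrative versions of such macros (names are made up, not the real ones).
% Changing the one definition of \secref changes every reference in the book:
\newcommand{\secref}[1]{Section~\ref{#1}}  % or Sec.~\ref{#1}, or \S\ref{#1}
% A conditional-probability macro that gets the spacing of the bar right:
\newcommand{\given}{\;\middle|\;}
\newcommand{\cprob}[2]{P\!\left(#1 \given #2\right)}
```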

LaTeX also has the advantage of having a much longer life span than Word—I can still run 30-year-old LaTeX files and print them, and I expect that the files I create now will still be usable in 30 years (if anyone still cares), while Word files become unusable in only 10-to-20 years.  LaTeX is also free and runs on almost any computer (the original TeX was written for machines that by modern standards were really tiny—64k bytes of RAM).

For those who want multiple-author simultaneous access (like Google Docs), there are web services like sharelatex.com that permit multiple authors to edit a LaTeX document simultaneously. I’ve used sharelatex.com with a co-author, and found it to be fairly effective, though the server behind the rendering is ridiculously slow—40 seconds for a 10-page document on the web service, while I can compile my whole 217-page textbook three times in about 12 seconds on my 2009 MacBook Pro.

Like the emacs vs. vi wars, the LaTeX vs. Word camps are more about what people are used to and what culture they identify with than the actual advantages and disadvantages of the different tools. Bogus studies like the one in PLoS One don’t really serve any positive function (unless you happen to be a monopoly software seller like Microsoft).


Followup on plagiarism

Filed under: Uncategorized — gasstationwithoutpumps @ 08:26

In Plagiarism detected, I mentioned that an article in Nature Biotechnology plagiarizes from my blog, specifically Supplementary Material page 6 from Segmenting noisy signals from nanopores. I got email from the last author this week, explaining the situation:

We saw your recent blog post about our paper and feel that we owe you an explanation.

At the time we read your level-finding blog post we had already implemented a recursive level-finding algorithm that we have been using in our lab.  Our algorithm made comparison of two data segments using a T-test. We came across your blog and found that the logP value was more useful than the T-test.  We wanted to cite your blog, but Nature’s online publication guidelines made it seem that “Only articles that have been published or submitted to a named publication should be in the reference list” (http://www.nature.com/nature/authors/gta/#a5.4). While we wanted to present our methods as transparently as possible, we had no intention of claiming your work as ours. We should have made efforts to contact you and NBT editors about how to best cite your contribution.

I have contacted NBT to see if a post-publication citation to your blog can be made and I will keep you posted on this.

We noted your recent BioarXiv manuscript and will refer to it in future publications using logP-test level-finders.

So one of the two corrections I was seeking has been met (an apology from the authors), and the other (a citation to the blog) is being sought by the authors. It seems that Nature has a very poor policy about citations, discouraging correct attribution.  Yet another reason to consider them a less desirable family of journals (their rip-off pricing for libraries and their preference for sensational articles over careful research are others).

On a related front, referees for our journal submission of the segmenter paper pointed out that several of the ideas are not new (hardly surprising), and that the basic algorithm has been around for quite a while.  They pointed us to a paper by Killick, Fearnhead, and Eckley (http://arxiv.org/pdf/1101.1438.pdf), which supposedly has an exact algorithm that is as efficient as binary segmentation (which only approximates the best breakpoints). I thank the referees for the pointer—that is the sort of thing peer review is supposed to be good for: pointing out to authors where they have missed relevant prior literature.

I’ve only glanced through the paper (I had 16 senior theses to grade in 4 days, plus trying to get a new draft of my book for my applied electronics course done in time for classes starting next Monday), so I can’t say anything about the algorithm they present, but they do give a citation for the binary algorithm that dates back to 1974:

Scott, A. J. and Knott, M. (1974). A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30(3):507–512.

The online version of the journal only goes back to 1999, so I’ve not confirmed that the paper does contain the same algorithm, but it would not surprise me if it did—the binary split method is fairly obvious once the basics of splitting on log-likelihood are understood.  I had looked for papers on the technique and not found them (which surprised me), but I didn’t look as hard as I should have. I did not find the right entry points to the literature—it is scattered over many different disciplines and I relied too much on the one textbook that I did find to give me pointers. And I didn’t read all the textbook, so I may have missed the appropriate pointers—though they do not cite Scott and Knott, so maybe the textbook authors missed an important chunk of the literature, too.
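For readers who haven’t seen it, the binary split idea is simple enough to sketch in a few lines. This is a generic sum-of-squares (Gaussian log-likelihood) version of greedy binary segmentation, not the logP-based cost from our paper:

```python
def sse(x):
    """Sum of squared residuals around the segment mean: the (scaled)
    negative Gaussian log-likelihood of modeling x as a single level."""
    n = len(x)
    if n < 2:
        return 0.0
    mean = sum(x) / n
    return sum((v - mean) ** 2 for v in x)

def binary_segment(x, threshold):
    """Greedy binary segmentation: split at the point that most reduces
    the cost, recurse on both halves, and stop when the best gain drops
    below threshold.  Returns a sorted list of breakpoint indices."""
    whole = sse(x)
    best_gain, best_k = 0.0, None
    for k in range(1, len(x)):
        gain = whole - sse(x[:k]) - sse(x[k:])
        if gain > best_gain:
            best_gain, best_k = gain, k
    if best_k is None or best_gain < threshold:
        return []
    left = binary_segment(x[:best_k], threshold)
    right = binary_segment(x[best_k:], threshold)
    return left + [best_k] + [best_k + b for b in right]

# Two clean levels: one breakpoint found, at index 20.
signal = [0.0] * 20 + [5.0] * 20
print(binary_segment(signal, threshold=10.0))  # [20]
```

Because each split is chosen greedily, the breakpoints are only approximate—which is exactly the limitation the exact methods like Killick et al.’s are designed to remove.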

Now that the Killick et al. paper has given me some useful pointers, I have a lot of reading to do.  I don’t know if I’ll have time before the summer, though—my teaching load starting next week is pretty heavy (I was just noticing that my calendar had 24.5 hours scheduled for the first week, not counting time for prepping for classes, setting up the lab, grading, or revising the book for the electronics class: 7 hours of lecture, 12 hours of lab class, 2 office hours, 1.5 hours meeting with the department manager, 2 hours faculty meeting—and the dean wants to meet with me for half an hour sometime also).

Given that the main idea in our segmenter paper is an old one, for it to be salvageable, we’ll have to shrink the basic algorithm to a brief tutorial (with citations to prior inventors) and concentrate on the little changes made after the basic idea: the parameterization of the threshold setting and the correction for low-pass filtering.  There may be a little bit for applying the idea to stepwise slanting segments using linear regression, but I bet that idea is also an old one, buried somewhere in the literature.

This summer I may want to look at implementing the ideas of the Killick et al. paper (or other similar approaches), to see if they really do produce better segmentation as quickly.

2015 March 15

Bruni opinion column on college admissions

Filed under: Uncategorized — gasstationwithoutpumps @ 10:58

In How to Survive the College Admissions Madness, Frank Bruni writes consoling advice for parents and high school seniors wrapped up in college admissions and set on going to elite colleges. Although the obsession with elite-or-nothing is more a New York thing than the American universal he treats it as, it is common enough to be worth an opinion column, and he does a nice job of providing a couple of stories that counter the obsession. (No data though—his column is strictly anecdotal, with 5 anecdotes.)

He recognizes that he is really talking to a small segment of the population:

I’m describing the psychology of a minority of American families; a majority are focused on making sure that their kids simply attend a decent college—any decent college—and on finding a way to help them pay for it. Tuition has skyrocketed, forcing many students to think not in terms of dream schools but in terms of those that won’t leave them saddled with debt.

But the core of the advice he gives is applicable to anyone going to college, not just to those seeking elite admission:

… the admissions game is too flawed to be given so much credit. For another, the nature of a student’s college experience—the work that he or she puts into it, the self-examination that’s undertaken, the resourcefulness that’s honed—matters more than the name of the institution attended. In fact students at institutions with less hallowed names sometimes demand more of those places and of themselves. Freed from a focus on the packaging of their education, they get to the meat of it.

In any case, there’s only so much living and learning that take place inside a lecture hall, a science lab or a dormitory. Education happens across a spectrum of settings and in infinite ways, and college has no monopoly on the ingredients for professional achievement or a life well lived.

The elites have some resources to offer that colleges with lesser financial endowments find difficult to match, but any good enough college can provide opportunities to those who look for them.  For some students, being one of the best at a slightly “lesser” institution may result in more opportunities, more faculty attention, and more learning than being just above average in an elite school.  (And, vice versa, of course—moving from being the best in high school to run-of-the-mill at an elite college can also be an important wake-up call.)

Currently, the American college landscape is very broad, offering a lot of different choices with different prices and different strengths.  Unfortunately, many of our state legislatures and governors have decided that only one model should be allowed—the fully private, job-training institution—and are doing everything they can to kill off the public colleges and universities that have been the backbone of US post-secondary education since the Morrill Land-Grant Acts of 1862 and 1890.

The colleges established by the land grant acts were intended as practical places, not primarily social polish for the rich (as most private colleges were then, and most of the elites are now).  The purpose of these public colleges was

without excluding other scientific and classical studies and including military tactics, to teach such branches of learning as are related to agriculture and the mechanic arts, in such manner as the legislatures of the States may respectively prescribe, in order to promote the liberal and practical education of the industrial classes in the several pursuits and professions in life.[7 U.S.C. § 304, as quoted in Wikipedia]

Although agriculture is no longer as large an employer as it was in the 19th century, research in agriculture at the land-grant universities is still driving a major part of the US economy, and engineering (quaintly referred to as “the mechanic arts”) is still a major employer and a primary route for upward social mobility in the US.  The land-grant colleges were explicitly not intended as bastions for the rich to defend their privilege (as our legislators want to make them, by raising tuition to stratospheric levels), but for “liberal and practical education of the industrial classes”—colleges for working-class people.

I think that it would benefit the US for legislatures to once again invest in the “education of the industrial classes in the several pursuits and professions in life” and for parents and students to look seriously at the state-supported colleges, before the madness of privatization wipes them out.

(Disclaimer: I teach at one campus of the University of California, and my son attends another—neither of them land-grant colleges, but both imperiled by the austerity politics of the California legislature, who see their legacy in building prisons and making sure the rich don’t pay taxes, not in providing education for the working class.)

2015 March 14

History of electronics via Google ngrams

Filed under: Uncategorized — gasstationwithoutpumps @ 22:16

I was playing with Google ngrams today (checking to see whether some variant spellings were ever mainstream) and came up with a history of electronics in one graph:

A short history of electronics in a few key words. At first, power is what mattered, and voltmeters and ammeters ruled. In the 40s, time-varying signals mattered, and oscilloscopes started getting attention. Time-varying signals ruled until digital electronics took over with the introduction of the microprocessor. Now all these low-level views are losing space to consumer-level gadgets like mobile phones.

I could have picked different words, but because Google ngrams provides no way to switch to a log scale for the y-axis (the only sensible way to show growth or decay of word usage), it is not feasible to put a common word like “computer” on the same graph as a rare word like “multimeter”. Google, as always, provides an almost-reasonable product, then never takes the trouble to finish it to allow the user to do things right. Oh, well, it’s free, and that’s the business model Google is relying on: ads on free (almost usable) stuff. The two things they do well are search and selling ads.
