Gas station without pumps

2015 June 18

Suki Wessling: In praise of adult ed

Filed under: Uncategorized — gasstationwithoutpumps @ 23:49

A friend of mine who writes professionally, Suki Wessling, recently wrote on her blog about her experience at Cabrillo College, our county’s community college (In praise of adult ed – The Babblery):

There’s a lot of wrangling going on right now about the purpose of community college. The combination of limited funds and the push for “college for everyone” has incited discussion on whether community colleges are for the community as a whole or just for the specific purposes of helping young people on to four-year colleges and giving specific technical degrees.

Personally, I have always loved the “community” aspect of community college, and I think it would be sad to see it go. I have both taught at and been a student at a few different community colleges, and I think they only benefit from mixing the “young divas” with the more, ahem, seasoned members of our community.

People who want to separate the community college from the community are probably unaware of how much learning takes place in a classroom that seems so informal. They are also probably unaware of (or unconcerned with) how important intergenerational learning can be to many of the eighteen-year-olds who end up drifting into community college simply because nothing had gelled for them yet.

Although the majority of their students are young adults (though typically a bit older than at 4-year colleges, since community college is the most common route for students going back to school after a break), community colleges serve a wide age range. Cabrillo College also runs a number of enrichment courses for middle-school students in the summer, so it really does span a very wide range of ages, from 10-year-olds to 80+.

Our governor seems intent on stripping community colleges of most of their missions, leaving them only with transfer preparation, which currently accounts for a relatively small fraction of their students. I seriously hope that he does not succeed (or, better, gets educated about the true value of the other missions of the community colleges).

The community college is essential for the home-school community (though the home-schooled students make up an insignificant part of the college’s enrollment), the theater community (the musicals produced there each summer are a major part of the county’s theater experience, reaching much larger audiences than the productions that UCSC puts on, though not as big as Santa Cruz Shakespeare), and the arts community (the art classes at Cabrillo are very popular with people of all ages).  These functions are essential to the community, but are not part of the transfer-prep mission.

My son and my wife have taken courses at Cabrillo and found them valuable, even though neither was preparing for transfer to a 4-year college (my son was in high school, and my wife was a decade or so past her BA). Although I have not yet taken any community college courses (it is a bit far for me to cycle to when I’m busy), I expect to when I retire. I’m not sure exactly what, as my hobby interests tend to change every 5–10 years, but it probably won’t be stuff from the IGETC (Intersegmental General Education Transfer Curriculum), but more idiosyncratic stuff that requires in-person classes.

I sure hope that the fun courses still exist when I have time to pursue them and haven’t been thrown away in the name of austerity.

2015 June 17

PteroDAQ bug fix

Now that my son is home from college, I’m getting him to do some bug fixes to the PteroDAQ data acquisition system he wrote for my class to use. The first fix that we’ve put back into the repository was for a problem that was noticed on the very old slow Windows machines in the lab—at high sampling rates, the recording got messed up.  The recording would start fine, then get all scrambled, then get scrambled in a different way, and eventually return to recording correctly, after which the cycle would repeat.  Looking at the recorded data, it was as if bytes were getting lost and the packets coming from the KL25Z were being read in the wrong frame.  As more bytes got lost the frameshift changed until eventually the packets were back in sync.  There seemed to be 5 changes in behavior for each cycle until things got back in sync.

This happened at fairly low sampling rates on the old Windows machines, but even faster machines showed the same failure at high enough sampling rates.

The program was designed to drop entire packets when the host couldn’t keep up with the data rate and the buffer on the KL25Z filled up, but that didn’t seem to be what was happening.  The checksums on the packets were not failing, so the packets were being received correctly on the host, which meant that the problem had to occur before the checksums were added.  That in turn suggested a buffer overflow in the queuing on the KL25Z board.  More careful examination of the recordings indicated that when we got back into sync, exactly 4096 packets of 10 bytes each had been lost (40,960 bytes, or exactly five 8192-byte buffers), which suggested that the 5 changes in behavior we saw during each cycle corresponded to 5 overflows of the 8192-byte buffer.


We suspected a race condition between pushing data onto the queue and popping it off, so we modified the code to turn off interrupts during the queue_pop and queue_avail calls. (We also made all the queue variables volatile, to make sure that the compiler wouldn’t optimize away reads or writes, though I don’t think it was doing so.) This protection for the queue pop and availability calls changed the behavior to what was expected: at low sampling rates everything works fine, and at high sampling rates things start out well until the queue fills up, after which complete packets are dropped when they won’t fit on the queue, and the average sampling rate stays constant (at the rate at which packets are taken out of the queue) regardless of the requested sampling rate.
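
The actual fix is in the C firmware on the KL25Z, but the shape of the fix is easy to sketch in Python (the language of the PteroDAQ host software). This is only an analogue with made-up names, not the real PteroDAQ code: a lock stands in for masking interrupts, making each read-and-update of the shared queue indices atomic, so that a push from the interrupt handler can’t land in the middle of a pop.

import threading

class ByteQueue:
    """Single-producer/single-consumer byte queue with guarded accessors."""

    def __init__(self, size=8192):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0  # next slot the producer (ADC interrupt) writes
        self.tail = 0  # next slot the consumer (USB sender) reads
        self.lock = threading.Lock()  # plays the role of disabling interrupts

    def push(self, b):
        with self.lock:  # the interrupt handler's side of the queue
            nxt = (self.head + 1) % self.size
            if nxt == self.tail:
                return False  # queue full: caller drops the whole packet
            self.buf[self.head] = b
            self.head = nxt
            return True

    def avail(self):
        with self.lock:  # read head and tail as one atomic snapshot
            return (self.head - self.tail) % self.size

    def pop(self):
        with self.lock:  # read the byte and advance tail in one step
            b = self.buf[self.tail]
            self.tail = (self.tail + 1) % self.size
            return b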

On my old MacBook Pro, the highest sampling rate that can be continued indefinitely for a single channel is 615Hz (about 6150 bytes/sec transferred).  On the household’s newer iMac, the highest sampling rate was 1572Hz (15720 bytes/sec). (Update, 2015 Jun 18: on my son’s System76 laptop, the highest sampling rate was 1576Hz.)

One can record for short bursts at much higher sampling rates, but only for about 819.2/(f_s − f_max) seconds for a single channel, where f_s is the requested sampling rate and f_max is the maximum sustained rate (8192 bytes at 10 bytes/packet is 819.2 packets in the queue).  At 700Hz, one should be able to record for about 819.2/(700Hz−615Hz) ≈ 9.6376 seconds on my MacBook Pro (assuming a max sustained rate of 615Hz).  Sure enough, the first missing packet is the 6748th one, at 9.6386 s.
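
As a sanity check on that formula, here is the 700Hz example as a few lines of Python, using the measured numbers quoted above:

QUEUE_BYTES = 8192
PACKET_BYTES = 10  # single-channel packets
queue_packets = QUEUE_BYTES / PACKET_BYTES  # 819.2 packets of headroom

f_s = 700.0    # requested sampling rate (Hz)
f_max = 615.0  # maximum sustained rate on the MacBook Pro (Hz)
print(queue_packets / (f_s - f_max))  # ~9.6376 seconds before overflow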

I thought that 2 channels (12-byte packets) should be accepted on my MacBook Pro at (10 bytes/12 bytes)×615Hz, or 512.5Hz, but the observed maximum rate is 533Hz, so the limit isn’t quite linear in the number of bytes in the packet.  Four channels (16-byte packets) run at 418Hz. There is some fixed overhead per packet in addition to the per-byte cost on the host computer.
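
One way to quantify that overhead is to fit time-per-packet as a linear function of packet size to the three measured rates. A quick sketch (my own curve fit, not anything in PteroDAQ):

import numpy as np

bytes_per_packet = np.array([10.0, 12.0, 16.0])  # 1, 2, and 4 channels
max_rate_hz = np.array([615.0, 533.0, 418.0])    # measured sustained rates

# Least-squares fit of time-per-packet = c0 + c1*bytes.
c1, c0 = np.polyfit(bytes_per_packet, 1.0 / max_rate_hz, 1)
print(c0 * 1e6, "µs fixed cost per packet")    # roughly 340µs
print(c1 * 1e6, "µs per byte")                 # roughly 130µs
print(1.0 / (c0 + c1 * bytes_per_packet))      # fitted rates, close to measured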

There is another, more fundamental limitation on the PteroDAQ sampling rate: how fast the code can read the analog-to-digital converter and push the bytes into the queue.  That seems to be 6928Hz, at which speed the longest burst we can get without dropping packets should be just under 130ms (819.2 packets/(6928Hz−615Hz) ≈ 130ms). It turned out to lose the 819th packet at 118.22ms, so I’m a little off in my estimate.  I determined the max sampling rate by asking for a faster one (10kHz) and seeing what the actual sampling rate was at the beginning of the run, then trying that sampling rate and again checking the achieved sampling rate. With two channels, the maximum sampling rate is only 3593Hz, suggesting that most of the speed limitation is in the conversion time of the analog-to-digital converter.

The current version of PteroDAQ uses long sample times (so that we can handle fairly high-impedance signal sources) and does hardware averaging of 32 readings (to reduce noise). By sacrificing quality (more noise), we could make the conversions much faster, but that is not a reasonable tradeoff currently, when we are mainly limited by how fast the Python program on the host reads and interprets the input stream from the USB port.  We’ll have to look into Python profiling, to see where the time is being spent and try to speed things up.
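
The profiling itself needs nothing beyond the standard library. A minimal sketch, with read_stream() as a hypothetical stand-in for the host-side read-and-interpret loop:

import cProfile
import pstats

# read_stream is a hypothetical stand-in for PteroDAQ's read-and-interpret
# loop; profile it and show the 20 hottest entries by cumulative time.
cProfile.run("read_stream()", "pterodaq.prof")
pstats.Stats("pterodaq.prof").sort_stats("cumulative").print_stats(20)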

2015 June 14

More on nFET Miller plateau

Filed under: Circuits course,Data acquisition — gasstationwithoutpumps @ 22:09

In Bitscope jitter and nFET Miller plateau and Update for nFET Miller plateau I talked about using the Bitscope BS10 USB oscilloscope, with post-processing to remove the trigger jitter, to get a better trace of the Miller plateau when switching an AOI518 nFET.  I wasn’t entirely happy with that result, as I was using an external function generator (albeit just a cheap Elenco FG500 box that puts out rather poor waveforms).  I thought I could do as well or better using a hysteresis oscillator to generate the square waves.

So I came up with the following circuit:

Test fixture for looking at the Miller plateau on the AOI518 nFET.

The top-left section, with the capacitors and the big inductor, is just there to keep the power supply clean. Because the power comes from the 5V USB supply, passed through the Bitscope, any noise coupled back through the power supply can affect the readings, so I made sure that no high-frequency noise would be coupled back to the Bitscope that way.  One 4.7µF bypass capacitor (C6) is right next to the 74HC19N Vcc power pin on the breadboard, and the other (C5) is next to the emitter of the S9013 NPN transistor.  These helped reduce ringing on the transitions of the square wave.

The Schmitt trigger U1 is the relaxation oscillator that oscillates at about 550kHz, and U2 buffers the output so that the load on the oscillator is constant.  R2 and C4 couple the output of the oscillator to the base of the NPN transistor Q2, which is configured as a common-emitter amplifier.

When Q2 is turned on, it sinks about 18.5mA of current (5V/270Ω), which at the nominal current gain of 120 for S9013 transistors means about 150µA of base current. When Q2 first turns on it sinks even more current, discharging the gate of Q1 quickly, but when Q2 is turned off, the gate is pulled up more gradually by the 270Ω resistor R1.  This gradual rise of the nFET gate voltage allows us to observe the Miller plateau when the nFET actually switches on.

The common-emitter amplifier using Q2 can switch very rapidly with no load, but adding the gate capacitance of Q1 makes the rise follow an RC charging curve. If the 150Ω load resistor R4 is omitted, the curve is smooth, as there is no voltage swing on the drain.  Putting in R4 results in the drain voltage dropping rapidly when Q1 turns on, which is coupled back to the gate through the Miller capacitance, resulting in the Miller plateau. These traces were gathered and dejittered separately (each is the average of over 500 traces).  The time alignment was done by hand, and may not be completely accurate.


The initial rise in the drain voltage (from about -0.9 to -0.7 µs in the plot) is due to capacitive coupling of the rising gate voltage to the drain with the nFET off, and there is a similar overshoot at the lower end as the nFET is turned off (just before 0µs).

I played around quite a bit with the bias network R2 and C4 for the base of Q2. The resistor alone doesn’t work, as the base voltage remains a constant 340mV and the NPN transistor remains always on—I was a little confused by this, as the 340mV measurement on the Bitscope seemed too low to turn on an NPN transistor. I think that there is a calibration error on the Bitscope—according to my Kikusui COS5060 analog oscilloscope, the high voltage is about 600mV, which seems to be a more reasonable Vbe for a transistor that is on.

It seems that the offset that the Bitscope BS10 provides is inaccurate: I get a reading of 740mV for an offset of 2V, 1V, or 0V; 500mV with an offset of -1V; and 320mV with an offset of -2V or -3V.  Looking at the ground line with the same channel also shows an error for the -1V, -2V, -3V, and -4V offsets (on the 11V range): -300mV for -1V, and -560mV for -2V, -3V, and -4V. But those offsets are different from the ones I’m seeing with the oscillating base signal, so I suspect that it isn’t even a simple offset error.  This is bad—I’m going to have a hard time correcting such large and varying errors. I should probably ask the Bitscope technical staff whether large errors in the offsets are normal or whether I have a damaged Bitscope BS10.  They’ve not made the schematic available, so I’m not sure what they are doing internally to provide the offset voltages.

The capacitor C4 alone also doesn’t work to bias the base, as the voltage on the base then doesn’t get high enough for the NPN transistor to turn on. With both the resistor R2 and the capacitor C4, the base swings about 5V, with the high value being the Vbe where the NPN transistor turns on. The resistor value is not critical—reducing R2 to 2kΩ moves the bottom end of the swing up a little, but seems to work just as well.  A very large resistor (100kΩ) seems to result in slow turn-on for Q1.

The effect of changing C4 was not something I would have predicted—as I lower C4 (raising the corner frequency of the high-pass), I get a lot more ringing of the gate voltage, but faster transitions as well. I think that the faster transitions come from the base voltage not dropping as far below threshold when Q2 is off, so that it can rise above threshold sooner when Q2 needs to be turned on.

Smaller capacitors for C4 result in faster edges when turning on the NPN transistor, with more overshoot and ringing.  Once the capacitance is large enough, there is little further change.


The curves above were synchronized by setting 0µs at the upward transition past 4.0V, which seems to overlay the dejittered waveforms fairly well. Over 900 traces were averaged for each of the 5 curves.

I could also provide a DC bias current for Q2 by removing R2 and connecting the base via a 10kΩ resistor to the clean +5V supply. That seems to work just as well, but moves up the lower end of the base voltage swing, which may result in slightly faster turn-on of the NPN transistor.

It’s nice that I can use the Bitscope (with dejittering) to produce the Miller plateau figure for the textbook, but I’m concerned about the erroneous offsets on the Bitscope.  I’m wondering what else I’ve been relying on that is miscalibrated.

2015 June 10

Update for nFET Miller plateau

Filed under: Circuits course,Data acquisition — gasstationwithoutpumps @ 22:47

In Bitscope jitter and nFET Miller plateau,  I gave a nice super-resolution plot of the gate voltage on an nFET using the Bitscope and subsequent dejittering of the trigger:

Here is an example of the Vgs voltage of an nFET transistor being driven by an S9013 NPN transistor and a 270Ω pullup (the NPN base was driven by the square wave from my Elenco FG500 function generator, through a 2kΩ resistor). The drain of the nFET was connected through an 8Ω loudspeaker to a 5V USB power line.

Keeping the power supply noise from propagating back to the Bitscope cleans up the signal considerably.  The 10MHz ripple is now clearly visible. I tried zooming in with gnuplot and the resolution looks as good as on my 60MHz analog scope.  The dejittering and averaging has made for a very fine signal.


But I didn’t give the schematic for the test jig.  So here it is:

Test jig for creating the Miller-plateau plot.


Something else I didn’t point out in the previous post: the quantization error is still visible in the slow-moving parts of the signal (at the beginning and near the top of the gate charging), but is essentially eliminated in the fast-changing portions of the signal.  I think that the dejittering and averaging gets good values if the jitter is large enough to move the signal to a value that would have a different quantized output, so that we are averaging values that are sometimes too high and sometimes too low. But if the jitter doesn’t move the signal that far, then we’re relying entirely on voltage noise to change the quantized output and get averaged out, and the voltage noise seems to be much smaller than the step size of the analog-to-digital converter.
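
That explanation is easy to test numerically: averaging quantized readings recovers a sub-step value only when the noise (or the jitter-induced variation) spans roughly a quantization step or more. A small simulation with synthetic data (not Bitscope measurements):

import numpy as np

rng = np.random.default_rng(0)
true_value = 0.3  # a signal sitting 0.3 LSB above a quantization step
for noise_lsb in (0.05, 0.5, 1.0):  # noise amplitude in LSBs
    readings = np.round(true_value + rng.normal(0.0, noise_lsb, 100_000))
    print(noise_lsb, readings.mean())  # ~0.0 for tiny noise, ~0.3 otherwise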

For the book, I might redo the plot using a comparator to drive the nFET, rather than the S9013 NPN transistor, though there is some advantage to leaving the comparator out of the picture, so that students will have to think a bit about their own design, rather than simply copying.

I might want to try slowing down the falling edge, so that the Miller plateau is visible on both edges. I could slow down the fall by increasing the size of R2 (reducing the base current from 1.2mA and hence the collector current).  I could probably reduce the base current to 200µA and still switch the nFET off, since with a typical current gain of about 120 for the S9013, I should still be able to get a collector current of about 24mA, which is more than 5V/270Ω=18.5mA.  Even 100µA may be enough, but then the low voltage on the nFET gate may start creeping up, and we don’t want to leave the nFET partially on. (The same reasoning argues against adding a series resistor between the collector and the gate.)

The current through the S9013 seems to be much larger than available from the LM2903—maybe I should do current vs. voltage (Ic vs Vce) plots for the S9013 with various base-emitter currents, though the data sheet already has a nice plot of that.

I should probably also try using a resistor rather than a loudspeaker as a load, though the inductance of the speaker is helping to limit the current and avoid overloading the USB power supply.  With a simple 10Ω resistor,  I’d be getting 500mA, which is the USB limit. With the inductive load of the loudspeaker, the current builds up slowly when the nFET is on, and never gets close to what we’d expect from the nominal 8Ω value.

Another thing I might do is to use a hysteresis oscillator to drive the NPN transistor.  That would be more in keeping with the minimal-equipment approach I’m going to try to add to the book this summer.  (I might also play with a larger voltage for the loudspeaker, since that should give a larger swing on the drain voltage, and hence a clearer Miller plateau.)


2015 June 8

Bitscope jitter and nFET Miller plateau

Filed under: Circuits course,Data acquisition — gasstationwithoutpumps @ 00:00

I got a little less than half my grading done this weekend (all the lab reports for the final lab, but not the redone reports from earlier labs, and not the 2–3 senior theses that had to be redone from last quarter) before I burned out.  I decided to do a little playing with my Bitscope USB oscilloscope, as a break from grading, and that sucked me in—I still haven’t gotten back to the grading.

Here is the problem I was addressing: In my book draft and in my blog post Third power-amp lecture and first half of lab, I presented a view of the Miller plateaus of an nFET, obtained by slowing down the transitions with series resistors and added gate-source capacitance, recording the result with my Bitscope USB oscilloscope, and averaging many traces together.

Here are the gate and drain voltages for an AOI518 nFET, slowed down by adding a series resistor to the gate signal and a large capacitor between the gate and drain.  I slowed it down so that I could record on my low-speed BitScope USB oscilloscope—students can see high-speed traces on the digital oscilloscopes in the lab.  The Miller plateaus are the almost flat spots on the gate voltage that arise from the negative feedback of the drain-voltage current back through the gate-drain capacitance.


I was rather unsatisfied with this approach, as I really want to show the full-speed transitions. In Power amps working, I showed some Tektronix plots, but their little screen images are terrible (as bad as the Bitscope screen images), and I can’t use them in the book.

With an 8Ω loudspeaker as a load, turning off the nFET (gate voltage in blue) causes a large inductive spike on the drain (yellow).


What is the fascination that scope designers have with black backgrounds? I know that the traditional cathode-ray-tube scopes gave no other choice, but for digital scopes black backgrounds are just evil—they don’t project well in lectures and they don’t print well on paper. It would be possible for me to use the data recording features of the Tektronix scopes, and plot the data using gnuplot, but I’d rather use the Bitscope at home if I can (much less hassle than transporting everything up the hill to the lab every time I need some more data).

The Bitscope BS10 is capable of 20Msamples/s, which should give me decent time resolution, but the discretization noise is pretty large, so I want to average multiple traces to reduce the noise. When using the “DDR” (DSO Data Recorder) option of the BitScope, it becomes very clear that they do not have any software engineers working for them (or didn’t at the time they defined the format for the recorder files).

The files are comma-separated values files, with no documentation (that I could find) of their content except the first line:

trigger,stamp,channel,index,type,delay,factor,rate,count,data

Each row of the file after that seems to have one trigger event, serially numbered in the first field, with a low-resolution time-stamp in the second field (hh:mm:ss, but no date and no finer time divisions).  The channel is 0 or 1, the index increments serially separately in each channel, the type is always 0, the delay is when the trace starts relative to the trigger (in seconds), the factor is always 1, the rate is the sampling rate (in Hz), the count is the number of data points, and the data is not a single field but “count” more fields. There is no other meta-data about the settings of the scope!

The data, unfortunately, is not the voltage measured by the scope, which is what one would naively expect.  Instead, you have to divide by the volts_per_division and add the offset voltage—neither of which is recorded anywhere in the data file! (You probably have to adjust for the probe as well, but I was using a 1X probe, so I can’t tell from this data.)
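
Parsing the format is easy once the field layout is known; the missing metadata is the real nuisance. Here is a sketch of a reader based on my reading of the format described above; the scale and offset have to be typed in by hand, since the file doesn’t record them:

import csv

def read_ddr(filename, volts_per_div, offset_v):
    """Read a Bitscope DDR csv file into (channel, delay, rate, volts) tuples.

    volts_per_div and offset_v must be supplied by the user, since the
    file does not record them.
    """
    traces = []
    with open(filename, newline="") as f:
        rows = csv.reader(f)
        next(rows)  # skip the header line
        for row in rows:
            channel = int(row[2])
            delay = float(row[5])  # trace start relative to trigger (s)
            rate = float(row[7])   # sampling rate (Hz)
            count = int(row[8])
            raw = [float(x) for x in row[9:9 + count]]
            # Scaling as described above: divide by volts/division,
            # then add the offset voltage.
            volts = [x / volts_per_div + offset_v for x in raw]
            traces.append((channel, delay, rate, volts))
    return traces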

It is clear that the “engineers” who designed this format never heard of metadata—maybe they were used to just scrawling things on the backs of envelopes rather than keeping data around.  Yes, Bitscope designers, I am publicly shaming you—I like the Bitscope hardware well enough, but you are clearly clueless about data files! A correct format for the data would have had a block at the beginning of the file recording every setting of the scope and the full date and time, so that the precise conditions under which the data were recorded could be determined and used. (For example, was the RF filter on or off? what were the trigger conditions?)

I was able to read the DDR csv file and extract the data, but I found a number of other apparently undocumented properties of the BS10.  If I switch away from post-trigger recording to having the trigger 25% or 50% of the way through the recording, the maximum sampling rate drops by a factor of 3, to 6.7MHz, so I need to use POST triggering, in which the recording starts about 1.25µs after the trigger. I can delay the part of the data I look at (only the part on the screen is saved), but if I delay too much, the sampling rate drops back down again.

One big problem is that the jitter on the Bitscope trigger is enormous—up to 150ns, which is 3 samples at the highest sampling rate. The image bounces around on the screen, and the data recorded in the files is similarly poorly aligned.

If I average a bunch of traces together, everything smooths out.  Not just the noise, but the signal as well! It is like passing the signal through a low-pass filter, which rather defeats the purpose of having a high sampling rate and averaging traces.

So today I wrote a program to do my own software triggering on a repetitive waveform. I recorded a bunch of traces that had the waveform I was interested in—making sure that the RF filter was off and the waveform was being sampled at the highest sampling rate. The program that read the csv file then looked in each trace for a new trigger event, interpolating between samples to get the trigger to much better than single-sample resolution (by triggering on a fast rise, I can get about 0.1-sample resolution). I then resampled the recorded data (with 5-fold oversampling), with the resampling synchronized to the new trigger.  The resampled traces were then averaged.
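
Here is a minimal sketch of that algorithm in Python/NumPy (the idea, not the actual program): find the threshold crossing with sub-sample resolution by linear interpolation, resample every trace on a grid locked to its own crossing, and average the resampled traces.

import numpy as np

def soft_trigger(trace, threshold):
    """Fractional sample index of the first rising crossing of threshold."""
    s = np.asarray(trace, dtype=float)
    i = np.nonzero((s[:-1] < threshold) & (s[1:] >= threshold))[0][0]
    return i + (threshold - s[i]) / (s[i + 1] - s[i])  # linear interpolation

def dejittered_average(traces, threshold, oversample=5, n_out=1000):
    """Resample each trace relative to its own software trigger, then average."""
    grid = np.arange(n_out) / oversample  # output grid, in input-sample units
    resampled = [np.interp(soft_trigger(s, threshold) + grid,
                           np.arange(len(s)), np.asarray(s, dtype=float))
                 for s in traces]
    return np.mean(resampled, axis=0)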

Here is an example of the Vgs voltage of an nFET transistor being driven by an S9013 NPN transistor and a 270Ω pullup (the NPN base was driven by the square wave from my Elenco FG500 function generator, through a 2kΩ resistor). The drain of the nFET was connected through an 8Ω loudspeaker to a 5V USB power line.

The two traces on the right show a single trace (red) and an average of all the traces (magenta). Both of these are aligned by the Bitscope trigger event, which was substantially before the recording (much more than the minimum 1.25µs, as I’d deliberately delayed to get the next pulse). The left-hand trace is also an average, but after retriggering on the first rising edge at 0.2V. Note that the jitter in the trigger (and in the signal source) caused enormous rounding of the magenta curve, but retriggering to better than 5ns resolution allows the signals to be properly averaged.

The averaged plot is probably usable in the textbook. I also tried averaging the same traces triggering on the falling edge, to see if that got any more clarity for the ringing when the nFET is turned off, but it ended up looking essentially the same. On my Kikusui 60MHz analog scope, I see the little ripples after the downward transition (a 10MHz damped ripple), but I don’t see the hump in the baseline visible in the Bitscope trace.  I think that hump may be an artifact of taking too much power from the 5V USB line powering the Bitscope (or of coupling back of the inductive spike).

I tried putting in an LC filter on the 5V power line from the Bitscope (a 470µF capacitor to ground, a 200mH inductor, and another 470µF capacitor to ground).  This seems to have cleaned up the problem (this was hours later, and the frequency of the generator was almost certainly different, as I’d played with the tuning potentiometer several times):

Keeping the power supply noise from propagating back to the Bitscope cleans up the signal considerably.  The 10MHz ripple is now clearly visible. I tried zooming in with gnuplot and the resolution looks as good as on my 60MHz analog scope.  The dejittering and averaging has made for a very fine signal.


One problem with this retriggering approach is that it doesn’t really work with two channels—the Bitscope traces for the two channels are separate events, and the only synchronization information is the hardware trigger. I could get a clean Vgs signal and a clean Vds signal, but aligning them is not going to be any better than the jitter of the hardware trigger. I’ll have to think about averaging (or taking the median) of the trigger times relative to the hardware trigger, and using that to align the two traces.
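
Here is a sketch of that alignment idea (just the concept, untested): record each trace’s software-trigger time relative to the hardware trigger, take the per-channel median over many traces, and shift one channel’s time axis by the difference of the medians.

import numpy as np

def channel_shift(trig_times_ch0, trig_times_ch1):
    """Seconds to add to channel 1's time axis to align it with channel 0.

    Each argument is a sequence of software-trigger times, one per trace,
    measured relative to the hardware trigger for that trace."""
    return float(np.median(trig_times_ch0) - np.median(trig_times_ch1))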

Still, I wonder why the Bitscope designers have not taken advantage of the trigger jitter to do averages of repetitive traces—it allows one to reconstruct signals in software that are much higher bandwidth than the sampling rate of the scope.  These sorts of super-resolution techniques are pretty common, and it only requires better software, not new hardware.

I’ve been thinking that I might even try writing some software that talks directly to the Bitscope hardware (they have published their communication protocol), so that I can do things like capturing the data with full metadata and looking at it with decent plotting software (Matplotlib or gnuplot).  I’m not into doing GUI programming (an infinite time sink), so I’d probably write a Python library providing an application programming interface to the Bitscope and a number of simple single-use programs (like capturing and averaging waveforms) with a configuration file or command-line arguments to set the parameters. Yet another thing to add to my to-do list (but near the end—there are many more important things to work on).
