Gas station without pumps

2015 June 17

PteroDAQ bug fix

Now that my son is home from college, I’m getting him to do some bug fixes to the PteroDAQ data acquisition system he wrote for my class to use. The first fix that we’ve put back into the repository was for a problem that was noticed on the very old slow Windows machines in the lab—at high sampling rates, the recording got messed up.  The recording would start fine, then get all scrambled, then get scrambled in a different way, and eventually return to recording correctly, after which the cycle would repeat.  Looking at the recorded data, it was as if bytes were getting lost and the packets coming from the KL25Z were being read in the wrong frame.  As more bytes got lost the frameshift changed until eventually the packets were back in sync.  There seemed to be 5 changes in behavior for each cycle until things got back in sync.

This happened at a very low sampling rate on the old Windows machines, but it still happened on faster machines at a high enough sampling rate.

The program was designed to drop entire packets when the host couldn’t keep up with the data rate and the buffer on the KL25Z filled up, but that didn’t seem to be what was happening.  The checksums on the packets were not failing, so the packets were being received correctly on the host, which meant that the problem had to arise before the checksums were added.  That in turn suggested a buffer overflow in the queuing on the KL25Z board.  More careful examination of the recordings indicated that when we got back into sync, exactly 4096 packets of 10 bytes each had been lost—40,960 bytes, or five times the 8192-byte buffer—which suggested that the 5 changes in behavior we saw during each cycle corresponded to 5 losses of the entire buffer.

 

We suspected a race condition between pushing data onto the queue and popping it off, so we modified the code to turn off interrupts during the queue_pop and queue_avail calls (we also made all the queue variables “volatile”, to make sure that the compiler wouldn’t optimize away any reads or writes, though I don’t think it was doing so).  Protecting the queue pop and availability calls changed the behavior to what was expected: at low sampling rates everything works fine, and at high sampling rates things start out well until the queue fills up; after that, complete packets are dropped when they won’t fit on the queue, and the average sampling rate stays constant, independent of the requested sampling rate, at the rate at which packets are taken out of the queue.

On my old MacBook Pro, the highest sampling rate that can be continued indefinitely for a single channel is 615Hz (about 6150 bytes/sec transferred).  On the household’s newer iMac, the highest sampling rate was 1572Hz (15720 bytes/sec). (Update, 2015 Jun 18: on my son’s System76 laptop, the highest sampling rate was 1576Hz.)

One can record for short bursts at much higher sampling rates—but only for $819.2/(f_s - f_{\max})$ seconds for a single channel, where $f_s$ is the requested sampling rate and $f_{\max}$ is the maximum sustainable rate (8192 bytes at 10 bytes/packet is 819.2 packets in the queue).  At 700Hz, one should be able to record for about 9.6376 seconds on my MacBook Pro (assuming a max sustained rate of 615 Hz).  Sure enough, the first missing packet is the 6748th one, at 9.6386 s.
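
As a quick sanity check of that arithmetic (this is just a throwaway computation, not part of PteroDAQ; the queue and packet sizes are the numbers quoted above):

# Rough burst-length estimate: the KL25Z queue holds 8192 bytes, single-channel
# packets are 10 bytes, and the queue fills at (requested - sustained) packets/s.

QUEUE_BYTES = 8192
PACKET_BYTES = 10

def burst_seconds(requested_hz, sustained_hz):
    """Seconds of recording before the queue overflows and packets start dropping."""
    packets_in_queue = QUEUE_BYTES / float(PACKET_BYTES)    # 819.2 packets
    return packets_in_queue / (requested_hz - sustained_hz)

print("%.4f s" % burst_seconds(700, 615))   # about 9.64 s at 700 Hz on the MacBook Pro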

I thought that 2 channels (12-byte packets) should be sustainable on my MacBook Pro at (10 bytes/12 bytes)×615 Hz, or 512.5 Hz, but the observed maximum rate is 533 Hz, so the limit isn’t quite linear in the number of bytes in the packet.  Four channels (16-byte packets) run at 418 Hz. There is some fixed per-packet overhead on the host computer in addition to the per-byte cost.
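
One way to see that overhead is to fit a per-packet time model to the three observed rates; this is just my own back-of-the-envelope fit, not anything measured directly: if each packet costs a fixed time plus a per-byte time on the host, then 1/rate = a + b·bytes.

import numpy as np

# Observed maximum sustained rates from above (MacBook Pro):
bytes_per_packet = np.array([10, 12, 16])
max_rate_hz = np.array([615.0, 533.0, 418.0])

# Model the per-packet handling time as fixed overhead + per-byte cost:
#     1/rate = a + b * bytes_per_packet
b, a = np.polyfit(bytes_per_packet, 1.0 / max_rate_hz, 1)
print("fixed overhead %.2f ms/packet, %.3f ms/byte" % (a * 1e3, b * 1e3))

For these three points the fit comes out to roughly a third of a millisecond of fixed time per packet plus about 0.13 ms per byte, which reproduces all three observed rates to within a couple of hertz.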

There is another, more fundamental limitation on the PteroDAQ sampling rate—how fast the code can read the analog-to-digital converter and push the bytes into the queue.  That seems to be 6928Hz, at which speed the longest burst we can get without dropping packets should be just under 130ms (it turned out to lose the 819th packet at 118.22ms, so I’m a little off in my estimate).  I determined the max sampling rate by asking for a faster one (10kHz) and seeing what the actual sampling rate was at the beginning of the run, then trying that sampling rate and again checking the achieved sampling rate. With two channels, the maximum sampling rate is only 3593Hz, suggesting that most of the speed limitation is in the conversion time for the analog-to-digital converter.
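
Plugging the ADC-limited rate into the same burst_seconds() sketch from above (still assuming a sustained rate of about 615 Hz on my MacBook Pro) gives essentially the same estimate:

print("%.4f s" % burst_seconds(6928, 615))   # just under 0.13 s before packets drop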

The current version of PteroDAQ uses long sample times (so that we can handle fairly high-impedance signal sources) and does hardware averaging of 32 readings (to reduce noise). By sacrificing quality (more noise), we could make the conversions much faster, but that is not a reasonable tradeoff currently, when we are mainly limited by how fast the Python program on the host reads and interprets the input stream from the USB port.  We’ll have to look into Python profiling, to see where the time is being spent and try to speed things up.
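
When we get there, the first thing to try is probably the standard-library profiler; something along these lines (the file names here are placeholders, not real PteroDAQ names):

# Run the host program under the profiler (script and output names are placeholders):
#     python -m cProfile -o pterodaq.prof pterodaq.py
# then see where the time is going:

import pstats

stats = pstats.Stats("pterodaq.prof")
stats.sort_stats("cumulative").print_stats(20)   # 20 most expensive call paths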


2015 June 8

Bitscope jitter and nFET Miller plateau

Filed under: Circuits course, Data acquisition — gasstationwithoutpumps @ 00:00

I got a little less than half my grading done this weekend (all the lab reports for the final lab, but not the redone reports from earlier labs, and not the 2–3 senior theses that had to be redone from last quarter) before I burned out.  I decided to do a little playing with my Bitscope USB oscilloscope, as a break from grading, and that sucked me in—I still haven’t gotten back to the grading.

Here is the problem I was addressing: In my book draft and in my blog post Third power-amp lecture and first half of lab, I presented a view of the Miller plateaus of an nFET, obtained by slowing down the transitions with series resistors and added gate-source capacitance, recording the result with my Bitscope USB oscilloscope, and averaging many traces together.

Here are the gate and drain voltages for an AOI518 nFET, slowed down by adding a series resistor to the gate signal and a large capacitor between the gate and drain.  I slowed it down so that I could record on my low-speed BitScope USB oscilloscope—students can see high-speed traces on the digital oscilloscopes in the lab.  The Miller plateaus are the almost flat spots on the gate voltage that arise from the negative feedback of the drain-voltage current back through the gate-drain capacitance.


I was rather unsatisfied with this approach, as I really want to show the full-speed transitions. In Power amps working, I showed some Tektronix plots, but their little screen images are terrible (as bad as the Bitscope screen images), and I can’t use them in the book.

With an 8Ω loudspeaker as a load, turning off the nFET (gate voltage in blue) causes a large inductive spike on the drain (yellow).


What is the fascination that scope designers have with black backgrounds? I know that the traditional cathode-ray-tube scopes gave no other choice, but for digital scopes black backgrounds are just evil—they don’t project well in lectures and they don’t print well on paper. It would be possible for me to use the data recording features of the Tektronix scopes, and plot the data using gnuplot, but I’d rather use the Bitscope at home if I can (much less hassle than transporting everything up the hill to the lab every time I need some more data).

The Bitscope B10 is capable of 20Msamples/s, which should give me decent time resolution, but the discretization noise is pretty large, so I want to average  multiple traces to reduce the noise. When using the “DDR” (DSO Data Recorder) option of the BitScope, it becomes very clear that they do not have any software engineers working for them (or didn’t at the time they defined the format for the recorder files).

The files are comma-separated values files, with no documentation (that I could find) of their content except the first line:

trigger,stamp,channel,index,type,delay,factor,rate,count,data

Each row of the file after that seems to hold one trigger event, serially numbered in the first field, with a low-resolution time-stamp in the second field (hh:mm:ss, but no date and no finer time divisions).  The channel is 0 or 1, the index increments serially within each channel, the type is always 0, the delay is when the trace starts relative to the trigger (in seconds), the factor is always 1, the rate is the sampling rate (in Hz), the count is the number of data points, and the data is not a single field but “count” more fields. There is no other metadata about the settings of the scope!

The data, unfortunately, is not the voltage measured by the scope, which is what one would naively expect.  Instead, you have to divide by the volts_per_division and add the offset voltage—neither of which are recorded anywhere in the data file! (You probably have to adjust for the probe as well, but I was using a 1X probe, so I can’t tell from this data.)
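
Here is roughly how one can read those rows in Python; this is a sketch reconstructed from the field list above, not my actual program, and the scale and offset have to be supplied by hand because they are not in the file:

import csv

def read_ddr(filename, volts_per_division, offset_volts):
    """Read a BitScope DDR csv file into a list of (channel, times, volts) traces."""
    traces = []
    with open(filename) as f:
        reader = csv.reader(f)
        next(reader)                      # skip the one-line header
        for row in reader:
            if not row:
                continue
            trigger, stamp, channel, index, typ, delay, factor, rate, count = row[:9]
            count = int(count)
            raw = [float(x) for x in row[9:9 + count]]
            # Conversion described above: divide by volts/division, add the offset.
            volts = [x / volts_per_division + offset_volts for x in raw]
            dt = 1.0 / float(rate)        # sample spacing from the 'rate' field
            times = [float(delay) + i * dt for i in range(count)]
            traces.append((int(channel), times, volts))
    return traces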

It is clear that the “engineers” who designed this format never heard of metadata—maybe they were used to just scrawling things on the backs of envelopes rather than keeping data around.  Yes, Bitscope designers, I am publicly shaming you—I like the Bitscope hardware well enough, but you are clearly clueless about data files! A correct format for the data would have had a block at the beginning of the file recording every setting of the scope and the full date and time, so that the precise conditions under which the data were recorded could be determined and used. (For example, was the RF filter on or off? what were the trigger conditions?)

I was able to read the DDR csv file and extract the data, but I found a number of other apparently undocumented properties of the B10.  If I switch away from post-trigger recording to having the trigger 25% or 50% of the way through the recording, the maximum sampling rate drops by a factor of 3, to 6.7MHz, so I need to use POST triggering, in which the recording starts about 1.25µs after the trigger. I can delay the part of the data I look at (only the part on the screen is saved), but if I delay too much, the sampling rate drops back down again.

One big problem is that the jitter on the Bitscope trigger is enormous—up to 150ns, which is 3 samples at the highest sampling rate. The image bounces around on the screen, and the data recorded in the files is similarly poorly aligned.

If I average a bunch of traces together, everything smooths out.  Not just the noise, but the signal as well! It is like passing the signal through a low-pass filter, which rather defeats the purpose of having a high sampling rate and averaging traces.

So today I wrote a program to do my own software triggering on a repetitive waveform. I recorded a bunch of traces that had the waveform I was interested in—making sure that the RF filter was off and the waveform was being sampled at the highest sampling rate. The program that read the csv file then looked in each trace for a new trigger event, interpolating between samples to get the trigger at much finer than single-sample resolution (by triggering on a fast rise, I can get about 0.1-sample resolution). I then resampled the recorded data (with 5-fold oversampling), with the resampling synchronized to the new trigger.  The resampled traces were then averaged.
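
The core of the retriggering is simple enough to sketch with numpy; this is an illustration of the idea, not the actual program, and the threshold and oversampling factor are just the values mentioned in this post:

import numpy as np

def software_trigger(samples, threshold):
    """Fractional sample index where the trace first rises through threshold.

    Linear interpolation between the two samples straddling the threshold gives
    roughly 0.1-sample resolution on a fast rising edge.
    """
    samples = np.asarray(samples, dtype=float)
    above = samples >= threshold
    rising = np.where(~above[:-1] & above[1:])[0]     # indices just before a rising crossing
    if len(rising) == 0:
        return None
    i = rising[0]
    frac = (threshold - samples[i]) / (samples[i + 1] - samples[i])
    return i + frac

def retrigger_and_average(traces, threshold, window=500, oversample=5):
    """Align each trace on its own software trigger, resample 5x, and average.

    window is how many original samples after the trigger to keep.
    """
    fine_t = np.arange(0.0, window, 1.0 / oversample)   # oversampled offsets from the trigger
    aligned = []
    for trace in traces:
        trace = np.asarray(trace, dtype=float)
        t0 = software_trigger(trace, threshold)
        if t0 is None or t0 + window > len(trace) - 1:
            continue                                     # no usable trigger in this trace
        aligned.append(np.interp(t0 + fine_t, np.arange(len(trace)), trace))
    return fine_t, np.mean(aligned, axis=0)

With traces read by read_ddr() above, this would be called with just the voltage arrays for one channel and a threshold on the rising edge.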

Here is an example of the Vgs voltage of an nFET transistor being driven by an S9013 NPN transistor and a 270Ω pullup (the NPN  base was driven by the square wave from my Elenco FG500 function generator, through a 2kΩ resistor). The drain of the nFET had an 8Ω loudspeaker to a 5V USB power line.

The two traces on the right show a single trace (red) and an average of all the traces (magenta). Both of these are aligned by the Bitscope trigger event, which was substantially before the recording (much more than the minimum 1.25µs, as I’d deliberately delayed to get the next pulse). The left-hand trace is also an average, but after retriggering on the first rising edge at 0.2 V. Note that the jitter in the trigger (and in the signal source) caused enormous rounding of the magenta curve, but retriggering to better than 5ns resolution allows the signals to be properly averaged.

The averaged plot is probably usable in the textbook. I also tried averaging the same traces triggering on the falling edge, to see if that got any more clarity for the ringing when the nFET is turned off, but it ended up looking essentially the same. On my Kikusui 60MHz analog scope, I see the little ripples after the downward transition (a 10MHz damped ripple), but I don’t see the hump in the baseline visible in the Bitscope trace.  I think that hump may be an artifact of taking too much power from the 5V USB line powering the Bitscope (or of coupling back of the inductive spike).

I tried putting in an LC filter on the 5V power line from the Bitscope (a 470µF capacitor to ground, a 200mH inductor, and another 470µF capacitor to ground).  This seems to have cleaned up the problem (this was hours later, and the frequency of the generator was almost certainly different, as I’d played with the tuning potentiometer several times):

Keeping the power supply noise from propagating back to the Bitscope cleans up the signal considerably.  The 10MHz ripple is now clearly visible. I tried zooming in with gnuplot and the resolution looks as good as on my 60MHz analog scope.  The dejittering and averaging has made for a very fine signal.


One problem with this retriggering approach is that it doesn’t really work with two channels—the Bitscope traces for the two channels are separate events, and the only synchronization information is the hardware trigger. I could get a clean Vgs signal and a clean Vds signal, but aligning them is not going to be any better than the jitter of the hardware trigger. I’ll have to think about averaging (or taking the median) of the trigger times relative to the hardware trigger, and using that to align the two traces.
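
If I go that route, the idea would look something like this (just a sketch building on the software_trigger() function above, not working code):

import numpy as np

def median_trigger_offset(traces, threshold):
    """Median software-trigger position (in samples) relative to the hardware trigger.

    Each recorded trace starts at a fixed delay after the hardware trigger, so the
    software-trigger index within a trace is its offset from that trigger.
    """
    offsets = [software_trigger(t, threshold) for t in traces]
    offsets = [o for o in offsets if o is not None]
    return np.median(offsets)

# Hypothetical usage: shift the averaged channel-1 trace by the difference of the
# two channels' median offsets before plotting them on a common time axis:
#     shift = median_trigger_offset(ch1_traces, vds_threshold) - \
#             median_trigger_offset(ch0_traces, vgs_threshold)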

Still, I wonder why the Bitscope designers have not taken advantage of the trigger jitter to do averages of repetitive traces—it allows one to reconstruct signals in software that are much higher bandwidth than the sampling rate of the scope.  These sorts of super-resolution techniques are pretty common, and it only requires better software, not new hardware.

I’ve been thinking that I might even try writing some software that talks directly to the Bitscope hardware (they have published their communication protocol), so that I can do things like capturing the data with full metadata and looking at it with decent plotting software (Matplotlib or gnuplot).  I’m not into doing GUI programming (an infinite time sink), so I’d probably write a Python library providing an application programming interface to the Bitscope and a number of simple single-use programs (like capturing and averaging waveforms) with a configuration file or command-line arguments to set the parameters. Yet another thing to add to my to-do list (but near the end—there are many more important things to work on).
