Gas station without pumps

2014 July 3

Digital filters: infinite-impulse-response filters

Filed under: Uncategorized — gasstationwithoutpumps @ 17:13

Yesterday I wrote a very brief introduction to z-transforms and finite-impulse-response filters.  Today I’ll try to do infinite-impulse-response (IIR) filters.  Obviously, one blog post can’t do justice to the topic, but I’ll try to get far enough that my son can implement some simple filters for his project.  If he needs more detail, the tutorial at CCRMA by Julius Smith is probably worth reading.

The basic idea of an IIR filter is that the transfer function is not a polynomial but a rational function.  For example, the standard “biquad” digital filter unit is the ratio of two quadratic formulas: H(z) = \frac{b_{0} + b_{1}z^{-1} + b_{2}z^{-2}}{1 + a_{1}z^{-1} + a_{2}z^{-2}}.

There are a lot of different ways to implement the biquad section, trading off various properties, but one that has particularly nice numeric properties is the “direct form I” implementation.

Direct form I for implementing a biquad filter section. Taken from https://ccrma.stanford.edu/~jos/filters/Direct_Form_I.html

The transfer function can be checked by looking at the summation in the center and taking z-transforms: Y(z) = X(z)(b_{0} + b_{1}z^{-1} + b_{2}z^{-2}) - Y(z)(a_{1}z^{-1} + a_{2}z^{-2}), followed by simple algebra to rearrange to get the transfer function above. One nice property of this particular implementation for fixed-point computation is that as long as the output stays within range, there are no problems with overflow in intermediate computations.
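To make that concrete, here is a minimal floating-point sketch of the direct-form-I difference equation in Python (the variable names are mine; scipy.signal.lfilter([b0,b1,b2], [1,a1,a2], x) computes the same thing):

def biquad_df1(x, b0, b1, b2, a1, a2):
    """One direct-form-I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
                                        - a1*y[n-1] - a2*y[n-2]"""
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for x0 in x:
        y0 = b0*x0 + b1*x1 + b2*x2 - a1*y1 - a2*y2  # summation node
        x2, x1 = x1, x0   # shift the input delay line
        y2, y1 = y1, y0   # shift the output delay line
        y.append(y0)
    return y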

The left half of the implementation computes the numerator of the transfer function, which may have either two real zeros or a conjugate pair of complex ones (or a single real zero, if b2=0).  It is often useful to look at these zeros in polar form, Ae^{i\theta}, since the resulting filter will have minima at \omega=\theta, and how sharp the minimum is depends on how close A gets to 1. One popular choice for bandpass filters is to put the zeros at 1 and –1, so that there are zeros at DC (ω=0) and at the Nyquist frequency (ω=π), which gives 1-z^{-2} as the numerator. For a low-pass filter, both zeros are often put at the Nyquist frequency, giving (1+z^{-1})^2, while for a high-pass filter, both zeros are often put at DC, giving (1-z^{-1})^2.

The right half of the implementation computes the denominator of the transfer function producing a pair of poles, and it is again useful to look at the poles in polar form. For stability of the filter we need to have the poles within the unit circle—even getting them too close to the unit circle can cause the output to get very large and cause numeric overflow.
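A quick way to look at the roots in polar form is scipy.signal.tf2zpk; the coefficients below are made up for illustration (zeros at 1 and −1, and a complex pole pair at radius √0.9 ≈ 0.95):

import numpy as np
from scipy.signal import tf2zpk

b = [1, 0, -1]       # numerator 1 - z^-2: zeros at 1 and -1
a = [1, -1.8, 0.9]   # made-up denominator; poles must stay inside the unit circle
zeros, poles, gain = tf2zpk(b, a)
for p in poles:
    print("pole: radius", abs(p), "angle", np.angle(p), "radians")
print("stable:", all(abs(p) < 1 for p in poles))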

Note that any rational function with real coefficients can be factored into biquad units with real coefficients, just by factoring the polynomials into a product of roots, and pairing the complex roots in their conjugate pairs. Because the direct implementation of a high degree polynomial is very sensitive to rounding errors in the parameters, it is common to factor the transfer function into biquad units.

About a year ago, I was playing with a biquad filter for an optical pulse monitor, where I wanted a roughly 0.4–2.5 Hz bandpass filter, with a sampling rate of 30Hz, using only small integer coefficients. I put the zeros at 1 and −1, and used a filter-design site on the web to get real-valued coefficients for the denominator. I then approximated the real-valued coefficients as integers divided by powers of two, and tried tweaking the coefficients up and down to get close to the response I wanted, using gnuplot to plot the gain:

Biquad design for a bandpass filter with peak around ω=π/15 using only small integer coefficients.

The code I used for implementing the biquad bandpass filter on a microcontroller was fairly simple:

     y_0 = b0*(x_0 - x_2) - a1*y_1 - a2*y_2;  // summation node (zeros at 1 and -1, so b1=0, b2=-b0)
     Output(y_0);       // high-precision output, with an extra gain of a0
     x_2 = x_1;         // shift the input delay line
     x_1 = x_0;
     y_2 = y_1;         // shift the output delay line
     y_1 = y_0/a0;      // a0 is a power of 2, so this is just a shift

Note that I’m outputting the high-precision output from the summation node, which has an extra gain of a0. The memory elements (x_1, x_2, y_1, y_2) have the same precision as the input—only the summation is done at higher precision. The division in the last line is not really a division but a shift, since I restricted a0 to be a power of 2. This shifting could be done after the multiplications of y_1 and y_2 in the first line, if y_1 and y_2 are carried to higher precision, but then the intermediate results in the summation could get too large—a lot depends on how big the word size is compared to the input values and the a0 scaling.
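To see the a0-scaling trick in isolation, here is an untested Python model of that loop, checked against scipy.signal.lfilter using the equivalent transfer function b0(1-z^-2)/(a0 + a1 z^-1 + a2 z^-2); the coefficient values are made up for illustration, not the ones I actually used:

import numpy as np
from scipy.signal import lfilter

b0, a0, a1, a2 = 15, 32, -58, 27   # made-up small integers, a0 a power of 2

def integer_biquad(samples):
    shift = a0.bit_length() - 1    # a0 == 1 << shift, so dividing by a0 is a shift
    x1 = x2 = y1 = y2 = 0
    out = []
    for x0 in samples:
        y0 = b0*(x0 - x2) - a1*y1 - a2*y2  # high-precision summation node
        out.append(y0)                     # output keeps the extra gain of a0
        x2, x1 = x1, x0
        y2, y1 = y1, y0 >> shift           # memory elements at input precision
    return out

x = np.random.randint(-512, 512, 1000)
reference = a0*lfilter([b0, 0, -b0], [a0, a1, a2], x)
print(np.max(np.abs(np.array(integer_biquad(x)) - reference)))
# difference stays bounded; it comes from the rounding in the shift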

My son is probably interested in two digital filters—one to remove the DC bias (so a high-pass with cutoff frequency around 10–20Hz) and one to pick out the beat from a non-linearly modified signal (absolute value or squared signal). The beat he is interested in picking out is probably around 60–180bpm or 1–3Hz—similar to what I did for picking out heartbeats (I used 0.4–2.5Hz as a rough design goal on the pulse monitor). Unfortunately, as you go to higher sampling frequencies, you need to have higher precision in the filter coefficients.

The biquad coefficient calculator at http://www.earlevel.com/main/2013/10/13/biquad-calculator-v2/ seems pretty good, though it is a bit tedious to rescale and round the coefficients, then check the result. So I wrote a Python script to use the scipy.signal.iirfilter function to design filters, then scaled and rounded the coefficients. The scaling factor had to get much larger as the sampling frequency went up to get a good bandpass filter near 1Hz, otherwise rounding errors resulted in a low-pass filter rather than a bandpass filter (perhaps one of the poles ended up at 1 rather than as a complex pair?). To make a 0.33–3Hz bandpass filter, I needed to scale by 2^30 at 40kHz, 2^21 at 10kHz, 2^16 at 3kHz, 2^15 at 1kHz, 2^11 at 300Hz, and 2^8 at 100Hz. The scaling factors needed for 40kHz sampling would exceed the 32-bit word size, so this approach did not look very promising.

It may be more effective to use two separate biquad sections: a low-pass filter with a fairly high cutoff to downsample the signal, then a bandpass filter at the lower sampling rate. This approach allows using much lower-precision computations. I wrote a little Python program to test this approach also, aiming for a 0.33–3Hz bandpass filter.

Here is the filter response of a low-pass filter, followed by down-sampling and a bandpass filter. Note that the scaling factor is only 2^14, but the filter response is quite nice.

So the overall design of the loudness sensor will probably be a series of filters:

  • high-pass filter at 20Hz to remove DC bias, leaving sound
  • nonlinear operation (squaring or absolute value)
  • low-pass filter at 200Hz
  • down-sample to 500Hz
  • bandpass filter at 0.33–3Hz
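Here is a rough floating-point sketch of that chain using scipy; the 15kHz ADC rate is an assumption (the actual rate hasn't been picked), and a real implementation would use scaled integer coefficients as above:

import numpy as np
from scipy.signal import iirfilter, lfilter

fs = 15000.    # assumed ADC sampling rate
low_fs = 500.  # rate after downsampling

b_hp, a_hp = iirfilter(2, 20./(fs/2), btype='highpass', ftype='bessel')
b_lp, a_lp = iirfilter(2, 200./(fs/2), btype='lowpass', ftype='bessel')
b_bp, a_bp = iirfilter(1, [0.33/(low_fs/2), 3.0/(low_fs/2)],
                       btype='bandpass', ftype='bessel')

def beat_signal(samples):
    sound = lfilter(b_hp, a_hp, samples)  # high-pass: remove DC bias
    loud = np.abs(sound)                  # nonlinear step: absolute value
    smooth = lfilter(b_lp, a_lp, loud)    # low-pass before downsampling
    slow = smooth[::int(fs//low_fs)]      # naive decimation to 500Hz
    return lfilter(b_bp, a_bp, slow)      # 0.33-3Hz bandpass picks out the beat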

One possible problem with this approach—look at the phase change when the beat is not 1Hz. At 0.33Hz, the phase change is about 37°, which (at 37/360 of a 3s period) is about 0.3s—a larger shift in the beat than we’d want to see. We may have to look at more complicated filter design that has smaller phase shifts. (The −49° phase shift at 3Hz is only about 45msec, and so not a perceptual problem.)

Here is a possible filter design for the initial high-pass filter:

Getting a 20Hz cutoff frequency required a somewhat larger scaling factor than for the other filters, but still quite reasonable for 14-bit values on a 32-bit machine.

Here is a version of the two-stage filter design program:

#!/usr/bin/env python

from __future__ import division,print_function

import numpy as np
import scipy
from scipy.signal import iirfilter,freqz,lfilter,tf2zpk
import matplotlib.pyplot as plt

pi = np.pi

fs = 40.e3  # sampling frequency

low_fs = 1000.  # sampling frequency after downsampling
cutoffs = (0.33,3.0)  # cutoff frequencies for bandpass filter in Hz

scale_factor = 2.**14

def scaled(b,a):
    """Scale b and a by scale_factor and round to integers.
    Temporarily increase the scale factor so that b[0] does not round to zero.
    """
    extra_scale=1.
    b_scaled = [int(np.round(scale_factor*c*extra_scale)) for c in b]
    while b_scaled[0]==0:
        extra_scale *= 2
        b_scaled = [int(np.round(scale_factor*c*extra_scale)) for c in b]
    a_scaled = [int(np.round(scale_factor*c*extra_scale)) for c in a]
    
    print (" b=",b, "a=",a)
    z,p,k = tf2zpk(b,a)
    print ( "zeros=",z, "poles=",p, "gain=",k)

    print ("scaled: b=",b_scaled, "a=",a_scaled)
    z,p,k = tf2zpk(b_scaled,a_scaled)
    print ( "zeros=",z, "poles=",p, "gain=",k)        

    return b_scaled,a_scaled    



def plot(b1,a1, b2,a2, frequencies=None):
    """Plot the gain (in dB) and the phase change of the
    concatenation of the filters specified by b1,a1 and b2,a2.

    The b1,a1 filter is designed to run at sampling rate fs.

    The b2,a2 filter is designed to run at sampling rate low_fs.

    Both are designed with the filter type specified in global filter.
    """

    if frequencies is None:
        worN = [pi*10.**(k/200.) for k in range(-1000,0)]
    else:
        worN = [2*pi*f/fs for f in frequencies]
        
    freq,response1 = freqz(b1,a1, worN=worN)
    freq2,response2 = freqz(b2,a2, worN=[f*fs/low_fs for f in worN])
    freq *= fs/(2*pi)

    response = response1*response2
    
    fig=plt.figure()
    plt.title ('{}: b1={} a1={} fs={}\n b2={} a2={} fs={}'.format(filter,b1,a1,fs,b2,a2,low_fs))

    ax1 = fig.add_subplot(111)
    plt.semilogx(freq,20*np.log10(np.abs(response)), 'b')
    plt.ylabel('Amplitude [dB]', color='b')
    plt.grid()
    plt.legend()
    plt.xlabel('Frequency [Hz]')

    ax2 = ax1.twinx()
    angles = [ 180/pi*ang for ang in np.unwrap(np.angle(response)) ]
    plt.plot(freq,angles, 'g')
    plt.ylabel('Phase change [degrees]', color='g')
    plt.show()        


for low_fs in [100., 200., 400., 500., 800., 1000., 2000.]:
    filter='bessel'
    low_cut = 0.4*low_fs
    b1,a1 = iirfilter(2, low_cut/fs, btype='lowpass', ftype=filter, rp=0.1, rs=0.1)
    print (filter, "lowpass for fs=",fs, "cutoff=", low_cut)
    b1_scaled,a1_scaled = scaled(b1,a1)

    b2,a2 = iirfilter(1, [2*f/low_fs for f in cutoffs], btype='bandpass', ftype=filter, rp=0.1, rs=0.1)
    print(filter, "bandpass for fs=", low_fs, "cutoffs=",cutoffs)
    b2_scaled,a2_scaled = scaled(b2,a2)

    plot(b1_scaled, a1_scaled, b2_scaled, a2_scaled, [10.**(k/200.) for k in range(-400,min(401,int(200.*np.log10(low_fs/2))))])

Digital filters: z-transform and finite-impulse-response filters

Filed under: Uncategorized — gasstationwithoutpumps @ 00:32

My son decided to try implementing a loudness sensor on the KL25Z board—last night he designed a circuit to interface an electret microphone to the A-to-D on the KL25Z, wired it up, wrote a small test program, and got an LED to be controlled by the loudness on the microphone.  He still needs to adjust the gain of his amplifier (the sensitivity was too low) and improve his loudness detection algorithm.  Currently the system only responds to very loud noises, but he’d like it to be able to pick out the beat in music even if the music is being played fairly softly.

I think that the best way to do what he wants is with digital filters (and some non-linear computations, like absolute value or squaring), but he doesn’t have any filter theory.  I’m going to try writing a few blog posts to cover the basics of implementing some simple digital filters, so that he can try out his ideas.

This post is going to cover the fundamental mathematical tool underlying most digital signal processing: the z-transform (which is related to the Laplace and Fourier transforms in continuous signal processing).

If we have a semi-infinite sequence of numbers, x_{0}, x_{1}, \ldots, then the z-transform of the sequence X(z) is \sum_{k=0}^{\infty} x_{k} z^{-k}.  For example, for the constant sequence x_{k}=c, the z-transform is X(z) = c \sum_{k=0}^{\infty} z^{-k} = \frac{c}{1-z^{-1}}.  For the sinusoidal function with angular frequency \omega radians per sample, amplitude A, and phase \phi, x_{k} = A e^{i(\omega k +\phi)}, the z-transform is X(z) = \frac{Ae^{i\phi}}{1- e^{i\omega}z^{-1}}.
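These closed forms are easy to spot-check numerically by summing enough terms at any point with |z|>1 (the values below are arbitrary):

import numpy as np

z = 1.5 + 0.3j    # arbitrary point with |z| > 1, so the sums converge
k = np.arange(5000)

c = 0.7           # constant sequence
print(np.sum(c*z**-k), c/(1 - 1/z))

A, omega, phi = 2.0, 0.4, 0.1   # arbitrary sinusoid parameters
x = A*np.exp(1j*(omega*k + phi))
print(np.sum(x*z**-k), A*np.exp(1j*phi)/(1 - np.exp(1j*omega)/z))

Each print statement should show two (approximately) equal values.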

The z-transform is a linear transform—that is, if we take any two sequences x_{k} and y_{k} with z-transforms X(z) and Y(z), then the z-transform of a x_{k} +b y_{k} is a X(z)+ b Y(z).

One thing that is cool about z-transforms is that they can easily represent what happens if we delay a signal.  Consider the two sequences x_{0}, x_{1}, x_{2}, \ldots and 0, x_{0}, x_{1}, \ldots .  If the z-transform of the first sequences is X(z), then the z-transform of the second sequence is just z^{-1} X(z).

A linear filter consists just of multiplications by constants, delay elements, and additions.  The simplest filters (mathematically) are finite-impulse response filters:

Typical finite-impulse-response (FIR) filter

The output y_{k} is computed as y_{k}=a_{0} x_{k} + a_{1}x_{k-1} + a_{2} x_{k-2} + \cdots + a_{5} x_{k-5}, and the z-transform is Y(z) = a_{0} X(z) + a_{1} X(z) z^{-1} + a_{2} X(z) z^{-2} + \cdots + a_{5} X(z) z^{-5}. We can simplify this to Y(z)/X(z) = a_{0} + a_{1} z^{-1} + a_{2} z^{-2} + \cdots + a_{5} z^{-5}, a simple polynomial in z^{-1}. The function Y(z)/X(z) is known as the transfer function of the filter. The transfer function is sometimes written as H(z).

The impulse response of a filter is the output it provides when the input is 1 for the first sample and 0 thereafter. The z-transform of an impulse is the constant function 1, so the z-transform of the impulse response of a filter is just the transfer function for the filter. For the FIR filter in the picture above, the impulse response is a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, a_{5}, 0, 0, \ldots. Because the response is zero after a while, this is a finite impulse response.
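This is easy to check with scipy.signal.lfilter, which implements exactly this difference equation (the tap values here are arbitrary):

import numpy as np
from scipy.signal import lfilter

a_taps = [0.1, 0.2, 0.4, 0.2, 0.05, 0.05]  # arbitrary a_0 ... a_5
impulse = np.zeros(10)
impulse[0] = 1.0
print(lfilter(a_taps, [1.0], impulse))     # FIR taps go in lfilter's numerator
# prints the coefficients themselves, followed by zeros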

When we are designing or analyzing filters, we often want to think about what they do to sinusoids, since we know that adding, multiplying, and delaying sinusoids of a given frequency e^{i \omega t} produces another sinusoid, but with a different amplitude and phase A e^{i (\omega t + \phi)}.

How do we convert the transfer function of the filter into the phasor A e^{i \phi}?

Note that delaying a sinusoid by 1 time unit is the same as subtracting \omega from its phase, so multiplying the z-transform by z^{-1} must be the same as multiplying by e^{-i\omega}, or setting z=e^{i\omega}.

So we can get the phasor for a given angular frequency just by plugging z=e^{i\omega} into the transfer function! This makes plotting the amplitude and phase response of a digital filter very simple.

Let’s look at a very simple filter averaging the last 4 samples, so y_{k}= 1/4 (x_{k} + x_{k-1} + x_{k-2} + x_{k-3}), and H(z)= 1/4 (1+z^{-1}+z^{-2}+z^{-3}). If we plot the gain (the absolute value of Y/X) as a function of frequency f, we can see that this is a low-pass filter:

This is a low-pass filter, with the half-power point, where the gain is 0.707, around 0.11536 cycles per sample, and gain goes to zero at frequencies of 1/4 and 1/2 (ω=π/2 or π), but there is a substantial extra peak around a frequency of 0.36614 cycles/sample. It is possible to design low-pass FIR filters with better characteristics than this one!

The phase change is small for small frequencies (as the output is only delayed about 1.5 samples from the input), but does funny things in the higher frequencies.

Here is the gnuplot source code for the gain plot above:

set title "Averaging 4 adjacent samples"
set key bottom left
set samples 10000

set ylabel "Gain (y/x)"
set logscale y
set yrange [0.009:1.1]

set xlabel "frequency [cycles/sample]"
set logscale x
set xrange [0.001:0.5]

# transfer function
H(z)= (1+z**(-1)+z**(-2)+z**(-3))/4

j=sqrt(-1)
amplitude(omega) = abs(H(exp(j*omega)))
phase(omega) = imag(log(H(exp(j*omega)))) * 180/pi  # phase in degrees

plot amplitude(2*pi*x) notitle
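The same computation in Python, using scipy.signal.freqz to evaluate H at z=e^{iω}, also recovers the half-power frequency quoted above:

import numpy as np
from scipy.signal import freqz

b = np.ones(4)/4                        # average of the last 4 samples
f = np.linspace(0.001, 0.5, 5000)       # frequency in cycles/sample
w, h = freqz(b, [1.0], worN=2*np.pi*f)  # evaluate H(z) at z = exp(i*omega)
print(f[np.argmin(np.abs(np.abs(h) - np.sqrt(0.5)))])  # ~0.115 cycles/sample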

I rarely use FIR filters (other than when I’m being very lazy and just averaging a few samples for downsampling), but usually use infinite-impulse-response filters, which I’ll leave for a subsequent post.

2014 June 17

Segmenting noisy signals revisited

This is the fourth in a series—the three previous are

  • Segmenting noisy signals
  • More on segmenting noisy signals
  • Segmenting filtered signals

A student (well, junior specialist this year, but student again in the fall) and I were trying to write up a journal paper describing the method and implementation, and I realized that the presentation in this blog was not clear enough for him to rewrite the math, and that I had left in some bits of fluff that were confusing. In this blog post I’ll try to merge the three previous posts into a single, coherent presentation using a Bayesian approach.

The segmenter uses a divide-and-conquer approach:

segment( time interval=[s,t) )
    if the time t-s is too short, return no breakpoints.
    Select the best breakpoint, i, to divide into two intervals [s,i) [i,t)
    If the breakpoint is poor, reject it and return no breakpoints
       otherwise return segment(s,i) + [i] + segment(i,t)

There are two essential parts to this algorithm: selecting the breakpoint and deciding whether to accept it.  Both rely on a Bayesian scoring scheme.  For the score, we’ll need two likelihood models, that is, computable functions that compute the probability of seeing the observed data if it is generated by the corresponding model.  To score a potential breakpoint, i, we compare a model with the breakpoint (so that [s,i) and [i,t) are modeled independently) to one where the entire interval is modeled as a unit.

The simplest model, and one that has worked well for DNA data from the nanopore, is that each segment consists of a constant current with independent Gaussian noise added to each sample.  The likelihood for the data x_{s}, \ldots , x_{t-1} is just \prod_{s\leq j < t} P_{\mu,\sigma}(x_{j}), where µ and σ are the mean and standard deviation of the Gaussian generating the samples in [s,t). I’ll abbreviate this product as P_{\mu,\sigma}(s:t).

The log of the likelihood ratio for a breakpoint at position i is L_{i}=\ln \frac{P_{\mu_{1},\sigma_{1}}(s:i) P_{\mu_{2},\sigma_{2}}(i:t)}{P_{\mu_{0},\sigma_{0}}(s:t)} = \ln P_{\mu_{1},\sigma_{1}}(s:i) +\ln P_{\mu_{2},\sigma_{2}}(i:t)- \ln P_{\mu_{0},\sigma_{0}}(s:t),

where µ0 and σ0 are the mean and standard deviation for the whole window from s to t, µ1 and σ1 are the mean and standard deviation for s to i, and µ2 and σ2 are the mean and standard deviation for i to t.  Because we are using maximum-likelihood estimates for all the µ and σ values, we are guaranteed that the log-likelihood Li is non-negative.

The log-probability for a single sample with a Gaussian distribution is
\ln P_{\mu,\sigma}(x) = \ln \left( \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \right) = -0.5 \ln(2\pi) -\ln(\sigma) - \frac{(x-\mu)^2}{2\sigma^2}
When we choose the optimal μ and σ for a segment, we choose them so that σ² is the variance, that is, the average value of (x-\mu)^2. So when we sum the log probabilities over a segment of length n, using μ and σ optimized for that segment, we get n( -0.5 \ln(2\pi) -\ln(\sigma) - 0.5).

That means that the log likelihood ratio we want simplifies to L_{i} = (s-i)\ln \sigma_{1} + (i-t)\ln \sigma_{2} - (s-t)\ln \sigma_{0}. This simplification works for any modeling that results in independent Gaussian residuals, so we can do stepwise sloping segments by doing linear regression almost as easily as doing stepwise constant segments.

Choosing the best breakpoint is now easy—we just pick the value for i that maximizes Li.
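Since everything reduces to means and variances, Li can be computed for every candidate i in one pass using cumulative sums. Here is a sketch for the stepwise-constant Gaussian model (min_len keeps the candidate breakpoints away from the window edges, where a tiny segment can have spuriously zero variance):

import numpy as np

def best_breakpoint(x, s, t, min_len=4):
    """Return (i, L_i) maximizing the log-likelihood ratio for splitting
    [s,t) into [s,i) and [i,t).  Sketch only: assumes the variances are
    all positive (non-constant data, long enough window)."""
    seg = np.asarray(x[s:t], dtype=float)
    n = len(seg)
    cum = np.cumsum(seg)
    cum2 = np.cumsum(seg*seg)
    i = np.arange(min_len, n - min_len + 1)  # candidate left-segment lengths
    var1 = cum2[i-1]/i - (cum[i-1]/i)**2                               # seg[:i]
    var2 = (cum2[-1]-cum2[i-1])/(n-i) - ((cum[-1]-cum[i-1])/(n-i))**2  # seg[i:]
    var0 = cum2[-1]/n - (cum[-1]/n)**2                                 # whole window
    L = -0.5*(i*np.log(var1) + (n-i)*np.log(var2) - n*np.log(var0))
    j = np.argmax(L)
    return s + i[j], L[j]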

Deciding whether to keep or reject the breakpoint requires looking not just at the likelihood ratio, but at the posterior-odds ratio, which is the likelihood ratio times the prior odds ratio. The prior odds ratio is a parameter we can set independent of the data. For example, if we have an estimate of the expected number of segments per second (sps) for a particular experimental setup and our sampling frequency is fs, then the prior probability of a step at each sample is \mbox{\it sps}/f_{s}, the prior odds ratio is \frac{ \mbox{\it sps} }{f_{s}-\mbox{\it sps}}, and the score we want to use (log posterior odds ratio) is L_{i} + \ln \mbox{\it sps} - \ln (f_{s}-\mbox{\it sps}).

If we are willing to accept no more than FP false positives per second, then we want the probability of a score arising by chance at a sample to be less than \mbox{FP}/ f_{s}.

Although I have no proof for it, if the samples are all drawn independently from a Gaussian distribution, empirically the log-likelihood scores seem to be perfectly distributed on an exponential distribution P(L_{i} > x) = e^{-x}. Note that this distribution does not depend on the length of the window, μ, or σ, though I did avoid looking at the first and last few positions in the window, in case there are edge effects.

However, if we low-pass filter the Gaussian noise, then the samples are no longer independent, and the log-likelihood scores go up. The distribution with a cutoff frequency of k times the Nyquist frequency is empirically P(L_{i} > x) = e^{-k x}. (The Nyquist frequency is half the sampling frequency and is the highest frequency that can be represented by the samples.)

If these empirically observed distributions are correct, then we have
\frac{\mbox{FP}}{f_{s}} > P(\mbox{score} > x) = P\left(L_{i} > x -\ln \mbox{\it sps} + \ln (f_{s}-\mbox{\it sps})\right) = e^{-k x} \left(\frac{\mbox{\it sps} }{f_{s}-\mbox{\it sps}}\right)^{k}
that is
x > \ln \frac{\mbox{\it sps}}{f_{s}-\mbox{\it sps}} - \frac{1}{k}\ln\left(\frac{\mbox{FP}}{f_{s}}\right).
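In code, the acceptance threshold is a one-liner; sps and FP in the example call are the values I report trying in the next paragraph, while fs and k are made up:

import numpy as np

def score_threshold(sps, fs, FP, k=1.0):
    """Threshold on the log posterior odds; k is the low-pass cutoff as a
    fraction of the Nyquist frequency (k=1 for unfiltered Gaussian noise)."""
    return np.log(sps/(fs - sps)) - np.log(FP/fs)/k

print(score_threshold(sps=10., fs=5000., FP=1e-4, k=0.25))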

This is the same result as in More on segmenting noisy signals and Segmenting filtered signals, but there I made the mistake of changing parameterization from FP (false positives per second) to \mbox{\it over} = \frac{\mbox{FP}}{\mbox{\it sps}} (false positives per expected segment), which makes the parameterization more confusing, and I was not clear about where the scaling went for the filtering.

On the example I’ve been checking, I get good results with sps=10 segments/sec and FP=1e-4 false positives/sec, but neither parameter is very critical—changing them by a factor of 1000 does not change the results much.  Dropping FP to 1e-200 does cause segments to be obviously missed.

2014 June 3

EKG lab not quite done

Filed under: Circuits course — gasstationwithoutpumps @ 21:13

Students came to lab prepared today! Only one of the five groups came to lab without a complete schematic and layout.  The other four groups all breadboarded their circuits and confirmed that they would work as designed (the only changes I suggested were changing from electrolytic capacitors to ceramic capacitors in the high-pass filter and perhaps lowering the corner frequency of the low-pass to remove more of the 60Hz noise—and I made those suggestions only after students had their circuits working).

One group had spent a lot of time debugging their circuit and not making any progress—they never saw anything except 2mV 60Hz noise.  I helped them debug and tracked down the problem fairly quickly: they had plugged their wires into the wrong set of headers on the KL25Z board, so were not delivering any power to the board.  I had not noticed this by looking at the wiring, but by trying to measure the power at the op amp and instrumentation amp, checking for loose wires on the breadboard.  When neither had power, I got suspicious and looked at the other end of the power wires.  Once power was delivered to the board, everything worked fine. (I made the same mistake this evening when I tried doing another recording at home, but it only took me a minute or two of wondering why I had no signal to realize my mistake.)

I wore EKG electrodes myself today, so that students having trouble getting a signal could check to see if their amplifiers worked with my electrodes (which I’d tested with my own board). A few students had problems with their electrodes, and confirmed that their amplifiers were working by using my electrodes, after which they managed to get their electrodes working.  I believe that 4 out of 5 groups managed to record signals from themselves, but no one got the soldering finished.

I’m going to suggest to groups that they might want to make EKG boards for both members of the team, so that they can take them home and demonstrate them to friends or relatives—it is a kind of cool thing to be able to make. It looks like I’m going to be missing the undergrad poster symposium on Thursday anyway, helping the group that came to lab unprepared today, so the students might as well have the opportunity to make another EKG to take home. (I’ll bring in some extra op-amp chips, if they need them, though there should be enough extras in all the kits, since I ordered enough for each student to work alone all quarter, if necessary.)

One student had an unusual EKG pattern in both leads I and II, but had had EKGs done professionally fairly recently and they found nothing wrong. I pointed the student to a simple book on EKGs (Ken Grauer’s A 1st Book on ECGs—2014) that I just had the library buy, for help in figuring out what the signals meant.

One thing I played with while they were putting their circuits together was doing digital filtering of the waveform to remove low frequency baseline drift and to reduce 60Hz noise.  I used scipy.signal.iirfilter and scipy.signal.filtfilt, pretty much as I did for looking at oscillometric blood pressure measurements.  The 60Hz noise was larger in the lab than at home (a lot more stuff plugged in), but didn’t really need to be filtered out:

Signal from EKG protoboard in the lab, with no digital filtering. (sorry about the oversize bounding box: I’m not sure how to fix it and keep the grid square)

Same signal, filtered with bandpass 0.1Hz–88.2Hz followed by notch filter 57Hz–63Hz. (5th order Bessel filters, applied forwards and backwards with filtfilt)

The filtering cleans things up a little, but it is not worth the trouble of implementing the digital filters and giving them to the students. Of course, some of the students were getting a lot more 60Hz pick up than I was, perhaps because of their location in the room, perhaps because of higher gain (everyone ended up with gains between 1000 and 2000, though, so that probably isn’t the problem), perhaps coupled through their wiring (though people were pretty good about twisting their leads together).

Here is the quick-and-dirty script I used for filtering:

#!/usr/bin/env python2.7

from __future__ import print_function, division

import argparse
import sys
import numpy as np
from itertools import izip
import scipy
import scipy.signal

sampling_freq = 20	# if not read from input

times=[]
values=[]
for line in sys.stdin:
    if line.startswith('#'):
        print (line.rstrip())
        if line.startswith('# Recording every'):
            # extract from input: # Recording every 0.05 sec (20.0 Hz)
            fields = line.split('(')
            sampling_freq = float(fields[1].split()[0])
        continue
    fields = map(float, line.split())
    if len(fields)==2:
        times.append(fields[0])
        values.append(fields[1])

high_pass_cutoff = 0.1	# Hz
hp_over_Nyquist = high_pass_cutoff/(0.5*sampling_freq)

# low_pass_cutoff = 0.49*sampling_freq
low_pass_cutoff = min(100, 0.49*sampling_freq)
lp_over_Nyquist = low_pass_cutoff/(0.5*sampling_freq)


bess_b,bess_a = scipy.signal.iirfilter(5, 
			Wn=[hp_over_Nyquist,lp_over_Nyquist], 
			btype="bandpass",
			ftype='bessel')
filtered = scipy.signal.filtfilt(bess_b,bess_a,values)


mains_freq = 60.0 # Hz
if sampling_freq> 2*mains_freq:
    mains_over_Nyquist = mains_freq/(0.5*sampling_freq)

    bess_b,bess_a = scipy.signal.iirfilter(5, 
			Wn=[mains_over_Nyquist*0.95, mains_over_Nyquist*1.05],
			btype="bandstop",
			ftype='bessel')
    filtered = scipy.signal.filtfilt(bess_b,bess_a,filtered)


print("# Bessel bandpass filtered to {:.3g}Hz to {:.3g}Hz".format(high_pass_cutoff, low_pass_cutoff))
print("# followed by notch {:.3g}Hz -- {:.3g}Hz".format(mains_freq*0.95, mains_freq*1.05))

for t,n in izip(times,filtered):
    print("{:.7f}\t{:.6f}".format(t,n))

2014 June 1

Blood pressure monitor

I thought of a new variant on the pressure sensor lab for the circuits course: a blood pressure monitor.  I happen to have a home blood pressure monitor with a cuff and squeeze bulb that can be detached from the monitor and hooked up to the MPX2053DP pressure sensor instead.  With this setup and an instrumentation amp, I can easily record the pressure in the cuff and observe the oscillations in the cuff pressure that are used for oscillometric blood pressure measurement.

Cuff pressure measurements using an MPX2053DP sensor, an instrumentation amp, and a KL25Z microcontroller board running PteroDAQ software.

The fluctuations can be observed by removing a baseline (fitting an exponential decay to the dropping pressure, for example, and then subtracting it out) or by using some sort of digital filter. I tried using a 0.3Hz–6Hz bandpass filter (4th order Bessel filter, applied using scipy.signal.filtfilt):

Oscillations corresponding to the pulse are very visible when the slow pressure decay is filtered out. I’ve zoomed in on just the time of the dropping pressure, marked with lines on the previous plot.

The pulse is very easy to see (about 40.4bpm in this sample—low even for me), but figuring out the systolic and diastolic pressure from the fluctuations is a bit messy:

The oscillometric method of measuring blood pressure with an automated cuff yields valid estimates of mean pressure but questionable estimates of systolic and diastolic pressures. Existing algorithms are sensitive to differences in pulse pressure and artery stiffness. Some are closely guarded trade secrets. Accurate extraction of systolic and diastolic pressures from the envelope of cuff pressure oscillations remains an open problem in biomedical engineering.  
[Charles F Babbs, Oscillometric measurement of systolic and diastolic blood pressures validated in a physiologic mathematical model, BioMedical Engineering OnLine 2012, 11:56 doi:10.1186/1475-925X-11-56 http://www.biomedical-engineering-online.com/content/11/1/56]

One shortcut is to find the maximum amplitude of the envelope of the oscillations, and look at the pressures at fractions of the amplitude:

However, it has been shown that the pressure, Pm, at which the oscillations have the maximum amplitude, Am, is the mean arterial pressure (MAP). Empirical and theoretical work has shown that the systolic and diastolic pressures, Ps and Pd respectively, occur when the amplitudes of oscillation, As and Ad respectively, are a certain fraction of Am:

  • Ps is the pressure above Pm at which As/Am = 0.55
  • Pd is the pressure below Pm at which Ad/Am = 0.85

[Dr. Neil Townsend, Medical Electronics, Michaelmas Term, 2001, http://makezine.com/go/obpm]

I’m too lazy right now to try to come up with a good envelope follower and find the times for 55% and 85% of peak. The peak seems to be around 48.3s in this plot with magnitude of 0.336kPa and a predicted MAP of 16.28kPa (122mm Hg).  I based the MAP on low-pass filtering the signal to remove the fluctuations and make a good smooth curve for finding the systolic and diastolic pressure, once times on the envelope are picked.  Again, a 4th order Bessel filter applied with filtfilt looks good:

Low-pass filtering removes the fluctuations, so that picking two time points can give clean pressure readings for the systolic and diastolic pressure.

From the standpoint of the course, the filtering to get a good signal is probably too difficult, but students could record the cuff pressure and observe the fluctuations. They might even be able to do some crude RC filtering, though this is really an application that calls out for digital filtering.
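For anyone who wants to automate the quoted fractional-amplitude rule, here is a crude, untested sketch, with a Hilbert-transform magnitude standing in for the envelope follower I was too lazy to build:

import numpy as np
from scipy.signal import hilbert

def oscillometric_pressures(cuff, osc):
    """cuff: low-pass-filtered cuff pressure (the smooth curve above);
    osc: bandpass-filtered oscillations.  Assumes the cuff pressure is
    falling, so pressures above MAP come before the envelope peak."""
    env = np.abs(hilbert(osc))   # crude envelope of the oscillations
    i_m = np.argmax(env)         # peak oscillation -> mean arterial pressure
    Am = env[i_m]
    i_s = np.argmin(np.abs(env[:i_m] - 0.55*Am))        # As/Am = 0.55, above Pm
    i_d = i_m + np.argmin(np.abs(env[i_m:] - 0.85*Am))  # Ad/Am = 0.85, below Pm
    return cuff[i_s], cuff[i_m], cuff[i_d]              # Ps, Pm (MAP), Pd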

