
# Prediction and negative-delay filters: Five things you should know

**The Filter Wizard aka Kendall Castor-Perry kicks off 2012 by posing five central questions whose answers can help designers navigate through the variety of prediction and negative-delay filtering solutions.**

All systems, including filters, are causal. That means they can't produce a response to an (unpredictable) stimulus before that stimulus arrives. So, how the heck can you build a filter that 'predicts' something? Well, it all depends on how high you set your sights for the quality and the relevance of that prediction.

So, riffing on the Five Things You Should Know format that was very popular last time, let's ask five central questions whose answers can help us navigate through this filtery quagmire.

**How do filters delay signals?**

Information can be impressed on a signal in many ways, and it always takes a finite amount of time to pass through a processing system. You'll be very familiar with the concept of the propagation delay of a digital block. It's simply the time elapsed between some state change at the input to the corresponding state change at the output of that block. The digital-minded reader's first thought might be of a stream of '1's and '0's, expressed physically as detectably different voltage or current levels. Propagation delay is fine for such signals, but not meaningful when we consider analog signals that don't really have defining features associated with particular points in time.

We often lowpass-filter signals and data sequences to get rid of 'noise': high-frequency variations that we've decided have no meaning and are getting in the way of observing a more important underlying feature. The filtering process can lend our observation a rather 'heavy touch', though; it's definitely a case of the observer affecting the observation. The most obvious consequence of conventional filtering, when we view the response graphically, is that there's clearly a time delay between variations in the input signal and corresponding variations in the filtered output. We'll see this clearly on a test signal in a moment when we look at some examples.

**How do we quantify this form of delay?**

This 'lag', between the input signal to the filter (or any other linear signal processing block) and the resulting output, is closely linked to the **group delay**, which is equal to (minus) the derivative of the phase response with frequency. Use sensible units for this; if you measure the phase in radians, and express the frequency in its angular form of radians per second, then (radians) divided by (radians per second) gives you a handy answer in seconds. Or you could use 'cycles'; a cycle is one full rotation, or 360 degrees. Phase difference measured in cycles, divided by the difference in regular frequency measured in Hertz (the same as cycles per second), also gives you an answer in seconds.
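To make those units concrete, here's a minimal Python sketch (my own illustration, not from the article) that estimates group delay numerically as minus the phase slope, for a hypothetical one-pole analog lowpass with a 1 kHz corner; a one-pole section like this has a DC group delay of exactly 1/ωc seconds:

```python
import cmath
import math

def group_delay(H, w, dw=1e-6):
    """Estimate group delay in seconds: minus d(phase)/d(omega)."""
    return -(cmath.phase(H(w + dw)) - cmath.phase(H(w))) / dw

wc = 2 * math.pi * 1000.0              # 1 kHz corner (hypothetical example)
H = lambda w: 1 / (1 + 1j * w / wc)    # one-pole analog lowpass, evaluated at s = jw

# Near DC the estimate converges on the analytic value 1/wc.
print(group_delay(H, 1.0))             # ~ 1/wc, about 159 microseconds
```

The finite-difference step is crude but adequate here; a real measurement would sweep frequency and unwrap the phase first.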

It's tempting to ask, then, why we don't just design a filter that doesn't have any group delay, if we want to avoid this lag. If you've read my columns before, you'll probably recognize the 'jeopardy setup' in that sentence. Because, you guessed it, it's not as easy as that. If you look up or calculate the behaviour of the 'standard' flavors of lowpass filter response (by and large, the ones named after dead mathematicians) you'll find that their group delay is stubbornly positive, right down to zero frequency. We need to go a little off-piste here.

**Can we eliminate (or more than eliminate) such delay?**

The strict answer is 'no', if you want it to be zero at **every** frequency. But there's an exact technique for developing a compensating filter that, when cascaded with the original, can give you zero or even negative group delay **at DC**. This can be very useful, as we'll see. You don't need to engage in any trial and error; the Million Monkeys can stay in their cage today!

Let's say you have some lowpass transfer function **H** that has unity gain at DC. It's straightforward to show that a new transfer function **H'** = **2-H** is also unity gain at DC, and has a group delay at DC that has the same magnitude as that of **H**, but **negative**. If you cascade **H** and **H'** (i.e. connect them in series), you'll get an overall transfer function, let's call it **H1**, which has unity gain at DC and zero group delay at DC. For any linear transfer function in the s- or z-domain, **H1** is simply equal to **HH'**, i.e. **H1 = H(2-H)**. This is **always** realizable if **H** is realizable, whatever the type of filter.
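As a quick numerical check (my own sketch, not from the article), take a hypothetical one-pole digital lowpass H(z) = (1-a)/(1-a z^-1): its DC group delay is a/(1-a) samples, the compensator 2-H shows the same magnitude with the opposite sign, and the cascade H(2-H) nulls out at DC:

```python
import cmath

def gd(H, w, dw=1e-7):
    """Group delay in samples: minus d(phase)/d(omega) of H(e^{jw})."""
    return -(cmath.phase(H(w + dw)) - cmath.phase(H(w))) / dw

a = 0.9                                         # pole of an illustrative one-pole IIR lowpass
H  = lambda w: (1 - a) / (1 - a * cmath.exp(-1j * w))
Hp = lambda w: 2 - H(w)                         # the compensating function H' = 2 - H
H1 = lambda w: H(w) * Hp(w)                     # the cascade H1 = H(2 - H)

w0 = 1e-4                                       # a frequency just above DC, rad/sample
print(gd(H,  w0))   # ~ +a/(1-a) = 9 samples
print(gd(Hp, w0))   # ~ -9 samples
print(gd(H1, w0))   # ~ 0
```

Group delays of cascaded sections simply add, which is why the +9 and -9 cancel.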

This might appear to be a bizarre thing to do. Because the function **H'** has the same order as **H** (whether we are using analog or digital filters), you can see that combining them doubles the 'size' of the filter and therefore the resources needed to implement it. Perhaps less easy to visualize is that it's likely to significantly degrade the attenuation performance of your filter. If **H** is a lowpass function that has unity gain at DC, and a gain of unity or below at all other frequencies, then the function **2-H** has a magnitude that can swing between 1 and 3, i.e. it could introduce a 'bump' of up to 9.5 dB in the response. If this falls in the stopband of the overall filter, then all that happens is a degradation of attenuation. If the bump is in the passband, then the overall passband response of the cascade will be very different from that of **H** alone.
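To put a number on that bump, here's a short sketch (my own, using an illustrative one-pole digital lowpass rather than the article's Butterworth) that scans |2-H| across frequency; for this example the lift tops out around 5.8 dB, safely inside the 20·log10(3) = 9.54 dB worst case:

```python
import cmath
import math

a = 0.9                                         # illustrative one-pole IIR lowpass
H = lambda w: (1 - a) / (1 - a * cmath.exp(-1j * w))

# Scan |2 - H| over 0..pi; since |H| <= 1, the lift is bounded by 3 (9.54 dB).
bump = max(abs(2 - H(k * math.pi / 1000)) for k in range(1001))
print(20 * math.log10(bump))                    # about 5.8 dB for this example
```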

Here's a simple example. Let's start with an n=2 Butterworth filter at 10 kHz for **H**, implemented digitally at a sample rate of 100 ksps. To design the filters and get the plots, I used a new release (available from February 2012) of the Filter tool for PSoC Creator, which gave the following coefficients for **H**, and the amplitude and group delay plot in Figure 1:

Final coefficients for the biquad filter, in the order A0, A1, A2, B1 and B2:

```
A0 =  0.0674552917480469
A1 =  0.134910583496094
A2 =  0.0674552917480469
B1 = -1.14298057556152
B2 =  0.412801742553711
```
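As a sanity check on those coefficients (my own sketch, assuming the common biquad convention H(z) = (A0 + A1 z^-1 + A2 z^-2)/(1 + B1 z^-1 + B2 z^-2) with the signs exactly as listed), evaluating the response confirms unity gain at DC and the expected -3 dB Butterworth point at 10 kHz:

```python
import cmath
import math

# Biquad coefficients for H as listed above (A0, A1, A2, B1, B2).
A0, A1, A2 = 0.0674552917480469, 0.134910583496094, 0.0674552917480469
B1, B2 = -1.14298057556152, 0.412801742553711

def H(w):
    """H(e^{jw}), assuming H(z) = (A0 + A1/z + A2/z^2) / (1 + B1/z + B2/z^2)."""
    z1 = cmath.exp(-1j * w)
    return (A0 + A1 * z1 + A2 * z1 * z1) / (1 + B1 * z1 + B2 * z1 * z1)

fs = 100e3                                  # 100 ksps sample rate
w3 = 2 * math.pi * 10e3 / fs                # 10 kHz corner, in rad/sample
print(abs(H(0)))                            # ~ 1 (unity gain at DC)
print(20 * math.log10(abs(H(w3))))          # ~ -3 dB (Butterworth corner)
```

The -3 dB result landing right on the 10 kHz corner is consistent with a bilinear-transform design with pre-warping.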
