Practical Considerations for Antiderivative Anti-Aliasing


First, what is aliasing? This question brings us to the heart of digital signal processing. A digital system with sample rate fs can only faithfully represent signals up to the frequency fs/2, often referred to as the Nyquist frequency, named after electrical engineer Harry Nyquist of Bell Labs. Any frequency content above the Nyquist frequency will be folded back into the representable range. For example, if a digital system with sample rate fs = 48 kHz attempts to reproduce a signal at 50 kHz, the signal will instead appear at 50 − 48 = 2 kHz. This spurious component is known as an “aliasing artifact” and is typically considered undesirable, particularly since the artifact is not harmonically related to the original signal.
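As a quick illustration, here is a small helper (my own, not from any library) that computes where a frequency lands after sampling: the spectrum repeats every fs, so we reduce modulo fs and then reflect anything above Nyquist back down.

```python
def aliased_frequency(f: float, fs: float) -> float:
    """Return the frequency at which a tone at f appears when sampled at fs.

    Assumes an ideal sampler: the spectrum repeats every fs, and anything
    between fs/2 and fs is mirrored back below the Nyquist frequency.
    """
    nyquist = fs / 2
    f = f % fs          # the sampled spectrum is periodic in fs
    if f > nyquist:
        f = fs - f      # reflect back over the Nyquist frequency
    return f
```

Feeding in the 50 kHz / 48 kHz example from above reproduces the 2 kHz alias; a tone at 26 kHz would fold to 48 − 26 = 22 kHz.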

Anti-Aliasing with Oversampling

Now that we have some understanding of what aliasing is, let’s look at the most common method for suppressing aliasing artifacts: oversampling. Recall why nonlinear processing creates aliasing in the first place: a nonlinearity generates harmonics of its input, and any harmonics that land above the Nyquist frequency fold back down as inharmonic artifacts. Oversampling attacks this by running the nonlinearity at a higher sample rate, where there is more spectral headroom before Nyquist, then filtering and returning to the original rate.
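To make the idea concrete, here is a minimal, toy sketch of 2× oversampling around a hard clipper: zero-stuff the signal, lowpass it, apply the nonlinearity at the higher rate, lowpass again, and keep every other sample. All names are my own, and the windowed-sinc filter is deliberately simple — real oversamplers use polyphase structures and better filter designs.

```python
import math

def windowed_sinc_lowpass(num_taps: int, cutoff: float) -> list[float]:
    """Hann-windowed sinc FIR; cutoff is normalized (1.0 = Nyquist of the
    oversampled rate). A toy design for illustration only."""
    taps = []
    m = num_taps - 1
    for n in range(num_taps):
        x = n - m / 2
        sinc = cutoff if x == 0 else math.sin(math.pi * cutoff * x) / (math.pi * x)
        window = 0.5 - 0.5 * math.cos(2 * math.pi * n / m)
        taps.append(sinc * window)
    return taps

def convolve(signal, taps):
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= n - k < len(signal):
                acc += h * signal[n - k]
        out.append(acc)
    return out

def clip_with_2x_oversampling(signal, taps):
    # 1) upsample: insert zeros (gain of 2 restores amplitude), then lowpass
    up = []
    for s in signal:
        up += [2.0 * s, 0.0]
    up = convolve(up, taps)
    # 2) apply the nonlinearity at the higher rate, where harmonics have
    #    more room before they hit the (oversampled) Nyquist frequency
    up = [max(-1.0, min(1.0, s)) for s in up]
    # 3) lowpass again to remove distortion products above the original
    #    Nyquist, then discard every other sample
    up = convolve(up, taps)
    return up[::2]
```

With cutoff = 0.5 (the original Nyquist, expressed relative to the 2× rate), this suppresses the harmonics that would otherwise fold back — at the cost of running the filter and nonlinearity at twice the rate.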

Antiderivative Antialiasing

Antiderivative anti-aliasing (abbreviated as ADAA) was first introduced in a 2016 DAFx paper by Julian Parker, Vadim Zavalishin, and Efflam Le Bivic of Native Instruments. The technical background for ADAA was then further developed in an IEEE paper by Stefan Bilbao, Fabián Esqueda, Parker, and Vesa Välimäki. I won’t go too much into the mathematical details of ADAA, but the basic idea is that instead of applying a nonlinear function to a signal directly, we apply the antiderivative of that function and then use discrete-time differentiation, resulting in a signal with suppressed aliasing artifacts.

1st-order ADAA

ADAA can be implemented as follows: say we have a nonlinear function, y[n] = f(x[n]), with an antiderivative F₁(x). A first-order ADAA version of the function can be written as:

y[n] = (F₁(x[n]) − F₁(x[n−1])) / (x[n] − x[n−1])

When x[n] ≈ x[n−1], this divided difference becomes ill-conditioned, so the usual remedy is to fall back to evaluating f at the midpoint: y[n] = f((x[n] + x[n−1]) / 2).
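Here is a minimal Python sketch of first-order ADAA applied to a hard clipper (class and variable names are my own; the antiderivative of the clipper is x²/2 inside [−1, 1], continued linearly outside):

```python
def clip(x):
    return max(-1.0, min(1.0, x))

def clip_F1(x):
    # Antiderivative of the hard clipper: x^2/2 inside [-1, 1],
    # |x| - 1/2 outside (chosen so the two pieces meet at x = +/-1).
    return 0.5 * x * x if abs(x) <= 1.0 else abs(x) - 0.5

class ADAA1:
    """First-order ADAA for a memoryless nonlinearity f with
    antiderivative F1. A sketch after Parker et al. (2016)."""

    def __init__(self, f, F1, tol=1e-5):
        self.f, self.F1, self.tol = f, F1, tol
        self.x1 = 0.0  # previous input sample

    def process(self, x):
        if abs(x - self.x1) < self.tol:
            # ill-conditioned divided difference: evaluate f at the midpoint
            y = self.f(0.5 * (x + self.x1))
        else:
            y = (self.F1(x) - self.F1(self.x1)) / (x - self.x1)
        self.x1 = x
        return y
```

A sanity check on the linear region: for |x| ≤ 1 the divided difference reduces to (x[n] + x[n−1]) / 2, which is exactly the half-sample delay that first-order ADAA is known to introduce.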

2nd-order ADAA

A second option is to use higher-order antiderivatives. Let’s say that our nonlinear function y[n] = f(x[n]) has a second antiderivative, which we’ll call F₂(x). Then second-order ADAA can be written as:

y[n] = (2 / (x[n] − x[n−2])) · (D₁(x[n], x[n−1]) − D₁(x[n−1], x[n−2]))

where D₁(a, b) = (F₂(a) − F₂(b)) / (a − b), with similar ill-conditioning fallbacks whenever a denominator becomes small.
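A corresponding sketch for second-order ADAA, again using the hard clipper (the second antiderivative is x³/6 inside [−1, 1]; names are my own, and the ill-conditioning fallback here is a simplification of the published one, which is more involved):

```python
def clip(x):
    return max(-1.0, min(1.0, x))

def clip_F1(x):
    # first antiderivative of the hard clipper
    return 0.5 * x * x if abs(x) <= 1.0 else abs(x) - 0.5

def clip_F2(x):
    # second antiderivative of the hard clipper, with constants chosen
    # so the pieces meet at x = +/-1
    if abs(x) <= 1.0:
        return x ** 3 / 6.0
    if x > 1.0:
        return 0.5 * x * x - 0.5 * x + 1.0 / 6.0
    return -0.5 * x * x - 0.5 * x - 1.0 / 6.0

class ADAA2:
    """Second-order ADAA sketch (after Bilbao et al., 2017)."""

    def __init__(self, f, F1, F2, tol=1e-5):
        self.f, self.F1, self.F2, self.tol = f, F1, F2, tol
        self.x1 = 0.0  # x[n-1]
        self.x2 = 0.0  # x[n-2]

    def _D1(self, a, b):
        if abs(a - b) < self.tol:
            return self.F1(0.5 * (a + b))  # limit of the divided difference
        return (self.F2(a) - self.F2(b)) / (a - b)

    def process(self, x):
        if abs(x - self.x2) < self.tol:
            # ill-conditioned outer difference: drop back to a first-order
            # step between the midpoint of (x, x2) and x1 (simplified)
            xb = 0.5 * (x + self.x2)
            if abs(xb - self.x1) < self.tol:
                y = self.f(0.5 * (xb + self.x1))
            else:
                y = (self.F1(xb) - self.F1(self.x1)) / (xb - self.x1)
        else:
            y = 2.0 * (self._D1(x, self.x1) - self._D1(self.x1, self.x2)) \
                / (x - self.x2)
        self.x1, self.x2 = x, self.x1
        return y
```

In the linear region this scheme reduces to (x[n] + x[n−1] + x[n−2]) / 3: a full sample of group delay plus mild high-frequency rolloff, which is the known cost of second-order ADAA on the linear part of the signal.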

Tanh Distortion with ADAA

So far, we’ve only looked at using ADAA with the hard-clipping nonlinearity. Let’s move on to another nonlinear function commonly used for waveshaping distortion: the tanh function.
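The first antiderivative of tanh is ln(cosh(x)), which makes first-order ADAA straightforward; the second antiderivative involves the dilogarithm, which is one motivation for table-lookup approaches. Here is a sketch of the first-order case (function names are my own; ln(cosh(x)) is rewritten to avoid overflowing cosh for large inputs):

```python
import math

def tanh_F1(x):
    # ln(cosh(x)), computed as |x| + ln(1 + e^(-2|x|)) - ln(2) so that
    # large |x| does not overflow the intermediate cosh
    return abs(x) + math.log1p(math.exp(-2.0 * abs(x))) - math.log(2.0)

def tanh_adaa1(x, x1, tol=1e-5):
    """One step of first-order ADAA for tanh; the caller keeps the
    previous input sample x1 as state."""
    if abs(x - x1) < tol:
        # ill-conditioned divided difference: evaluate tanh at the midpoint
        return math.tanh(0.5 * (x + x1))
    return (tanh_F1(x) - tanh_F1(x1)) / (x - x1)
```

For slowly varying inputs the divided difference is close to tanh evaluated at the midpoint of consecutive samples, consistent with the half-sample delay seen in the hard-clip case.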

ADAA with Stateful Systems

You may have noticed that every nonlinear system examined thus far has been a “memoryless” system, i.e., the current output depends only on the current input; no memory of past input/output states is needed. In other systems, sometimes referred to as “stateful” systems, these past states are needed. For several years, ADAA was reputed to work only for memoryless systems, since first-order ADAA introduces 0.5 samples of group delay along the processing path. To see how this group delay can be problematic, consider a nonlinear waveguide resonator, where the nonlinearity sits inside a feedback loop: any extra delay in the loop changes the effective loop length, and with it the resonant pitch.


Thus far, we’ve seen how antiderivative anti-aliasing can be useful for suppressing aliasing artifacts in nonlinear systems. However, when using ADAA in practice, it can often be difficult to determine what order of ADAA is necessary, and how it compares to oversampling in terms of aliasing suppression and computational performance. To that end, I’ve developed an audio plugin using the JUCE/C++ framework to demonstrate the capabilities of ADAA and allow users to find the right balance for their applications. The plugin offers three processing modes: standard, first-order ADAA, and second-order ADAA, using both real-time computation and table lookup, as well as variable oversampling options. The idea is that users can examine which combination of ADAA and oversampling works best for their application.


Big thanks to Matt Nielsen for inspiring this project, and for some insightful conversations!


