Eigenfilters are FIR filters which are expressed in terms of a particular eigenvector of a matrix. The mathematical theory underlying their construction is attractive due to its simplicity and generality. A wide variety of time-domain and frequency-domain constraints may be incorporated in the formulation of an eigenfilter design problem. The computation reduces to a single symmetric eigenvalue problem and possesses good numerical stability.
The eigenfilter approach was first published by Vaidyanathan and Nguyen in 1987. Their paper is reasonably accessible to anyone with a solid background in linear algebra and digital signal processing. This page restates the eigenfilter development in Python (using NumPy and SciPy), with a few improvements over the paper, and then presents several specific examples of eigenfilter computation in Python.
Suppose that we wish to construct a real FIR filter of length n. All of the filter design parameters will be encoded into a length-n vector v and two symmetric positive semidefinite n-by-n matrices M and P. All of the quantities h, M, P, and v are real-valued. The filter coefficients h are defined as the vector which minimizes (h @ P @ h) / (h @ h) subject to the requirements h @ M @ h == 0 and v @ h == 1. Assuming a well-posed design problem, the solution is unique. (For those unfamiliar with Python, recall that @ is the matrix multiplication operator. h @ P @ h, h @ M @ h, and h @ h are quadratic forms on the coefficient vector h.)
The utility of this formulation rests upon two facts:
- An enormous variety of useful constraints can be encoded into suitable M, P, and v.
- For any valid choice of M, P, and v, the computation of h admits a simple and tractable solution.
In general terms, the role of each parameter is the following:
- M encodes the “hard constraints” which must be satisfied by the filter.
- P encodes the “soft constraints” which the filter will attempt to optimize.
- v constrains the overall scale of the filter.
We will soon have much to say about the construction of M, P, and v, but first let’s examine the solution for h.
To begin, we construct a matrix
U whose columns are an orthonormal basis for
the null space of
M. (This is easily accomplished by manipulation of
the singular value decomposition of
M.) We now suppose that
h = (U @ a) / (v @ U @ a) for some vector
a, and examine the constraints:
- h @ M @ h == 0 is automatically satisfied for any h of the form (U @ a) / (v @ U @ a), because M @ U == 0.
- v @ h == 1 is automatically satisfied for any h of the form (U @ a) / (v @ U @ a).
- Conversely, any h which satisfies both h @ M @ h == 0 and v @ h == 1 is equal to (U @ a) / (v @ U @ a) for some choice of a.
- Because the columns of U are orthonormal, minimizing (h @ P @ h) / (h @ h) is equivalent to minimizing (a @ U.T @ P @ U @ a) / (a @ a).
By Rayleigh’s principle, the
a which minimizes
(a @ U.T @ P @ U @ a) / (a @ a)
is an eigenvector of
U.T @ P @ U corresponding to its smallest eigenvalue.
We therefore choose
a to be such an eigenvector, and compute
h = (U @ a) / (v @ U @ a).
It sounds complicated, but the Python implementation is remarkably simple:
```python
import numpy
import scipy.linalg

def eigenfilter(M, P, v):
    # Columns of U are an orthonormal basis for the null space of M.
    U = scipy.linalg.null_space(M)
    # numpy.linalg.eigh returns eigenvalues in ascending order.
    eigenvalues, eigenvectors = numpy.linalg.eigh(U.T @ P @ U)
    assert eigenvalues[0] > 0, "insufficient constraints for good conditioning"
    a = eigenvectors[:, 0]
    return (U @ a) / (v @ U @ a)
```
In the development that follows, we will frequently use the Fourier transform
fourier(n, freq) or its integral (stated here in its closed form)
fourier_integral(n, freq0, freq1).
```python
def fourier(n, freq, center=0.5):
    index = numpy.arange(n) - (n - 1) * center
    return numpy.exp(2j * numpy.pi * freq * index)

def fourier_integral(n, freq0, freq1, center=0.5):
    index = numpy.arange(n) - (n - 1) * center
    def integral(freq):
        # Antiderivative of exp(2j*pi*freq*index) with respect to freq;
        # the index == 0 component integrates to freq itself.
        return (numpy.where(index == 0, freq, fourier(n, freq, center))
                / numpy.where(index == 0, 1, 2j * numpy.pi * index))
    return integral(freq1) - integral(freq0)
```
The center parameter controls which sample is designated as index 0. This choice does not affect the magnitude of a filter’s frequency response, but it does result in a linear offset applied to the phase.
For a causal filter, the first sample is designated as index 0 (represented by center = 0). For a linear-phase filter, we typically designate the middle sample as index 0 (represented by center = 0.5). This forces a non-causal interpretation of the filter, but has the useful consequence that the frequency response is either purely real (type I, type II) or purely imaginary (type III, type IV).
For even-length filters, choosing
center = 0.5 means that the sample
indices are half-integers. This does not pose any theoretical difficulty,
but care must be taken in the interpretation of non-integer indices. For example, the filter [0, 1] with center = 0.5 is apparently a perfect half-sample delay, but it is not actually useful for delaying a signal by half a sample.
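Here is a minimal sketch of that distinction (the loop and frequency values are purely illustrative): under center = 0.5 the two-tap filter reports the frequency response of an ideal half-sample shift, even though convolving with [0, 1] simply delays the input by one whole sample.

```python
# Illustrative check of the apparent half-sample delay of [0, 1] with center = 0.5.
h = numpy.array([0.0, 1.0])
for freq in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    # The response is exp(1j * pi * freq): unit magnitude with the phase slope
    # of a half-sample shift, yet the filter itself shifts by a whole sample.
    print(freq, fourier(2, freq) @ h)
```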
Normalizing the filter
For a low-pass filter, we typically normalize to a DC gain of 1, which is
v @ h == 1 with
v = ones(n). But if this choice of v is used for a filter with an intended DC gain of 0, numerical instability will result (because eigenfilter divides by v @ U @ a, which is very nearly 0).
There is generally an obvious choice for
v based on the nature of the filter
being designed. For example, for a band-pass filter, we choose
v such that
v @ h
is the average value of the frequency response over the passbands. For a differentiator, we choose v such that v @ h is the slope of the frequency response at zero frequency.
Previous authors chose
v such that
v @ h was equal to the frequency response
at a particular frequency, but did not consider the possibility that
v @ h could
represent a more complicated function such as an integral or derivative of the
frequency response. Our approach results in superior filter design for filters
which do not need to be exactly constrained at a particular reference frequency.
For example, consider a band-pass filter with passband extending from freq0 to freq1.
Previous authors have chosen
v = fourier(n, (freq0 + freq1) / 2),
whereas we propose
v = fourier_integral(n, freq0, freq1) / (freq1 - freq0).
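As a sketch of the difference (the passband edges below are illustrative, and the real part is taken because all of the design quantities are real-valued, just as in the band-pass example later on), the two candidate normalization vectors would be computed as:

```python
# Two candidate normalization vectors for a band-pass design (example edges).
n, freq0, freq1 = 31, 0.10, 0.20
v_point = numpy.real(fourier(n, (freq0 + freq1) / 2))                         # reference frequency
v_integral = numpy.real(fourier_integral(n, freq0, freq1)) / (freq1 - freq0)  # passband average
```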
A “constraint” is a quadratic form on h, represented by an n-by-n matrix C which encodes a particular figure of merit for a length-n FIR filter. For a constraint C, the “best” filter is the one which minimizes h @ C @ h. Constraint matrices are required to be real, symmetric, and positive semidefinite. This guarantees that values of h @ C @ h are always non-negative, and a value of 0 is attained if and only if the constraint is exactly satisfied.
Constraints are combined by addition and may be weighted by scalar multiplication by a non-negative number. The sum of all of the hard constraints (i.e. constraints for which we require h @ C @ h == 0) gives the value for matrix M. The weighted sum of all of the soft constraints (i.e. constraints for which we merely attempt to minimize h @ C @ h) gives the value for matrix P.
Assuming we can come up with matrices to represent the individual constraints,
combining them into one call to
eigenfilter is as simple as addition.
If the rank of
M is equal to
n, then there are conflicting hard constraints
and a hard constraint must be removed. If the rank of
P is less than n, there may be too few soft constraints to uniquely specify h (but whether this is actually the case depends on the relationship between M and P). eigenfilter will fail whenever the filter design problem is “too easy” (in the sense that the optimum filter h would result in a value of h @ P @ h which is smaller than the smallest representable floating point number).
In the immortal words of Al Oppenheim, “just because it’s optimum doesn’t mean it’s
good”. It is eminently possible that
eigenfilter will return a solution which
is deemed unacceptable despite minimizing the specified constraint function.
This scenario sometimes arises due to an error in the formulation of a constraint.
In other cases, it is necessary to seek more subtle modifications to the constraints
which will increase the cost of the undesired behavior relative to the cost of the desired behavior.
We will now examine a variety of useful constraint matrices.
For an FIR filter to have linear phase, its coefficients must have
even symmetry (
h[i] = h[n - 1 - i]) or odd symmetry (
h[i] = -h[n - 1 - i]).
This requirement is easily expressed as a quadratic form:
```python
def linear_phase(n, even=True):
    # I - J (even symmetry) or I + J (odd symmetry), where J is the exchange matrix.
    return numpy.eye(n) + numpy.rot90(numpy.eye(n)) * (-1 if even else 1)
```
- Type I: h @ linear_phase(n, True) @ h == 0 with n odd.
- Type II: h @ linear_phase(n, True) @ h == 0 with n even.
- Type III: h @ linear_phase(n, False) @ h == 0 with n odd.
- Type IV: h @ linear_phase(n, False) @ h == 0 with n even.
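As a quick sanity check (the coefficient values below are made up for illustration), a symmetric impulse response yields a zero quadratic form under the even-symmetry constraint, while an asymmetric one does not:

```python
h_symmetric = numpy.array([1.0, 2.0, 3.0, 2.0, 1.0])
h_asymmetric = numpy.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(h_symmetric @ linear_phase(5, True) @ h_symmetric)    # 0.0
print(h_asymmetric @ linear_phase(5, True) @ h_asymmetric)  # 20.0 (> 0)
```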
The frequency response of h at a frequency freq is given by f @ h, where f = fourier(n, freq). The energy at a particular frequency is equal to the magnitude squared of f @ h, which may be written as h @ real(outer(conj(f), f)) @ h. Integrating over frequency (and remembering to include both the positive and negative frequencies), we find that the total energy of the frequency response from freq0 to freq1 is h @ band_energy(n, freq0, freq1) @ h, where band_energy has a simple closed form:
```python
def band_energy(n, freq0, freq1):
    assert 0 <= freq0 <= freq1 <= 0.5
    return scipy.linalg.toeplitz(
        2 * freq1 * numpy.sinc(2 * freq1 * numpy.arange(n))
        - 2 * freq0 * numpy.sinc(2 * freq0 * numpy.arange(n)))
```
band_energy(n, freq0, freq1) is an appropriate constraint matrix for minimizing the fraction of an eigenfilter’s spectral energy which falls between freq0 and freq1.
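The closed form is easy to verify numerically; the following sketch (with arbitrary example values for the filter and band edges) compares it against brute-force integration of the squared magnitude response:

```python
import scipy.integrate

n, freq0, freq1 = 8, 0.1, 0.3
h = numpy.random.default_rng(0).standard_normal(n)
closed_form = h @ band_energy(n, freq0, freq1) @ h
numeric, _ = scipy.integrate.quad(lambda f: 2 * numpy.abs(fourier(n, f) @ h) ** 2,
                                  freq0, freq1)
print(closed_form, numeric)  # agree to integration tolerance
```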
The previous constraint may be generalized to consider the magnitude squared
of the deviation from a particular reference value
reference @ h. We define
f = fourier(n, freq) - reference and integrate
over a range of frequencies to obtain the total energy of the deviation in the form
h @ band_deviation(n, freq0, freq1, reference) @ h.
```python
import scipy.integrate

def band_deviation(n, freq0, freq1, reference, center=0.5):
    assert 0 <= freq0 <= freq1 <= 0.5
    def response_error(freq):
        error_vector = fourier(n, freq, center) - reference
        return 2 * numpy.real(numpy.outer(numpy.conj(error_vector), error_vector))
    # quad_vec returns (integral, error_estimate); keep only the integral.
    return scipy.integrate.quad_vec(response_error, freq0, freq1)[0]
```
For a low-pass filter, we typically choose freq0 = 0 and reference = fourier(n, 0). In this case, band_deviation simplifies to
```python
def band_deviation_lowpass(n, freq, center=0.5):
    assert 0 <= freq <= 0.5
    x = numpy.arange(n) - (n - 1) * center
    y = numpy.arange(n).reshape((-1, 1)) - (n - 1) * center
    return 2 * freq * (1 + numpy.sinc(2 * freq * (x - y))
                       - numpy.sinc(2 * freq * x) - numpy.sinc(2 * freq * y))
```
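As a consistency check (with arbitrary example values), the specialized closed form should agree with the general band_deviation to within quadrature tolerance:

```python
n, passband_edge = 9, 0.2
general = band_deviation(n, 0, passband_edge, fourier(n, 0))
specialized = band_deviation_lowpass(n, passband_edge)
print(numpy.max(numpy.abs(general - specialized)))  # ~ 0
```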
Arbitrary frequency response
The previous constraint generalizes even further to an arbitrary non-constant frequency
response with non-constant weighting applied to the response error at each frequency.
Suppose that the desired frequency response is described by
reference(freq) @ h
and a weighting function
weight(freq) specifies how much we care about the response
error at each frequency. To establish the desired figure of merit, we integrate the magnitude squared of weight(freq) * (fourier(n, freq) - reference(freq)) @ h over all frequencies.
```python
def arbitrary_freq_response(n, reference, weight, center=0.5):
    def response_error(freq):
        error_vector = weight(freq) * (fourier(n, freq, center) - reference(freq))
        return 2 * numpy.real(numpy.outer(numpy.conj(error_vector), error_vector))
    return scipy.integrate.quad_vec(response_error, 0, 0.5)[0]
```
In the typical case where the desired frequency response is a scalar function H(freq), we would set reference(freq) = H(freq) * v to obtain reference(freq) @ h = H(freq) * v @ h = H(freq).
Minimum time delay
When designing a causal filter, we might wish to minimize the time delay by concentrating the impulse-response energy in the earliest taps. A convenient figure of merit is sum([(i * h[i])**2 for i in range(n)]), which is easily written as a quadratic form h @ time_delay(n) @ h.
```python
def time_delay(n):
    # h @ time_delay(n) @ h == sum((i * h[i])**2): energy at late taps is penalized.
    return numpy.diag(numpy.arange(n) ** 2)
```
General linear constraint
Suppose that we wish for
h to satisfy
A @ h == b. Using
v @ h == 1,
we can rewrite this as
(A - outer(b, v)) @ h == 0. The quadratic constraint
is taken to be the magnitude squared of
(A - outer(b, v)) @ h, which leads
to the following constraint matrix:
```python
def linear_constraint(A, b, v):
    error_matrix = A - numpy.outer(b, v)
    return numpy.real(numpy.conj(error_matrix.T) @ error_matrix)
```
Convolution is a special case of matrix multiplication, so we may constrain that h transforms a particular signal s into a particular output r via linear_constraint(scipy.linalg.convolution_matrix(s, n), r, v).
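A hypothetical usage sketch (the signal values and filter length below are made up): building a soft constraint which asks the filter to reject the signal s entirely.

```python
n = 8
s = numpy.array([1.0, -2.0, 1.0])
r = numpy.zeros(len(s) + n - 1)   # desired output: reject s completely
v = numpy.ones(n)
rejection = linear_constraint(scipy.linalg.convolution_matrix(s, n), r, v)
```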
Examples
The following examples demonstrate the ease with which high-quality eigenfilters may be designed to satisfy complicated constraints.
Standard low-pass filter
```python
def lowpass(n, passband, stopband):
    return eigenfilter(linear_phase(n),
                       band_deviation_lowpass(n, passband) + band_energy(n, stopband, 0.5),
                       numpy.ones(n))
```
An assortment of linear-phase low-pass filters with
n = 31:
The special case of lowpass(n, 0, stopband), known as the “Slepian window”, is proportional to scipy.signal.windows.dpss(n, stopband * n).
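A sketch of that comparison (normalizing both to unit energy, since the two constructions use different scalings, and assuming n = 31 with stopband = 0.1):

```python
import scipy.signal

n, stopband = 31, 0.1
a = lowpass(n, 0, stopband)
b = scipy.signal.windows.dpss(n, stopband * n)
# If the claim above holds, this difference is tiny (flip a sign if your SciPy
# version returns the window with the opposite sign convention).
print(numpy.max(numpy.abs(a / numpy.linalg.norm(a) - b / numpy.linalg.norm(b))))
```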
Low-pass with minimum time delay
The standard linear-phase low-pass filter is fine for offline data analysis, but in a real-time application it may be preferable to impose a minimum-time-delay constraint and relax our control of the passband flatness and phase.
```python
def lowpass_minimum_time(n, stopband, time_weight):
    return eigenfilter(numpy.zeros((n, n)),
                       band_energy(n, stopband, 0.5) + time_delay(n) * time_weight,
                       numpy.ones(n))
```
The disparate constraints must be weighted to specify their relative importance. Determination of suitable weights is the essence of the filter design problem.
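For example, one of these filters might be produced by a call along the following lines (the stopband edge and weight shown here are hypothetical values, not necessarily the ones used for the plots):

```python
h = lowpass_minimum_time(31, stopband=0.25, time_weight=0.01)
```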
An assortment of minimum-time-delay low-pass filters with
n = 31:
Now those are some filters that you sure can’t make with a standard linear-phase design routine.
Band-pass filter
A band-pass filter is constructed by adding together multiple passband and stopband constraints. We choose the normalization v so that the average frequency response over the passbands is 1.
```python
def bandpass(n, passbands, stopbands):
    v = numpy.real(sum([fourier_integral(n, p[0], p[1]) for p in passbands])
                   / sum([p[1] - p[0] for p in passbands]))
    P = (sum([band_deviation(n, b[0], b[1], v) for b in passbands])
         + sum([band_energy(n, b[0], b[1]) for b in stopbands]))
    return eigenfilter(linear_phase(n), P, v)
```
We could apply different weights to the various passbands and stopbands by providing scale factors to multiply each call to band_deviation or band_energy, but for this example we will assume equal weights.
As an example, we’ll set
passbands = [(0.10, 0.20), (0.35, 0.40)] and
stopbands = [(0.00, 0.05), (0.25, 0.30), (0.45, 0.50)]
and construct bandpass filters of various lengths:
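The plotted filters would be generated along these lines (the length of 31 is one illustrative choice):

```python
passbands = [(0.10, 0.20), (0.35, 0.40)]
stopbands = [(0.00, 0.05), (0.25, 0.30), (0.45, 0.50)]
h = bandpass(31, passbands, stopbands)
```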
Hilbert transformer
To generate a Hilbert transformer, we impose a soft constraint for a flat passband and a hard constraint for an odd linear-phase filter (i.e. type III or type IV).
```python
def hilbert(n, f_min, f_max):
    v = numpy.imag(fourier_integral(n, f_min, f_max)) / (f_max - f_min)
    return eigenfilter(linear_phase(n, False), band_deviation(n, f_min, f_max, 1j * v), v)
```
In the type III case, the passband must not include the Nyquist frequency. In the type IV case, the passband may extend up to the Nyquist frequency, but due to the even filter length there is a mandatory half-sample shift in the output signal.
Differentiator
A differentiator is constructed in the same manner as a low-pass filter, except that the passband constraint is a call to arbitrary_freq_response rather than a call to band_deviation_lowpass.
```python
def differentiator(n, passband, stopband):
    v = numpy.arange(n) - (n - 1) / 2
    desired_response = lambda freq: 2j * numpy.pi * freq * v
    frequency_weight = lambda freq: 1 if freq < passband else 0
    P = (arbitrary_freq_response(n, desired_response, frequency_weight)
         + band_energy(n, stopband, 0.5))
    return eigenfilter(linear_phase(n, False), P, v)
The special case differentiator(n, 0, stopband) returns a discrete prolate spheroidal sequence, proportional to the second sequence returned by scipy.signal.windows.dpss(n, stopband * n, 2).
We can plot some differentiators of length 31:
Half sample delay
The “half sample delay” is defined by a half-sample shift of the band-limited interpolation of a signal. Like the differentiator, this filter uses an
arbitrary_freq_response constraint for the
passband and a
band_energy constraint for the stopband. Because
a half sample delay is physically impossible to realize at frequencies
approaching the Nyquist frequency, lower frequencies are weighted more
heavily in the passband constraint.
```python
def half_sample_delay(n, passband, stopband, center=0.5):
    desired_response = lambda freq: numpy.exp(1j * numpy.pi * freq)
    frequency_weight = lambda freq: 0.5 / freq if freq < passband else 0
    P = (arbitrary_freq_response(n, desired_response, frequency_weight, center)
         + band_energy(n, stopband, 0.5))
    return eigenfilter(numpy.zeros((n, n)), P, numpy.ones(n))
```
We’ll look at some length-19 half sample delays:
We can also construct a causal half-sample delay by setting
center = 0.
This is a fundamentally difficult filter design problem, and the constraint parameters must be carefully adjusted to avoid an extremely large gain in the transition band.
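A hypothetical call for the causal variant (the band edges below are illustrative only):

```python
h = half_sample_delay(19, passband=0.35, stopband=0.45, center=0)
```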
Download the code
Here is a Python file containing all of the code snippets from this page. Execute the file to generate the plots from the “Examples” section. The entire thing, including plotting, is 150 lines of code.