Figure: A general finite impulse response filter with n stages, each with an independent delay, d_i, and amplification gain, a_i.
In signal processing, a digital filter is a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is an electronic circuit operating on continuous-time analog signals.
A bandpass filter with a high Q factor represents a bandpass filter with a narrow pass band; that is, a high Q factor means fewer signals of unwanted frequencies will pass through. A low Q factor means that the pass band is wide, and therefore allows a wider range of frequencies to pass through the filter.
A digital filter system usually consists of an analog-to-digital converter (ADC) to sample the input signal, followed by a microprocessor and some peripheral components such as memory to store data and filter coefficients, and finally a digital-to-analog converter to complete the output stage. Program instructions (software) running on the microprocessor implement the digital filter by performing the necessary mathematical operations on the numbers received from the ADC. In some high-performance applications, an FPGA or ASIC is used instead of a general-purpose microprocessor, or a specialized digital signal processor (DSP) with a specific parallel architecture for expediting operations such as filtering.
Digital filters may be more expensive than an equivalent analog filter due to their increased complexity, but they make practical many designs that are impractical or impossible as analog filters. Digital filters can often be made very high order, and are often finite impulse response filters, which allows for a linear phase response. When used in the context of real-time analog systems, digital filters sometimes have problematic latency (the difference in time between the input and the response) due to the associated analog-to-digital and digital-to-analog conversions and anti-aliasing filters, or due to other delays in their implementation.
Digital filters are commonplace and an essential element of everyday electronics such as radios, cellphones, and AV receivers.
Characterization
A digital filter is characterized by its transfer function, or equivalently, its difference equation. Mathematical analysis of the transfer function can describe how it will respond to any input. As such, designing a filter consists of developing specifications appropriate to the problem (for example, a second-order low pass filter with a specific cut-off frequency), and then producing a transfer function which meets the specifications.
The transfer function of a linear, time-invariant digital filter can be expressed in the Z-domain; if the filter is causal, it has the form:

H(z) = B(z)/A(z) = (b_0 + b_1 z^-1 + ... + b_N z^-N) / (1 + a_1 z^-1 + ... + a_M z^-M)
where the order of the filter is the greater of N or M. See Z-transform's LCCD equation for further discussion of this transfer function.
This is the form for a recursive filter, which typically leads to infinite impulse response (IIR) behaviour; but if the denominator is equal to unity, i.e. there is no feedback, then this becomes an FIR, or finite impulse response, filter.
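This relationship can be illustrated numerically. The sketch below (plain Python; the helper name freq_response and the coefficient values are illustrative, not from the text) evaluates H(z) on the unit circle z = e^{jw} for given feed-forward and feed-backward coefficient lists:

```python
import cmath

def freq_response(b, a, w):
    """Evaluate H(e^{jw}) = B(e^{jw}) / A(e^{jw}) for coefficient lists
    b (feed-forward) and a (feed-backward, with a[0] normally 1)."""
    z = cmath.exp(1j * w)
    num = sum(bk * z ** -k for k, bk in enumerate(b))
    den = sum(ak * z ** -k for k, ak in enumerate(a))
    return num / den

# FIR case: denominator equal to unity (no feedback)
fir = freq_response([0.5, 0.5], [1.0], 0.0)    # 2-tap average, DC gain 1
# IIR case: nontrivial denominator (feedback present)
iir = freq_response([0.1], [1.0, -0.9], 0.0)   # one-pole low-pass, DC gain 1
```

With the denominator list reduced to [1.0], the same routine describes an FIR filter; any longer denominator introduces feedback and hence IIR behaviour.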
Analysis techniques
A variety of mathematical techniques may be employed to analyze the behaviour of a given digital filter. Many of these analysis techniques may also be employed in designs, and often form the basis of a filter specification.
Typically, one characterizes filters by calculating how they will respond to a simple input such as an impulse. One can then extend this information to compute the filter's response to more complex signals.
Impulse response
The impulse response, often denoted h[k] or h_k, is a measurement of how a filter will respond to the Kronecker delta function. For example, given a difference equation, one would set x[0] = 1 and x[k] = 0 for k ≠ 0 and evaluate. The impulse response is a characterization of the filter's behaviour. Digital filters are typically considered in two categories: infinite impulse response (IIR) and finite impulse response (FIR). In the case of linear time-invariant FIR filters, the impulse response is exactly equal to the sequence of filter coefficients, so that:

y[n] = sum_{k=0}^{N} b_k x[n-k]
IIR filters, on the other hand, are recursive, with the output depending on both current and previous inputs as well as previous outputs. The general form of an IIR filter is thus:

y[n] = (1/a_0) ( sum_{i=0}^{P} b_i x[n-i] - sum_{j=1}^{Q} a_j y[n-j] )
Plotting the impulse response reveals how a filter responds to a sudden, momentary disturbance. An IIR filter is always recursive. While it is possible for a recursive filter to have a finite impulse response, a non-recursive filter always has a finite impulse response. An example is the moving average (MA) filter, which can be implemented both recursively and non-recursively.
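The moving-average example can be sketched as follows (plain Python; the function names are hypothetical). Both routines realize the same finite impulse response, one by direct summation over the last m inputs and one recursively, by updating the previous output:

```python
def ma_direct(x, m):
    """Non-recursive m-point moving average: direct summation,
    with zeros assumed before the start of the signal."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(m):
            if n - k >= 0:
                acc += x[n - k]
        y.append(acc / m)
    return y

def ma_recursive(x, m):
    """The same moving average computed recursively:
    y[n] = y[n-1] + (x[n] - x[n-m]) / m."""
    y = []
    prev = 0.0
    for n in range(len(x)):
        prev += x[n] / m
        if n - m >= 0:
            prev -= x[n - m] / m
        y.append(prev)
    return y
```

Despite the feedback in the second form, its impulse response is still finite: the subtraction of x[n-m] cancels each input's contribution after m samples.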
Difference equation
In discrete-time systems, the digital filter is often implemented by converting the transfer function to a linear constant-coefficient difference equation (LCCD) via the Z-transform. The discrete frequency-domain transfer function is written as the ratio of two polynomials. For example:

H(z) = (z + 1)^2 / ((z - 1/2)(z + 3/4))

This is expanded:

H(z) = (z^2 + 2z + 1) / (z^2 + (1/4)z - 3/8)

and to make the corresponding filter causal, the numerator and denominator are divided by the highest order of z:

H(z) = (1 + 2z^-1 + z^-2) / (1 + (1/4)z^-1 - (3/8)z^-2) = Y(z)/X(z)

The coefficients of the denominator, a_k, are the 'feed-backward' coefficients and the coefficients of the numerator, b_k, are the 'feed-forward' coefficients. The resultant linear difference equation is:

y[n] = -sum_{k=1}^{M} a_k y[n-k] + sum_{k=0}^{N} b_k x[n-k]

or, for the example above:

Y(z)/X(z) = (1 + 2z^-1 + z^-2) / (1 + (1/4)z^-1 - (3/8)z^-2)

rearranging terms:

(1 + (1/4)z^-1 - (3/8)z^-2) Y(z) = (1 + 2z^-1 + z^-2) X(z)

then by taking the inverse z-transform:

y[n] + (1/4) y[n-1] - (3/8) y[n-2] = x[n] + 2x[n-1] + x[n-2]

and finally, by solving for y[n]:

y[n] = -(1/4) y[n-1] + (3/8) y[n-2] + x[n] + 2x[n-1] + x[n-2]

This equation shows how to compute the next output sample, y[n], in terms of the past outputs, y[n-k], the present input, x[n], and the past inputs, x[n-k]. Applying the filter to an input in this form is equivalent to a Direct Form I or II (see below) realization, depending on the exact order of evaluation.
In plain terms, for example, as used by a computer programmer implementing the above equation in code, it can be described as follows:
y = the output, or filtered, value
x = the input, or incoming raw, value
n = the sample number, iteration number, or time period number

and therefore:

y[n] = the current filtered (output) value
y[n-1] = the last filtered (output) value
y[n-2] = the 2nd-to-last filtered (output) value
x[n] = the current raw input value
x[n-1] = the last raw input value
x[n-2] = the 2nd-to-last raw input value
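In code, the description above amounts to a loop over the samples. The sketch below (plain Python; the pass-through coefficients at the end are only a smoke test, not values from the text) evaluates a second-order difference equation sample by sample:

```python
def filter_samples(x, b, a):
    """Apply y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
                    - a1*y[n-1] - a2*y[n-2]
    with zero initial conditions.  b = (b0, b1, b2), a = (a1, a2)."""
    b0, b1, b2 = b
    a1, a2 = a
    y = []
    x1 = x2 = y1 = y2 = 0.0    # delayed input and output values
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn        # shift the input delay line
        y2, y1 = y1, yn        # shift the output delay line
        y.append(yn)
    return y

# a pure pass-through: b = (1, 0, 0), a = (0, 0) leaves the signal unchanged
out = filter_samples([1.0, 2.0, 3.0], (1.0, 0.0, 0.0), (0.0, 0.0))
```

The four state variables x1, x2, y1, y2 hold exactly the "last" and "2nd-to-last" values described above.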
Filter design
The design of digital filters is a deceptively complex topic.[1] Although filters are easily understood and calculated, the practical challenges of their design and implementation are significant and are the subject of much advanced research.
There are two categories of digital filter: the recursive filter and the nonrecursive filter. These are often referred to as infinite impulse response (IIR) filters and finite impulse response (FIR) filters, respectively.[2]
Filter realization
After a filter is designed, it must be realized by developing a signal flow diagram that describes the filter in terms of operations on sample sequences.
A given transfer function may be realized in many ways. Consider how a simple expression such as ax + bx could be evaluated – one could also compute the equivalent x(a + b). In the same way, all realizations may be seen as 'factorizations' of the same transfer function, but different realizations will have different numerical properties. Specifically, some realizations are more efficient in terms of the number of operations or storage elements required for their implementation, and others provide advantages such as improved numerical stability and reduced round-off error. Some structures are better for fixed-point arithmetic and others may be better for floating-point arithmetic.
Direct form I
A straightforward approach for IIR filter realization is direct form I, where the difference equation is evaluated directly. This form is practical for small filters, but may be inefficient and impractical (numerically unstable) for complex designs.[3] In general, this form requires 2N delay elements (for both input and output signals) for a filter of order N.
Direct form II
The alternate direct form II only needs N delay units, where N is the order of the filter – potentially half as many as direct form I. This structure is obtained by reversing the order of the numerator and denominator sections of direct form I, since they are in fact two linear systems, and the commutativity property applies. Then, one will notice that there are two columns of delays (z^-1) that tap off the center net, and these can be combined since they are redundant, yielding the implementation as shown below.
The disadvantage is that direct form II increases the possibility of arithmetic overflow for filters of high Q or resonance.[4] It has been shown that as Q increases, the round-off noise of both direct form topologies increases without bounds.[5] This is because, conceptually, the signal is first passed through an all-pole filter (which normally boosts gain at the resonant frequencies) before the result of that is saturated, then passed through an all-zero filter (which often attenuates much of what the all-pole half amplifies).
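A minimal sketch of both structures, assuming a normalized a_0 = 1 and zero initial conditions (plain Python; the function names and test coefficients are mine): direct form I keeps 2N delayed values in separate input and output histories, direct form II shares a single delay line of N states, and both produce the same output for the same coefficients:

```python
def df1(x, b, a):
    """Direct form I: 2N delay elements (separate x and y histories).
    Assumes a[0] == 1 and len(b) <= len(a)."""
    N = len(a) - 1
    xh, yh = [0.0] * N, [0.0] * N
    out = []
    for xn in x:
        yn = b[0] * xn + sum(b[k] * xh[k - 1] for k in range(1, len(b)))
        yn -= sum(a[k] * yh[k - 1] for k in range(1, len(a)))
        xh = [xn] + xh[:-1]
        yh = [yn] + yh[:-1]
        out.append(yn)
    return out

def df2(x, b, a):
    """Direct form II: the same filter with only N shared delay elements
    w[n-1] ... w[n-N]."""
    N = len(a) - 1
    w = [0.0] * N
    out = []
    for xn in x:
        wn = xn - sum(a[k] * w[k - 1] for k in range(1, len(a)))
        yn = b[0] * wn + sum(b[k] * w[k - 1] for k in range(1, len(b)))
        w = [wn] + w[:-1]
        out.append(yn)
    return out
```

Because the all-pole and all-zero halves of the filter are linear and commute, the two functions compute the same sequence; only the internal state differs.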
Cascaded second-order sections
A common strategy is to realize a higher-order (greater than 2) digital filter as a cascaded series of second-order 'biquadratic' (or 'biquad') sections[6] (see digital biquad filter). The advantage of this strategy is that the coefficient range is limited. Cascading direct form II sections results in N delay elements for filters of order N. Cascading direct form I sections results in N + 2 delay elements, since the delay elements of the input of any section (except the first section) are redundant with the delay elements of the output of the preceding section.
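That a cascade of sections realizes the product of their transfer functions can be checked numerically. In the sketch below (plain Python; the two biquad sections are illustrative values, not from the text), cascading two second-order sections matches the single fourth-order filter whose numerator and denominator are the polynomial products:

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (z^0 first)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def iir(x, b, a):
    """Direct evaluation of the difference equation; a[0] == 1 assumed."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# two illustrative stable biquad sections
b1, a1 = [1.0, 0.5, 0.25], [1.0, -0.3, 0.02]
b2, a2 = [0.5, 0.1, 0.0],  [1.0,  0.4, 0.1]

x = [1.0] + [0.0] * 15                     # impulse input
cascade = iir(iir(x, b1, a1), b2, a2)      # section 1 then section 2
combined = iir(x, polymul(b1, b2), polymul(a1, a2))
```

The two outputs agree to rounding error; in fixed-point implementations the cascade is preferred because each section's coefficients stay in a small range.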
Other forms
Other forms include:
- Direct form I and II transpose
- Series/cascade lower (typical second) order subsections
- Parallel lower (typical second) order subsections
- Continued fraction expansion
- Lattice and ladder
- One, two and three-multiply lattice forms
- Three and four-multiply normalized ladder forms
- ARMA structures
- State-space structures:
- optimal (in the minimum noise sense): parameters
- block-optimal and section-optimal: parameters
- input balanced with Givens rotation: parameters[7]
- Coupled forms: Gold Rader (normal), State Variable (Chamberlin), Kingsbury, Modified State Variable, Zölzer, Modified Zölzer
- Wave Digital Filters (WDF)[8]
- Agarwal–Burrus (1AB and 2AB)
- Harris–Brooking
- ND-TDL
- Multifeedback
- Analog-inspired forms such as Sallen–Key and state-variable filters
Comparison of analog and digital filters
Digital filters are not subject to the component non-linearities that greatly complicate the design of analog filters. Analog filters consist of imperfect electronic components, whose values are specified only to a limited tolerance (e.g. resistor values often have a tolerance of ±5%) and which may also change with temperature and drift with time. As the order of an analog filter increases, and thus its component count, the effect of variable component errors is greatly magnified. In digital filters, the coefficient values are stored in computer memory, making them far more stable and predictable.[9]
Because the coefficients of digital filters are definite, they can be used to achieve much more complex and selective designs – specifically with digital filters, one can achieve a lower passband ripple, faster transition, and higher stopband attenuation than is practical with analog filters. Even if the design could be achieved using analog filters, the engineering cost of designing an equivalent digital filter would likely be much lower. Furthermore, one can readily modify the coefficients of a digital filter to make an adaptive filter or a user-controllable parametric filter. While these techniques are possible in an analog filter, they are again considerably more difficult.
Digital filters can be used in the design of finite impulse response filters. Equivalent analog filters are often more complicated, as these require delay elements.
Digital filters rely less on analog circuitry, potentially allowing for a better signal-to-noise ratio. A digital filter will introduce noise to a signal during analog low-pass filtering, analog-to-digital conversion, and digital-to-analog conversion, and may introduce digital noise due to quantization. With analog filters, every component is a source of thermal noise (such as Johnson noise), so as the filter complexity grows, so does the noise.
However, digital filters do introduce a higher fundamental latency to the system. In an analog filter, latency is often negligible; strictly speaking it is the time for an electrical signal to propagate through the filter circuit. In digital systems, latency is introduced by delay elements in the digital signal path, and by analog-to-digital and digital-to-analog converters that enable the system to process analog signals.
In very simple cases, it is more cost effective to use an analog filter. Introducing a digital filter requires considerable overhead circuitry, as previously discussed, including two low pass analog filters.
Another argument for analog filters is low power consumption. Analog filters require substantially less power and are therefore the only solution when power requirements are tight.
When making an electrical circuit on a PCB it is generally easier to use a digital solution, because processing units have been highly optimized over the years. Making the same circuit with analog components would take up much more space when using discrete components. Two alternatives are FPAAs[10] and ASICs, but they are expensive for low quantities.
Types of digital filters
Many digital filters are based on the fast Fourier transform, a mathematical algorithm that quickly extracts the frequency spectrum of a signal, allowing the spectrum to be manipulated (such as to create very high order band-pass filters) before converting the modified spectrum back into a time-series signal with an inverse FFT operation. These filters give O(n log n) computational costs, whereas conventional digital filters tend to be O(n^2).
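A minimal sketch of FFT-based filtering in plain Python (a textbook radix-2 Cooley–Tukey FFT, not an optimized library routine): transform the block, zero the unwanted bins, and transform back:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

def ifft(x):
    """Inverse FFT via conjugation."""
    n = len(x)
    return [v.conjugate() / n for v in fft([v.conjugate() for v in x])]

def brickwall_lowpass(x, keep):
    """Zero every frequency bin above `keep` (and its mirror image)."""
    spec = fft(x)
    n = len(spec)
    for k in range(n):
        if keep < k < n - keep:
            spec[k] = 0
    return [v.real for v in ifft(spec)]

# a signal at the Nyquist frequency is removed entirely by a low cutoff
y = brickwall_lowpass([1.0, -1.0] * 4, keep=1)
```

Practical FFT filtering processes long signals block by block (overlap-add or overlap-save); the sketch shows only the core transform-modify-invert step.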
Another form of a digital filter is that of a state-space model. A widely used state-space filter is the Kalman filter, published by Rudolf Kalman in 1960.
Traditional linear filters are usually based on attenuation. Alternatively, nonlinear filters can be designed, including energy transfer filters,[11] which allow the user to move energy in a designed way, so that unwanted noise or effects can be moved to new frequency bands (either lower or higher in frequency), spread over a range of frequencies, split, or focused. Energy transfer filters complement traditional filter designs and introduce many more degrees of freedom in filter design. Digital energy transfer filters are relatively easy to design and to implement, and exploit nonlinear dynamics.
See also
- High-pass filter, Low-pass filter
- Infinite impulse response, Finite impulse response
References
- ^M. E. Valdez, Digital Filters, 2001.
- ^A. Antoniou, Digital Filters: Analysis, Design, and Applications, New York, NY: McGraw-Hill, 1993., chapter 1
- ^J. O. Smith III, Direct Form I
- ^J. O. Smith III, Direct Form II
- ^L. B. Jackson, 'On the Interaction of Roundoff Noise and Dynamic Range in Digital Filters,' Bell Sys. Tech. J., vol. 49 (1970 Feb.), reprinted in Digital Signal Process, L. R. Rabiner and C. M. Rader, Eds. (IEEE Press, New York, 1972).
- ^J. O. Smith III, Series Second Order Sections
- ^Li, Gang; Limin Meng; Zhijiang Xu; Jingyu Hua (July 2010). 'A novel digital filter structure with minimum roundoff noise'. Digital Signal Processing. 20 (4): 1000–1009. doi:10.1016/j.dsp.2009.10.018.
- ^Fettweis, Alfred (Feb 1986). 'Wave digital filters: Theory and practice'. Proceedings of the IEEE. 74 (2): 270–327. doi:10.1109/proc.1986.13458.
- ^http://www.dspguide.com/ch21/1.htm
- ^Bains, Sunny (July 2008). 'Analog's answer to FPGA opens field to masses'. EETimes.
- ^Billings S.A. 'Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains'. Wiley, 2013
Further reading
- J. O. Smith III, Introduction to Digital Filters with Audio Applications, Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, September 2007 Edition.
- Mitra, S. K. (1998). Digital Signal Processing: A Computer-Based Approach. New York, NY: McGraw-Hill.
- Oppenheim, A. V.; Schafer, R. W. (1999). Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice-Hall.
- Kaiser, J .F. (1974). Nonrecursive Digital Filter Design Using the Io-sinh Window Function. Proc. 1974 IEEE Int. Symp. Circuit Theory. pp. 20–23.
- Bergen, S. W. A.; Antoniou, A. (2005). 'Design of Nonrecursive Digital Filters Using the Ultraspherical Window Function'. EURASIP Journal on Applied Signal Processing. 2005 (12): 1910–1922.
- Parks, T. W.; McClellan, J. H. (March 1972). 'Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase'. IEEE Trans. Circuit Theory. CT-19: 189–194.
- Rabiner, L. R.; McClellan, J. H.; Parks, T. W. (April 1975). 'FIR Digital Filter Design Techniques Using Weighted Chebyshev Approximation'. Proc. IEEE. 63 (4): 595–610. doi:10.1109/PROC.1975.9794.
- Deczky, A. G. (October 1972). 'Synthesis of Recursive Digital Filters Using the Minimum p-Error Criterion'. IEEE Trans. Audio Electroacoustics. AU-20: 257–263.
Filter design

Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which are contradictory. The purpose is to find a realization of the filter that meets each of the requirements to a sufficient degree to make it useful.
The filter design process can be described as an optimization problem where each requirement contributes to an error function which should be minimized. Certain parts of the design process can be automated, but normally an experienced electrical engineer is needed to get a good result.
Typical design requirements
Typical requirements which are considered in the design process are:
- The filter should have a specific frequency response
- The filter should have a specific phase shift or group delay
- The filter should have a specific impulse response
- The filter should be causal
- The filter should be stable
- The filter should be localized (pulse or step inputs should result in finite time outputs)
- The computational complexity of the filter should be low
- The filter should be implemented in particular hardware or software
The frequency function
An important parameter is the required frequency response. In particular, the steepness and complexity of the response curve are deciding factors for the filter order and feasibility.
A first-order recursive filter will only have a single frequency-dependent component. This means that the slope of the frequency response is limited to 6 dB per octave. For many purposes, this is not sufficient. To achieve steeper slopes, higher-order filters are required.
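The 6 dB-per-octave limit can be checked numerically. The sketch below (plain Python; the pole value 0.99 is illustrative) measures the magnitude response of a one-pole low-pass y[n] = (1 - a)x[n] + a*y[n-1] at one frequency well above its cut-off and at the octave above it:

```python
import cmath, math

def onepole_mag(a, w):
    """Magnitude response of y[n] = (1 - a)*x[n] + a*y[n-1]
    at frequency w (radians per sample)."""
    return abs((1 - a) / (1 - a * cmath.exp(-1j * w)))

a = 0.99                        # pole close to z = 1: low cut-off frequency
drop_db = 20 * math.log10(onepole_mag(a, 0.1) / onepole_mag(a, 0.2))
# doubling the frequency costs close to 6 dB for a first-order section
```

Cascading k such sections would give roughly 6k dB per octave, which is why steeper specifications force higher filter orders.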
In relation to the desired frequency function, there may also be an accompanying weighting function, which describes, for each frequency, how important it is that the resulting frequency function approximates the desired one. The larger the weight, the more important a close approximation is.
Typical examples of frequency functions are:
- A low-pass filter is used to cut unwanted high-frequency signals.
- A high-pass filter passes high frequencies fairly well; it is helpful as a filter to cut any unwanted low-frequency components.
- A band-pass filter passes a limited range of frequencies.
- A band-stop filter passes frequencies above and below a certain range. A very narrow band-stop filter is known as a notch filter.
- A differentiator has an amplitude response proportional to the frequency.
- A low-shelf filter passes all frequencies, but increases or reduces frequencies below the shelf frequency by a specified amount.
- A high-shelf filter passes all frequencies, but increases or reduces frequencies above the shelf frequency by a specified amount.
- A peak EQ filter makes a peak or a dip in the frequency response, commonly used in parametric equalizers.
Phase and group delay
- An all-pass filter passes through all frequencies unchanged, but changes the phase of the signal. Filters of this type can be used to equalize the group delay of recursive filters. This filter is also used in phaser effects.
- A Hilbert transformer is a specific all-pass filter that passes sinusoids with unchanged amplitude but shifts each sinusoid phase by ±90°.
- A fractional delay filter is an all-pass that has a specified and constant group or phase delay for all frequencies.
The impulse response
There is a direct correspondence between the filter's frequency function and its impulse response: the former is the Fourier transform of the latter. That means that any requirement on the frequency function is a requirement on the impulse response, and vice versa.
However, in certain applications it may be the filter's impulse response that is explicit and the design process then aims at producing as close an approximation as possible to the requested impulse response given all other requirements.
In some cases it may even be relevant to consider a frequency function and an impulse response of the filter which are chosen independently from each other. For example, we may want both a specific frequency function of the filter and that the resulting filter have as small an effective width in the signal domain as possible. The latter condition can be realized by considering a very narrow function as the wanted impulse response of the filter, even though this function has no relation to the desired frequency function. The goal of the design process is then to realize a filter which tries to meet both of these contradictory design goals as far as possible.
Causality
In order to be implementable, any time-dependent filter (operating in real time) must be causal: the filter response only depends on the current and past inputs. A standard approach is to leave this requirement until the final step. If the resulting filter is not causal, it can be made causal by introducing an appropriate time-shift (or delay). If the filter is a part of a larger system (which it normally is) these types of delays have to be introduced with care since they affect the operation of the entire system.
Filters that do not operate in real time (e.g. for image processing) can be non-causal. This allows, for example, the design of zero-delay recursive filters, where the group delay of a causal filter is canceled by its Hermitian non-causal filter.
Stability
A stable filter assures that every limited input signal produces a limited filter response. A filter which does not meet this requirement may in some situations prove useless or even harmful. Certain design approaches can guarantee stability, for example by using only feed-forward circuits such as an FIR filter. On the other hand, filters based on feedback circuits have other advantages and may therefore be preferred, even if this class of filters includes unstable filters. In this case, the filters must be carefully designed in order to avoid instability.
Locality
In certain applications we have to deal with signals which contain components which can be described as local phenomena, for example pulses or steps, which have certain time duration. A consequence of applying a filter to a signal is, in intuitive terms, that the duration of the local phenomena is extended by the width of the filter. This implies that it is sometimes important to keep the width of the filter's impulse response function as short as possible.
According to the uncertainty relation of the Fourier transform, the product of the width of the filter's impulse response function and the width of its frequency function must exceed a certain constant. This means that any requirement on the filter's locality also implies a bound on its frequency function's width. Consequently, it may not be possible to simultaneously meet requirements on the locality of the filter's impulse response function as well as on its frequency function. This is a typical example of contradicting requirements.
Computational complexity
A general desire in any design is that the number of operations (additions and multiplications) needed to compute the filter response is as low as possible. In certain applications, this desire is a strict requirement, for example due to limited computational resources, limited power resources, or limited time. The last limitation is typical in real-time applications.
There are several ways in which a filter can have different computational complexity. For example, the order of a filter is more or less proportional to the number of operations. This means that by choosing a low order filter, the computation time can be reduced.
For discrete filters the computational complexity is more or less proportional to the number of filter coefficients. If the filter has many coefficients, for example in the case of multidimensional signals such as tomography data, it may be relevant to reduce the number of coefficients by removing those which are sufficiently close to zero. In multirate filters, the number of coefficients can be reduced by taking advantage of the signal's bandwidth limits, where the input signal is downsampled (e.g. to its critical frequency) and upsampled after filtering.
Another issue related to computational complexity is separability, that is, if and how a filter can be written as a convolution of two or more simpler filters. In particular, this issue is of importance for multidimensional filters, e.g., 2D filter which are used in image processing. In this case, a significant reduction in computational complexity can be obtained if the filter can be separated as the convolution of one 1D filter in the horizontal direction and one 1D filter in the vertical direction. A result of the filter design process may, e.g., be to approximate some desired filter as a separable filter or as a sum of separable filters.
Other considerations
It must also be decided how the filter is going to be implemented:
Analog filters
The design of linear analog filters is for the most part covered in the linear filter section.
Digital filters
Digital filters are classified into one of two basic forms, according to how they respond to a unit impulse:
- Finite impulse response, or FIR, filters express each output sample as a weighted sum of the last N input samples, where N is the order of the filter. FIR filters are normally non-recursive, meaning they do not use feedback and as such are inherently stable. A moving-average filter or CIC filter are examples of FIR filters that are normally recursive (that use feedback). If the FIR coefficients are symmetrical (often the case), then such a filter is linear phase, so it delays signals of all frequencies equally, which is important in many applications. It is also straightforward to avoid overflow in an FIR filter. The main disadvantage is that they may require significantly more processing and memory resources than cleverly designed IIR variants. FIR filters are generally easier to design than IIR filters – the Parks–McClellan filter design algorithm (based on the Remez algorithm) is one suitable method for designing quite good filters semi-automatically. (See Methodology.)
- Infinite impulse response, or IIR, filters are the digital counterpart to analog filters. Such a filter contains internal state, and the output and the next internal state are determined by a linear combination of the previous inputs and outputs (in other words, they use feedback, which FIR filters normally do not). In theory, the impulse response of such a filter never dies out completely, hence the name IIR, though in practice, this is not true given the finite resolution of computer arithmetic. IIR filters normally require less computing resources than an FIR filter of similar performance. However, due to the feedback, high order IIR filters may have problems with instability, arithmetic overflow, and limit cycles, and require careful design to avoid such pitfalls. Additionally, since the phase shift is inherently a non-linear function of frequency, the time delay through such a filter is frequency-dependent, which can be a problem in many situations. 2nd order IIR filters are often called 'biquads' and a common implementation of higher order filters is to cascade biquads. A useful reference for computing biquad coefficients is the RBJ Audio EQ Cookbook.
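As an illustration of the biquad approach, the sketch below computes low-pass biquad coefficients following the widely circulated RBJ Audio EQ Cookbook formulas (plain Python; treat this as a sketch of those formulas rather than an authoritative transcription):

```python
import math

def rbj_lowpass(f0, fs, Q):
    """Low-pass biquad coefficients (b0, b1, b2, a0, a1, a2),
    following the RBJ Audio EQ Cookbook low-pass formulas."""
    w0 = 2 * math.pi * f0 / fs          # digital frequency of the cut-off
    alpha = math.sin(w0) / (2 * Q)      # bandwidth/resonance term
    cosw0 = math.cos(w0)
    b0 = (1 - cosw0) / 2
    b1 = 1 - cosw0
    b2 = (1 - cosw0) / 2
    a0 = 1 + alpha
    a1 = -2 * cosw0
    a2 = 1 - alpha
    return b0, b1, b2, a0, a1, a2

# e.g. a 1 kHz low-pass at a 48 kHz sample rate, Butterworth-like Q
coeffs = rbj_lowpass(f0=1000.0, fs=48000.0, Q=0.7071)
```

A quick sanity check on any low-pass design of this form: the DC gain, (b0 + b1 + b2)/(a0 + a1 + a2), should equal one.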
Sample rate
Unless the sample rate is fixed by some outside constraint, selecting a suitable sample rate is an important design decision. A high rate will require more in terms of computational resources, but less in terms of anti-aliasing filters. Interference and beating with other signals in the system may also be an issue.
Anti-aliasing
For any digital filter design, it is crucial to analyze and avoid aliasing effects. Often, this is done by adding analog anti-aliasing filters at the input and output, thus avoiding any frequency component above the Nyquist frequency. The complexity (i.e., steepness) of such filters depends on the required signal to noise ratio and the ratio between the sampling rate and the highest frequency of the signal.
Theoretical basis
Parts of the design problem relate to the fact that certain requirements are described in the frequency domain while others are expressed in the signal domain and that these may contradict. For example, it is not possible to obtain a filter which has both an arbitrary impulse response and arbitrary frequency function. Other effects which refer to relations between the signal and frequency domain are
- The uncertainty principle between the signal and frequency domains
- The variance extension theorem
- The asymptotic behaviour of one domain versus discontinuities in the other
The uncertainty principle
As stated by the Gabor limit, an uncertainty principle, the product of the width of the frequency function and the width of the impulse response cannot be smaller than a specific constant. This implies that if a specific frequency function is requested, corresponding to a specific frequency width, the minimum width of the filter in the signal domain is set. Vice versa, if the maximum width of the response is given, this determines the smallest possible width in the frequency domain. This is a typical example of contradictory requirements, where the filter design process may try to find a useful compromise.
The variance extension theorem
Let σ_s² be the variance of the input signal and let σ_f² be the variance of the filter. The variance of the filter response, σ_r², is then given by

- σ_r² = σ_s² + σ_f²

This means that σ_r ≥ σ_f and implies that the localization of various features such as pulses or steps in the filter response is limited by the filter width in the signal domain. If a precise localization is requested, we need a filter of small width in the signal domain and, via the uncertainty principle, its width in the frequency domain cannot be arbitrarily small.
Discontinuities versus asymptotic behaviour
Let f(t) be a function and let F(ω) be its Fourier transform. There is a theorem which states that if the lowest-order derivative of F which is discontinuous has order k, then f has an asymptotic decay like |t|^(−k−1).
A consequence of this theorem is that the frequency function of a filter should be as smooth as possible to allow its impulse response to have a fast decay, and thereby a short width.
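This relation between smoothness and decay can be observed numerically. In the sketch below (an assumed illustration, not from the source), a brick-wall frequency function, which is itself discontinuous, yields an impulse response decaying like 1/t, while a triangular frequency function of the same bandwidth, continuous but with a discontinuous first derivative, decays like 1/t²:

```python
import numpy as np

n = 1 << 14
w = np.fft.fftfreq(n)                          # normalized frequency grid

rect = (np.abs(w) <= 0.1).astype(float)        # discontinuous (brick wall)
tri = np.clip(1 - np.abs(w) / 0.1, 0, None)    # continuous, kinked at the edges

h_rect = np.fft.ifft(rect).real                # impulse responses
h_tri = np.fft.ifft(tri).real

# Relative peak magnitude far from the center: the smoother frequency
# function gives a much faster-decaying (shorter) impulse response.
tail = slice(1000, 2000)
print(np.abs(h_rect[tail]).max() / np.abs(h_rect).max())
print(np.abs(h_tri[tail]).max() / np.abs(h_tri).max())
```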
Methodology
One common method for designing FIR filters is the Parks-McClellan filter design algorithm, based on the Remez exchange algorithm. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of N coefficients that minimizes the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as possible to the desired response given that only N coefficients can be used. This method is particularly easy in practice, and at least one text[1] includes a program that takes the desired filter and N and returns the optimum coefficients. One possible drawback of filters designed this way is that they contain many small ripples in the passband(s), since such a filter minimizes the peak error.
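In SciPy, the Parks-McClellan algorithm is available as `scipy.signal.remez`. The sketch below (with an assumed 8 kHz sampling rate and arbitrarily chosen band edges and order) designs an equiripple lowpass filter and measures the resulting ripple:

```python
import numpy as np
from scipy import signal

fs = 8000.0    # assumed sampling rate (Hz)

# Equiripple lowpass with N = 73 taps: pass below 1 kHz, stop above
# 1.5 kHz; the 1-1.5 kHz transition band is left unconstrained.
taps = signal.remez(73, [0, 1000, 1500, fs / 2], [1, 0], fs=fs)

w, h = signal.freqz(taps, worN=2048, fs=fs)
passband = np.abs(h[w <= 1000])
stopband = np.abs(h[w >= 1500])
print(f"passband ripple: {passband.max() - passband.min():.5f}")
print(f"stopband peak:   {stopband.max():.5f}")
```

The small equal-height ripples across the passband are the signature of the minimax (Chebyshev) criterion.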
Another method for finding a discrete FIR filter is the filter optimization described in Knutsson et al., which minimizes the integral of the square of the error, instead of its maximum value. In its basic form this approach requires that an ideal frequency function of the filter is specified together with a frequency weighting function and a set of coordinates in the signal domain where the filter coefficients are located.
An error function is defined as

- ε = ‖ W · (F_I − F̂) ‖

where f is the discrete filter and F̂ is its discrete-time Fourier transform defined on the specified set of coordinates. The norm used here is, formally, the usual L² norm on L² spaces. This means that ε measures the deviation between the requested frequency function of the filter, F_I, and the actual frequency function of the realized filter, F̂. However, the deviation is also subject to the weighting function W before the error function is computed.
Once the error function is established, the optimal filter is given by the coefficients which minimize ε. This can be done by solving the corresponding least squares problem. In practice, the L² norm has to be approximated by means of a suitable sum over discrete points in the frequency domain. In general, however, the number of these points should be significantly larger than the number of coefficients in the signal domain to obtain a useful approximation.
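SciPy's `scipy.signal.firls` implements weighted least-squares FIR design in this spirit (it minimizes the integrated squared error over specified bands, though it is not the Knutsson et al. formulation). A sketch with an assumed 8 kHz sampling rate and arbitrarily chosen band edges and weights:

```python
import numpy as np
from scipy import signal

fs = 8000.0    # assumed sampling rate (Hz)

# Weighted least-squares lowpass, 73 taps; the per-band weights make
# stopband error ten times more costly than passband error.
taps = signal.firls(73, [0, 1000, 1500, fs / 2], [1, 1, 0, 0],
                    weight=[1, 10], fs=fs)

w, h = signal.freqz(taps, worN=2048, fs=fs)
print(f"passband min gain: {np.abs(h[w <= 1000]).min():.4f}")
print(f"stopband peak:     {np.abs(h[w >= 1500]).max():.5f}")
```

Unlike the equiripple design, the squared-error criterion lets the error grow near the band edges in exchange for a smaller error elsewhere.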
Simultaneous optimization in both domains
The previous method can be extended to include an additional error term related to a desired filter impulse response in the signal domain, with a corresponding weighting function. The ideal impulse response can be chosen independently of the ideal frequency function and is in practice used to limit the effective width and to remove ringing effects of the resulting filter in the signal domain. This is done by choosing a narrow ideal filter impulse response function, e.g., an impulse, and a weighting function which grows fast with the distance from the origin, e.g., the distance squared. The optimal filter can still be calculated by solving a simple least squares problem and the resulting filter is then a 'compromise' which has a total optimal fit to the ideal functions in both domains. An important parameter is the relative strength of the two weighting functions which determines in which domain it is more important to have a good fit relative to the ideal function.
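A minimal sketch of this joint optimization, under assumed ideal functions and weightings (an illustration, not the formulation of any particular reference): the weighted frequency-domain residuals and the weighted signal-domain residuals are stacked into a single least squares problem and solved for the coefficients:

```python
import numpy as np

n_taps = 31
taps = np.arange(n_taps) - n_taps // 2           # coefficient positions
omega = np.linspace(0, np.pi, 512)               # frequency grid (many more points)

A = np.exp(-1j * np.outer(omega, taps))          # evaluates the DTFT at each omega
F_ideal = (omega <= 0.3 * np.pi).astype(float)   # assumed ideal lowpass
Wf = np.ones_like(omega)                         # uniform frequency weighting

c_ideal = np.zeros(n_taps)                       # ideal impulse response: an impulse
c_ideal[n_taps // 2] = 1.0
Wt = 0.02 * taps.astype(float) ** 2              # penalty grows as distance squared

# Stack: minimize |Wf (F_ideal - A c)|^2 + |Wt (c - c_ideal)|^2 over c.
M = np.vstack([Wf[:, None] * A.real,
               Wf[:, None] * A.imag,
               np.diag(Wt)])
b = np.concatenate([Wf * F_ideal,
                    np.zeros_like(omega),
                    Wt * c_ideal])
c, *_ = np.linalg.lstsq(M, b, rcond=None)

H = A @ c
print(f"DC gain of the compromise filter: {np.abs(H[0]):.3f}")
```

Scaling Wt up relative to Wf pulls the solution toward the narrow ideal impulse response (suppressing ringing in the signal domain) at the cost of a worse fit to the ideal frequency function, and vice versa.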
References
- ^ Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-914101-4.
- A. Antoniou (1993). Digital Filters: Analysis, Design, and Applications (2nd ed.). McGraw-Hill, New York, NY. ISBN 978-0-07-002117-4.
- A. Antoniou (2006). Digital Signal Processing: Signals, Systems, and Filters. McGraw-Hill, New York, NY. doi:10.1036/0071454241. ISBN 978-0-07-145424-7.
- S.W.A. Bergen; A. Antoniou (2005). 'Design of Nonrecursive Digital Filters Using the Ultraspherical Window Function'. EURASIP Journal on Applied Signal Processing. 2005 (12): 1910. doi:10.1155/ASP.2005.1910.
- A.G. Deczky (October 1972). 'Synthesis of Recursive Digital Filters Using the Minimum p-Error Criterion'. IEEE Trans. Audio Electroacoustics. AU-20 (4): 257–263. doi:10.1109/TAU.1972.1162392.
- J.K. Kaiser (1974). 'Nonrecursive Digital Filter Design Using the I0-sinh Window Function'. Proc. 1974 IEEE Int. Symp. Circuit Theory (ISCAS74). San Francisco, CA. pp. 20–23.
- H. Knutsson; M. Andersson; J. Wiklund (June 1999). 'Advanced Filter Design'. Proc. Scandinavian Symposium on Image Analysis, Kangerlussuaq, Greenland.
- S.K. Mitra (1998). Digital Signal Processing: A Computer-Based Approach. McGraw-Hill, New York, NY. ISBN 978-0-07-286546-2.
- A.V. Oppenheim; R.W. Schafer; J.R. Buck (1999). Discrete-Time Signal Processing. Prentice-Hall, Upper Saddle River, NJ. ISBN 978-0-13-754920-7.
- T.W. Parks; J.H. McClellan (March 1972). 'Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase'. IEEE Trans. Circuit Theory. CT-19 (2): 189–194. doi:10.1109/TCT.1972.1083419.
- L.R. Rabiner; J.H. McClellan; T.W. Parks (April 1975). 'FIR Digital Filter Design Techniques Using Weighted Chebyshev Approximation'. Proc. IEEE. 63 (4): 595–610. doi:10.1109/PROC.1975.9794.
External links
- Yehar's digital sound processing tutorial for the braindead! This tutorial explains, among other topics, filter design theory in simple terms and gives some examples.