Abstract: The present invention provides an improved Finite Impulse Response (FIR) filter providing linear scalability and implementation without the need for delay lines, comprising a multiprocessor architecture including a plurality of ALUs (Arithmetic and Logic Units), Multiplier units, Data cache, and Load/Store units sharing a common Instruction cache and multi-port memory, and an assigning means for assigning to each available processing unit the computation of specified unique partial product terms and the accumulation of each computed partial product on specified output sample values. The invention further provides a method for implementing an improved Finite Impulse Response (FIR) filter providing linear scalability using a multiprocessing architecture platform without the need for delay lines.
LINEARLY SCALABLE FINITE IMPULSE RESPONSE FILTER
Field of the Invention:
The invention relates to Finite Impulse Response (FIR) digital filters. More specifically, the invention relates to an efficient implementation of Finite Impulse Response filters on multi-processor architectures.
Background of the invention:
Finite Impulse Response (FIR) filters are an important and widely used form of digital filter. FIR filters are used in numerous real-time applications, including telecommunications, particularly for the generation of constant phase delay characteristics. In addition, signal processing applications requiring interpolation and decimation functions incorporate FIR filtering as an integral part of the process.
The output sample of a FIR filter is a convolution summation of input samples and the impulse response of the filter. The output, y(n) of a causal FIR filter can be written as:
y(n) = Σ h(k) · x(n - k),  summed over k = 0 to H - 1        ... (1)
Where:
H is the total number of filter coefficients and n = 0, 1, 2, 3, ...; for the different values of n, the corresponding output samples of the filter are obtained.
h(k) is the impulse response of the filter; the filter coefficients are determined for the various values of k. The value of k can never be negative for a causal system.
x(m) is the input sample, where m = n - k as shown in equation (1). The value of m can never be negative for a causal system.
As stated by the above equation, the coefficients are multiplied with the appropriate input samples and the products are accumulated to obtain a particular output sample.
For N input samples and H coefficients, the required number of multiplications for a given output sample is H. The saturation occurs at the Hth output sample, as shown in Figure 1.
All H multiplications are necessarily required if all the H coefficients are unique.
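For concreteness, a minimal C sketch of equation (1) is given below; it is only an illustration of the direct-form computation described above, and the function name, types and boundary handling (skipping terms where the input index n - k would be negative, as required for a causal system) are assumptions rather than part of the original disclosure.

```c
#include <stddef.h>

/* Direct-form FIR per equation (1): y(n) = sum of h(k) * x(n - k) over k.
 * Terms with n - k < 0 are skipped, since m = n - k can never be negative
 * for a causal system; H multiplications per output are needed once n >= H - 1. */
void fir_direct(const float *x, size_t N,   /* N input samples       */
                const float *h, size_t H,   /* H filter coefficients */
                float *y)                   /* N output samples      */
{
    for (size_t n = 0; n < N; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < H && k <= n; k++)
            acc += h[k] * x[n - k];
        y[n] = acc;
    }
}
```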
Compared to other filters, FIR filters offer the following advantages: they are simple to design and implement. Linear-phase filters, which delay the input signal without causing phase distortion, can be easily realized using FIR filters. FIR filters do not employ any feedback and hence present fewer difficulties in practical implementation. The absence of feedback also simplifies design by making it possible to use finite precision arithmetic. Multi-rate applications such as "decimation" (reducing the sampling rate) and "interpolation" (increasing the sampling rate) are best realized with FIR filters.
FIR filters can be easily implemented using fractional arithmetic, unlike other types of filters for which this is difficult to do. It is always possible to implement an FIR filter using coefficients with magnitudes of less than 1.0, as the overall gain of the FIR filter can be adjusted at its output. All the above advantages make FIR filters preferable for fixed-point Digital Signal Processors (DSPs).
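To illustrate the point about fractional arithmetic (a hedged sketch, not taken from the original text): the coefficients can be pre-scaled so their magnitudes are below 1.0, stored in a fractional format such as Q15, and the gain removed by that pre-scaling re-applied at the filter output. The names, format and saturation limits below are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* One output sample of a fractional fixed-point FIR: coefficients are assumed
 * pre-scaled to magnitudes below 1.0 and stored in Q15; the gain removed by
 * the pre-scaling is re-applied to the accumulated result at the output. */
int16_t fir_q15_sample(const int16_t *x, size_t n,      /* inputs, current index   */
                       const int16_t *h_q15, size_t H,  /* Q15 coefficients        */
                       int32_t out_gain)                /* gain restored at output */
{
    int32_t acc = 0;                                    /* 32-bit accumulator      */
    for (size_t k = 0; k < H && k <= n; k++)
        acc += (int32_t)h_q15[k] * x[n - k];            /* Q15 * int16 -> 32-bit   */
    acc = (acc >> 15) * out_gain;                       /* undo Q15 scaling, apply gain */
    if (acc >  32767) acc =  32767;                     /* saturate to int16 range */
    if (acc < -32768) acc = -32768;
    return (int16_t)acc;
}
```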
The conventional approach to FIR filter implementation utilizes a delay line (m = n - k, equation (1)), resulting in increased memory requirements and slower computation.
US patent 5,732,004 describes an algorithm for FIR filtering. It proposes a method of decimating and/or interpolating a multi-bit input signal in which n/2 additions are performed, where 'n' is the number of bits in each filter coefficient. Scaling and multiplication of data with coefficients is performed on a standard DSP architecture, using coefficient values and associated scaling factors stored in memory. The coefficients are stored in coded form and are then decoded prior to multiplication by the data values. A delay line is essential for the implementation of this method.
US patents 6,260,053 and 5,297,069 describe methods and implementations that utilize delay lines for FIR filter implementation. Moreover, none of these methods provides linear scalability.
Objects and Summary of the invention:
The object of the invention is to provide an efficient implementation of FIR filters for multi-processor architectures.
Another object of this invention is to provide a linearly scalable FIR filter.
Yet another object of the invention is to obviate the need for a delay line in implementing an FIR filter, thereby reducing the memory requirement for computation and resulting in a cost-effective solution.
It is a further object of the invention to speed up the computation of FIR filters.
To achieve said objective, this invention provides an improved Finite Impulse Response (FIR) filter providing linear scalability and implementation without the need for delay lines, comprising a multiprocessor architecture including a plurality of ALUs (Arithmetic and Logic Units), Multiplier units, Data cache, and Load/Store units sharing a common Instruction cache and multi-port memory, and an assigning means for assigning to each available processing unit the computation of specified unique partial product terms and the accumulation of each computed partial product on specified output sample values.
Wherein the assigning means is a pre-process that is used to design the implementation of the FIR filter based on the filter specifications.
Further the invention provides a method for implementing an improved Finite Impulse Response (FIR) filter providing linear scalability using a multiprocessing architecture platform without the need for delay lines, comprising the steps of:
assigning to each available processing unit the computation of specified unique partial product terms and the accumulation of each computed partial product on specified output sample values.
Brief Description of drawings and set of equations:
The invention will now be described with reference to the accompanying drawings:
Figure 1 shows a generalized set of equations for the outputs of an FIR filter with N input/output samples, and H filter coefficients.
Figure 2 shows a set of equations for the outputs of a FIR filter having 11 input/output samples, and 6 filter coefficients.
Figure 3 shows a set of equations for the outputs of an FIR filter having 13 input/output samples, and 7 filter coefficients.
Figure 4 shows a schematic diagram of a Multi-processor architecture.
Figure 5 shows a schematic diagram of the internal architecture of one processor of a multi-processor architecture.
Detailed Description of the invention
Figure 1 shows a generalized set of equations for the outputs of an FIR filter with N input/output samples, and H filter coefficients. These equations define the outputs from an FIR filter.
Figure 2 refers to an example of the output samples of a symmetric FIR filter having 11 input/output samples and 6 filter coefficients. If the total number of coefficients 'H' is an even number, as shown in this example, then it is possible to obtain all the output samples by computing a subset of partial products that involve 'H/2' coefficients. Each of these partial products occurs in 2 symmetrically spaced output samples, and can therefore be computed once and reused, thereby eliminating the requirement for a delay line and reducing computation time and memory space requirements as compared to the conventional approach of FIR filter implementation.
In the example of Figure 2, output sample y(5) requires the computation of only 3 partial products, i.e. x(5)h(0), x(4)h(1) and x(3)h(2), as the other three partial product terms x(0)h(0), x(1)h(1) and x(2)h(2) have already been computed for previous samples y(0), y(2) and y(4) respectively. Similarly, x(5)h(0), x(4)h(1) and x(3)h(2) also contribute to y(10), y(8) and y(6) respectively, and so on. It can be observed from Figure 2 that 'P' consecutive output calculations can be distributed over 'P' processors simultaneously, provided 'P' consecutive inputs are present. As shown for the case of two processors (P = 2), y(5) and y(6) can be calculated simultaneously.
Figure 3 shows an example of the output samples of a symmetric FIR filter having 13 input/output samples and 7 filter coefficients. If the total number of coefficients 'H' is an odd number, as shown in this example, then it is possible to obtain all the output samples by calculating a maximum of ((H-1)/2 + 1) partial products after the saturation. When the number of filter coefficients is odd, i.e. H is odd, the basic algorithm remains the same as for the even-coefficient case, with the exception of the ((H-1)/2 + 1)th column, around which Figure 3 is centro-symmetric. The total number of multiplications required for each output sample is ((H-1)/2 + 1) after the saturation. As shown, the product term associated with h(3) cannot be reused to calculate the partial summation for any other output sample.
The basic algorithm for symmetric FIR filter computation is as follows (a code sketch follows the note below):
- Initialize loop index to zero
- Load input sample and coefficients
- Multiply input sample and coefficient
- Update current output
- Update partial output, if required (see Note below)
- Increment loop index by the number of processors
- If the loop termination condition is satisfied then stop, else go to the second step.
Note: For the centro-symmetric case, one product term is never re-used.
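A minimal single-processor C sketch of these steps is given below, assuming a symmetric filter (h(k) = h(H-1-k)), N input samples and a zero-initialised output buffer; each unique partial product x(m)h(k) is computed once and accumulated into the current output y(m+k) and, where it exists, into the symmetrically spaced partial output y(m+H-1-k). The function and variable names are illustrative assumptions, not taken from the original text.

```c
#include <stddef.h>

/* Symmetric FIR without a delay line: every unique partial product x[m]*h[k],
 * k = 0 .. H/2 - 1, is computed once and accumulated into the two symmetrically
 * spaced output samples it contributes to. For odd H the centre coefficient
 * h[(H-1)/2] is applied once and never reused (the centro-symmetric case). */
void fir_symmetric(const float *x, size_t N,   /* input samples                  */
                   const float *h, size_t H,   /* symmetric coefficients         */
                   float *y, size_t Ny)        /* zero-initialised output buffer */
{
    for (size_t m = 0; m < N; m++) {
        for (size_t k = 0; k < H / 2; k++) {
            float p = x[m] * h[k];             /* multiply input sample and coefficient */
            if (m + k < Ny)
                y[m + k] += p;                 /* update current output          */
            if (m + H - 1 - k < Ny)
                y[m + H - 1 - k] += p;         /* update partial (future) output */
        }
        if (H % 2) {                           /* centre term for odd H          */
            size_t mid = (H - 1) / 2;
            if (m + mid < Ny)
                y[m + mid] += x[m] * h[mid];
        }
    }
}
```

For the first N output samples this produces the same values as the direct form sketched earlier, while performing each distinct multiplication only once and reading inputs and coefficients directly from memory rather than from a delay line.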
For the case of a two-parallel-processor architecture, the basic algorithm is modified as follows:
- Initialize loop-index to zero
- Load input sample for processor # 1
- Load coefficient for processor # 1
- Multiply input sample and coefficient in processor # 1
- Update current output in processor # 1
- Update partial output, if required, in processor # 1
- Load input sample for processor #2
- Load coefficient for processor #2
- Multiply input sample and coefficient in processor #2
- Update current output in processor #2
- Update partial output, if required, in processor #2
- Increment loop-index by 2
- If the loop-index equals (N*H) then stop, else go to the second step
The algorithm can be generalized for any number of processors in parallel.
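As one possible illustration of that generalization (a sketch under assumptions, not the exact scheme of the invention), the function below shows the work of one of P parallel processors when consecutive output samples are distributed round-robin; each processor reads input samples and coefficients directly from the shared multi-port memory, so no delay line is needed. It is written in a gather style, summing the two symmetric terms for each coefficient pair, and all names are assumptions.

```c
#include <stddef.h>

/* Work of processor number p out of P: output samples are distributed
 * round-robin, so processor p computes y[p], y[p + P], y[p + 2P], ...
 * Inputs and coefficients are read straight from shared memory.
 * Assumes Ny does not exceed the number of available input samples. */
void fir_symmetric_proc(const float *x, const float *h, float *y,
                        size_t H, size_t Ny, size_t p, size_t P)
{
    for (size_t n = p; n < Ny; n += P) {           /* this processor's outputs    */
        float acc = 0.0f;
        for (size_t k = 0; k < H / 2; k++) {       /* symmetric coefficient pairs */
            if (n >= k)
                acc += h[k] * x[n - k];            /* h(k) * x(n - k)             */
            if (n >= H - 1 - k)
                acc += h[k] * x[n - (H - 1 - k)];  /* reused symmetric term       */
        }
        if (H % 2 && n >= (H - 1) / 2)             /* centre term for odd H       */
            acc += h[(H - 1) / 2] * x[n - (H - 1) / 2];
        y[n] = acc;
    }
}
```

Invoking this once per processor index p = 0 .. P-1, in parallel, yields all Ny outputs, and doubling P roughly halves the work of each processor, which is the linear scalability discussed later in the description.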
Further, the algorithm is also applicable to asymmetric FIR filters, using the following steps (a sketch follows the list):
- Initialize loop index to zero
- Load input sample and coefficients
- Multiply input sample and coefficient
- Update current output
- Increment loop index by the number of processors
- If the loop termination condition is satisfied then stop, else go to the second step.
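In the asymmetric case no partial product can be shared between output samples, so the corresponding per-processor sketch (again illustrative C, with the same naming assumptions as above) reduces to distributing the direct-form computation of equation (1) over the P processors:

```c
#include <stddef.h>

/* Asymmetric case: processor p computes the full H-term sum for its own
 * round-robin share of the outputs; no partial-output update is needed. */
void fir_asymmetric_proc(const float *x, const float *h, float *y,
                         size_t H, size_t Ny, size_t p, size_t P)
{
    for (size_t n = p; n < Ny; n += P) {
        float acc = 0.0f;
        for (size_t k = 0; k < H && k <= n; k++)
            acc += h[k] * x[n - k];
        y[n] = acc;
    }
}
```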
Each processor in the parallel processing architecture is provided with an independent ALU (Arithmetic and Logic Unit), Multiplier unit, Data cache, and Load/Store unit. All the processors share a common Instruction cache and multi-port memory. This architecture is typical of VLIW (Very Long Instruction Word) processors.
Figure 4 shows the typical architecture of a Very Long Instruction Word (VLIW) processor core. It is a multi-processor architecture in which each processor is composed of a mix of Register Banks, Constant Generators (immediate operands) and Functional Units. Different processors may have different unit/register mixes, but a single Program Counter and a unified I-cache control them all, so that all processors run in lockstep (as expected in a VLIW). Similarly, the execution pipeline drives all processors. Inter-processor communication, achieved by explicit register-to-register moves, is compiler-controlled and invisible to the programmer. At the multi-processor level, the typical architecture specification for such a device is as follows:
• An instruction delivery mechanism is provided to get instructions from the cache to the processors' data-path.
• An inter-processor communication mechanism is provided to transfer data among processors.
• The data-cache is organized to establish a guarantee of main memory coherency in the presence of multiple memory accesses.
The Algorithm to compute output samples resides in the instruction fetch cache and expansion unit.
Figure 5 shows the schematic diagram of the internal architecture of the processor. Generally a processor is a 4-issue (maximum 4 instructions can be issued simultaneously) VLIW core comprising the following:
• Four 32-bit integer ALUs, two 16x32 multipliers, one Load/Store Unit and one Branch Unit.
• Sixty-four 32-bit general-purpose registers and eight 1-bit branch registers (used to store branch conditions, predicates and carries). Instructions allow two long immediate data operands per cycle.
• The Instruction Set Architecture is a very simple integer RISC instruction set with minimal "predication" support through select instructions.
• The memory-addressing repertoire includes base + offset addressing, and allows speculative execution (dismissible loads, handled by the protection unit) and software pre-fetching.
The instant invention utilizes this architecture to provide faster computation, as is evident from the fact that, if an FIR filter takes T units of time to compute N outputs on one processor, then with an increased number of processors, say P, the time required to compute the N outputs is T/P units, which provides 'linear scalability'. Further, since all the processors load input samples and coefficients from the shared memory, where both sets of data are available, there is no requirement for any delay line. The instant invention is not limited in scope to non-unique (symmetric) filter coefficients; both the elimination of the delay line and the linear scalability can be realized even when all the coefficients are unique.
Claim:
1. An improved symmetric Finite Impulse Response (FIR) filter providing linear scalability and implementation without the need for delay lines, comprising:
a multiprocessor architecture including a plurality of ALUs (Arithmetic and Logic Units), Multiplier units, Data cache, and Load/Store units sharing a common Instruction cache and multi-port memory, and
an assigning means for assigning to each available processing unit the computation of specified unique partial product terms and the accumulation of each computed partial product on specified output sample values.
2. An improved FIR filter, as claimed in claim 1 wherein the assigning means is a preprocess that is used to design the implementation of the FIR filter based on the filter specifications.
3. A method for implementing an improved symmetric Finite Impulse Response (FIR) filter providing linear scalability using a multiprocessing architecture platform without the need for delay lines, comprising the steps of:
assigning to each available processing unit the computation of specified unique partial product terms and the accumulation of each computed partial product on specified output sample values.
4. An improved symmetric Finite Impulse Response (FIR) filter providing linear scalability and implementation without the need for delay lines substantially as herein described with reference to and as illustrated in the accompanying drawings.
5. A method for implementing an improved symmetric Finite Impulse Response (FIR) filter providing linear scalability using a multiprocessing architecture platform without the need for delay lines substantially as herein described with reference to and as illustrated in the accompanying drawings.