Abstract: An error correction system for decoding data transmitted over multiple channels is disclosed. The system uses low density parity check codes. A method of error correction using LDPC codes is also disclosed.
FORM-2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
PROVISIONAL
Specification
(See section 10 and rule 13)
LOW DENSITY PARITY CHECK CODE DECODER
TATA CONSULTANCY SERVICES LTD.
an Indian Company
of Bombay House, 24 Sir Homi Mody Road,
Mumbai 400051.
Maharashtra, India.
THE FOLLOWING SPECIFICATION DESCRIBES THE INVENTION.
Field of invention:
This invention relates to a system for information decoding.
In particular, this invention relates to a decoding system using low density parity check (LDPC) codes.
Background of the invention:
Introduction:
The use and implementation of computer networks has become very popular. Networks such as local area networks (LANs), metropolitan area networks (MANs), wireless LANs (WLANs) and the like are being widely implemented for the purpose of accessing data. The use of such networks has led to the development of various international standards in the field of communication, such as WiFi (Wireless Fidelity) and WiMAX (Worldwide Interoperability for Microwave Access).
WiMAX is a MAN technology that can connect Wi-Fi hotspots with each other and to other parts of the Internet and provide a wireless alternative to cable and DSL. WiMAX provides up to 50 km (31 miles) of linear service area range and allows connectivity between users without a direct line of sight. The technology has been claimed to provide shared data rates up to 70 Mbit/s, which, according to WiMAX proponents, is enough bandwidth to simultaneously support more than 60 businesses and well over a thousand homes at 1 Mbit/s DSL-level connectivity. Real world tests, however, show practical maximum data rates between 500 kbit/s and 2 Mbit/s, depending on conditions at a given site.
Also, WiMAX allows interpenetration for broadband service provision of VoIP, video and Internet access simultaneously. Even in areas without pre-existing physical cable or telephone networks, WiMAX allows access between networks which are within range of each other. WiMAX antennas can share a cell tower without compromising the function of cellular arrays already in place. WiMAX antennas may also be connected to an Internet backbone via either a fiber optic cable or a directional microwave link. WiMAX also facilitates an increase in bandwidth for a variety of data-intensive applications.
WiMAX has become synonymous with the IEEE 802.16 standard family, an emerging standard for fixed and mobile MAN (Metropolitan Area Network) Broadband Wireless Access (BWA). The original 802.16 and the subsequently amended 802.16a standards are both used for fixed BWA. The latter caters for non-line of sight (NLOS) applications, as BWA is increasingly becoming a residential application. The latest 802.16e amendment adds support for mobility (at vehicular speeds, around 120 km/h) in the WiMAX system. The 802.16e standard will allow users' hardware (notebooks, personal digital assistants (PDAs)) to access high speed Internet while roaming outside WiFi (Wireless Fidelity) hotspots. The 802.16 standard supports high data rates (up to about 70 Mbps) with a variety of channel coding options. The mandatory scheme is a convolutional code. Convolutional turbo codes, turbo product codes and LDPC codes are optional. These optional codes can be used to ensure robustness in extreme fading channels.
LDPC codes are linear block codes originally proposed by Gallager in the early 1960s. Their parity check matrix is sparse, having a low density of ones. The original codes were regular codes having uniform column and row weight in the parity check matrix. Recently, these codes have emerged as competitors to turbo codes, with capacity-approaching performance. Better performance of LDPC codes is achieved with a proper choice of code and decoding signal processing. A popular LDPC decoding algorithm is the Belief Propagation algorithm, also referred to as the Sum-Product algorithm. The Sum-Product algorithm is a message passing algorithm operating on the Tanner graph, which is a bipartite graph representing the parity check matrix and consisting of variable nodes and check nodes. A bipartite graph is a special graph whose set of vertices can be divided into two disjoint sets such that two vertices of the same set never share an edge.
Prior Art:
The main challenge in the hardware implementation of an LDPC code decoder is to effectively manage the message passing during the iterative belief propagation (BP) decoding. Systems and devices for decoding usually use one of three schemes: (1) parallel, (2) serial and (3) semi-parallel.
Fully parallel decoders directly instantiate the bipartite graph of the LDPC code in hardware. Each individual variable node or check node is physically implemented as a node functional unit, and all the units are connected through an interconnection network reflecting the bipartite graph connectivity. There is no need for central memory blocks to store the messages; they can be latched close to the processing units. Such
fully parallel decoders can achieve very high decoding throughput in terms of bits per second. However, the area required for the physical implementation of all the processing units and the interconnect routing makes this approach infeasible for large block lengths. Further, the parallel hardware design is fixed to a particular parity check matrix. This prohibits the reconfigurability required when the block length or rate of the code changes.
A fully-serial architecture has a smaller area, since it is sufficient to have just one variable node computational unit (VCU) and one check node computational unit (CCU). The fully-serial approach is suitable for Digital Signal Processors, in which only a few functional units are available. However, the speed of decoding is very low in a serial decoder.
The prior art systems or devices which use the Sum-Product algorithm incorporate means that are adapted to operate on a Tanner graph. The device has a mechanism for initialization and means by which, in each iteration, message passing occurs from each check node to all adjacent variable nodes in the first half of the iteration and from each variable node to its adjacent check nodes in the second half of the iteration. The device also has a mechanism by which repeated iterations of the message passing take place along the edges of the graph, with some stopping criterion.
The steps involved in the implementation of the Sum-Product algorithm, in sign-magnitude processing form, are given below:
Initialization: T^{(0)}_{n,m} = I_n; E^{(0)}_{n,m} = 0

Iteration: for iteration counter l = 1, 2, ..., l_max, the following updates are performed.

Check node update rule:
$$E^{(l)}_{n,m} = \left( \prod_{n' \in N(m)\setminus n} \operatorname{sign}\!\left(T^{(l-1)}_{n',m}\right) \right)\, \phi\!\left( \sum_{n' \in N(m)\setminus n} \phi\!\left( \left| T^{(l-1)}_{n',m} \right| \right) \right)$$

Variable node update rule:
$$T^{(l)}_{n,m} = I_n + \sum_{m' \in M(n)\setminus m} E^{(l)}_{n,m'}$$

Last variable node update rule:
$$T^{(l)}_{n} = I_n + \sum_{m \in M(n)} E^{(l)}_{n,m}$$
In the above equations, T_{n,m} is the information sent by a variable node n to its connected check node m. E_{n,m} is the message passed from check node m to the connected variable node n (the information given by the parity check m on bit n). M(n) is the set of check nodes connected to variable node n, N(m) is the set of variable nodes connected to check node m, and φ(x) = -log(tanh(x/2)) with x > 0. I_n is the channel Log Likelihood Ratio (LLR) and can be obtained depending on the channel's Additive White Gaussian Noise (AWGN). l indicates the iteration number, with l_max being the maximum number of iterations. It is observed that the check node computation is more complex. The nonlinear function φ(x) is implemented using a look-up table (LUT).
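For illustration only (this sketch is not part of the specification), the sign-magnitude Sum-Product updates given above can be expressed as follows in Python; here φ is computed directly in floating point rather than with a look-up table, and the function names are ours:

```python
import math

def phi(x):
    """phi(x) = -log(tanh(x/2)); in the prior art hardware this nonlinearity
    is realized with a look-up table (LUT)."""
    x = max(x, 1e-12)  # guard against log(0) for inputs very close to zero
    return -math.log(math.tanh(x / 2.0))

def check_node_update(T_in):
    """Sign-magnitude check node update: for each edge, combine the signs and
    phi-transformed magnitudes of all the *other* incoming messages."""
    E_out = []
    for i in range(len(T_in)):
        sign, mag_sum = 1.0, 0.0
        for j, t in enumerate(T_in):
            if j == i:
                continue
            sign *= 1.0 if t >= 0 else -1.0
            mag_sum += phi(abs(t))
        E_out.append(sign * phi(mag_sum))
    return E_out

def variable_node_update(I_n, E_in):
    """Variable node update: T[n,m] = I_n + sum of E[n,m'] over all other edges."""
    total = sum(E_in)
    return [I_n + total - e for e in E_in]

# Small example: one check node / variable node of degree three.
print(check_node_update([0.9, -1.2, 0.4]))
print(variable_node_update(0.8, [0.3, -0.1, 0.5]))
```

Note that each outgoing message excludes the message received on the same edge, which is why the inner loop runs over all the other inputs.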
However, the use of the prior art devices employing the Sum-Product process for decoding has a number of disadvantages, which can be summarized as follows:
(i) For every iteration the device needs to refer to a look-up table;
(ii) For every iteration the device needs to compute log likelihood ratios;
(iii) The device requires a computational means for computing log likelihood ratios;
(iv) The device requires a storage means for storing values relating to every log likelihood ratio computed;
(v) The device requires another computational means for computing noise variance values related to every iteration.
Due to the aforementioned disadvantages, the complexity and the time required for the decoding of LDPC codes increase considerably.
This invention seeks to overcome the limitations of the prior art.
An object of this invention is to provide a decoding device and system using a semi-parallel processing method for the decoding of LDPC codes for WiMAX.
Another object of this invention is to obtain a reconfigurable architecture which can be used for different block lengths and different code rates.
Another object is to provide novel mathematical expressions which provide the inputs for the "location pointer" and "bank selector", which in turn achieve the Tanner graph connectivity.
Another object of this invention is to provide a system that reduces the bit error rate and the frame error rate.
Another object of this invention is to provide a system which eliminates the use of look-up tables, thus reducing memory requirements.
Summary of the invention:
In accordance with this invention there is provided a decoder for decoding low density parity check codes. The decoder comprises:
(i) variable node means;
(ii) check node means;
(iii) permutation/switching network means;
(iv) bank selector means;
(v) location pointer means;
(vi) check node computation means; and
(vii) variable node computation means.
In accordance with one practical embodiment, this invention envisages a decoder which incorporates the use of a semi-parallel method for decoding LDPC codes. The semi-parallel decoder device targets an appropriate trade-off between hardware complexity and decoding speed. The decoder device consists of an array of node computation units to perform all the node computation and an array of memory blocks to store all decoding messages. The message passing that reflects the bipartite graph connectivity is jointly realized by the memory address generator and the interconnection among the memory blocks and node computation units. The decoder can support flexible code rate configurations, frame lengths and degree distributions, thus facilitating reconfigurability.
Brief description of the accompanying drawings:
The invention will be described in detail with reference to a preferred
embodiment. Reference to this embodiment does not limit the scope of the
invention.
In the accompanying drawings:
Figure 1 illustrates the decoder architecture in accordance with this
invention.
Detailed description of the accompanying drawings:
The invention will now be explained with reference to figure 1 of the accompanying drawings.
Figure 1 illustrates the decoder architecture in accordance with this invention. The decoder shown is for rate 1/2 and frame length 2304. The decoder comprises:
(i) variable node means (10);
(ii) check node means (12);
(iii) permutation/switching network means (14);
(iv) bank selector means (16);
(v) location pointer means (18);
(vi) check node computation means (20); and
(vii) variable node computation means (22).
The design is based on the cyclic behaviour of the codes. The number of memory banks and the number of addresses stored in each bank are the quantities required when reconfigurability for different lengths and different rates is addressed.
The addresses of the neighbours of a check node and a variable node are related to a rate-dependent factor 'z'. For N = 2304 and rate 1/2, a rate-dependent factor with value 96 is selected. For a set of z nodes, starting from the 0th node up to the (z-1)th node, there is generally an increment of one in the address of the corresponding connected node. That is, looking from the check nodes to the variable nodes, for every set of z check nodes the address values of the connected variable nodes generally get incremented by one. Similarly, on the variable node side, for every z variable nodes the addresses of the connected check nodes get incremented by one. For example, for rate 1/2, for the first set of 96 check nodes from the 0th node to the 95th node, the variable node addresses for the 1st node are the addresses of the 0th node incremented by one. The word "generally" used above accounts for the wrap-around which takes place when the address is a multiple of z. For example, while incrementing, if the address obtained is 96, it becomes zero; if the address is 192, it becomes 96; and so on. That is, there is a decrement of 96. Based on this observation, it is decided to use 96 elements each for VN (variable node) and CN (check node) processing in the semi-parallel architecture, with a connection mechanism to take care of the wrap-around problem.
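Purely as an illustration of the wrap-around behaviour described above (not part of the specification, and the function name is hypothetical), the increment of a neighbour address within its group of z nodes can be sketched as follows, assuming z = 96:

```python
def next_address(addr, z=96):
    """Increment a neighbour address within its group of z nodes, wrapping
    back to the start of the group when a multiple of z would be reached
    (the 'decrement of 96' described above)."""
    group_start = addr - (addr % z)
    return group_start + (addr + 1 - group_start) % z

# Within the group 0..95 the address 95 wraps to 0; within 96..191, 191 wraps to 96.
assert next_address(94) == 95
assert next_address(95) == 0
assert next_address(191) == 96
```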
In Figure 1, the entries shown in the memory banks are the addresses of the variable node means (10) and check node means (12). 96 processing units work in parallel and fetch the inputs one by one based on the entries given in the bank selector means (16), which selects the bank, and the location pointer means (18), which indicates the location within the selected bank. The number of addresses in the banks is rate dependent. Figure 1 shows the addresses for rate 1/2 and block length 2304; hence there are 2304
variable nodes and 1152 check nodes. The number of entries in the location pointer means (18) and the bank selector means (16), apart from being rate dependent, varies during the iterative procedure. The entries shown are for the first 96 check-node processing operations; the check nodes in this set are 0, 1, 2, ..., 95 as seen in Figure 1. For the next 96 (96, 97, ..., 191), different entries need to be loaded into these registers. To elaborate the iterative procedure further, assume that the entire frame of 2304 received values is scaled to obtain the appropriate Log Likelihood Ratios (LLRs), which are stored in the banks at the addresses indicated. The check node computational means (20) then starts processing. Each check node computational means (20) gets six or seven values sequentially. For example, the six values for check node 0 are taken from the first location of the ninety-fourth bank, the second location of the seventy-third bank, and the like. The 96 check node computational means (20) get their values in this manner. Once the processing is completed, each check node computational unit (20) generates the same number of outputs as inputs, and the outputs are written into the memory banks in the check node means (12) at the respective addresses. The processing for the next 96 check nodes can then be taken up by loading the relevant entries into the bank selector and location pointer. The inputs of the bank selector and the location pointer are obtained from an equation of the type:
$$\text{location pointer} = \frac{I - \operatorname{mod}(I, z)}{z}$$
where I is the address of the variable node connected to a particular parity check node. The entries are worked out by picking any parity check node in the group. Further, in the above equation, mod(I, z) gives the bank selector value.
Once all the check-node processing is completed, the variable node processing starts. 96 variable nodes are processed at a time, fetching the values from the variable node memory banks. The equation for computing the location pointer values is:
$$\text{location pointer} = \frac{C - \operatorname{mod}(C, z)}{z}$$
where C is the check-node address and mod(C, z) is the bank selector value. This completes the decoder architecture design; in implementation, a controller is required to make sure that all the units are synchronized.
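A minimal sketch, assuming z = 96, of how the bank selector and location pointer follow from the two equations above (the function name and the example address are illustrative, not taken from the specification):

```python
def bank_and_location(addr, z=96):
    """Map a node address to (bank selector, location pointer).

    During check node processing addr is the address I of a variable node
    connected to the parity check being updated; during variable node
    processing it is the check node address C."""
    bank_selector = addr % z                        # mod(addr, z)
    location_pointer = (addr - bank_selector) // z  # [addr - mod(addr, z)] / z
    return bank_selector, location_pointer

print(bank_and_location(2113))  # -> (1, 22): bank 1, location 22 for z = 96
```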
The rule for updating the check node in accordance with this invention is given by:
Check node update rule:
$$E^{(l)}_{n,m} = \left( \prod_{n' \in N(m)\setminus n} \operatorname{sign}\!\left(T^{(l-1)}_{n',m}\right) \right)\, \min_{n' \in N(m)\setminus n} \left| T^{(l-1)}_{n',m} \right|$$
The advantages of the device used for decoding low density parity check codes using a decoder in accordance with this invention are as follows:
(a) The check node update is replaced by a selection of the minimum input value.
(b) Only two magnitudes need to be saved for each parity check equation.
(c) There is no need to estimate the noise variance to compute the intrinsic information.
The simplicity of the device is associated with a performance penalty in terms of bit error rate. It is known that the performance penalty is due to
over-estimation of the extrinsic information compared to the process known in the prior art. Hence, compensation in terms of subtraction (offset) or multiplication (normalization) is implemented. A value of 0.75 is chosen as the multiplication factor for the variable node outputs, which provides comparable or better performance than the process known in the prior art. This factor can be simply implemented in the VCUs (22) by multiplying the output by 0.5 and 0.25 and adding the results to get the compensated values.
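As a hedged illustration of the shift-and-add realization of the 0.75 factor (assuming an integer fixed-point LLR representation; this sketch is not part of the specification):

```python
def normalize_075(llr):
    """Compensate a variable node output by the factor 0.75 using only shifts
    and an add: 0.75*x = 0.5*x + 0.25*x. Exact for multiples of 4; otherwise
    the integer shifts truncate slightly."""
    return (llr >> 1) + (llr >> 2)

print(normalize_075(100))  # 75
print(normalize_075(-64))  # -48
```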
The semi-parallel method comprises the following steps:
Step 1: loading of the channel values, after converting them to LLRs, into the variable node memory banks. This step is serial processing.
Step 2: fetching the values from the variable node memory banks depending on the addresses of the variable node means (10) that are connected to the check node means (12). Since 96 separate memory banks are provided, 96 values may be fetched in parallel and given as input to the switching network.
Step 3: The switching network means (14) takes values from all the 96 variable node means (10) memory banks and performs the proper shifting operation depending on the addresses present in the variable node means (10) that are connected to the check node means (12). Switching is done in parallel on the 96 input values, and within a clock cycle it gives a shifted version of its input values. The switching network means (14) outputs 96 values, which are simply a shifted version of the 96 input values, and these are given to the 96 check node computational means (20).
Step 4: Each check node computational means (20) takes one input value at every clock cycle and performs the computation. If the degree of the check node means (12) which is under updation is "k", then the check node computational means
(20) takes "k" clock cycles to give its output. Each input value for an individual functional unit is given sequentially. Since 96 computational means are available, the 96 outputs of all the computational means can be obtained in "k" clock cycles. The output of each computational means consists of the first minimum among its inputs, the second minimum among its inputs, the first minimum address, the signs of all input values and the product of the signs of all input values.
The check node computational means (20) takes one input at a time sequentially in each clock cycle. Each check node computational means (20) fetches "k" T_n values from the variable node memory banks (each in one clock cycle) and one entire relevant check node means (12) value, which consists of the first minimum (min1), the second minimum (min2), the first minimum address (min1 address), the signs of all input values and the product of the signs of all input values. These can be used to generate the E_{n,m} values as follows.
The check node computational means (20) compares the min1 address with the variable node address from which the current T_n value has been fetched. If it is matched, then min2 is the actual magnitude of E_{n,m}; otherwise min1 is considered as the actual magnitude. Its sign is obtained by multiplying the overall sign with the sign information of the previous iteration. With this it finds +min1, -min1, +min2 or -min2. To start with, all the check node means (12) memory bank values are initialized to zeros (zero corresponds to a positive sign). Separate buses are provided to the check node computational means (20) for fetching the T_n and E_{n,m} values from the variable node means (10) and check node means (12) memory banks respectively. So both T_n and
E_{n,m} can be fetched in parallel. At each clock cycle, the E_{n,m} value is subtracted from T_n to get T_{n,m}:
$$T_{n,m} = T_n - E_{n,m}$$
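The following sketch (illustrative only; variable node addresses are simplified to edge indices, and the names are ours) shows how a check node unit could accumulate min1, min2, the min1 address and the sign information, and how an E_{n,m} value and the corresponding T_{n,m} = T_n - E_{n,m} could then be recovered from that compressed state:

```python
def check_node_unit(T_values):
    """Sequentially consume the k incoming T_n values of one check node and
    return the compressed state written to the check node memory bank:
    (min1, min2, index of min1, individual signs, product of all signs)."""
    min1, min2, min1_index = float("inf"), float("inf"), -1
    signs, overall_sign = [], 1
    for idx, t in enumerate(T_values):
        sign = 1 if t >= 0 else -1
        signs.append(sign)
        overall_sign *= sign
        mag = abs(t)
        if mag < min1:
            min1, min2, min1_index = mag, min1, idx
        elif mag < min2:
            min2 = mag
    return min1, min2, min1_index, signs, overall_sign

def extrinsic_from_state(state, edge_index):
    """Recover E[n,m] for one edge: use min2 if this edge supplied min1,
    otherwise min1; the sign is the overall sign times this edge's own sign."""
    min1, min2, min1_index, signs, overall_sign = state
    magnitude = min2 if edge_index == min1_index else min1
    return (overall_sign * signs[edge_index]) * magnitude

# T[n,m] = T_n - E[n,m]: the message fed back to the check node excludes that
# check node's own previous contribution.
state = check_node_unit([0.7, -0.3, 1.1, -0.9])
T_n = 0.7
print(T_n - extrinsic_from_state(state, 0))
```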
Step 5: The outputs of all 96 computational units are written in parallel into the 96 check node means (12) memory banks. This completes the updation of one set of 96 check node means (12). These 96 check node computational means (20) are operated repeatedly 12 times to complete the updation of 12 sets of 96 check nodes (12*96 = 1152), completing the check node processing (first half iteration). It takes at most k*12 clock cycles.
Step 6: After the completion of the check node processing, the next step is to fetch the values from the check node means (12) memory banks, which are required for variable node processing. As in the check node processing, in the variable node processing too, 96 values are fetched in parallel from the 96 check node means (12) memory banks and are given to the switching network means (14).
Step 7: The outputs of the switching network means (14) (shifted versions of the input values) are given to the variable node computational means (22).
Step 8: Each individual variable node computational means (22) takes each input sequentially, performs the computation and gives its output in "x" clock cycles, where "x" is the degree of the variable node that is under updation. The variable node computational means (22) takes "x" input values from the check node memory banks sequentially (one at a time). It
derives +min1, -min1, +min2 or -min2 from the check node memory values in the same way as explained in the check node processing, i.e. the variable node processor compares the min1 address with the variable node address which is under updation. If it matches, the magnitude of E_{n,m} is considered as min2; otherwise it is min1. The sign of E_{n,m} is obtained by multiplying the overall sign with the relevant individual sign information in the check node means (12) memory.
The obtained E_{n,m} values are accumulated with the intrinsic information of that variable node means (10), and the result is stored in the variable node memory bank. Writing the result of the variable node computational means (22) into the corresponding variable node memory location represents the updation of that particular variable node means (10).
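A corresponding sketch of the variable node update (again illustrative and not from the specification, with edge indices standing in for the addresses used above): the stored check node state is expanded back into E_{n,m} values and accumulated with the intrinsic LLR. The 0.75 compensation described earlier would be applied to the output of this unit.

```python
def variable_node_unit(intrinsic_llr, cn_states, edge_indices):
    """Accumulate the E[n,m] values recovered from all x connected check node
    states with the intrinsic LLR of this variable node; the result is written
    back to the variable node memory bank (before the 0.75 compensation)."""
    total = intrinsic_llr
    for state, edge_index in zip(cn_states, edge_indices):
        min1, min2, min1_index, signs, overall_sign = state
        magnitude = min2 if edge_index == min1_index else min1
        total += (overall_sign * signs[edge_index]) * magnitude
    return total

# Example with two connected check nodes, whose states are in the
# (min1, min2, min1_index, signs, overall_sign) form produced by a check node unit:
states = [(0.3, 0.5, 1, [1, -1, 1], -1), (0.2, 0.4, 0, [-1, 1], -1)]
print(variable_node_unit(0.8, states, [0, 1]))  # 0.8 - 0.3 - 0.2
```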
Step 9: Since 96 such computational means are provided in the architecture, the outputs of all 96 variable node computational means (22) are written in parallel into the 96 variable node memory banks.
Step 10: These 96 computational means are operated 24 times to complete the variable node processing (second half iteration).
Step 11: The check node processing and variable node processing are repeated for the maximum number of iterations.
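To tie the steps together, the following schematic sketch (placeholder bodies only; the maximum iteration count is an assumed value, since the specification leaves the stopping criterion open) shows the half-iteration schedule for rate 1/2 and N = 2304:

```python
Z = 96                 # number of parallel node computation units
CHECK_NODE_SETS = 12   # 12 * 96 = 1152 check nodes (rate 1/2, N = 2304)
VAR_NODE_SETS = 24     # 24 * 96 = 2304 variable nodes
MAX_ITERATIONS = 20    # assumed value; the specification leaves the stopping criterion open

def process_check_node_set(index):
    """Placeholder: load the bank selector / location pointer entries for this
    set and run the 96 check node computational units."""
    pass

def process_variable_node_set(index):
    """Placeholder: fetch 96 values from the check node memory banks and run
    the 96 variable node computational units."""
    pass

for _ in range(MAX_ITERATIONS):
    for i in range(CHECK_NODE_SETS):    # first half iteration (Steps 2-5)
        process_check_node_set(i)
    for i in range(VAR_NODE_SETS):      # second half iteration (Steps 6-10)
        process_variable_node_set(i)
```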
While considerable emphasis has been placed herein on the various components of the preferred embodiment, it will be appreciated that many alterations can be made and that many modifications can be made in the preferred embodiment without departing from the principles of the invention. These and other changes in the preferred embodiment as well as other embodiments of the invention will be apparent to those skilled in the
art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
Dated this 19th day of April 2006.
MOHAN DEWAN
Of R. K. Dewan & Co.
Applicant’s Patent Attorney