Abstract: A COMPUTER SYSTEM FOR PROVIDING CACHE MEMORY MANAGEMENT
FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
COMPLETE SPECIFICATION
[See Section 10]
"A COMPUTER SYSTEM FOR CACHE MEMORY MANAGEMENT"
INTEL CORPORATION, a Delaware corporation, of 2200 Mission College Boulevard, Santa Clara, California 95052, U.S.A.,
The following specification particularly describes the nature of the invention and the manner in which it is to be performed:-
WO 99/50752 PCT/US99/06501
SHARED CACHE STRUCTURE FOR TEMPORAL AND NON-TEMPORAL INSTRUCTIONS
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates in general to the field of processors, and in particular, to a technique of providing a shared cache structure for temporal and non-temporal instructions.
2. Description of the Related Art
The use of a cache memory with a processor facilitates the reduction of memory access time. The fundamental idea of cache organization is that by keeping the most frequently accessed instructions and data in the fast cache memory, the average memory access time will approach the access time of the cache. To achieve the maximum possible speed of operation, typical processors implement a cache hierarchy, that is, different levels of cache memory. The different levels of cache correspond to different distances from the processor core. The closer the cache is to the processor, the faster the data access. However, the faster the data access, the more costly it is to store data. As a result, the closer the cache level, the faster and smaller the cache.
The performance of cache memory is frequently measured in terms of its hit ratio. When the processor refers to memory and finds the word in cache, it is said to produce a hit. If the word is not found in cache, then it is in main memory and it counts as a miss. If a miss occurs, then an allocation is made at the entry indexed by the access. The access can be for loading data to the processor or storing data from the processor.
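The hit-ratio figure of merit described above can be illustrated with a short calculation. The following sketch is illustrative only and not part of the specification; the latency numbers are hypothetical:

```python
def average_access_time(hit_ratio, cache_ns, memory_ns):
    """Average memory access time: hits are served at cache speed,
    misses pay the main-memory latency."""
    return hit_ratio * cache_ns + (1.0 - hit_ratio) * memory_ns

# Hypothetical latencies: 2 ns cache, 50 ns main memory, 95% hit ratio.
print(average_access_time(0.95, 2.0, 50.0))  # ≈ 4.4 ns
```

As the hit ratio approaches 1, the average approaches the cache access time, which is the motivation given above for keeping the most frequently accessed instructions and data in the cache.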
Figures 8A - 8D illustrate another example of the organization of a cache memory prior to and after a non-temporal instruction hits way 2 of cache set 0, according to one embodiment of the present invention.
Figures 9A and 9B illustrate one example of the organization of a cache memory prior to and after a temporal instruction miss to cache set 0, according to one embodiment of the present invention.
Figures 10A - 10B illustrate an example of the organization of a cache memory prior to and after a non-temporal instruction miss to cache set 0, according to one embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
A technique is described for providing management of cache memories, in which cache allocation is determined by data utilization. In the following description, numerous specific details are set forth, such as specific memory devices, circuit diagrams, processor instructions, etc., in order to provide a thorough understanding of the present invention. However, it will be appreciated by one skilled in the art that the present invention may be practiced without these specific details. In other instances, well known techniques and structures have not been described in detail in order not to obscure the present invention. It is to be noted that a particular implementation is described as a preferred embodiment of the present invention; however, it is readily understood that other embodiments can be designed and implemented without departing from the spirit and scope of the present invention. Furthermore, it is appreciated that the present invention is described in reference to a serially arranged cache hierarchy system, but it need not be limited strictly to such a hierarchy.
Referring to Figure 1, a typical computer system is shown, wherein a processor 10, which forms the central processing unit (CPU) of the computer system, is coupled to a main memory 11 by a bus 14. The main memory 11 is typically comprised of random-access memory and is usually referred to as RAM. The main memory 11 is in turn generally coupled to a mass storage device 12, such as a magnetic or optical memory device, for mass storage (or saving) of information. A cache memory 13 (hereinafter also referred to simply as cache) is coupled to the bus 14 as well. The cache 13 is shown located between the CPU 10 and the main memory 11, in order to exemplify the functional utilization and transfer of data associated with the cache 13. It is appreciated that the actual physical placement of the cache 13 can vary depending on the system and the processor architecture. Furthermore, a cache controller 15 is shown coupled to the cache 13 and the bus 14 for controlling the operation of the cache 13. The operation of a cache controller, such as the controller 15, is known in the art and, accordingly, cache controllers are not illustrated in the subsequent Figures. It is presumed that some controller(s) is/are present under control of the CPU 10 to control the operation of the cache(s) shown.
In operation, information transfer between the memory 11 and the CPU 10 is achieved by memory accesses from the CPU 10. When cacheable data is currently, or is shortly to be, accessed by the CPU 10, that data is first allocated in the cache 13. That is, when the CPU 10 accesses given information from the memory 11, it seeks the information from the cache 13. If the accessed data is in the cache 13, a "hit" occurs. Otherwise, a "miss" results and cache allocation for the data is sought. As currently practiced, most accesses (whether load or store) require allocation in the cache 13. Only uncacheable accesses are not allocated in the cache.
Referring to Figure 2, a computer system implementing a multiple cache arrangement is shown. The CPU 10 is still coupled to the main memory 11 by the bus 14, and the memory 11 is in turn coupled to the mass storage device 12. However, in the example of Figure 2, two separate cache memories 21 and 22 are shown. The caches 21-22 are arranged serially and each is representative of a cache level, referred to as Level 1 (L1) cache and Level 2 (L2) cache, respectively. Furthermore, the L1 cache 21 is shown as part of the CPU 10, while the L2 cache 22 is shown external to the CPU 10. This structure exemplifies the current practice of placing the L1 cache on the processor chip while lower level caches are placed external to it, where the lower level caches are further from the processor core. The actual placement of the various cache memories is a design choice or is dictated by the processor architecture. Thus, it is appreciated that the L1 cache could be placed external to the CPU 10.
Generally, CPU 10 includes an execution unit 23, register file 24 and fetch/decoder unit 25. The execution unit 23 is the processing core of the CPU 10 for executing the various arithmetic (or non-memory) processor instructions. The register file 24 is a set of general purpose registers for storing (or saving) various information required by the execution unit 23. There may be more than one register file in more advanced systems. The fetch/decoder unit 25 fetches instructions from a storage location (such as the main memory 11) holding the instructions of a program that will be executed and decodes these instructions for execution by the execution unit 23. In more advanced processors utilizing pipelined architecture, future instructions are prefetched and decoded before the instructions are actually needed so that the processor is not idle waiting for the instructions to be fetched when needed.
The various units 23-25 of the CPU 10 are coupled to an internal bus structure 27. A bus interface unit (BIU) 26 provides an interface for coupling the various units of CPU 10 to the bus 14. As shown in Figure 2, the L1 cache is coupled to the internal bus 27 and functions as an internal cache for the CPU 10. However, again it is to be emphasized that the L1 cache could reside outside of the CPU 10 and be coupled to the bus 14. The caches can be used to cache data, instructions or both. In some systems, the L1 cache is actually split into two sections, one section for caching data and one section for caching instructions. However, for simplicity of explanation, the various caches described in the Figures are shown as single caches, with data, instructions and other information all referenced herein as data. It is appreciated that the operations of the units shown in Figure 2 are known. Furthermore, it is appreciated that the CPU 10 actually includes many more components than just the components shown. Thus, only those structures pertinent to the understanding of the present invention are shown in Figure 2. In one embodiment, the invention is utilized in systems having data caches. However, the invention is applicable to any type of cache.
It is also to be noted that the computer system may be comprised of more than one CPU (as shown by the dotted line in Figure 2). In such a system, it is typical for multiple CPUs to share the main memory 11 and/or mass storage unit 12. Accordingly, some or all of the caches associated with the computer system may be shared by the various processors of the computer system. For example, with the system of Figure 2, L1 cache 21 of each processor would be utilized by its processor only, but the main memory 11 would be shared by all of the CPUs of the system. In addition, each CPU has an associated external L2 cache 22.
The invention can be practiced in a single CPU computer system or in a multiple CPU computer system. It is further noted that other types of units (other than processors) which access memory can function equivalently to the CPUs described herein and, therefore, are capable of performing memory accessing functions similar to the described CPUs. For example, direct memory accessing (DMA) devices can readily access memory in a manner similar to the processors described herein. Thus, a computer system having one processor (CPU) but one or more such memory accessing units would function equivalently to the multiple processor system described herein.
As noted, only two caches 21-22 are shown. However, the computer system need not be limited to only two levels of cache. It is now a practice to utilize a third level (L3) cache in more advanced systems. It is also the practice to have a serial arrangement of cache memories, so that data cached in the L1 cache is also cached in the L2 cache. If there happens to be an L3 cache, then data cached in the L2 cache is typically cached in the L3 cache as well. Thus, data cached at a particular cache level is also cached at the cache levels further from the processor core.
Figure 3 is a block diagram illustrating one embodiment of the organizational structure of the cache memory in which the technique of the present invention is implemented. In general, there are "x" sets in a cache structure, "y" ways per set (where y > 2), and where each way contains one data entry or one cache line. The invention provides an LRU lock bit which indicates whether any one of the ways within that set contains non-temporal (NT) data. If so, the regular or pseudo LRU bits will be updated to point to the NT data. There are also "z" regular or pseudo LRU bits per set. Unless the LRU lock bit is set, the regular or pseudo LRU bits point to the way within the set in
accordance with the least recently used technique implemented. The number of regular or pseudo-LRU bits per set varies depending on the number of ways per set and the LRU (regular or pseudo) technique implemented.
In the embodiment as shown, the cache 50 is organized as a four-way set associative cache. In the example of Figure 3, each page is shown as being equal to one-fourth the cache size. In particular, the cache 50 is divided into four ways (for example, way 0 (52), way 1 (54), way 2 (56) and way 3 (58)) of equal size and main memory 11 (see also Figures 1 and 2) is viewed as divided into pages (e.g., page 0 - page n). In another embodiment, each page may be larger or smaller than the cache size. The organizational structure of cache 50 (as shown in Figure 3) may be implemented within the cache 13 of Figure 1, the L1 cache and/or L2 cache 22 of Figure 2.
The cache 50 also includes an array of least recently used (LRU) bits 60₀ - 60ₙ, each of which points to the way within a set holding the least recently used data (or the NT data, if a biased LRU technique is implemented). Such tracking is performed in accordance with an LRU technique under the control of the cache controller 15, to determine which cache entry to overwrite in the event that a cache set is full. The LRU logic (not shown) keeps track of the cache locations within a set that have been least recently used. In one embodiment, an LRU technique that strictly keeps track of the least recently used directory element may be implemented. In an alternate embodiment, a pseudo-LRU algorithm, which makes a best attempt at keeping track of the least recently used directory element, is implemented. For discussion purposes, the individual bits will be referred to as LRU bits 60₀ - 60ₙ, while the array as a whole will be referred to as LRU bits 60.
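The per-set state described above ("y" ways of data, regular or pseudo-LRU bits, and one LRU lock bit per set) can be sketched as a plain data structure. This is an illustrative reconstruction under the four-way organization of Figure 3, not the patented circuit; all names are hypothetical:

```python
from dataclasses import dataclass, field

WAYS = 4  # four-way set associative, as in the Figure 3 embodiment

@dataclass
class CacheSet:
    # One tag per way (None = empty); a real cache also stores the line data.
    tags: list = field(default_factory=lambda: [None] * WAYS)
    # Regular or pseudo-LRU state: the way the LRU bits currently point at.
    lru_way: int = 0
    # LRU lock bit: set when a way in this set holds non-temporal data,
    # in which case lru_way is biased to point at that way.
    lock: bool = False

@dataclass
class Cache:
    num_sets: int  # the "x" sets of the organization above
    sets: list = field(init=False)

    def __post_init__(self):
        self.sets = [CacheSet() for _ in range(self.num_sets)]
```

The number of pseudo-LRU bits per set would vary with the number of ways, as the text notes; a single `lru_way` index is used here purely for readability.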
The cache 50 further includes an array of LRU lock bits 70₀ - 70ₙ, each of which indicates whether any of the ways 52, 54, 56, 58 within the corresponding set contains data that should not pollute the cache 50 (i.e., data with infrequent usage), as described in detail in the following sections.
Figure 4 is a table illustrating the cache management technique in accordance with the principles of the present invention. The invention utilizes the array of LRU lock bits 70₀ - 70ₙ to indicate whether any of the corresponding cached data is streaming or non-temporal and, as such, would be the first entry to be replaced upon a cache miss to the corresponding set. In one embodiment, the LRU lock bit 70, when set to 1, indicates that the corresponding set has an entry that is non-temporal. If the LRU lock bit 70 is cleared, then upon a cache hit by a temporal instruction, the corresponding LRU bit(s) 60 is(are) updated in accordance with the LRU technique implemented (see item 1 of Figure 4), and the associated LRU lock bit is not updated. However, if the LRU lock bit 70 is already set to 1 (indicating that the corresponding set has a non-temporal entry), neither the LRU lock bit 70 nor the LRU bit 60 is updated (see item 2).
In the case of a cache hit by a non-temporal instruction, the LRU bit 60 and the LRU lock bit 70 are not updated, regardless of the status of the LRU lock bit 70 (see item 3). In an alternate embodiment, as controlled through a mode bit in a control register in the L1 cache controller, cache hits by streaming or non-temporal instructions force the LRU bits to the way that was hit (see item 4). In addition, the LRU lock bit 70 is set to 1. In this embodiment, the data hit by the streaming or non-temporal instruction will be the first to be replaced upon a cache miss to the corresponding set.
Upon a cache miss by a temporal instruction, the LRU lock bit is cleared and the LRU bit 60 is updated (item 5) based on a pseudo LRU
technique. However, upon a cache miss by a streaming or non-temporal instruction, the LRU lock bit 70 is set to 1 and the corresponding LRU bit 60 is not updated (item 6).
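The six cases just described (items 1-6 of the Figure 4 table) amount to a small state-update rule per set. The sketch below is one possible reading of that table, not the patented logic; `SetState` and `next_lru` are hypothetical stand-ins, with `next_lru` replacing whatever regular or pseudo-LRU update a given implementation uses:

```python
class SetState:
    """Per-set replacement state: (pseudo-)LRU pointer and LRU lock bit."""
    def __init__(self, ways=4):
        self.ways = ways
        self.lru_way = 0   # way the (pseudo-)LRU bits point at
        self.lock = False  # LRU lock bit

def next_lru(s):
    # Round-robin placeholder for the regular or pseudo-LRU update.
    return (s.lru_way + 1) % s.ways

def update_state(s, hit, non_temporal, hit_way=None, nt_hit_mode=0):
    """Apply items 1-6 of the Figure 4 table; nt_hit_mode models the
    mode bit in the L1 cache controller."""
    if hit and not non_temporal:
        if not s.lock:
            s.lru_way = next_lru(s)   # item 1: temporal hit, lock clear
        # item 2: lock set -> neither LRU bits nor lock bit updated
    elif hit and non_temporal:
        if nt_hit_mode == 1:          # item 4: force LRU bits to hit way
            s.lru_way = hit_way
            s.lock = True
        # item 3 (mode 0): no updates regardless of the lock bit
    elif not non_temporal:            # item 5: temporal miss
        s.lock = False
        s.lru_way = next_lru(s)
    else:                             # item 6: non-temporal miss
        s.lock = True                 # LRU bits not advanced
```

Because item 6 sets the lock without advancing the LRU bits, those bits continue to point at the way where the non-temporal line was just allocated, which is the biasing behavior the table is designed to produce.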
Examples of each of the items provided in the table of Figure 4 will now be discussed. Figures 5A and 5B illustrate one example of the organization of a cache memory prior to and after a temporal instruction hits way 2 of cache set 0. This example corresponds to item 1 of Figure 4. Here, LRU lock bit 70₀ had been previously cleared for cache set 0, and since cache set 0 was hit by a temporal instruction, the LRU lock bit 70₀ is not updated. However, the LRU bit 60₀ is updated in accordance with the LRU technique implemented. In the example, it is assumed that the pseudo-LRU technique indicates that way 3 is the least recently used entry.
Figures 6A and 6B illustrate another example of the organization of a cache memory prior to and after a temporal instruction hits way 2 of cache set 0. This example corresponds to item 2 of Figure 4. Here, LRU lock bit 70₀ had been previously set for cache set 0, indicating that the corresponding set contains non-temporal data. Accordingly, neither the LRU lock bit 70₀ nor the LRU bit 60₀ is updated.
Figures 7A - 7D illustrate an example of the organization of a cache memory prior to and after a non-temporal instruction hits way 2 of cache set 0. This example corresponds to item 3 of Figure 4 and may be implemented by setting a mode bit located in the L1 cache controller to zero (see Figure 4). In the first case (Figures 7A and 7B), LRU lock bit 70₀ had been previously cleared for cache set 0. In this embodiment, a non-temporal cache hit does not update the LRU lock bit 70. Accordingly, since cache set 0 was hit by a non-temporal instruction, neither the LRU lock bit 70₀ nor the LRU bit 60₀ is
updated. In the second case (Figures 7C and 7D), LRU lock bit 70₀ had been previously set for cache set 0, indicating that the corresponding set contains non-temporal data. Accordingly, neither the LRU lock bit 70₀ nor the LRU bit 60₀ is updated.
Figures 8A - 8D illustrate another example of the organization of a cache memory prior to and after a non-temporal instruction hits way 2 of cache set 0. This example corresponds to item 4 of Figure 4 and may be implemented by setting the mode bit located in the L1 cache controller to one (see Figure 4). In the first case (Figures 8A and 8B), LRU lock bit 70₀ had been previously cleared for cache set 0. In this alternate embodiment to that shown in Figures 7A - 7D, a non-temporal cache hit updates the LRU lock bit 70. Accordingly, as shown in Figure 8A, since cache set 0 was hit by a non-temporal instruction, the LRU lock bit 70₀ is updated (set to 1), as shown in Figure 8B. In addition, the LRU bits 60₀ are updated to indicate the way that was hit. In the case where LRU lock bit 70₀ had been previously set for cache set 0 (Figures 8C and 8D), the LRU lock bit 70₀ remains set to 1. In addition, the LRU bits 60₀ are forced to point to the way within the set that was hit.
Figures 9A and 9B illustrate one example of the organization of a cache memory prior to and after a temporal instruction miss to cache set 0. This example corresponds to item 5 of Figure 4. Here, LRU lock bit 70₀ had been previously set for cache set 0, and since there is a miss by a temporal instruction targeting set 0, the LRU lock bit 70₀ is cleared for that set upon allocating the temporal line in the cache. However, the LRU bit 60₀ is updated in accordance with the LRU technique implemented. In the example, the pseudo-LRU technique indicates that way 3 is the least recently used entry.
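The replacement behavior these examples walk through can be demonstrated end to end with a short sketch. The helper below is hypothetical and uses a trivial round-robin pointer in place of the real pseudo-LRU update; it only illustrates how the lock bit makes a non-temporal line its own replacement victim:

```python
class SetState:
    """Minimal per-set state: four ways of tags, LRU pointer, lock bit."""
    def __init__(self):
        self.tags = [None] * 4
        self.lru_way = 0
        self.lock = False

def fill_on_miss(s, tag, non_temporal):
    """Allocate a missed line at the way the LRU bits point to.
    A non-temporal miss sets the lock bit and leaves the pointer in
    place (item 6), so the NT line is itself the next victim; a
    temporal miss clears the lock and advances the pointer (item 5)."""
    victim = s.lru_way
    s.tags[victim] = tag
    if non_temporal:
        s.lock = True
    else:
        s.lock = False
        s.lru_way = (victim + 1) % len(s.tags)  # stand-in LRU update
    return victim

s = SetState()
fill_on_miss(s, "nt_line", non_temporal=True)        # NT line lands in way 0
way = fill_on_miss(s, "t_line", non_temporal=False)  # next miss evicts way 0
print(way)  # 0: the non-temporal line is replaced first
```

The non-temporal line is displaced before any of the temporal lines in the set, so streaming data passes through without polluting the rest of the cache.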
WE CLAIM:
1. A computer system for providing cache memory management
comprising:
a main memory means;
a processor coupled to said main memory means;
at least one cache memory coupled to said processor, said at least one cache memory having at least two cache ways each comprising a plurality of sets, each of said plurality of sets having a first bit indicative of whether one of said at least two cache ways contains non-temporal data;
wherein said processor accesses data from one of said main memory means or said at least one cache memory.
2. The computer system as claimed in Claim 1, wherein said at least one cache memory further comprises a second bit indicative of an order of a data entry in a corresponding way.
3. The computer system as claimed in Claim 2, wherein said order is indicative of whether said data entry is a least recently used entry with respect to other entries.
4. The computer system as claimed in Claim 1, wherein said first bit is set to indicate that an associated way contains non-temporal data.
5. The computer system as claimed in Claim 1, wherein said first bit is cleared to indicate that an associated way contains temporal data.
6. The computer system as claimed in Claim 2, further comprising cache control logic coupled to said at least one cache memory and said processor, for controlling said at least one cache memory.
7. The computer system as claimed in Claim 6, wherein said processor receives an instruction for accessing data, said processor determining if said data is located in said at least one cache memory, if so, accessing said data from said at least one cache memory, otherwise, accessing said data from said main memory means.
8. The computer system as claimed in Claim 7, wherein if said data is accessed from said at least one cache memory, said cache control logic determines if said data is temporal, if so, updating an order of said second bit corresponding to said way that is being accessed, otherwise leaving said order unchanged.
9. The computer system as claimed in Claim 8, wherein said first bit corresponding to said way is unchanged.
10. The computer system as claimed in Claim 7, wherein if said data is accessed from said at least one cache memory, said cache control logic configures said first bit to indicate that said accessed data is non-temporal, said cache control logic further updating said order of said second bit.
11. The computer system as claimed in Claim 7, wherein if said data is accessed from said main memory, said cache control logic determines if said data is non-temporal, if so, configuring said first bit to indicate that said accessed data is non-temporal, said cache control logic leaving unchanged said order of said second bit.
12. The computer system as claimed in Claim 11, wherein if said cache control logic determines that said data is temporal, said cache control logic configures said first bit to indicate that said accessed
data is temporal, said cache control logic updating said order of said second bit.
13. In a computer system, a method of allocating cache memories based
on a pattern of accesses for data utilized by a processor, comprising:
providing a main memory means;
providing a processor coupled to said main memory means;
providing at least one cache memory coupled to said processor, said at least one cache memory having at least two cache ways each comprising a plurality of sets, each of said plurality of sets having a first bit indicative of whether one of said at least two cache ways contains non-temporal data,
accessing, by said processor, data from one of said main memory means or said at least one cache memory.
14. The method as claimed in Claim 13, wherein said at least one cache memory further comprises a second bit indicative of an order of a data entry in a corresponding way.
15. The method as claimed in Claim 14, wherein said order is indicative of whether said data entry is a least recently used entry with respect to other entries.
16. The method as claimed in Claim 13, wherein said first bit is set to indicate that an associated way contains non-temporal data.
17. The method as claimed in Claim 13, wherein said first bit is cleared to indicate that an associated way contains temporal data.
18. The method as claimed in Claim 14, further comprising providing a cache control logic coupled to said at least one cache memory and said processor, for controlling said at least one cache memory.
19. The method as claimed in Claim 18, wherein said processor receives an instruction for accessing data, said processor determining if said data is located in said at least one cache memory, if so, accessing said data from said at least one cache memory, otherwise, accessing said data from said main memory means.
20. The method as claimed in Claim 19, wherein if said data is accessed from said at least one cache memory, said cache control logic determines if said data is temporal, if so, updating an order of said second bit corresponding to said way that is being accessed, otherwise leaving said order unchanged.
21. The method as claimed in Claim 19, wherein said first bit corresponding to said way is unchanged.
22. The method as claimed in Claim 19, wherein if said data is accessed from said at least one cache memory, said cache control logic configures said first bit to indicate that said accessed data is non-temporal, said cache control logic further updating said order of said second bit.
23. The method as claimed in Claim 19, wherein if said data is accessed from said main memory means, a cache control logic determines if said data is non-temporal, if so, configuring said first bit to indicate that said accessed data is non-temporal, said cache control logic leaving unchanged said order of said second bit.
24. The method as claimed in Claim 23, wherein if said cache control logic determines that said data is temporal, said cache control logic configures said first bit to indicate that said accessed data is temporal, said cache control logic updating said order of said second bit.
Dated this 8th day of September, 2000
(NALINI KRISHNAMURTI)
Of REMFRY & SAGAR
ATTORNEY FOR THE APPLICANTS
| # | Name | Date |
|---|---|---|
| 1 | abstract1.jpg | 2018-08-08 |
| 2 | IN-PCT-2000-00382-MUM-FORM-4-01-04-2011.pdf | 2011-04-01 |
| 3 | in-pct-2000-00382-mum-cancelled pages(28-6-2004).pdf | 2018-08-08 |
| 4 | IN-PCT-2000-00382-MUM-CORRESPONDENCE(RENEWAL PAYMENT LETTER)-(01-04-2011).pdf | 2011-04-01 |
| 5 | Form 27 [31-03-2017(online)].pdf | 2017-03-31 |
| 6 | IN-PCT-2000-00382-MUM-RELEVANT DOCUMENTS [30-03-2018(online)].pdf | 2018-03-30 |
| 7 | in-pct-2000-00382-mum-claims(granted)-(28-6-2004).pdf | 2018-08-08 |
| 8 | in-pct-2000-00382-mum-power of authority(28-6-2004).pdf | 2018-08-08 |
| 9 | in-pct-2000-00382-mum-correspondence(23-7-2004).pdf | 2018-08-08 |
| 10 | in-pct-2000-00382-mum-petition under rule 138(28-6-2004).pdf | 2018-08-08 |
| 11 | in-pct-2000-00382-mum-correspondence(ipo)-(23-1-2003).pdf | 2018-08-08 |
| 12 | in-pct-2000-00382-mum-petition under rule 137(28-6-2004).pdf | 2018-08-08 |
| 13 | IN-PCT-2000-00382-MUM-CORRESPONDENCE(RENEWAL PAYMENT LETTER)-(24-2-2012).pdf | 2018-08-08 |
| 14 | in-pct-2000-00382-mum-form-pct-isa-210(12-9-2000).pdf | 2018-08-08 |
| 15 | in-pct-2000-00382-mum-drawing(28-6-2004).pdf | 2018-08-08 |
| 16 | in-pct-2000-00382-mum-form 1(8-9-2000).pdf | 2018-08-08 |
| 17 | in-pct-2000-00382-mum-form-pct-ipea-409(12-9-2000).pdf | 2018-08-08 |
| 18 | in-pct-2000-00382-mum-form 13(28-6-2004).pdf | 2018-08-08 |
| 19 | in-pct-2000-00382-mum-form 5(8-9-2000).pdf | 2018-08-08 |
| 20 | in-pct-2000-00382-mum-form 1a(28-6-2004).pdf | 2018-08-08 |
| 21 | in-pct-2000-00382-mum-form 4(25-3-2004).pdf | 2018-08-08 |
| 22 | in-pct-2000-00382-mum-form 3(8-9-2000).pdf | 2018-08-08 |
| 23 | in-pct-2000-00382-mum-form 2(granted)-(28-6-2004).pdf | 2018-08-08 |
| 24 | in-pct-2000-00382-mum-form 3(28-6-2004).pdf | 2018-08-08 |