
"Method, System And Apparatus For Main Memory Access Subsystem Usage To Different Partitions In A Socket With Sub Socket Partitioning"

Abstract: A proposal for enabling sub-socket partitioning is discussed that facilitates access among a plurality of partitions to a shared resource.  A round robin arbitration policy is discussed to allow each partition, within a socket, that may utilize a different operating system, access to the shared resource based at least in part on whether an assigned bandwidth parameter for each partition is consumed. The proposal includes support for virtual channels.


Patent Information

Application #
Filing Date
20 December 2007
Publication Number
28/2009
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2018-11-13
Renewal Date

Applicants

INTEL CORPORATION
2200 MISSION COLLEGE BOULEVARD, SANTA CLARA, CALIFORNIA 95052, USA

Inventors

1. HARIKUMAR, AJAY
413 GOLF MANOR, NAL WIND TUNNEL ROAD, MURUGESHPALYA, BANGALORE, 560017,INDIA
2. THOMAS, TESSIL
# 29 5TH CROSS NARAYANAPPA BLOCK, BENSON TOWN, BANGALORE, KARNATAKA, 560046,INDIA
3. PUTHUR SIMON BIJU
SRINIDHI SIGNATURE, FLAT # 304, 7TH A CROSS, L.B SHASTHRY NAGAR VINMANAPURA, HAL, BANGALORE 560017,INDIA

Specification

METHOD, SYSTEM AND APPARATUS FOR MAIN MEMORY ACCESS SUBSYSTEM USAGE TO DIFFERENT PARTITIONS IN A SOCKET WITH SUB-SOCKET PARTITIONING
The present application is related to and may incorporate embodiments from three concurrently filed applications by the same set of inventors. The first application, attorney docket P26183, is titled "Method, Apparatus, and System for shared cache usage to different partitions in a socket with sub-socket partitioning", serial number XXXXXXX. The second application, attorney docket P26281, is titled "Method, System, and Apparatus for Usability Management in a System with sub-socket Partitioning", serial number XXXXXXX. The third application, attorney docket P26282, is titled "Method, System, and Apparatus for Memory address mapping scheme for sub-socket Partitioning", serial number XXXXXXX.
Field
Embodiments of the invention relate to the field of partitioning and, according to one embodiment, to a method, apparatus, and system for main memory access subsystem usage to different partitions in a socket with sub-socket partitioning.
General Background
As modern microprocessors become increasingly faster with a growing number of cores, it becomes feasible from a performance viewpoint to run multiple operating systems on the same hardware. This ability opens up many possibilities, including server consolidation and the ability to run service operating systems in parallel with the main operating system. Providing this ability can be done either in software or in hardware. In software it is done using virtualization mechanisms, by running a Virtual Machine Monitor (VMM) underneath the operating systems. Present software partitioning schemes partition only down to a socket granularity, which precludes partitioning down to a particular core within the processor or socket.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention.
FIG. 1 is an exemplary block diagram of a dual processor system in accordance with an embodiment of the invention.
FIG. 2 is an exemplary block diagram of a multi-processor system in accordance with an embodiment of the invention.
FIG. 3 is an exemplary embodiment of architectures for home and caching agents of the systems of FIGs. 1-2 in accordance with an embodiment of the invention.
FIG. 4 is a socket architecture in accordance with an embodiment of the invention.
FIG. 5 is a method for a partition flow for a time period in accordance with an embodiment of the invention.
FIG. 6 is a block diagram in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
In one embodiment, at least two different operating systems may operate within each socket, such that one or more cores are running different operating systems. Hence, "sub-socket partitioning" allows multiple partitions to utilize different operating systems within each socket. The claimed subject matter facilitates main memory access subsystem usage to different partitions in a socket with sub-socket partitioning.
In the following description, certain terminology is used to describe features of the invention. For example, the term "device" or "agent" is general and may be used to describe any electrical component coupled to a link. A "link or interconnect" is generally defined as an information-carrying medium that establishes a communication pathway for messages, namely information placed in a predetermined format. The link or interconnect may be a wired physical medium (e.g., a bus, one or more electrical wires, trace, cable, etc.) or a wireless medium (e.g., air in combination with wireless signaling technology).
The term "home agent" is broadly defined as a device that provides resources for a caching agent to access memory and, based on requests from the caching agents, can resolve conflicts, maintain ordering and the like. The home agent includes a tracker and data buffer(s) for each caching agent as described below. A "tracker" is dedicated storage for memory requests from a particular device. For instance, a first tracker may include a plurality of entries associated with a first caching agent while a second tracker may include other entries associated with a second caching agent. According to one embodiment of the invention, the "caching agent" is generally a cache controller that is adapted to route memory requests to the home agent.
The term "logic" is generally defined as hardware and/or software that perform one or more operations, such as controlling the exchange of messages between devices. When deployed in software, such software may be executable code such as an application, a routine or even one or more instructions. Software may be stored in any type of memory or suitable storage medium, such as (i) any type of disk including floppy disks, magneto-optical disks and optical disks such as compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs) and digital versatile disks (DVDs), (ii) any type of semiconductor device such as read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), flash memories and electrically erasable programmable read-only memories (EEPROMs), (iii) magnetic or optical cards, or (iv) any other type of media suitable for storing electronic instructions.
In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
I. Exemplary System Architecture
Referring to FIG. 1, an exemplary block diagram of a system in accordance with one embodiment of the invention is shown. Herein, Figure 1 depicts a dual processor (DP) configuration with processors 110 and 150. For instance, this configuration may be associated with a desktop or mobile computer, a server, a set-top box, personal digital assistant (PDA), alphanumeric pager, cellular telephone, or any other type of wired or wireless communication device.
Each processor 110 and 150 includes a memory controller (MC) 115 and 155 to enable direct communications with an associated memory 120 and 160 via links 125 and 165, respectively. Moreover, the memories 120 and 160 may be independent memories or portions of the same shared memory.
As specifically shown in FIG. 1, processors 110 and 150 are coupled to an input/output hub (IOH) 180 via point-to-point links 130 and 170, respectively. IOH 180 provides connectivity between processors 110 and 150 and input/output (I/O) devices implemented within DP system 100. In addition, processors 110 and 150 are coupled to each other via a point-to-point link 135. According to one embodiment of the invention, these point-to-point links 130, 135 and 170 may be adapted to operate in accordance with the "QuickPath" specification developed by Intel Corporation of Santa Clara, California. However, the claimed subject matter is not limited to a QuickPath link and may utilize any type of link or interconnect. One skilled in the art appreciates the utilization of any link or interconnect scheme that is customized for the particular design requirements. For example, one may use any coherent or non-coherent link or interconnect protocol, such as, but not limited to, Peripheral Component Interconnect (PCI, PCIe, etc.), a front side bus (FSB), etc.
Referring now to FIG. 2, an exemplary block diagram of a multiprocessor (MP) system in accordance with one embodiment of the invention is shown. Similarly, the MP system may be a desktop or mobile computer, a server, a set-top box, personal digital assistant (PDA), alphanumeric pager, cellular telephone, or any other type of wired or wireless communication device.
Herein, according to one embodiment of the invention, the MP system comprises a plurality of processors 210A-210D. One or more of the processors, such as processors 210A-210D, may include a memory controller (MC) 220A-220D. These memory controllers 220A-220D enable direct communications with associated memories 230A-230D via links 240A-240D, respectively. In particular, as shown in FIG. 2, processor 210A is coupled to memory 230A via a link 240A while processors 210B-210D are coupled to corresponding memories 230B-230D via links 240B-240D, respectively.
Additionally, processor 210A is coupled to each of the other processors 210B-210D via pTp (point-to-point) links 250, 252 and 254. Similarly, processor 210B is coupled to processors 210A, 210C and 210D via pTp links 250, 256 and 258. Processor 210C is coupled to processors 210A, 210B and 210D via pTp links 252, 256 and 260. Processor 210D is coupled to processors 210A, 210B and 210C via pTp links 254, 258 and 260. Processors 210A and 210B are coupled via pTp interconnects 270 and 272 to a first input/output hub (IOH) 280 while processors 210C and 210D are coupled via point-to-point interconnects 274 and 276 to a second IOH 285.
For both systems 100 and 200 described in FIGs. 1 and 2, it is contemplated that the processors may be adapted to operate as a home agent, a caching agent or both, depending on the system architecture selected.
Referring now to FIG. 3, an exemplary embodiment of architectures for destination and source devices of the systems of FIGs. 1-2 in accordance with an embodiment of the invention is shown. For illustrative purposes, processor 210D from Figure 2 (or processor 150 from Figure 1) is configured as a destination device 300, such as a home agent for example. Processors 210A-210C from Figure 2 (or processor 110 from Figure 1) could be configured as sources 310A-310C, such as caching agents for example. IOH 280 or 285 (or IOH 180 of FIG. 1), which may be configured as I/O device 310D implementing a write cache 320, operates as a caching agent as well.
As described below, each source 310A, ..., or 310D is associated with a tracker that is maintained at destination device 300 and has a predetermined number of tracker entries. The number of tracker entries is limited in size to the number of requests that may be transmitted by any source 310A, ..., or 310D that saturates the bandwidth of a PTP fabric 315, which supports point-to-point communications between destination 300 and the plurality of sources (e.g., sources 310A-310D).
As shown in FIG. 3, according to this embodiment of the invention, destination 300 is a home agent that comprises home logic 325 and a plurality of trackers 330-1...330-M, where M>1. In combination with trackers 330-1...330-M, home logic 325 is adapted to operate as a scheduler to assist in the data transfer of incoming information from memory 230A of FIG. 2 and outgoing information to PTP fabric 315. Moreover, home logic 325 operates to resolve conflicts between these data transfers.
Herein, for this embodiment of the invention, since four (4) caching agents 310A-310D are implemented within system 100/200, four (M=4) trackers are illustrated and labeled "HT-0" 330A, "HT-1" 330B, "HT-2" 330C and "HT-3" 330D. These trackers 330A-330D each contain N0, N1, N2 and N3 tracker entries respectively, where Ni ≥ 1 (i = 0, 1, 2 or 3). The number of entries (N0-N3) may differ from one tracker to another. Associated with each entry of trackers 330A-330D is a corresponding data buffer represented by data buffers 340A-340D. Data buffers 340A-340D provide temporary storage for data returned from memory controller 220A, and eventually scheduled onto PTP fabric 315 for transmission to a targeted destination. The activation and deactivation of the entries for trackers 330A-330D is controlled by home logic 325, described below.
Caching agents 310A, 310B, and 310C include a miss address queue 350A, 350B, and 350C, respectively. For instance, with respect to caching agent 310A, miss address queue 350A is configured to store all of the miss transactions that are handled by home agent 300.
In addition, according to this embodiment of the invention, caching agents 310A, 310B and 310C further include a credit counter 360A, 360B and 360C, respectively. Each credit counter 360A, 360B, and 360C maintains a count value representative of the number of unused tracker entries in trackers 330A, 330B, and 330C. For instance, when a new transaction is issued by caching agent 310A to home agent 300, credit counter 360A is decremented. If a transaction completes, then credit counter 360A is incremented. At reset time, credit counter 360A is initialized to the pool size, equal to the number of tracker entries (N0) associated with tracker 330A. The same configuration is applicable to credit counters 360B-360C.
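The credit scheme described above can be sketched in software. This is an illustrative model only, not the patented hardware: the class and method names are hypothetical, and a real implementation would be a counter in the caching agent's logic rather than Python code.

```python
class CreditCounter:
    """Illustrative model of a caching agent's credit counter.

    The count value represents unused tracker entries at the home
    agent. At reset it equals the tracker's pool size (e.g. N0).
    """

    def __init__(self, pool_size):
        # Initialized at reset to the number of tracker entries.
        self.credits = pool_size

    def issue(self):
        # Issuing a new transaction consumes one tracker entry.
        if self.credits == 0:
            return False  # no free entry: the request must wait
        self.credits -= 1
        return True

    def complete(self):
        # A completed transaction frees its tracker entry.
        self.credits += 1
```

With a pool size of N0, the counter guarantees that the caching agent never has more than N0 transactions outstanding at the home agent.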
Also shown in FIG. 3 is an example of caching agent 310D operating as an I/O agent that reads information from memory and writes information to an I/O interface. Alternately, caching agent 310D may stream I/O agent read returns as writes into the main memory. Caching agent 310D implements write cache 320, which is used to sustain high bandwidth while storing data associated with I/O operations.
FIG. 4 is a socket architecture in accordance with an embodiment of the invention. In one embodiment, a dual processor system as depicted in the previous figures has each processor socket containing processor cores 402. In one embodiment, at least two different operating systems may operate within each socket, such that one or more cores are running different operating systems. In this embodiment, a partition identifier is assigned to each partition. The cores and the distributed LLC (Last Level Cache) banks 408 are connected to each other within the socket by a first level interconnect 403. In one embodiment, the first level interconnect 403 is an on-die ring interconnect. In another embodiment, the first level interconnect is a two-dimensional mesh/crossbar. The memory controller 406 is integrated into the processor die and a pTp protocol is used for inter-processor communication and IO access. The fabric interfaces 410 and the home agent 404 are also connected to the first level interconnect. The home agents 404 and the fabric interfaces 410 are connected to each other via a second level interconnect 409. In summary, in one embodiment, the first level interconnect may be used to connect the cache memory, home agents and the off-chip links to the processor cores, and the second level interconnect is used for connecting the home agent directly to the off-chip links. However, the claimed subject matter is not limited to the previous configuration. One skilled in the art appreciates utilizing different configurations to facilitate communication for a particular application or power management scheme.
FIG. 5 is a method for a partition flow for a time period in accordance with an embodiment of the invention. In one embodiment, an epoch signifies a time which is chosen as a new origin for time measurements. In one embodiment, at least two different operating systems may operate within each socket, such that one or more cores are running different operating systems. Hence, "sub-socket partitioning" allows multiple partitions to run different operating systems within each socket. The claimed subject matter facilitates main memory access subsystem usage to different partitions in a socket with sub-socket partitioning.
In one embodiment, information from each partition is metered and depending on the bandwidth consumed and bandwidth allocated for each partition, arbitration priority will be switched between partitions. In this embodiment, time is divided into epochs and each partition is allocated a certain number of cycles of access to the shared resource in each epoch. In this embodiment, the priority among partitions during arbitration keeps changing in a round robin fashion as long as each partition still has allocated cycles left in the epoch. Once a partition has used up its allocated cycles, it will have lower priority in arbitration than those which have not yet used up their allocated cycles. The priority among partitions that still have allocated cycles left will keep changing in a round robin fashion. The priority among partitions that have used up their allocated cycles also will keep changing in a round robin fashion. Therefore, a measurable parameter could be the allocated cycles. Consequently, the allocated cycle service parameter defines the bandwidth allocated to each partition.
In summary, when some partitions have used up their allocated cycles and requests from multiple partitions are pending, higher priority will be given to those partitions that still have allocated cycles left. If every partition has used up its allocated cycles, then the priority among them keeps changing in a round robin fashion, as an anti-starvation policy. In one embodiment, a shared resource, such as a home agent or an off-chip port, will have an epoch counter, a per-partition allocated cycles configuration register, and a per-partition consumed cycles counter. The specifics of the architecture are depicted in connection with Figure 6.
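The epoch-based arbitration policy described above can be sketched as follows. This is a hedged, illustrative model: the class and method names are hypothetical, a hardware arbiter would evaluate requesters in parallel rather than looping, and granting one request per call stands in for granting one cycle of access.

```python
from collections import deque


class EpochArbiter:
    """Illustrative sketch of the per-epoch round robin policy.

    Partitions that still have allocated cycles left in the epoch
    outrank partitions that have exhausted theirs; within each
    group, priority rotates in round robin fashion.
    """

    def __init__(self, allocated):
        # allocated: dict mapping partition id -> cycles per epoch
        # (the per-partition allocated cycles configuration register).
        self.allocated = dict(allocated)
        self.consumed = {p: 0 for p in allocated}   # consumed cycles counters
        self.order = deque(allocated)               # current rotation order

    def new_epoch(self):
        # All consumed cycles counters are cleared at the epoch start.
        for p in self.consumed:
            self.consumed[p] = 0

    def grant(self, requesters):
        """Grant one cycle; returns the winning partition id, or None."""
        # First pass: partitions with allocated cycles remaining;
        # second pass: exhausted partitions (anti-starvation).
        for prefer_within in (True, False):
            for p in list(self.order):
                within = self.consumed[p] < self.allocated[p]
                if p in requesters and within == prefer_within:
                    self.consumed[p] += 1
                    # Move the winner to the back: round robin rotation.
                    self.order.remove(p)
                    self.order.append(p)
                    return p
        return None
```

With three partitions each allocated two cycles and all three always requesting, the grant sequence rotates p0, p1, p2 until the allocations are exhausted, then continues rotating among the exhausted partitions, matching the flow of Figure 5.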
However, the claimed subject matter is not limited to a home agent or off-chip port. For example, one skilled in the art appreciates utilizing the claimed subject matter in different portions of a system. In one embodiment, the claimed subject matter may be incorporated in an interface to external cores or chips for off-chip access. It may also be incorporated into a cache or last level cache bank control for last level cache accesses. In another embodiment, the claimed subject matter may be incorporated into a home agent for local socket memory accesses. In yet another embodiment, the claimed subject matter may be incorporated in all three of the preceding locations, such as, but not limited to, an interface to external cores or chips for off-chip access, a cache or last level cache bank control for last level cache accesses, and a home agent for local socket memory accesses.
In one embodiment, all counters will be cleared at the start of a new epoch. The epoch counter starts running as soon as it is enabled via a configuration register write and will be free running as long as enabled. The fairness policies are configured by firmware and can be reconfigured without a reboot by quiescing the system, reprogramming and then dequiescing the system.
The example depicted in Figure 5 is for three different partitions, p0, p1, and p2. However, the claimed subject matter is not limited to three partitions. This merely depicts one example and one skilled in the art appreciates utilizing different numbers of partitions and different rotation of priority among the partitions.
Reading the time flow diagram for an epoch from left to right, starting with label 501, depicts the three different partitions, p0, p1, and p2. As discussed earlier, the three partitions may be running different operating systems within a socket. For example, p0 and p2 may be running one type of operating system while p1 is running another type of operating system. In another embodiment, all three partitions may be running the same operating system. In yet another embodiment, each partition is running a different operating system.
Each partition is allocated a number of cycles and the arbitration priority is rotated among all three partitions in a round robin fashion during label 502. However, at label 504, partition p0's cycles have been consumed. Consequently, arbitration priority is rotated among partitions p1 and p2, while both p1 and p2 have priority greater than p0. This trend continues until all the partitions have consumed their allocated cycles. Subsequently, the arbitration priority rotates equally between the partitions for label 508.
FIG. 6 is a block diagram in accordance with an embodiment of the invention. In the point to point interface fabric (label 410 in Figure 4), for each virtual channel (VC) a queue 602 is used to sink the requests from all the caching agents on the on-die first level interconnect (label 403 in Figure 4). This queue is then part of the global arbitration for the outgoing port or home agent protocol pipe 606. Each virtual channel has an epoch counter, and the epoch counter is incremented whenever this VC is scheduled for sending a packet. Every caching agent will have an equal number of entries in the VC queue. Each partition will have its own consumed cycles counter and max cycles allocated configuration register for each VC.
The consumed cycles counter is incremented each time a packet belonging to that partition is sent out. Based on the consumed cycles counter and max cycles allocated configuration register of each partition, arbitration priority among the various partitions is decided. Based on this priority, the oldest entry belonging to the highest priority partition is selected from the VC queue for transmission. For a virtual channel with a per-address ordering requirement, this ordering is maintained by comparing the address field of all the entries in the queue with the address before making a new entry into the queue; if there is a match, then the new entry will be blocked until the older entry to the same address is sent out. This ordering holds across partitions.
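The selection step described above, picking the oldest entry of the highest-priority partition while honoring per-address ordering, can be sketched as follows. This is an illustrative model under assumed names: `Entry`, `select_entry`, and the list-based queue are hypothetical, and the priority list would come from the consumed/allocated cycle comparison described above.

```python
from collections import namedtuple

# A VC queue entry: owning partition, target address, and payload.
Entry = namedtuple("Entry", "partition address payload")


def select_entry(queue, priority):
    """Pick the oldest eligible entry of the highest-priority partition.

    `queue` is a list ordered oldest-first; `priority` lists partition
    ids, highest priority first. An entry is blocked while any older
    entry in the queue targets the same address, across all partitions.
    Returns the selected entry (removed from the queue) or None.
    """
    for part in priority:
        for i, entry in enumerate(queue):
            if entry.partition != part:
                continue
            # Per-address ordering: block if an older entry (any
            # partition) targets the same address.
            older_same_addr = any(e.address == entry.address
                                  for e in queue[:i])
            if not older_same_addr:
                return queue.pop(i)
    return None
```

For example, if the highest-priority partition's only pending entry targets an address that an older entry already targets, that entry is blocked and the arbiter falls through to the next eligible entry.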
In the home agent, a similar mechanism, with a queue for each virtual channel and associated request scheduling logic, is adopted for fairness between partitions in gaining access to the home protocol processing pipeline.
In the LLC bank (408 from Figure 4), fairness in access to the caching agent structures, which are used for sending requests onto the CSI fabric (410 from Figure 4) or to the home agent, and access to the LLC hit/miss lookup pipe, is ensured by having a single request queue with an equal number of entries for all processors that share the LLC bank. This queue will have an epoch counter. The epoch counter is incremented each time a request is sent to the LLC lookup pipe. Each partition will have its own consumed cycles counter and max cycles allocated configuration register. Each time a request belonging to a particular partition is selected, the consumed cycles counter is incremented. Based on the consumed cycles counter and maximum cycles allocated configuration register of each partition, arbitration priority among the various partitions is decided.
In the home agent and the LLC bank controller case, in addition to the requests from the local socket partitions, there will be requests from remote sockets. Priority will keep rotating between local socket and remote socket accesses, and the epoch counter based mechanism is not used for the arbitration decision.
While the invention has been described in terms of several embodiments of the invention, those of ordinary skill in the art will recognize that the invention is not limited to
the embodiments of the invention described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.

CLAIMS
What is claimed is:
1. A method for arbitration in a socket with a plurality of partitions comprising:
allocating a bandwidth parameter for each of the plurality of partitions;
decrementing the bandwidth parameter for each partition for each completed request from that partition; and
rotating arbitration priority among the partitions with a non-zero bandwidth parameter, otherwise, rotating arbitration priority among the partitions if all the partitions have a zero bandwidth parameter.

2. The method of claim 1 wherein the bandwidth parameter is a number of allocated cycles consumed for a shared resource for a request from that partition.

3. The method of claim 1 wherein the bandwidth parameter is utilized during a defined time period.

4. The method of claim 1 wherein the bandwidth parameter is stored in a register in a shared resource for each partition.

5. The method of claim 2 wherein the number of allocated cycles consumed is tracked in a consumed cycles counter in a shared resource for each partition.

6. The method of claim 3 wherein the defined time period is tracked with an epoch counter in a shared resource for each partition.

7. A system with at least one socket to support sub-socket partitioning comprising:
a processor;
a dynamic random access memory, coupled to the processor, to receive requests from the processor;
the processor to support sub-socket partitioning to utilize at least a first and a second operating system within a first and a second partition;
a shared resource, coupled to the processor, with:
a first counter to define a time period;
a register, coupled to the first counter, to store a bandwidth parameter for the first and second partition; and
a second counter, for each partition, coupled to the first counter, to track a number of consumed cycles for each request from each partition that is used for access to the shared resource.

8. The system of claim 7 wherein the bandwidth parameter is a number of allocated cycles for each partition.

9. The system of claim 7 wherein the bandwidth parameter is utilized during the defined time period stored in the first counter.

10. The system of claim 7 wherein the shared resource is a home agent.

11. The system of claim 7 wherein the home agent is a cache controller.

12. An agent to support sub-socket partitioning for a first and a second partition comprising:
a first counter to define a time period for both the first and the second partition;
a register, coupled to the first counter, to store a bandwidth parameter for both the first and second partition;
a second counter, coupled to the first counter, to track a number of consumed cycles for the first partition that is used for access to the agent; and
a third counter, coupled to the first counter, to track a number of consumed cycles for the second partition that is used for access to the agent.

13. The agent of claim 12 wherein the bandwidth parameter is a number of allocated cycles for each partition.

14. The agent of claim 12 wherein the bandwidth parameter is utilized during the defined time period stored in the first counter.

15. The agent of claim 12 wherein the agent is a home agent.

16. The agent of claim 15 wherein the home agent is a cache controller.

17. A processor comprising:
a plurality of processor cores that support sub-socket partitioning such that each one of the plurality of cores could utilize a different operating system;
an interface, coupled to the processor cores, to generate and transmit a packet to a plurality of agents coupled to the plurality of processor cores via the interface;
a transmit logic, integrated within the interface, with:
a queue for each virtual channel supported by the interface to store all requests for each partition associated with the processor; and
a counter, coupled to the queue, to be incremented when the virtual channel transmits the packet.

18. The processor of claim 17 further comprising a plurality of last level cache banks.

19. The processor of claim 17 wherein the interface transmits the packet based at least in part on a round robin arbitration policy that rotates priority among the partitions with a non-zero bandwidth parameter, otherwise, rotating arbitration priority among the partitions if all the partitions have a zero bandwidth parameter.

20. The processor of claim 19 wherein the bandwidth parameter is a number of allocated cycles consumed for transmission of the packet to a shared resource.

Documents

Application Documents

# Name Date
1 2676-DEL-2007-GPA-(07-07-2009).pdf 2009-07-07
2 2676-DEL-2007-RELEVANT DOCUMENTS [15-09-2023(online)].pdf 2023-09-15
3 2676-DEL-2007-Correspondence-PO-(07-07-2009).pdf 2009-07-07
4 2676-DEL-2007-RELEVANT DOCUMENTS [24-09-2022(online)].pdf 2022-09-24
5 2676-DEL-2007-RELEVANT DOCUMENTS [25-09-2021(online)].pdf 2021-09-25
6 2676-DEL-2007-Correspondence-Others-(07-07-2009).pdf 2009-07-07
7 2676-DEL-2007-RELEVANT DOCUMENTS [30-03-2020(online)].pdf 2020-03-30
8 2676-DEL-2007-Correspondence-Others-(08-07-2009).pdf 2009-07-08
9 2676-DEL-2007-RELEVANT DOCUMENTS [28-03-2019(online)].pdf 2019-03-28
10 2676-del-2007-form-5.pdf 2011-08-21
11 2676-DEL-2007-IntimationOfGrant13-11-2018.pdf 2018-11-13
12 2676-del-2007-form-3.pdf 2011-08-21
13 2676-DEL-2007-PatentCertificate13-11-2018.pdf 2018-11-13
14 2676-del-2007-form-2.pdf 2011-08-21
15 2676-del-2007-form-1.pdf 2011-08-21
16 2676-DEL-2007-FORM 3 [05-06-2018(online)].pdf 2018-06-05
17 2676-DEL-2007-Correspondence-170418.pdf 2018-04-23
18 2676-del-2007-drawings.pdf 2011-08-21
19 2676-del-2007-description (complete).pdf 2011-08-21
20 2676-DEL-2007-Power of Attorney-170418.pdf 2018-04-23
21 2676-DEL-2007-AMENDED DOCUMENTS [17-04-2018(online)].pdf 2018-04-17
22 2676-del-2007-correspondence-others.pdf 2011-08-21
23 2676-DEL-2007-Changing Name-Nationality-Address For Service [17-04-2018(online)].pdf 2018-04-17
24 2676-del-2007-claims.pdf 2011-08-21
25 2676-del-2007-abstract.pdf 2011-08-21
26 2676-DEL-2007-MARKED COPIES OF AMENDEMENTS [17-04-2018(online)].pdf 2018-04-17
27 2676-DEL-2007-Form-18-(21-11-2011).pdf 2011-11-21
28 2676-DEL-2007-RELEVANT DOCUMENTS [17-04-2018(online)].pdf 2018-04-17
29 2676-DEL-2007-ABSTRACT [16-04-2018(online)].pdf 2018-04-16
30 2676-DEL-2007-Correspondence Others-(21-11-2011).pdf 2011-11-21
31 2676-DEL-2007-CLAIMS [16-04-2018(online)].pdf 2018-04-16
32 2676-DEL-2007-FER.pdf 2017-07-26
33 2676-DEL-2007-FORM 3 [08-09-2017(online)].pdf 2017-09-08
34 2676-DEL-2007-COMPLETE SPECIFICATION [16-04-2018(online)].pdf 2018-04-16
35 2676-DEL-2007-CORRESPONDENCE [16-04-2018(online)].pdf 2018-04-16
36 2676-DEL-2007-FORM 4(ii) [19-01-2018(online)].pdf 2018-01-19
37 2676-DEL-2007-DRAWING [16-04-2018(online)].pdf 2018-04-16
38 2676-DEL-2007-RELEVANT DOCUMENTS [13-04-2018(online)].pdf 2018-04-13
39 2676-DEL-2007-FER_SER_REPLY [16-04-2018(online)].pdf 2018-04-16
40 2676-DEL-2007-PETITION UNDER RULE 137 [13-04-2018(online)].pdf 2018-04-13
41 2676-DEL-2007-FORM-26 [16-04-2018(online)].pdf 2018-04-16
42 2676-DEL-2007-OTHERS [16-04-2018(online)].pdf 2018-04-16

Search Strategy

1 SearchStrategy_24-07-2017.pdf

ERegister / Renewals

3rd: 08 Jan 2019

From 20/12/2009 - To 20/12/2010

4th: 08 Jan 2019

From 20/12/2010 - To 20/12/2011

5th: 08 Jan 2019

From 20/12/2011 - To 20/12/2012

6th: 08 Jan 2019

From 20/12/2012 - To 20/12/2013

7th: 08 Jan 2019

From 20/12/2013 - To 20/12/2014

8th: 08 Jan 2019

From 20/12/2014 - To 20/12/2015

9th: 08 Jan 2019

From 20/12/2015 - To 20/12/2016

10th: 08 Jan 2019

From 20/12/2016 - To 20/12/2017

11th: 08 Jan 2019

From 20/12/2017 - To 20/12/2018

12th: 08 Jan 2019

From 20/12/2018 - To 20/12/2019

13th: 22 Nov 2019

From 20/12/2019 - To 20/12/2020

14th: 23 Nov 2020

From 20/12/2020 - To 20/12/2021

15th: 24 Nov 2021

From 20/12/2021 - To 20/12/2022