FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
THE PATENTS RULES 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“A Method and System of dynamically distributing traffic in a network”
Tejas Networks Limited
2nd floor GNR Tech Park 46/4 Garbebhavi Palya
Kudlu Gate Hosur main road
Bangalore 560 068 Karnataka India
The following specification particularly describes the invention and the manner in which it is to be performed.
Field of the Invention
The present application relates to computer networks, and more particularly, to a method and system for dynamically load-balancing traffic over multiple links.
Background of the Invention
Most data communication networks transport data in the form of packets or frames with finite lengths. For example, data to be transmitted is divided into frames at the data link layer before the frames are placed on a physical transmission medium. Those frames are delivered to their destination via intermediate network devices such as layer-2 switches. In this connection, link aggregation techniques are used to enhance the performance and quality of communication channels between network devices. Link aggregation, standardized as IEEE 802.3ad, refers to a technique that provides a single logical link by bundling two or more physical links (e.g. network cables) between two network devices.
Aggregated links provide increased data bandwidth and thus make high-speed communication possible without using costly cables or expensive network interface cards. In recent years many communication carriers have introduced link aggregation into their networks, but for the purpose of improving availability with link redundancy. By virtue of its simultaneous use of multiple physical links, link aggregation prevents the communication channel from being completely disrupted even if some physical links fail.
A common problem in communication networks is maintaining efficient utilization of network resources, particularly with regard to bandwidth, so that data traffic is efficiently distributed over the available links between sources and destinations. Prior art solutions include apparatus and methods that balance data traffic using various techniques for distributing data packets among the physical ports, leaving room for implementations of differing complexity and characteristics.
In many typical implementations, load sharing is statically configured. For example, packet distribution is based on a technique that selects a port based on addresses and session information, i.e. the source address, the destination address, or both. Packets with the same addresses and session information are always sent to the same port in the LAG (Link Aggregation Group) to prevent out-of-order packet delivery. Static load balancing algorithms do not take into account the amount of traffic assigned to each port or the variation in traffic over time, and they therefore result in suboptimal utilization of the link bandwidth.
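By way of illustration only, the following non-limiting Python sketch (with hypothetical names) shows the kind of static, address-based port selection described above: the LAG member port is derived purely from a deterministic hash of the source and destination addresses, so packets of a flow stay in order, but the mapping never adapts to the actual load on each port.

```python
import hashlib

def select_lag_port_static(src_mac: str, dst_mac: str, num_ports: int) -> int:
    """Pick a LAG member port from address information only.

    All frames of a flow (same src/dst MAC) map to the same port, which
    preserves frame order but ignores how loaded each port actually is.
    """
    key = f"{src_mac}-{dst_mac}".encode()
    digest = hashlib.md5(key).digest()          # any deterministic hash works
    return int.from_bytes(digest[:4], "big") % num_ports

# Example: the mapping never changes, regardless of traffic volume.
print(select_lag_port_static("00:1a:2b:3c:4d:5e", "00:5e:4d:3c:2b:1a", 4))
```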
Some dynamic load balancing algorithms for LAG have been published. They primarily focus on the idea of calculating hash values based on the packets' addresses and session information and mapping the hash values to physical ports based on measurements of the traffic load on the physical ports. The weakness of said techniques is that changing the mapping of a hash value to a physical port affects all packet flows with the same hash value, and said techniques fail to address the impact on out-of-order packet delivery when a large number of packet flows are momentarily assigned to different egress ports. Also, said dynamic load balancing algorithms do not deal with the quality of service (QoS) requirements of packet flows, nor do they deal with changes of link bandwidth as occur in wireless-based communication.
Therefore, it would be desirable to have a method and system to perform adaptive load balancing of traffic over multiple links to overcome the above limitations.
Summary of the Invention
An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
Accordingly, an aspect of the present invention is to provide a method of dynamically distributing the traffic in a network comprising: receiving a plurality of data flows at a first network device; classifying the received data flows into a plurality of levels of queues, wherein the levels of queues are based on the flow ID, COS, port, etc.; mapping the plurality of classified queues, based on data flow sets over ports, to at least one of the link aggregation links; and computing, upon failure of a mapped port, to route the data flow dynamically to the available ports in order to balance the traffic, wherein the computing is based on the QoS parameters and service attributes of the data flow.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
Brief description of the drawings
The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Figure 1 is an exemplary diagram of a network in which systems and methods described herein may be implemented.
Figure 2 is a diagram of an exemplary network device of FIG. 1.
Figure 3 is an exemplary diagram of a typical LAG implementation.
Figure 4 is an exemplary diagram of dynamically distributing traffic in a network according to one embodiment of the present invention.
Figure 5 shows a flow chart of a method of dynamically distributing traffic in a network according to one embodiment of the present invention.
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure.
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
Detailed description of the invention
The following description, with reference to the accompanying drawings, is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including, for example, tolerances, measurement error, measurement accuracy limitations, and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
Figs. 1 through 5, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way that would limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged communications system. The terms used to describe various embodiments are exemplary. It should be understood that these are provided merely to aid the understanding of the description and that their use and definitions in no way limit the scope of the invention. Terms such as first, second, and the like are used to differentiate between objects having the same terminology and are in no way intended to represent a chronological order, unless explicitly stated otherwise. A set is defined as a non-empty set including at least one element.
Figure 1 is a diagram illustrating an exemplary network 100 in which systems and methods described herein may be implemented. Network 100 may include, for example, a local area network (LAN), a private network (e.g. a company intranet), a wide area network (WAN), a metropolitan area network (MAN), or another type of network. In one implementation, network 100 may include a switched network that provides point-to-point and multi-point services, a network capable of using a VLAN, etc.
As shown in FIG. 1, network 100 may include network devices 110-0, 110-1, and 110-2 (collectively referred to as network devices 110) interconnected by links 120-0 . . . 120-N (collectively referred to as links 120). While three network devices 110 and eight links 120 are shown in FIG. 1, more or fewer network devices 110 and/or links 120 may be used in other implementations.
Network device 110 may include a variety of devices. For example, network device 110 may include a computer, a router, a switch, a network interface card (NIC), a hub, a bridge, etc. Links 120 may include a path that permits communication among network devices 110, such as wired connections, input ports, output ports, etc. For example, network device 110-0 may include ports PORT0, PORT1, . . . PORTN, network device 110-1 may include ports PORT0, PORT1, PORT2, and PORT3, and network device 110-2 may include ports PORT0, PORT1, . . . PORT7. The ports of network devices 110 may be considered part of corresponding links 120 and may be either input ports, output ports, or combinations of input and output ports. While eight ports for network device 110-0, four ports for network device 110-1, and eight ports for network device 110-2 are shown in FIG. 1, more or fewer ports may be used in other implementations.
In an exemplary implementation, network devices 110 may provide entry and/or exit points for datagrams (e.g. traffic) in network 100. The ports (e.g. PORT0, . . . and PORTN) of network device 110-0 may send and/or receive datagrams. The ports (e.g. PORT0, PORT1, PORT2, and PORT3) of network device 110-1 and the ports (e.g. PORT0, . . . and PORT7) of network device 110-2 may likewise send and/or receive datagrams.
In one implementation, a LAG may be established between network devices 110-0 and 110-1. For example, ports PORT0, . . . and PORT3 of network device 110-0 may be grouped together into a LAG110-0 that communicates bi-directionally with ports PORT0, PORT1, PORT2, and PORT3 of network device 110-1 via links 120-0, 120-1, 120-2, and 120-3. Datagrams may be dynamically distributed between ports (e.g. PORT0, PORT1, PORT2, and PORT3) of network device 110-0 and ports (e.g. PORT0, PORT1, PORT2, and PORT3) of network device 110-1 so that administration of what datagrams actually flow across a given link (e.g. links 120-0, . . . and 120-3) may be automatically handled by LAG110-0.
In another implementation, a LAG may be established between network devices 110-0 and 110-2. For example, ports PORTN-3, . . . and PORTN of network device 110-0 may be grouped together into a LAG110-2 that communicates bi-directionally with ports PORT0, PORT1, PORT2, and PORT3 of network device 110-2 via links 120-N-3, 120-N-2, 120-N-1, and 120-N. Ports PORT0, PORT1, PORT2, and PORT3 of network device 110-2 may be grouped together into LAG110-2. LAG110-2 may permit ports PORTN-3, . . . and PORTN of network device 110-0 and ports PORT0, PORT1, PORT2, and PORT3 of network device 110-2 to communicate bi-directionally. Datagrams may be dynamically distributed between ports (e.g. PORTN-3, . . . and PORTN) of network device 110-0 and ports (e.g. PORT0, PORT1, PORT2, and PORT3) of network device 110-2 so that administration of what datagrams actually flow across a given link (e.g. links 120-N-3, . . . and 120-N) may be automatically handled by LAG110-2. With such an arrangement, network devices 110 may transmit and receive datagrams simultaneously on all links within a LAG established by network devices 110.
Although FIG. 1 shows exemplary components of network 100, in other implementations network 100 may contain fewer, different, or additional components than depicted in FIG. 1. In still other implementations, one or more components of network 100 may perform the tasks performed by one or more other components of network 100.
FIG. 2 is an exemplary diagram of a device that may correspond to one of network devices 110 of FIG. 1. As illustrated, network device 110 may include input ports 210, an ingress packet processing block 220, a switching mechanism 230, an egress packet processing block 240, output ports 250, and a control unit 260. In one implementation, ingress packet processing block 220 and egress packet processing block 240 may be on the same line card.
Input ports 210 may be the point of attachment for a physical link (e.g. link 120) (not shown) and may be the point of entry for incoming datagrams. Ingress packet processing block 220 may store forwarding tables and may perform forwarding table lookups to determine to which egress packet processing block and/or output port a datagram may be forwarded. Switching mechanism 230 may interconnect ingress packet processing block 220 and egress packet processing block 240 as well as associated input ports 210 and output ports 250. Egress packet processing block 240 may store datagrams and may schedule datagrams for service on an output link (e.g. link 120) (not shown). Output ports 250 may be the point of attachment for a physical link (e.g. link 120) (not shown) and may be the point of exit for datagrams. Control unit 260 may run routing protocols and Ethernet control protocols, build forwarding tables, download them to ingress packet processing block 220 and/or egress packet processing block 240, etc.
Ingress packet processing block 220 may carry out data link layer encapsulation and decapsulation. In order to provide quality of service (QoS) guarantees, ingress packet processing block 220 may classify datagrams into predefined service classes. Input ports 210 may run data link-level protocols. In other implementations, input ports 210 may send (e.g. may be an exit point) and/or receive (e.g. may be an entry point) datagrams.
Switching mechanism 230 may be implemented using many different techniques. For example, switching mechanism 230 may include buses, crossbars, and/or shared memories. The simplest switching mechanism 230 may be a bus that links input ports 210 and output ports 250. A crossbar may provide multiple simultaneous data paths through switching mechanism 230. In a shared-memory switching mechanism 230, incoming datagrams may be stored in a shared memory and pointers to datagrams may be switched.
Egress packet processing block 240 may store datagrams before they are transmitted on an output link (e.g. link 120). Egress packet processing block 240 may include scheduling algorithms that support priorities and guarantees. Egress packet processing block 240 may support data link layer encapsulation and decapsulation and/or a variety of higher-level protocols. In other implementations, output ports 250 may send (e.g. may be an exit point) and/or receive (e.g. may be an entry point) datagrams.
Control unit 260 may interconnect with input ports 210, ingress packet processing block 220, switching mechanism 230, egress packet processing block 240, and output ports 250. Control unit 260 may compute a forwarding table, implement routing protocols, and/or run software to configure and manage network device 110. In one implementation, control unit 260 may include a bus 260-1 that may include a path that permits communication among a processor 260-2, a memory 260-3, and a communication interface 260-4. Processor 260-2 may include a microprocessor or processing logic that may interpret and execute instructions. Memory 260-3 may include a random access memory (RAM), a read only memory (ROM) device, a magnetic and/or optical recording medium and its corresponding drive, and/or another type of static and/or dynamic storage device that may store information and instructions for execution by processor 260-2. Communication interface 260-4 may include any transceiver-like mechanism that enables control unit 260 to communicate with other devices and/or systems.
Network device 110 may perform certain operations as described herein. Network device 110 may perform these operations in response to processor 260-2 executing software instructions contained in a computer-readable medium such as memory 260-3. A computer-readable medium may be defined as a physical or logical memory device.
The software instructions may be read into memory 260-3 from another computer-readable medium, such as a data storage device, or from another device via communication interface 260-4. The software instructions contained in memory 260-3 may cause processor 260-2 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
Although FIG. 2 shows exemplary components of network device 110, in other implementations network device 110 may contain fewer, different, or additional components than depicted in FIG. 2. In still other implementations, one or more components of network device 110 may perform the tasks performed by one or more other components of network device 110.
Figure 3 is an exemplary diagram of a typical static load balancing implementation using a Link Aggregation Group (LAG), where the data flows, i.e. flow 1, flow 2, …, flow m, are aggregated to form a LAG and are mapped to a port (port 1 as shown in figure 3). Static load balancing algorithms do not take into account the amount of traffic assigned to each port or the variation in traffic over time, and they therefore result in suboptimal utilization of the link bandwidth.
Figure 4 is an exemplary diagram of dynamically distributing traffic in a network according to one embodiment of the present invention. As shown in figure 4, the present method, in an example embodiment, creates at least three levels of queuing, i.e. port, QoS, and flow group. The flow group is a group of flows which are mapped together to a specific port. Further, a key is chosen for allocating flows to ports; a key is a frame data item or the value of a value data item which is used in carrying out a search. Each key (e.g. a folded XOR of the source and destination MAC addresses) is used to classify flows into groups (e.g. all flows whose folded XOR of source and destination MAC addresses results in 0101 will belong to the same flow group and will be assigned to port 01). Further, the traffic is distributed over flow group queues according to QoS levels. By being aware of QoS, and thereby using the same in hierarchical queuing, the system can compute to route the data flow dynamically to the available ports in order to balance the traffic in the case of failure of the mapped port, where the computing is based on the QoS parameters and service attributes of the data flow.
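By way of illustration only, the following non-limiting Python sketch (with hypothetical names and an assumed 4-bit key width) shows how a folded XOR of the source and destination MAC addresses can serve as the key that groups flows into flow groups, and how a rewritable key-to-port table then maps each flow group to a LAG member port.

```python
def folded_xor_key(src_mac: str, dst_mac: str, key_bits: int = 4) -> int:
    """Fold the XOR of the source and destination MAC addresses down to
    `key_bits` bits; flows sharing the resulting key form one flow group."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    x = src ^ dst
    key = 0
    while x:
        key ^= x & ((1 << key_bits) - 1)   # fold successive key_bits-wide chunks
        x >>= key_bits
    return key

# Flow groups are then mapped to LAG member ports via a rewritable table,
# e.g. key 0b0101 -> port 1; remapping a key moves a whole flow group at once.
key_to_port = {k: k % 4 for k in range(16)}   # illustrative initial mapping
k = folded_xor_key("00:1a:2b:3c:4d:5e", "00:5e:4d:3c:2b:1a")
print(k, "->", key_to_port[k])
```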
Also, the system periodically measures the contention level at each port and each class using the following equation.
\[
C(t)_{i,m} \;=\; \frac{BW^{\text{demand}}_{m,i}(t)}{Cap_i \;-\; \sum_{k \neq m} BW^{\text{allocated}}_{k,i}(t)}
\]
where C(t)_{i,m} is the contention level of class m on link i at time t, BW^{demand}_{m,i}(t) is the total bandwidth demand of class m on link i, Cap_i is the capacity of link i, and the sum is the bandwidth allocated to all other classes on link i.
The measuring of the contention level is elaborated by an example, considering a link group whose capacity is 3 Gbps and which is composed of three 1 Gbps links. Let us assume that there are 4 classes of service: 1 – EF (expedited forwarding), 2 – CIR (committed information rate), 3 – mixed CIR+EIR (excess information rate, or best effort), and 4 – EIR only. Strict priority traffic always gets delivered regardless of other queues. Other queues are handled by WFQ (weighted fair queuing). Suppose the weights are 4 for class 2, 2 for class 3, and 1 for class 4. This implies that if the demanded traffic is less than the link capacity all classes get delivered, but if the link is more than fully loaded then each class will get less than it demands, according to the total demand and its queuing class and weight. Now suppose that in our link group, link 1 has 90% demand of class 4 traffic, link 2 has 90% demand of class 4 traffic, and link 3 has 90% total demand of which 0.4 Gbps is class 1 and the rest is class 2. Now suppose link 1 fails. Prior art methods will at best evenly distribute link 1 traffic between the remaining links, so in theory each remaining link is 135% utilized. However, this is not the preferred allocation. Link 2 contention for class 4 after said allocation will be 1.35 = (0.9 + 0.45)/1. Link 3 contention is calculated as follows: the BW allocated to class 1 is 0.4 Gbps. The remaining bandwidth (1 − 0.4) will be allocated between class 2 and class 4 according to the ratio of WFQ weights, 4/1. Therefore the total BW available to class 4 is 0.12 = ((1 − 0.4) × 1)/(1 + 4), and the resulting contention would be 3.75 = 0.45/0.12. This implies that class 4 traffic on link 3 will get almost one third of the bandwidth class 4 is getting on link 2. The allocation of flows to links is a function of link capacity, link weight, and the flows' QoS class. By being aware of traffic classes, the dynamic load balancing may be able to distribute traffic evenly, enable high port contention at times, and still preserve high priority traffic.
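For illustration only, the following non-limiting Python sketch reproduces the arithmetic of the above example using the contention equation; the function and variable names are illustrative, not part of the claimed method.

```python
def class_contention(demand: float, capacity: float, other_alloc: float) -> float:
    """Contention of a class on a link: demanded bandwidth divided by the
    bandwidth left after all other classes have been served (see equation above)."""
    return demand / (capacity - other_alloc)

# Link 2 after link 1 fails: 0.9 Gbps of its own class-4 demand plus 0.45 Gbps
# rerouted from link 1, with no higher-priority traffic on the link.
link2 = class_contention(demand=0.9 + 0.45, capacity=1.0, other_alloc=0.0)

# Link 3: 0.4 Gbps of class-1 (strict priority) traffic is served first; the
# remaining 0.6 Gbps is split between class 2 and class 4 by WFQ weights 4:1,
# so class 4 can only get 0.6 * 1/5 = 0.12 Gbps.
class4_share = (1.0 - 0.4) * 1 / (1 + 4)
link3 = class_contention(demand=0.45, capacity=1.0, other_alloc=1.0 - class4_share)

print(round(link2, 2), round(link3, 2))   # 1.35 and 3.75, as in the example
```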
Further, the system monitors the load on the queues, e.g. by measuring the frame memory utilization per queue at the traffic manager, in order to reroute the flows according to load, starting from the lowest COS queues. In addition to this, the system may optionally temporarily pause traffic in low COS queues to prevent mis-ordering of frames.
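For illustration only, a minimal Python sketch (hypothetical names and an assumed utilization threshold) of how per-queue frame memory utilization might be monitored so that overloaded flow groups are selected for rerouting starting from the lowest COS queues:

```python
from dataclasses import dataclass

@dataclass
class QueueStats:
    flow_group: int
    cos: int                    # class of service; lower value = less critical here
    memory_utilization: float   # fraction of frame memory used by this queue

def flow_groups_to_reroute(queues, threshold=0.8):
    """Return overloaded flow groups, lowest-COS queues first, so best-effort
    traffic is moved (and optionally paused) before premium traffic."""
    overloaded = [q for q in queues if q.memory_utilization > threshold]
    return [q.flow_group for q in sorted(overloaded, key=lambda q: q.cos)]

queues = [QueueStats(3, cos=1, memory_utilization=0.95),
          QueueStats(7, cos=4, memory_utilization=0.90),
          QueueStats(5, cos=2, memory_utilization=0.30)]
print(flow_groups_to_reroute(queues))   # [3, 7]
```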
Figure 5 shows a flow chart of a method of dynamically distributing traffic in a network according to one embodiment of the present invention. At step 510, the method receives a plurality of data flows at a first network device.
At step 520, the method classifies the received data flows into a plurality of levels of queues, wherein the levels of queues are based on the flow ID, COS, port, etc.
At step 530, the method maps the plurality of classified queues, based on data flow sets over ports, to at least one of the link aggregation links. The method classifies by choosing one or more keys to classify the data flows into groups or links, where the data flows are classified by allocating one or more data flows to one or more ports, and wherein the key is a frame data item or the value of a value data item.
At step 540, the method computes, upon failure of a mapped port, to route the data flow dynamically to the available ports in order to balance the traffic. The routing includes distribution of traffic dynamically over flow group queues, according to QoS levels, to a second network device, and the routing of the data over the network links, which are selected based on the keys associated with the flow sets. The number of keys is determined by the ratio between the link aggregation group capacity and the average desired flow set size, and the number of keys is rounded to the nearest power of 2. Further, the computing is based on the QoS parameters and service attributes of the data flow.
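For illustration only, a minimal Python sketch (hypothetical names and example figures) of how the number of keys could be derived from the ratio of link aggregation group capacity to the average desired flow set size and rounded to the nearest power of 2:

```python
import math

def number_of_keys(lag_capacity_gbps: float, avg_flow_set_gbps: float) -> int:
    """Number of classification keys: the ratio of LAG capacity to the average
    desired flow-set size, rounded to the nearest power of two."""
    ratio = lag_capacity_gbps / avg_flow_set_gbps
    exponent = max(0, round(math.log2(ratio)))
    return 1 << exponent

# e.g. a 3 Gbps group with ~0.2 Gbps flow sets -> ratio 15 -> 16 keys
print(number_of_keys(3.0, 0.2))   # 16
```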
The balancing of traffic is triggered by a change in the number of active links. Further, the balancing of the traffic is composed of one or more of: measuring the imbalance ratio of the links, comparing it to a set threshold for balancing traffic, calculating the deviation from balance for each link, and distributing traffic starting with the preeminent QoS class, etc. Furthermore, the balancing of traffic is performed by reassigning keys to links according to the available link capacity, the bandwidth demand of the flow sets, the minimum contention of classes over all available links after the flow reroute, the minimum overall deviation between links according to chosen metrics, etc.
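For illustration only, the following non-limiting Python sketch (hypothetical names, assumed threshold) shows one way the imbalance ratio of the links could be compared against a set threshold and, when exceeded, keys (flow sets) reassigned to links according to available capacity; it does not model the per-class contention or QoS ordering described above.

```python
def imbalance_ratio(link_loads):
    """Ratio of the most- to the least-loaded link; 1.0 means perfectly balanced."""
    return max(link_loads) / max(min(link_loads), 1e-9)

def rebalance_if_needed(key_to_link, key_demand, link_capacity, threshold=1.2):
    """Greedy re-assignment of keys (flow sets) to links when the imbalance
    ratio exceeds the threshold: each key goes to the link with the most
    spare capacity, starting with the most demanding flow sets."""
    loads = {l: 0.0 for l in link_capacity}
    for k, l in key_to_link.items():
        loads[l] += key_demand[k]
    if imbalance_ratio(list(loads.values())) <= threshold:
        return key_to_link                       # balanced enough, do nothing
    new_map, loads = {}, {l: 0.0 for l in link_capacity}
    for k in sorted(key_demand, key=key_demand.get, reverse=True):
        best = max(link_capacity, key=lambda l: link_capacity[l] - loads[l])
        new_map[k], loads[best] = best, loads[best] + key_demand[k]
    return new_map

print(rebalance_if_needed({0: "A", 1: "A", 2: "B"},
                          {0: 0.9, 1: 0.8, 2: 0.1},
                          {"A": 1.0, "B": 1.0}))
```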
The method further includes where the re-routing of flow sets from one link to another is followed by a pause of traffic of the target class for a predetermined period of time in order to prevent mis-ordering of frames, where the pause of traffic is calculated from the measured link delay.
Although the method flowchart includes steps 510-540 that are arranged logically in the exemplary embodiments, other embodiments of the subject matter may execute two or more steps in parallel using multiple processors or a single processor organized as two or more virtual machines or sub-processors. Moreover, still other embodiments may implement the steps as two or more specific interconnected hardware modules with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the exemplary process flow diagrams are applicable to software, firmware, and/or hardware implementations.
It should also be understood that elements of the present invention may also be provided as a computer program product, which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (e.g. a processor or other electronic device) to perform a sequence of operations. Alternatively, the operations may be performed by a combination of hardware and software. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable medium suitable for storing electronic instructions. For example, elements of the present invention may be downloaded as a computer program product, wherein the program may be transferred from a remote computer or telephonic device to a requesting process by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g. a modem or network connection).
FIGS. 1-5 are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated while others may be minimized. FIGS. 1-5 illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
It is understood that the above description is intended to be illustrative and not restrictive. It is intended to cover all alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the terms “comprising” and “wherein,” respectively.
We Claim:
1. A method of dynamically distributing traffic in a network the method comprising:
receiving a plurality of data flows at a first network device;
classifying the received data flows into a plurality of levels of queues, wherein the levels of queues are based on the flow ID, COS, port, etc.;
mapping the plurality of classified queues, based on data flow sets over ports, to at least one of the link aggregation links; and
computing, upon failure of a mapped port, to route the data flow dynamically to the available ports in order to balance the traffic, wherein the computing is based on the QoS parameters and service attributes of the data flow.
2. The method of claim 1, wherein the step of classifying further includes choosing one or more keys to classify the data flows into groups or links, wherein the data flows are classified by allocating one or more data flows to one or more ports, and wherein the key is a frame data item or the value of a value data item.
3. The method of claim 1, wherein the step of routing includes distribution of traffic dynamically over flow group queues, according to QoS levels, to a second network device.
4. The method of claim 1, wherein the balancing of traffic is composed of at least one of: measuring the imbalance ratio of the links, comparing it to a set threshold for balancing traffic, calculating the deviation from balance for each link, and distributing traffic starting with the preeminent QoS class, etc.
5. The method of claim 1, wherein the balancing of traffic is triggered by a change in the number of active links.
6. The method of claim 1, wherein the balancing of traffic is performed by reassigning keys to links according to the available link capacity, the bandwidth demand of the flow sets, the minimum contention of classes over all available links after the flow reroute, and the minimum overall deviation between links according to chosen metrics, etc.
7. The method of claim 1, wherein the data is routed over network links which are selected based on the keys associated with the flow sets.
8. The method of claim 1, wherein the number of keys is determined by the ratio between the link aggregation group capacity and the average desired flow set size, and wherein the number of keys is rounded to the nearest power of 2.
9. The method of claim 1, wherein the allocation of flows to links is a function of link capacity, link weight, and the flows' QoS class.
10. The method of claim 1 further comprising:
re-routing of flow sets from one link to another, wherein the re-routing is followed by a pause of traffic of the target class for a predetermined period of time in order to prevent mis-ordering of frames, and wherein the pause of traffic is calculated from the measured link delay.
Dated this the 29th day of March 2012
S Afsar
Of Krishna & Saurastri Associates
Agent for the Applicant
Registration No. IN/PA-1073
Abstract
A method and system of dynamically distributing traffic in a network
The present invention relates to a method and system of dynamically distributing the traffic in a network. In one embodiment, this is accomplished by receiving a plurality of data flows at a first network device; classifying the received data flows into a plurality of levels of queues, wherein the levels of queues are based on the flow ID, COS, port, etc.; mapping the plurality of classified queues, based on data flow sets over ports, to at least one of the link aggregation links; and computing, upon failure of a mapped port, to route the data flow dynamically to the available ports in order to balance the traffic, wherein the computing is based on the QoS parameters and service attributes of the data flow.
Figure 4 (for publication)