Abstract: Traffic management in software-defined networking can be drastically improved by understanding the nature of traffic and scheduling it effectively. A Network Packet Scheduler unit (160) is disclosed that determines the nature of the traffic and schedules it to appropriate virtual machines (150). A network function (140) determination method classifies traffic based on header fields from the network and transport layers. The Network Packet Scheduler (160) leads to better throughput, lower delay and packet drop ratio (PDR) in the network, and thereby increases the performance of the network.
DESC:TECHNICAL FIELD
[0001] The present disclosure relates generally to advanced computer networking, and more specifically, to data traffic management in software-defined networks (SDN).
BACKGROUND
[0002] Software Defined Network (SDN) has evolved as a network paradigm to serve real-time traffic with varying requirements in large-scale networks. The basis of SDN lies in the separation of the data plane from the control programs, providing flexibility to reconfigure the controller as requirements change. It enables programmability of network control functions and abstraction of the underlying architecture for applications and services. It also introduces certain challenges towards scalability and performance, security hardening, and cross-layer communication. The performance of SDN depends upon traffic management, load balancing, energy consumption, and many more parameters.
[0003] WO2015087468A1 proposes a workload manager along with a network flow graph. The network flow graph defines interactions between a plurality of subprograms distributed in the enterprise network based on compile-time information of the workload. An SDN controller analyses the network flow graph to identify the interactions between the subprograms as prompts. The SDN controller allocates network resources to define a plurality of flows through the enterprise network based on the prompts and the characteristics of the enterprise network. However, the workload manager utilizes the functions of a graph and does not deal with virtualization operations of network functions.
[0004] WO2015092954A1 proposes a method wherein a source virtual processor, i.e., either a logical partition (LPAR) or a virtual machine, executes under the control of a hypervisor on the host machine. A destination virtual processor associated with the data packet is determined by an SDN controller. In addition, the SDN controller identifies the flow between the source virtual processor and the destination virtual processor. The flow includes at least one virtual port in the virtual switch. However, the virtualization technique is not directed towards
[0005] “PolicyCop: An Autonomic QoS Policy Enforcement Framework for Software Defined Networks,” 2013 IEEE SDN for Future Networks and Services (SDN4FNS), relates to a policy enforcer and a policy validator that are introduced to classify various kinds of network events and corresponding functions. PolicyCop, coupled with the flexible programmability features offered by SDN, enables the network manager to describe requirements as high-level network-wide policies, which are implemented, monitored, and enforced by PolicyCop.
[0006] “Traffic Engineering in Software-Defined Networking: Measurement and Management,” IEEE Access, 2016, proposes a framework having two parts: traffic measurement and traffic management. Traffic measurement mainly studies how to monitor, measure, and acquire network status information in the SDN environment. The network status information includes the current topology connection status, port status (up or down), various kinds of packet counters, dropped-packet counters, utilization ratios of link bandwidths, end-to-end network latency, end-to-end traffic matrices, and so on.
[0007] There is still a need for an invention that not only manages the traffic but also provides effective network communication, leading to better throughput, lower delay and packet drop ratio (PDR) in the network, and increased network performance.
SUMMARY OF THE INVENTION
[0008] An aspect of the invention includes a method of traffic management for a software defined network. One or more network requests containing one or more network packets are received. These network requests are mapped to one or more network functions. These network functions are tagged to one or more virtual machines that are available for execution. The virtual machines required for processing the network functions are then identified, and the network functions are processed in the identified virtual machines. Flow rules are then generated and sent to the network switches for processing.
[0009] In another aspect of the invention, an apparatus for traffic management for a software defined network is described. The apparatus contains a network packet scheduler unit configured to receive one or more network requests, map the network requests to one or more network functions, tag the network functions to one or more virtual machines for execution, and identify a virtual machine for execution of one or more network functions; one or more virtual machines to execute the network functions; and a packet processing unit for generating and sending flow rules to one or more switches.
[0010] In another aspect of the invention, a system for traffic management in a software defined networking is described. The system contains one or more application servers, one or more host servers interfaced with one or more network switches and one or more network management servers to perform the method of traffic management.
[0011] Accordingly, an improved apparatus and method of traffic management in a software defined network are described that enable effective inter-controller communication and lead to better throughput, lower delay and a lower packet drop ratio (PDR) in the network.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0012] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and modules.
[0013] Figure 1 illustrates a system (Virtual Controller Architecture) for traffic management in a Software Defined Network (SDN), according to an exemplary implementation of the present disclosure.
[0014] Figure 2 illustrates a flowchart 200 for a method for traffic management for a software defined network, according to an exemplary implementation of the present disclosure.
[0015] Figure 3 illustrates a processing unit for each of the units implementing traffic management for a software defined network, according to an exemplary implementation of the present disclosure.
[0016] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0017] The various embodiments of the present disclosure describe a system and method for efficient traffic management in software defined networking.
[0018] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these details. One skilled in the art will recognize that embodiments of the present disclosure, some of which are described below, may be incorporated into a number of systems.
[0019] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present disclosure and are meant to avoid obscuring the present disclosure.
[0020] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0021] Software Defined Network (SDN) Architecture contains three planes of operation namely an Application Plane, a Data Plane, and a Control Plane. The Application Plane contains one or more Application Servers and a Network Management Server which route the incoming network requests to the Control Plane. The Data Plane contains one or more switches and servers for acting on the requests from the Application Plane. The Control Plane is placed between the Application Plane and the Data Plane, which contains a Network Controller that translates the requests from the Application Plane to the Data Plane. Also, the Controller performs certain functions to collect statistical data and event monitoring functions as well.
[0022] The Application Plane and Control Plane interact with each other using one or more specific instances of interfaces called North Bound Interfaces (NBIs). The NBIs typically provide abstract network views and enable direct expression of network behavior and requirements. Similarly, the Data Plane and the Control Plane interact using one or more instances of South Bound Interfaces (SBIs). The southbound interface provides communication and management between the network's SDN controller, nodes, physical/virtual switches, and routers. It allows the router to discover the network topology, define network flows, and implement requests relayed from northbound application programming interfaces (APIs).
[0023] A specific communication protocol is used to implement the Software Defined Networking architecture. An example of such a protocol is OpenFlow, which enables network controllers to determine the path of network packets across a network of switches. Typically, OpenFlow is used between the switch and the controller on a secure channel. One or more network switches implementing the OpenFlow protocol are controlled by an OpenFlow network controller. The OpenFlow switches and the OpenFlow network controller usually interact with the Control Plane using the South Bound Interface. An application is executed by sending requests to the OpenFlow switches in the form of packets. Such packets are typically passed on to the Control Plane.
[0024] Figure 1 shows a system (Virtual Controller Architecture) for traffic management of a Software Defined Network. The system comprises an Application Plane, a Data Plane, and a Control Plane. In one embodiment, as shown in Figure 1, the Application Plane contains one or more Application Servers (shown as AP1 and AP2) and one or more Network Management Servers (NMS). The Data Plane comprises one or more switches (shown as switches S1-S3) and one or more Host Point servers (H1 and H2). The Control Plane shows an apparatus comprising a Network Packet Scheduler Unit (160), one or more instances of Virtual Machines (VM1-VMn) (150), a Packet Processor Unit (120), a Network Control Unit (130), and a Network Function (140).
[0025] The standard flow of request from the Application Plane to the Data Plane through the Control Plane for performing a network operation is described below with respect to Figure 1 of the drawings:
(1) Typically, one or more requests from the Application Servers (AP1-APn) and Network Management Server units (NMS) are sent to the Network Packet Scheduler in form of packets;
(2) The Packets are categorized into one or more Network Functions (140) by Network Packet Scheduler Unit (160);
(3) One or more Network Functions (140) are broken into specific Network Function sequences, which are sent to one or more Virtual Machine instances (150) for execution;
(4) If the packet is trusted, the packet processor (120) processes the packets for generation of the flow rules for processing in the Data Plane;
(5) The generated flow rules are sent to the switches (S1-S3) in the Data Plane for processing; and
(6) If the packet is not trusted, the packet is dropped.
[0026] The operations of each of the units of the Control Plane (a Network Packet Scheduler Unit (160), one or more instances of Virtual Machines (VM1-VMn) (150), a Packet Processor Unit (120), a Network Control Unit (130), and a Network Function (140)) for performing an atomic network operation are described in detail below.
[0027] One or more network requests (containing network packets) are received at the Network Packet Scheduler Unit (160). These network requests are mapped to one or more Network Functions (140). This classification of network requests into Network Functions is performed to carry out atomic network operations for routing control, energy-efficiency, security enforcement, load balancing, and flow monitoring in the network. Such a classification is performed using network and transport layer fields. The Network Packet Scheduler Unit (160) then maps the one or more Network Functions (140) for processing in the one or more Virtual Machines (150). The manner in which the Virtual Machines (150) process the Network Functions (140) is explained below.
[0028] One or more Virtual Machines (150) serve as an execution unit for running a finite set of Network Functions (140). The number of Network Functions that can be processed by a Virtual Machine (150) is limited by a threshold value (d). The parameter (d) indicates the maximum degree of parallelism supported by the virtualization platform. In one embodiment, the virtualization platform is enabled using a “Hewlett Packard (HP) Tower Server” series along with "Spirent Network Functions Virtualization (NFV) Infrastructure Starter Kit" which enables one or more Virtual Machines (150) to work/operate.
[0029] In each Virtual Machine (150), a Network Function is associated with a unique tag (Network Function (NF) tag) that is used for referencing the function. Therefore, simultaneous network function execution in more than one Virtual Machine (150) is enabled.
[0030] A Virtual Machine (150) maintains an input queue (first-in, first-out) that keeps track of the packet processing requests received from the Network Packet Scheduler (160) unit. A packet processing request consists of a packet header and an ordered list of one or more Virtual Machines (150) that the packet should traverse. To serve a packet, a Virtual Machine (150) takes the packet from the queue, reads the tag of the packet, matches this tag with its Network Function tag, and executes the Network Function (140) corresponding to the matched tag on the packet.
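The queue-serving behaviour described above can be sketched as follows. This is a minimal illustrative model, not the disclosed implementation; the class, field, and function names (`VirtualMachine`, `nf_tag`, the "count" function) are assumptions made for the example.

```python
from collections import deque

class VirtualMachine:
    def __init__(self, nf_table):
        # nf_table maps a Network Function tag to the callable NF
        self.nf_table = nf_table
        self.queue = deque()  # first-in, first-out request queue

    def enqueue(self, request):
        # request: (packet, ordered list of VMs the packet should traverse)
        self.queue.append(request)

    def serve_one(self):
        packet, route = self.queue.popleft()   # take the oldest request
        nf = self.nf_table[packet["nf_tag"]]   # match tag with the VM's NF tag
        nf(packet)                             # execute the matched NF
        packet["state"] = "PROCESSED"
        return packet, route[1:]               # remaining VM hops

# Example: a VM hosting a single hypothetical "count" network function
vm = VirtualMachine({"count": lambda p: p.update(hops=p.get("hops", 0) + 1)})
vm.enqueue(({"nf_tag": "count"}, ["VM1", "VM2"]))
packet, remaining = vm.serve_one()
```

After serving, the packet is marked "PROCESSED" and the rest of the ordered list tells the VM where to forward it next.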
[0031] On successful completion of the network function, the Virtual Machine (150) updates the state of the packet as “PROCESSED” against the corresponding network function. Then, the Virtual Machine (150) sends an acknowledgment to the Network Packet Scheduler (160) unit (i.e., the corresponding NPS) and subsequently sends the packet to the next Virtual Machine (150) in the ordered list. One or more Virtual Machines (150) communicate using standard message passing techniques for execution of one or more Network Functions (140).
[0032] A Virtual Machine (150) can be loaded or updated with new network functions (NFs) depending on requirements. This configuration process is governed and synchronized by the respective Network Packet Scheduler unit (160). During this configuration period, the environment of the corresponding Virtual Machine (150) is saved, and the state is set as “BLOCKED.” The information saved during this process includes the assigned Network Functions (140), the packet processing state of the Virtual Machine (150), and the contents of the queue. The Virtual Machine (150) becomes “ACTIVE” and resumes its execution once the configuration is over.
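The save/block/resume cycle described in this paragraph can be sketched as below. The “BLOCKED” and “ACTIVE” state strings come from the description; the class, method, and NF names are hypothetical.

```python
class ReconfigurableVM:
    def __init__(self, nfs, queue):
        self.state = "ACTIVE"
        self.nfs = nfs          # currently assigned network functions
        self.queue = queue      # pending packet processing requests
        self._saved = None

    def begin_update(self):
        # Save the assigned NFs and the queue contents, then block
        # the VM while new NFs are loaded.
        self._saved = (dict(self.nfs), list(self.queue))
        self.state = "BLOCKED"

    def finish_update(self, new_nfs):
        # Restore the saved environment, merge in the new NFs, resume.
        saved_nfs, saved_queue = self._saved
        self.nfs = {**saved_nfs, **new_nfs}
        self.queue = saved_queue
        self.state = "ACTIVE"

vm = ReconfigurableVM({"firewall": None}, ["pkt1"])
vm.begin_update()
assert vm.state == "BLOCKED"    # VM is blocked during reconfiguration
vm.finish_update({"nat": None})
```

Once the update finishes, the VM holds both the old and the newly loaded NFs and continues draining the queue it had saved.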
[0033] The Network Packet Scheduler (160) unit keeps track of the information about the Virtual Machines (150). This information includes the state of each Virtual Machine (150), the network functions running in it along with their tags, and the processing statistics (average packet processing time, current queue size). A Network Packet Scheduler unit performs two specific tasks: Network Function (140) sequence determination and Virtual Machine (150) allocation.
i) Network Function-sequence determination: On receipt of a packet request for execution of an application, the Network Packet Scheduler (160) unit extracts the following fields from the packet header: (i) source and destination IP addresses, (ii) source and destination ports, (iii) packet size, and (iv) encryption type. Then, it determines the ordered list of Network Functions (140) (expressed as Seq(NFi)) that is required to process the packet request. This list is created using filtering conditions based on the extracted fields. A packet should execute all the Network Functions (140) in the sequence for the successful execution of an application.
ii) Virtual Machine allocation: After determining the Network Function (140) sequence for a packet request, the Network Packet Scheduler (160) unit allocates the packet to the respective virtual machines for its execution. Typically, this allocation is achieved by providing a mapping between a Virtual Machine (150) and a Network Function (140); such a mapping can be expressed as NFi → VMk for the ith network function mapped to the kth Virtual Machine (150). For performing the allocation, one or more specific data structures may be used; for example, a Network Function Allocation Table (NFAT) and a Virtual Machine Status Table (VMST) may be used.
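Network Function-sequence determination (item i above) might look like the following sketch. The concrete filtering conditions and NF names are invented for illustration only; the specification fixes the extracted fields but not the rules.

```python
def determine_nf_sequence(header):
    """Build Seq(NFi) from header fields using illustrative filter rules."""
    seq = ["routing_control"]               # every packet is routed
    if header.get("dst_port") in (80, 443):
        seq.append("load_balancing")        # web traffic is balanced
    if header.get("encryption") is None:
        seq.append("security_enforcement")  # unencrypted traffic is inspected
    if header.get("size", 0) > 1500:
        seq.append("flow_monitoring")       # oversized flows are monitored
    return seq

hdr = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 41000, "dst_port": 443,
       "size": 9000, "encryption": None}
seq = determine_nf_sequence(hdr)
```

The resulting ordered list must be executed in full for the application request to succeed, as the paragraph above states.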
The standard notation used in the description of the Network Function Allocation Table (NFAT) and Virtual machine status table (VMST) tables are provided in Table 1 below:
Table 1: Notation used to describe functions in NFAT and VMST Tables
Network Function Allocation Table (NFAT): It records the availability of network functions execution in the Virtual Machines (150). This table is indexed by Network Function (140) number. One entry of NFAT is of the form:
<NFi; <VMj, VMqj>; …; <VMk, VMqk>>
where
NFi is the ith Network Function;
VMj is the jth Virtual Machine;
VMqj is the current size of the queue in Virtual Machine j;
VMk is the kth Virtual Machine;
VMqk is the current size of the queue in Virtual Machine k.
The queue size of a Virtual Machine (150) is increased after allocation of a packet to it and decreased after completion of the related network functions.
Virtual Machine Status Table (VMST): This table stores the current state of a VM and the list of network functions available in it along with their tag numbers. The table is indexed by Virtual Machine (150) number.
An entry in VMST is of the form:
<VMi; VMi,st; <1, NF1, TNF1>; …; <k, NFk, TNFk>>
where
VMi is the ith Virtual Machine;
VMi,st is the current state of the ith Virtual Machine;
NFj is the Network Function with tag number j in the Virtual Machine;
TNFj is the average completion time of Network Function NFj.
The Network Packet Scheduler iterates through the Network Function (140) sequence and, for each entry NFi, adds a tuple to the ordered list. After updating the list by iterating through the NF sequence, the Network Packet Scheduler sends the packet to the first VM in the sequence.
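The NFAT/VMST-driven allocation (item ii above) can be sketched as follows. The table shapes and the shortest-queue selection policy are assumptions for the example; the specification fixes only what the tables record, not the selection rule.

```python
# NFAT: Network Function number -> list of (VM id, current queue size)
nfat = {
    "NF1": [("VM1", 2), ("VM2", 0)],
    "NF2": [("VM2", 0)],
}
# VMST: VM id -> (state, {tag: NF}, average completion time)
vmst = {
    "VM1": ("ACTIVE", {1: "NF1"}, 0.8),
    "VM2": ("ACTIVE", {1: "NF1", 2: "NF2"}, 0.5),
}

def allocate(nf_sequence):
    """For each NF, pick an ACTIVE VM hosting it with the shortest queue."""
    route = []
    for nf in nf_sequence:
        candidates = [(q, vm) for vm, q in nfat[nf]
                      if vmst[vm][0] == "ACTIVE"]
        _, vm = min(candidates)     # shortest queue wins
        route.append(vm)
        # queue size grows on allocation (shrinks again on completion)
        nfat[nf] = [(v, qs + 1 if v == vm else qs) for v, qs in nfat[nf]]
    return route

route = allocate(["NF1", "NF2"])    # packet is sent to route[0] first
```

Here both NFs land on VM2 because it is idle, and VM2's recorded queue size is incremented, mirroring the bookkeeping described for the NFAT.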
[0034] Subsequently, the Virtual Machine (150) sends an acknowledgment to the corresponding Network Packet Scheduler (160) unit after the successful execution of corresponding Network Function (140) in the Virtual Machine (150), and the Network Packet Scheduler unit updates the statistics of that Virtual Machine (150).
[0035] On completion of processing of the Network Functions (140), control passes to the Packet Processor unit (120). The Packet Processor unit (120) absorbs the network traffic. The Packet Processor (120) generates flow rules after processing the Network Functions (140) if the associated network packets are trusted. The generated flow rules are sent to one or more switches in the Data Plane for further processing. For interfacing with one or more Software Defined Networks, a specific unit called the Network Control Unit (130) may be utilized. The Network Control Unit (130) interfaces with similar units in one or more Software Defined Networks for its operation.
[0036] Accordingly, in one embodiment, the manner in which the one or more units of the Control Plane work/operate to manage traffic is described below. These units may be implemented using hardware, software, or a combination thereof.
[0037] The network packet scheduler unit (160) receives one or more network requests via a southbound network interface from one or more network switches (S1-S3) connected to one or more host points (H1-H2), one or more application servers (AP1-AP2), or one or more network management servers (NMS). The network requests comprise one or more network packets that are to be processed. The network packet scheduler unit (160) then maps the received network requests to one or more network functions (140). The received network packets facilitate processing of the network functions (140). These network functions are then tagged to one or more virtual machines (150) for processing by inserting an entry in the Network Function Allocation Table (NFAT). For example, the network functions may perform routing control, energy-efficiency, security enforcement, load balancing, and flow monitoring in the network. On insertion into the NFAT, a network function number may be allocated to it.
[0038] The availability of a Virtual Machine for processing is indicated in the Virtual Machine Status Table (VMST). Based on the entries in the NFAT and the VMST, a specific Virtual Machine is identified from the one or more Virtual Machines for processing a network function. The Virtual Machines (150) maintain a queue of unprocessed Network Functions, from which they pick one Network Function at a time for processing. The Virtual Machines (150) then process the Network Functions (140). On processing of the Network Functions, a packet processor generates flow rules and sends them to one or more switches in the Data Plane for further processing. If the packets associated with the Network Functions are not trusted, the network packets are dropped.
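The trust check and flow-rule generation performed by the packet processor can be sketched as below. The flow-rule fields and the boolean trust flag are placeholders; the specification does not define a rule format or how trust is established.

```python
def process(packet, switches):
    """Generate a flow rule for a trusted packet and push it to all switches."""
    if not packet.get("trusted", False):
        return None                           # untrusted packets are dropped
    rule = {"match": (packet["src_ip"], packet["dst_ip"]),
            "action": "forward"}
    for sw in switches:                       # send rule to Data Plane switches
        sw.setdefault("flow_table", []).append(rule)
    return rule

switches = [{"id": "S1"}, {"id": "S2"}]
rule = process({"trusted": True, "src_ip": "10.0.0.1",
                "dst_ip": "10.0.0.2"}, switches)
dropped = process({"trusted": False, "src_ip": "10.0.0.3",
                   "dst_ip": "10.0.0.4"}, switches)
```

Only the trusted packet yields a rule in each switch's flow table; the untrusted one is silently discarded, matching the drop behaviour described above.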
[0039] In one embodiment, the virtual machines are configured to concurrently execute one or more network functions and, on completion of the processing of the network functions, to notify the network packet scheduler unit (160) of the completion of the processing. In implementing a network of Software Defined Networks, a Network Control Unit (130) may be utilized.
[0040] The manner in which the network packets may be classified and processed to realize effective traffic management in a software defined network is described with reference to the flow chart of FIG. 2. The method of FIG. 2 may operate in conjunction with FIG. 1. The flow chart begins in step 201, in which control immediately passes to step 210.
[0041] At step 210, one or more network requests are received. These network requests contain one or more network packets which are required to be processed to perform one or more network operations.
[0042] At step 220, these network requests are mapped to one or more network functions. These network functions are for performing atomic network control operations in transport and network layer fields, wherein the network functions correspond to performing routing control, energy-efficiency, security enforcement, load balancing, and flow monitoring in the network.
[0043] At step 230, the network functions are tagged to one or more virtual machines that are available.
[0044] At step 240, one or more Virtual Machines are allocated for processing the Network Functions, using the Virtual Machine allocation technique described earlier. Typically, such allocation is performed using the Network Function Allocation Table (NFAT) and the Virtual Machine Status Table (VMST) described previously.
[0045] At step 250, the network functions are processed in the identified virtual machines. Such processing is performed if and only if the network packets associated with the network functions are trusted.
[0046] At step 260, flow rules are generated from processed network functions. At step 270, the generated flow rules are sent to one or more network switches for further processing. The flow chart ends in step 299.
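Steps 210-270 can be condensed into one illustrative pipeline. The mapping rules, trust flag, and flow-rule format below are placeholders, not the disclosed method; the sketch only mirrors the order of the steps.

```python
def traffic_management(request):
    rules = []
    for pkt in request["packets"]:                 # step 210: requests received
        nfs = ["routing_control"]                  # step 220: map to NFs
        if pkt.get("dst_port") == 80:
            nfs.append("load_balancing")
        route = [f"VM{i + 1}" for i in range(len(nfs))]  # steps 230-240: tag/identify
        if not pkt.get("trusted", False):
            continue                               # untrusted packets dropped
        for nf, vm in zip(nfs, route):             # step 250: process NFs in VMs
            pkt.setdefault("processed", []).append((nf, vm))
        rules.append({"match": pkt["dst_port"],    # step 260: generate flow rule
                      "action": "forward"})
    return rules                                   # step 270: sent to switches

rules = traffic_management({"packets": [
    {"dst_port": 80, "trusted": True},
    {"dst_port": 22, "trusted": False},
]})
```

Only the trusted packet survives to step 260, so a single flow rule reaches the switches.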
[0047] In one embodiment, by performing the method steps 201-299, the various units of the Control Plane operate to provide an effective method for traffic management in a software defined network that enables inter-controller communication and leads to better throughput, lower delay and a lower packet drop ratio (PDR) in the network.
[0048] The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software, or a combination thereof. It should be appreciated that any component or collection of components that perform the functions described above can be generically considered as one or more processors that control the above discussed functions. The one or more processors can be implemented in numerous ways, such as with dedicated hardware, or with general purpose hardware that is programmed using microcode or software to perform the functions recited above. Further, one or more of the capabilities can be emulated.
[0049] Figure 3 shows a standard processing unit for performing the operations of devices such as host points, application servers, network management servers and a server to run the virtual machines. The computing architecture contains one or more processors 310, memory 320, network interface 350 and data store 360. In one embodiment, specific firmware along with specialized hardware may be present to enable operation of one or more virtual machines. Certain components for implementing the Software Defined Networking can be made using specific hardware. For example, Software Defined Network Development Board (OMAPL-138) may be used for implementation of Software Defined Networking (SDN).
[0050] The memory 320 and data store 360 comprise standard computer-readable storage media, which may include, but are not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), and a portable compact disc read-only memory (CD-ROM).
[0051] One or more servers and devices may communicate via a network using the network interface 350, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise, but is not limited to, copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
[0052] Accordingly, a method and apparatus for traffic management using a network packet scheduler for a Software Defined Network are described, which not only manage the traffic but also provide effective inter-controller communication, leading to better throughput, lower delay and packet drop ratio (PDR) in the network and increased network performance.
[0053] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the invention.
CLAIMS:
1. A method (200) of traffic management for a software defined network, the method comprising:
receiving (210) one or more network requests;
mapping (220) the network requests to one or more network functions;
tagging (230) the network functions to one or more virtual machines available for processing;
identifying (240) one or more virtual machines for processing of the network functions;
processing (250) the network functions in the identified virtual machines;
generating (260) flow rules from processed network functions; and
sending (270) the generated flow rules to one or more network switches in the network.
2. The method as claimed in claim 1, wherein the network requests are received from one or more network switches connected to one or more host points or one or more application servers.
3. The method as claimed in claim 1, wherein the network requests comprise one or more network packets that are mapped to one or more network functions, wherein the one or more network packets facilitate processing of the network functions.
4. The method as claimed in claim 1, wherein the network functions are processed only if the associated network packets are trusted.
5. The method as claimed in claim 1, wherein the network functions are for atomic network control operations in transport and network layer fields, wherein the network functions correspond to at least one of routing control, energy-efficiency, security enforcement, load balancing, and flow monitoring in the network.
6. The method as claimed in claim 1, wherein the tagging of network functions to one or more virtual machines is performed using a Network Function Allocation Table (NFAT).
7. The method as claimed in claim 1, wherein the identifying of a virtual machine for execution of a network function is performed using Virtual Machine Status Table (VMST).
8. An apparatus for traffic management in a software defined network comprising:
one or more processors and a memory, the one or more processors comprising:
a network packet scheduler unit (160) configured to:
receive one or more network requests;
map the network requests to one or more network functions (140);
tag network functions to one or more virtual machines for processing;
identify a virtual machine for processing of one or more network functions;
one or more virtual machines (150) configured to process one or more network functions; and
a packet processor unit (120) configured to generate and send flow rules for one or more switches.
9.The apparatus as claimed in claim 8, the network packet scheduler unit (160) is configured to receive network requests via a southbound network interface from one or more network switches (S1-S3) connected one or more host points (H1-H2) or one or more application servers (AP1-AP2) or one or more network management servers (NMS) .
10.The apparatus as claimed in claim 8, the network requests received by the network packet scheduler unit (160) comprises of one or more network packets that are mapped to one or more network functions, wherein the one or more network packets facilitate processing of the network functions.
11.The apparatus as claimed in claim 8, the network packet scheduler unit (160) determines whether the network packets are trusted.
12.The apparatus as claimed in claim 8, wherein the network functions are for atomic network control operations in transport and network layer fields, wherein the network functions correspond to at least one of routing control, energy-efficiency, security enforcement, load balancing, and flow monitoring in the network.
13. The apparatus as claimed in claim 8, wherein a network control unit (130) enables interfacing of said software defined network with one or more other software defined networks.
14. The apparatus as claimed in claim 8, wherein the virtual machines are configured to concurrently execute one or more network functions; and
on completion of the processing of the network functions, the virtual machines are configured to intimate the network packet scheduler unit (160) of the completion of the processing.
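Claim 14 describes two behaviours: concurrent execution of network functions across virtual machines, and a completion notification back to the scheduler unit (160). A queue-based notification is one plausible mechanism; the sketch below assumes it purely for illustration.

```python
# Sketch of claim 14: virtual machines execute network functions
# concurrently and, on completion, intimate the scheduler unit (160).
# The queue-based notification channel is an assumed mechanism.
import queue
import threading

done: "queue.Queue[str]" = queue.Queue()  # completion channel to scheduler

def run_network_function(vm_id: str, nf: str) -> None:
    # ... process the packets of the network function here ...
    done.put(f"{vm_id}:{nf}")  # notify the scheduler of completion

workers = [threading.Thread(target=run_network_function, args=(vm, nf))
           for vm, nf in [("VM1", "load_balancing"), ("VM2", "flow_monitoring")]]
for t in workers:
    t.start()
for t in workers:
    t.join()

completions = sorted(done.get() for _ in range(2))
print(completions)  # ['VM1:load_balancing', 'VM2:flow_monitoring']
```

On receiving such a notification, the scheduler could mark the corresponding VMST entry free again, closing the loop with claims 6 and 7.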
15. A system for traffic management in a software defined network, the system comprising:
one or more host points (HP1, HP2) interfaced with one or more network switches (S1-S3);
one or more application servers (AP1, AP2);
one or more network management servers (NMS); and
the system being configured to perform the steps of any one of claims 1 to 7.
| # | Name | Date |
|---|---|---|
| 1 | 202041012879-PROVISIONAL SPECIFICATION [24-03-2020(online)].pdf | 2020-03-24 |
| 2 | 202041012879-Response to office action [01-11-2024(online)].pdf | 2024-11-01 |
| 3 | 202041012879-FORM 1 [24-03-2020(online)].pdf | 2020-03-24 |
| 4 | 202041012879-PROOF OF ALTERATION [07-10-2024(online)].pdf | 2024-10-07 |
| 5 | 202041012879-IntimationOfGrant24-09-2024.pdf | 2024-09-24 |
| 6 | 202041012879-DRAWINGS [24-03-2020(online)].pdf | 2020-03-24 |
| 7 | 202041012879-PatentCertificate24-09-2024.pdf | 2024-09-24 |
| 8 | 202041012879-Abstract_24-03-2020.jpg | 2020-03-24 |
| 9 | 202041012879-FORM-26 [21-06-2020(online)].pdf | 2020-06-21 |
| 10 | 202041012879-ABSTRACT [09-05-2023(online)].pdf | 2023-05-09 |
| 11 | 202041012879-FORM-26 [24-06-2020(online)].pdf | 2020-06-24 |
| 12 | 202041012879-CLAIMS [09-05-2023(online)].pdf | 2023-05-09 |
| 13 | 202041012879-FORM 3 [14-09-2020(online)].pdf | 2020-09-14 |
| 14 | 202041012879-COMPLETE SPECIFICATION [09-05-2023(online)].pdf | 2023-05-09 |
| 15 | 202041012879-ENDORSEMENT BY INVENTORS [14-09-2020(online)].pdf | 2020-09-14 |
| 16 | 202041012879-DRAWING [09-05-2023(online)].pdf | 2023-05-09 |
| 17 | 202041012879-DRAWING [14-09-2020(online)].pdf | 2020-09-14 |
| 18 | 202041012879-FER_SER_REPLY [09-05-2023(online)].pdf | 2023-05-09 |
| 19 | 202041012879-CORRESPONDENCE-OTHERS [14-09-2020(online)].pdf | 2020-09-14 |
| 20 | 202041012879-FER.pdf | 2022-11-15 |
| 21 | 202041012879-COMPLETE SPECIFICATION [14-09-2020(online)].pdf | 2020-09-14 |
| 22 | 202041012879-FORM 18 [27-06-2022(online)].pdf | 2022-06-27 |
| 23 | 202041012879-Correspondence, Form1_05-10-2020.pdf | 2020-10-05 |
| 24 | 202041012879-Proof of Right [23-09-2020(online)].pdf | 2020-09-23 |
| 25 | 202041012879SearchE_14-11-2022.pdf | |