
Method And System For Distributing Data Traffic In A Network

Abstract: The present disclosure relates to a system and method for distributing data traffic in a network. The disclosure encompasses: receiving, a request for routing data traffic associated with a network function virtualization platform decision and analytics (NPDA) unit; receiving, a health status information of a plurality of NPDA instances connected with a load balancer (LB) unit [300a], wherein the health status information of each of the plurality of NPDA instances is indicative of one of a healthy instance and a malfunctioning instance; identifying, one or more healthy NPDA instances from the plurality of NPDA instances based on the health status information; and distributing, the data traffic from the NPDA unit among the one or more healthy NPDA instances. [FIG. 4]


Patent Information

Application # 202321065358
Filing Date
28 September 2023
Publication Number
20/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Rizwan Ahmad
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Kapil Gill
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Arpit Jain
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Shashank Bhushan
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Meenakshi Sarohi
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Kumar Debashish
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Supriya Kaushik De
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Gaurav Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Kishan Sahu
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Vinay Gayki
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Durgesh Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Rahul Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR DISTRIBUTING DATA
TRAFFIC IN A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR DISTRIBUTING DATA TRAFFIC IN A
NETWORK
FIELD OF DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of
wireless communication systems. More particularly, embodiments of the present
disclosure relate to a method and a system for distributing data traffic in a network.
BACKGROUND
[0002] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is to be used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of prior art.
[0003] Wireless communication technology has rapidly evolved over the past
few decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. 3G technology
marked the introduction of high-speed internet access, mobile video calling, and
location-based services. The fourth generation (4G) technology revolutionized
wireless communication with faster data speeds, better network coverage, and
improved security. Currently, the fifth generation (5G) technology is being
deployed, promising even faster data speeds, low latency, and the ability to connect
multiple devices simultaneously. With each generation, wireless communication
technology has become more advanced, sophisticated, and capable of delivering
more services to its users.
[0004] In communication networks, due to the rapid growth of technology,
different types of services and microservices have proliferated to provide support
and services as per user and system consumption requirements. The microservices
such as network function virtualization (NFV) Platform Decision Analytics
(NPDA) perform hysteresis evaluation and healing services. In conventional
systems, when an NPDA instance that is performing hysteresis analysis goes
down, there may be a service hindrance in the network. In another case,
when a user needs to access the policy data information, but the NPDA instance is
not available, then there is no way to fetch or access the policy data information.
The NPDA takes care of CNF/CNFC or VNF/VNFC policy Create, Read, Update
and Delete (CRUD) operations, threshold-based or restoration-based policy
hysteresis evaluation and policy execution engine (PEEGN) microservice
invocation to inform the scaling or healing decisions that have to be applied over the
CNF/CNFC or VNF/VNFC. For achieving these functionalities, high availability
(HA) support should be present at the NPDA end, i.e., in case any NPDA instance is
down, then another NPDA instance should provide the same functionality.
However, to make such a decision regarding the availability of other NPDA
instances, by the current NPDA instance, is a cumbersome task. Further, due to high
data traffic on a particular instance (such as an abnormal instance), requests may
fail, resulting in failure of the system. Therefore, efficiency and performance of the
system may fall below optimal operational requirements.
[0005] Thus, there exists an imperative need in the art to provide an efficient
system and method for distributing data traffic in a network, which the present
disclosure aims to address.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present
disclosure in a simplified form that are further described below in the detailed
description. This summary is not intended to identify the key features or the scope
of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method for
distributing data traffic in a network. The method includes receiving, by a
processing unit at a load balancer (LB) unit, a request for routing data traffic
associated with a network function virtualization platform decision and analytics
(NPDA) unit. Next, the method includes receiving, by the processing unit at the LB
unit, a health status information of a plurality of NPDA instances connected with
the LB unit, wherein the health status information of each of the plurality of NPDA
instances is indicative of one of a healthy instance and a malfunctioning instance.
Next, the method includes identifying, by the processing unit at the LB unit, one or
more healthy NPDA instances from the plurality of NPDA instances based on the
health status information. Thereafter, the method includes distributing, by the
processing unit via the LB unit, the data traffic from the NPDA unit among the one
or more healthy NPDA instances.
[0008] In an exemplary aspect of the present disclosure, wherein the LB unit
and the NPDA unit are in communication via an interface.
[0009] In an exemplary aspect of the present disclosure, the method further
comprises distributing, by the processing unit, the data traffic among the one or more
healthy NPDA instances in a round robin manner.
[0010] In an exemplary aspect of the present disclosure, the method further
comprises storing, by each of the plurality of NPDA instances, the corresponding
health status information in at least one of a local cache associated with the each of
the plurality of NPDA instances, and an elastic search database.
[0011] In an exemplary aspect of the present disclosure, wherein, in response
to a healthy NPDA instance becoming a malfunctioning NPDA instance, the
method comprises redirecting, by the processing unit via the LB unit, the data traffic
from the malfunctioning NPDA instance to a healthy NPDA instance.
[0012] In an exemplary aspect of the present disclosure, wherein the health
status information of the plurality of NPDA instances is received by the processing
unit via an orchestration manager.
[0013] In an exemplary aspect of the present disclosure, the method further
comprises redistributing a data traffic associated with one or more malfunctioning
NPDA instances from the plurality of NPDA instances to the one or more healthy
NPDA instances, wherein the one or more malfunctioning NPDA instances are
identified based on the corresponding health status information associated with the
each of the plurality of NPDA instances.
[0014] Another aspect of the present disclosure may relate to a system for
distributing network data traffic in a network environment. The system comprising:
a processing unit configured to: receive, at a load balancer (LB) unit, from a
network function virtualization platform decision and analytics (NPDA) unit, a
request for routing data traffic through the NPDA unit; receive, at the LB unit,
health status information of a plurality of NPDA instances connected with the LB
unit, wherein the health status information of each of the plurality of NPDA
instances is indicative of one of a healthy instance and a malfunctioning instance;
transmit, via the LB unit, a request for accepting at least a part of the data traffic,
to one or more healthy NPDA instances; and distribute, via the LB unit, the data
traffic from the NPDA unit among the one or more healthy NPDA instances.
[0015] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for distributing
network data traffic in a network environment, the instructions include executable
code which, when executed by one or more units of a system, causes: a processing
unit of the system to receive, at a load balancer (LB) unit, from a network function
virtualization platform decision and analytics (NPDA) unit, a request for routing
data traffic through the NPDA unit; receive, at the LB unit, health status information
of a plurality of NPDA instances connected with the LB unit, wherein the health
status information of each of the plurality of NPDA instances is indicative of one
of a healthy instance and a malfunctioning instance; transmit, via the LB unit, a
request for accepting at least a part of the data traffic, to one or more healthy NPDA
instances; and distribute, via the LB unit, the data traffic from the NPDA unit
among the one or more healthy NPDA instances.
OBJECTS OF THE INVENTION
[0016] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0017] It is an object of the present disclosure to provide a system and a method
for handling heavy data traffic on servers efficiently.
[0018] It is another object of the present disclosure to provide a system and a
method for distributing data traffic using the NPDA_LB interface.
[0019] It is another object of the present disclosure to provide a system and a
method for asynchronous event-based implementation to utilize the NPDA_LB
interface efficiently.
[0020] It is another object of the present disclosure to provide a system and a
method for performing fault tolerance by a load balancer for any failure in a high
availability mode, such that, if any one NPDA instance is down, then a next
available NPDA instance will take care of the request.
DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Also, the embodiments shown in the figures are
not to be construed as limiting the disclosure, but the possible variants of the method
and system according to the disclosure are illustrated herein to highlight the
advantages of the disclosure. It will be appreciated by those skilled in the art that
disclosure of such drawings includes disclosure of electrical components or
circuitry commonly used to implement such components.
[0022] FIG. 1 illustrates an exemplary block diagram of a management and
orchestration (MANO) architecture.
[0023] FIG. 2 illustrates an exemplary block diagram of a computing device
upon which the features of the present disclosure may be implemented, in
accordance with exemplary implementations of the present disclosure.
[0024] FIG. 3 illustrates an exemplary block diagram of a system for
distributing data traffic in a network environment, in accordance with exemplary
implementations of the present disclosure.
[0025] FIG. 4 illustrates a method flow diagram for distributing data traffic in
a network, in accordance with exemplary implementations of the present disclosure.
[0026] FIG. 5 illustrates an exemplary system architecture for distributing data
traffic in a network environment, in accordance with exemplary implementations
of the present disclosure.
[0027] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0028] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0029] The ensuing description provides exemplary embodiments only, and is
not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0030] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0031] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations may be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure.
[0032] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0033] As used herein, a “processing unit” or “processor” or “operating
processor” includes one or more processors, wherein processor refers to any logic
circuitry for processing instructions. A processor may be a general-purpose
processor, a special purpose processor, a conventional processor, a digital signal
processor, a plurality of microprocessors, one or more microprocessors in
association with a Digital Signal Processing (DSP) core, a controller, a
microcontroller, Application Specific Integrated Circuits, Field Programmable
Gate Array circuits, any other type of integrated circuits, etc. The processor may
perform signal coding, data processing, input/output processing, and/or any other
functionality that enables the working of the system according to the present
disclosure. More specifically, the processor or processing unit is a hardware
processor.
[0034] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld
device”, “a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0035] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0036] As used herein, “interface” or “user interface” refers to a shared
boundary across which two or more separate components of a system exchange
information or data. The interface may also refer to a set of rules or protocols that
define communication or interaction of one or more modules or one or more units
with each other, which also includes the methods, functions, or procedures that may
be called.
[0037] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor,
a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0038] As used herein, the transceiver unit includes at least one receiver and at
least one transmitter configured respectively for receiving and transmitting data,
signals, information or a combination thereof between units/components within the
system and/or connected with the system.
[0039] As used herein, network function virtualization (NFV) platform
decision analytics (NPDA) helps in deciding the priority of using the
network resources and manages network data traffic.
[0040] As discussed in the background section, the current known solutions
have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a
method and a system for distributing data traffic in a network environment. The
present disclosure provides an efficient system and method for distributing data
10 traffic in a network. The present method and system provide an NPDA_LB
interface, which ensures that no instance gets overloaded due to bulk data traffic.
The present system and method provide the NPDA_LB interface, which distributes
incoming or outgoing requests easily among all NPDA instances. The present
method and system enable NPDA threshold-based resource events, restoration
events, or policy invocation events to be managed by this interface for all
the operations that can be performed on a Management and Orchestration (MANO)
platform. The present method and system support HTTP/HTTPS configurations in
parallel. The present method and system route client requests across all servers in
a manner that maximizes speed and capacity utilization. The present method and
system may perform header-based routing, which may save time and database hits.
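By way of illustration only, the following minimal Python sketch shows one way the header-based routing and round-robin fallback described above could be realised; the class name, the health map and the "X-NPDA-Instance" header are hypothetical and not taken from the specification.

    from itertools import cycle

    class NpdaLB:
        """Illustrative load balancer sketch; not the claimed implementation."""

        def __init__(self, instances):
            # Health map: instance id -> "healthy" or "malfunctioning".
            self.health = {i: "healthy" for i in instances}
            self._ring = cycle(instances)

        def route(self, headers):
            # Header-based routing: honour a pinned instance while it is
            # healthy, avoiding a database hit to resolve the target.
            pinned = headers.get("X-NPDA-Instance")
            if pinned and self.health.get(pinned) == "healthy":
                return pinned
            # Otherwise fall back to round robin over healthy instances.
            for _ in range(len(self.health)):
                candidate = next(self._ring)
                if self.health.get(candidate) == "healthy":
                    return candidate
            raise RuntimeError("no healthy NPDA instance available")

For example, NpdaLB(["npda-1", "npda-2"]).route({}) returns "npda-1" on the first call and "npda-2" on the next.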
[0041] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
[0042] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0043] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture/ platform [100], in
accordance with exemplary implementation of the present disclosure. The MANO
architecture [100] may be developed for managing telecom cloud infrastructure
automatically, managing design or deployment design, managing instantiation of
network node(s)/ service(s) etc. The MANO architecture [100] deploys the network
node(s) in the form of Virtual Network Function (VNF) and Cloud-native/
Container Network Function (CNF). The system as provided by the present
disclosure may comprise one or more components of the MANO architecture [100].
The MANO architecture [100] may be used to auto-instantiate the VNFs into the
corresponding environment of the present disclosure so that it could help in
onboarding other vendors' CNFs and VNFs to the platform.
[0044] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer [102], a network function virtualization (NFV) and software defined
network (SDN) design function module [104], a platform foundation services
module [106], a Platform Schedulers & Cron Jobs module [108] and a platform
resource adapters and utilities module [112]. All the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0045] The NFV and SDN design function module [104] comprises a VNF
lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services
catalogue [1046], a network slicing and service chaining manager [1048], a physical
and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF
lifecycle manager (compute) [1042] may be responsible for deciding on which
server of the communication network the microservice will be instantiated. The
VNF lifecycle manager (compute) [1042] may manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
manager (compute) [1042] may be responsible for determining which sequence to
be followed for executing the process. For example, in an AMF network function of the
communication network (such as a 5G network), the sequence for execution of
processes P1 and P2 etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the
logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] may be used for the CNFs
lifecycle management.
[0046] The platform foundation services module [106] comprises a
microservices elastic load balancer [1062], an identity & access manager [1064], a
command line interface (CLI) [1066], a central logging manager [1068], and an
event routing manager [1070]. The microservices elastic load balancer [1062] may
15 be used for maintaining the load balancing of the request for the services. The
identity & access manager [1064] may be used for logging purposes. The command
line interface (CLI) [1066] may be used to provide commands to execute certain
processes which require changes during the run time. The central logging manager
[1068] may be responsible for keeping the logs of every service. These logs are
generated by the MANO platform [100]. These logs are used for debugging
purposes. The event routing manager [1070] may be responsible for routing the
events i.e., the application programming interface (API) hits to the corresponding
services.
[0047] The platform core services module [108] comprises an NFV
infrastructure monitoring manager [1082], an assure manager [1084], a
performance manager [1086], a policy execution engine [1088], a capacity
monitoring manager [1090], a release management (mgmt.) repository [1092], a
configuration manager & GCT [1094], an NFV platform decision analytics [1096],
a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF
backup & upgrade manager [1102], a microservice auditor [1104], and a platform
operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs. For example, any metrics such as CPU utilization by the VNF. The assure manager
[1084] may be responsible for supervising the alarms the vendor may be generating.
The performance manager [1086] may be responsible for managing the
performance counters. The policy execution engine (PEGN) [1088] may be
responsible for managing all of the policies. The capacity monitoring manager
(CMM) [1090] may be responsible for sending the request to the PEGN [1088].
The release management (mgmt.) repository (RMR) [1092] may be responsible for
managing the releases and the images of all of the vendor's network nodes. The
configuration manager & (GCT) [1094] manages the configuration and GCT of all
the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding
the priority of using the network resources. It may be further noted that the policy
execution engine (PEGN) [1088], the configuration manager & GCT [1094] and
the NPDA [1096] work together. The platform NoSQL DB [1098] may be a
database for storing all the inventory (both physical and logical) as well as the
metadata of the VNFs and CNFs. The platform schedulers and cron jobs [1100]
schedules tasks such as, but not limited to, triggering an event, traversing the
network graph, etc. The VNF backup & upgrade manager [1102] takes backups of
the images, binaries of the VNFs and the CNFs and produces those backups on
demand in case of server failure. The microservice auditor [1104] audits the
microservices. For example, in a hypothetical case, instances not being instantiated by
the MANO architecture [100] may be using the network resources. In such cases,
25 the microservice auditor [1104] audits and informs the same so that resources can
be released for services running in the MANO architecture [100]. The audit assures
that the services only run on the MANO platform [100]. The platform operations,
administration and maintenance manager [1106] may be used for newer instances
that are spawning.
[0048] The platform resource adapters and utilities module [112] further
comprises a platform external API adapter and gateway [1122], a generic decoder
and indexer (XML, CSV, JSON) [1124], a docker service adapter [1126], an API
adapter [1128], and a NFV gateway [1130]. The platform external API adapter and
gateway [1122] may be responsible for handling the external services (to the
MANO platform [100]) that require the network resources. The generic decoder
and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system
in the XML, CSV, JSON format. The docker service adapter [1126] may be the
interface provided between the telecom cloud and the MANO architecture [100] for
10 communication. The API adapter [1128] may be used to connect with the virtual
machines (VMs). The NFV gateway [1130] may be responsible for providing the
path to each service going to/incoming from the MANO architecture [100].
[0049] The docker service adapter (DSA) [1126] is a microservices-based
system designed to deploy and manage Container Network Functions (CNFs) and
their components (CNFCs) across Docker nodes. The DSA [1126] offers REST
endpoints for key operations, including uploading container images to a Docker
registry, terminating CNFC instances, and creating Docker volumes and networks.
CNFs, which are network functions packaged as containers, may consist of multiple
CNFCs. The DSA [1126] facilitates the deployment, configuration, and
management of these components by interacting with Docker's API, ensuring
proper setup and scalability within a containerized environment. This approach
provides a modular and flexible framework for handling network functions in a
virtualized network setup.
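For illustration, hypothetical REST calls of the kind the DSA [1126] is described as exposing are sketched below using Python's standard library; every path and payload here is an assumption for the sketch, not a documented API of the DSA.

    import json
    import urllib.request

    def dsa_post(base_url, path, payload):
        """Send a JSON POST to a hypothetical DSA endpoint; return the reply."""
        req = urllib.request.Request(
            url=base_url + path,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Hypothetical operations mirroring the ones named above:
    # dsa_post(base, "/images/upload", {"image": "cnfc-a:1.0"})
    # dsa_post(base, "/cnfc/terminate", {"instance": "cnfc-a-7"})
    # dsa_post(base, "/volumes/create", {"name": "cnf-data"})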
[0050] Referring to FIG. 2, an exemplary block diagram of a computing device
[200] (also referred to herein as a computer system [200]) upon which the features of
the present disclosure may be implemented in accordance with exemplary
implementation of the present disclosure, is shown. In an implementation, the
computing device [200] may also implement a method for distributing data traffic
in a network environment utilising the system. In another implementation, the
computing device [200] itself implements the method for distributing data traffic in
a network environment using one or more units configured within the computing
device [200], wherein said one or more units are capable of implementing the
features as disclosed in the present disclosure.
[0051] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
15 intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0052] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0053] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
20 software instructions.
[0054] The computing device [200] also may include a communication
interface [218] coupled to the bus [202]. The communication interface [218]
provides a two-way data communication coupling to a network link [220] that is
connected to a local network [222]. For example, the communication interface
[218] may be an integrated services digital network (ISDN) card, cable modem,
satellite modem, or a modem to provide a data communication connection to a
corresponding type of telephone line. As another example, the communication
interface [218] may be a local area network (LAN) card to provide a data
communication connection to a compatible LAN. Wireless links may also be
implemented. In any such implementation, the communication interface [218]
sends and receives electrical, electromagnetic or optical signals that carry digital
data streams representing various types of information.
[0055] The computing device [200] can send messages and receive data,
including program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], the host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0056] Referring to FIG. 3, an exemplary block diagram of a system [300] for
distributing data traffic in a network environment is shown, in accordance with the
exemplary implementations of the present disclosure. The system [300] comprises
at least one load balancer unit [300a]. The load balancer unit [300a] may comprise
at least one processing unit [302] and at least one storage unit [304]. Also, all of the
components/ units of the system [300] are assumed to be connected to each other
unless otherwise indicated below. Also, in FIG. 3 only a few units are shown,
however, the system [300] may comprise multiple such units or the system [300]
may comprise any such numbers of said units, as required to implement the features
of the present disclosure. In an implementation, the system [300] may reside in a
server or a network entity. In yet another implementation, the system [300] may
reside partly in the server/ network entity.
[0057] The system [300] is configured for distributing data traffic in a network
environment, with the help of the interconnection between the components/units of
the system [300].
[0058] The system [300] comprises a processing unit [302]. The processing
unit [302] is configured to receive, at a load balancer (LB) unit [300a], a request for
routing data traffic associated with a network function virtualization platform
decision and analytics (NPDA) unit. In a network environment, the NPDA unit may
receive bulk data traffic during operation. In response to this, the NPDA unit
may communicate with the LB unit [300a] over an interface such as an NPDA_LB
interface. The LB unit [300a] is configured to receive the request for routing data
traffic through the NPDA unit. In an implementation, the request may comprise
information associated with NPDA identification, and data traffic.
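As an illustration of such a request, a minimal payload carried over the NPDA_LB interface might look like the following Python sketch; the field names are assumptions and do not appear in the specification.

    # Hypothetical routing request: identifies the NPDA unit and the traffic.
    routing_request = {
        "npda_id": "npda-unit-1",             # NPDA identification
        "traffic": {"pending_events": 1500},  # data traffic to be distributed
    }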
[0059] The NPDA_LB interface may connect the NPDA unit and the LB unit
[300a]. The NPDA_LB interface allows for bidirectional communication between
the NPDA unit, and the LB unit [300a]. In an embodiment, the NPDA_LB interface
is configured to facilitate exchange of information using hypertext transfer protocol
(HTTP) REST application programming interface (API). In an embodiment, the HTTP REST
API is used in conjunction with JSON and/or XML communication media. In
another embodiment, the NPDA_LB interface is configured to facilitate exchange
of information by establishing a web-socket connection between the NPDA unit,
and the LB unit [300a]. A web-socket connection may involve establishing a
persistent connectivity between the NPDA unit, and the LB unit [300a]. An
example of the web-socket based communication includes, without limitation, a
transmission control protocol (TCP) connection. In such a connection, information,
such as operational status, health, etc. of different components may be exchanged
through the interface using a ping-pong-based communication.
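As a simplified illustration of the ping-pong exchange described above, the following Python sketch probes an instance over a plain TCP connection; a real deployment might use a web-socket library instead, and the "ping"/"pong" wire format is an assumption.

    import socket

    def check_instance_health(host, port, timeout=2.0):
        """Return True when the instance answers a ping with a pong."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                conn.sendall(b"ping\n")                  # keep-alive probe
                return conn.recv(16).strip() == b"pong"  # expected reply
        except OSError:
            # Refused or timed-out connections mark the instance malfunctioning.
            return False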
[0060] The processing unit [302] of the system [300] is further configured to
receive, at the LB unit [300a], a health status information of a plurality of NPDA
instances connected with the LB unit [300a]. The health status information of each
of the plurality of NPDA instances is indicative of one of a healthy instance and a
malfunctioning instance. The health status information may be associated with
performance, operational efficiency, low latency and high throughput of the
plurality of NPDA instances. In an exemplary implementation, the health status
information may be one of in-service, and out of service. In an exemplary
implementation, the health status information may be good, moderate and/or poor.
In an implementation, the processing unit [302] is configured to store, for each of
the plurality of NPDA instances, the corresponding health status information in at
least one of a local cache associated with each of the plurality of NPDA instances,
and an elastic search database. In an exemplary implementation, the processing unit
[302] is configured to provide, via an orchestration manager, to the LB unit [300a],
the health status information of the plurality of NPDA instances. The orchestration
manager is configured to store details of health status information of the one or
more microservice instances. The microservice instances may include, without
limitations, the NPDA instances. In another embodiment, the processing unit [302]
is configured to store health status information of the plurality of the NPDA
instances into a storage unit [304].
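The dual write described in this paragraph can be pictured with the following minimal Python sketch, in which a dict stands in for the local cache and an injected callable stands in for the elastic search database client; both stand-ins are assumptions for illustration.

    import time

    class HealthStore:
        """Keeps health status in a local cache and a durable store."""

        def __init__(self, persist):
            self.local_cache = {}   # instance id -> latest health record
            self.persist = persist  # stand-in for an elastic search index call

        def record(self, instance_id, status):
            entry = {"instance": instance_id, "status": status, "ts": time.time()}
            self.local_cache[instance_id] = entry  # fast local read path
            self.persist(entry)                    # durable copy for other readers

    # Usage: store = HealthStore(persist=print); store.record("npda-1", "healthy")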
[0061] The processing unit [302] of the system [300] is further configured to
identify, at the LB unit [300a], one or more healthy NPDA instances from the
plurality of NPDA instances, based on the health status information. In an
implementation, after receiving the health status of the plurality of NPDA instances,
the processing unit [302] is configured to identify, at the LB unit [300a], one or
more healthy NPDA instances based on the health status information.
[0062] The processing unit [302] of the system [300] is further configured to
25 distribute, via the LB unit [300a], the data traffic from the NPDA unit among the
one or more healthy NPDA instances. In an implementation, the distribution is
based on receiving, from the one or more healthy NPDA instances, an indication
corresponding to taking an ownership of at least a part of the data traffic. After
receiving a response from the one or more healthy NPDA instances, the processing
unit [302] is configured to distribute, via the LB unit [300a], the requested data
traffic from the NPDA unit among the one or more healthy NPDA instances. The
processing unit [302] is configured to distribute the data traffic among the one or
more healthy NPDA instances in a round robin manner.
[0063] In an implementation, during an operation, in response to a healthy
NPDA instance becoming a malfunctioning NPDA instance, the processing unit
[302] is further configured to redirect, via the LB unit [300a], data traffic from the
malfunctioning NPDA instance to a healthy NPDA instance. Further, the
processing unit [302] is configured to update the statuses of the malfunctioning
NPDA instance in the storage unit [304]. The processing unit [302] is configured
to store the information of the currently healthy NPDA instances in the storage unit
[304].
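Continuing the earlier NpdaLB sketch, the following illustrative handler shows how a status change could re-mark an instance, record the change, and pick a healthy redirect target; the function name and the storage dict are hypothetical, not from the specification.

    def on_status_change(lb, storage, instance_id, new_status):
        """Update health state and, on failure, pick a redirect target."""
        lb.health[instance_id] = new_status  # update the LB's in-memory view
        storage[instance_id] = new_status    # record the change in the storage unit
        if new_status == "malfunctioning":
            # Redirect traffic to the next healthy instance (round-robin fallback).
            return lb.route({})
        return instance_id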
[0064] In an implementation, during an operation, the processing unit [302] is
configured to redistribute a data traffic associated with one or more malfunctioning
NPDA instances to the one or more healthy NPDA instances. The one or more
malfunctioning NPDA instances are identified based on the corresponding health
status information associated with each of the plurality of NPDA instances.
[0065] Further, in accordance with the present disclosure, it is to be
acknowledged that the functionality described for the various components/units can
be implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
[0066] Referring to FIG. 4, an exemplary method flow diagram [400] for
distributing data traffic in a network, in accordance with exemplary
implementations of the present disclosure, is shown. In an implementation, the
method [400] is performed by the system [300]. As shown in FIG. 4, the method
[400] starts at step [402].
[0067] At step [404], the method [400] as disclosed by the present disclosure
comprises receiving, by a processing unit [302] at a load balancer (LB) unit [300a],
a request for routing data traffic associated with a network function virtualization
platform decision and analytics (NPDA) unit. In a network, the NPDA unit may
receive bulk data traffic during operation. In response to this, the NPDA unit
may communicate with the LB unit [300a] over an interface such as an NPDA_LB
interface. The LB unit [300a] may receive the request for routing data traffic
through the NPDA unit. In an implementation, the request may comprise
information associated with NPDA identification, and data traffic.
[0068] Next, at step [406], the method [400] as disclosed by the present
disclosure comprises receiving, by the processing unit [302] at the LB unit [300a],
a health status information of a plurality of NPDA instances connected with the LB
unit [300a], wherein the health status information of each of the plurality of NPDA
instances is indicative of one of a healthy instance and a malfunctioning instance.
The health status may be associated with performance, operational efficiency, low
latency and high throughput of the plurality of NPDA instances. In an exemplary
implementation, the health status information may be one of in-service, and out of
service. In an exemplary implementation, the health status information may be
good, moderate and/or poor. In an implementation, the processing unit [302] is
configured to store, for each of the plurality of NPDA instances, the corresponding
health status information in at least one of a local cache associated with the each of
the plurality of NPDA instances, and an elastic search database. In an exemplary
implementation, the processing unit [302] is configured to provide, via an
orchestration manager, to the LB unit [300a], the health status information of the
plurality of NPDA instances. The orchestration manager is configured to store
details of health status information of the one or more microservice instances. The
microservice instances may include, without limitation, the NPDA instances. In
another embodiment, the processing unit [302] is configured to store health status
information of the plurality of the NPDA instances into a storage unit [304].
[0069] Next, at step [408], the method [400] as disclosed by the present
disclosure comprises identifying, by the processing unit [302] at the LB unit [300a],
one or more healthy NPDA instances from the plurality of NPDA instances based
on the health status information. After receiving the health status information, the
processing unit [302], at the LB unit [300a], may identify the one or more healthy
NPDA instances from the plurality of NPDA instances. In an implementation, the
processing unit [302] at the LB unit [300a] may identify the one or more healthy
NPDA instances using a selection algorithm. In an implementation, the processing
unit [302] at the LB unit [300a] may determine one or more suitable healthy NPDA
instances based on the health status and corresponding data traffic handling capacity
in the network.
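One possible selection algorithm of the kind mentioned above is sketched below in Python; the remaining-capacity figure per instance is an assumed input, and the greedy choice is illustrative only, not the claimed algorithm.

    def select_instance(statuses, capacities):
        """Pick the healthy instance with the most spare traffic capacity."""
        healthy = [i for i, s in statuses.items() if s == "healthy"]
        if not healthy:
            raise RuntimeError("no healthy NPDA instance available")
        return max(healthy, key=lambda i: capacities.get(i, 0))

    # Example: select_instance({"npda-1": "healthy", "npda-2": "malfunctioning"},
    #                          {"npda-1": 120}) returns "npda-1".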
[0070] Next, at step [410], the method [400] as disclosed by the present
disclosure comprises distributing, by the processing unit [302] via the LB unit
[300a], the data traffic from the NPDA unit among the one or more healthy NPDA
instances. In an implementation, the distribution is based on receiving, from the one
or more healthy NPDA instances, an indication corresponding to taking an
ownership of at least a part of the data traffic. After receiving a response from the
one or more healthy NPDA instances, the processing unit [302] may distribute via
the LB unit [300a] the requested data traffic from the NPDA unit among the one
or more healthy NPDA instances. The processing unit [302] is configured to
distribute the data traffic among the one or more healthy NPDA instances in a
round robin manner.
[0071] In an implementation, during an operation, in response to a healthy
NPDA instance becoming a malfunctioning NPDA instance, the processing unit
[302] may further redirect, via the LB unit [300a], data traffic from the
malfunctioning NPDA instance to a healthy NPDA instance. In an implementation,
the processing unit [302] may further redistribute a data traffic associated with one
or more malfunctioning NPDA instances from the plurality of NPDA instances to
the one or more healthy NPDA instances. The one or more malfunctioning NPDA
instances are identified based on the corresponding health status information
associated with the each of the plurality of NPDA instances. Further, the processing
unit [302] is configured to update the status of the malfunctioning NPDA instance
in the storage unit [304]. The processing unit [302] is configured to store the
information of the currently healthy NPDA instances in the storage unit [304].
[0072] Thereafter, the method [400] terminates at step [412].
[0073] Referring to FIG. 5, an exemplary system architecture [500] for
distributing data traffic in a network environment, in accordance with exemplary
implementations of the present disclosure, is shown. As shown in FIG. 5, the system
[500] may comprise a User Interface (UI/UX) [502], an Identity Access
Management (IAM) [504], an Elastic Load Balancer (ELB1) [506a] node, an ELB2
[506b], an Event Routing Management (ERM) [508], ELB [510a, 510b] and Micro
Service (MS) instances [516a, 516b…516n] such as NPDA instances, Operations
& Management Service (OAM) [512], Central Log management System (CLMS)
[514] and Elastic Search Cluster [518].
[0074] As used herein, orchestrator manager [512] refers to any one of a unit,
a node, a service or a server, which manages service operations of different
microservices in the network. The orchestrator manager maintains the details
of the operational microservices and shares the details of the microservices with
other microservices for operational communication.
[0075] As used herein, Identity Access Management (IAM) [504] refers to
a service, unit, or platform for providing a defence against malicious or
unauthorised login activity and safeguards credentials by enabling risk-based access
controls, ensuring identity protection and authentication processes.
[0076] As used herein, Elastic Load Balancer (ELB) [506] refers to a service,
a unit, or a platform for managing and distributing incoming data traffic efficiently
across a group of supported servers, microservices and units in a manner that may
increase speed and performance of the network.
[0077] As used herein, Event Routing Management (ERM) [508] refers to any
one of a node, a server, a service or a platform for monitoring and triggering various
actions or responses within the system based on detected events. For example, for
any microservice instance that is down, the ERM may trigger an alert for taking an
action to overcome the service breakdown condition in the network.
[0078] As used herein, Central Log management System (CLMS) [514] refers
to a service or a platform which may collect log data from multiple sources and may
consolidate the collected data. This consolidated data is then presented on a central
interface which may be accessed by a user such as network administrator or
authorised person.
[0079] As used herein, Elastic Search Cluster (ESC) [518] refers to a group of
servers, or nodes that work together and form a cluster for distributing tasks,
searching and indexing across all the nodes in the cluster.
[0080] In an implementation, Microservice (MS) instances [516a-516n] run
in n-way active mode. Each MS instance [516a-516n] is served by a pair of
Elastic Load Balancer (ELB) [510a-510n]. The ELB distributes the load on MS
instances in a round robin manner. The ELB ensures that the event
acknowledgement against any event that is sent by an MS instance to the subscribed
MS is returned to the same MS instance which has published the event. Further, all
microservices not only maintain the state information in their local cache, but
also persist it in the Elastic Search cluster or database [518]. In case one of the MS
instances goes down, Operations and Management Service (OAM) [512] detects it
and broadcasts the status to other running MS instances and also the ELB serving
the MS. The ELB then distributes the ingress data traffic on the remaining
available instances. The n-way active model for deployment of MS instances
ensures the availability of a microservice to serve the data traffic even if any
instance goes down. In an implementation, one of the available MS instances takes
the ownership of the instance which has gone down. It fetches the state information
of the incomplete transactions being served by the instance that has gone down, and
re-executes them. In case any transaction has not been persisted, there may be a
timeout, and the publisher MS of that event will re-transmit the same event for
execution.
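A minimal sketch of this takeover flow follows, assuming the pending transactions of a failed instance can be fetched from the persisted state and re-executed by the adopting instance; the callables and the choice of the first survivor are placeholders, not the specified mechanism.

    def take_over(failed_instance, survivors, fetch_pending, execute):
        """One surviving MS instance adopts a failed peer's incomplete work."""
        owner = survivors[0]  # e.g. chosen after the OAM broadcast
        # State was persisted to the Elastic Search cluster, not only the local
        # cache, so the incomplete transactions survive the failed instance.
        for txn in fetch_pending(failed_instance):
            execute(owner, txn)  # re-execute; unpersisted work simply times out
        return owner             # and is re-published by the originating MS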
[0081] In an implementation, the input request for accessing NPDA services
may be received from the UI/UX [502] or Command Line Interface (CLI).
[0082] In another implementation, the present system and method facilitate the
north bound interface (NBI) to send HTTP requests to the Load Balancer. The Load Balancer
monitors the instances' health and sends requests to healthy instances on the basis
of the selected algorithm. Further, using the NPDA_LB interface, the request is routed to the
selected NPDA instance. Next, the orchestrator manager alerts the LB to any
addition or removal of application instances from the cluster.
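Such a membership alert could be applied at the LB as in the following sketch, which reuses the health map of the earlier NpdaLB sketch; the event shape with "instance" and "action" keys is invented for illustration.

    def on_orchestrator_alert(lb, event):
        """Apply an add/remove alert from the orchestrator manager."""
        instance, action = event["instance"], event["action"]
        if action == "added":
            lb.health[instance] = "healthy"  # begin routing to the new instance
        elif action == "removed":
            lb.health.pop(instance, None)    # stop routing to it immediately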
[0083] The present disclosure may relate to a non-transitory computer readable
storage medium storing instructions for distributing network data traffic in a
network environment, the instructions include executable code which, when
executed by one or more units of a system [300], causes: a processing unit [302] of
the system to receive, at a load balancer (LB) unit [300a], from a network function
virtualization platform decision and analytics (NPDA) unit, a request for routing
data traffic through the NPDA unit; receive, at the LB unit [300a], health status
information of a plurality of NPDA instances connected with the LB unit [300a],
wherein the health status information of each of the plurality of NPDA instances is
indicative of one of a healthy instance and a malfunctioning instance; transmit, via
the LB unit [300a], a request for accepting at least a part of the data traffic, to one
or more healthy NPDA instances; and distribute, via the LB unit [300a], the data
traffic from the NPDA unit among the one or more healthy NPDA instances.
[0084] As is evident from the above, the present disclosure provides a
technically advanced solution by providing an efficient system and method for
handling heavy load data traffic on a particular instance or an abnormal instance
and distributing the load among all instances. The present method and system
provide an NPDA_LB interface, which ensures that no instance gets overloaded due
to bulk data traffic. The present system and method provide the NPDA_LB
interface, which distributes incoming or outgoing requests easily among all NPDA
instances. The present method and system enable NPDA threshold-based
resource events, restoration events, or policy invocation events to be managed by the
NPDA_LB interface, for all the operations that can be performed on the MANO
platform. The present method and system support HTTP/HTTPS configurations in
parallel. The present method and system route client requests across all servers in
a manner that maximizes speed and capacity utilization. The present method and
system may perform header-based routing, which may save time and database hits.
[0085] While considerable emphasis has been placed herein on the disclosed
embodiments, it will be appreciated that many embodiments can be made and that
many changes can be made to the embodiments without departing from the
principles of the present disclosure. These and other changes in the embodiments
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.
We Claim:
1. A method for distributing data traffic in a network, the method comprising:
- receiving, by a processing unit [302] at a load balancer (LB) unit
[300a], a request for routing data traffic associated with a network
function virtualization platform decision and analytics (NPDA) unit;
- receiving, by the processing unit [302] at the LB unit [300a], a health
status information of a plurality of NPDA instances connected with
the LB unit [300a], wherein the health status information of each of
the plurality of NPDA instances is indicative of one of a healthy
instance and a malfunctioning instance;
- identifying, by the processing unit [302] at the LB unit [300a], one
or more healthy NPDA instances from the plurality of NPDA
instances based on the health status information; and
- distributing, by the processing unit [302] via the LB unit [300a], the
data traffic from the NPDA unit among the one or more healthy
NPDA instances.
2. The method as claimed in claim 1, wherein the LB unit [300a] and the
NPDA unit are in communication via an interface.
3. The method as claimed in claim 1, wherein the method comprises
distributing, by the processing unit [302], the data traffic among the one or more
healthy NPDA instances in a round robin manner.
4. The method as claimed in claim 1, wherein the method comprises storing,
by each of the plurality of NPDA instances, the corresponding health status
information in at least one of a local cache associated with the each of the
plurality of NPDA instances, and an elastic search database.
5. The method as claimed in claim 1, wherein, in response to a healthy NPDA
instance becoming a malfunctioning NPDA instance, the method comprises
redirecting, by the processing unit [302] via the LB unit [300a], the data
traffic from the malfunctioning NPDA instance to a healthy NPDA instance.
6. The method as claimed in claim 1, wherein the health status information of
the plurality of NPDA instances is received by the processing unit [302] via
an orchestration manager.
7. The method as claimed in claim 1, wherein the method further comprises
redistributing a data traffic associated with one or more malfunctioning
NPDA instances from the plurality of NPDA instances to the one or more
healthy NPDA instances, wherein the one or more malfunctioning NPDA
instances are identified based on the corresponding health status information
associated with the each of the plurality of NPDA instances.
8. A system for distributing network data traffic in a network, the system
comprising:
- a processing unit [302] configured to:
- receive, at a load balancer (LB) unit [300a], a request for
routing data traffic associated with a network function
virtualization platform decision and analytics (NPDA) unit;
- receive, at the LB unit [300a], a health status information of a
plurality of NPDA instances connected with the LB unit
[300a], wherein the health status information of each of the
plurality of NPDA instances is indicative of one of a healthy
instance and a malfunctioning instance;
- identify, at the LB unit [300a], one or more healthy NPDA
instances from the plurality of NPDA instances based on the
health status information; and
- distribute, via the LB unit [300a], the data traffic from the
NPDA unit among the one or more healthy NPDA instances.
9. The system as claimed in claim 8, wherein the LB unit [300a] and the NPDA
unit are in communication via an interface.
10. The system as claimed in claim 8, wherein the processing unit [302] is
configured to distribute the data traffic among the one or more healthy
NPDA instances in a round robin manner.
11. The system as claimed in claim 8, wherein the processing unit [302] is
configured to store, by each of the plurality of NPDA instances, the
corresponding health status information in at least one of a local cache
associated with the each of the plurality of NPDA instances, and an elastic
search database.
12. The system as claimed in claim 8, wherein, in response to a healthy NPDA
instance becoming a malfunctioning NPDA instance, the processing unit
[302] is configured to redirect, via the LB unit [300a], the data traffic from
the malfunctioning NPDA instance to a healthy NPDA instance.
13. The system as claimed in claim 8, wherein the processing unit [302] is
configured to provide, via an orchestration manager, to the LB unit [300a],
the health status information of the plurality of NPDA instances.

14. The system as claimed in claim 8, wherein the processing unit [302] is
configured to redistribute a data traffic associated with one or more malfunctioning NPDA instances from the plurality of NPDA instances to the one or more healthy NPDA instances, wherein the one or more malfunctioning NPDA instances are identified based on the corresponding health status information associated with the each of the plurality of NPDA instances.

Documents

Application Documents

# Name Date
1 202321065358-STATEMENT OF UNDERTAKING (FORM 3) [28-09-2023(online)].pdf 2023-09-28
2 202321065358-PROVISIONAL SPECIFICATION [28-09-2023(online)].pdf 2023-09-28
3 202321065358-POWER OF AUTHORITY [28-09-2023(online)].pdf 2023-09-28
4 202321065358-FORM 1 [28-09-2023(online)].pdf 2023-09-28
5 202321065358-FIGURE OF ABSTRACT [28-09-2023(online)].pdf 2023-09-28
6 202321065358-DRAWINGS [28-09-2023(online)].pdf 2023-09-28
7 202321065358-Proof of Right [09-02-2024(online)].pdf 2024-02-09
8 202321065358-FORM-5 [26-09-2024(online)].pdf 2024-09-26
9 202321065358-ENDORSEMENT BY INVENTORS [26-09-2024(online)].pdf 2024-09-26
10 202321065358-DRAWING [26-09-2024(online)].pdf 2024-09-26
11 202321065358-CORRESPONDENCE-OTHERS [26-09-2024(online)].pdf 2024-09-26
12 202321065358-COMPLETE SPECIFICATION [26-09-2024(online)].pdf 2024-09-26
13 202321065358-FORM 3 [08-10-2024(online)].pdf 2024-10-08
14 202321065358-Request Letter-Correspondence [11-10-2024(online)].pdf 2024-10-11
15 202321065358-Power of Attorney [11-10-2024(online)].pdf 2024-10-11
16 202321065358-Form 1 (Submitted on date of filing) [11-10-2024(online)].pdf 2024-10-11
17 202321065358-Covering Letter [11-10-2024(online)].pdf 2024-10-11
18 202321065358-CERTIFIED COPIES TRANSMISSION TO IB [11-10-2024(online)].pdf 2024-10-11
19 Abstract.jpg 2024-11-07
20 202321065358-ORIGINAL UR 6(1A) FORM 1 & 26-070125.pdf 2025-01-14