
Method And System For Distributing Data Traffic In A Network

Abstract: The present disclosure provides a method and system for distributing data traffic in a network. The present disclosure encompasses: receiving, at a load balancer (LB) unit [300a], a request for routing data traffic associated with a policy execution engine (PEEGN) module; receiving, at the LB unit [300a], health status information of a plurality of microservice instances connected with the LB unit [300a], wherein the health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance; identifying, at the LB unit [300a], one or more healthy microservice instances from the plurality of microservice instances based on the health status information; and distributing, via the LB unit [300a], the data traffic from the PEEGN module among the one or more healthy microservice instances. [FIG. 4]


Patent Information

Application #
Filing Date
28 September 2023
Publication Number
20/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Adityakar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Yog Vashishth
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Meenakshi Rani
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Santosh Kumar Yadav
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR DISTRIBUTING DATA TRAFFIC IN A NETWORK”

We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR DISTRIBUTING DATA TRAFFIC IN A NETWORK
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to the field of
wireless communication systems. More particularly, embodiments of the present
disclosure relate to methods and systems for distributing data traffic in a network.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] Over the last few years, microservices architecture has gained popularity due to its flexibility and scalability. Microservices architecture involves breaking down a monolithic application into smaller, self-contained services, often referred to as "microservices." Each microservice is responsible for a specific functional aspect of the application and can be developed, deployed, and managed independently. This modular approach to software design offers several advantages such as scalability, flexibility, continuous deployment, resilience, and parallel development.
[0005] In a microservices architecture, the system can fail when high data traffic concentrates on a particular instance, and requests routed to an unhealthy instance may fail. To avoid such situations and to ensure that no server is overworked, a solution is required through which incoming and outgoing requests can be easily distributed among all inventory instances.
[0006] Further, over time, various solutions have been developed to address the above-mentioned problems associated with the microservices architecture. However, ensuring high availability and data consistency across distributed microservice instances always remains a challenge.
[0007] Thus, there exists an imperative need in the art to provide an efficient
system and method for distributing data traffic in a network, which the present
disclosure aims to address.
SUMMARY
[0008] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0009] An aspect of the present disclosure may relate to a method for distributing data traffic in a network. The method includes receiving, by a processing unit at a load balancer (LB) unit, a request for routing data traffic associated with a policy execution engine (PEEGN) module. Next, the method includes receiving, by the processing unit at the LB unit, health status information of a plurality of microservice instances connected with the LB unit, wherein the health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance. Next, the method includes identifying, by the processing unit at the LB unit, one or more healthy microservice instances from the plurality of microservice instances based on the health status information. Thereafter, the method includes distributing, by the processing unit via the LB unit, the data traffic from the PEEGN module among the one or more healthy microservice instances.
[0010] In an exemplary aspect of the present disclosure, the LB unit and the PEEGN module are in communication via an interface.
[0011] In an exemplary aspect of the present disclosure, the method comprises distributing, by the processing unit, the data traffic among the one or more healthy microservice instances in a round-robin manner.
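As an illustration only, the receive/identify/distribute flow of [0009] combined with the round-robin distribution of [0011] might be sketched as follows; the names (LoadBalancer, "healthy", "malfunctioning") are assumptions of this sketch, not terms defined by the disclosure:

```python
from itertools import cycle

class LoadBalancer:
    """Illustrative stand-in for the LB unit of the disclosure."""

    def __init__(self):
        self.health = {}  # instance id -> "healthy" | "malfunctioning"

    def receive_health_status(self, instance_id, status):
        # Health status information received for each connected instance.
        self.health[instance_id] = status

    def healthy_instances(self):
        # Identify the instances whose reported status is healthy.
        return [i for i, s in self.health.items() if s == "healthy"]

    def distribute(self, requests):
        # Hand the PEEGN traffic to healthy instances in round-robin order.
        targets = cycle(self.healthy_instances())
        return {request: next(targets) for request in requests}
```

With instances i1 and i3 healthy and i2 malfunctioning, three requests would alternate between i1 and i3, and i2 receives nothing.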
[0012] In an exemplary aspect of the present disclosure, the method comprises
storing, by each of the plurality of microservice instances, the corresponding health
status information in at least one of a local cache associated with each of the
plurality of microservice instances, and an elastic search database.
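A minimal sketch of the bookkeeping in [0012], with illustrative names throughout; the in-memory index below merely stands in for the elastic search database named in the disclosure:

```python
class InMemoryIndex:
    """Stand-in for the elastic search database of the disclosure."""

    def __init__(self):
        self.docs = {}

    def index(self, doc_id, body):
        self.docs[doc_id] = body

class MicroserviceInstance:
    def __init__(self, instance_id, search_index):
        self.instance_id = instance_id
        self.local_cache = {}        # the local cache of [0012]
        self.search_index = search_index

    def report_health(self, status):
        # Each instance stores its own status in its local cache and
        # mirrors it to the shared search index.
        record = {"instance": self.instance_id, "status": status}
        self.local_cache["health"] = record
        self.search_index.index(self.instance_id, record)
```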
[0013] In an exemplary aspect of the present disclosure, wherein, in response
to a healthy microservice instance becoming a malfunctioning microservice
instance, the method comprises redirecting, by the processing unit via the LB unit,
the data traffic from the malfunctioning microservice instance to a healthy
microservice instance.
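The failover behaviour of [0013] can be sketched as follows, again under assumed names: a request pinned to an instance that has become malfunctioning is remapped to a healthy one.

```python
def redirect_on_failure(assignment, health):
    """Return a request->instance map with no malfunctioning targets.

    assignment: request id -> instance id currently serving it
    health:     instance id -> "healthy" | "malfunctioning"
    """
    healthy = [i for i, s in health.items() if s == "healthy"]
    redirected = {}
    n = 0
    for request, instance in assignment.items():
        if health.get(instance) == "healthy":
            # The existing healthy target is kept.
            redirected[request] = instance
        else:
            # Traffic on the malfunctioning instance is redirected.
            redirected[request] = healthy[n % len(healthy)]
            n += 1
    return redirected
```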
[0014] In an exemplary aspect of the present disclosure, the health
status information of the plurality of microservice instances is received by the
processing unit via an orchestration manager.
[0015] In an exemplary aspect of the present disclosure, the method further comprises redistributing data traffic associated with one or more malfunctioning microservice instances from the plurality of microservice instances to the one or more healthy microservice instances, wherein the one or more malfunctioning microservice instances are identified based on the corresponding health status information associated with each of the plurality of microservice instances.
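The redistribution of [0015] can be sketched in the same illustrative style: instances whose health status marks them malfunctioning are identified, and the traffic they held is spread over the healthy instances round robin.

```python
def redistribute(load, health):
    """load: instance id -> list of request ids; returns a rebalanced map.

    health: instance id -> "healthy" | "malfunctioning"
    """
    healthy = [i for i, s in health.items() if s == "healthy"]
    # Healthy instances keep their existing traffic.
    new_load = {i: list(load.get(i, [])) for i in healthy}
    # Requests held by malfunctioning instances are collected...
    orphaned = [r for i, rs in load.items() if i not in healthy for r in rs]
    # ...and spread over the healthy instances in round-robin order.
    for n, request in enumerate(orphaned):
        new_load[healthy[n % len(healthy)]].append(request)
    return new_load
```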
[0016] Another aspect of the present disclosure may relate to a system for distributing network data traffic in a network environment. The system comprises a processing unit configured to: receive, at a load balancer (LB) unit, a request for routing data traffic associated with a policy execution engine (PEEGN) module; receive, at the LB unit, health status information of a plurality of microservice instances connected with the LB unit, wherein the health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance; identify, at the LB unit, one or more healthy microservice instances from the plurality of microservice instances based on the health status information; and distribute, via the LB unit, the data traffic from the PEEGN module among the one or more healthy microservice instances.
[0017] Yet another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for distributing network data traffic in a network environment, the instructions including executable code which, when executed by one or more units of a system, causes a processing unit of the system to: receive, at a load balancer (LB) unit, a request for routing data traffic associated with a policy execution engine (PEEGN) module; receive, at the LB unit, health status information of a plurality of microservice instances connected with the LB unit, wherein the health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance; identify, at the LB unit, one or more healthy microservice instances from the plurality of microservice instances based on the health status information; and distribute, via the LB unit, the data traffic from the PEEGN module among the one or more healthy microservice instances.
OBJECTS OF THE INVENTION
[0018] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0019] It is an object of the present disclosure to provide a system and a method to ensure proper resource inventory management through various operations (e.g., create, update, get, and delete) across all the interfaces.
[0020] It is another object of the present disclosure to provide a system and
method to handle HTTP, HTTP2 and HTTPS requests in parallel.
[0021] It is yet another object of the present disclosure to provide a solution to route client requests across all the servers in a manner that maximizes speed and capacity utilization.
[0022] It is yet another object of the present disclosure to provide a solution
that provides header-based routing that saves time and database hits.
[0023] It is yet another object of the present disclosure to serve published
events for only those instances where the requests are raised.
DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0025] FIG. 1 illustrates an exemplary block diagram of a management and
orchestration (MANO) architecture.
[0026] FIG. 2 illustrates an exemplary block diagram of a computing device
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure.
[0027] FIG. 3 illustrates an exemplary block diagram of a system for
distributing data traffic in a network environment, in accordance with exemplary
implementations of the present disclosure.
[0028] FIG. 4 illustrates a method flow diagram for distributing data traffic in
a network, in accordance with exemplary implementations of the present disclosure.
[0029] FIG. 5 illustrates an exemplary system architecture for distributing data
traffic in a network environment, in accordance with exemplary implementations
of the present disclosure.
[0030] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0031] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0032] The ensuing description provides exemplary embodiments only, and is
not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0033] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0034] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations may be performed
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure.
[0035] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
[0036] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a digital signal processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0038] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0039] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0040] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0041] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0042] As used herein, the Policy Execution Engine (PEEGN) module provides a network function virtualisation (NFV) software defined network (SDN) platform functionality to support dynamic requirements of resource management and network service orchestration in the virtualized network. Further, the PEEGN is involved during the CNF instantiation flow to check for the CNF policy and to reserve the resources required to instantiate the CNF at the PVIM. The PEEGN supports scaling policy for the CNFC.
[0043] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for distributing data traffic in a network environment.
[0044] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
[0045] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0046] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform [100], in accordance with exemplary implementation of the present disclosure. The MANO architecture [100] may be developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/service(s), etc. The MANO architecture [100] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/Container Network Function (CNF). The system as provided by the present disclosure may comprise one or more components of the MANO architecture [100]. The MANO architecture [100] may be used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
[0047] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer [102], a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platform foundation services module [106], a platform schedulers and cron jobs module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner obvious to the person skilled in the art for implementing features of the present disclosure.
[0048] The NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] may be responsible for deciding on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] may be responsible for determining which sequence is to be followed for executing the process, for example, the sequence for execution of processes P1 and P2 in an AMF network function of the communication network (such as a 5G network). The VNF catalogue [1044] stores the metadata of all the VNFs (also CNFs in some cases). The network services catalogue [1046] stores the information of the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/network functions (NFs)) that must be applied to a specific networked data packet. The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] may be used for the CNFs' lifecycle management.
[0049] The platform foundation services module [106] comprises a microservices elastic load balancer [1062], an identity and access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070]. The microservices elastic load balancer [1062] may be used for maintaining the load balancing of the requests for the services. The identity and access manager [1064] may be used for logging purposes. The command line interface (CLI) [1066] may be used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] may be responsible for keeping the logs of every service. These logs are generated by the MANO platform [100] and are used for debugging purposes. The event routing manager [1070] may be responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[0050] The platform core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager [1090], a release management (mgmt.) repository [1092], a configuration manager and GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF backup and upgrade manager [1102], a microservice auditor [1104], and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager [1084] may be responsible for supervising the alarms the vendor may be generating. The performance manager [1086] may be responsible for managing the performance counters. The policy execution engine (PEGN) [1088] may be responsible for managing all of the policies. The capacity monitoring manager (CMM) [1090] may be responsible for sending the request to the PEGN [1088]. The release management (mgmt.) repository (RMR) [1092] may be responsible for managing the releases and the images of all of the vendor's network nodes. The configuration manager and GCT [1094] manages the configuration and GCT of all the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It may be further noted that the policy execution engine (PEGN) [1088], the configuration manager and GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] may be a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs [1100] schedules tasks such as, but not limited to, triggering an event, traversing the network graph, etc. The VNF backup and upgrade manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The microservice auditor [1104] audits the microservices. For example, in a hypothetical case, instances not instantiated by the MANO architecture [100] may be using the network resources. In such a case, the microservice auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100]. The audit assures that the services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] may be used for newer instances that are spawning.
[0051] The platform resource adapters and utilities module [112] further comprises a platform external API adapter and gateway [1122], a generic decoder and indexer (XML, CSV, JSON) [1124], a docker service adapter [1126], an API adapter [1128], and an NFV gateway [1130]. The platform external API adapter and gateway [1122] may be responsible for handling the external services (external to the MANO platform [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system in the XML, CSV, or JSON format. The docker service adapter [1126] may be the interface provided between the telecom cloud and the MANO architecture [100] for communication. The API adapter [1128] may be used to connect with the virtual machines (VMs). The NFV gateway [1130] may be responsible for providing the path to each service going to/incoming from the MANO architecture [100].
[0052] The docker service adapter (DSA) [1126] is a microservices-based system designed to deploy and manage Container Network Functions (CNFs) and their components (CNFCs) across Docker nodes. The DSA [1126] offers REST endpoints for key operations, including uploading container images to a Docker registry, terminating CNFC instances, and creating Docker volumes and networks. CNFs, which are network functions packaged as containers, may consist of multiple CNFCs. The DSA [1126] facilitates the deployment, configuration, and management of these components by interacting with Docker's API, ensuring proper setup and scalability within a containerized environment. This approach provides a modular and flexible framework for handling network functions in a virtualized network setup.
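The REST surface of the DSA [1126] is described only at the level of its operations; a hedged sketch, in which every route and payload shape below is a hypothetical assumption rather than an endpoint named by the disclosure, might map those operations as:

```python
def dsa_request(operation, **params):
    """Map a DSA operation name to a hypothetical (method, path, body) triple.

    The operations mirror those listed for the DSA (upload image,
    terminate CNFC, create volume/network); the routes are invented
    purely for illustration.
    """
    routes = {
        "upload_image":   ("POST",   "/images",   {"image": params.get("image")}),
        "terminate_cnfc": ("DELETE", "/cnfc/{}".format(params.get("cnfc_id")), None),
        "create_volume":  ("POST",   "/volumes",  {"name": params.get("name")}),
        "create_network": ("POST",   "/networks", {"name": params.get("name")}),
    }
    return routes[operation]
```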
[0053] Referring to FIG. 2, an exemplary block diagram of a computing device
[200] upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure, is shown. In
an implementation, the computing device [200] may also implement a method for
distributing data traffic in a network environment utilising the system. In another
implementation, the computing device [200] itself implements the method for
distributing data traffic in a network environment using one or more units
configured within the computing device [200], wherein said one or more units are
capable of implementing the features as disclosed in the present disclosure.
[0054] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0055] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0056] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [200], causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0057] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0058] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], a host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210] or other non-volatile storage for later execution.
[0059] Referring to FIG. 3, an exemplary block diagram of a system [300] for distributing data traffic in a network environment is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one load balancer unit [300a]. The load balancer unit [300a] may comprise at least one processing unit [302] and at least one storing unit [304]. All of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. Although only a few units are shown in FIG. 3, the system [300] may comprise multiple such units, or any number of such units, as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity. In another implementation, the system [300] may reside partly in the server/network entity.
[0060] The system [300] is configured for distributing data traffic in a network,
with the help of the interconnection between the components/units of the system
[300].
[0061] The system [300] comprises a processing unit [302]. The processing unit [302] is configured to receive, at a load balancer (LB) unit [300a], a request for routing data traffic associated with a policy execution engine (PEEGN) module. In a network, the PEEGN module may receive a resource management request for bulk data traffic during operation. In response, the PEEGN module may communicate with the LB unit [300a] over an interface such as the PE_LB interface. The LB unit [300a] is configured to receive the request for routing data traffic.
[0062] The PE_LB interface may connect the PEEGN module and the LB unit [300a]. The PE_LB interface allows for bidirectional communication between the PEEGN module and the LB unit [300a]. In an embodiment, the PE_LB interface is configured to facilitate exchange of information using a hypertext transfer protocol (HTTP) REST application programming interface (API). In an embodiment, the HTTP REST API is used in conjunction with JSON and/or XML communication media. In another embodiment, the PE_LB interface is configured to facilitate exchange of information by establishing a web-socket connection between the PEEGN module and the LB unit [300a]. A web-socket connection may involve establishing persistent connectivity between the PEEGN module and the LB unit [300a]. An example of web-socket based communication includes, without limitation, a transmission control protocol (TCP) connection. In such a connection, information such as the operational status, health, etc. of different components may be exchanged through the interface using a ping-pong-based communication.
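The ping-pong exchange described above can be sketched as follows. This is a minimal illustration over a plain TCP-style socket pair, assuming a toy text protocol (`PING`/`PONG:<status>`); the disclosure does not specify the actual PE_LB wire format, so the message names and status strings here are hypothetical.

```python
import socket
import threading

def serve_health(sock: socket.socket, status: str) -> None:
    """Microservice side: answer each PING probe with the current status."""
    while True:
        msg = sock.recv(64)
        if not msg or msg == b"CLOSE":
            break
        if msg == b"PING":
            sock.sendall(b"PONG:" + status.encode())

def probe_health(sock: socket.socket) -> str:
    """LB side: send a PING and interpret the PONG reply as a health status."""
    sock.sendall(b"PING")
    reply = sock.recv(64)
    if reply.startswith(b"PONG:"):
        return reply[5:].decode()
    return "out-of-service"  # missing/garbled reply treated as malfunctioning

# Demo: a connected socket pair stands in for the persistent PE_LB link.
lb_end, ms_end = socket.socketpair()
t = threading.Thread(target=serve_health, args=(ms_end, "in-service"), daemon=True)
t.start()
status = probe_health(lb_end)
lb_end.sendall(b"CLOSE")
```

In a real deployment the probe would run periodically over the persistent web-socket/TCP connection rather than once over a local socket pair.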
[0063] The processing unit [302] of the system [300] is further configured to receive, at the LB unit [300a], health status information of a plurality of microservice instances connected with the LB unit [300a]. The health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance. The health status information may be associated with the performance, operational efficiency, low latency and high throughput of the plurality of microservice instances. In an exemplary implementation, the health status information may be one of in-service and out-of-service. In another exemplary implementation, the health status information may be good, moderate or poor. In an implementation, the processing unit [302] is configured to store, by each of the plurality of microservice instances, the corresponding health status information in at least one of a local cache associated with each of the plurality of microservice instances, and an elastic search database. In an exemplary implementation, the processing unit [302] is configured to provide, via an orchestration manager, to the LB unit [300a], health status information of one or more microservice instances. The orchestration manager is configured to store and provide details of the health status of the one or more microservice instances to the LB unit [300a]. The processing unit [302] is configured to store the health status information of the plurality of microservice instances in a storage unit [304].
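The dual persistence described above, where each instance writes its health status both to its local cache and to a shared database, might be sketched as follows. A plain dictionary stands in for the elastic search database, and all class and field names are illustrative.

```python
from dataclasses import dataclass, field

# Stand-in for the shared elastic search database named in the disclosure.
shared_store: dict = {}

@dataclass
class MicroserviceInstance:
    instance_id: str
    status: str = "in-service"  # or "out-of-service"
    local_cache: dict = field(default_factory=dict)

    def publish_health(self) -> None:
        """Write the current status to the local cache and the shared store."""
        self.local_cache["health"] = self.status
        shared_store[self.instance_id] = self.status

ms1 = MicroserviceInstance("ms-1")
ms2 = MicroserviceInstance("ms-2", status="out-of-service")
ms1.publish_health()
ms2.publish_health()
```

The LB unit (or the orchestration manager) would then read the shared store to learn every instance's status without probing each one directly.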
[0064] The processing unit [302] is further configured to identify, at the LB unit [300a], one or more healthy microservice instances from the plurality of microservice instances based on the health status information. In an implementation, after receiving the health status information of the plurality of microservice instances, the processing unit [302] is configured to identify, at the LB unit [300a], the one or more healthy microservice instances based on the health status information. In an implementation, the processing unit [302] is configured to identify the healthy microservice instances using a selection algorithm.
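The disclosure leaves the selection algorithm open; one minimal form is a filter over the reported statuses, as in this sketch (the status strings follow the in-service/out-of-service example above, and the function name is illustrative):

```python
def identify_healthy(health: dict) -> list:
    """Return the IDs of instances whose reported status marks them healthy."""
    return [iid for iid, status in sorted(health.items())
            if status == "in-service"]

health = {"ms-1": "in-service", "ms-2": "out-of-service", "ms-3": "in-service"}
healthy = identify_healthy(health)
```

A richer selection algorithm could additionally rank the healthy instances by latency or throughput before distribution.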
[0065] The processing unit [302] of the system [300] is further configured to distribute, via the LB unit [300a], the data traffic from the PEEGN module among the one or more healthy microservice instances. In an implementation, the distribution is based on receiving, from one or more of the healthy microservice instances, an indication corresponding to taking ownership of at least a part of the data traffic. After receiving a response from the one or more healthy microservice instances, the processing unit [302] is configured to distribute, via the LB unit [300a], the requested data traffic from the PEEGN module among the one or more healthy microservice instances. The processing unit [302] is configured to distribute the data traffic among the one or more healthy microservice instances in a round robin manner.
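The round robin distribution described above can be sketched as follows, assuming the healthy instances have already been identified; the request and instance identifiers are illustrative.

```python
from itertools import cycle

def distribute_round_robin(requests: list, healthy_instances: list) -> dict:
    """Assign each incoming request to the next healthy instance in rotation."""
    rotation = cycle(healthy_instances)
    return {req: next(rotation) for req in requests}

assignment = distribute_round_robin(["r1", "r2", "r3", "r4"], ["ms-1", "ms-3"])
```

With two healthy instances, the four requests alternate between them, so each instance receives half of the traffic.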
[0066] In an implementation, during operation, in response to a healthy microservice instance becoming a malfunctioning microservice instance, the processing unit [302] is further configured to redirect, via the LB unit [300a], the data traffic from the malfunctioning microservice instance to a healthy microservice instance. Further, the processing unit [302] is configured to update the status of the malfunctioning microservice instance, such as to not-in-service, in the storage unit [304]. The processing unit [302] is configured to store, in the storage unit [304], the information of the microservice instances which are managing the distribution of the data traffic.
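The redirection step might look like the following sketch: traffic previously assigned to the malfunctioning instance is reassigned across the remaining healthy instances, while traffic on healthy instances stays in place. The names and the reassignment policy (round robin over the survivors) are illustrative.

```python
from itertools import cycle

def redirect_on_failure(assignment: dict, failed: str, healthy: list) -> dict:
    """Move traffic off a malfunctioning instance onto the remaining healthy ones."""
    survivors = [h for h in healthy if h != failed]
    rotation = cycle(survivors)
    return {req: (next(rotation) if inst == failed else inst)
            for req, inst in assignment.items()}

before = {"r1": "ms-1", "r2": "ms-2", "r3": "ms-1"}
after = redirect_on_failure(before, failed="ms-1", healthy=["ms-1", "ms-2", "ms-3"])
```

Alongside the reassignment, the failed instance's status record would be updated (e.g. to not-in-service) in the storage unit.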
[0067] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0068] Referring to FIG. 4, an exemplary method flow diagram [400] for distributing data traffic in a network, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. As shown in FIG. 4, the method [400] starts at step [402].
[0069] At step [404], the method [400] comprises receiving, by a processing unit [302] at a load balancer (LB) unit [300a], a request for routing data traffic associated with a policy execution engine (PEEGN) module. In a network, the PEEGN module may receive a resource management request for bulk data traffic during operation. In response, the PEEGN module may communicate with the LB unit [300a] over an interface such as the PE_LB interface. The LB unit [300a] may receive the request for routing data traffic.
[0070] Next, at step [406], the method [400] comprises receiving, by the processing unit [302] at the LB unit [300a], health status information of a plurality of microservice instances connected with the LB unit [300a], wherein the health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance. The health status information may be associated with the performance, operational efficiency, low latency and high throughput of the plurality of microservice instances. In an exemplary implementation, the health status information may be one of in-service and out-of-service. In another exemplary implementation, the health status information may be good, moderate or poor. In an implementation, the processing unit [302] may store, by each of the plurality of microservice instances, the corresponding health status information in at least one of a local cache associated with each of the plurality of microservice instances, and an elastic search database. In an exemplary implementation, the processing unit [302] may provide, via an orchestration manager, to the LB unit [300a], health status information of one or more microservice instances. The orchestration manager may store and provide details of the health status of the one or more microservice instances to the LB unit [300a]. The processing unit [302] may store the health status information of the plurality of microservice instances in a storage unit [304].
[0071] Next, at step [408], the method [400] comprises identifying, by the processing unit [302] at the LB unit [300a], one or more healthy microservice instances from the plurality of microservice instances based on the health status information. After receiving the health status information, the processing unit [302] at the LB unit [300a] may identify the one or more healthy microservice instances from the plurality of microservice instances based on the health status information. In an implementation, the processing unit [302] may identify the healthy microservice instances using a selection algorithm.
[0072] Next, at step [410], the method [400] comprises distributing, by the processing unit [302] via the LB unit [300a], the data traffic from the PEEGN module among the one or more healthy microservice instances. In an implementation, the distribution is based on receiving, from one or more of the healthy microservice instances, an indication corresponding to taking ownership of at least a part of the data traffic. After receiving a response from the one or more healthy microservice instances, the processing unit [302] may distribute, via the LB unit [300a], the requested data traffic from the PEEGN module among the one or more healthy microservice instances. The processing unit [302] may distribute the data traffic among the one or more healthy microservice instances in a round robin manner.
[0073] In an implementation, during operation, in response to a healthy microservice instance becoming a malfunctioning microservice instance, the processing unit [302] may further redirect, via the LB unit [300a], the data traffic from the malfunctioning microservice instance to a healthy microservice instance. In an implementation, the processing unit [302] may further redistribute the data traffic associated with one or more malfunctioning microservice instances from the plurality of microservice instances to the one or more healthy microservice instances. The one or more malfunctioning microservice instances are identified based on the corresponding health status information associated with each of the plurality of microservice instances. Further, the processing unit [302] may update the status of the malfunctioning microservice instance, such as to not-in-service, in the storage unit [304]. The processing unit [302] may store, in the storage unit [304], the information of the microservice instances which are managing the distribution of the data traffic.
[0074] Thereafter, the method [400] terminates at step [412].
[0075] Referring to FIG. 5, an exemplary system architecture [500] for distributing data traffic in a network environment, in accordance with exemplary implementations of the present disclosure, is shown. As shown in FIG. 5, the system [500] may comprise a User Interface (UI/UX) [502], an Identity Access Management (IAM) [504], an Elastic Load Balancer (ELB1) node [506a], an ELB2 [506b], an Event Routing Management (ERM) [508], ELBs [510a, 510b], Micro Service (MS) instances [516a, 516b…516n], an Operations and Management Service (OAM) [512], a Central Log Management System (CLMS) [514] and an Elastic Search Cluster [518].
[0076] As used herein, orchestrator manager [512] refers to a unit, a node, a service and/or a server which manages service operations of different microservices in the network. The orchestrator manager maintains records of the operational microservices and shares details of the microservices with other microservices for operational communication.
[0077] As used herein, Identity Access Management (IAM) [504] refers to a service, a unit and/or a platform for providing defence against malicious or unauthorised login activity and safeguarding credentials by enabling risk-based access controls, ensuring identity protection and authentication processes.
[0078] As used herein, Elastic Load Balancer (ELB) [506] refers to a service, a unit and/or a platform for managing and distributing incoming data traffic efficiently across a group of supported servers, microservices and units in a manner that may increase the speed and performance of the network.
[0079] As used herein, Event Routing Management (ERM) [508] refers to a node, a server, a service and/or a platform for monitoring and triggering various actions or responses within the system based on detected events. For example, if any microservice instance goes down, the ERM may trigger an alert for taking an action to overcome the service breakdown condition in the network.
[0080] As used herein, Central Log Management System (CLMS) [514] refers to a service or a platform which may collect log data from multiple sources and consolidate the collected data. This consolidated data is then presented on a central interface which may be accessed by a user such as a network administrator or other authorised person.
[0081] As used herein, Elastic Search Cluster (ESC) [518] refers to a group of servers or nodes that work together and form a cluster for distributing tasks, searching and indexing across all the nodes in the cluster.
[0082] In an implementation, the microservice (MS) instances [516a, 516b…516n] run in n-way active mode. Each MS instance [516a-516n] is served by a pair of Elastic Load Balancers (ELB) [510a, 510b…510n]. The ELB distributes the load on the MS instances in a round robin manner. The ELB ensures that the event acknowledgement against any event that is sent by an MS instance to the subscribed MS is returned to the same MS instance which published the event. Further, all microservices not only maintain state information in their local cache, but also persist it in the Elastic Search cluster or database [518]. In case one of the MS instances goes down, the Operations and Management Service (OAM) [512] detects it and broadcasts the status to the other running MS instances and also to the ELB serving the MS. The ELB then distributes the ingress data traffic to the remaining available instances. The n-way active model for deployment of MS instances ensures the availability of a microservice to serve the data traffic even if any instance goes down. In an implementation, one of the available MS instances takes ownership of the instance which has gone down. It fetches, from the elastic search database, the state information of the incomplete transactions being served by the failed instance and re-executes them. In case any transaction has not been persisted, there may be a timeout, and the publisher MS of that event will re-transmit the same event for execution.
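The ownership takeover described in this paragraph, where a surviving instance adopts and re-executes the incomplete transactions of a failed peer, might be sketched as follows. A dictionary stands in for the Elastic Search cluster, and "re-execution" is reduced to marking the transaction done; all identifiers are illustrative.

```python
# Stand-in for the Elastic Search cluster holding persisted transaction state.
es_state = {
    "txn-1": {"owner": "ms-1", "done": True},
    "txn-2": {"owner": "ms-1", "done": False},
    "txn-3": {"owner": "ms-2", "done": False},
}

def take_ownership(failed_instance: str, new_owner: str) -> list:
    """Adopt and re-execute the failed peer's incomplete transactions."""
    reexecuted = []
    for txn_id, rec in es_state.items():
        if rec["owner"] == failed_instance and not rec["done"]:
            rec["owner"] = new_owner  # adopt the orphaned transaction
            rec["done"] = True        # marking done stands in for re-execution
            reexecuted.append(txn_id)
    return reexecuted

adopted = take_ownership("ms-1", "ms-2")  # ms-2 takes over ms-1's work
```

Transactions that were never persisted cannot be found this way, which is why the paragraph falls back to a timeout and re-transmission by the publishing MS.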
[0083] In an implementation, the input request may be received from UI/UX
[502] or Command Line Interface (CLI) for accessing microservice instances.
[0084] In another implementation, the present system and method enable the north bound interface (NBI) to send HTTP requests to the Load Balancer. The Load Balancer monitors the health of the instances and sends requests to healthy instances on the basis of a selection algorithm. Further, the request is routed via the PE_LB interface to the selected microservice instances. Next, the Orchestrator Manager alerts the LB about any addition or removal of application instances from the cluster.
[0085] The present disclosure may relate to a non-transitory computer readable storage medium storing instructions for distributing data traffic in a network, the instructions comprising executable code which, when executed by one or more units of a system [300], causes a processing unit [302] of the system [300] to: receive, at a load balancer (LB) unit [300a], a request for routing data traffic associated with a policy execution engine (PEEGN) module; receive, at the LB unit [300a], health status information of a plurality of microservice instances connected with the LB unit [300a], wherein the health status information of each of the plurality of microservice instances is indicative of one of a healthy instance and a malfunctioning instance; identify, at the LB unit [300a], one or more healthy microservice instances from the plurality of microservice instances based on the health status information; and distribute, via the LB unit [300a], the data traffic from the PEEGN module among the one or more healthy microservice instances.
[0086] In one implementation, the LB unit is in communication with an Orchestrator Manager (OM). The OM informs the LB interface about the health of the running instances. Based on this information about the health of the running instances, the LB interface routes the requests only through the healthy instances.
[0087] As is evident from the above, the present disclosure provides a technically advanced solution for performing load balancing in a microservices architecture. The present invention provides an async event-based implementation to utilize the interface efficiently. In addition, the present invention provides fault tolerance for any event failure. The interface provided by the present disclosure works in a high availability mode, and if one inventory instance goes down during request processing, then the next available instance takes care of the request.
[0088] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
We Claim:
1. A method for distributing data traffic in a network, the method comprising:
- receiving, by a processing unit [302] at a load balancer (LB) unit [300a],
a request for routing data traffic associated with a policy execution
engine (PEEGN) module;
- receiving, by the processing unit [302] at the LB unit [300a], health
status information of a plurality of microservice instances connected
with the LB unit [300a], wherein the health status information of each
of the plurality of microservice instances is indicative of one of a
healthy instance and a malfunctioning instance;
- identifying, by the processing unit [302] at the LB unit [300a], one or
more healthy microservice instances from the plurality of microservice
instances based on the health status information; and
- distributing, by the processing unit [302] via the LB unit [300a], the
data traffic from the PEEGN module among the one or more healthy
microservice instances.
2. The method as claimed in claim 1, wherein the LB unit [300a] and the
PEEGN module are in communication via an interface.
3. The method as claimed in claim 1, wherein the method comprises
distributing, by the processing unit [302], the data traffic among the one or
more healthy microservice instances in a round robin manner.
4. The method as claimed in claim 1, wherein the method comprises storing,
by each of the plurality of microservice instances, the corresponding health
status information in at least one of a local cache associated with each of the
plurality of microservice instances, and an elastic search database.
5. The method as claimed in claim 1, wherein, in response to a healthy
microservice instance becoming a malfunctioning microservice instance,
the method comprises redirecting, by the processing unit [302] via the LB
unit [300a], the data traffic from the malfunctioning microservice instance
to a healthy microservice instance.
6. The method as claimed in claim 1, wherein the health status information of
the plurality of microservice instances is received by the processing unit
[302] via an orchestration manager.
7. The method as claimed in claim 1, wherein the method further comprises
redistributing data traffic associated with one or more malfunctioning
microservice instances from the plurality of microservice instances to the
one or more healthy microservice instances, wherein the one or more
malfunctioning microservice instances are identified based on the
corresponding health status information associated with each of the plurality
of microservice instances.
8. A system for distributing data traffic in a network, the system comprising:
- a processing unit [302] configured to:
- receive, at a load balancer (LB) unit [300a], a request for routing data
traffic associated with a policy execution engine (PEEGN) module;
- receive, at the LB unit [300a], health status information of a
plurality of microservice instances connected with the LB unit
[300a], wherein the health status information of each of the plurality
of microservice instances is indicative of one of a healthy instance
and a malfunctioning instance;
- identify, at the LB unit [300a], one or more healthy microservice
instances from the plurality of microservice instances based on the
health status information; and
- distribute, via the LB unit [300a], the data traffic from the PEEGN
module among the one or more healthy microservice instances.
9. The system as claimed in claim 8, wherein the LB unit [300a] and the
PEEGN module are in communication via an interface.
10. The system as claimed in claim 8, wherein the processing unit [302] is
configured to distribute the data traffic among the one or more healthy
microservice instances in a round robin manner.
11. The system as claimed in claim 8, wherein the processing unit [302] is
configured to store, by each of the plurality of microservice instances, the
corresponding health status information in at least one of a local cache
associated with each of the plurality of microservice instances, and an
elastic search database.
12. The system as claimed in claim 8, wherein, in response to a healthy
microservice instance becoming a malfunctioning microservice instance,
the processing unit [302] is configured to redirect, via the LB unit
[300a], the data traffic from the malfunctioning microservice instance to a
healthy microservice instance.
13. The system as claimed in claim 8, wherein the health status information of
the plurality of microservice instances is received by the processing unit
[302] via an orchestration manager.
14. The system as claimed in claim 8, wherein the processing unit [302] is
configured to redistribute a data traffic associated with one or more
malfunctioning microservice instances from the plurality of microservice
instances to the one or more healthy microservice instances, wherein the one
or more malfunctioning microservice instances are identified based on the
corresponding health status information associated with each of the plurality
of microservice instances.
