Abstract: The present disclosure relates to a system and method for distributing a traffic load using an interface. The disclosure encompasses receiving, using the interface, one or more requests at a load balancer; monitoring, using the interface, a health status of one or more registered applications and one or more operational microservice instances, in response to the received one or more requests; fetching, via an orchestrator manager, a real time health status of the one or more operational microservice instances; determining, using the interface, an optimal server from a plurality of servers based on the health status of the one or more operational microservice instances; and distributing, using the interface, the one or more requests among the one or more operational microservice instances to the optimal server. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR DISTRIBUTING A TRAFFIC
LOAD USING AN INTERFACE”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR DISTRIBUTING A TRAFFIC LOAD
USING AN INTERFACE
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to the field of
wireless communication systems. More particularly, embodiments of the present
disclosure relate to a method and a system for distributing a traffic load using an
interface.
BACKGROUND
[0002] The following description of related art is intended to provide background
information pertaining to the field of the disclosure. This section may include
certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. 3G technology
marked the introduction of high-speed internet access, mobile video calling, and
location-based services. The fourth-generation (4G) technology revolutionized
wireless communication with faster data speeds, better network coverage, and
improved security. Currently, the fifth-generation (5G) technology is being
deployed, promising even faster data speeds, low latency, and the ability to connect
multiple devices simultaneously. With each generation, wireless communication
technology has become more advanced, sophisticated, and capable of delivering
more services to its users.
[0004] In the 5G communication system, there is provided a plurality of network
functions (NFs), for example an Access and Mobility Management Function
(AMF), session management function (SMF), Authentication Server function
(AUSF), a Network Slice Selection Function (NSSF), Policy control function
(PCF), a Network Repository Function (NRF), Network Exposure Function (NEF),
Converged Charging Function (CHF), and the like. One or more of the aforementioned NFs communicate with each other to implement multiple
activities on the 5G communication system. For example, CHF is one of the key
network functions, which supports charging or billing services for user
consumption of services.
[0005] In communication networks, owing to the rapid growth of technology, different types of services and microservices have increased to provide support and services as per user and system consumption requirements. Under heavy traffic, requests to a particular microservice instance, or to an unhealthy instance, might fail, resulting in the failure of the system. Further, as traffic grows, the servers providing the requested services may become overworked and may not be capable of handling such heavy traffic demands. Therefore, the efficiency and performance of the system may drop below expectations and operational requirements.
[0006] Thus, there exists an imperative need in the art for a system and method for
efficiently distributing a traffic load, which the present disclosure aims to address.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0008] An aspect of the present disclosure may relate to a method for distributing
a traffic load using an interface. The method includes receiving, by a transceiver
unit using the interface, one or more requests at a load balancer. Next, the method
includes monitoring, by a monitoring unit using the interface, a health status of one
or more registered applications and one or more operational microservice instances,
in response to the received one or more requests. Next, the method includes
fetching, by a retrieval unit, via an orchestrator manager, a real time health status
of the one or more operational microservice instances. Next, the method includes
determining, by a determination unit using the interface, an optimal server from a
plurality of servers based on the health status of the one or more operational
microservice instances. Thereafter, the method includes distributing, by the
transceiver unit, using the interface, the one or more requests among the one or
more operational microservice instances to the optimal server.
[0009] In an exemplary aspect of the present disclosure, the one or more requests
comprise at least one of: a load balancer request, an HTTP request.
[00010] In an exemplary aspect of the present disclosure, the health status of
the one or more registered applications and the one or more operational
microservice instances comprises a positive health status and a negative health
status.
[00011] In an exemplary aspect of the present disclosure, the interface is a
Load Balancer-Service Adapter (LB-SA) interface to connect the load balancer
with the service adapter.
[00012] An aspect of the present disclosure may relate to a system for
distributing a traffic load using an interface. The system comprises a transceiver
unit configured to receive, using the interface, one or more requests at a load
balancer. The system further comprises a monitoring unit connected at least to the
transceiver unit, the monitoring unit is configured to monitor, using the interface, a
health status of one or more registered applications and one or more operational
microservice instances, in response to the received one or more requests. The
system further comprises a retrieval unit connected at least to the monitoring unit,
the retrieval unit is configured to fetch, via an orchestrator manager, a real time
health status of the one or more operational microservice instances. The system
further comprises a determination unit connected at least to the retrieval unit, the
determination unit is configured to determine, using the interface, an optimal server
from a plurality of servers based on the health status of the one or more operational
microservice instances. The transceiver unit is further configured to distribute,
using the interface, the one or more requests among the one or more operational
microservice instances to the optimal server.
[00013] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for distributing a
traffic load using an interface, the instructions include executable code which, when
executed by one or more units of a system, causes: a transceiver unit of the system
to receive, using the interface, one or more requests at a load balancer; a monitoring
unit of the system to monitor, using the interface, a health status of one or more
registered applications and one or more operational microservice instances, in
response to the received one or more requests; a retrieval unit of the system to fetch,
via an orchestrator manager, a real time health status of the one or more operational
microservice instances; a determination unit of the system to determine, using the
interface, an optimal server from a plurality of servers based on the health status of
the one or more operational microservice instances; and the transceiver unit of the
system to distribute, using the interface, the one or more requests among the one or
more operational microservice instances to the optimal server.
OBJECTS OF THE INVENTION
[00014] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.
[00015] It is an object of the present disclosure to provide a system and a
method for handling heavy load traffic on servers efficiently.
[00016] It is another object of the present disclosure to provide a system and
a method for distributing a traffic load using a Load Balancer-Service Adapter (LB-SA) interface.
[00017] It is yet another object of the present disclosure to provide a system
and a method for asynchronous event-based implementation to utilize the LB-SA
interface efficiently.
[00018] It is yet another object of the present disclosure to provide a system
and a method for performing fault tolerance by a load balancer for any failure in a high availability mode, such that if one inventory instance is down, then the next available instance takes care of the request.
DESCRIPTION OF THE DRAWINGS
[00019] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Also, the embodiments shown in the figures are
not to be construed as limiting the disclosure, but the possible variants of the method
and system according to the disclosure are illustrated herein to highlight the
advantages of the disclosure. It will be appreciated by those skilled in the art that
disclosure of such drawings includes disclosure of electrical components or
circuitry commonly used to implement such components.
[00020] FIG. 1 illustrates an exemplary block diagram of a management and
orchestration (MANO) architecture.
[00021] FIG. 2 illustrates an exemplary block diagram of a computing device
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure.
[00022] FIG. 3 illustrates an exemplary block diagram of a system for
distributing a traffic load using an interface, in accordance with exemplary
implementations of the present disclosure.
[00023] FIG. 4 illustrates a method flow diagram for distributing a traffic
load using an interface, in accordance with exemplary implementations of the
present disclosure.
[00024] FIG. 5 illustrates an exemplary block diagram of a system for
distributing a traffic load using an interface, in accordance with exemplary
implementations of the present disclosure.
[00025] FIG. 6 illustrates an exemplary block diagram for distributing a
traffic load using an interface, in accordance with exemplary implementations of
the present disclosure.
[00026] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
DETAILED DESCRIPTION
[00027] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[00028] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[00029] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[00030] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations may be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure.
[00031] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt,
the subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[00032] As used herein, a “processing unit” or “processor” or “operating
processor” includes one or more processors, wherein processor refers to any logic
circuitry for processing instructions. A processor may be a general-purpose
processor, a special purpose processor, a conventional processor, a digital signal
processor, a plurality of microprocessors, one or more microprocessors in
association with a (Digital Signal Processing) DSP core, a controller, a
microcontroller, Application Specific Integrated Circuits, Field Programmable
Gate Array circuits, any other type of integrated circuits, etc. The processor may
perform signal coding, data processing, input/output processing, and/or any other
functionality that enables the working of the system according to the present
disclosure. More specifically, the processor or processing unit is a hardware
processor.
[00033] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld
device”, “a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[00034] As used herein, “storage unit” or “memory unit” refers to a machine
or computer-readable medium including any mechanism for storing information in
a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory
(“RAM”), magnetic disk storage media, optical storage media, flash memory
devices or other types of machine-accessible storage media. The storage unit stores
at least the data that may be required by one or more units of the system to perform
their respective functions.
[00035] As used herein, “user interface” refers to a shared boundary across
which two or more separate components of a system exchange information or data.
The interface may also be referred to as a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[00036] All modules, units, components used herein, unless explicitly
excluded herein, may be software modules or hardware processors, the processors
being a general-purpose processor, a special purpose processor, a conventional
processor, a digital signal processor (DSP), a plurality of microprocessors, one or
more microprocessors in association with a DSP core, a controller, a
microcontroller, Application Specific Integrated Circuits (ASIC), Field
Programmable Gate Array circuits (FPGA), any other type of integrated circuits,
etc.
[00037] As used herein, the transceiver unit includes at least one receiver and
at least one transmitter configured respectively for receiving and transmitting data,
signals, information or a combination thereof between units/components within the
system and/or connected with the system.
[00038] As used herein, a container service adapter (CSA) (which may be a docker service adapter) refers to a unit, server, or service that facilitates communication between a container and other services or microservices.
[00039] As used herein, a load balancer refers to a device, unit, server, or
service that manages incoming and outgoing network traffic. The load balancer stores details of at least the available servers and microservice instances. It
manages traffic by directing and distributing incoming or outgoing traffic to healthy
servers or resources.
[00040] As used herein, a microservice refers to a unit, node, or server providing loosely coupled, independently deployable services specialized for performing specific functions related to management and optimization of network operations. Further, a microservice instance refers to a single deployment of the microservice.
The microservice instances may be used for scaling and load balancing. Each
microservice may have a unique identifier. Each microservice instance’s health and
performance are monitored during the operation. If any microservice instance's
health or performance degrades, the network may automatically, or manually,
replace the instance or transfer the traffic load to a healthy microservice instance.
[00041] As used herein, an orchestrator manager refers to a unit, node, service, or server which manages service operations of different microservices in the network. The orchestrator manager maintains records of the details of the operational microservices and shares details of the microservices with other microservices for operational communication.
[00042] As used herein, an Identity Access Management (IAM) node refers to a service, unit, or platform for providing defence against malicious or unauthorised login activity, and safeguards credentials by enabling risk-based access controls, ensuring identity protection and authentication processes.
[00043] As used herein, an Elastic Load Balancer (ELB) refers to a service, unit, or platform for managing and distributing incoming traffic efficiently
across a group of supported servers, microservices and units in a manner that may
increase speed and performance of the network.
[00044] As used herein, Event Routing Management (ERM) refers to a node, server, service, or platform for monitoring and triggering various actions or responses within the system based on a detected event. For example, if any microservice instance is down, the ERM may trigger an alert for taking an action to overcome the service breakdown condition in the network.
[00045] As used herein, a Central Log Management System (CLMS) refers to a service or platform which may collect log data from multiple sources and
may consolidate the collected data. This consolidated data is then presented on a
central interface which may be accessed by a user such as network administrator or
authorised person.
[00046] As used herein, an Elastic Search Cluster (ESC) refers to a group of servers or nodes that work together and form a cluster for distributing tasks,
searching and indexing across all the nodes in the cluster.
[00047] As discussed in the background section, the current known solutions
have several shortcomings. Thus, there exists an imperative need in the art to
provide an efficient system and method for handling heavy load traffic and
distribute the load among servers such that no server is overloaded or overworked.
The present method and system provide a Load balancer and CSA interface (LB-SA interface), which ensures that no server gets overloaded due to bulk traffic. The
present system and method provide the LB-SA interface, which distributes
incoming/outgoing requests easily among all SA instances. The present method and
system enable all instantiation, termination and Containerized Network Function
(CNF) information (metrics, state) operations on cloud which may be performed on
Management and Orchestration (MANO). The present method and system support
HTTP/HTTPS configurations in parallel. The present method and system route
client requests across all servers in a manner that maximizes speed and capacity
utilization. The present method and system may perform header-based routing
which may save time and database hits.
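By way of a non-limiting illustration of the header-based routing referred to above, the following simplified Python sketch shows one possible way a routing decision may be taken from a request header instead of a database lookup; the header name "X-Target-Service", the routing table, and the pool names are hypothetical and do not form part of the present disclosure.

def route_request(headers, routing_table, default_pool):
    # Route on a header value instead of a database lookup, which may save a database hit.
    service_key = headers.get("X-Target-Service")  # hypothetical header name
    return routing_table.get(service_key, default_pool)

routing_table = {"inventory-ms": ["sa-instance-1", "sa-instance-2"]}
pool = route_request({"X-Target-Service": "inventory-ms"}, routing_table, ["sa-default"])
print(pool)  # ['sa-instance-1', 'sa-instance-2']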
[00048] Hereinafter, exemplary embodiments of the present disclosure will
be described with reference to the accompanying drawings.
[00049] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with
exemplary implementation of the present disclosure. The MANO architecture [100]
is developed for managing telecom cloud infrastructure automatically, managing
design or deployment design, managing instantiation of a network node(s) etc. The
MANO architecture [100] deploys the network node(s) in the form of Virtual
Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
The system may comprise one or more components of the MANO architecture. The
MANO architecture [100] is used to auto-instantiate the VNFs into the
corresponding environment of the present disclosure so that it could help in
onboarding other vendor(s) CNFs and VNFs to the platform.
[00050] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer, a network function virtualization (NFV) and software defined
network (SDN) design function module [104]; a platforms foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[00051] The NFV and SDN design function module [104] further comprises
a VNF lifecycle manager (compute) [1042]; a VNF catalogue [1044]; a network
services catalogue [1046]; a network slicing and service chaining manager [1048];
a physical and virtual resource manager [1050] and a CNF lifecycle manager
[1052]. The VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] will manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
manager (compute) [1042] is responsible for determining which sequence is to be followed for executing the process, for example, in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1, P2, etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the
logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] is similarly used for the CNFs
lifecycle management.
[00052] The platforms foundation services module [106] further comprises
a microservices elastic load balancer [1062]; an identity & access manager [1064];
a command line interface (CLI) [1066]; a central logging manager [1068]; and an
event routing manager [1070]. The microservices elastic load balancer [1062] is
used for maintaining the load balancing of the request for the services. The identity
& access manager [1064] is used for logging purposes. The command line interface
(CLI) [1066] is used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] is
responsible for keeping the logs of every service. These logs are generated by the
MANO platform [100]. These logs are used for debugging purposes. The event
routing manager [1070] is responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[00053] The platforms core services module [108] further comprises NFV
infrastructure monitoring manager [1082]; an assure manager [1084]; a
performance manager [1086]; a policy execution engine [1088]; a capacity
monitoring manager [1090]; a release management (mgmt.) repository [1092]; a
configuration manager & (GCT) [1094]; an NFV platform decision analytics
[1096]; a platform NoSQL DB [1098]; a platform schedulers and cron jobs [1100];
a VNF backup & upgrade manager [1102]; a micro service auditor [1104]; and a
platform operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager
[1084] is responsible for supervising the alarms the vendor is generating. The
performance manager [1086] is responsible for managing the performance counters.
The policy execution engine (PEGN) [1088] is responsible for managing all the policies. The capacity monitoring manager [1090] is responsible for sending the
request to the PEGN [1088]. The release management (mgmt.) repository (RMR)
[1092] is responsible for managing the releases and the images of all the vendor
network node. The configuration manager & (GCT) [1094] manages the
configuration and GCT of all the vendors. The NFV platform decision analytics
(NPDA) [1096] helps in deciding the priority of using the network resources. It is
further noted that the policy execution engine (PEGN) [1088], the configuration
manager & (GCT) [1094] and the (NPDA) [1096] work together. The platform
NoSQL DB [1098] is a database for storing all the inventory (both physical and
logical) as well as the metadata of the VNFs and CNF. The platform schedulers and
cron jobs [1100] schedule tasks such as, but not limited to, triggering of an event, traversing the network graph, etc. The VNF backup & upgrade manager [1102] takes
backup of the images, binaries of the VNFs and the CNFs and produces those
backups on demand in case of server failure. The micro service auditor [1104]
audits the microservices. For example, in a hypothetical case where instances not being instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100], thereby assuring that the services only run on the MANO platform [100]. The platform operations,
administration and maintenance manager [1106] is used for newer instances that
are spawning.
[00054] The platform resource adapters and utilities module [112] further
comprises a platform external API adapter and gateway [1122]; a generic decoder
and indexer (XML, CSV, JSON) [1124]; a container swarm adapter (also referred
to as container service adapter) [1126]; an OpenStack API adapter [1128]; and a
NFV gateway [1130]. The platform external API adapter and gateway [1122] is
responsible for handling the external services (to the MANO platform [100]) that
requires the network resources. The generic decoder and indexer (XML, CSV,
JSON) [1124] directly gets the data of the vendor system in the XML, CSV, or JSON format. The container swarm adapter [1126] is the interface provided between the
telecom cloud and the MANO architecture [100] for communication. The
OpenStack API adapter [1128] is used to connect with the virtual machines (VMs).
The NFV gateway [1130] is responsible for providing the path to each service
going to/incoming from the MANO architecture [100].
[00055] FIG. 2 illustrates an exemplary block diagram of a computing device
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. The present
disclosure can be implemented on a computing device [200] (also referred herein
as a computer system [200]) upon which the features of the present disclosure may
be implemented in accordance with exemplary implementation of the present
disclosure. In an implementation, the computing device [200] may also implement
a method for distributing a traffic load using an interface utilising the system. In
another implementation, the computing device [200] itself implements the method
for distributing a traffic load using an interface using one or more units configured
within the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[00056] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[00057] A storage device [210], such as a magnetic disk, optical disk, or
solid-state drive is provided and coupled to the bus [202] for storing information
and instructions. The computing device [200] may be coupled via the bus [202] to
a display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[00058] The computing device [200] may implement the techniques
described herein using customized hard-wired logic, one or more ASICs or FPGAs,
firmware and/or program logic which in combination with the computing device
[200] causes or programs the computing device [200] to be a special-purpose
machine. According to one implementation, the techniques herein are performed by
the computing device [200] in response to the processor [204] executing one or
more sequences of one or more instructions contained in the main memory [206].
Such instructions may be read into the main memory [206] from another storage
medium, such as the storage device [210]. Execution of the sequences of
instructions contained in the main memory [206] causes the processor [204] to
perform the process steps described herein. In alternative implementations of the
present disclosure, hard-wired circuitry may be used in place of or in combination
with software instructions.
[00059] The computing device [200] also may include a communication
interface [218] coupled to the bus [202]. The communication interface [218]
provides a two-way data communication coupling to a network link [220] that is
connected to a local network [222]. For example, the communication interface
[218] may be an integrated services digital network (ISDN) card, cable modem,
satellite modem, or a modem to provide a data communication connection to a
corresponding type of telephone line. As another example, the communication
interface [218] may be a local area network (LAN) card to provide a data
communication connection to a compatible LAN. Wireless links may also be
implemented. In any such implementation, the communication interface [218]
sends and receives electrical, electromagnetic or optical signals that carry digital
data streams representing various types of information.
[00060] The computing device [200] can send messages and receive data,
including program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], the host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[00061] Referring to FIG. 3, an exemplary block diagram of a system [300]
for distributing a traffic load using an interface is shown, in accordance with the
exemplary implementations of the present disclosure. The system [300] comprises
at least one transceiver unit [302], at least one monitoring unit [304], at least one
retrieval unit [306], and at least one determination unit [308]. Also, all of the
components/ units of the system [300] are assumed to be connected to each other
unless otherwise indicated below. Also, in FIG. 3 only a few units are shown,
however, the system [300] may comprise multiple such units or the system [300]
may comprise any such numbers of said units, as required to implement the features
of the present disclosure. Further, in an implementation, the system [300] may be
present in a user device to implement the features of the present invention. The
system [300] may be a part of the user device or may be independent of, but in communication with, the user device (may also be referred to herein as a UE). In another
implementation, the system [300] may reside in a server or a network entity. In yet
another implementation, the system [300] may reside partly in the server/ network
entity and partly in the user device.
[00062] The system [300] is configured for distributing a traffic load using
an interface, with the help of the interconnection between the components/units of
the system [300].
[00063] The system [300] comprises a transceiver unit [302]. The transceiver
unit [302] is configured to receive, using the interface, one or more requests at a
load balancer. The transceiver unit [302] is configured to receive the one or more
requests such as, but not limited to, at least one of: a load balancer request, a
hypertext transfer protocol (HTTP) request at the load balancer. The one or more
requests may be associated with incoming or outgoing network traffic to balance traffic load in the network. The interface is a Load Balancer-Service Adapter (LB-SA) interface to connect the load balancer with the service adapter or container
service adapter. In an example, the one or more requests are received at the load
balancer from a northbound interface (NBI). The northbound interface (NBI) refers
to a communication interface between a lower layer in a network architecture (like
network management or control systems) and higher-level systems (such as
business applications or service orchestration platforms). It allows the upper-level
systems to monitor, control, and manage the lower-level network functions or
resources.
[00064] The system [300] includes a monitoring unit [304]. The monitoring
unit [304] is communicatively connected at least to the transceiver unit [302]. The
monitoring unit [304] is configured to monitor, using the interface, a health status
of one or more registered applications and one or more operational microservice
instances, in response to the received one or more requests. The monitoring unit
[304] monitors, using the LB-SA interface, the health status of the one or more
registered applications and the one or more operational microservice instances such
as a positive health status and a negative health status. In an implementation, the
one or more registered applications refer to any application or service that is registered within a network management or orchestration framework. Further, the
one or more registered applications keep track of their details such as location,
status, and available resources. In an exemplary implementation, the one or more
operational microservice (MS) instances refer to, such as, but not limited to,
inventory microservice and container service adapter (CSA). In an exemplary
25 embodiment, the container service adapter may be a docker service adapter. In an
implementation, the registered application may be associated with the operational
microservice instances, such as registration details procedure application for
inventory microservice.
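Purely as an illustrative, non-limiting sketch of the above monitoring step, the following Python fragment classifies instances into a positive or negative health status, assuming each instance exposes a health probe returning True when healthy; the instance names and the probe mechanism are hypothetical.

def monitor_health(instances):
    # instances: mapping of instance name -> callable health probe
    return {name: ("positive" if probe() else "negative") for name, probe in instances.items()}

instances = {"MS-1 instance-1": lambda: True, "MS-2 instance-2": lambda: False}
print(monitor_health(instances))
# {'MS-1 instance-1': 'positive', 'MS-2 instance-2': 'negative'}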
[00065] The system [300] comprises a retrieval unit [306]. The retrieval unit
[306] is connected at least to the monitoring unit [304]. The retrieval unit [306] is
configured to fetch, via an orchestrator manager, a real time health status of the one
or more operational microservice instances. After receiving the health status (e.g.,
positive or negative) from the monitoring unit [304], the retrieval unit [306] may
fetch the real-time health status of the one or more operational microservice
instances from the orchestrator manager. In an exemplary implementation, the one
or more microservice (MS) instances may be such as ‘MS-1 instance-1’, ‘MS-2
instance-2’ and ‘MS-n instance-n’. The retrieval unit [306] may fetch real-time
health status such as the positive health status of the ‘MS-1 instance-1’, the negative
health status of ‘MS-2 instance-2’, and the positive health status of ‘MS-n instance-n’ via the orchestrator manager. The orchestrator manager stores real time health
status of the one or more operational microservice instances.
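As a simplified, non-limiting sketch of the fetching step, the orchestrator manager may be modelled as a registry that keeps the latest health status reported for each microservice instance and returns it on demand; the class and method names below are hypothetical and used only for illustration.

class OrchestratorManager:
    def __init__(self):
        self._health = {}  # instance name -> "positive" / "negative"

    def update(self, instance, status):
        # Called whenever an instance reports (or is observed with) a new health status.
        self._health[instance] = status

    def real_time_health(self, instances):
        # Return the latest known status; unknown instances default to "negative".
        return {i: self._health.get(i, "negative") for i in instances}

om = OrchestratorManager()
om.update("MS-1 instance-1", "positive")
om.update("MS-2 instance-2", "negative")
print(om.real_time_health(["MS-1 instance-1", "MS-2 instance-2", "MS-n instance-n"]))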
[00066] The system [300] comprises a determination unit [308]. The
determination unit [308] is connected at least to the retrieval unit [306]. The
determination unit [308] is configured to determine, using the interface, an optimal
server from a plurality of servers based on the health status of the one or more
operational microservice instances. Based on the health status (e.g., positive health
or negative health) of the one or more operational microservice instances, the
determination unit [308] is configured to determine the optimal server from the
plurality of servers. The load balancer may store details of the plurality of servers
such as active servers, capacity of servers, available servers, server location and
storage. The determination unit [308] may determine the optimal server based on
the stored details of the plurality of servers in the load balancer. In an
implementation, the determination unit [308] may use one or more selection algorithms or sets of instructions for selecting the optimal server.
[00067] The system [300] comprises the transceiver unit [302]. The
transceiver unit [302] is further configured to distribute, using the interface, the one
or more requests among the one or more operational microservice instances to the
optimal server. After determining the optimal server via the determination unit
[308], the transceiver unit [302] is further configured to distribute the one or more
requests (e.g., incoming or outgoing) among the one or more operational
microservice instances to the optimal server using the LB-SA interface.
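By way of example only, the determination and distribution steps described above may be sketched as follows in Python, using one possible selection rule (least-loaded among available servers whose associated instances report a positive health status); the server records, field names, and the least-load rule are assumptions made for illustration and are not limiting.

def select_optimal_server(servers, health):
    candidates = [s for s in servers
                  if s["available"] and health.get(s["instance"]) == "positive"]
    if not candidates:
        return None
    # One possible notion of "optimal": lowest utilisation ratio.
    return min(candidates, key=lambda s: s["load"] / s["capacity"])

def distribute(requests, server):
    # Stand-in for forwarding each request over the LB-SA interface to the chosen server.
    return [(req, server["name"]) for req in requests]

servers = [
    {"name": "server-A", "instance": "MS-1 instance-1", "capacity": 100, "load": 40, "available": True},
    {"name": "server-B", "instance": "MS-2 instance-2", "capacity": 100, "load": 10, "available": True},
]
health = {"MS-1 instance-1": "positive", "MS-2 instance-2": "negative"}
optimal = select_optimal_server(servers, health)
print(distribute(["req-1", "req-2"], optimal))  # both requests go to server-A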
[00068] In an exemplary implementation, the load balancer scales at least one
of the one or more registered applications and one or more operational microservice
instances based on the traffic load. In an implementation, the load balancer is
configured to monitor microservices such as inventory microservice instances and
one or more registered applications associated with the registration of details of other microservices. If any inventory instance is down and/or its performance is degrading, then the load balancer performs scaling of the inventory microservice instance to manage the traffic load.
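A minimal, non-limiting sketch of such a scaling decision is given below; the utilisation threshold and the instance states are hypothetical values chosen only to illustrate when scaling of an inventory microservice instance might be triggered.

def needs_scale_out(instance_states, utilisation, max_utilisation=0.8):
    # Scale out if any instance is down (negative) or overall utilisation is too high.
    any_down = any(state == "negative" for state in instance_states.values())
    return any_down or utilisation > max_utilisation

states = {"inventory-instance-1": "positive", "inventory-instance-2": "negative"}
if needs_scale_out(states, utilisation=0.65):
    print("scale out: launch an additional inventory microservice instance")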
[00069] Further, in accordance with the present disclosure, it is to be
acknowledged that the functionality described for the various components/units
can be implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
[00070] Referring to FIG. 4, an exemplary method flow diagram [400] for
distributing a traffic load using an interface, in accordance with exemplary
implementations of the present disclosure is shown. In an implementation the
method [400] is performed by the system [300]. As shown in FIG. 4, the method
[400] starts at step [402].
[00071] At step [404], the method [400] as disclosed by the present
disclosure comprises receiving, by a transceiver unit [302] using the interface, one
or more requests at a load balancer. The transceiver unit [302] may receive the one
or more requests such as but not limited to, at least one of: a load balancer request,
a hypertext transfer protocol (HTTP) request from the load balancer. The one or
more requests may be associated with incoming or outgoing network traffic to
balance the traffic load in the network. The interface is a Load Balancer-Service
Adapter (LB-SA) interface to connect the load balancer with the service adapter or
container service adapter. In an example, the one or more requests are received at
the load balancer from a northbound interface (NBI). The northbound interface
(NBI) refers to a communication interface between a lower layer in a network
architecture (like network management or control systems) and higher-level
systems (such as business applications or service orchestration platforms). It allows
the upper-level systems to monitor, control, and manage the lower-level network
functions or resources.
[00072] Next, at step [406], the method [400] as disclosed by the present
disclosure comprises monitoring, by a monitoring unit [304] using the interface, a
health status of one or more registered applications and one or more operational
microservice instances, in response to the received one or more requests. The
monitoring unit [304] monitors, using the LB-SA interface, the health status of the
one or more registered applications and the one or more operational microservice
instances such as a positive health status and a negative health status. In an
implementation, the one or more registered applications refer to any application or service that is registered within a network management or orchestration framework.
Further, the one or more registered applications keep track of their details such as
location, status, and available resources. In an exemplary implementation, the one
or more operational microservice (MS) instances refer to, such as, but not limited
to, instances associated with inventory microservice and container service adapter
(CSA). In an exemplary implementation, the container service adapter may be a
docker service adapter. In an implementation, the registered application may be
associated with the operational microservice instances, such as registration details
procedure application for inventory microservice.
[00073] Next, at step [408], the method [400] as disclosed by the present
disclosure comprises fetching, by a retrieval unit [306], via an orchestrator
manager, a real time health status of the one or more operational microservice
instances. After receiving the health status (e.g., positive or negative) from the
monitoring unit [304], the retrieval unit [306] may fetch the real time health status
of the one or more operational microservice instances via the orchestrator manager.
In an exemplary implementation, the one or more microservice (MS) instances may
be such as ‘MS-1 instance-1’, ‘MS-2 instance-2’ and ‘MS-n instance-n’. The
retrieval unit [306] may fetch real time health status such as positive health status
of the ‘MS-1 instance-1’, negative health status of ‘MS-2 instance-2’, and positive
health status of ‘MS-n instance-n’ at the orchestrator manager. The orchestrator
manager stores the real time health status of the one or more operational microservice instances.
[00074] Next, at step [410], the method [400] as disclosed by the present
disclosure comprises determining, by a determination unit [308] using the interface,
an optimal server from a plurality of servers based on the health status of the one or
more operational microservice instances. Based on the health status (e.g., positive
health or negative health) of the one or more operational microservice instances,
the determination unit [308] may determine the optimal server from the plurality of
servers. The load balancer may store details of the plurality of servers such as
active servers, capacity of servers, available servers, server location and storage.
The determination unit [308] may determine the optimal server based on the stored
details of the plurality of servers in the load balancer. In an implementation, the
determination unit [308] may use one or more selection algorithms or sets of instructions for selecting the optimal server.
[00075] Next, at step [412], the method [400] as disclosed by the present
disclosure comprises distributing, by the transceiver unit [302], using the interface,
the one or more requests among the one or more operational microservice instances
to the optimal server. After determining the optimal server via the determination
unit [308], the transceiver unit [302] may further distribute the one or more requests
(e.g., incoming or outgoing) among the one or more operational microservice
instances to the optimal server using the LB-SA interface.
[00076] Thereafter, the method [400] terminates at step [414].
[00077] FIG. 5 shows an exemplary block diagram of a system [500] for
distributing a traffic load using an interface, in accordance with exemplary
implementations of the present disclosure. As shown in FIG. 5, the system [500]
comprises a load balancer (LB) [502] node and a container service adapter (CSA or
SA) node [504]. More particularly, the LB-SA interface connects the load balancer
with the Container service adapter (CSA). In an exemplary embodiment, the
container service adapter may be a docker service adapter. The role of the LB-SA
interface is to distribute all incoming and/or outgoing requests to balance load
equally in the CSA or SA service. The LB-SA interface automatically distributes
incoming traffic across multiple instances of the CSA service. As used herein, the CSA
is a microservices-based system designed to deploy and manage Container Network
Functions (CNFs) and their components (CNFCs) across container nodes. In an
exemplary implementation, the container node may be a docker node. The CSA
adapter offers representational state transfer (REST) endpoints for key operations,
including uploading container images to a container registry, terminating CNFC
instances, and creating docker volumes and networks. In an embodiment, the
container registry may be a docker registry. Further, in an embodiment, the
container volumes and networks may be docker volumes and networks. CNFs,
which are network functions packaged as containers, may consist of multiple
CNFCs. The CSA facilitates the deployment, configuration, and management of
these components by interacting with container's API, ensuring proper setup and
scalability within a containerized environment. In an embodiment, the container’s
API may be a docker’s API. This approach provides a modular and flexible
framework for handling network functions in a virtualized network setup. Using the
interface, the load balancer monitors a health status of one or more registered
applications and one or more operational microservice instances, in response to the
received one or more requests. Further, using the interface, the load balancer
determines an optimal server from a plurality of servers based on the health status
of the one or more operational microservice instances, fetched via an orchestrator
manager. Thereafter, using the interface, the load balancer distributes the one or
more requests among the one or more operational microservice instances to the
optimal server.
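For illustration only, the REST endpoints offered by the CSA may be invoked by a client in the manner sketched below; the base address, endpoint paths, and payload fields are hypothetical placeholders and are not part of the present disclosure (the sketch uses the Python requests library).

import requests

CSA_BASE = "http://csa.example.local:8080"  # assumed CSA address

def upload_image(registry, image_ref):
    # Upload a container image reference to the container registry via the CSA.
    return requests.post(f"{CSA_BASE}/images", json={"registry": registry, "image": image_ref})

def terminate_cnfc(instance_id):
    # Terminate a running CNFC instance.
    return requests.delete(f"{CSA_BASE}/cnfc-instances/{instance_id}")

def create_volume(name):
    # Create a container volume for a CNF component.
    return requests.post(f"{CSA_BASE}/volumes", json={"name": name})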
[00078] Referring to FIG. 6, an exemplary block diagram [600] for
distributing a traffic load using an interface is shown, in accordance with exemplary implementations of the present disclosure. As shown in FIG. 6, the system [600]
comprises a User Interface (UI/UX) [602], Identity Access Management (IAM)
[604] node, Elastic Load Balancer (ELB1) [606a] node, ELB2 [606b] node, Event
Routing Management (ERM) node [608], ELB [610a-610b], Container Service
Adapter (CSA) [616a-616n], Orchestrator manager (OM) [612], Central Log
management System (CLMS) node [614] and Elastic Search Cluster [618].
[00079] As used herein, a container service adapter (CSA) (which may be a docker service adapter) refers to a unit, server, or service that facilitates communication between a container and other services or microservices.
[00080] As used herein, the orchestrator manager [612] refers to a unit, node, service, or server which manages service operations of different microservices in the network. The orchestrator manager maintains records of the details of the operational microservices and shares details of the microservices with other microservices for operational communication.
[00081] As used herein, Identity Access Management (IAM) [604] refers to a service, unit, or platform for providing defence against malicious or unauthorised login activity, and safeguards credentials by enabling risk-based access controls, ensuring identity protection and authentication processes.
[00082] As used herein, an Elastic Load Balancer (ELB) [606] refers to a service, unit, or platform for managing and distributing incoming traffic efficiently across a group of supported servers, microservices, and units in a manner that may increase speed and performance of the network.
[00083] As used herein, Event Routing Management (ERM) [608] refers to a node, server, service, or platform for monitoring and triggering various actions or responses within the system based on a detected event. For example, if any microservice instance is down, the ERM may trigger an alert for taking an action to overcome the service breakdown condition in the network.
[00084] As used herein, a Central Log Management System (CLMS) [614] refers to a service or platform which may collect log data from multiple sources and may consolidate the collected data. This consolidated data is then
presented on a central interface which may be accessed by a user such as network
administrator or authorised person.
[00085] As used herein, an Elastic Search Cluster (ESC) [618] refers to a group of servers or nodes that work together and form a cluster for distributing tasks,
searching and indexing across all the nodes in the cluster.
[00086] In an implementation, Microservices (MS) instances, such as CSA
instances, run in n-way active mode. Each MS instance or CSA instance is served with a pair of Elastic Load Balancers (ELBs). The ELB distributes the load
on MS instances in a round-robin manner. ELB ensures that the event
acknowledgment against any event that is sent by the MS instance to the subscribed
MS is returned to the same MS instance that has published the event. Further, all
microservices not only maintain the state information in their local cache but
also persist it in the Elastic Search database. In case one of the MS instances goes
down, the orchestrator manager detects it and broadcasts the status to other running
MS instances and also the ELB serving the MS. The ELB as such distributes the
ingress traffic on the remaining available instances. The n-way active model for
deployment of MS instances ensures the availability of a microservice to serve the
traffic even if any instance goes down. In an embodiment, one of the available MS
instances takes ownership of the instance which has gone down. It fetches the state
information of the incomplete transactions being served by the instance that has gone down from the ES and re-executes them. In case any transaction has not persisted, there may
be a timeout, and the publisher MS of that event will re-transmit the same event for
execution.
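The n-way active behaviour described above may be sketched, purely as a non-limiting example, with a round-robin selector that skips instances reported as down by the orchestrator manager; the instance names and the simple up/down states are assumptions made for illustration.

import itertools

class RoundRobinELB:
    def __init__(self, instances):
        self.instances = dict(instances)              # instance name -> "up" / "down"
        self._cycle = itertools.cycle(list(self.instances))

    def mark_down(self, name):
        # Invoked when the orchestrator manager broadcasts that an instance has gone down.
        self.instances[name] = "down"

    def next_instance(self):
        # Round-robin over the remaining available instances.
        for _ in range(len(self.instances)):
            candidate = next(self._cycle)
            if self.instances[candidate] == "up":
                return candidate
        raise RuntimeError("no microservice instance available to serve the traffic")

elb = RoundRobinELB({"CSA-1": "up", "CSA-2": "up", "CSA-3": "up"})
elb.mark_down("CSA-2")
print([elb.next_instance() for _ in range(4)])  # ingress traffic now flows only to CSA-1 and CSA-3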
[00087] The present disclosure may relate to a non-transitory computer
readable storage medium storing instructions for distributing a traffic load using an
interface, the instructions include executable code which, when executed by one or
more units of a system, causes: a transceiver unit [302] of the system to receive,
using the interface, one or more requests at a load balancer; a monitoring unit [304]
connected at least to the transceiver unit [302], the monitoring unit [304] of the
system to monitor, using the interface, a health status of one or more registered
applications and one or more operational microservice instances, in response to the
received one or more requests; a retrieval unit [306] connected at least to the
monitoring unit [304], the retrieval unit [306] of the system to fetch, via an
25 orchestrator manager, a real time health status of the one or more operational
microservice instances; a determination unit [308] connected at least to the retrieval
unit [306], the determination unit [308] of the system to determine, using the
interface, an optimal server from a plurality of servers based on the health status of
the one or more operational microservice instances; and the transceiver unit [302]
30
of the system further to distribute, using the interface, the one or more requests
among the one or more operational microservice instances to the optimal server.
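As an illustrative, non-limiting sketch (not part of the claimed subject matter), the Python snippet below walks through the unit flow described above: receive a request, refresh the real-time health view via an orchestrator, select an optimal server, and distribute the request to it. The data structures, names, and the "least-loaded" selection rule are assumptions made only for illustration.

```python
# Sketch of the receive -> monitor -> fetch health -> determine -> distribute
# flow, using assumed names and a simple least-loaded selection rule.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Server:
    name: str
    healthy: bool
    active_requests: int

def fetch_real_time_health(orchestrator: Dict[str, bool],
                           servers: List[Server]) -> List[Server]:
    # Retrieval step: refresh each server's health from the orchestrator view.
    for s in servers:
        s.healthy = orchestrator.get(s.name, False)
    return servers

def determine_optimal_server(servers: List[Server]) -> Server:
    # Determination step: among healthy servers, prefer the least-loaded one.
    healthy = [s for s in servers if s.healthy]
    return min(healthy, key=lambda s: s.active_requests)

def distribute(request: dict, servers: List[Server],
               orchestrator: Dict[str, bool]) -> str:
    # Distribution step: route the received request to the optimal server.
    target = determine_optimal_server(fetch_real_time_health(orchestrator, servers))
    target.active_requests += 1
    return target.name

servers = [Server("srv-a", True, 4), Server("srv-b", True, 1), Server("srv-c", True, 9)]
orchestrator_view = {"srv-a": True, "srv-b": True, "srv-c": False}
print(distribute({"path": "/events"}, servers, orchestrator_view))  # srv-b
```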
[00088] As is evident from the above, the present disclosure provides a
technically advanced solution: an efficient system and method for handling heavy
traffic load and distributing the load among servers such that no server is overloaded
or overworked. The present method and system provide a Load Balancer-Service
Adapter (LB-SA) interface, which ensures that no server gets overloaded due to bulk
traffic. The LB-SA interface distributes incoming and outgoing requests easily
among all SA instances. The present method and system enable all instantiation,
termination, and CNF information (metrics, state) operations on the cloud, which
may be performed through Management and Orchestration (MANO). The present
method and system support HTTP and HTTPS configurations in parallel. The
present method and system route client requests across all servers in a manner that
maximizes speed and capacity utilization. The present method and system may
perform header-based routing, which may save time and database hits.
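By way of a further illustrative, non-limiting sketch (not part of the specification), the snippet below shows header-based routing of the kind mentioned above, where the route is chosen from an HTTP header so that no database lookup is needed per request; the header name and routing table entries are hypothetical.

```python
# Sketch of header-based routing: the target service adapter is selected
# purely from a request header, avoiding a database hit per request.
from typing import Dict

# Hypothetical routing table mapping an adapter identifier to its endpoint.
ROUTING_TABLE: Dict[str, str] = {
    "csa": "http://csa-adapter.internal",
    "nsa": "http://nsa-adapter.internal",
}

def route_by_header(headers: Dict[str, str]) -> str:
    # Read the (assumed) X-Target-Adapter header and look up the endpoint.
    target = headers.get("X-Target-Adapter", "").lower()
    if target not in ROUTING_TABLE:
        raise ValueError(f"no route for adapter '{target or '<missing>'}'")
    return ROUTING_TABLE[target]

print(route_by_header({"X-Target-Adapter": "CSA"}))  # http://csa-adapter.internal
```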
[00089] While considerable emphasis has been placed herein on the
disclosed embodiments, it will be appreciated that many embodiments can be made
and that many changes can be made to the embodiments without departing from the
principles of the present disclosure. These and other changes in the embodiments
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter is to be considered illustrative
and non-limiting.
We Claim:
1. A method for distributing a traffic load using an interface, the method
comprising:
receiving, by a transceiver unit [302] using the interface, one or more
requests at a load balancer;
monitoring, by a monitoring unit [304] using the interface, a health
status of one or more registered applications and one or more operational
microservice instances, in response to the received one or more requests;
fetching, by a retrieval unit [306], via an orchestrator manager, a real
time health status of the one or more operational microservice instances;
determining, by a determination unit [308] using the interface, an
optimal server from a plurality of servers based on the health status of the
one or more operational microservice instances; and
distributing, by the transceiver unit [302], using the interface, the
one or more requests among the one or more operational microservice
instances to the optimal server.
2. The method as claimed in claim 1, wherein the one or more requests
comprises at least one of: a load balancer request, or an HTTP request.
3. The method as claimed in claim 1, wherein the health status of the one or
more registered applications and the one or more operational microservice
instances comprises a positive health status and a negative health status.
4. The method as claimed in claim 1, wherein the interface is a Load
Balancer-Service Adapter (LB-SA) interface to connect the load balancer
with the service adapter.
5. A system for distributing a traffic load using an interface, the system
comprising:
a transceiver unit [302] configured to receive, using the interface,
one or more requests at a load balancer;
a monitoring unit [304] connected at least to the transceiver unit
[302], the monitoring unit is configured to monitor, using the interface, a
health status of one or more registered applications and one or more
operational microservice instances, in response to the received one or more
requests;
a retrieval unit [306] connected at least to the monitoring unit [304],
the retrieval unit [306] is configured to fetch, via an orchestrator manager,
a real time health status of the one or more operational microservice
instances;
a determination unit [308] connected at least to the retrieval unit
[306], the determination unit [308] is configured to determine, using the
interface, an optimal server from a plurality of servers based on the health
status of the one or more operational microservice instances; and
the transceiver unit [302] is further configured to distribute, using
the interface, the one or more requests among the one or more operational
microservice instances to the optimal server.
6. The system as claimed in claim 5, wherein the one or more requests
comprises at least one of: a load balancer request, or an HTTP request.
7. The system as claimed in claim 5, wherein the health status of the one or
more registered applications and the one or more operational microservice
instances comprises a positive health status and a negative health status.
8. The system as claimed in claim 5, wherein the interface is a Load Balancer-Service Adapter (LB-SA) interface to connect the load balancer with the service adapter.