Abstract: The present disclosure relates to a method and a system for load balancing of requests. The method comprises facilitating, by a facilitation unit [302], a registration of a plurality of platform scheduler (PS) instances and a plurality of load balancer (LB) instances based on configuring operation and management (OAM) server details. The method further comprises receiving, via a communication unit [304], one or more incoming requests and one or more outgoing requests, wherein each of the incoming request and the outgoing request is associated with a request context. Furthermore, the method comprises distributing, by a distribution unit [306] via the at least one LB instance, the one or more incoming requests and the one or more outgoing requests among the plurality of PS instances, wherein the distribution is based on the request context associated with each of the incoming request and the outgoing request. FIG. 4
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“SYSTEM AND METHOD FOR LOAD BALANCING OF
REQUESTS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
SYSTEM AND METHOD FOR LOAD BALANCING OF REQUESTS
FIELD OF INVENTION
[0001] Embodiment of the present disclosure generally relates to the field of
network performance management. More particularly, embodiments of the present
disclosure relate to systems and methods for load balancing of requests.
BACKGROUND
[0002] The following description of related art is intended to provide background
information pertaining to the field of the disclosure. This section may include
certain aspects of the art that may be related to various features of the present
disclosure. However, it should be appreciated that this section be used only to
enhance the understanding of the reader with respect to the present disclosure, and
not as an admission of prior art.
[0003] A scheduler service is a system that manages the execution of jobs, typically
based on a schedule or some other trigger. A scheduler service with event-driven
architecture, makes the jobs highly available, compatible with distributed
environments, extendable and monitorable. With the right technology stack and
design, one can develop a custom scheduler service that meets specific needs. The
scheduling systems are integrated with microservices architecture to optimize
computational resources and enhance the performance of applications. Schedulers
play an essential role in the management of computational resources. They are
responsible for allocating resources to various tasks, ensuring that each task
receives the resources it requires to execute efficiently. In a microservices
environment, a scheduler can be used to manage the distribution of tasks among the
various services, ensuring that the overall system operates efficiently. Schedulers
are particularly important in a microservices environment because they help to
manage the complexity of dealing with multiple, independent services. They can
help to ensure that each service is given the resources it needs to function effectively
and can also help to manage the interdependencies between services, ensuring that
they work together effectively. However, the current network systems face a critical
challenge in efficiently managing and scheduling jobs/tasks within various network
components such as microservice(s). The scheduler services for task creation and
scheduling are struggling to effectively coordinate with the network functions.
[0004] Moreover, the primary function of the network component(s), such as a capacity and performance
monitoring manager/ capacity monitoring manager (CP), revolves
around monitoring resource usages, including CPU, RAM, storage, bandwidth, and
various parameters. The CP primarily interacts with a centralised platform such as
platform scheduler & cron job (PSC) service by continuously sending queries and
receiving event acknowledgments for breached events, wherein the resource usage
may end up surpassing predefined threshold values. Further, the core services of the
PSC service are struggling to effectively coordinate with the CP. Therefore, this
process has proven to be inefficient and prone to delays, leading to suboptimal task
scheduling in the network systems.
[0005] Hence, in view of these and other existing limitations, there arises an
imperative need to provide an efficient solution to overcome the above-mentioned
and other limitations and to provide a method and a system for load balancing of
requests.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0007] An aspect of the present disclosure may relate to a method for load
balancing of requests. The method comprises facilitating, by a facilitation unit, a
registration of a plurality of platform scheduler (PS) instances and a plurality of
load balancer (LB) instances based on configuring operation and management
(OAM) server details. The method further comprises receiving, via a
communication unit, one or more incoming requests and one or more outgoing
requests, wherein each of the incoming request and the outgoing request is
associated with a request context. Furthermore, the method comprises distributing,
by a distribution unit via the at least one LB instance, the one or more incoming
requests and the one or more outgoing requests among the plurality of PS instances,
wherein the distribution is based on the request context associated with each of the
incoming request and the outgoing request.
[0008] In an exemplary aspect of the present disclosure, the one or more incoming
requests and the one or more outgoing requests are distributed among the plurality
of PS instances based on a routing policy configured at the at least one LB instance.
[0009] In an exemplary aspect of the present disclosure, the registration of the
plurality of PS instances and the at least one LB instance facilitates scalability of
PS service.
[0010] In an exemplary aspect of the present disclosure, the one or more incoming
requests comprise creating and scheduling task requests.
[0011] In an exemplary aspect of the present disclosure, the one or more outgoing
requests comprise notifications associated with the creating and scheduling task
requests.
[0012] In an exemplary aspect of the present disclosure, the request context
associated with each of the one or more incoming requests comprises at least an
Application Protocol Interface (API) creation, a File Transfer Protocol (FTP), an
event creation and a query.
[0013] In an exemplary aspect of the present disclosure, the request context
associated with each of the one or more outgoing requests comprises at least a
response code.
[0014] In an exemplary aspect of the present disclosure, the PS and LB are
communicatively coupled using the PS_LB interface.
[0015] Another aspect of the present disclosure may relate to a system for load
balancing of requests. The system comprises a facilitation unit configured to
facilitate a registration of a plurality of platform scheduler (PS) instances and a
plurality of load balancer (LB) instances by configuring operation and
management (OAM) server details. Further, the system comprises a communication
unit connected at least with the facilitation unit, wherein the communication unit is
configured to receive one or more incoming requests and one or more outgoing
requests, wherein each of the incoming request and the outgoing request is
associated with a request context. Furthermore, the system comprises a distribution unit connected at least
with the communication unit, wherein the distribution unit is configured to
distribute, via the at least one LB instance, the one or more incoming requests and
the one or more outgoing requests among the plurality of PS instances, wherein the
distribution is based on the request context associated with each of the incoming
request and the outgoing request.
[0016] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing one or more instructions for load
balancing of requests, the instructions include executable code which, when
executed by one or more units of a system, causes a facilitation unit of the system
to facilitate a registration of a plurality of platform scheduler (PS) instances and a
plurality of load balancer (LB) instances by configuring operation and
management (OAM) server details. Further, the executable code when executed
causes a communication unit of the system to receive one or more incoming
requests and one or more outgoing requests, wherein each of the incoming request
and the outgoing request is associated with a request context. Furthermore, the
executable code when executed causes a distribution unit to distribute, via the at
least one LB instance, the one or more incoming requests and the one or more
outgoing requests among the plurality of PS instances, wherein the distribution is
based on the request context associated with each of the incoming request and the
outgoing request.
OBJECT OF THE DISCLOSURE
[0017] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0018] It is an object of the present disclosure to provide a system and a method for
load balancing of requests.
[0019] It is another object of the present disclosure to provide a solution to improve
the scalability of the PS service to handle the requests.
[0020] It is yet another object of the present disclosure to provide a solution to
distribute incoming request load to provide high scalability of PS microservice
instances.
BRIEF DESCRIPTION OF DRAWINGS
[0021] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0022] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture, in accordance with
exemplary implementation of the present disclosure.
[0023] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented, in accordance
with exemplary implementation of the present disclosure.
[0024] FIG. 3 illustrates an exemplary block diagram of a system for load balancing
of requests, in accordance with exemplary implementation of the present disclosure.
[0025] FIG. 4 illustrates an exemplary method flow diagram for load balancing of
requests, in accordance with exemplary implementation of the present disclosure.
[0026] FIG. 5 illustrates an exemplary system architecture of a platform scheduler
and cron jobs (PSC/PS) microservice, in accordance with exemplary
implementations of the present disclosure.
[0027] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0028] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter can each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above. Some of the problems discussed above might not be
fully addressed by any of the features described herein. Example embodiments of
the present disclosure are described below, as illustrated in various drawings in
which like reference numerals refer to the same parts throughout the different
drawings.
[0029] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0030] It should be noted that the terms "mobile device", "user equipment", "user
device", “communication device”, “device” and similar terms are used
interchangeably for the purpose of describing the disclosure. These terms are not
intended to limit the scope of the disclosure or imply any specific functionality or
limitations on the described embodiments. The use of these terms is solely for
convenience and clarity of description. The disclosure is not limited to any
particular type of device or equipment, and it should be understood that other
equivalent terms or variations thereof may be used interchangeably without
departing from the scope of the disclosure as defined herein.
[0031] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0032] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations can be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0033] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive in a manner similar
to the term “comprising” as an open transition word without precluding any
additional or other elements.
[0034] As used herein, an “electronic device”, or “portable electronic device”, or
“user device” or “communication device” or “user equipment” or “device” refers
to any electrical, electronic, electromechanical, and computing device. The user
device is capable of receiving and/or transmitting one or more parameters, performing
function/s, communicating with other user devices, and transmitting data to the
other user devices. The user equipment may have a processor, a display, a memory,
a battery, and an input-means such as a hard keypad and/or a soft keypad. The user
equipment may be capable of operating on any radio access technology including
but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low
Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi direct, etc. For instance,
the user equipment may include, but not limited to, a mobile phone, smartphone,
virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe
computer, or any other device as may be obvious to a person skilled in the art for
implementation of the features of the present disclosure.
[0035] Further, the user device and/or a system as described herein to implement
technical features as disclosed in the present disclosure may also comprise a
“processor” or “processing unit”, wherein processor refers to any logic circuitry for
processing instructions. The processor may be a general-purpose processor, a
special purpose processor, a conventional processor, a digital signal processor, a
plurality of microprocessors, one or more microprocessors in association with a
Digital Signal Processor (DSP) core, a controller, a microcontroller, Application
Specific Integrated Circuits, Field Programmable Gate Array circuits, any other
type of integrated circuits, etc. The processor may perform signal coding, data
processing, input/output processing, and/or any other functionality that enables the
working of the system according to the present disclosure. More specifically, the
processor is a hardware processor.
[0036] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0037] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0038] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to as a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0039] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0040] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information, or a combination thereof between units/components within the system
and/or connected with the system.
[0041] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a
method and system for load balancing of requests. More particularly, the present
disclosure provides a solution to improve the scalability for PS service to handle
the request. Further, the present disclosure provides a solution to distribute
incoming request load to provide high scalability of PS microservice instances.
[0042] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0043] Referring to FIG. 1, an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with
exemplary implementation of the present disclosure is illustrated. The MANO
architecture [100] is developed for managing telecom cloud infrastructure
automatically, managing design or deployment design, managing instantiation of a
network node(s) etc. The MANO architecture [100] deploys the network node(s) in
the form of Virtual Network Function (VNF) and Cloud-native/ Container Network
Function (CNF). The MANO architecture [100] is used to auto-instantiate the VNFs
into the corresponding environment of the present disclosure so that it could help
in onboarding other vendor(s) CNFs and VNFs to the platform.
[0044] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer, a network function virtualization (NFV) and software defined
network (SDN) design function module [104]; a platforms foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0045] The NFV and SDN design function module [104] further comprises a VNF
lifecycle manager (compute) [1042]; a VNF catalogue [1044]; a network services
catalogue [1046]; a network slicing and service chaining manager [1048]; a
physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
The VNF lifecycle manager (compute) [1042] is responsible for determining on
which server of the communication network the microservice will be instantiated.
The VNF lifecycle manager (compute) [1042] will manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
10 manager (compute) [1042] is responsible for determining which sequence to be
followed for executing the process. For e.g., in an AMF network function of the
communication network (such as a 5G network), sequence for execution of
processes P1 and P2 etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the
logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] is similarly used for the CNFs
lifecycle management.
[0046] The platforms foundation services module [106] further comprises a
microservices edge load balancer [1062]; an identify & access manager [1064]; a
command line interface (CLI) [1066]; a central logging manager [1068]; and an
event routing manager [1070]. The microservices edge load balancer [1062] is used
for maintaining the load balancing of the requests for the services. The identity &
access manager [1064] is used for logging purposes. The command line interface
(CLI) [1066] is used to provide commands to execute certain processes which
require changes during the run time. The central logging manager [1068] is
responsible for keeping the logs of every service. The logs are generated by the
MANO architecture [100]. The logs are used for debugging purposes. The event
routing manager [1070] is responsible for routing the events i.e., the application
programming interface (API) hits to the corresponding services.
[0047] The platform core services module [108] further comprises an NFV
infrastructure monitoring manager [1082]; an assure manager [1084]; a
performance manager [1086]; a policy execution engine [1088]; a capacity
monitoring manager (CP) [1090]; a release management (mgmt.) repository [1092];
a configuration manager & (Golden Configuration Template (GCT)) [1094]; an
NFV platform decision analytics [1096]; a platform NoSQL DB [1098]; a platform
schedulers and cron jobs (PSC) service [1100]; a VNF backup & upgrade manager
[1102]; a microservice auditor [1104]; and a platform operations, administration
and maintenance manager [1106]. The NFV infrastructure monitoring manager
[1082] monitors the infrastructure part of the NFs, for example, any metrics such as
CPU utilization by the VNF. The assure manager [1084] is responsible for
supervising the alarms the vendor is generating. The performance manager [1086]
is responsible for managing the performance counters. The policy execution engine
[1088] is responsible for managing all the policies. The capacity monitoring
manager (CP) [1090] is responsible for sending the request to the policy execution
engine [1088]. The capacity monitoring manager (CP) [1090] is capable of
monitoring usage of network resources such as but not limited to CPU utilization,
RAM utilization and storage utilization across all the instances of the virtual
infrastructure manager (VIM) or simply the NFV infrastructure monitoring
manager [1082]. The capacity monitoring manager (CP) [1090] is also capable of
monitoring said network resources for each instance of the VNF. The capacity
monitoring manager (CP) [1090] is responsible for constantly tracking the network
resource utilization. The release management (mgmt.) repository [1092] is
responsible for managing the releases and the images of all the vendor network
nodes. The configuration manager & (GCT) [1094] manages the configuration and
GCT of all the vendors. The NFV platform decision analytics [1096] helps in
deciding the priority of using the network resources. It is further noted that the
policy execution engine [1088], the configuration manager & (GCT) [1094] and the
NFV platform decision analytics [1096] work together. The platform NoSQL DB
[1098] is a database for storing all the inventory (both physical and logical) as well
as the metadata of the VNFs and CNF. The platform schedulers and cron jobs (PSC)
service [1100] schedules tasks such as, but not limited to, triggering of an event,
traversing the network graph etc. The VNF backup & upgrade manager [1102] takes
backup of the images, binaries of the VNFs and the CNFs and produces those
backups on demand in case of server failure. The microservice auditor [1104] audits
the microservices. For example, in a hypothetical case where instances not instantiated
by the MANO architecture [100] are using the network resources, the
microservice auditor [1104] audits and informs the same so that resources can be
released for services running in the MANO architecture [100], thereby assuring the
services only run on the MANO architecture [100]. The platform operations,
administration, and maintenance manager [1106] is used for newer instances that
15 are spawning.
[0048] The platform resource adapters and utilities module [112] further comprises
a platform external API adaptor and gateway [1122]; a generic decoder and indexer
(XML, CSV, JSON) [1124]; a docker service adaptor [1126]; an OpenStack API
20 adapter [1128]; and a NFV gateway [1130]. The platform external API adaptor and
gateway [1122] is responsible for handling the external services (to the MANO
architecture [100]) that require the network resources. The generic decoder and
indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in
the XML, CSV, JSON format. The docker service adaptor [1126] is the interface
25 provided between the telecom cloud and the MANO architecture [100] for
communication. The OpenStack API adapter [1128] is used to connect with the
virtual machines (VMs). The NFV gateway [1130] is responsible for providing the
path to each service going to/incoming from the MANO architecture [100].
[0049] Referring to FIG. 2, an exemplary block diagram of a computing device
[200] upon which the features of the present disclosure may be implemented, in
accordance with exemplary implementation of the present disclosure is illustrated.
In an implementation, the computing device [200] may implement a method for
load balancing of requests by utilising a system [300]. In another
implementation, the computing device [200] itself implements the method for
load balancing of requests using one or more units configured
within the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0050] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a processor [204]
coupled with bus [202] for processing information. The processor [204] may be, for
example, a general-purpose microprocessor. The computing device [200] may also
include a main memory [206], such as a random-access memory (RAM), or other
dynamic storage device, coupled to the bus [202] for storing information and
instructions to be executed by the processor [204]. The main memory [206] also
may be used for storing temporary variables or other intermediate information
during execution of the instructions to be executed by the processor [204]. Such
instructions, when stored in non-transitory storage media accessible to the processor
[204], render the computing device [200] into a special-purpose machine that is
customized to perform the operations specified in the instructions. The computing
device [200] further includes a read only memory (ROM) [208] or other static
storage device coupled to the bus [202] for storing static information and
instructions for the processor [204].
[0051] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0052] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware,
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0053] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic, or optical signals that carry digital data streams representing
various types of information.
[0054] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], a host [224], the local network [222] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0055] Referring to FIG. 3, an exemplary block diagram of a system for load balancing
of requests, in accordance with exemplary implementation of the present disclosure
is illustrated. The system comprises at least one facilitation unit [302], at least one
communication unit [304], and at least one distribution unit [306]. Also, all of the
components/ units of the system [300] are assumed to be connected to each other
unless otherwise indicated below. As shown in FIG. 3, all units shown within the
system [300] should also be assumed to be connected to each other. Also, in FIG. 3
only a few units are shown, however, the system [300] may comprise multiple such
units or the system [300] may comprise any such numbers of said units, as required
to implement the features of the present disclosure. Further, in an implementation,
the system [300] may reside in a server or the network entity or the system [300]
may be in communication with the network entity to implement the features as
disclosed in the present disclosure.
[0056] The system [300] is configured for load balancing of requests with the help
of the interconnection between the components/units of the system [300].
[0057] In operation, the facilitation unit [302] may facilitate registration of a
plurality of platform scheduler (PS) instances and a plurality of load balancer (LB)
instances by configuring operation and management (OAM) server details.
[0058] As would be understood, the PS instance may be a centralised platform
which helps to create and schedule jobs on behalf of other microservices. Also, the
microservice is a small, loosely coupled distributed service and each microservice
is designed to perform a specific function. Further, each microservice may be
developed and deployed independently. Further, the microservice breaks a service
into small and manageable components of services. Further, as would be
understood, LB instance may be a platform that may distribute the traffic among
the plurality of the microservices to maintain the optimal load balance on each of
the microservices from the plurality of the microservices in the network
environment, based on the details of each microservice. Furthermore, the OAM
server may oversee, control, and maintain the operations of the plurality of
microservices in the network environment. The OAM may further keep track of the
health of the plurality of microservices registered at the OAM. Moreover, the OAM
server details may include the information related to the OAM such as, but not
limited to, server name, Internet Protocol (IP) address, port address, etc. associated
with the OAM server.
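By way of a hedged, non-limiting sketch in Python, the OAM server details configured at the instances might take the following shape; the field names and values below are illustrative assumptions and are not prescribed by the present disclosure.

    # Illustrative sketch only: one possible shape of the OAM server details
    # described above (server name, IP address, port address); all field
    # names and values are assumptions for the purpose of explanation.
    oam_server_details = {
        "server_name": "oam-primary",  # hypothetical OAM server name
        "ip_address": "10.0.0.10",     # Internet Protocol (IP) address
        "port": 8443,                  # port address of the OAM server
    }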
[0059] Continuing further, the PS instance and the LB instance may register
themselves on the OAM server for centralized monitoring, management and proper
functioning of the PS instance and the LB instance registered at the OAM server.
Also, the registration of the plurality of PS instances and the at least one LB instance
facilitates scalability of PS service.
[0060] Further, in an exemplary implementation, the plurality of PS instances and
the at least one LB instance may, in order to register at the OAM server, send a request
in the form of, say, an HTTP request to the OAM server for the registration. The request
may include details such as, but not limited to, hostname, IP address, port number,
health check information, etc. associated with the plurality of PS instances and the
at least one LB instance. Furthermore, the OAM server may process the request
and, after processing the request, send a response in the form of, say, a 200 OK for
successful registration of the plurality of PS instances and the at least one LB instance.
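As a non-limiting illustration of this registration exchange, the sketch below sends such an HTTP request in Python; the endpoint URL, the field names, and the use of the third-party requests library are assumptions for illustration only, not the claimed implementation.

    import requests  # third-party HTTP client, used here purely for illustration

    # Details a PS or LB instance may send while registering at the OAM server
    # (hostname, IP address, port number and health check information, as above).
    registration = {
        "hostname": "ps-instance-1",  # hypothetical instance name
        "ip": "10.0.0.21",
        "port": 8080,
        "health_check": {"path": "/health", "interval_seconds": 30},
        "role": "PS",  # "PS" or "LB"
    }

    # Hypothetical OAM registration endpoint; the real URL is deployment-specific.
    response = requests.post("http://oam.example.local/register",
                             json=registration, timeout=5)

    # A 200 OK response indicates successful registration, as described above.
    if response.status_code == 200:
        print("registered with the OAM server")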
[0061] Continuing further, after successful registration, the communication unit
[304] may receive one or more incoming requests and one or more outgoing
requests, wherein each of the incoming requests and the outgoing request is
associated with a request context. Further, the one or more incoming requests
comprise creating and scheduling task requests. Furthermore, the request context
associated with each of the one or more incoming requests comprises at least an
Application Protocol Interface (API) creation, a File Transfer Protocol (FTP), an
event creation and a query.
[0062] In an implementation, the create task request comprises a task schedule for
the task. Further, the create task request comprises parameters such as Task type,
Task frequency, Task periodicity, Task counter and Task information. The Task type
may be for example, an API creation, an FTP, an EVENT creation, or a QUERY.
The Task frequency can be periodic such as done daily, weekly, monthly, or one-time execution as per the requirement of the operations team. The Task periodicity
may define the time period when the task is to be scheduled. The Task counter
defines the number of task notifications. The Task information defines details
related to resources such as name, identifier, address, and threshold value of usage.
An example of a task may be, say, creating an event to clear cache weekly. The
PSC service [1100] may include a network component that works as a task manager
for managing the sequence of network tasks. The PSC service [1100] may also
employ a fixed queueing algorithm for governing the scheduling of the task. The
create task request is responsible for defining the initial stages or actions for the
task execution.
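As a hedged illustration of the parameters listed above, a create task request body might be structured as in the following Python sketch; all field names and values are assumptions for explanation only.

    # Minimal sketch of a create task request, assuming a JSON-like structure
    # built from the parameters described above (Task type, Task frequency,
    # Task periodicity, Task counter, Task information); names are illustrative.
    create_task_request = {
        "task_type": "EVENT",             # API creation, FTP, EVENT creation or QUERY
        "task_frequency": "weekly",       # daily, weekly, monthly or one-time
        "task_periodicity": "SUN 02:00",  # time period when the task is scheduled
        "task_counter": 1,                # number of task notifications
        "task_info": {                    # details of the resource concerned
            "name": "cache-cleaner",
            "identifier": "task-001",
            "address": "10.0.0.5",
            "usage_threshold": 0.8,
        },
    }
    # Mirrors the example above: an event created to clear cache weekly.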
[0063] In another implementation, the scheduling task may include assigning or
prioritizing network resources to a core network component (such as network node/
function) for carrying out the execution of the intended objectives of the network
component. The network resources may include processors, network links,
memories etc. The scheduling of tasks in the communication network helps in
minimizing network problems such as delays and allows recurring tasks, such as
backups, data synchronization and maintenance jobs, to be automated.
[0064] Further, the one or more outgoing requests comprise notifications associated
with the creating and scheduling task requests. The notification associated with the
one or more outgoing requests may be triggered to keep the users or the operation
team informed about the status of the task such as, but not limited to, task created,
task scheduled, task executed, etc. Also, the request context associated with each of
the one or more outgoing requests comprises at least a response code and the
identity of the PS to which the response code is to be delivered. Furthermore, the
response code may include responses such as, but not limited to, 200 OK for
successful requests, 201 Created for successful creation of tasks, etc. The system
is thus able to determine the requesting microservice to which a particular response
code belongs, since the response code itself provides only notification-type information
(for example, 200 OK) and does not identify the associated microservice.
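A minimal sketch of this outgoing request context, under assumed field names, pairs the response code with the identity of the PS instance to which it is to be delivered:

    # Sketch of an outgoing request context: the response code alone (e.g. 200 OK)
    # does not identify a microservice, so the context also carries the identity of
    # the PS instance the response is delivered to; field names are illustrative.
    outgoing_context = {
        "response_code": 201,           # e.g. 201 Created for successful task creation
        "ps_identity": "ps-instance-2", # PS instance the response belongs to
    }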
[0065] Thereafter, the distribution unit [306] may distribute, via the at least one LB
instance, the data traffic, i.e., the one or more incoming requests and the one or more
outgoing requests among the plurality of PS instances, wherein the distribution is
based on the request context associated with each of the incoming request and the
outgoing request. Also, the one or more incoming requests and the one or more
outgoing requests are distributed among the plurality of PS instances based on a
routing policy configured at the at least one LB instance. Further, the at least one
LB instance may distribute the one or more incoming requests and the one or more
30 outgoing requests among the plurality of PS instances based on the routing table.
[0066] As would be understood, the routing table of the at least one LB instance is
a set of rules that may define how the incoming traffic, i.e., the one or more incoming
requests and the one or more outgoing requests, may be distributed among the
plurality of PS instances. Moreover, the routing table may provide information to
route a particular request to an appropriate PS instance based on various factors
such as, but not limited to, health of the PS instance, load on the PS instance etc.
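The following Python sketch illustrates one way such a routing policy could be evaluated at an LB instance; the data structures, the request-context labels, and the least-loaded selection rule are simplifying assumptions, not the claimed implementation.

    # Simplified sketch of request-context based routing at an LB instance.
    # PS instance records such as might be learned via OAM registration;
    # all structures and the selection rule are illustrative assumptions.
    ps_instances = [
        {"id": "ps-1", "healthy": True,  "load": 0.40, "contexts": {"API", "FTP"}},
        {"id": "ps-2", "healthy": True,  "load": 0.10, "contexts": {"EVENT", "QUERY"}},
        {"id": "ps-3", "healthy": False, "load": 0.05, "contexts": {"API", "EVENT"}},
    ]

    def route(request_context: str) -> str:
        """Pick a PS instance for a request context (API, FTP, EVENT or QUERY):
        keep healthy instances that serve the context, then choose the least
        loaded one, mirroring the health and load factors described above."""
        candidates = [p for p in ps_instances
                      if p["healthy"] and request_context in p["contexts"]]
        if not candidates:
            raise RuntimeError("no healthy PS instance for context " + request_context)
        return min(candidates, key=lambda p: p["load"])["id"]

    print(route("EVENT"))  # -> "ps-2"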
[0067] The PS and LB are communicatively coupled using the PS_LB interface.
The PS_LB interface is established after registration of PS instances with the LB
instance. The PS_LB interface can comprise at least one of http and web-socket
based connections. In an embodiment, the PS_LB interface is configured to
facilitate exchange of information using hypertext transfer protocol (http) rest
application programming interface (API). In an embodiment, the http rest API is
used in conjunction with JSON and/or XML communication media. In another
embodiment, the PS_LB interface is configured to facilitate exchange of
information by establishing a web-socket connection between the PS and the LB.
A web-socket connection may involve establishing persistent connectivity
between the PS and the LB. An example of the web-socket based communication
includes, without limitation, a transmission control protocol (TCP) connection. In
such a connection, information, such as operational status, health, etc. of different
components may be exchanged through the interface using a ping-pong based
communication.
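By way of a hedged sketch of such a web-socket connection with ping-pong based health exchange, the PS side might look as follows in Python; the endpoint URL, the status payload, and the use of the third-party websockets package are assumptions for illustration.

    # Minimal sketch of a persistent web-socket connection on the PS_LB
    # interface with ping-pong based liveness checking; the URL and payload
    # shapes are illustrative assumptions.
    import asyncio
    import json
    import websockets  # third-party web-socket library, used for illustration

    async def ps_side():
        # Hypothetical LB endpoint for the PS_LB interface.
        async with websockets.connect("ws://lb.example.local:9000/ps_lb") as ws:
            # Exchange operational status over the persistent connection.
            await ws.send(json.dumps({"id": "ps-1", "status": "UP", "load": 0.4}))
            # Ping-pong based health check: the waiter resolves on the pong.
            pong_waiter = await ws.ping()
            await pong_waiter
            print("LB answered the ping; PS_LB link is healthy")

    asyncio.run(ps_side())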
[0068] Referring to FIG. 4, an exemplary flow diagram of a method [400] for load
balancing of requests, in accordance with exemplary implementation of the present
disclosure is illustrated. In an implementation, the method [400] is performed by the
system [300]. Also, as shown in FIG. 4, the method [400] initiates at step [402].
[0069] At step [404], the method [400] comprises, facilitating, by a facilitation unit
[302], a registration of a plurality of platform scheduler (PS) instances and a
plurality of load balancer (LB) instances based on configuring operation and
management (OAM) server details.
[0070] As would be understood, the PS instance may be a centralised platform
which helps to create and schedule jobs on behalf of other microservices. Also, the
microservice is a small, loosely coupled distributed service and each microservice
is designed to perform a specific function. Further, each microservice may be
developed and deployed independently. Further, the microservice breaks a service
into small and manageable components of services. Further, as would be
understood, LB instance may be a platform that may distribute the traffic among
the plurality of the microservices to maintain the optimal load balance on each of
the microservices from the plurality of the microservices in the network
environment, based on the details of each microservice. Furthermore, the OAM
server may oversee, control, and maintain the operations of the plurality of
microservices in the network environment. The OAM may further keep track of the
health of the plurality of microservices registered at the OAM. Moreover, the OAM
server details may include the information related to the OAM such as, but not
limited to, server name, Internet Protocol (IP) address, port address, etc. associated
with the OAM server.
[0071] Continuing further, the PS instance and the LB instance may register
themselves on the OAM server for centralized monitoring, management and proper
functioning of the PS instance and the LB instance registered at the OAM server.
Also, the registration of the plurality of PS instances and the at least one LB instance
facilitates scalability of PS service.
[0072] Further, in an exemplary implementation, the plurality of PS instances and
the at least one LB instance may, in order to register at the OAM server, send a request
in the form of, say, an HTTP request to the OAM server for the registration. The request
30 may include details such as, but not limited to, hostname, IP address, port number,
health check information, etc. associated with the plurality of PS instances and the
at least one LB instance. Furthermore, the OAM server may process the request and
after processing the request, send a response in the form of, say, a 200 OK for successful
registration of the plurality of PS instances and the at least one LB instance.
[0073] Next, at step [406], the method [400] comprises receiving, via a
communication unit [304], one or more incoming requests and one or
more outgoing requests, wherein each of the incoming request and the outgoing
request is associated with a request context. Further, the one or more incoming
requests comprise creating and scheduling task requests. Furthermore, the request
context associated with each of the one or more incoming requests comprises at
least an Application Protocol Interface (API) creation, a File Transfer Protocol
(FTP), an event creation and a query.
[0074] Continuing further, in an implementation, the create task request comprises
a task schedule for the task. Further, the create task request comprises parameters
such as Task type, Task frequency, Task periodicity, Task counter and Task
information. The Task type may be for example, an API creation, an FTP, an
EVENT creation, or a QUERY. The Task frequency can be periodic such as done
daily, weekly, monthly, or one-time execution as per the requirement of the
operations team. The Task periodicity may define the time period when the task is
to be scheduled. The Task counter defines the number of task notifications. The
Task information defines details related to resources such as name, identifier,
address, and threshold value of usage. An example of a task may be, say, creating
an event to clear cache weekly. The PSC service [1100] may include a network
component that works as a task manager for managing the sequence of network
tasks. The PSC service [1100] may also employ a fixed queueing algorithm for
governing the scheduling of the task. The create task request is responsible for
defining the initial stages or actions for the task execution.
[0075] Continuing further, in another implementation, the scheduling of a task may
involve assigning or prioritizing network resources to a core network component
(such as network node/ function) for carrying out the execution of the intended
objectives of the network component. The network resources may include
processors, network links, memories etc. The scheduling of tasks in the
communication network helps in minimizing network problems such as delays and
allows recurring tasks, such as backups, data synchronization and
maintenance jobs, to be automated.
[0076] Further, the one or more outgoing requests comprise notifications associated
with the creating and scheduling task requests. The notification associated with the
one or more outgoing requests may be triggered to keep the users or the operation
team informed about the status of the task such as, but not limited to, task created,
task scheduled, task executed, etc. Also, the request context associated with each of
the one or more outgoing requests comprises at least a response code and the
identity of the PS to which the response code is to be delivered. Furthermore, the
response code may include responses such as, but not limited to, 200 OK for
successful requests, 201 Created for successful creation of tasks, etc. The system
is thus able to determine the requesting microservice to which a particular response
code belongs, since the response code itself provides only notification-type information
(for example, 200 OK) and does not identify the associated microservice.
[0077] Furthermore, at step [408], the method [400], comprises distributing, by a
distribution unit [306] via the at least one LB instance, the one or more incoming
requests and the one or more outgoing requests among the plurality of PS instances,
wherein the distribution is based on the request context associated with each of the
incoming request and the outgoing request. Also, the one or more incoming requests
and the one or more outgoing requests are distributed among the plurality of PS
instances based on a routing policy configured at the at least one LB instance.
Further, the at least one LB instance may distribute the one or more incoming
requests and the one or more outgoing requests among the plurality of PS instances
based on the routing table.
[0078] As would be understood, the routing table of the at least one LB instance is
a set of rules that may define how the incoming traffic, i.e., the one or more incoming
requests and the one or more outgoing requests, may be distributed among the
plurality of PS instances. Moreover, the routing table may provide information to
route a particular request to an appropriate PS instance based on various factors
such as, but not limited to, health of the PS instance, load on the PS instance etc.
[0079] Thereafter, at step [410], the method [400] terminates.
[0080] Referring to FIG. 5, an exemplary system architecture [500] of a
platform scheduler and cron jobs (PSC/PS) microservice, in accordance with
exemplary implementations of the present disclosure, is illustrated. As shown in FIG. 5, the
system architecture [500] of the PSC microservice comprises a CRON & Schedulers
Management [502], a Cron Management [504], a Task Management [506], an FCAP
management [508], an Event Handling [510], an HA and Fault Tolerance [512], a
Data Modelling Framework [514], an ES-DB Client [516], an ERM [518], an ES [520],
an ELB [522], a VNF manager [524], VM [526] (VM1 [526a], ... VMn [526n]), a
GUI Interface [528], a CLI interface [530] and EDGE-LB [532].
[0081] CRON & Schedulers Management [502] is used for scheduling of the cron
jobs.
[0082] Cron Management [504] is used to manage all the active and inactive crons
created at PSC end.
[0083] Task Management [506] is used to manage all the active and inactive tasks
created at PSC end.
[0084] FCAP management [508] manages all the counters and alarms created at
PSC.
[0085] Event Handling [510] manages all the events between microservices.
[0086] HA and Fault Tolerance [512] handles all the requests such that, if one running
instance goes down, another active instance will complete that request.
[0087] Data Modelling Framework [514] is used to manage and check incoming
and outgoing data formats at the PSC end.
[0088] ERM [518] is the Event Routing Manager, which is used to send the requests
from a publisher microservice to a subscriber microservice.
[0089] ELB [522] is the Edge Load Balancer, which is used to send the requests from
the active instances of one microservice to another microservice.
[0090] ES-DB Client [516] and ES [520] manage task related data for the
microservices at PSC end.
[0091] VNF manager [524] manages the one or more VM [526] in a virtual
environment in a network functions virtualization (NFV) architecture or virtualized
infrastructure.
[0092] GUI Interface [528] is used to communicate with ERM [518] for sending
the request to the microservices.
[0093] CLI interface [530] is used to communicate or trigger the EDGE-LB [532]
for managing the load of the microservices.
[0094] The present disclosure may further relate to a non-transitory computer
readable storage medium storing one or more instructions for load balancing of
requests, the instructions include executable code which, when executed by one or
more units of a system [300], causes a facilitation unit [302] of the system [300] to
facilitate a registration of a plurality of platform scheduler (PS) instances and a
plurality of load balancer (LB) instances by configuring operation and
management (OAM) server details. Further, the executable code when executed
causes a communication unit [304] of the system [300] to receive one or more
incoming requests and one or more outgoing requests, wherein each of the incoming
request and the outgoing request is associated with a request context. Furthermore,
the executable code when executed causes a distribution unit [306] of the system
[300] to distribute, via the at least one LB instance, the one or more incoming
requests and the one or more outgoing requests among the plurality of PS instances,
wherein the distribution is based on the request context associated with each of the
incoming request and the outgoing request.
[0095] As is evident from the above, the present disclosure provides a technically
advanced solution for load balancing of requests. More particularly, the present
solution improves the scalability of the PS service to handle the requests. Further, the
present solution distributes the incoming request load to provide high scalability of
PS microservice instances.
[0096] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.
[0097] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We Claim:
1. A system for load balancing of requests, the system comprising:
- a facilitation unit configured to facilitate a registration of a plurality of
platform scheduler (PS) instances and a plurality of load balancer (LB)
instances by configuring operation and management (OAM) server
details;
- a communication unit connected at least with the facilitation unit,
wherein the communication unit is configured to receive one or more
incoming requests and one or more outgoing requests, wherein each of
the incoming request and the outgoing request is associated with a
request context; and
- a distribution unit connected at least with the communication unit,
wherein the distribution unit is configured to distribute, via at least one
LB instance, the one or more incoming requests and the one or more
outgoing requests among the plurality of PS instances, wherein the
distribution is based on the request context associated with each of the
incoming request and the outgoing request.
2. The system as claimed in claim 1, wherein the one or more incoming
requests and the one or more outgoing requests are distributed among the
plurality of PS instances based on a routing policy configured at the at least
one LB instance.
3. The system as claimed in claim 1, wherein the registration of the plurality
of PS instances and the at least one LB instance facilitates scalability of PS
service.
4. The system as claimed in claim 1, wherein the one or more incoming
requests comprise creating and scheduling task requests.
5. The system as claimed in claim 4, wherein the one or more outgoing
requests comprise notifications associated with the creating and scheduling
task requests.
6. The system as claimed in claim 1, wherein the request context associated
with each of the one or more incoming requests comprises at least an
Application Protocol Interface (API) creation, a File Transfer Protocol
(FTP), an event creation and a query.
7. The system as claimed in claim 1, wherein the request context associated
with each of the one or more outgoing requests comprises at least a response
code.
8. The system as claimed in claim 1, wherein the PS and LB are
communicatively coupled using the PS_LB interface.
9. A method for load balancing of requests, the method comprising:
- facilitating, by a facilitation unit, a registration of a plurality of platform
scheduler (PS) instances and a plurality of load balancer (LB) instances
based on configuring operation and management (OAM) server
details;
- receiving, via a communication unit, one or more incoming requests
and one or more outgoing requests, wherein each of the incoming
request and the outgoing request is associated with a request context;
- distributing, by a distribution unit via at least one LB instance, the one
or more incoming requests and the one or more outgoing requests
among the plurality of PS instances, wherein the distribution is based
on the request context associated with each of the incoming request and
the outgoing request.
10. The method as claimed in claim 9, wherein the one or more incoming
requests and the one or more outgoing requests are distributed among the
plurality of PS instances based on a routing policy configured at the at least
one LB instance.
11. The method as claimed in claim 9, wherein the registration of the plurality
of PS instances and the at least one LB instance facilitates scalability of PS
service.
12. The method as claimed in claim 9, wherein the one or more incoming
requests comprise creating and scheduling task requests.
13. The method as claimed in claim 12, wherein the one or more outgoing
requests comprise notifications associated with the creating and scheduling
task requests.
14. The method as claimed in claim 9, wherein the request context associated
with each of the one or more incoming requests comprises at least an
Application Protocol Interface (API) creation, a File Transfer Protocol
(FTP), an event creation and a query.
15. The method as claimed in claim 9, wherein the request context associated
with each of the one or more outgoing requests comprises at least a response
code.
16. The method as claimed in claim 9, wherein the PS and LB are
communicatively coupled using the PS_LB interface.