
Method And System Of Load Balancing Between Capacity Manager Instances

Abstract: The present disclosure relates to a method and a system for load balancing between capacity manager (CM) instances. The disclosure encompasses receiving, from an operation and management (OAM) unit [306], health status information of a plurality of capacity manager (CM) instances [308]. It may be noted that the health status information is detected by the OAM unit [306] based on monitoring and is indicative of a healthy or a malfunctioning CM instance. The present disclosure then encompasses distributing a data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance. [FIG. 4]


Patent Information

Application #
Filing Date
19 September 2023
Publication Number
14/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM OF LOAD BALANCING BETWEEN
CAPACITY MANAGER INSTANCES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM OF LOAD BALANCING BETWEEN
CAPACITY MANAGER INSTANCES
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of
network performance management systems. More particularly, embodiments of the
present disclosure relate to load balancing between capacity manager (CM)
instances to improve network performance.
BACKGROUND
[0002] The following description of related art is intended to provide background
information pertaining to the field of the disclosure. This section may include
certain aspects of the art that may be related to various features of the present
disclosure. However, it should be appreciated that this section should be used only to
enhance the understanding of the reader with respect to the present disclosure, and
not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. The third-generation
(3G) technology marked the introduction of high-speed internet access, mobile
video calling, and location-based services. The fourth-generation (4G) technology
revolutionized wireless communication with faster data speeds, better network
coverage, and improved security. Currently, the fifth-generation (5G) technology is
being deployed, promising even faster data speeds, low latency, and the ability to
connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable
of delivering more services to its users.
[0004] Further, in wireless communication technology, a capacity manager is a
critical component responsible for optimizing resource utilization and ensuring smooth
operation of the network. The capacity manager helps in automating resource
allocation, monitoring, and optimization processes. For enhancing scalability,
reliability, and fault tolerance, multiple instances of capacity managers may be used.
Due to excessive load at the capacity manager instances, various problems may
arise, such as performance degradation, resource exhaustion, inaccurate decision
making, and system instability. The performance degradation may be in terms of
increased latency, reduced throughput, and deteriorated quality of service (QoS).
The resource exhaustion may be in terms of CPU overload, memory constraints, and
storage limitations. The inaccurate decision making may be in terms of incorrect
resource allocation, delayed response, and congestion management failures.
[0005] Hence, load balancing is required to distribute workloads evenly
among these instances of capacity managers. To prevent failures
caused by requests failing due to either excessive traffic on a specific instance or
instances in poor health, load balancing is required for managing the instances of
capacity managers.
[0006] Accordingly, there exists a need for a solution for routing the traffic load
among the capacity managers for effective and efficient load balancing. Further,
there is a need for a solution having the ability to support HTTP/HTTPS in a parallel
configuration. Further, there exists a need for a solution which ensures routing
client requests across all servers in a manner that maximizes speed and capacity
utilization. Further, there exists a need for a solution which utilizes header-based
routing, which saves time and database hits.
[0007] The present disclosure provides a solution to achieve load balancing
between capacity manager (CM) instances.
OBJECTS OF THE DISCLOSURE
[0008] This section is provided to introduce certain objects of the present disclosure
in a simplified form that are further described below in the description. In order to
overcome at least a few problems associated with the known solutions as provided
in the previous section, an object of the present disclosure is to substantially reduce
the limitations and drawbacks of the prior art as described hereinabove.
[0009] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0010] It is an object of the present disclosure to ensure seamless interaction
between a capacity manager (CM) instance and a Load Balancer (LB).
[0011] It is another object of the present disclosure to route client requests across
all servers in a manner that maximizes speed and capacity utilization.
[0012] It is yet another object of the present disclosure to provide header-based
routing which saves time and database hits.
[0013] Yet another object of the present disclosure is to provide a configurable
support for HTTP/HTTPS in parallel.
SUMMARY OF THE DISCLOSURE
[0014] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0015] An aspect of the present disclosure may relate to a method of load balancing
between capacity manager (CM) instances. The method comprises receiving, by a
transceiver unit at a load balancer (LB) unit, from an operation and management
(OAM) unit, health status information of a plurality of capacity manager (CM)
instances. The health status information of the plurality of CM instances is detected
by the OAM unit based on a monitoring of the health status information of the
plurality of capacity manager (CM) instances. The health status information of each
of the plurality of CM instances is indicative of one of a healthy CM instance and a
malfunctioning CM instance. The method further comprises distributing, by a
processing unit at the LB unit, a data traffic from the malfunctioning CM instance
among the healthy CM instances. The distribution is based on receiving, from one
or more of the healthy CM instances, an indication corresponding to taking an
ownership of at least a part of the data traffic from the malfunctioning CM instance.
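The failover step described in paragraph [0015] can be reduced to a purely illustrative sketch; the class names, the even spread of transactions, and the assumption that every healthy instance grants its ownership indication are illustrations only and are not part of the disclosure:

```python
# Illustrative sketch of redistributing traffic from a malfunctioning CM
# instance to healthy CM instances. All names are hypothetical.

class CMInstance:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.traffic = []          # transactions currently owned by this instance

class LoadBalancer:
    def __init__(self, instances):
        self.instances = instances

    def redistribute(self, failed):
        """Move the failed instance's traffic onto healthy instances
        (ownership indication assumed granted by each healthy instance)."""
        healthy = [cm for cm in self.instances if cm.healthy and cm is not failed]
        for i, txn in enumerate(failed.traffic):
            healthy[i % len(healthy)].traffic.append(txn)
        failed.traffic = []        # the malfunctioning instance no longer owns traffic

cm1, cm2 = CMInstance("cm1"), CMInstance("cm2")
cm3 = CMInstance("cm3", healthy=False)
cm3.traffic = ["t1", "t2", "t3"]
lb = LoadBalancer([cm1, cm2, cm3])
lb.redistribute(cm3)
```

In this sketch the ownership indication is modelled implicitly; a fuller model would have each healthy instance explicitly signal willingness before transactions are assigned to it.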
[0016] In an exemplary aspect of the present disclosure, the method comprises
distributing, by the processing unit, the data traffic based on a timeout indication
related to serving a transaction.
[0017] In an exemplary aspect of the present disclosure, post the receiving, from
one or more of the healthy CM instances, of the indication corresponding to taking the
ownership of at least a part of the data traffic from the malfunctioning CM instance,
the method comprises fetching, by the one or more of the healthy CM instances,
state information of an incomplete transaction being served by the malfunctioning
CM instance.
[0018] In an exemplary aspect of the present disclosure, the indication is based on
a priority assigned by the OAM unit to each of the healthy CM instances.
[0019] In an exemplary aspect of the present disclosure, the method comprises
distributing, by the processing unit, the data traffic based on at least one of a
header-based routing procedure and a context-based routing procedure.
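The header-based routing mentioned above — picking the backend from a request header instead of hitting a database — might look like the following minimal sketch; the header name `X-CM-Instance` and the address table are assumptions for illustration, not specified by the disclosure:

```python
# Illustrative header-based routing table. The header name and addresses
# are hypothetical placeholders.

ROUTES = {"cm1": "10.0.0.1:8080", "cm2": "10.0.0.2:8080"}
DEFAULT = "cm1"

def route(headers):
    """Select a CM instance address directly from the request headers,
    falling back to a default when the header is absent or unknown."""
    target = headers.get("X-CM-Instance", DEFAULT)
    return ROUTES.get(target, ROUTES[DEFAULT])
```

Because the routing decision is made entirely from the header, no database lookup is needed on the hot path, which is the time saving the disclosure refers to.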
[0020] In an exemplary aspect of the present disclosure, the method comprises
distributing, by the processing unit, the data traffic among the plurality of CM
instances in a round robin manner.
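A round-robin distribution such as the one referred to in paragraph [0020] can be sketched as below; the instance names are placeholders:

```python
import itertools

# Minimal round-robin distributor over CM instances (illustrative sketch).

class RoundRobin:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)  # endless repeating sequence

    def next_instance(self):
        """Return the next CM instance in strict rotation."""
        return next(self._cycle)

rr = RoundRobin(["cm1", "cm2", "cm3"])
picks = [rr.next_instance() for _ in range(5)]
```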
[0021] In an exemplary aspect of the present disclosure, the method comprises
maintaining the health status information of each of the plurality of CM instances in
at least one of a local cache associated with each of the plurality of CM
instances, and a database stored in a storage unit.
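The local-cache-plus-database arrangement of paragraph [0021] could be sketched as below; the cache TTL, the in-memory stand-in for the database, and the field layout are assumptions, not specified by the disclosure:

```python
import time

# Illustrative health-status store: a local cache with a time-to-live,
# backed by a persistent store (modelled here as a dict).

class HealthStore:
    def __init__(self, ttl=5.0):
        self._db = {}        # stand-in for the database in the storage unit
        self._cache = {}     # local cache: name -> (status, expiry time)
        self._ttl = ttl

    def set_status(self, name, status):
        """Write-through: update both the database and the local cache."""
        self._db[name] = status
        self._cache[name] = (status, time.monotonic() + self._ttl)

    def get_status(self, name):
        """Serve from the local cache when fresh; otherwise fall back
        to the database and repopulate the cache."""
        entry = self._cache.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        status = self._db.get(name)
        if status is not None:
            self._cache[name] = (status, time.monotonic() + self._ttl)
        return status

store = HealthStore()
store.set_status("cm1", "healthy")
store.set_status("cm2", "malfunctioning")
```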
[0022] In an exemplary aspect of the present disclosure, the method comprises
transmitting, by the transceiver unit, an acknowledgement to the plurality of CM
instances. The acknowledgement is indicative of distribution of data traffic from
the malfunctioning CM instance among the healthy CM instances.
[0023] In an exemplary aspect of the present disclosure, the method comprises
receiving, by the transceiver unit at the LB unit from the OAM unit, an alert related
to one of: an addition of a CM instance and a deletion of a CM instance among the
plurality of the CM instances.
[0024] In another exemplary aspect of the present disclosure, the data traffic is
distributed, by the processing unit at the LB unit, from the malfunctioning CM
instance, among the healthy CM instances, over a CM_LB interface.
[0025] Another aspect of the present disclosure may relate to a system of load
balancing between capacity manager (CM) instances. The system comprises a load
balancer (LB) unit. The load balancer unit further comprises a transceiver unit
configured to receive, from an operation and management (OAM) unit, health
status information of a plurality of capacity manager (CM) instances. The health
status information of the plurality of CM instances is detected by the OAM unit
based on a monitoring of the health status information of the plurality of capacity
manager (CM) instances. The health status information of each of the plurality of
CM instances is indicative of one of a healthy CM instance and a malfunctioning CM
instance. The load balancer unit further comprises a processing unit configured to
distribute a data traffic from the malfunctioning CM instance among the healthy
CM instances. The distribution is based on receiving, from one or more of the
healthy CM instances, an indication corresponding to taking an ownership of at
least a part of the data traffic from the malfunctioning CM instance.
[0026] Another aspect of the present disclosure may relate to a non-transitory
computer-readable storage medium storing instructions for load balancing between
capacity manager (CM) instances, the storage medium comprising executable code
which, when executed by one or more units of a system, causes a transceiver unit
to receive, from an operation and management (OAM) unit, health status
information of a plurality of capacity manager (CM) instances. The health status
information of the plurality of CM instances is detected by the OAM unit based on
a monitoring of the health status information of the plurality of capacity manager
(CM) instances. The health status information of each of the plurality of CM
instances is indicative of one of a healthy CM instance and a malfunctioning CM
instance. Further, the executable code, when executed, further causes a processing unit
to distribute a data traffic from the malfunctioning CM instance among the healthy
CM instances. The distribution is based on receiving, from one or more of the
healthy CM instances, an indication corresponding to taking an ownership of at
least a part of the data traffic from the malfunctioning CM instance.
DESCRIPTION OF DRAWINGS
[0027] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Some drawings may indicate the components using block diagrams and
may not represent the internal circuitry of each component. It will be appreciated
by those skilled in the art that disclosure of such drawings includes disclosure of
electrical components, electronic components or circuitry commonly used to
implement such components.
[0028] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture/platform, in accordance with
exemplary implementations of the present disclosure.
[0029] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementations of the present disclosure.
[0030] FIG. 3 illustrates an exemplary block diagram of a system for load
balancing between capacity manager (CM) instances, in accordance with
exemplary implementations of the present disclosure.
[0031] FIG. 4 illustrates an exemplary method flow diagram for load balancing
between capacity manager (CM) instances, in accordance with the exemplary
embodiments of the present disclosure.
[0032] FIG. 5 illustrates another exemplary method flow diagram for ensuring
seamless interaction between a capacity monitoring manager (CMM) and a load balancer (LB),
in accordance with exemplary embodiments of the present disclosure.
[0033] FIG. 6 illustrates an exemplary system architecture for load balancing
between capacity manager (CM) instances, in accordance with the exemplary
embodiments of the present disclosure.
[0034] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0035] In the following description, for the purposes of explanation, various
specific details are set forth to provide a thorough understanding of embodiments
of the present disclosure. It will be apparent, however, that embodiments of the
present disclosure may be practiced without these specific details. Several features
described hereafter can each be used independently of one another or with any
combination of other features. An individual feature may not address any of the
problems discussed above or might address only some of the problems discussed
above. Some of the problems discussed above might not be fully addressed by any
of the features described herein. Example embodiments of the present disclosure
are described below, as illustrated in various drawings in which like reference
numerals refer to the same parts throughout the different drawings.
[0036] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0037] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0038] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations can be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0039] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive in a manner similar
to the term “comprising” as an open transition word without precluding any
additional or other elements.
[0040] Further, the user device and/or a system as described herein to implement
technical features as disclosed in the present disclosure may also comprise
a “processor” or “processing unit”, wherein processor refers to any logic circuitry
for processing instructions. The processor may be a general-purpose processor, a
special purpose processor, a conventional processor, a digital signal processor, a
plurality of microprocessors, one or more microprocessors in association with a
Digital Signal Processor (DSP) core, a controller, a microcontroller, Application
Specific Integrated Circuits, Field Programmable Gate Array circuits, any other
type of integrated circuits, etc. The processor may perform signal coding, data
processing, input/output processing, and/or any other functionality that enables the
working of the system according to the present disclosure. More specifically, the
processor is a hardware processor.
[0041] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor,
a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0042] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0043] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0044] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture/platform [100], in
accordance with exemplary implementations of the present disclosure. The MANO
architecture [100] is developed for managing telecom cloud infrastructure
automatically, managing design or deployment design, managing instantiation of
network node(s)/ service(s) etc. The MANO architecture [100] deploys the network
node(s) in the form of Virtual Network Function (VNF) and Cloud-native/
Container Network Function (CNF). The system may comprise one or more
components of the MANO architecture [100]. The MANO architecture [100] is
used to auto-instantiate the VNFs into the corresponding environment of the present
disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to
the platform.
[0045] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer [102], a network function virtualization (NFV) and software defined
network (SDN) design function module [104], a platforms foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112]. All the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0046] The NFV and SDN design function module [104] comprises a VNF
lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services
catalogue [1046], a network slicing and service chaining manager [1048], a physical
and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF
lifecycle manager (compute) [1042] may be responsible for deciding on which
server of the communication network, the microservice will be instantiated. The
VNF lifecycle manager (compute) [1042] may manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
manager (compute) [1042] may be responsible for determining which sequence to
be followed for executing the process. For e.g., in an AMF network function of the
communication network (such as a 5G network), the sequence for execution of
processes P1 and P2, etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network services/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the
logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] may be used for the CNFs
lifecycle management.
[0047] The platforms foundation services module [106] comprises a microservices
elastic load balancer [1062], an identity & access manager [1064], a command line
interface (CLI) [1066], a central logging manager [1068], and an event routing
manager [1070]. The microservices elastic load balancer [1062] may be used for
maintaining the load balancing of the requests for the services. The identity & access
manager [1064] may be used for logging purposes. The command line
interface (CLI) [1066] may be used to provide commands to execute certain processes which
require changes during the run time. The central logging manager [1068] may be
responsible for keeping the logs of every service. These logs are generated by the
MANO platform [100] and are used for debugging purposes. The event
routing manager [1070] may be responsible for routing the events, i.e., the
application programming interface (API) hits, to the corresponding services.
[0048] The platform core services module [108] comprises an NFV infrastructure
monitoring manager [1082]; an assure manager [1084]; a performance manager
[1086]; a policy execution engine [1088]; a capacity monitoring manager [1090]; a
release management (mgmt.) repository [1092]; a configuration manager & a
golden configuration template (GCT) [1094]; an NFV platform decision analytics
[1096]; a platform NoSQL DB [1098]; a platform schedulers and cron jobs [1100];
a VNF backup & upgrade manager [1102]; a micro service auditor [1104]; and a
platform operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs, for e.g., any metrics such as CPU utilization by the VNF. The assure manager
[1084] may be responsible for supervising the alarms the vendor may be generating.
The performance manager [1086] may be responsible for managing the performance
counters. The policy execution engine (PEGN) [1088] may be responsible for
managing all the policies. The capacity monitoring manager (CMM) [1090] may
be responsible for sending the request to the PEGN [1088]. The release
management (mgmt.) repository (RMR) [1092] may be responsible for managing
the releases and the images of all the vendor network nodes. The configuration
manager & (GCT) [1094] manages the configuration and GCT of all the vendors.
The NFV platform decision analytics (NPDA) [1096] helps in deciding the priority
of using the network resources. It may be further noted that the policy execution
engine (PEGN) [1088], the configuration manager & GCT [1094] and the NPDA
[1096] work together. The platform NoSQL DB [1098] may be a database for
storing all the inventory (both physical and logical) as well as the metadata of the
VNFs and CNFs. The platform schedulers and cron jobs [1100] schedule tasks
such as, but not limited to, triggering of an event, traversing the network graph, etc. The
VNF backup & upgrade manager [1102] takes backups of the images and binaries of the
VNFs and the CNFs and produces those backups on demand in case of server
failure. The micro service auditor [1104] audits the microservices. For e.g., in a
hypothetical case where instances not instantiated by the MANO architecture [100]
are using the network resources, the micro service auditor [1104] audits and
reports the same so that resources can be released for services running in the
MANO architecture [100], thereby assuring that the services only run on the MANO
platform [100]. The platform operations, administration and maintenance manager
[1106] may be used for newer instances that are spawning.
[0049] The platform resource adapters and utilities module [112] further comprises
a platform external API adaptor and gateway [1122]; a generic decoder and indexer
(XML, CSV, JSON) [1124]; a docker swarm adaptor [1126]; an OpenStack API
adapter [1128]; and an NFV gateway [1130]. The platform external API adaptor and
gateway [1122] may be responsible for handling the external services (to the
MANO platform [100]) that require the network resources. The generic decoder
and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system
in the XML, CSV, JSON format. The docker swarm adaptor [1126] may be the
interface provided between the telecom cloud and the MANO architecture [100] for
communication. The OpenStack API adapter [1128] may be used to connect with
the virtual machines (VMs). The NFV gateway [1130] may be responsible for
providing the path to each service going to/coming from the MANO architecture
[100].
[0050] Referring to FIG. 2, the computing device [200] may include a bus [202] or
other communication mechanism for communicating information, and a hardware
processor [204] coupled with the bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access
memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose
machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0051] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive, is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc., may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. This input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0052] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0053] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0054] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0055] Further, the system [300] may be implemented using the computing device [200] (as shown in FIG. 2). In an implementation, the computing device [200] may be connected to the system [300] to implement the features of the present disclosure.
[0056] Referring to FIG. 3, an exemplary block diagram of the system [300] for load balancing between capacity manager (CM) instances is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one load balancer (LB) unit [302]. The LB unit [302] of the system [300] may comprise at least one transceiver unit [304] and at least one processing unit [310]. Further, the system [300] may be connected with at least one operation and management (OAM) unit [306] and a plurality of capacity manager (CM) instances [308]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, only a few units are shown in FIG. 3; however, the system [300] may comprise multiple such units, or any number of said units, as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity. In another implementation, the system [300] may reside partly in the server/network entity.
[0057] The system [300] is configured for load balancing between the capacity manager (CM) instances, with the help of the interconnection between the components/units of the system [300].
[0058] As would be understood, load balancing may refer to the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient. Load balancing can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle. The CM instances may refer to software components or systems that are specifically designed to monitor, analyse, and optimize the utilization of resources within a given environment. In an example, the CM instances may be similar to the instances of CMM [1090] as provided in FIG. 1. Also, the CM service may refer to the service for monitoring the capacity of network functions (the VNFs and CNFs) in terms of hardware capacity and load capacity, such as CPU utilization based on thread count, RAM, throughput of the hardware, etc. In the context of the present disclosure, load balancing is done in order to evenly distribute the load between different instances of the capacity manager.
[0059] Initially, for load balancing between the CM instances, the transceiver unit [304] receives at the LB unit [302], from the OAM unit [306], health status information of a plurality of capacity manager (CM) instances [308]. It is to be noted that the health status information is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308]. It is further noted that the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance. In an implementation of the present disclosure, the LB unit [302] may comprise one or more load balancers such as elastic load balancers (ELBs) [602]. An elastic load balancer (ELB) [602] may refer to a scalable and reliable load balancing service that distributes incoming traffic across multiple servers or instances, ensuring optimal performance, scalability, and fault tolerance. Further, in an example, the elastic load balancers [602] of the LB unit [302] may also be similar to the microservice elastic load balancers [1062] as provided in FIG. 1.
[0060] The OAM units [306] are essential components of telecommunication networks that are responsible for managing and monitoring the performance of network elements and services, and for providing a centralized platform for network operators to efficiently oversee and control various aspects of network operations. In an example, the OAM unit [306] may be similar to the platform Operations, Administration, and Maintenance Manager [1106] as provided in FIG. 1. It may be noted that the health status information may refer to data which provides real-time insights into the health and performance of network devices and services, such as information associated with device availability, resource utilization, performance metrics, and fault indications, as well as alerts for hardware failures, software errors, or configuration issues. By monitoring health status information, network administrators can proactively identify and address potential problems, ensuring optimal network performance and reliability. It may be noted that the health status information may be monitored by utilizing various techniques, such as simple network management protocols and performance management systems, for collecting, analysing, and visualising the health status information from the various networks and devices.
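The health classification described above can be illustrated with a minimal sketch. The metric names, thresholds, and instance identifiers below are hypothetical assumptions, not taken from the specification; the sketch only shows how reported metrics could map to the "healthy" / "malfunctioning" labels the OAM unit [306] reports to the LB unit [302].

```python
from dataclasses import dataclass

@dataclass
class CMHealth:
    """Hypothetical metrics an OAM unit might collect per CM instance."""
    instance_id: str
    cpu_utilization: float   # fraction of CPU in use, 0.0 - 1.0
    error_count: int         # recent errors or warnings
    heartbeats_missed: int   # consecutive missed heartbeats

def classify(h, cpu_limit=0.95, max_errors=10, max_missed=3):
    """Return 'healthy' or 'malfunctioning' for one CM instance."""
    if (h.heartbeats_missed >= max_missed
            or h.error_count >= max_errors
            or h.cpu_utilization >= cpu_limit):
        return "malfunctioning"
    return "healthy"

statuses = {h.instance_id: classify(h) for h in [
    CMHealth("cm-1", 0.40, 0, 0),   # normal operation
    CMHealth("cm-2", 0.99, 2, 0),   # overloaded CPU
    CMHealth("cm-3", 0.30, 1, 5),   # stopped responding
]}
```

In this sketch the OAM unit would forward the `statuses` mapping to the LB unit as the health status information of the plurality of CM instances.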
[0061] Further, a healthy CM instance is one that is functioning as expected, accurately monitoring and managing resources, and providing valuable insights for network optimization. Furthermore, a malfunctioning CM instance may be an instance which exhibits various issues, such as inaccurate resource monitoring, ineffective resource allocation, delayed response times, frequent errors or warnings, etc.
[0062] The processing unit [310] of the load balancer unit [302] is configured to distribute data traffic from the malfunctioning CM instance among the healthy CM instances. It is emphasized that the distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance.
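The ownership-based redistribution above can be sketched as follows. This is an illustrative assumption of how an LB might implement it: the function names, session keys, and the turn-taking assignment policy are all hypothetical; the only behaviour taken from the text is that traffic is handed only to healthy instances that indicated ownership.

```python
def redistribute(traffic_keys, claimants):
    """Reassign a malfunctioning instance's traffic to claiming instances.

    traffic_keys: identifiers of traffic served by the malfunctioning
    CM instance; claimants: healthy instance ids that sent an ownership
    indication. Each key is handed to one claiming instance in turn."""
    if not claimants:
        raise RuntimeError("no healthy CM instance claimed ownership")
    return {key: claimants[i % len(claimants)]
            for i, key in enumerate(traffic_keys)}

# Example: cm-1 malfunctions; cm-2 and cm-3 claimed ownership.
assignment = redistribute(["sess-1", "sess-2", "sess-3"], ["cm-2", "cm-3"])
```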
[0063] As would be understood, data traffic may refer to the flow of digital information over a network, such as the internet or a private network. The data traffic may be related to the capacity of the NFs, for example, the RAM or CPU capacity of AMF instances, or an application, etc. The data traffic is provided to the LB from all instances of all nodes. Further, the indication may refer to a priority-based message based on which the data traffic is distributed. It may be noted that such priority may be assigned on a first-come, first-served basis. For example, the healthy instance which approaches the OAM first is given first priority; the OAM assigns a priority to each CM instance, and the data traffic is distributed accordingly.
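The first-come, first-served priority assignment above can be sketched as OAM-side bookkeeping. The class and instance names are hypothetical assumptions; the only behaviour taken from the text is that the first healthy instance to approach the OAM receives the best priority.

```python
class PriorityRegistry:
    """Hypothetical OAM-side record: the first healthy instance to report
    gets priority 1, the next priority 2, and so on; repeats are ignored."""

    def __init__(self):
        self._order = {}

    def report_healthy(self, instance_id):
        # First report wins the lowest (best) priority number.
        if instance_id not in self._order:
            self._order[instance_id] = len(self._order) + 1

    def priority(self, instance_id):
        return self._order[instance_id]

reg = PriorityRegistry()
for cm in ["cm-3", "cm-1", "cm-3", "cm-2"]:   # cm-3 approaches first
    reg.report_healthy(cm)
```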
[0064] In an exemplary aspect of the present disclosure, the processing unit [310] is configured to distribute the data traffic based on a timeout indication related to serving a transaction. The transaction may refer to a task being performed by the CM instance; for example, the transaction may be the request and/or response or the service associated with the instance.
[0065] In an exemplary aspect of the present disclosure, the processing unit [310] is configured to distribute the data traffic based on at least one of a header-based routing procedure and a context-based routing procedure. The header-based routing procedure may use information contained within the packet header, such as the source IP address, destination IP address, port numbers, or protocol type, to determine the routing path. The context-based routing procedure may consider additional factors beyond packet headers, such as network conditions, application requirements, or policy rules, to determine the routing path.
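The contrast between the two procedures can be sketched as below. The routing table, the load-threshold policy, and all field names are illustrative assumptions; the sketch only shows that header-based routing looks at header fields alone, while context-based routing layers extra conditions on top.

```python
def header_route(headers, port_table):
    """Header-based: choose a backend from the destination port alone."""
    return port_table.get(headers.get("dst_port"), "cm-default")

def context_route(headers, port_table, load):
    """Context-based: start from the header decision, then apply a policy
    rule (avoid backends above 90% load) using live network conditions."""
    choice = header_route(headers, port_table)
    if load.get(choice, 0.0) > 0.90:
        choice = min(load, key=load.get)   # fall back to least-loaded backend
    return choice

port_table = {8080: "cm-1", 9090: "cm-2"}
load = {"cm-1": 0.95, "cm-2": 0.20}
header_choice = header_route({"dst_port": 8080}, port_table)          # header only
context_choice = context_route({"dst_port": 8080}, port_table, load)  # header + load
```

Header-based routing picks cm-1 for port 8080, but the context-based variant overrides that choice because cm-1 is over the assumed load threshold.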
[0066] In an exemplary aspect of the present disclosure, the processing unit [310] is configured to distribute the data traffic among the plurality of CM instances [308] in a round robin manner. The round robin manner may refer to a simple load balancing algorithm that distributes incoming requests to servers in a circular fashion; optionally, each server may be assigned a weight, and requests may be distributed to servers in proportion to their weights.
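A minimal sketch of the (weighted) round-robin distribution described above; the instance names and weights are hypothetical assumptions.

```python
import itertools

def weighted_round_robin(weights):
    """weights: dict of instance id -> integer weight. Yields instance ids
    in a repeating cycle where each id appears in proportion to its weight;
    with all weights equal this reduces to plain round robin."""
    ring = [cm for cm, w in weights.items() for _ in range(w)]
    return itertools.cycle(ring)

rr = weighted_round_robin({"cm-1": 2, "cm-2": 1})
first_six = [next(rr) for _ in range(6)]
```

Here cm-1, with twice the weight, receives two of every three requests.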
[0067] In an exemplary aspect of the present disclosure, the health status information of each of the plurality of CM instances [308] is maintained in at least one of a local cache associated with each of the plurality of CM instances [308] and a database stored in a storage unit [314]. As would be known, the local cache may refer to a temporary storage area located on a device or system that stores frequently accessed data to improve performance. Also, the database may refer to a structured collection of data that is organized in a way that allows for efficient storage, retrieval, and management of information.
[0068] In an exemplary aspect of the present disclosure, the transceiver unit [304] is configured to transmit an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of the distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
[0069] In an exemplary aspect of the present disclosure, the transceiver unit [304] is configured to receive, from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of CM instances [308]. As would be understood, the alert may be an indication of the addition or the deletion of the CM instance.
[0070] In an exemplary aspect of the present disclosure, after receiving, from one or more of the healthy CM instances, the indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance, the one or more healthy CM instances fetch state information of the incomplete transaction being served by the malfunctioning CM instance. The state information of the incomplete transaction may relate to the process or the service of the network function. For example, consider an instance serving a request which comprises four stages: S1, S2, S3, and S4. If the instance starts malfunctioning after processing S2 successfully, that state of the service/process is saved, and the healthy instance will fetch the state of the process and resume from S3 onwards; it will not restart the process from the beginning.
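The S1–S4 hand-over example above can be sketched directly. The shape of the saved state record is a hypothetical assumption; the stage names and the resume-after-S2 behaviour come from the example in the text.

```python
STAGES = ["S1", "S2", "S3", "S4"]

def stages_remaining(saved_state):
    """saved_state is what the claiming healthy instance fetches from
    shared storage; 'last_completed' names the last stage the
    malfunctioning instance finished. Returns the stages still to run."""
    last = saved_state["last_completed"]
    return STAGES[STAGES.index(last) + 1:]

# The malfunctioning instance completed S2 before failing, so the
# healthy instance resumes at S3 rather than restarting from S1.
remaining = stages_remaining({"transaction": "req-42", "last_completed": "S2"})
```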
[0071] In an exemplary aspect of the present disclosure, the indication is based on a priority assigned by the OAM unit [306] to each of the healthy CM instances.
[0072] Referring to FIG. 4, an exemplary method flow diagram [400] for load balancing between capacity manager (CM) instances, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] may be performed by the system [300] (as shown in FIG. 3). Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0073] At step [404], the method [400] comprises receiving, by a transceiver unit [304] at a load balancer (LB) unit [302], from an operation and management (OAM) unit [306], health status information of a plurality of capacity manager (CM) instances [308]. It is to be noted that the health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308]. It is further noted that the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance.
[0074] At step [406], the method [400] comprises distributing, by a processing unit [310] at the LB unit [302], data traffic from the malfunctioning CM instance among the healthy CM instances. It is emphasized that the distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance.
[0075] In an exemplary aspect of the present disclosure, the method [400] comprises distributing, by the processing unit [310], the data traffic based on a timeout indication related to serving a transaction.
[0076] In an exemplary aspect of the present disclosure, after receiving, from one or more of the healthy CM instances, the indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance, the method [400] comprises fetching, by the one or more healthy CM instances, state information of the incomplete transaction being served by the malfunctioning CM instance.
[0077] In an exemplary aspect of the present disclosure, the indication is based on
a priority assigned by the OAM unit [306] to each of the healthy CM instances.
[0078] In an exemplary aspect of the present disclosure, the method [400] comprises distributing, by the processing unit [310], the data traffic based on at least one of a header-based routing procedure and a context-based routing procedure.

[0079] In an exemplary aspect of the present disclosure, the method [400] comprises distributing, by the processing unit [310], the data traffic among the plurality of CM instances [308] in a round robin manner.
[0080] In an exemplary aspect of the present disclosure, the method [400] comprises maintaining the health status information of each of the plurality of CM instances [308] in at least one of a local cache [3082] associated with each of the plurality of CM instances [308] and a database [3142] stored in a storage unit [314].
[0081] In an exemplary aspect of the present disclosure, the method [400] comprises transmitting, by the transceiver unit [304], an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of the distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
[0082] In an exemplary aspect of the present disclosure, the method [400] comprises receiving, by the transceiver unit [304] at the LB unit [302] from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of CM instances [308].
[0083] Thereafter, the method [400] terminates at step [408].
[0084] Referring to FIG. 5, an exemplary method flow diagram [500] for load balancing between capacity manager (CM) instances, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [500] may be performed by the system [300] (as shown in FIG. 3). Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 5, the method [500] starts at step [502].
[0085] At step [504], the LB unit [302] receives the health status information of the plurality of capacity manager (CM) instances [308], based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308], and accordingly monitors whether each CM instance is a healthy CM instance or a malfunctioning CM instance.
[0086] Then, at step [506], the OAM unit [306] alerts the LB unit [302] by sending the alert related to the addition or deletion of CM instances. It may be noted that CM instances may be added to the plurality of CM instances, and similarly, CM instances may be deleted from the plurality of CM instances. The plurality of CM instances may be within a cluster of various CM instances.
[0087] Further, at step [508], the LB unit [302] takes the data traffic within the malfunctioning CM instances and distributes the data traffic to the healthy CM instances. It may be noted that the distribution is based on the indication, which may select a particular instance from the healthy CM instances and distribute the data traffic towards it. Further, in order to distribute the data traffic, the CM_LB interface [316] may be used, which may be a component used for exchanging information between the CM instances and the LB unit [302], and which may utilize different communication protocols for exchanging information, as would be obvious to a person skilled in the art. The CM_LB interface [316] is the interface between the LB unit and each CM instance for transmitting data traffic and transactions and for receiving indications and acknowledgements; that is, all of the interactions between the LB unit and the CM instances may take place over this interface. Further, the CM_LB interface [316] may be an HTTP interface which may be located at either the LB unit or the CM instance, or at both.
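Since the specification only says the CM_LB interface [316] may be an HTTP interface, the concrete message shapes are left open; the sketch below is a hypothetical assumption of what an ownership indication and the corresponding acknowledgement might look like as JSON bodies. The field names and values are illustrative, not part of the disclosure.

```python
import json

def ownership_indication(instance_id, claim_share):
    """Hypothetical body a healthy CM instance might send to the LB over
    the CM_LB interface to claim a share of the failed instance's traffic."""
    return json.dumps({"instance": instance_id, "claim_share": claim_share})

def acknowledgement(assignments):
    """Hypothetical body the LB might return, acknowledging that the
    malfunctioning instance's traffic was redistributed as given."""
    return json.dumps({"status": "redistributed", "assignments": assignments})

claim = json.loads(ownership_indication("cm-2", 0.5))
ack = json.loads(acknowledgement({"sess-1": "cm-2"}))
```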
[0088] The method [500] herein terminates at step [510].
[0089] Referring to FIG. 6, another exemplary block diagram of a system architecture [600] for load balancing between capacity manager (CM) instances is shown, in accordance with the exemplary embodiments of the present disclosure. The system architecture [600] comprises a user interface layer [102], an identity and access manager (IAM) [1064], one or more elastic load balancers [602], an event routing manager (ERM) [1070], a plurality of CM instances [604], an elastic search cluster [606], an Operations Administration and Maintenance unit [306], and a container network function (CNF) lifecycle manager [1052].
[0090] The user interface layer [102] may be communicatively coupled with the elastic search cluster [606]. Also, the user interface layer [102] may be connected with the IAM [1064] and the ERM [1070] using the ELB [602]. As would be understood, an ELB is a scalable and reliable load balancing service used to distribute incoming traffic across multiple servers or instances, ensuring optimal performance, scalability, and fault tolerance. Similarly, the ERM [1070] may be connected with the plurality of CM instances [604] using the ELB [602]. The elastic search cluster [606] may be connected with the plurality of CM instances [604]. The elastic search cluster [606] may refer to a distributed search engine designed to handle large volumes of data and complex search queries.
[0091] Another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for load balancing between capacity manager (CM) instances, the storage medium comprising executable code which, when executed by one or more units of a system [300], causes a transceiver unit [304] to receive, from an operation and management (OAM) unit [306], health status information of a plurality of capacity manager (CM) instances [308]. The health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308]. The health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance. Further, the executable code, when executed, further causes a processing unit [310] to distribute data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance.
[0092] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0093] As is evident from the above, the present disclosure provides a technically advanced solution for load balancing between capacity manager (CM) instances, and provides the ability to support HTTP/HTTPS in parallel (configurable). Further, the present solution ensures routing of client requests across all servers in a manner that maximizes speed and capacity utilization, and additionally ensures header-based routing, which saves time and database hits. Furthermore, the present solution ensures an asynchronous, event-based implementation to utilize the interface efficiently, along with fault tolerance against failures: the interface works in a high-availability mode, and if one capacity manager instance goes down, the next available instance takes over its requests.
[0094] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
We Claim:
1. A method [400] of load balancing between capacity manager (CM)
instances, the method [400] comprising:
- receiving, by a transceiver unit [304] at a load balancer (LB) unit
[302], from an operation and management (OAM) unit [306],
health status information of a plurality of capacity manager (CM)
instances [308],
wherein the health status information of the plurality of CM
instances [308] is detected by the OAM unit [306] based on a
monitoring of the health status information of the plurality of capacity
manager (CM) instances [308], wherein the health status
information of each of the plurality of CM instances [308] is
indicative of one of a healthy CM instance and a malfunctioning CM
instance; and
- distributing, by a processing unit [310] at the LB unit [302], data
traffic from the malfunctioning CM instance among the healthy CM
instances,
wherein the distribution is based on receiving, from one or
more of the healthy CM instances, an indication corresponding to
taking ownership of at least a part of the data traffic from the
malfunctioning CM instance.
2. The method [400] as claimed in claim 1, wherein the method [400] comprises distributing, by the processing unit [310], the data traffic based on a timeout indication related to serving a transaction.
3. The method [400] as claimed in claim 1, wherein the method [400] comprises distributing, by the processing unit [310], the data traffic based on at least one of a header-based routing procedure and a context-based routing procedure.
4. The method [400] as claimed in claim 1, wherein the method [400]
comprises distributing, by the processing unit [310], the data traffic among
the plurality of CM instances [308] in a round robin manner.
5. The method [400] as claimed in claim 1, wherein the method [400] comprises maintaining the health status information of each of the plurality of CM instances [308] in at least one of a local cache associated with each of the plurality of CM instances [308] and a database stored in a storage unit [314].
6. The method [400] as claimed in claim 1, wherein the method [400] comprises transmitting, by the transceiver unit [304], an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of the distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
7. The method [400] as claimed in claim 1, wherein the method [400] comprises receiving, by the transceiver unit [304] at the LB unit [302] from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of CM instances [308].
8. The method [400] as claimed in claim 1, wherein, after receiving, from one or more of the healthy CM instances, the indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance, the method [400] comprises:
- fetching, by the one or more healthy CM instances, state information of the incomplete transaction being served by the malfunctioning CM instance.
9. The method [400] as claimed in claim 1, wherein the indication is based on
a priority assigned by the OAM unit [306] to each of the healthy CM
instances.
10. The method as claimed in claim 1, wherein the data traffic is distributed, by the processing unit [310] at the LB unit [302], from the malfunctioning CM instance, among the healthy CM instances, over a CM_LB interface [316].
11. A system [300] of load balancing between capacity manager (CM)
instances, the system [300] comprising a load balancer (LB) unit [302], the
load balancer unit [302] further comprising:
- a transceiver unit [304] configured to:
- receive, from an operation and management (OAM) unit [306],
health status information of a plurality of capacity manager (CM)
instances [308],
wherein the health status information of the plurality of CM
instances [308] is detected by the OAM unit [306] based on a
monitoring of the health status information of the plurality of capacity
manager (CM) instances [308], wherein the health status information
of each of the plurality of CM instances [308] is indicative of one of
a healthy CM instance and a malfunctioning CM instance; and
- a processing unit [310] configured to:
- distribute data traffic from the malfunctioning CM instance among
the healthy CM instances,
wherein the distribution is based on receiving, from one or
more of the healthy CM instances, an indication corresponding to
taking ownership of at least a part of the data traffic from the
malfunctioning CM instance.
12. The system [300] as claimed in claim 11, wherein the processing unit [310]
is configured to distribute the data traffic based on a timeout indication
related to serving a transaction.
13. The system [300] as claimed in claim 11, wherein the processing unit [310] is configured to distribute the data traffic based on at least one of a header-based routing procedure and a context-based routing procedure.
14. The system [300] as claimed in claim 11, wherein the processing unit [310] is configured to distribute the data traffic among the plurality of CM instances [308] in a round robin manner.
15. The system [300] as claimed in claim 11, wherein the health status information of each of the plurality of CM instances [308] is maintained in at least one of a local cache [3082] associated with each of the plurality of CM instances [308] and a database stored in a storage unit [314].
16. The system [300] as claimed in claim 11, wherein the transceiver unit [304] is configured to transmit an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of the distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
17. The system [300] as claimed in claim 11, wherein the transceiver unit [304] is configured to receive, from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of CM instances [308].
18. The system [300] as claimed in claim 11, wherein, after receiving, from one or more of the healthy CM instances, the indication corresponding to taking ownership of at least a part of the data traffic from the malfunctioning CM instance, the one or more healthy CM instances fetch state information of the incomplete transaction being served by the malfunctioning CM instance.
19. The system [300] as claimed in claim 11, wherein the indication is based on
a priority assigned by the OAM unit [306] to each of the healthy CM
instances.
20. The system [300] as claimed in claim 11, wherein the processing unit [310] at the LB unit [302] is further configured to distribute the data traffic, from the malfunctioning CM instance, among the healthy CM instances, over a CM_LB interface [316].

Documents

Application Documents

# Name Date
1 202321062849-STATEMENT OF UNDERTAKING (FORM 3) [19-09-2023(online)].pdf 2023-09-19
2 202321062849-PROVISIONAL SPECIFICATION [19-09-2023(online)].pdf 2023-09-19
3 202321062849-POWER OF AUTHORITY [19-09-2023(online)].pdf 2023-09-19
4 202321062849-FORM 1 [19-09-2023(online)].pdf 2023-09-19
5 202321062849-FIGURE OF ABSTRACT [19-09-2023(online)].pdf 2023-09-19
6 202321062849-DRAWINGS [19-09-2023(online)].pdf 2023-09-19
7 202321062849-Proof of Right [11-01-2024(online)].pdf 2024-01-11
8 202321062849-FORM-5 [19-09-2024(online)].pdf 2024-09-19
9 202321062849-ENDORSEMENT BY INVENTORS [19-09-2024(online)].pdf 2024-09-19
10 202321062849-DRAWING [19-09-2024(online)].pdf 2024-09-19
11 202321062849-CORRESPONDENCE-OTHERS [19-09-2024(online)].pdf 2024-09-19
12 202321062849-COMPLETE SPECIFICATION [19-09-2024(online)].pdf 2024-09-19
13 202321062849-Request Letter-Correspondence [07-10-2024(online)].pdf 2024-10-07
14 202321062849-Power of Attorney [07-10-2024(online)].pdf 2024-10-07
15 202321062849-Form 1 (Submitted on date of filing) [07-10-2024(online)].pdf 2024-10-07
16 202321062849-Covering Letter [07-10-2024(online)].pdf 2024-10-07
17 202321062849-CERTIFIED COPIES TRANSMISSION TO IB [07-10-2024(online)].pdf 2024-10-07
18 202321062849-FORM 3 [08-10-2024(online)].pdf 2024-10-08
19 Abstract.jpg 2024-10-18
20 202321062849-ORIGINAL UR 6(1A) FORM 1 & 26-090125.pdf 2025-01-14