Abstract: The present disclosure relates to a method and a system for handling an overload condition in a network. The method comprises receiving, at a primary network function (NF), a plurality of input messages associated with the network. The method further comprises duplicating, from the primary NF to a secondary NF and a tertiary NF, an update associated with the plurality of input messages. Further, the method comprises monitoring a total transactions per second (TPS) value associated with the update. Furthermore, the method comprises determining a positive breach condition associated with the update in an event the TPS value associated with the update crosses a preconfigured TPS threshold value. Thereafter, the method comprises initiating a staggering logic to handle the overload condition associated with the plurality of input messages, based on the positive breach condition, wherein the staggering logic comprises a rate limiter instance for duplicating the update at a predefined rate. FIG. 4
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR HANDLING AN OVERLOAD
CONDITION IN A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR HANDLING AN OVERLOAD
CONDITION IN A NETWORK
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to a field of wireless
communication. More particularly, the present disclosure relates to methods and
systems for handling an overload condition in a network.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. The third generation
(3G) technology marked the introduction of high-speed internet access, mobile
video calling, and location-based services. The fourth generation (4G) technology
revolutionized wireless communication with faster data speeds, better network
coverage, and improved security. Currently, the fifth generation (5G) technology is
being deployed, promising even faster data speeds, low latency, and the ability to
connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable
of delivering more services to its users.
[0004] Currently, to ensure high availability, uninterrupted services, and efficient
failover mechanisms, Network Functions (NFs) are deployed in clusters of three.
Each NF in the cluster acts as an active NF, a standby NF, or a spare NF. Further,
the active NF is the primary NF that handles
live traffic and performs intended functions. It is responsible for processing
requests, executing tasks, and delivering services to clients or end-users. Further,
the standby NF is a redundant copy of the active NF. This NF closely monitors the
active NF and maintains synchronized state information. The standby NF is ready
to take over the active role instantly in case of a failure or disruption in the active
NF. This ensures seamless failover and continuity of the services. Furthermore, the
spare NF is an additional backup NF that remains idle but is fully configured and
synchronized with the active NF and the standby NF. It serves as an additional
safety net in case both the active NF and the standby NF encounter failures
simultaneously. The spare NF can be quickly activated to restore services in such
critical situations.
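The failover behaviour of the three-NF cluster described above may be sketched as follows. This is a minimal illustration only; the role names are from this disclosure, but the identifiers (`Cluster`, `nf-1`, etc.) and the promotion order are hypothetical and not prescribed by the specification:

```python
from enum import Enum


class Role(Enum):
    ACTIVE = "active"    # primary NF: handles live traffic
    STANDBY = "standby"  # synchronized hot backup, takes over instantly
    SPARE = "spare"      # idle but fully configured additional safety net


class Cluster:
    """Minimal model of a three-NF cluster with role-based failover."""

    def __init__(self):
        # one NF instance per role (hypothetical node names)
        self.nfs = {Role.ACTIVE: "nf-1", Role.STANDBY: "nf-2", Role.SPARE: "nf-3"}

    def fail(self, role: Role) -> None:
        """Remove a failed NF and promote backups to keep services running."""
        self.nfs.pop(role, None)
        if role == Role.ACTIVE:
            # standby takes over the active role; spare becomes the new standby
            if Role.STANDBY in self.nfs:
                self.nfs[Role.ACTIVE] = self.nfs.pop(Role.STANDBY)
            if Role.SPARE in self.nfs:
                self.nfs[Role.STANDBY] = self.nfs.pop(Role.SPARE)


cluster = Cluster()
cluster.fail(Role.ACTIVE)
# the standby NF is promoted to active, and the spare NF to standby
```

In this sketch a failure of the active NF promotes the standby immediately, and the spare backfills the standby role, mirroring the continuity behaviour described above.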
[0005] Further, any changes made on the active NF are propagated to the standby
NF and the spare NF to keep them updated through an RPC (Replication and
clustering) unit, which is used for duplication. The RPC makes sure that the data is
synchronized and/or consistent in real-time or near real-time across all NFs
comprising the active NF, standby NF and spare NF. In present cellular networks,
all NFs are stateful, in which data related to user/connection/association is stored
inside the network functions itself. However, a stateful NF causes multiple
connection related problems in case of its failure and it needs a separate standby NF
to continue the process, which increases maintenance cost and reduces reliability of
network system. A part of the state maintained by each NF is a session data cache.
In order to support proper processing of requests on NFs other than the primary
"active" NF, this session data cache must be synchronized throughout all the NFs
in the cluster. However, there may be situations where, while performing
replication upon a restart of the spare NF, the corresponding active NF may
experience an overload condition because of the large number of transactions to be
handled. An example may be a Policy Control Function (PCF) in the 5G network,
which may become overloaded during periods of high transaction volume, such as
bulk replication, after a spare NF restart. This causes service disruptions and
hampers the quality of service for end users. Further, during bulk replication, a
sudden surge in data transfers can lead to network congestion. Currently, there is no
mechanism designed to gracefully handle situations where nodes are added back to
the system after temporary downtime. Therefore, there is a need for a solution that
provides a mechanism to handle high Total Transactions per Second (TPS) during
bulk replication to prevent overload on the active node, such as the Policy Control
Function (PCF).
[0006] Hence, in view of these and other existing limitations, there arises an
imperative need to provide an efficient solution to overcome the above-mentioned
and other limitations and to provide a method and a system for handling an overload
condition in a network.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0008] An aspect of the present disclosure may relate to a method for handling an
overload condition in a network. The method comprises receiving, by a transceiver
unit at a primary network function from a User Equipment (UE), a plurality of input
messages associated with the network. The method further comprises duplicating,
by a processing unit, from the primary network function to a secondary network
function and a tertiary network function, an update associated with the plurality of
input messages. Further, the method comprises monitoring, by a monitoring unit at
the primary network function, a total transactions per second (TPS) value associated
with the update. Furthermore, the method comprises determining, by a
determination unit at the primary network function, a positive breach condition
associated with the update in an event the TPS value associated with the update
crosses a preconfigured TPS threshold value. Thereafter, the method comprises
initiating, by a control unit at the primary network function, a staggering logic to
handle the overload condition associated with the plurality of input messages, based
on the positive breach condition, wherein the staggering logic comprises a rate
limiter instance for duplicating the update at a predefined rate.
[0009] In an exemplary aspect of the present disclosure, each input message from
the plurality of input messages comprises at least one of a requesting task, an
executing task, and a delivering service task.
[0010] In an exemplary aspect of the present disclosure, the primary network
function corresponds to an active network function, the secondary network function
corresponds to a standby network function, and the tertiary network function
corresponds to a spare network function.
[0011] In an exemplary aspect of the present disclosure, the preconfigured TPS
threshold value is at least one of a user defined TPS threshold value, and a
dynamically defined TPS threshold value.
[0012] In an exemplary aspect of the present disclosure, the update associated with
the plurality of input messages is duplicated, at the secondary network function and
the tertiary network function, at a default rate in an event the TPS value associated
with the update fails to cross the preconfigured TPS threshold value.
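The threshold check and rate-limited duplication described in the aspects above can be sketched as follows. This is a simplified illustration under stated assumptions: the class name `ReplicationStagger`, the window-based TPS measurement, and the injectable `clock` parameter are hypothetical choices made for this sketch, and the disclosure does not prescribe a specific implementation:

```python
import time


class ReplicationStagger:
    """Sketch: duplicate updates from the active NF to the standby and spare
    NFs, engaging a rate limiter when the observed TPS crosses a threshold."""

    def __init__(self, tps_threshold: float, staggered_rate: float,
                 clock=time.monotonic):
        self.tps_threshold = tps_threshold    # preconfigured TPS threshold value
        self.staggered_rate = staggered_rate  # predefined (reduced) duplication rate
        self.clock = clock                    # injectable clock, eases testing
        self.window_start = clock()
        self.count = 0                        # transactions seen in this window

    def current_tps(self) -> float:
        """Observed total transactions per second since the window started."""
        elapsed = max(self.clock() - self.window_start, 1e-9)
        return self.count / elapsed

    def duplicate(self, update, replicas) -> bool:
        """Replicate `update` to each backup NF; return True when the
        staggering logic (rate limiter instance) was engaged."""
        self.count += 1
        # positive breach condition: TPS crosses the preconfigured threshold
        breached = self.current_tps() > self.tps_threshold
        for nf in replicas:
            nf.append(update)
            if breached:
                # staggering logic: pace each write at the predefined rate
                time.sleep(1.0 / self.staggered_rate)
        return breached
```

Below the threshold, updates are written to both replicas immediately (the default rate); once the breach condition turns positive, each replica write is paced at the predefined rate, smoothing the burst that would otherwise overload the active NF during bulk replication.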
[0013] Another aspect of the present disclosure relates to a system for handling an
overload condition in a network. The system comprises a transceiver unit
configured to receive, at a primary network function from a User Equipment (UE),
a plurality of input messages associated with the network. The system further
comprises a processing unit connected to at least the transceiver unit, wherein the
processing unit is configured to duplicate, from the primary network function to a
secondary network function and a tertiary network function, an update associated
with the plurality of input messages. Further, the system comprises a monitoring unit
connected to at least the processing unit, wherein the monitoring unit is configured
to monitor, at the primary network function, a total transactions per second (TPS)
value associated with the update. Furthermore, the system comprises a
determination unit connected to at least the monitoring unit, wherein the
determination unit is configured
to determine, at the primary network function, a positive breach condition
associated with the update in an event the TPS value associated with the update
crosses a preconfigured TPS threshold value. Thereafter, the system comprises a
control unit connected to at least the determination unit, wherein the control unit is
configured to initiate, at the primary network function, a staggering logic to handle
the overload condition associated with the plurality of input messages, based on the
positive breach condition, wherein the staggering logic comprises a rate limiter
instance to duplicate the update at a predefined rate.
[0014] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing one or more instructions for handling an
overload condition in a network. The instructions include executable code which,
when executed by one or more units of a system, causes a transceiver unit of the
system to receive, at a primary network function from a User Equipment (UE), a
plurality of input messages associated with the network. Further, the instructions
include executable code which, when executed causes a processing unit of the
system to duplicate, from the primary network function to a secondary network
function and a tertiary network function, an update associated with the plurality of
input messages. Further, the executable code, which when executed causes a
monitoring unit of the system to monitor, at the primary network function, a total
transactions per second (TPS) value associated with the update. Furthermore, the
executable code which, when executed, causes a determination unit of the system to
determine, at the primary network function, a positive breach condition associated
with the update in an event the TPS value associated with the update crosses a
preconfigured TPS threshold value. Moreover, the executable code which when
executed causes a control unit of the system to initiate, at the primary network
function, a staggering logic to handle the overload condition associated with the
plurality of input messages, based on the positive breach condition, wherein the
staggering logic comprises a rate limiter instance to duplicate the update at a
predefined rate.
OBJECT OF THE DISCLOSURE
[0015] Some of the objects of the present disclosure which at least one embodiment
disclosed herein satisfies are listed herein below.
[0016] It is an object of the present disclosure to provide a method and a system for
handling an overload condition in a network.
[0017] It is another object of the present disclosure to provide a solution for
preventing overloading of an active network function or node.
[0018] It is another object of the present disclosure to provide a solution to improve
data integrity by introducing delays and staggering the data writing process.
[0019] It is another object of the present disclosure to provide a solution for
efficient resource utilization by controlling the rate of writing.
[0020] It is another object of the present disclosure to provide a solution to reduce
network congestion during bulk replication.
[0021] It is another object of the present disclosure to provide a solution to
determine the rate of replication at the network function.
[0022] It is yet another object of the present disclosure to provide a solution to
indicate the level of resilience in the system.
DESCRIPTION OF DRAWINGS
[0023] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0024] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture.
[0025] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented, in accordance
with exemplary implementation of the present disclosure.
[0026] FIG. 3 illustrates an exemplary block diagram of a system for handling an
overload condition in a network, in accordance with exemplary implementation of
the present disclosure.
[0027] FIG. 4 illustrates an exemplary method flow diagram for handling an
overload condition in a network, in accordance with exemplary implementation of
the present disclosure.
[0028] FIG. 5 illustrates an exemplary process flow diagram for handling an
overload condition in a network, in accordance with exemplary implementation of
the present disclosure.
DETAILED DESCRIPTION
[0029] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0030] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0031] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0032] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0033] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive in a manner similar
to the term “comprising” as an open transition word without precluding any
additional or other elements.
[0034] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0035] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0036] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0037] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to as a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0038] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0039] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0040] As discussed in the background section, to ensure high availability,
uninterrupted services, and efficient failover mechanisms, one or more Network
Functions (NFs) are deployed in clusters of three. Each network function in
the cluster acts as an active network function, a standby network function, or a
spare network function. Further, any changes made on the active NF are propagated
to the standby NF and the spare NF to keep them updated through an RPC
(Replication and clustering) unit, which is used for duplication to make sure that
the data is synchronized and/or consistent in real-time or near real-time across all
NFs comprising the active NF, standby NF and spare NF. In present cellular
networks, all NFs are stateful, in which data related to user/connection/association
is stored inside the network functions itself. However, a stateful NF causes multiple
connection related problems in case of its failure and it needs a separate standby NF
to continue the process, which increases maintenance cost and reduces reliability of
network system. A part of the state maintained by each NF is a session data cache.
In order to support proper processing of requests on NFs other than the primary
"active" NF, this session data cache must be synchronized throughout all the NFs
in the cluster. However, there may be situations where, while performing
replication upon a restart of the spare NF, the corresponding active NF may
experience an overload condition because of the large number of transactions to be
handled. An example may be a Policy Control Function (PCF) in the 5G network,
which may become overloaded during periods of high transaction volume, such as
bulk replication, after a spare NF restart. This causes service disruptions and
hampers the quality of service for end users. Further, during bulk replication, a
sudden surge in data transfers can lead to network congestion. Currently, there is no
mechanism designed to gracefully handle situations where nodes are added back to
the system after temporary downtime.
[0041] The present disclosure aims to overcome the above-mentioned and other
existing problems in this field of technology by providing a method and a system
for handling an overload condition in a network. More particularly, the present
disclosure provides a solution to prevent overloading in an active NF, such as a Policy
Control Function (PCF). Further, the present solution improves data integrity by
introducing delays and staggering the data writing process. Also, the present
solution provides efficient resource utilization by controlling the rate of writing.
Furthermore, the present solution reduces network congestion during bulk
replication. Moreover, the present solution increases the level of resilience in the
system and determines the rate of replication in the network function.
[0042] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0043] Referring to FIG. 1, an exemplary block diagram representation of 5th
generation core (5GC) network architecture, in accordance with exemplary
implementation of the present disclosure is shown. As shown in FIG. 1, the 5GC
network architecture [100] includes a user equipment (UE) [102], a radio access
network (RAN) [104], an access and mobility management function (AMF) [106],
a Session Management Function (SMF) [108], a Service Communication Proxy
(SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice
Specific Authentication and Authorization Function (NSSAAF) [114], a Network
Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118],
a Network Repository Function (NRF) [120], a Policy Control Function (PCF)
[122], a Unified Data Management (UDM) [124], an application function (AF)
[126], a User Plane Function (UPF) [128], a data network (DN) [130], wherein all
the components are assumed to be connected to each other in a manner as obvious
to the person skilled in the art for implementing features of the present disclosure.
[0044] Radio Access Network (RAN) [104] is the part of a mobile
telecommunications system that connects user equipment (UE) [102] to the core
network (CN) and provides access to different types of networks (e.g., 5G network).
It consists of radio base stations and the radio access technologies that enable
wireless communication.
[0045] Access and Mobility Management Function (AMF) [106] is a 5G core
network function responsible for managing access and mobility aspects, such as UE
registration, connection, and reachability. It also handles mobility management
procedures like handovers and paging.
[0046] Session Management Function (SMF) [108] is a 5G core network function
responsible for managing session-related aspects, such as establishing, modifying,
and releasing sessions. It coordinates with the User Plane Function (UPF) for data
forwarding and handles IP address allocation and QoS enforcement.
[0047] Service Communication Proxy (SCP) [110] is a network function in the 5G
core network that facilitates communication between other network functions by
providing a secure and efficient messaging service. It acts as a mediator for
service-based interfaces.
[0048] Authentication Server Function (AUSF) [112] is a network function in the
5G core responsible for authenticating UEs during registration and providing
security services. It generates and verifies authentication vectors and tokens.
[0049] Network Slice Specific Authentication and Authorization Function
(NSSAAF) [114] is a network function that provides authentication and
authorization services specific to network slices. It ensures that UEs can access only
the slices for which they are authorized.
[0050] Network Slice Selection Function (NSSF) [116] is a network function
responsible for selecting the appropriate network slice for a UE based on factors
such as subscription, requested services, and network policies.
[0051] Network Exposure Function (NEF) [118] is a network function that exposes
capabilities and services of the 5G network to external applications, enabling
integration with third-party services and applications.
[0052] Network Repository Function (NRF) [120] is a network function that acts
as a central repository for information about available network functions and
services. It facilitates the discovery and dynamic registration of network functions.
[0053] Policy Control Function (PCF) [122] is a network function responsible for
policy control decisions, such as QoS, charging, and access control, based on
subscriber information and network policies.
[0054] Unified Data Management (UDM) [124] is a network function that
centralizes the management of subscriber data, including authentication,
authorization, and subscription information.
[0055] Application Function (AF) [126] is a network function that represents
external applications interfacing with the 5G core network to access network
capabilities and services.
[0056] User Plane Function (UPF) [128] is a network function responsible for
handling user data traffic, including packet routing, forwarding, and QoS
enforcement.
[0057] Data Network (DN) [130] refers to a network that provides data services to
user equipment (UE) in a telecommunications system. The data services may
include, but are not limited to, Internet services and private data network related services.
[0058] Referring to FIG. 2, an exemplary block diagram of a computing device
[200] upon which the features of the present disclosure may be implemented, in
accordance with exemplary implementation of the present disclosure, is shown. In
an implementation, the computing device [200] may implement a method for
handling an overload condition in a network by utilising a system [300]. In another
implementation, the computing device [200] itself implements the method for
handling an overload condition in a network using one or more units configured
within the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0059] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a
random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a
special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0060] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
20 of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0061] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which, in combination with the computing device [200], causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0062] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way
data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0063] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], a host [224], the local network [222] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0064] Referring to FIG. 3, an exemplary block diagram of a system [300] for
handling an overload condition in a network, in accordance with an exemplary
implementation of the present disclosure, is depicted. The system comprises at least
one transceiver unit [302], at least one processing unit [304], at least one monitoring
unit [306], at least one determination unit [308], and at least one control unit [310].
As shown in FIG. 3, all of the components/units of the system [300] are assumed to
be connected to each other unless otherwise indicated below. Also, only a few units
are shown in FIG. 3; however, the system [300] may comprise multiple such units,
or any number of said units, as required to implement the features of the present
disclosure. Further, in an implementation, the system [300] may reside in a server
or a network entity, or the system [300] may be in communication with the network
entity to implement the features as disclosed in the present disclosure.
[0065] The system [300] is configured for handling an overload condition in a
network with the help of the interconnection between the components/units of the
system [300].
[0066] Particularly, the transceiver unit [302] is configured to receive, at a primary
network function from a User Equipment (UE), a plurality of input messages
associated with the network. The primary network function corresponds to an active
network function. Further, the active network function handles live traffic on a node
and performs intended functions. The intended functions are performed by the
primary network function based on the received plurality of input messages.
Furthermore, each input message from the plurality of input messages comprises at
least one of a requesting task, an executing task, and a delivering service task. The
primary network function processes the requesting task from the UE, executes the
task and delivers the service received in the requesting task to the UE. Further, the
tasks may be application-specific and/or network function-specific. The tasks
may include, but are not limited to, handling incoming requests and providing
responses, dumping and/or providing application-specific session cache
information, replicating network function configuration data across the nodes in a
cluster, etc.
[0067] Further, the system [300] comprises the processing unit [304] connected to
at least the transceiver unit [302], wherein the processing unit [304] is configured
to duplicate, from the primary network function to a secondary network function
and a tertiary network function, an update associated with the plurality of input
messages. The input messages received at the primary network function comprise
changes to be made in the data on the active network function, which results in
updated data, which is further duplicated to secondary and tertiary network
functions for keeping the data synchronized on all NFs. The secondary network
function corresponds to a standby network function. Further, the standby network
function is a redundant copy of the active network function, i.e., the primary network
function. The standby network function closely monitors the active network
function and is ready to take over the active role instantly in case of a failure or
disruption in the active network function. Furthermore, the tertiary network
function corresponds to a spare network function. The spare network function is an
additional backup network function that remains idle but may be fully configured
and synchronized with the active network function and the standby network
function. The spare network function serves as an additional safety net in case both
the active network function and the standby network function encounter failures
simultaneously. Also, the spare network function may be quickly activated to
restore services in such critical situations. The updates at the primary network
function may be duplicated at the secondary network function and the tertiary
network function through Replication and Clustering (RPC). The RPC ensures that
the data remains synchronized and/or consistent among all network functions (i.e.,
the active network function, the standby network function and the spare network
function) in real-time or near-real-time.
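As an illustration of the duplication described above, the following Python sketch applies an update on the active network function and replicates it to the standby and spare network functions so that all three hold identical state. All class and function names here are illustrative assumptions, not part of the specification:

```python
# Hypothetical sketch of duplicating an update from the primary (active)
# network function to the secondary (standby) and tertiary (spare) network
# functions. Names are illustrative assumptions only.

class NetworkFunction:
    """Minimal stand-in for a network function holding replicated data."""
    def __init__(self, role):
        self.role = role              # "active", "standby", or "spare"
        self.data = {}                # replicated key/value state

    def apply_update(self, key, value):
        self.data[key] = value

def duplicate_update(primary, replicas, key, value):
    """Apply an update on the primary, then duplicate it to each replica."""
    primary.apply_update(key, value)
    for nf in replicas:               # standby and spare stay synchronized
        nf.apply_update(key, value)

active = NetworkFunction("active")
standby = NetworkFunction("standby")
spare = NetworkFunction("spare")
duplicate_update(active, [standby, spare], "session-1", {"qos": 5})
# active.data, standby.data and spare.data now hold identical state.
```

In a real deployment the duplication would travel over a replication channel (the RPC mechanism mentioned above) rather than in-process method calls.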
[0068] Furthermore, the system [300] comprises the monitoring unit [306]
connected to at least the processing unit [304], wherein the monitoring unit [306] is
configured to monitor, at the primary network function, a total transactions per
second (TPS) value associated with the updates. The TPS relates to a transaction
speed of each network function, i.e., it is a measure of the maximum number of
transactions each network function can process in a second. The monitoring unit
monitors the TPS value of the primary network function associated with each
update received through the input messages.
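One common way to obtain such a TPS value is to count the update transactions whose timestamps fall inside the most recent one-second window. The sketch below is an assumed illustration, not the monitoring unit's actual implementation:

```python
# Hypothetical sliding-window TPS monitor: counts transactions recorded
# within the last one second. Names and structure are illustrative only.
from collections import deque

class TPSMonitor:
    def __init__(self, window=1.0):
        self.window = window
        self.timestamps = deque()     # times of recent transactions

    def record(self, now):
        """Record one update transaction occurring at time `now` (seconds)."""
        self.timestamps.append(now)

    def current_tps(self, now):
        """Drop transactions older than the window and count the rest."""
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps)

mon = TPSMonitor()
for i in range(1500):
    mon.record((i + 1) / 1500.0)      # 1500 updates spread over one second
tps = mon.current_tps(1.0)            # 1500 — above a threshold of 1000
```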
[0069] Furthermore, the system [300] comprises the determination unit [308]
connected to at least the monitoring unit [306], wherein the determination unit [308]
is configured to determine, at the primary network function, a positive breach
condition associated with the update in an event the TPS value associated with the
update crosses a preconfigured TPS threshold value. The positive breach condition
may be a condition where the primary network function, i.e., the active network
function, may replicate/duplicate the updates associated with the input message at a
rate that breaches the preconfigured TPS threshold value, resulting in the overload
condition on the primary network function. Further, the preconfigured TPS
threshold value is at least one of a user defined TPS threshold value, and a
dynamically defined TPS threshold value. Furthermore, in an implementation, the
preconfigured TPS threshold value of the primary network function, i.e., the active
network function, may be 1000. Also, the preconfigured TPS threshold value is not
limited to the mentioned value and may vary. If more than 1000 updates, associated
with the received input message, are replicated/duplicated from the primary
network function, i.e., the active network function, to the secondary network function,
i.e., the standby network function, and the tertiary network function, i.e., the spare
network function, then it may be determined as the positive breach condition.
Therefore, the positive breach condition is a condition where the updates are
replicated/duplicated at a higher rate than the preconfigured TPS threshold value.
Moreover, the TPS threshold value may vary, and the value depends upon the
capacity of the network function. The TPS threshold value may be 1000, 2000 or
any integer value less than the capacity of the network function.
[0070] Thereafter, the control unit [310], connected to at least the determination unit
[308], is configured to initiate, at the primary network function, a staggering logic
to handle the overload condition associated with the plurality of input messages,
based on the positive breach condition, wherein the staggering logic comprises a
rate limiter instance to duplicate the updates at a predefined rate. The input
messages are the incoming requests on the primary network function. The incoming
messages may be different for different network functions involved in a network
architecture (such as 5G architecture). Further, in an implementation, the rate limiter
is configured to replicate/duplicate the updates associated with the input messages
at a default rate if the preconfigured TPS threshold value is not breached.
Furthermore, in another implementation, if the preconfigured TPS threshold value
is breached then the replication/duplication of the update associated with the input
message is allowed at the rate that may be defined by the rate limiter depending
upon the updates. It is to be noted that the predefined rate will be less than the TPS
threshold value, to ensure that the transactions are always processed at a rate which
does not result in overloading of the network functions.
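The rate limiter behaviour described above can be sketched, for illustration only, as a token bucket that leaves duplication untouched at the default rate and throttles it to a predefined rate below the threshold once the breach condition holds. The names and the token-bucket design are assumptions, not the specification's implementation:

```python
# Hypothetical staggering sketch: a token-bucket rate limiter engaged only
# after the positive breach condition (TPS above the preconfigured
# threshold). Names and the token-bucket choice are assumptions.
import time

class StaggeringRateLimiter:
    def __init__(self, tps_threshold=1000, predefined_rate=800):
        # The predefined rate must stay below the threshold so duplication
        # never overloads the network functions.
        assert predefined_rate < tps_threshold
        self.tps_threshold = tps_threshold
        self.predefined_rate = predefined_rate
        self.tokens = float(predefined_rate)
        self.last_refill = time.monotonic()

    def breached(self, current_tps):
        """Positive breach condition: TPS crosses the threshold."""
        return current_tps > self.tps_threshold

    def allow_duplication(self, current_tps):
        """Permit one duplication; throttle only after a breach."""
        if not self.breached(current_tps):
            return True               # default rate: no throttling
        now = time.monotonic()
        self.tokens = min(self.predefined_rate,
                          self.tokens + (now - self.last_refill) * self.predefined_rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                  # stagger: defer this duplication

limiter = StaggeringRateLimiter()
limiter.allow_duplication(current_tps=500)    # below threshold: allowed
limiter.allow_duplication(current_tps=1200)   # breach: rate-limited path
```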
[0071] Moreover, the updates associated with the plurality of input messages are
duplicated, at the secondary network function and the tertiary network function, at
a default rate in an event the TPS value associated with the update fails to cross the
preconfigured TPS threshold value. Furthermore, in an implementation, the
preconfigured TPS threshold value of the primary network function, i.e., the active
network function, may be 1000. Also, the preconfigured TPS threshold value is not
limited to the mentioned value and may vary. If fewer than 1000 updates, associated
with the received input message, are replicated/duplicated from the primary
network function, i.e., the active network function, to the secondary network function,
i.e., the standby network function, and the tertiary network function, i.e., the spare
network function, then the replication/duplication of updates is handled at a
pre-determined value, i.e., the default rate. Therefore, the rate limit of
replication/duplication in the mentioned implementation is less than 1000.
[0072] Referring to FIG. 4, an exemplary method flow diagram for handling an
overload condition in a network, in accordance with an exemplary implementation of
the present disclosure, is illustrated. In an implementation, the method [400] is
performed by the system [300]. Also, as shown in FIG. 4, the method [400] initiates
at step [402].
[0073] At step [404], the method comprises receiving, by a transceiver unit [302]
at a primary network function from a User Equipment (UE), a plurality of input
messages associated with the network. The primary network function corresponds
to an active network function. Further, the active network function handles live
traffic and performs intended functions. The intended functions are performed by
the primary network function based on the received plurality of input messages.
Furthermore, each input message from the plurality of input messages comprises at
least one of a requesting task, an executing task, and a delivering service task. The
primary network function processes the request, executes the tasks and delivers the
service received in the requesting task to the user equipment.
[0074] Next, at step [406], the method comprises duplicating, by a processing unit
[304] from the primary network function to a secondary network function and a
tertiary network function, an update associated with the plurality of input messages.
The input messages received at the primary network function comprise changes to
be made in the data on the active network function, which results in updated data,
which is further duplicated to secondary and tertiary network functions for keeping
the data synchronized on all NFs. The secondary network function corresponds to
a standby network function. Further, the standby network function is a redundant
copy of the active network function, i.e., the primary network function. The standby
network function closely monitors the active network function and is ready to take
over the active role instantly in case of a failure or disruption in the active network
function. Furthermore, the tertiary network function corresponds to a spare network
function. The spare network function is an additional backup network function that
remains idle but may be fully configured and synchronized with the active network
function and the standby network function. The spare network function serves as
an additional safety net in case both the active network function and the standby
network function encounter failures simultaneously. Also, the spare network
function may be quickly activated to restore services in such critical situations. The
update at the primary network function may be duplicated at the secondary network
function and the tertiary network function through Replication and Clustering
(RPC). The RPC ensures that the data remains synchronized and/or consistent
among all network functions (i.e., the active network function, the standby network
function and the spare network function) in real-time or near-real-time.
[0075] Further, at step [408], the method comprises monitoring, by a monitoring
unit [306] at the primary network function, a total transactions per second (TPS)
value associated with the update. The TPS relates to a transaction speed of each
network function, i.e., a measure of the maximum number of transactions each
network function can process in a second. The monitoring unit monitors the TPS
value of the primary network function associated with each update received through
the input messages.
[0076] Furthermore, at step [410], the method comprises determining, by a
determination unit [308] at the primary network function, a positive breach
condition associated with the update in an event the TPS value associated with the
update crosses a preconfigured TPS threshold value. The positive breach condition
may be a condition where the primary network function, i.e., the active network
function, may replicate/duplicate the updates associated with the input message at a
rate that breaches the preconfigured TPS threshold value, resulting in the overload
condition on the primary network function. Further, the preconfigured TPS
threshold value is at least one of a user defined TPS threshold value, and a
dynamically defined TPS threshold value.
[0077] Thereafter, at step [412], a control unit [310] initiates, at the primary
network function, a staggering logic to handle the overload condition associated
with the plurality of input messages, based on the positive breach condition,
wherein the staggering logic comprises a rate limiter instance for duplicating the
updates at a predefined rate. Further, in an implementation, the rate limiter is
configured to replicate/duplicate the updates associated with the input message at a
default rate if the preconfigured TPS threshold value is not breached. Further, the
rate limiter defines the rate at which the network function handles the
replication/duplication of the updates associated with the received input message.
Further, in another implementation, if the preconfigured TPS threshold value is
breached then the replication/duplication of the updates associated with the input
message is allowed at the rate that may be defined by the rate limiter depending
upon the updates.
[0078] It is to be noted that the predefined rate will be less than the TPS threshold
value, to ensure that the transactions are always processed at a rate which does not
result in overloading of the network functions.
[0079] Furthermore, in an implementation, the preconfigured TPS threshold value
of the primary network function, i.e., the active network function, may be 1000. Also,
the preconfigured TPS threshold value is not limited to the mentioned value and
may vary. If fewer than 1000 updates, associated with the received input message, are
replicated/duplicated from the primary network function, i.e., the active network
function, to the secondary network function, i.e., the standby network function, and
the tertiary network function, i.e., the spare network function, then the
replication/duplication of updates is handled at a pre-determined value, i.e., the
default rate. Therefore, the rate limit of replication/duplication defined in the rate
limiter, in the mentioned implementation, may be any integer value less than 1000.
[0080] Moreover, the updates associated with the plurality of input messages are
duplicated, at the secondary network function and the tertiary network function, at
a default rate in an event the TPS value associated with the updates fails to cross
the preconfigured TPS threshold value.
[0081] Thereafter, at step [414], the method terminates.
[0082] Referring to FIG. 5, an exemplary process flow diagram for handling an
overload condition in a network, in accordance with an exemplary implementation of
the present disclosure, is illustrated.
[0083] At step [502], the process starts, wherein a request (i.e., a request which
updates the data) is received at a primary network function (i.e., an active network
function) from a user equipment. The request comprises input messages which
comprise changes to be made in the data on the active network function, which
results in updated data, which is further duplicated to secondary and tertiary
network functions for keeping the data synchronized on all NFs.
[0084] Next, at step [504], the primary network function initiates the process of
duplication/replication of the updates at a secondary network function (i.e., a
standby network function) and a tertiary network function (i.e., a spare network
function).
[0085] Further, at step [506], the primary network function determines whether a
transaction per second (TPS) value of duplication/replication of the current update
request (i.e., an update associated with an input message) is breaching a staggering
TPS value (i.e., a preconfigured TPS threshold value).
[0086] At step [508], if the TPS value of the duplication/replication of the current
update request (i.e., an update associated with an input message) breaches the
staggering TPS value (i.e., a preconfigured TPS threshold value), then a staggering
logic is initiated to handle an overload condition associated with the received
request from the user equipment.
[0087] Furthermore, at step [510], if the TPS value of the duplication/replication of
the current update request (i.e., an update associated with an input message) does
not breach the staggering TPS value (i.e., a preconfigured TPS threshold value),
then the data duplication/replication may continue at a default rate.
[0088] Thereafter, at step [512], the process terminates.
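The FIG. 5 decision flow can be condensed into a small illustrative function. The names below are assumptions, and this is a sketch of the decision rather than the claimed implementation:

```python
# Hypothetical condensation of the FIG. 5 flow: compare the observed
# duplication TPS against the staggering TPS and choose an action.

def handle_update_request(current_tps, staggering_tps=1000):
    """Return the action for one duplication attempt (steps 506-510)."""
    if current_tps > staggering_tps:    # step 508: breach, initiate staggering
        return "initiate-staggering-logic"
    return "duplicate-at-default-rate"  # step 510: no breach

print(handle_update_request(current_tps=1500))  # initiate-staggering-logic
print(handle_update_request(current_tps=400))   # duplicate-at-default-rate
```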
[0089] The present disclosure further discloses a non-transitory computer readable
storage medium storing one or more instructions for handling an overload condition
in a network. The instructions include executable code which, when executed by
one or more units of a system [300], causes a transceiver unit [302] of the system
[300] to receive, at a primary network function from a User Equipment (UE), a
plurality of input messages associated with the network. Further, the instructions
include executable code which, when executed, causes a processing unit [304] of
the system [300] to duplicate, from the primary network function to a secondary
network function and a tertiary network function, an update associated with the
plurality of input messages. Further, the executable code, when executed,
causes a monitoring unit [306] of the system [300] to monitor, at the primary
network function, a total transactions per second (TPS) value associated with the
update. Furthermore, the executable code, when executed, causes a
determination unit [308] of the system [300] to determine, at the primary network
function, a positive breach condition associated with the update in an event the TPS
value associated with the update crosses a preconfigured TPS threshold value.
Moreover, the executable code, when executed, causes a control unit [310] of
the system [300] to initiate, at the primary network function, a staggering logic to
handle the overload condition associated with the plurality of input messages, based
on the positive breach condition, wherein the staggering logic comprises a rate
limiter instance to duplicate the update at a predefined rate.
[0090] As is evident from the above, the present disclosure provides a technically
advanced solution for handling an overload condition in a network. The present
solution prevents overloading of an active network function. An example of the
network function may be a Policy Control Function (PCF). Further, the present
solution improves data integrity by introducing delays and staggering the data
writing process. Also, the present solution provides efficient resource utilization by
controlling the rate of writing. Furthermore, the present solution reduces network
congestion during bulk replication. Moreover, the present solution increases the
level of resilience in the system and determines the rate of replication in the network
function.
[0091] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.
[0092] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We Claim:
1. A method [400] for handling an overload condition in a network, the method comprising:
- receiving, by a transceiver unit [302] at a primary network function from
a User Equipment, a plurality of input messages associated with the
network;
- duplicating, by a processing unit [304] from the primary network
function to a secondary network function and a tertiary network
function, an update associated with the plurality of input messages;
- monitoring, by a monitoring unit [306] at the primary network function,
a total transactions per second (TPS) value associated with the update;
- determining, by a determination unit [308] at the primary network
function, a positive breach condition associated with the update in an
event the TPS value associated with the update crosses a preconfigured
TPS threshold value; and
- initiating, by a control unit [310] at the primary network function, a
staggering logic to handle the overload condition associated with the
plurality of input messages, based on the positive breach condition,
wherein the staggering logic comprises a rate limiter instance for
duplicating the update at a predefined rate.
2. The method [400] as claimed in claim 1, wherein each input message from the plurality of input messages comprises at least one of a requesting task, an executing task, and a delivering service task.
3. The method [400] as claimed in claim 1, wherein the primary network
function corresponds to an active network function, the secondary network function corresponds to a standby network function, and the tertiary network function corresponds to a spare network function.
4. The method [400] as claimed in claim 1, wherein the preconfigured TPS
threshold value is at least one of a user defined TPS threshold value, and a dynamically defined TPS threshold value.
5. The method [400] as claimed in claim 1, wherein the update associated with the plurality of input messages is duplicated, at the secondary network function and the tertiary network function, at a default rate in an event the TPS value associated with the update fails to cross the preconfigured TPS threshold value.
6. A system [300] for handling an overload condition in a network, the system
comprising:
- a transceiver unit [302], wherein the transceiver unit is configured to:
o receive, at a primary network function from a User Equipment
(UE), a plurality of input messages associated with the network;
- a processing unit [304] connected to at least the transceiver unit [302],
wherein the processing unit [304] is configured to:
o duplicate, from the primary network function to a secondary
network function and a tertiary network function, an update
associated with the plurality of input messages;
- a monitoring unit [306] connected to at least the processing unit [304],
wherein the monitoring unit [306] is configured to:
o monitor, at the primary network function, a total transactions per
second (TPS) value associated with the update;
- a determination unit [308] connected to at least the monitoring unit
[306], wherein the determination unit [308] is configured to:
o determine, at the primary network function, a positive breach
condition associated with the update in an event the TPS value
associated with the update crosses a preconfigured TPS
threshold value; and
- a control unit [310] connected to at least the determination unit [308],
wherein the control unit [310] is configured to:
o initiate, at the primary network function, a staggering logic to
handle the overload condition associated with the plurality of
input messages, based on the positive breach condition,
wherein the staggering logic comprises a rate limiter instance to
duplicate the update at a predefined rate.
7. The system [300] as claimed in claim 6, wherein each input message from the plurality of input messages comprises at least one of a requesting task, an executing task, and a delivering service task.
8. The system [300] as claimed in claim 6, wherein the primary network
function corresponds to an active network function, the secondary network function corresponds to a standby network function, and the tertiary network function corresponds to a spare network function.
9. The system [300] as claimed in claim 6, wherein the preconfigured TPS
threshold value is at least one of a user defined TPS threshold value, and a dynamically defined TPS threshold value.
10. The system [300] as claimed in claim 6, wherein the update associated with the plurality of input messages is duplicated, at the secondary network function and the tertiary network function, at a default rate in an event the TPS value associated with the update fails to cross the preconfigured TPS threshold value.
Dated this the 8th Day of September, 2023