Abstract: The present disclosure relates to a method and a system for scaling up network nodes. The disclosure encompasses: receiving, by a receiving unit [102], a current load data associated with each of a plurality of network nodes; predicting, by a processing unit [108] via a trained model [206], a load threshold value for each of the plurality of network nodes; comparing, by a comparing unit [104], the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes; and alerting, by an alerting unit [106], a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes. [FIG. 3]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
& THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR SCALING UP NETWORK NODES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR SCALING UP NETWORK NODES
FIELD OF THE DISCLOSURE
[0001] The present disclosure relates generally to the field of wireless
communication systems. More particularly, the present disclosure relates to methods and systems for scaling up network nodes to handle overload conditions.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past
few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] In the prior art, managing network scalability in 5G networks,
particularly for Service Communication Proxies (SCPs), presents several challenges. As the number of 5G subscribers increases or the service patterns of existing subscribers change, SCP proxies may begin to experience higher traffic loads. Initially, traffic distribution optimizations are performed to ensure that the load is evenly distributed across SCP proxies. However, there may come a point where all proxies are operating at maximum capacity, leading to potential service degradation and impacting user experience. A significant problem in the existing techniques is the lack of predictive mechanisms to anticipate and manage these overload conditions. Traditional methods rely on reactive approaches, where scaling decisions are made only after the network is already experiencing congestion. This can lead to delays in scaling up the network infrastructure, resulting in reduced service quality and potential downtime. Furthermore, the prior art lacks an intelligent system that can provide recommendations for scaling out SCP proxies, taking into account factors such as the optimal site for deployment and the types of Network Functions (NFs) that should be supported by the new proxies. The absence of a proactive and intelligent scaling approach limits the efficiency and effectiveness of network management in 5G systems.
[0005] Thus, in order to improve radio access network capacity and
performance, there exists an imperative need in the art to provide methods and systems for scaling up network nodes that efficiently manage the overload conditions at the network.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which are satisfied by at least one implementation disclosed herein, are listed herein below.
[0007] It is an object of the present disclosure to provide a system and method
for scaling up network nodes.
[0008] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that proactively manages network load by predicting future overload conditions using historical data trends.
[0009] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that utilize Artificial Intelligence (AI) and Machine Learning (ML) to notify network administrators of the need to scale out before reaching critical load levels, ensuring uninterrupted service quality.
[0010] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that offer consent-based scale-out decisions, allowing for more controlled and deliberate expansion of network resources.
[0011] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that generate specific site and Network Function (NF) type recommendations for the new scale-out SCP Proxies, optimizing resource distribution and efficiency.
[0012] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that enable a seamless and dynamic adaptation of the network infrastructure in response to the evolving demands of 5G service patterns and subscriber behaviours.
[0013] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that minimize the latency between detecting potential overload conditions and initiating scale-out actions, thereby reducing the likelihood of service degradation or downtime.
[0014] It is another object of the present disclosure to provide a system and
method for scaling up network nodes that incorporate a user-friendly notification system, ensuring that critical information regarding load thresholds and scale-out recommendations is communicated efficiently to the Network Management System (NMS).
[0015] It is yet another object of the present disclosure to provide a system and
method for scaling up network nodes that systematically store and utilize load data, enabling a more intelligent and data-driven approach to network management and scaling decisions.
SUMMARY
[0016] This section is provided to introduce certain implementations of the
present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0017] An aspect of the present disclosure provides a method for scaling up network nodes. The method includes receiving, by a receiving unit, a current load data associated with each of a plurality of network nodes. The method further includes predicting, by a processing unit using a trained model, a load threshold value for each of the plurality of network nodes. The method further includes comparing, by a comparing unit, the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes. Thereafter, the method includes alerting, by an alerting unit, a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes.
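The claimed method flow (receive current load data, predict a per-node threshold via a trained model, compare, and alert the NMS on a breach) can be sketched as follows. The function names, data shapes, and the simple heuristic standing in for the trained model are illustrative assumptions for explanation only, not part of the claimed subject matter.

```python
# Illustrative sketch of the claimed method flow: receive current load,
# predict a per-node threshold, compare, and alert the NMS on a breach.
# All names and data shapes here are assumptions for illustration only.

def predict_threshold(history):
    """Stand-in for the trained model: a simple heuristic threshold
    set slightly above the historical peak load."""
    return max(history) * 1.1 if history else float("inf")

def scale_up_check(current_load, history_by_node):
    """Return the list of node ids whose current load breaches the
    predicted threshold, i.e. nodes the NMS should scale up."""
    alerts = []
    for node_id, load in current_load.items():
        threshold = predict_threshold(history_by_node.get(node_id, []))
        if load > threshold:          # overload condition forecast
            alerts.append(node_id)    # alerting unit notifies the NMS
    return alerts

# Example: node "scp-2" exceeds its predicted threshold.
history = {"scp-1": [40, 50, 60], "scp-2": [40, 45, 50]}
current = {"scp-1": 55, "scp-2": 70}
print(scale_up_check(current, history))  # -> ["scp-2"]
```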
[0018] In an aspect, each of the plurality of network nodes is a Service
Communication Proxy (SCP) of a 5th Generation (5G) network.
[0019] In an aspect, the trained model is trained based on a historical set of data associated with the plurality of network nodes, wherein the historical set of data comprises past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events at the plurality of network nodes.
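One way a model of the kind described above could turn past traffic load patterns into a forward-looking load estimate is a simple linear-trend extrapolation. The pure-Python least-squares fit below is a hedged stand-in for the trained model, not the actual model of the disclosure; the sample history values are invented for illustration.

```python
# Minimal sketch of deriving a forward load estimate from a historical
# set of data (past traffic load patterns); the linear-trend fit below
# is one possible stand-in for the trained model, not the actual model.

def fit_trend(samples):
    """Ordinary least-squares fit of load versus time index."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast_load(samples, steps_ahead):
    """Extrapolate the fitted trend to forecast a future load level."""
    slope, intercept = fit_trend(samples)
    return intercept + slope * (len(samples) - 1 + steps_ahead)

# Past traffic load pattern rising by ~10 units per interval.
history = [10, 20, 30, 40, 50]
print(forecast_load(history, steps_ahead=2))  # -> 70.0
```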
[0020] In an aspect, the trained model is an artificial intelligence (AI) based
model.
[0021] In an aspect, the current load data associated with the plurality of network nodes comprises information about increases and decreases of traffic at the plurality of network nodes, information about peak traffic data and low traffic data at the plurality of network nodes in the past, a historical trend of traffic at the plurality of network nodes, and reasons and causes of increases and decreases of traffic at the plurality of network nodes.
[0022] In an aspect, the method comprises notifying, by the processing unit, network node scale-up data to the NMS, wherein the network node scale-up data comprises site details, network function (NF) type details, and a number of required network nodes.
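The network node scale-up data described in this aspect might be represented as a small structured record, serialised for transmission to the NMS. The field names and example values below are illustrative assumptions rather than fields defined by the disclosure.

```python
# Hypothetical shape of the network node scale-up data notified to the
# NMS: site details, NF type details, and a number of required nodes.
# Field names are illustrative assumptions, not defined by the disclosure.
from dataclasses import dataclass, asdict

@dataclass
class ScaleUpData:
    site: str            # recommended deployment site
    nf_type: str         # NF type the new SCP proxies should support
    required_nodes: int  # number of additional network nodes

def notify_nms(data: ScaleUpData) -> dict:
    """Serialise the recommendation as it might be sent to the NMS."""
    return asdict(data)

print(notify_nms(ScaleUpData(site="site-A", nf_type="SMF", required_nodes=2)))
# -> {'site': 'site-A', 'nf_type': 'SMF', 'required_nodes': 2}
```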
[0023] In an aspect, the scale-up corresponds to addition of at least one SCP
node in the 5G network.
[0024] Another aspect of the present disclosure provides a system for scaling
up network nodes. The system includes a receiving unit configured to receive a current load data associated with each of a plurality of network nodes. The system further includes a processing unit configured to predict, via a trained model, a load threshold value for each of the plurality of network nodes. The system further includes a comparing unit configured to compare the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes. Further, the system includes an alerting unit configured to alert a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes.
[0025] Yet another aspect of the present disclosure provides a non-transitory computer-readable storage medium storing instructions for scaling up network nodes, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a receiving unit to receive a current load data associated with each of a plurality of network nodes; a processing unit to predict, via a trained model, a load threshold value for each of the plurality of network nodes; a comparing unit to compare the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes; and an alerting unit to alert a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes.
BRIEF DESCRIPTION OF DRAWINGS
[0026] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary implementations of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0027] FIG. 1A illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture.
[0028] FIG. 1B illustrates an exemplary block diagram of a system with
functional units and modules, in accordance with exemplary implementations of the present disclosure.
[0029] FIG. 2 illustrates an exemplary block diagram of an architecture for implementation of a system for scaling up network nodes in a wireless communication network, in accordance with exemplary implementations of the present disclosure.
[0030] FIG. 3 illustrates an exemplary method flow diagram indicating the process of scaling up network nodes, in accordance with exemplary implementations of the present disclosure.
[0031] FIG. 4 illustrates an exemplary block diagram of a computing device upon which an embodiment of the present disclosure may be implemented.
[0032] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0033] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of implementations of the present disclosure. It will be apparent, however, that implementations of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example implementations of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0034] The ensuing description provides exemplary implementations only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary implementations will provide those skilled in the art with an enabling description for implementing an exemplary implementation. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0035] It should be noted that the terms "mobile device", "user equipment", "user device", "communication device", "device" and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described implementations. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0036] Specific details are given in the following description to provide a thorough understanding of the implementations. However, it will be understood by one of ordinary skill in the art that the implementations may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the implementations in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the implementations.
[0037] Also, it is noted that individual implementations may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in the figures.
[0038] The word "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as "exemplary" and/or "demonstrative" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms "includes," "has," "contains," and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term "comprising" as an open transition word, without precluding any additional or other elements.
[0039] As used herein, an "electronic device", or "portable electronic device", or "user device" or "communication device" or "user equipment" or "device" refers to any electrical, electronic, electromechanical and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery and an input-means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0040] Further, the user device may also comprise a "processor" or "processing unit", wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processor (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0041] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0042] Radio Access Technology (RAT) refers to the technology used by mobile devices/user equipment (UE) to connect to a cellular network. It refers to the specific protocols and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to different types of networks and provide optimal performance based on the available network resources.
[0043] As used herein, a Service Communication Proxy (SCP) is a decentralized solution composed of a control plane and a data plane. This solution is deployed alongside 5G Network Functions (NFs) to provide routing control, resiliency, and observability to the core network. In addition, the SCP is configured to perform message forwarding and routing to a destination NF/NF service, message forwarding and routing to a next-hop SCP, communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer Application Programming Interface (API)), load balancing, monitoring, overload control, and the like.
[0044] As discussed in the background section, the current known solutions for managing network scalability in 5G networks, particularly for Service Communication Proxies (SCPs), present several challenges. As the number of 5G subscribers increases or the service patterns of existing subscribers change, SCP proxies may begin to experience higher traffic loads. Initially, traffic distribution optimizations are performed to ensure that the load is evenly distributed across SCP proxies. However, there may come a point where all proxies are operating at maximum capacity, leading to potential service degradation and impacting user experience. A significant problem in the prior art is the lack of predictive mechanisms to anticipate and manage these overload conditions. Traditional methods rely on reactive approaches, where scaling decisions are made only after the network is already experiencing congestion. This can lead to delays in scaling up the network infrastructure, resulting in reduced service quality and potential downtime. Furthermore, the prior art lacks an intelligent system that can provide recommendations for scaling out SCP proxies, taking into account factors such as the optimal site for deployment and the types of Network Functions (NFs) that should be supported by the new proxies. This absence of a proactive and intelligent scaling approach limits the efficiency and effectiveness of network management in 5G systems.
[0045] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by introducing a method that enhances the scalability management of 5G network nodes, particularly Service Communication Proxies (SCPs). The method significantly improves upon the prior art by integrating a predictive mechanism that uses artificial intelligence to anticipate and address overload conditions before they lead to network congestion and service degradation. In the disclosed method, a receiving unit collects current load data from a plurality of network nodes, which include SCPs, to determine real-time network usage and potential stress points. A processing unit, using a trained model, predicts a load threshold value for each network node. The predictive capability of the trained model is a substantial improvement over prior systems as it is based on a historical set of data comprising past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events. This means the system can recognize potential overload conditions much earlier. When the comparing unit assesses the current load data against the load threshold values and identifies a potential overload situation, it does not simply wait for the congestion to occur. Instead, an alerting unit proactively informs the Network Management System (NMS) of the need to scale up the network nodes. This early warning system enables the NMS to implement scale-up measures in a timely fashion, thereby avoiding the reactive delays seen in prior art systems.
[0046] It would be appreciated by a person skilled in the art that the present disclosure provides a solution that transforms the reactive, often delayed response to network overload into a proactive, intelligent, and strategic process. This approach not only improves the user experience by maintaining service quality but also enhances the operational efficiency of network management in 5G systems.
[0047] In an example, there are five virtual proxies or machines available at the network. The present disclosure keeps track of the parameters received at the network to check the possibility of bottleneck conditions at the network. For instance, in one case, these five virtual machines may be on the verge of getting overloaded in the next one month. Thus, the present disclosure may alert the Network Management System (NMS) team to scale up the proxy at the network, such as by increasing the number of proxies at the SCP. This may prevent the adverse impact that may occur after the existing proxies reach the bottleneck condition.
[0048] In another example, the messages received at the plurality of network nodes are analysed by the system using the ML-based model to check if a certain threshold value is reached or crossed at the network node. In case the certain threshold value is not crossed, the scaling up of the network nodes is not required. However, if the certain threshold value is crossed, then an alert may be sent to the NMS team to plan scaling up the network nodes or proxies to efficiently handle the bottleneck conditions at the network. In an example, the threshold may be defined as 85% of the actual capacity of the virtual machines.
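The 85%-of-capacity threshold in the example above can be sketched as a simple check; the capacity and load figures below are illustrative assumptions, not values defined by the disclosure.

```python
# Sketch of the worked example above: a threshold defined as 85% of the
# actual capacity of each virtual machine, with an alert raised when the
# current load crosses it. Capacity figures are illustrative assumptions.

THRESHOLD_RATIO = 0.85

def needs_scale_up(current_load, capacity):
    """True when the load crosses 85% of the machine's actual capacity."""
    return current_load >= THRESHOLD_RATIO * capacity

# A VM with capacity 1000 TPS: 840 TPS is below the 850 TPS threshold,
# while 900 TPS crosses it and would trigger an alert to the NMS team.
print(needs_scale_up(840, 1000))  # -> False
print(needs_scale_up(900, 1000))  # -> True
```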
[0049] In an example, the threshold value is pre-defined by a network operator.
[0050] In another example, the pre-defined threshold can be modified by the
network operator.
[0051] Hereinafter, exemplary implementations of the present disclosure will be described with reference to the accompanying drawings.
[0052] FIG. 1A illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture. As shown in FIG. 1A, the 5GC network architecture [101] includes a user equipment (UE) [101a], a radio access network (RAN) [101b], a 5G Core Network and a Data Network [101p]. The 5G Core Network includes an access and mobility management function (AMF) [101c], a Session Management Function (SMF) [101d], a Service Communication Proxy (SCP) [101e], an Authentication Server Function (AUSF) [101f], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [101g], a Network Slice Selection Function (NSSF) [101h], a Network Exposure Function (NEF) [101i], a Network Repository Function (NRF) [101j], a Policy Control Function (PCF) [101k], a Unified Data Management (UDM) [101l], an application function (AF) [101m], and a User Plane Function (UPF) [101n].
[0053] The User Equipment (UE) [101a] interfaces with the network via the Radio Access Network (RAN) [101b]. The RAN [101b] in the 5G architecture is also called New Radio or NG-RAN, and these terms may be used interchangeably herein. The Radio Access Network (RAN) [101b] is the part of a mobile telecommunications system that connects user equipment (UE) [101a] to the core network (CN) and provides access to different types of networks (e.g., 5G, LTE). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0054] The Access and Mobility Management Function (AMF) [101c] manages connectivity and mobility. When a UE [101a] is active, i.e., it is interacting with the 5G network, e.g., by using data/call functionalities, the AMF [101c] knows and maintains the location of the UE [101a] within the network. The AMF [101c] is configured to maintain the tracking area or registration area of the UE [101a] in case the UE is inactive. The AMF [101c] is configured to communicate with other network functions/elements, such as the Session Management Function (SMF) [101d], to ensure that the UE [101a] is allowed and able to avail the services provided by the network.
[0055] Particularly, the Access and Mobility Management Function (AMF) [101c] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0056] The Session Management Function (SMF) [101d] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0057] The Service Communication Proxy (SCP) [101e] is a network function in the 5G core that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0058] The Authentication Server Function (AUSF) [101f] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0059] The Network Slice Specific Authentication and Authorization Function (NSSAAF) [101g] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0060] The Network Slice Selection Function (NSSF) [101h] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0061] The Network Exposure Function (NEF) [101i] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0062] The Network Repository Function (NRF) [101j] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0063] The Policy Control Function (PCF) [101k] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0064] The Unified Data Management (UDM) [101l] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0065] The Application Function (AF) [101m] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0066] The User Plane Function (UPF) [101n] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0067] The Data Network (DN) [101p] represents external networks or services that users connect to through the mobile network, such as the internet or enterprise networks.
[0068] Referring to FIG. 1B, an exemplary block diagram of a system [100] for scaling up network nodes is shown, in accordance with the exemplary implementations of the present invention. The system [100] comprises a receiving unit [102], a comparing unit [104], an alerting unit [106] and a processing unit [108]. Also, all of the components/units of the system [100] are assumed to be connected to each other unless otherwise indicated below. Also, in FIG. 1B only a few units are shown; however, the system [100] may comprise multiple such units, or the system [100] may comprise any such number of said units as required to implement the features of the present disclosure. Further, in an implementation, the system [100] may be present at a network level to implement the features of the present invention. In an implementation, the system [100] may reside in a server, a network entity or an SCP controller [204].
[0069] The system [100] is configured for scaling up network nodes to handle overload conditions, with the help of the interconnection between the components/units of the system [100].
[0070] In order to monitor the overload conditions and to alert the Network Management System, the receiving unit [102] of the system [100] is configured to receive a current load data of each of a plurality of network nodes. Further, each of the plurality of network nodes is a Service Communication Proxy (SCP) of a 5th Generation (5G) network. The current load data may include, but is not limited to, information associated with existing users and new users, real-time traffic, Transactions Per Second (TPS), key performance indicators, metrics, metadata associated with signals, the traffic volume, the types of services being accessed, and the number of active users at any given time on each of the plurality of network nodes.
[0071] Further, the system [100] comprises the processing unit [108] communicatively coupled to the receiving unit [102]. The processing unit [108] is configured to predict, using a trained model, a load threshold value for each of the plurality of network nodes based on the analysis of a set of data associated with the plurality of network nodes. In an implementation of the present disclosure, the trained model is a machine learning (ML) based model. The machine learning based techniques are used for prediction of overload conditions at the network level based on the possible trends associated with an increase in the number of users accessing the network. The threshold value may be defined based on the analysis of the data using the machine learning model (such as the trained model). In an example, the threshold value may be defined as 90% of the actual capacity at the proxy level. In another example, the threshold level may vary based on the analysis of the key performance indicators associated with received messages, historical data and the like. Thus, the system [100] automatically predicts the overload conditions whenever the system [100] suspects that the threshold level may be crossed. For instance, the trained model may automatically predict the time left, or possible time left, in which the overload conditions may occur based on the current trend and pattern. In an example, the trained model may determine, based on the analysis of the data, that the overload conditions may be reached in the next 15 days; thus, the NMS may take steps to efficiently control such conditions prior to their occurrence.
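The disclosure does not fix a particular forecasting algorithm for this time-left estimate. As a minimal illustrative sketch (the function name, the linear-trend assumption, and the sample values are ours, not part of the specification), the number of sampling intervals remaining before the load trend crosses a 90%-of-capacity threshold could be extrapolated as follows:

```python
def estimate_time_to_overload(load_samples, capacity, threshold_ratio=0.90):
    """Estimate how many sampling intervals remain before the load trend
    crosses the threshold (here 90% of capacity, as in the example above).
    Returns None when the trend is flat or decreasing."""
    threshold = threshold_ratio * capacity
    # Average per-interval growth over the observation window.
    slope = (load_samples[-1] - load_samples[0]) / (len(load_samples) - 1)
    if slope <= 0:
        return None  # no upward trend, so no overload is forecast
    return max((threshold - load_samples[-1]) / slope, 0.0)

# Example: TPS samples trending upward on a node with 1000 TPS capacity.
print(estimate_time_to_overload([700, 740, 780, 820, 860], capacity=1000))  # 1.0
```

A production model would of course use richer features (seasonality, KPI history) than a two-point slope; the sketch only illustrates the trend-extrapolation idea described above.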
[0072] In an implementation of the present disclosure, the set of data associated with the plurality of network nodes comprises information about the increase and decrease of traffic at the plurality of network nodes, information about peak traffic data and low traffic data at the plurality of network nodes in the past, the historical trend of traffic at the plurality of network nodes, and the reasons and causes of increase and decrease of traffic at the plurality of network nodes. The trained model [206] may be trained based on a historical set of data associated with the plurality of network nodes. This historical set of data comprises past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events at the plurality of network nodes. By analysing the historical data, the trained model [206] can accurately predict the load threshold value for each network node.
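The internals of the trained model [206] are left open by the specification. Purely as a stand-in sketch (the rule below, its names, and the safety margin are our assumptions, not a learned model), a per-node threshold could be derived from historical peak loads and past overload events as:

```python
def predict_load_threshold(historical_peak_loads, overload_event_loads,
                           safety_margin=0.10):
    """Illustrative threshold rule: back off by a safety margin from the
    lowest load at which an overload was historically observed, or from
    the historical peak when no overload has occurred."""
    if overload_event_loads:
        reference = min(overload_event_loads)
    else:
        reference = max(historical_peak_loads)
    return reference * (1.0 - safety_margin)

# Example: overloads historically began around 1000 TPS on this node.
print(predict_load_threshold([400, 650, 900], [1000, 1100]))  # 900.0
```

A trained ML model would replace this fixed rule with one fitted to the traffic patterns and overload events enumerated above; the interface (historical data in, threshold out) is the part the sketch means to show.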
[0073] Further, the system [100] comprises the comparing unit [104] communicatively coupled to the receiving unit [102] and the processing unit [108]. The comparing unit [104] is configured to compare the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions on the plurality of network nodes. In an example, by performing the comparison, the comparing unit [104] is able to predict when a node is approaching a state where demand may exceed capacity, i.e., an overload condition. It would be appreciated by the person skilled in the art that this foresight allows initiation of preventive measures, such as a scale up of the network nodes, to handle the increased load efficiently.
[0074] Further, the system comprises the alerting unit [106] communicatively coupled to the comparing unit [104]. The alerting unit [106] is configured to alert a Network Management System (NMS) to scale up the network nodes in an event when the current load data exceeds the corresponding load threshold value of the plurality of network nodes.
[0075] In an implementation of the present disclosure, the processing unit [108] is configured to notify the network node scale-up data to the NMS, wherein the network node scale-up data comprises node deployment site details, network function (NF) type details, and number of required network nodes.
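For illustration, the network node scale-up data could be serialized into a notification payload along these lines (the JSON field names and example values are our assumption; the disclosure does not prescribe a schema):

```python
import json

def build_scale_up_notification(site_details, nf_type, required_nodes):
    """Assemble the network node scale-up data described above as a
    JSON payload for the NMS."""
    return json.dumps({
        "site": site_details,              # node deployment site details
        "nf_type": nf_type,                # network function (NF) type details
        "required_nodes": required_nodes,  # number of required network nodes
    })

# Hypothetical example values.
print(build_scale_up_notification("site-01", "SCP", 2))
```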
[0076] In an implementation of the present disclosure, the NMS may take the required steps to efficiently handle the overload condition at the network. In an example, the NMS may plan to scale up the proxies or virtual machines when the plurality of proxies has overall crossed the defined threshold level in terms of their capacity. The alerting to the NMS may be done via a social media based platform, via email, via Short Message Service (SMS), and the like.
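The fan-out of such an alert over the configured channels could be sketched as below; the channel names and the callable-sender interface are our simplification, with the real email/SMS/platform transports out of scope:

```python
def alert_nms(channels, message):
    """Send an overload alert over every configured channel (email, SMS,
    a messaging platform, and the like); returns the channels used."""
    delivered = []
    for name, send in channels.items():
        send(message)  # in practice, an email/SMS/platform API call
        delivered.append(name)
    return delivered

# Stub channels that simply record what they "send".
sent = []
channels = {"email": sent.append, "sms": sent.append}
print(alert_nms(channels, "Load breached predicted threshold"))  # ['email', 'sms']
```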
[0077] Referring to FIG. 2, an exemplary block diagram of an architecture for implementation of a system for scaling up network nodes to handle overload in a wireless communication network is shown, in accordance with exemplary implementations of the present disclosure. The system architecture [200] comprises one or more SCP proxies (such as SCP proxy1 [202A] and/or SCP proxy2 [202B]), an SCP controller [204], a trained model [206], one or more network function (NF) consumers (such as NF consumer1 [208A] and/or NF consumer2 [208B]), one or more NFs (such as NF-A [210A], NF-B [210B], NF-C [210C], NF-D [210D]), and notification targets [212]. Also, all of the components/units of the system architecture [200] are assumed to be connected to each other unless otherwise indicated below. Also, in FIG. 2 only a few units are shown; however, the system architecture [200] may comprise multiple such units or the system architecture [200] may comprise any such number of said units, as required to implement the features of the present disclosure.
[0078] In operation, the SCP Proxy (such as SCP proxy1 [202A] and/or SCP proxy2 [202B]) may first determine the current load at a regular interval. Following the determination of the current load, the SCP controller [204] may receive the determined current load. The current load data may correspond to the network load data of the one or more network functions (such as NF-A [210A], NF-B [210B], NF-C [210C], NF-D [210D]). Examples of the one or more network functions (such as NF-A [210A], NF-B [210B], NF-C [210C], NF-D [210D]) may include, but are not limited to, various network functions of the 5G core network. Examples of NFs include, but are not limited to, the PCF, the charging function (CHF), the AMF, the SMF, the UDM, and the NSSF. Thereafter, the trained model [206] may retrieve the current load data from the at least one SCP controller [204] at intervals that may be set based on at least one of the network operator's policy or other network requirements such as traffic conditions, historical data patterns, expected service demand, or planned network maintenance activities.
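The regular-interval load collection described above might look like the following sketch, where each proxy's load is read through an injected callable (a stand-in of our own for whatever reporting interface the SCP proxies actually expose):

```python
import time

def poll_proxy_loads(proxies, interval_s, cycles):
    """Collect a load snapshot from each SCP proxy once per interval.
    `proxies` maps a proxy name to a zero-argument callable that returns
    its current load."""
    snapshots = []
    for _ in range(cycles):
        snapshots.append({name: read() for name, read in proxies.items()})
        time.sleep(interval_s)
    return snapshots

# Stubbed proxies reporting fixed TPS values.
stubs = {"scp-proxy1": lambda: 950, "scp-proxy2": lambda: 430}
print(poll_proxy_loads(stubs, interval_s=0.0, cycles=2))
```

In a deployment, the interval would follow the operator policy or network requirements mentioned above rather than a fixed argument.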
[0079] Once the trained model [206] acquires the current load data, it may forecast a load threshold value for potential overload situations. The forecast may be performed by analysing the current load data against historical traffic patterns and trends to establish a threshold that indicates the maximum load that the SCP Proxy (such as SCP proxy1 [202A] and/or SCP proxy2 [202B]) can manage before it is deemed at risk of overload. The trained model [206] then communicates the predictive threshold data to the SCP controller [204]. The SCP controller [204] may then compare the current load with the corresponding predicted threshold.
[0080] In response to a detected breach, the SCP controller [204] may generate an alert that may be transmitted to the Network Management System (NMS). Subsequently, the SCP controller [204] may generate a recommendation for a scale-out action. The recommendation may include detailed specifications, including, for example, the site location and the types of network functions (NF) that will be supported by the additional SCP Proxy nodes required to manage the excess load. The generated recommendation and the alert are transmitted to the Network Management System (NMS) through the notification targets [212]. For example, the recommendation includes the specific site where the new SCP Proxy should be placed and specifies that it should support NF-C [210C] type functions, which are currently under heavy demand. The NMS receives this recommendation through the notification targets [212], thereby allowing swift action to be taken to scale out and balance the network load effectively. A notification target refers to an entity or a group of entities (such as a dedicated person or a group of persons) that facilitates taking the scale-out decision. The entity or group of entities would be notified, via email, SMS and the like, of the breach of the current load data. Further, the scale out may be shown as an alarm on the NMS, where it would be visible to the entity or the group of entities.
[0081] It would be appreciated by the person skilled in the art that the system architecture [200] ensures that, as the network faces varying loads, proactive measures are taken to scale the network resources accordingly to maintain an optimal user experience by preventing network overload and managing the distribution of traffic across the network infrastructure.
[0082] Referring to FIG. 3, an exemplary method flow diagram [300] for scaling up network nodes is shown, in accordance with exemplary implementations of the present invention. In an implementation, the method [300] is performed by the system [100], the system architecture [200] or the SCP controller [204]. As shown in FIG. 3, the method [300] starts at step [302].
[0083] At step [304], the method [300] as disclosed by the present disclosure comprises receiving, by a receiving unit [102], a current load data associated with each of a plurality of network nodes. The current load data may include, but is not limited to, information associated with existing users and new users, real-time traffic, Transactions Per Second (TPS), key performance indicators, metrics, metadata associated with signals, the traffic volume, the types of services being accessed, and the number of active users at any given time on each of the plurality of network nodes.
[0084] Next, at step [306], the method [300] as disclosed by the present disclosure comprises predicting, by a processing unit [108] using a trained model [206], a load threshold value for each of the plurality of network nodes. The trained model [206] may employ techniques including, but not limited to, artificial intelligence and machine learning techniques. The trained model [206] may be trained based on a historical set of data associated with the plurality of network nodes. The historical set of data comprises past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events at the plurality of network nodes. By analyzing the historical data, the trained model [206] can accurately predict the load threshold value for each network node. The predicted load threshold value may represent the point at which the network node is expected to become overloaded.
[0085] Next, at step [308], the method [300] as disclosed by the present disclosure comprises comparing, by a comparing unit [104], the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes. By continuously comparing the current load data with the corresponding predicted threshold, the comparing unit [104] can identify instances where a network node is approaching or exceeding its load capacity limits. This early detection of overload conditions allows for timely intervention and scaling actions to be taken, thereby ensuring the smooth functioning of the network and maintaining an optimal user experience. In an example, SCP Proxy1 [202A] has a predicted load threshold of 1000 Transactions Per Second (TPS). The SCP controller [204] receives the current load data, which indicates that SCP Proxy1 [202A] is currently handling 950 TPS. As the SCP controller [204] continues to monitor the load, it observes an increase to 1020 TPS, surpassing the threshold. This comparison, performed by the comparing unit [104], identifies an overload condition on SCP Proxy1 [202A]. The SCP controller [204] can then alert the Network Management System (NMS) to take appropriate scaling actions, such as adding additional SCP proxies or redistributing traffic, to prevent potential service degradation and maintain a seamless user experience.
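The numeric example above can be restated as a short check (the threshold constant and function name are ours; the breach condition simply mirrors step [308]):

```python
def check_overload(current_tps, threshold_tps):
    """True when the current load breaches the predicted threshold."""
    return current_tps > threshold_tps

THRESHOLD_TPS = 1000  # predicted load threshold for SCP Proxy1 [202A]
print(check_overload(950, THRESHOLD_TPS))   # False: 950 TPS is within capacity
print(check_overload(1020, THRESHOLD_TPS))  # True: 1020 TPS, alert the NMS
```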
[0086] Next, at step [310], the method [300] as disclosed by the present disclosure comprises alerting, by an alerting unit [106], a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes. For example, if the current load data on SCP Proxy1 [202A] exceeds its load threshold value, the alerting unit [106] sends an alert to the Network Management System (NMS). The NMS then initiates actions to add additional SCP proxies or enhance the capacity of existing proxies to manage the increased traffic, ensuring uninterrupted service for users.
[0087] Thereafter, the method terminates at step [312].
[0088] In an exemplary implementation of the present disclosure, each of the plurality of network nodes is the SCP of the 5th Generation (5G) network.
[0089] In an exemplary implementation of the present disclosure, the trained model is the ML based model.
[0090] In an exemplary implementation of the present disclosure, the set of data associated with the plurality of network nodes comprises information about the increase and decrease of traffic at the plurality of network nodes, information about peak traffic data and low traffic data at the plurality of network nodes in the past, the historical trend of traffic at the plurality of network nodes, and the reasons and causes of increase and decrease of traffic at the plurality of network nodes.
[0091] In an exemplary implementation of the present disclosure, the method [300] further comprises notifying, by the processing unit [108], the network node scale-up data to the NMS, wherein the network node scale-up data comprises site details, network function (NF) type details, and number of required network nodes.
[0092] In an exemplary implementation of the present disclosure, the scale-up
of the network nodes corresponds to the addition of at least one SCP node in the 5G network.
[0093] In an example, the Network Management System may include entities associated with the management of the network issues, such as network managing users, managing teams, network handling teams, platforms and the like.
[0094] In an exemplary implementation of the present disclosure, the alerting may include sending a notification on a device of the network management team. The network management team, after receiving the notification, may plan to scale up the virtual machines, nodes or proxies in order to meet the future demands to access the network.
[0095] As is evident from the above, the present disclosure provides a technically advanced solution for handling overload conditions and accordingly notifying or alerting the NMS. Thus, the present disclosure overall efficiently monitors the user experience, reliably and seamlessly manages the handling of overload conditions, meets the public demands, and the like. Further, the present disclosure, using AI at the SCP level, predicts future overload conditions beforehand based on historical trends and thus provides scale-out notification and consent-based scale-out to meet the requirement of new users. Also, the present disclosure provides site and supporting NF recommendations for scaling out the SCP Proxy.
[0096] FIG. 4 illustrates an exemplary block diagram of a computing system [400] upon which an embodiment of the present disclosure may be implemented. In an implementation, the computing device implements the method for scaling up network nodes using the system [100]. In another implementation, the computing device itself implements the method for scaling up network nodes in the 5G core (5GC) network by using one or more units configured within the computing device, wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0097] The computer system [400] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of the computer system [400] include, but are not limited to, personal computers, laptops, tablets, smartphones, user equipment (UE), servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, the computer system [400] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, showcasing their versatility in various technological applications.
[0098] The computer system [400] may include a bus [402] or other communication mechanism for communicating information, and a processor [404] coupled with the bus [402] for processing information. The processor [404] may be, for example, a general-purpose microprocessor. The computer system [400] may also include a main memory [406], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [402] for storing information and instructions to be executed by the processor [404]. The main memory [406] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [404]. Such instructions, when stored in non-transitory storage media accessible to the processor [404], render the computer system [400] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system [400] further includes a read only memory (ROM) [408] or other static storage device coupled to the bus [402] for storing static information and instructions for the processor [404].
[0099] A storage device [410], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [402] for storing information and instructions. The computer system [400] may be coupled via the bus [402] to a display [412], such as a cathode ray tube (CRT), for displaying information to a computer user. An input device [414], including alphanumeric and other keys, may be coupled to the bus [402] for communicating information and command selections to the processor [404]. Another type of user input device may be a cursor control [416], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [404], and for controlling cursor movement on the display [412]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[00100] The computer system [400] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computer system [400], causes or programs the computer system [400] to be a special-purpose machine. According to one embodiment, the techniques herein are performed by the computer system [400] in response to the processor [404] executing one or more sequences of one or more instructions contained in the main memory [406]. Such instructions may be read into the main memory [406] from another storage medium, such as the storage device [410]. Execution of the sequences of instructions contained in the main memory [406] causes the processor [404] to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
[00101] The computer system [400] also may include a communication interface [418] coupled to the bus [402]. The communication interface [418] provides a two-way data communication coupling to a network link [420] that is connected to a local network [422]. For example, the communication interface [418] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [418] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [418] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[00102] The computer system [400] can send messages and receive data, including program code, through the network(s), the network link [420] and the communication interface [418]. In the Internet example, a server [430] might transmit a requested code for an application program through the Internet [428], the Internet Service Provider (ISP) [426], the local network [422] and the communication interface [418]. The received code may be executed by the processor [404] as it is received, and/or stored in the storage device [410], or other non-volatile storage for later execution.
[00103] An aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for scaling up network nodes, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a receiving unit [102] to receive a current load data associated with each of a plurality of network nodes; a processing unit [108] to predict, via a trained model [206], a load threshold value for each of the plurality of network nodes; a comparing unit [104] to compare the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes; and an alerting unit [106] to alert a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes.
[00104] The present disclosure aims to overcome the problems in this field of
technology by introducing a method and system that enhances the scalability management of 5G network nodes, particularly Service Communication Proxies (SCPs). The method significantly improves upon the prior art by integrating a predictive mechanism that uses artificial intelligence to anticipate and address overload conditions before they lead to network congestion and service degradation. In the disclosed method, a receiving unit collects current load data from a plurality of network nodes, which include SCPs for determining real-time network usage and potential stress points. A processing unit, using a trained model, predicts a load threshold value for each network node. The predictive capability of the trained model is a substantial improvement over prior systems as it is based on a historical set of data comprising past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events. This means the system can recognize potential overload conditions much earlier. When the comparing unit assesses the current load data against the load threshold values and identifies a potential overload situation, it does not simply wait for the congestion to occur. Instead, an alerting unit proactively informs the Network Management System (NMS) of the need to scale up the network nodes. This early warning system enables the NMS to implement scale-up measures in a timely fashion, thereby avoiding the reactive delays seen in prior art systems.
[00105] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[00106] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim
1. A method for scaling up network nodes, the method comprising:
receiving, by a receiving unit [102], a current load data associated with each of a plurality of network nodes;
predicting, by a processing unit [108] using a trained model [206], a load threshold value for each of the plurality of network nodes;
comparing, by a comparing unit [104], the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes; and
alerting, by an alerting unit [106], a Network Management System (NMS) to scale up the network nodes in an event the current load data breaches the corresponding load threshold value of the plurality of network nodes.
2. The method as claimed in claim 1, wherein each of the plurality of network nodes is a Service Communication Proxy (SCP) of a 5th Generation (5G) network.
3. The method as claimed in claim 1, wherein the trained model [206] is trained based on a historical set of data associated with the plurality of network nodes, the historical set of data comprises past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events at the plurality of network nodes.
4. The method as claimed in claim 1, wherein the trained model [206] is an artificial intelligence (AI) based model.
5. The method as claimed in claim 1, wherein the current load data associated with the plurality of network nodes comprises information indicative of increase and decrease of traffic at the plurality of network nodes, information indicative of peak traffic data and low traffic data at the plurality of network nodes in the past, historical trend of traffic at the plurality of network nodes, and reasons and causes of increase and decrease of traffic at the plurality of network nodes.
6. The method as claimed in claim 1, further comprising notifying, by the processing unit [108], network node scale-up data to the NMS, wherein the network node scale-up data comprises site details, network function (NF) type details, and number of required network nodes.
7. The method as claimed in claim 6, wherein the scale-up corresponds to addition of at least one SCP node in the 5G network.
8. A system for scaling up network nodes, said system comprising:
a receiving unit [102] configured to receive a current load data associated with each of a plurality of network nodes;
a processing unit [108] configured to predict, via a trained model [206], a load threshold value for each of the plurality of network nodes;
a comparing unit [104] configured to compare the current load data with the corresponding load threshold value of each of the plurality of network nodes to forecast overload conditions at the plurality of network nodes; and
an alerting unit [106] configured to alert a Network Management System (NMS) to scale up the network nodes in an event the current load
data breaches the corresponding load threshold value of the plurality of network nodes.
9. The system as claimed in claim 8, wherein each of the plurality of network nodes is a Service Communication Proxy (SCP) of a 5th Generation (5G) network.
10. The system as claimed in claim 8, wherein the trained model [206] is trained based on a historical set of data associated with the plurality of network nodes, the historical set of data comprises past traffic load patterns, traffic distribution trends, peak traffic times, and historical overload events at the plurality of network nodes.
11. The system as claimed in claim 8, wherein the trained model [206] is an artificial intelligence (AI) based model.
12. The system as claimed in claim 8, wherein the current load data associated with the plurality of network nodes comprises information indicative of increase and decrease of traffic at the plurality of network nodes, information indicative of peak traffic data and low traffic data at the plurality of network nodes in the past, historical trend of traffic at the plurality of network nodes, and reasons and causes of increase and decrease of traffic at the plurality of network nodes.
13. The system as claimed in claim 8, wherein the processing unit [108] is configured to notify network node scale-up data to the NMS, wherein the network node scale-up data comprises site details, network function (NF) type details, and number of required network nodes.
14. The system as claimed in claim 13, wherein the network node scale-up data corresponds to addition of at least one SCP node in the 5G network.