
Method And System For Dynamically Distributing Traffic To A Plurality Of Instances Of Network Function

Abstract: The present disclosure relates to a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF). The disclosure encompasses: determining, by a determination unit [302], a first capacity and a first load for the plurality of the NF instances using a trained model; fetching, by a fetching unit [304], a second capacity and a second load for the plurality of the NF instances in real time; comparing, by a comparator unit [306], the first capacity and the first load with the second capacity and the second load; determining, by the determination unit [302], a delta upon comparing the first capacity with the second capacity and the first load with the second load, wherein the delta comprises computed relative weights; and updating, by an updating unit [308], the delta at the plurality of the NF instances. [FIG. 4]


Patent Information

Application #:
Filing Date: 08 July 2023
Publication Number: 47/2024
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Email:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-10-17
Renewal Date:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Inventors

1. Sandeep Bisht
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
2. Prashant Pandey
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Ravindra Yadav
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR DYNAMICALLY DISTRIBUTING TRAFFIC TO A PLURALITY OF INSTANCES OF NETWORK FUNCTION”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR DYNAMICALLY DISTRIBUTING TRAFFIC TO A PLURALITY OF INSTANCES OF NETWORK FUNCTION
FIELD OF THE INVENTION
[0001] The present disclosure relates generally to the field of wireless communication systems. In particular, the present disclosure relates to load balancing and capacity management of network functions. More particularly, the present disclosure relates to a method and system for dynamically distributing traffic to a plurality of instances of a network function (NF).
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third-generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] One significant problem with the prior art is that some network functions (NFs) do not support capacity and current load reporting. This limitation makes it difficult to monitor, manage, and balance the load effectively, which can lead to network congestion and possible service degradation. Another problem with existing systems is that compute utilization at the same load can vary significantly between different NF instances. This inconsistency can lead to inefficiencies in resource allocation, as well as potential network function overloads and service disruptions. Existing systems often lack real-time monitoring of NF compute statistics. This lack of timely information makes it difficult to avoid potential failures due to high resource utilization, which can degrade network key performance indicators (KPIs). Further, the prior art relies on the NF's ability to report capacity and current load in registration or heartbeat requests. If an NF does not have this functionality, the system's ability to manage resources effectively is severely compromised.
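To make the reporting gap concrete, the sketch below assumes a 3GPP TS 29.510-style NF profile in which "capacity" and "load" are optional attributes; the helper name and the weight formula are illustrative assumptions, not part of this disclosure.

```python
def reported_weight(nf_profile: dict):
    """Return a usable traffic weight from a registration/heartbeat profile,
    or None when the NF does not report capacity/load at all."""
    capacity = nf_profile.get("capacity")   # 0..65535 per TS 29.510 (optional)
    load = nf_profile.get("load")           # 0..100 percent (optional)
    if capacity is None or load is None:
        return None                         # the reporting gap described above
    # remaining headroom expressed as an integer weight
    return max(capacity * (100 - load) // 100, 0)

smf_a = {"nfInstanceId": "smf-a", "capacity": 100, "load": 30}
smf_b = {"nfInstanceId": "smf-b"}           # reports neither field

print(reported_weight(smf_a))               # 70
print(reported_weight(smf_b))               # None
```

An NF like smf_b above leaves a load balancer with nothing to weight traffic by, which is exactly the situation the disclosure targets.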
[0005] These issues contribute to inefficient use of network resources, potential service disruptions, and overall instability in the network's performance.
[0006] Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks.
[0007] Thus, there exists an imperative need in the art to provide a method and system for dynamically distributing traffic to a plurality of instances of a network function (NF). The proposed invention seeks to address these shortcomings by providing a more dynamic, efficient, and robust solution for managing network function resources.
OBJECTS OF THE DISCLOSURE
[0008] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0009] It is an object of the present disclosure to provide a method and system for dynamically distributing traffic to a plurality of instances of a network function (NF).
[0010] It is another object of the present disclosure to provide a method and system for distributing traffic to a plurality of instances of a network function (NF) that ensures efficient allocation and utilization of network function (NF) resources. By fetching the allocated compute resources and the current utilization of these resources from registered NF instances, the invention seeks to optimize the distribution of network traffic across multiple NF instances.

[0011] It is yet another object of the present disclosure to provide a method and system for distributing traffic to a plurality of instances of a network function (NF) that enables real-time monitoring of NF compute statistics. This helps in proactive avoidance of potential failures due to high resource utilization, thus enhancing network performance and stability.
[0012] It is yet another object of the present disclosure to provide a method and system for distributing traffic to a plurality of instances of a network function (NF) that aims to handle variance in resource utilization between different NF instances at the same load. By determining the relative capacity and current load for each NF instance using artificial intelligence, the invention strives to manage resource allocation and load balancing more effectively.
[0013] It is yet another object of the present disclosure to provide a method and system for distributing traffic to a plurality of instances of a network function (NF) that seeks to function efficiently irrespective of whether an NF supports capacity and current load reporting in Registration/Heartbeat requests. This provides the system with greater flexibility in managing resources and ensuring robust network performance.
[0014] It is yet another object of the present disclosure to provide a method and system for distributing traffic to a plurality of instances of a network function (NF) that enhances the existing 3GPP standard, filling gaps in its procedures and specifications and providing critical support for the robust functioning of the 5G network.
SUMMARY
[0015] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] An aspect of the present disclosure may relate to a method for dynamically distributing traffic to a plurality of instances of a Network Function (NF). The method comprises the steps of determining, by a determination unit, a first capacity and a first load for the plurality of the NF instances using a trained model. Further, the method comprises fetching, by a fetching unit, a second capacity and a second load for the plurality of the NF instances in real time. Further, the method comprises comparing, by a comparator unit, the first capacity and the first load with the second capacity and the second load. Further, the method comprises determining, by the determination unit, a delta upon comparing the first capacity with the second capacity and the first load with the second load, wherein the delta comprises computed relative weights. Furthermore, the method comprises updating, by an updating unit, the delta at the plurality of the NF instances.
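The claimed sequence of operations can be sketched as follows; the helper callables (predict_fn, fetch_fn, push_fn) and the headroom-based weighting are illustrative assumptions, not details fixed by the specification.

```python
def distribute_traffic(nf_instances, predict_fn, fetch_fn, push_fn):
    """Sketch of the five-step flow: predict, fetch, compare,
    compute the delta as relative weights, and push the update."""
    headrooms = {}
    for nf in nf_instances:
        cap1, load1 = predict_fn(nf)   # first capacity/load (trained model)
        cap2, load2 = fetch_fn(nf)     # second capacity/load (real time)
        # compare the predicted figures against the live ones and keep
        # the tighter (more conservative) headroom per instance
        headrooms[nf] = max(min(cap1 - load1, cap2 - load2), 0)
    total = sum(headrooms.values()) or 1
    # the delta: relative weights normalised across all NF instances
    delta = {nf: h / total for nf, h in headrooms.items()}
    push_fn(delta)                     # update the delta at the NF instances
    return delta
```

In this sketch an instance whose live load has crept above the model's prediction automatically receives a smaller share of new traffic.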
[0017] In an exemplary aspect of the present disclosure, the first capacity and the first load comprise information determined based on a plurality of compute parameters.
[0018] In an exemplary aspect of the present disclosure, the plurality of compute parameters is fetched from at least one of the plurality of NF instances.
[0019] In an exemplary aspect of the present disclosure, the second capacity and the second load comprise information fetched from a repository.
[0020] In an exemplary aspect of the present disclosure, the compute parameters comprise at least one of CPU usage, memory usage, or network bandwidth.
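As a hedged illustration only, the named compute parameters could be blended into a single load figure for the comparator unit to work with; the specific weighting below is an assumption and is not specified by the disclosure.

```python
def composite_load(cpu_pct: float, mem_pct: float, bw_pct: float) -> float:
    """Blend CPU, memory, and network-bandwidth utilisation (each 0-100)
    into one load figure; the 0.5/0.3/0.2 weights are illustrative."""
    return 0.5 * cpu_pct + 0.3 * mem_pct + 0.2 * bw_pct

print(composite_load(80, 50, 20))  # 59.0
```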
[0021] In an exemplary aspect of the present disclosure, updating, by the updating unit, the delta at the plurality of the NF instances facilitates network traffic management and workload distribution.
[0022] In an exemplary aspect of the present disclosure, the trained model is trained on historical data comprising past compute resource utilization trends, historical load distribution patterns, and prior network traffic behaviours of the plurality of instances of Network Function (NF).
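One way such a trained model could be realised is a least-squares fit over historical utilisation samples; this pure-Python sketch, with made-up sample points, is an illustrative stand-in for whatever model the disclosure actually trains.

```python
def fit_linear(history):
    """Ordinary least-squares fit over (offered_load, observed_cpu) samples;
    returns a predictor mapping an offered load to an expected CPU figure."""
    n = len(history)
    sx = sum(x for x, _ in history)
    sy = sum(y for _, y in history)
    sxx = sum(x * x for x, _ in history)
    sxy = sum(x * y for x, y in history)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda load: slope * load + intercept

# hypothetical historical samples for one NF instance
predict_cpu = fit_linear([(10, 22), (20, 41), (30, 62), (40, 79)])
print(round(predict_cpu(25), 1))  # about 51.0
```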
[0023] In an exemplary aspect of the present disclosure, the trained model processes the first capacity, the first load, the second capacity and the second load to identify patterns or trends in compute resource utilization.

[0024] In an exemplary aspect of the present disclosure, the delta comprises computed relative weights.
[0025] Another aspect of the present disclosure relates to a system for dynamically distributing traffic to a plurality of instances of a Network Function (NF). The system comprises a determination unit configured to determine a first capacity and a first load for the plurality of the NF instances using a trained model. Further, the system comprises a fetching unit configured to fetch a second capacity and a second load for the plurality of the NF instances in real time. Further, the system comprises a comparator unit configured to compare the first capacity and the first load with the second capacity and the second load. Further, the system comprises the determination unit configured to determine a delta upon comparing the first capacity with the second capacity and the first load with the second load, wherein the delta comprises computed relative weights. Further, the system comprises an updating unit configured to update the delta at the plurality of the NF instances.
[0026] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for dynamically distributing traffic to a plurality of instances of a Network Function (NF), the instructions including executable code which, when executed by one or more units of a system, causes: a determination unit of the system to determine a first capacity and a first load for the plurality of the NF instances using a trained model; a fetching unit of the system to fetch a second capacity and a second load for the plurality of the NF instances in real time; a comparator unit of the system to compare the first capacity and the first load with the second capacity and the second load; the determination unit of the system to determine a delta upon comparing the first capacity with the second capacity and the first load with the second load, wherein the delta comprises computed relative weights; and an updating unit of the system to update the delta at the plurality of the NF instances.
BRIEF DESCRIPTION OF DRAWINGS
[0027] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0028] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0029] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0030] FIG. 3 illustrates an exemplary block diagram of a system for dynamically distributing traffic to a plurality of instances of a Network Function (NF), in accordance with exemplary embodiments of the present disclosure.
[0031] FIG. 4 illustrates an exemplary method flow diagram indicating the process for dynamically distributing traffic to a plurality of instances of a Network Function (NF), in accordance with exemplary embodiments of the present disclosure.
[0032] FIG. 5 illustrates an exemplary architecture diagram of a system for dynamically distributing traffic to a plurality of instances of a Network Function (NF), in accordance with exemplary embodiments of the present disclosure.
[0033] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0034] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0035] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0036] It should be noted that the terms "mobile device", "user equipment", "user device", “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0037] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0038] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure.
[0039] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0040] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing functions, communicating with other user devices and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery and an input means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0041] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0042] Radio Access Technology (RAT) refers to the technology used by mobile devices/user equipment (UE) to connect to a cellular network. It refers to the specific protocols and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to different types of networks and provide optimal performance based on the available network resources.
[0043] gNodeB (gNB) refers to the base station component in 5G (fifth-generation) wireless networks. It is an essential element of the Radio Access Network (RAN) responsible for transmitting and receiving wireless signals to and from user devices, such as smartphones, tablets, and Internet of Things (IoT) devices. Similar components exist in other generations of wireless networks. For example, in 2G (second-generation) networks, the Base Transceiver Station (BTS) serves as the base station responsible for transmitting and receiving wireless signals. It connects mobile devices to the cellular network infrastructure.
[0044] In 3G (third-generation) networks, the NodeB is the base station component that enables wireless communication. It facilitates the transmission and reception of signals between user devices and the network. In 4G (fourth-generation) LTE (Long-Term Evolution) networks, the eNodeB serves as the base station. It supports high-speed data transmission, low latency, and improved network capacity. In Wi-Fi networks, an access point (AP) functions as a central hub that enables wireless devices to connect to a wired network. It provides a wireless interface for devices to access the network and facilitates communication between them. These examples illustrate the base station components in different generations of wireless networks, such as the BTS in 2G, the NodeB in 3G, the eNodeB in 4G LTE, and the gNodeB in 5G. Each component plays a crucial role in facilitating wireless connectivity and communication between user devices and the network infrastructure.
[0045] As discussed in the background section, one significant problem with the prior art is that some network functions (NFs) do not support capacity and current load reporting. This limitation makes it difficult to monitor, manage, and balance the load effectively, which can lead to network congestion and possible service degradation. Another problem with existing systems is that compute utilization at the same load can vary significantly between different NF instances. This inconsistency can lead to inefficiencies in resource allocation, as well as potential network function overloads and service disruptions. Existing systems often lack real-time monitoring of NF compute statistics. This lack of timely information makes it difficult to avoid potential failures due to high resource utilization, which can degrade network key performance indicators (KPIs). Further, the prior art relies heavily on the NF's ability to report capacity and current load in registration or heartbeat requests. If an NF does not have this functionality, the system's ability to manage resources effectively is severely compromised.
[0046] These issues contribute to inefficient use of network resources, potential service disruptions, and overall instability in the network's performance. Thus, there exists an imperative need in the art to provide a method and system for optimizing allocation of resources and load distribution among network function instances. The proposed invention seeks to address these shortcomings by providing a more dynamic, efficient, and robust solution for managing network function resources.
[0047] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for dynamically distributing traffic to a plurality of instances of a network function (NF).
[0048] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0049] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0050] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., a 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0051] Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0052] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0053] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0054] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.

[0055] Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0056] Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0057] Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0058] Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0059] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0060] Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0061] Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0062] User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0063] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include, but are not limited to, Internet services and private data network related services.

[0064] FIG. 2 illustrates an exemplary block diagram of a computing device [1000] (also referred to herein as computer system [1000]) upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [1000] may also implement a method for dynamically distributing traffic to a plurality of instances of a Network Function (NF) utilising the system. In another implementation, the computing device [1000] itself implements the method for dynamically distributing traffic to a plurality of instances of a Network Function (NF) using one or more units configured within the computing device [1000], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0065] The computing device [1000] may include a bus [1002] or other communication mechanism for communicating information, and a processor [1004] coupled with the bus [1002] for processing information. The processor [1004] may be, for example, a general purpose microprocessor. The computing device [1000] may also include a main memory
15 [1006], such as a random access memory (RAM), or other dynamic storage device, coupled to
the bus [1002] for storing information and instructions to be executed by the processor [1004]. The main memory [1006] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [1004]. Such instructions, when stored in non-transitory storage media accessible to the
processor [1004], render the computing device [1000] into a special-purpose machine that is
customized to perform the operations specified in the instructions. The computing device [1000] further includes a read only memory (ROM) [1008] or other static storage device coupled to the bus [1002] for storing static information and instructions for the processor [1004].
[0066] A storage device [1010], such as a magnetic disk, optical disk, or solid-state drive is
provided and coupled to the bus [1002] for storing information and instructions. The computing
device [1000] may be coupled via the bus [1002] to a display [1012], such as a cathode ray
tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED
(OLED) display, etc. for displaying information to a computer user. An input device [1014],
including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [1002] for communicating information and command selections to the processor [1004]. Another type of user input device may be a cursor controller [1016], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command

selections to the processor [1004], and for controlling cursor movement on the display [1012]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0067] The computing device [1000] may implement the techniques described herein using
customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [1000] causes or programs the computing device [1000] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [1000] in response to the processor
[1004] executing one or more sequences of one or more instructions contained in the main
memory [1006]. Such instructions may be read into the main memory [1006] from another storage medium, such as the storage device [1010]. Execution of the sequences of instructions contained in the main memory [1006] causes the processor [1004] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry
may be used in place of or in combination with software instructions.
[0068] The computing device [1000] also may include a communication interface [1018] coupled to the bus [1002]. The communication interface [1018] provides a two-way data communication coupling to a network link [1020] that is connected to a local network [1022].
For example, the communication interface [1018] may be an integrated services digital network
(ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [1018] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [1018] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing various types of information.
[0069] The computing device [1000] can send messages and receive data, including program
code, through the network(s), the network link [1020] and the communication interface [1018].
In the Internet example, a server [1030] might transmit a requested code for an application program through the Internet [1028], the ISP [1026], the host [1024] and a multi-functional device and the communication interface [1018]. The received code may be executed by the

processor [1004] as it is received, and/or stored in the storage device [1010], or other non-volatile storage for later execution.
[0070] The computing device [1000] encompasses a wide range of electronic devices capable
of processing data and performing computations. Examples of a computing device [1000]
include, but are not limited to, personal computers, laptops, tablets, smartphones, servers,
and embedded systems. The devices may operate independently or as part of a network and
can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, a
computing device [1000] may include peripheral devices, such as monitors, keyboards, and
printers, as well as integrated components within larger electronic systems, showcasing their
versatility in various technological applications.
[0071] FIG. 3 illustrates an exemplary block diagram of a system [300] for dynamically distributing traffic to a plurality of instances of a Network Function (NF), in accordance with exemplary embodiments of the present disclosure. The system [300] comprises at least one determination unit [302], at least one fetching unit [304], at least one comparator unit [306] and at least one updating unit [308]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure.
[0072] The system [300] is configured for dynamically distributing traffic to a plurality of
instances of a Network Function (NF), with the help of the interconnection between the
components/units of the system [300].
[0073] In an exemplary aspect, the system [300] may be implemented via a Service
Communication Proxy Predictive Artificial Intelligence (SCP-pAI). The SCP-pAI may
comprise a trained model or artificial intelligence model.
[0074] The system [300] comprises a determination unit [302]. The determination unit [302] is configured to determine a first capacity and a first load for the plurality of the NF instances using a trained model. The determination unit [302] may determine the first capacity and the

first load for the plurality of the NF instances using a trained model. In an exemplary aspect,
the first capacity and the first load comprise information determined based on a plurality of
compute parameters. The compute parameters may comprise at least one of CPU usage, memory usage, or network bandwidth. The plurality of compute parameters is fetched from at
least one of the plurality of NF instances. In an implementation, the determination unit [302]
may use at least one trained model(s) or combination of trained model(s) for determining the first capacity and the first load for the plurality of the NF instances. The trained model is trained on historical data comprising past compute resource utilization trends, historical load distribution patterns, and prior network traffic behaviours of the plurality of instances of
10 Network Function (NF). The trained model processes the fetched information to identify
patterns or trends in compute resource utilization. In an exemplary aspect, the first capacity and the first load may be applicable load and capacity determined using an artificial intelligence (AI) or trained model. In an exemplary aspect, the NF instances may comprise, such as, but not limited to, AMF [106] instances, SMF [108] instances, PCF [122] instances and SCP [110]
instances.
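For illustration only, the role of the trained model above can be sketched with a simple moving-average predictor over recent utilization samples. This stand-in, including its function and field names, is an assumption of this sketch and not the disclosed model, which is trained on historical utilization trends, load distribution patterns, and traffic behaviours:

```python
def predict_capacity_and_load(history, window=3):
    """Hypothetical stand-in for the trained model: estimate the first
    capacity and first load of one NF instance as moving averages of
    its most recent compute-parameter samples."""
    recent = history[-window:]
    n = len(recent)
    first_capacity = sum(s["capacity"] for s in recent) / n
    first_load = sum(s["load"] for s in recent) / n
    return first_capacity, first_load

# Illustrative historical samples for one NF instance
history = [
    {"capacity": 1000, "load": 650},
    {"capacity": 1000, "load": 700},
    {"capacity": 1000, "load": 750},
]
first_capacity, first_load = predict_capacity_and_load(history)
# first_capacity = 1000.0, first_load = 700.0
```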
[0075] The system [300] comprises a fetching unit [304]. The fetching unit [304] is configured to fetch, a second capacity and a second load for the plurality of the NF instances in real time. The fetching unit [304] may fetch the second capacity and the second load for the plurality of
the NF instances in real time. In an exemplary aspect, the second capacity and the second load
comprise information fetched from a repository. The repository may be associated with a SCP controller or SCP [110]. The second capacity and the second load for the plurality of the NF instances may be stored in real time. In an exemplary aspect, the repository may store or record the current capacity or load of the plurality of the NF instances.
[0076] The system [300] comprises a comparator unit [306]. The comparator unit [306] is
configured to compare the first capacity and the first load with the second capacity and the
second load. In an exemplary aspect, the comparator unit [306] may be communicatively
attached with the fetching unit [304] and the determination unit [302]. The comparator unit
[306] may compare the first capacity and the first load with the second capacity and the second
load for determining at least one of consumption of load, required capacity, mismatching of consumption of load, applicable load, and overused or underused capacity. The comparator unit [306] may send comparison results of the load and capacity to the determination unit [302] for further processing.
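A minimal sketch of this comparison step follows; the dictionary layout and flag names are assumptions made for the example, not terms from the disclosure:

```python
def compare_capacity_and_load(first, second):
    """Compare the predicted (first) and real-time (second) capacity and
    load of one NF instance, flagging over- or under-used capacity."""
    return {
        "capacity_delta": second["capacity"] - first["capacity"],
        "load_delta": second["load"] - first["load"],
        "overused": second["load"] > first["load"],
        "underused": second["load"] < first["load"],
    }

result = compare_capacity_and_load(
    {"capacity": 1000, "load": 700},   # determined by the trained model
    {"capacity": 1000, "load": 820},   # fetched in real time
)
# capacity_delta = 0, load_delta = 120, overused = True
```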

[0077] The system [300] further comprises the determination unit [302]. The determination
unit [302] is configured to determine a delta upon comparing the first capacity and the second capacity, and the first load and the second load. After receiving the comparison from the comparator unit [306], the determination unit [302] may determine the delta upon comparing the first capacity and the second capacity, and the first load and the second load. The delta comprises computed relative weight(s).
[0078] For example, the system [300] manages traffic load across two network function (NF)
instances, NF1 and NF2, each with different capacities and current loads. NF1 has a capacity
of 1000 units and a current load of 700 units, while NF2 has a capacity of 1200 units and a
current load of 800 units. The comparator unit [306] compares the capacities and loads of the
NF1 and NF2 and sends the comparison data to the determination unit [302], which calculates
a delta, indicating differences and computing relative weights for load balancing. The
determination unit [302] calculates that NF2 should handle 55% of the total traffic and NF1
should handle 45%. Given the total load of 1500 units, NF2's new load becomes 825 units, and
NF1's new load becomes 675 units. By redistributing the traffic according to these computed
relative weights, the determination unit [302] ensures efficient operation and balanced
performance across the network, preventing any single instance from being overloaded.
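The arithmetic of this example can be reproduced with a short sketch. Weights are taken in proportion to capacity and rounded to two decimal places to match the 45%/55% split above; the function names are illustrative only:

```python
def compute_relative_weights(instances):
    """Weight each NF instance in proportion to its capacity,
    rounded to two decimal places as in the worked example."""
    total_capacity = sum(i["capacity"] for i in instances.values())
    return {name: round(i["capacity"] / total_capacity, 2)
            for name, i in instances.items()}

def redistribute(instances):
    """Split the total current load according to the relative weights."""
    weights = compute_relative_weights(instances)
    total_load = sum(i["load"] for i in instances.values())
    return {name: int(round(total_load * w)) for name, w in weights.items()}

instances = {
    "NF1": {"capacity": 1000, "load": 700},
    "NF2": {"capacity": 1200, "load": 800},
}
print(compute_relative_weights(instances))  # {'NF1': 0.45, 'NF2': 0.55}
print(redistribute(instances))              # {'NF1': 675, 'NF2': 825}
```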
[0079] In an exemplary aspect, the determination unit [302] may determine the delta between
the fetched information and determined or predicted information associated with capacity and
load of the instance of the NF instances such as, instances of the SCP [110]. In an exemplary
aspect, the delta may represent an adjustment value, which is sent to the instances of the NF,
for adjusting the execution of one or more services between the instances of the NF, such that each
instance of the NF may not bear excessive traffic load.
[0080] For example, the comparator unit [306] sends load statistics data to the determination
unit [302], including the current capacity and load of two NF instances, NF1 and NF2. The
NF1 can handle 1000 calls and is currently handling 700 calls, while NF2 can handle 1200
calls and is currently handling 800 calls. The determination unit [302], after receiving the compared values from the comparator unit [306], calculates a delta, determining that NF1 is closer to its maximum capacity and load than NF2. It also considers predicted traffic, such as NF2 potentially receiving 200 more calls and NF1 receiving 100 more calls in the next hour.

Based on these comparisons and predictions, the determination unit [302] calculates the delta (such
as an adjustment value) and decides to reroute some of the traffic from NF1 to NF2 to balance
the load and prevent NF1 from being overloaded. The adjustment value is then sent to the NF
instances, resulting in some calls being redirected to NF2, ensuring both servers handle traffic
more efficiently without overloading.
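One plausible way to derive such an adjustment value from the compared figures is an equal-utilization heuristic, sketched below with the numbers from this example. The heuristic and its function name are assumptions of this illustration, not the claimed computation:

```python
def delta_adjustment(instances):
    """Per instance: calls to shed (positive) or absorb (negative) so
    that every instance runs at the same utilization ratio."""
    total_load = sum(i["load"] for i in instances.values())
    total_capacity = sum(i["capacity"] for i in instances.values())
    target_util = total_load / total_capacity
    return {name: int(round(i["load"] - target_util * i["capacity"]))
            for name, i in instances.items()}

instances = {
    "NF1": {"capacity": 1000, "load": 700},   # 70% utilized
    "NF2": {"capacity": 1200, "load": 800},   # ~67% utilized
}
print(delta_adjustment(instances))  # {'NF1': 18, 'NF2': -18}
```

Under this heuristic NF1 sheds roughly 18 calls to NF2, mirroring the rerouting from NF1 to NF2 described above.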
[0081] The system [300] comprises an updating unit [308]. The updating unit [308] is
configured to update the delta at the plurality of the NF instances. In an exemplary aspect, the
updating unit [308] may be communicatively attached with the determination unit [302]. The
updating unit [308] may update the delta at the plurality of the NF instances, which is received from the determination unit [302]. The updating unit [308] may update the delta at the plurality of the NF instances to facilitate network traffic management and workload distribution.
[0082] In an exemplary aspect, the determination unit [302] may send the computed relative
weight(s) to the instances of the AMF [106] and SCP [110] via the updating unit [308]. After
receiving the computed relative weight(s), AMF [106] may use the weight values to update one
or more service request(s) transmission towards each instance of the SCP [110] by redirecting
the one or more service request(s) through different instances of the SCP [110] so that each
instance of the SCP [110] may run service requests smoothly without any excessive traffic load at an individual instance. Each instance of the SCP [110] may relatively manage the capacity and
load for executing the one or more service request(s) from the instances of the AMF [106].
[0083] In an exemplary aspect, the determination unit [302] may send the computed relative
weight(s) to other instances, such as instances of the PCF [122], via the updating unit [308]. After receiving the computed relative weight(s), the instances of the PCF
[122] may use the weight values to update the requests handling towards each instance of the SCP [110].
[0084] Further, in accordance with the present disclosure, it is to be acknowledged that the
functionality described for the various components/units can be implemented interchangeably.
While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative

arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0085] Referring to FIG. 4, an exemplary method flow diagram [400] for dynamically
distributing traffic to a plurality of instances of a Network Function (NF), in accordance with
exemplary implementations of the present disclosure is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0086] At step [404], the method [400] as disclosed by the present disclosure comprises determining, by a determination unit [302], a first capacity and a first load for the plurality of the NF instances using a trained model. The method [400] implemented by the determination unit [302] of the system [300] may determine the first capacity and the first load for the plurality
of the NF instances using a trained model. In an exemplary aspect, the first capacity and the first load comprise information determined based on a plurality of compute parameters. The compute parameters may comprise at least one of CPU usage, memory usage, or network bandwidth. The plurality of compute parameters is fetched from at least one of the plurality of NF instances. In an implementation, the determination unit [302] may use at least one trained model(s) or combination of trained model(s) for determining the first capacity and the first load for the plurality of the NF instances. The trained model is trained on historical data comprising past compute resource utilization trends, historical load distribution patterns, and prior network traffic behaviours of the plurality of instances of Network Function (NF).
[0087] The trained model processes the fetched information to identify patterns or trends in
compute resource utilization. In an exemplary aspect, the first capacity and the first load may be applicable load and capacity determined using an artificial intelligence (AI) or trained model. In an exemplary aspect, the NF instances may comprise, such as, but not limited to, AMF [106] instances, SMF [108] instances, PCF [122] instances and SCP [110] instances.
[0088] Next, at step [406], the method [400] as disclosed by the present disclosure comprises fetching, by a fetching unit [304], a second capacity and a second load for the plurality of the NF instances in real time. The method [400] implemented by the fetching unit [304] of the system [300] may fetch the second capacity and the second load for the plurality of the NF

instances in real time. In an exemplary aspect, the second capacity and the second load
comprise information fetched from a repository. The repository may be associated with a SCP
controller or SCP [110]. The second capacity and the second load for the plurality of the NF instances may be stored in real time. In an exemplary aspect, the repository may store or record the current capacity or load of the plurality of the NF instances.
[0089] Next, at step [408], the method [400] as disclosed by the present disclosure comprises comparing, by a comparator unit [306], the first capacity and the first load with the second capacity and the second load. The method [400] implemented by the comparator unit [306] of
the system [300] may compare the first capacity and the first load with the second capacity and
the second load. In an exemplary aspect, the comparator unit [306] may be communicatively attached with the fetching unit [304] and the determination unit [302]. The comparator unit [306] may compare the first capacity and the first load with the second capacity and the second load for determining at least one of consumption of load, required capacity, mismatching of
consumption of load, applicable load, and overused or underused capacity. The comparator unit
[306] may send comparison results of the load and capacity to the determination unit [302] for further processing.
[0090] Next, at step [410], the method [400] as disclosed by the present disclosure comprises
determining, by the determination unit [302], a delta upon comparing the first capacity and the second capacity, and the first load and the second load. After receiving the comparison from the comparator unit [306], the determination unit [302] may further determine the delta upon comparing the first capacity and the second capacity, and the first load and the second load.
The delta comprises computed relative weight(s). In an exemplary aspect, the determination
unit [302] may determine the delta between the fetched information and determined or
predicted information associated with capacity and load of the instance of the NF instances
such as, instances of the SCP [110]. In an exemplary aspect, the delta may represent an
adjustment value, which is sent to the instances of the NF, for adjusting the execution of one or more services between the instances of the NF, such that each instance of the NF may not bear
excessive traffic load.
[0091] Next, at step [412], the method [400] as disclosed by the present disclosure comprises updating, by an updating unit [308], the delta at the plurality of the NF instances. The method [400] implemented by the updating unit [308] of the system [300] may update the delta at the

plurality of the NF instances. In an exemplary aspect, the updating unit [308] may be
communicatively attached with the determination unit [302]. The updating unit [308] may
update the delta at the plurality of the NF instances, which is received from the determination
unit [302]. The updating unit [308] may update the delta at the plurality of the NF instances
which facilitates network traffic management and workload distribution.
[0092] In an exemplary aspect, the determination unit [302] may send the computed relative weight(s) to the instances of the AMF [106] and SCP [110] via the updating unit [308]. After receiving the computed relative weight(s), AMF [106] may use the weight values to update one
or more service request(s) transmission towards each instance of the SCP [110] by redirecting the one or more service request(s) through different instances of the SCP [110] so that each instance of the SCP [110] may run service requests smoothly without any excessive traffic load at an individual instance. Each instance of the SCP [110] may relatively manage the capacity and load for executing the one or more service request(s) from the instances of the AMF [106].
[0093] In an exemplary aspect, the determination unit [302] may send the computed relative
weight(s) to other instances, such as instances of the PCF [122], via the
updating unit [308]. After receiving the computed relative weight(s), the instances of the PCF
[122] may use the weight values to update the requests handling towards each instance of the
SCP [110].
[0094] Thereafter, the method [400] terminates at step [414].
[0095] FIG. 5 illustrates an exemplary architecture diagram of a system [500] for dynamically
distributing traffic to a plurality of instances of a Network Function (NF), in accordance with
exemplary embodiments of the present disclosure. As shown in FIG. 5, the system [500]
comprises at least one Service Communication Proxy Predictive Artificial Intelligence (SCP-
pAI) [502], at least one Service Communication Proxy (SCP) [110], at least one Network
Function (NF) Consumers [506] and at least one Network Function (NF) Producers [508]. In
an exemplary aspect, the SCP [110] may comprise a set of instances, such as SCP1 [110a],
SCP2 [110b] …. SCPn [110n]. Further, the NF consumer [506] may comprise a set of instances of different NFs, such as AMF1 [106a], AMF2 [106b] …. AMFn [106n]. Further, the NF producer [508] may comprise a set of instances of different NFs, such as PCF1 [122a], PCF2 [122b] …. PCFn [122n].

[0096] In an embodiment, the system [500] comprises a SCP controller, which may be communicatively coupled with the SCP-pAI [502]. In another embodiment, the system [500] comprises the SCP controller, which may be inside the SCP-pAI [502].
[0097] In an operation, the SCP-pAI [502] is configured to fetch load statistics information associated with compute resources assigned and the current utilization of the compute resources from a plurality of NF instances (such as instances of the NF Producers/Consumers and instances of the SCP [110]) registered at the SCP controller or Network Repository
Function (NRF) [120]. In an implementation, the plurality of NF instances may register at the
SCP controller or NRF [120] during initial registration in the network. In an implementation, the plurality of NF instances may register at the SCP controller or NRF [120] on an on-demand basis. The fetched load statistics information associated with compute resources comprises at least one of: CPU usage, memory usage, or network bandwidth.
[0098] In an exemplary aspect, the SCP-pAI [502] may comprise at least one or a combination of trained models or artificial intelligence models. Further, the SCP-pAI [502] is configured to determine, using a trained model, the capacity and current load for the registered NF instances (such as AMF [106] instances, PCF [122] instances and the SCP [110] instances). The trained model
is trained on historical data comprising past compute resource utilization trends, historical load
distribution patterns, and prior network traffic behaviours of the plurality of NF instances and the SCP instances. Further, the determining step uses the trained model to process the fetched information to identify patterns or trends in compute resource utilization. In an exemplary aspect, the trained model may be such as, but not limited to, machine learning model, artificial
intelligence model, decision model, neural network model, support vector machines, random
forests, gradient boosting methods, and the like.
[0099] Once the data from the NF Consumers [506] instances, NF Producers [508] instances
and SCP [110] instances has been gathered, the SCP-pAI [502] may utilize the trained model
or artificial intelligence (AI) model to determine the relative capacity and current load of each NF instance (such as AMF [106] instances, PCF [122] instances, and the SCP [110] instances). This means that the SCP-pAI [502] assesses how much load or work each instance (such as AMF [106] instances and PCF [122] instances) and the SCP instances can handle relative to others, and how much work each is currently processing. This is critical for

understanding the overall load on the network and how effectively the resources are being utilized. As used herein, relative capacity and load represent the relative distribution among the instances of the same NF node(s).
[0100] Further, the SCP-pAI [502] is configured to compare the determined current load and
capacity with pre-stored information available at the SCP controller or SCP [110]. The SCP-
pAI [502] then compares the current load and capacity data it fetched from the NF Consumers
[506] instances, NF Producers [508] instances and the SCP [110] instances with the data stored
at the SCP controller or SCP [110]. This comparison allows the SCP-pAI [502] to determine or
identify any discrepancies or mismatches between the real-time data from the NF Consumers
[506] instances, NF Producers [508] instances and the SCP [110] instances, and the data recorded at the SCP controller or SCP [110].
[0101] Furthermore, the SCP-pAI [502] is configured, in case of a mismatch between fetched
information and the information at the SCP controller or SCP [110], to update the current load
and capacity information applicable for the NF instances (such as AMF [106] instances, PCF
[122] instances and the SCP instances) to the SCP controller or SCP [110], wherein the updated
capacity and load information at the SCP controller or SCP [110] facilitates effective traffic
management and load distribution among the plurality of NF instances. Further, upon receiving
20 the updated information from the SCP-pAI [502], the system [500] may update the SCP
controller about the revised capacity and load information. In an exemplary aspect, after receiving the update from the SCP-pAI [502], the SCP controller may update the SCP [110]. In an exemplary aspect, the SCP-pAI [502] may update the SCP controller or SCP [110].
[0102] Also, the SCP [110] or SCP controller, upon receiving the updated data from the SCP-
pAI [502], manages network traffic and distributes workload based on the current capacity and load information for the NF instances and SCP instances.
[0103] In an exemplary aspect, if a mismatch is detected during the comparison, the SCP-pAI
[502] may proceed to update the capacity and current load data for the NF Consumers [506]
instances, NF Producers [508] instances and SCP [110] instances at the SCP controller or SCP [110]. This ensures that the SCP controller or SCP [110] always has up-to-date information regarding the current state of network resources. Finally, upon receiving updated data from the SCP-pAI [502], the system [500] updates the SCP controller or SCP [110] about the revised

capacity and load information. This step is crucial for maintaining network efficiency and stability, as the SCP [110] or SCP controller relies on accurate and updated information to manage network traffic and balance loads across the various NF instances (such as AMF [106] instances, PCF [122] instances and the SCP [110] instances).
[0104] It would be appreciated by the person skilled in the art that the aforementioned method
is operational irrespective of the capability of the NF instances to support capacity and current
load reporting in their registration or heartbeat requests, thereby enabling proactive real-time
monitoring and efficient management of network resources. That is, the technique of the proposed invention allows for real-time, proactive monitoring of network resources and their usage,
helping to prevent potential network failures due to resource overload. It does so irrespective of whether a network function supports capacity and current load reporting in its registration or heartbeat requests, thus ensuring more robust and reliable network performance.
[0105] In an example, consider a 5G network with various Network Functions (NFs) such as
User Plane Function (UPF) [128], Session Management Function (SMF) [108], Access and Mobility Management Function (AMF) [106] and Service Communication Proxy (SCP) [110] among others. These NFs are registered with a Service Communication Proxy (SCP) controller or Network Repository Function (NRF) [120]. At the first step, the Service Communication Proxy
- Predictive Artificial Intelligence (SCP-pAI) [502] starts to fetch data from these NF instances.
For instance, it may gather data on how much CPU and memory the UPF [128], SMF [108], and AMF [106] are currently using and how much they have been allocated during registration.
[0106] In an exemplary aspect, the AMF [106] may represent the set of instances of AMF
[106]. The gathered data on how much CPU and memory is used, or how much load is present, may represent how much load may be present on the set of instances of the AMF [106]. Once this data is gathered, the SCP-pAI [502] uses a trained model or artificial intelligence-based model to determine the relative capacity and current load for each NF instance, such as instances of the AMF [106], PCF [122] and SCP [110].
[0107] For example, during a communication between the AMF [106] and PCF [122] via the SCP [110], the SCP-pAI [502] may fetch information such as: SCP 1 [110a] current load is 70% and SCP 2 [110b] current load is 20%. In an exemplary aspect, the AMF [106] may be an NF consumer and the PCF [122] may be an NF producer. Further, the SCP-pAI [502] may determine

or predict the applicable capacity and load for the instances of the PCF [122] and SCP [110].
Further, the SCP-pAI [502] may determine a delta between the fetched information and determined
or predicted information associated with capacity and load of the instance of the SCP [110]. In
an exemplary aspect, the delta may be a computed relative weight(s), which may be determined
by the SCP-pAI [502].
[0108] The computed relative weights refer to the calculated values that indicate how much traffic or load should be distributed to each Network Function (NF) instance to maintain balanced network performance. For example, consider two instances of the Service Communication
Proxy (SCP), SCP1 [110a] and SCP2 [110b]. Initially, SCP1 [110a] is handling 70% of the
load, and SCP2 [110b] is handling 20%. The SCP-pAI [502] fetches this data or load statistics data and determines, using its trained model, that the optimal load distribution should be more balanced to prevent overloading SCP1 [110a]. It calculates relative weights indicating that SCP1 [110a] should handle 50% and SCP2 [110b] should handle 40%. These weights are then
communicated to the NF Consumers (such as instances of the Access and Mobility
Management Function (AMF) [106]) and NF Producers (such as instances of the Policy Control Function (PCF) [122]). Based on these weights, AMF [106] instances adjust their traffic distribution, redirecting some requests from SCP1 [110a] to SCP2 [110b], resulting in SCP1's [110a] load decreasing to 50% and SCP2's [110b] load increasing to 40%.
[0109] The SCP-pAI [502] may send the computed relative weight(s) to the instances of the AMF [106] and SCP [110]. After receiving the computed relative weight(s), the AMF [106] may use the weight values to update the requests towards each instance of the SCP [110] by redirecting communication requests through different instances of the SCP [110], bringing the current load at SCP 1 [110a] to 50% and the current load at SCP 2 [110b] to 40%.
[0110] In an embodiment, the SCP-pAI [502] may send the computed relative weight(s) to the instances of the NF producers [508], such as instances of the PCF [122]. After receiving the computed relative weight(s), the NF producers [508] may use the weight values to update the handling of requests towards each instance of the SCP [110].
[0111] Further, the SCP-pAI [502] compares this real-time data with the capacity and load data stored at the SCP controller. If it finds that the SCP controller's data is outdated or incorrect (for example, if the controller's data suggests that SCP1 [110a] is operating at 70% capacity

instead of the actual 50%), it will detect a mismatch. In case of a mismatch, the SCP-pAI [502] updates the capacity and current load data at the SCP controller. This would involve, for example, updating the SCP1 [110a] capacity utilization from 70% to the actual 50%. The SCP controller, upon receiving this updated information, then informs the SCP [110] about the revised capacity and load data. The SCP [110] may use this information to manage network traffic and ensure that no single NF instance (such as instances of the SCP [110] and AMF [106]) is overloaded, thereby maintaining efficient and stable network performance.
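The mismatch check of paragraph [0111] can be sketched as follows. The function name `reconcile`, the dictionary representation of the SCP controller's state, and the tolerance parameter are all assumptions for illustration; the disclosure does not prescribe a data structure or threshold.

```python
# Hypothetical sketch of the stale-data check: compare the real-time
# load reported for an instance against the SCP controller's stored
# value, and push the actual value to the controller on a mismatch.

def reconcile(controller_state, instance_id, realtime_load, tolerance=1.0):
    """Update stale controller data in place; return True on mismatch."""
    stored = controller_state.get(instance_id)
    if stored is None or abs(stored - realtime_load) > tolerance:
        controller_state[instance_id] = realtime_load  # push actual value
        return True
    return False

# Controller still believes SCP1 runs at 70%, but the real load is 50%.
controller = {"SCP1": 70.0, "SCP2": 20.0}
changed = reconcile(controller, "SCP1", 50.0)
```

After the call, the controller's entry for SCP1 reflects the actual 50% utilization, matching the correction described in the paragraph above.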
[0112] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for dynamically distributing traffic to a plurality of instances of a Network Function (NF), the instructions comprising executable code which, when executed by one or more units of a system, causes: a determination unit [302] of the system to determine a first capacity and a first load for the plurality of the NF instances using a trained model; a fetching unit [304] of the system to fetch a second capacity and a second load for the plurality of the NF instances in real time; a comparator unit [306] of the system to compare the first capacity and the first load with the second capacity and the second load; the determination unit [302] of the system to determine a delta upon comparing the first capacity with the second capacity and the first load with the second load; and an updating unit [308] of the system to update the delta at the plurality of the NF instances.
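The determine/fetch/compare/update sequence recited above can be sketched end to end. The function `distribute_traffic` and the callables `predict`, `fetch_realtime` and `push_delta` are hypothetical stand-ins for the determination unit [302], fetching unit [304], comparator unit [306] and updating unit [308]; the delta is shown here as simple differences, whereas the disclosure computes relative weights.

```python
# Minimal end-to-end sketch of the claimed steps, with the trained
# model and the real-time fetch stubbed out as injected callables.

def distribute_traffic(nf_ids, predict, fetch_realtime, push_delta):
    deltas = {}
    for nf in nf_ids:
        first_cap, first_load = predict(nf)           # model-determined values
        second_cap, second_load = fetch_realtime(nf)  # real-time values
        # Compare the pairs and derive a per-instance delta.
        deltas[nf] = {
            "capacity_delta": second_cap - first_cap,
            "load_delta": second_load - first_load,
        }
    push_delta(deltas)  # update the delta at the NF instances
    return deltas
```

For instance, if the model predicts a 70% load for SCP1 but the real-time load is 50%, the computed `load_delta` is -20, which the updating step would propagate to the registered instances.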
ADVANTAGES OF THE PRESENT DISCLOSURE
[0113] The present disclosure provides a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF).
[0114] The present disclosure provides a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF) that ensures efficient allocation and utilization of NF resources. By fetching the allocated compute resources and the current utilization of these resources from registered NF instances, the invention seeks to optimize the distribution of network traffic across multiple NF instances.
[0115] The present disclosure provides a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF) that enables real-time monitoring of NF compute statistics. This helps in proactive avoidance of potential failures due to high resource utilization, thus enhancing network performance and stability.
[0116] The present disclosure provides a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF) that aims to handle variance in resource utilization between different NF instances at the same load. By determining the relative capacity and current load for each NF instance using artificial intelligence, the invention strives to manage resource allocation and load balancing more effectively.
[0117] The present disclosure provides a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF) that seeks to function efficiently irrespective of whether an NF supports capacity and current load reporting in Registration/Heartbeat requests. This provides the system with greater flexibility in managing resources and ensuring robust network performance.
[0118] The present disclosure provides a method and system for dynamically distributing traffic to a plurality of instances of a Network Function (NF) that enhances the existing 3GPP standard, filling gaps in its procedures and specifications, and providing critical support for the robust functioning of the 5G network.
[0119] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.

I/We Claim:
1. A method for dynamically distributing traffic to a plurality of instances of a Network
Function (NF), said method comprising the steps of:
determining, by a determination unit [302], a first capacity and a first load for the
plurality of the NF instances using a trained model;
fetching, by a fetching unit [304], a second capacity and a second load for the plurality of the NF instances in real time;
comparing, by a comparator unit [306], the first capacity and the first load with the
second capacity and the second load;
determining, by the determination unit [302], a delta upon comparing the first capacity with the second capacity and the first load with the second load, wherein the delta comprises computed relative weights; and
updating, by an updating unit [308], the delta at the plurality of instances of the NF.
2. The method as claimed in claim 1, wherein the first capacity and the first load comprise
information determined based on a plurality of compute parameters.
3. The method as claimed in claim 2, wherein the plurality of compute parameters is fetched from at least one of the plurality of NF instances.
4. The method as claimed in claim 1, wherein the second capacity and the second load
comprise information fetched from a repository.
5. The method as claimed in claim 2, wherein the plurality of compute parameters comprises at least one of a CPU usage, a memory usage, or a network bandwidth.
6. The method as claimed in claim 1, wherein the updating, by the updating unit [308], the delta at the plurality of the NF instances facilitates network traffic management and workload distribution.
7. The method as claimed in claim 1, wherein the trained model is trained on historical data
comprising past compute resource utilization trends, historical load distribution patterns,

and prior network traffic behaviours of the plurality of instances of Network Function (NF).
8. The method as claimed in claim 1, wherein the trained model processes the first capacity, the first load, the second capacity and the second load to identify patterns or trends in
compute resource utilization.
9. A system for dynamically distributing traffic to a plurality of instances of a Network
Function (NF), said system comprising:
a determination unit [302] configured to determine a first capacity and a first load for
the plurality of the NF instances using a trained model;
a fetching unit [304] configured to fetch a second capacity and a second load for the plurality of the NF instances in real time;
a comparator unit [306] configured to compare the first capacity and the first load
with the second capacity and the second load;
the determination unit [302] configured to determine a delta upon comparing the first capacity with the second capacity and the first load with the second load, wherein the delta comprises computed relative weights; and
an updating unit [308] configured to update the delta at the plurality of the NF instances.
10. The system as claimed in claim 9, wherein the first capacity and the first load comprise
information determined based on a plurality of compute parameters.
11. The system as claimed in claim 10, wherein the plurality of compute parameters is fetched from at least one of the plurality of the NF instances.
12. The system as claimed in claim 9, wherein the second capacity and the second load
comprise information fetched from a repository.
13. The system as claimed in claim 10, wherein the plurality of compute parameters comprises at least one of a CPU usage, a memory usage, or a network bandwidth.

14. The system as claimed in claim 9, wherein the updating unit [308] updates the delta at the plurality of the NF instances to facilitate network traffic management and workload distribution.
15. The system as claimed in claim 9, wherein the trained model is trained on historical data
comprising past compute resource utilization trends, historical load distribution patterns, and prior network traffic behaviours of the plurality of instances of Network Function (NF).
16. The system as claimed in claim 9, wherein the trained model processes the first capacity,
the first load, the second capacity and the second load to identify patterns or trends in compute resource utilization.

Documents

Application Documents

# Name Date
1 202321046062-STATEMENT OF UNDERTAKING (FORM 3) [08-07-2023(online)].pdf 2023-07-08
2 202321046062-PROVISIONAL SPECIFICATION [08-07-2023(online)].pdf 2023-07-08
3 202321046062-FORM 1 [08-07-2023(online)].pdf 2023-07-08
4 202321046062-FIGURE OF ABSTRACT [08-07-2023(online)].pdf 2023-07-08
5 202321046062-DRAWINGS [08-07-2023(online)].pdf 2023-07-08
6 202321046062-FORM-26 [12-09-2023(online)].pdf 2023-09-12
7 202321046062-Proof of Right [17-10-2023(online)].pdf 2023-10-17
8 202321046062-ORIGINAL UR 6(1A) FORM 1 & 26)-011223.pdf 2023-12-08
9 202321046062-ENDORSEMENT BY INVENTORS [19-06-2024(online)].pdf 2024-06-19
10 202321046062-DRAWING [19-06-2024(online)].pdf 2024-06-19
11 202321046062-CORRESPONDENCE-OTHERS [19-06-2024(online)].pdf 2024-06-19
12 202321046062-COMPLETE SPECIFICATION [19-06-2024(online)].pdf 2024-06-19
13 202321046062-FORM 3 [01-08-2024(online)].pdf 2024-08-01
14 202321046062-Request Letter-Correspondence [13-08-2024(online)].pdf 2024-08-13
15 202321046062-Power of Attorney [13-08-2024(online)].pdf 2024-08-13
16 202321046062-Form 1 (Submitted on date of filing) [13-08-2024(online)].pdf 2024-08-13
17 202321046062-Covering Letter [13-08-2024(online)].pdf 2024-08-13
18 202321046062-CERTIFIED COPIES TRANSMISSION TO IB [13-08-2024(online)].pdf 2024-08-13
19 Abstract1.jpg 2024-10-05
20 202321046062-FORM-9 [18-11-2024(online)].pdf 2024-11-18
21 202321046062-FORM 18A [18-11-2024(online)].pdf 2024-11-18
22 202321046062-FER.pdf 2024-12-19
23 202321046062-FORM 3 [11-02-2025(online)].pdf 2025-02-11
24 202321046062-FER_SER_REPLY [21-02-2025(online)].pdf 2025-02-21
25 202321046062-US(14)-HearingNotice-(HearingDate-22-07-2025).pdf 2025-07-01
26 202321046062-FORM-26 [14-07-2025(online)].pdf 2025-07-14
27 202321046062-Correspondence to notify the Controller [14-07-2025(online)].pdf 2025-07-14
28 202321046062-Written submissions and relevant documents [04-08-2025(online)].pdf 2025-08-04
29 202321046062-PatentCertificate17-10-2025.pdf 2025-10-17
30 202321046062-IntimationOfGrant17-10-2025.pdf 2025-10-17

Search Strategy

1 SearchE_18-12-2024.pdf
