Abstract: The present disclosure relates to a system (108) and a method (500) for network congestion management. The system (108) receives a request for determining a back-off time value for one or more UEs (104) from one or more network entities (110). The system (108) validates the request and transmits an error response when the validation is unsuccessful. The system (108) retrieves a set of operational data associated with the one or more UEs (104) from a database (210) or the one or more network entities (110). The system (108) predicts the back-off time values for the requested one or more UEs (104). The back-off time value may be predicted using an Artificial Intelligence (AI) engine (214) based on a set of parameters associated with the retrieved data. The system (108) transmits the predicted back-off time value to the one or more network entities (110). FIGURE 3
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR NETWORK CONGESTION MANAGEMENT
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad -
380006, Gujarat, India; Nationality : India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF DISCLOSURE
[0002] The embodiments of the present disclosure generally relate to
communication networks. In particular, the present disclosure relates to a system and a method for network congestion management.
DEFINITIONS
[0003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context
in which they are used indicates otherwise.
[0004] The expression ‘network data analytics function (NWDAF) engine’
used hereinafter in the specification refers to a component within the network architecture responsible for collecting, processing, and analyzing a variety of operational and historical data associated with user equipment (UE) and network entities. The NWDAF engine's primary role is to predict and determine optimal back-off time values for UEs during periods of network congestion.
[0005] The expression ‘session management function (SMF)’ used
hereinafter in the specification refers to a network function (NF) within a 5G core network architecture responsible for managing user equipment (UE) sessions.
These sessions establish data connections between UEs and network services. The SMF performs various tasks related to session management.
[0006] The expression ‘at least one request’ used hereinafter in the
specification refers to a message sent by the SMF to the NWDAF engine requesting
assistance in determining a back-off time value for a User Equipment (UE).
[0007] The expression ‘back-off time value’ used hereinafter in the
specification refers to a duration during which the UE waits before retrying a transmission after encountering congestion or an error.
[0008] The expression ‘Operational Data’ used hereinafter in the
specification refers to real-time data collected from network elements and devices, such as RAN log data from base stations and session log data from SMF.
[0009] The expression ‘Historical Data’ used hereinafter in the specification
refers to data accumulated over time that provides a record of past network
activities, including previous back-off times applied to UEs.
[0010] The expression ‘combined Data’ used hereinafter in the specification
refers to the integration of operational and historical data to form a comprehensive
dataset used for decision-making, such as determining optimal back-off times.
[0011] The expression ‘validation of requests’ used hereinafter in the
specification refers to the process by which the NWDAF engine verifies the accuracy and legitimacy of requests received from the SMF before proceeding with data retrieval and analysis.
[0012] The expression ‘error response message’ used hereinafter in the
specification refers to a notification sent by the NWDAF engine to the SMF
indicating the failure of request validation or other errors encountered during
processing.
[0013] The expression ‘key performance indicator (KPI) Metrics’ used
hereinafter in the specification refers to quantitative measures used to evaluate the performance of network elements and services, often retrieved from performance management systems like an IPM.
[0014] The expression ‘at least one error response message’ used
hereinafter in the specification refers to a message sent by the NWDAF engine back
to the SMF in response to an unsuccessful validation of the request for determining a back-off time value. This message informs the SMF that the NWDAF engine
could not process the request due to an error.
[0015] The expression ‘plurality of historical back-off time values’ used
hereinafter in the specification refers to a collection of back-off time values assigned to the UE or potentially similar UEs in the past during previous network congestion events.
[0016] The expression ‘Network Congestion’ refers to a situation where a
high volume of data traffic is trying to use a network’s resources (bandwidth,
processing power) simultaneously. This can lead to slow transmission speeds,
delays, and even dropped connections for users.
[0017] The expression ‘Network Congestion Management’ refers to the
techniques used to address situations where there is too much data traffic trying to
use a network’s resources at once.
[0018] These definitions are in addition to those expressed in the art.
BACKGROUND OF DISCLOSURE
[0019] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0020] User data congestions may occur when a network experiences higher
traffic or requests for service from one or more user equipment (UE) than its handling capacity. This may lead to a degradation in the performance and quality
of services provided to UEs.
[0021] To reduce data congestion, network entities such as the NWDAF
engine determine a back-off time before the UE can send subsequent data packets or requests for services. In such cases, the back-off time determined by the NWDAF engine may indicate the amount of time the UE has to wait before sending a
subsequent request for service to a consumer Network Function (NF), such as a session management function (SMF). Usually, when the UE sends a first request to the
SMF, the NWDAF engine assigns a back-off time value of a predetermined duration if the network is experiencing congestion and increments the back-off time value for each subsequent request for service if the network continues to experience congestion. However, this approach does not allow the network to provide services to the UE as soon as the congestion clears. The UEs must wait until the back-off time value expires even if the network becomes available for providing services in the meantime.
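The conventional incrementing behavior described above may be sketched as follows. The initial duration, increment, and function names are illustrative assumptions, not values taken from this disclosure.

```python
# Hypothetical sketch of the conventional approach: the first request under
# congestion receives a fixed back-off, and each subsequent request receives
# an incremented value. Durations are assumed for illustration only.

INITIAL_BACKOFF_S = 4   # assumed predetermined duration (seconds)
INCREMENT_S = 4         # assumed per-retry increment (seconds)

def conventional_backoff(previous_backoff_s, network_congested):
    """Return the back-off (seconds) assigned to a UE's request."""
    if not network_congested:
        return 0
    if previous_backoff_s == 0:               # first request under congestion
        return INITIAL_BACKOFF_S
    return previous_backoff_s + INCREMENT_S   # incremented on each retry

# A UE retrying through sustained congestion accumulates ever longer waits,
# even if the congestion clears partway through a wait.
waits = []
b = 0
for _ in range(3):
    b = conventional_backoff(b, network_congested=True)
    waits.append(b)
# waits == [4, 8, 12]
```

This illustrates the shortcoming the disclosure targets: the wait grows monotonically with each retry regardless of when the congestion actually clears.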
[0022] Additionally, computing back-off times for each request during
congestion may be unnecessary and expensive as the NWDAF engine may have to
retrieve subscriber records, check the previous back-off time, increment the value, and return said value to the SMF for re-transmission to the UE, thereby exacerbating computational burdens during congestion.
[0023] There is, therefore, a need in the art to provide a method and a system
that can overcome the shortcomings of the existing prior art.
OBJECTS OF THE PRESENT DISCLOSURE
[0024] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0025] An object of the present disclosure is to provide a system and a
method for network congestion management.
[0026] Another object of the present disclosure is to provide the system and
the method that determines a back-off time value using an artificial intelligence (AI) engine, thereby reducing the number of times the back-off time value is computed.
[0027] Another object of the present disclosure is to provide the system and
the method that determines the back-off time value based on when the network congestion is likely to be cleared.
[0028] Another object of the present disclosure is to provide the system and
the method that determines the back-off time value using radio access network
(RAN) logs, session logs, one or more key performance indicator (KPI) data, back-off time values of other user equipment (UE), and the like.
[0029] Another object of the present disclosure is to provide the system and
the method that predicts the back-off time value for the UE such that said UE reattempts to establish a session with the network entities as soon as the congestion clears.
SUMMARY
[0030] In an exemplary embodiment, the present invention discloses a method for network congestion management. The method comprises receiving, at a network data analytics function (NWDAF) engine, at least one request for determining at least one back-off time value for at least one user equipment (UE) from a session management function (SMF). The method comprises retrieving, by the NWDAF engine, at least one set of operational data associated with the at least one UE. The method comprises combining the retrieved at least one set of operational data with at least one set of historical data associated with the at least one UE to form at least one set of combined data. The method comprises predicting, based on the at least one set of combined data, at least one back-off time value for the at least one UE. The method comprises transmitting the predicted at least one back-off time value to the SMF.
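The method steps above may be sketched as a single request-handling flow. The store layout, field names, and the averaging-based placeholder for the prediction step are assumptions for illustration; they stand in for the AI engine and are not the disclosed implementation.

```python
# Illustrative NWDAF-side flow: receive request, retrieve operational data,
# combine with historical data, predict a back-off, and return it to the SMF.
# All names and the prediction heuristic are hypothetical.

def handle_backoff_request(ue_id, operational_store, historical_store):
    """Return a back-off response for one UE, or an error response."""
    operational = operational_store.get(ue_id)
    if operational is None:
        return {"error": "validation failed"}        # error response to SMF
    historical = historical_store.get(ue_id, {"past_backoffs": []})
    combined = {**operational, **historical}         # combined data set
    predicted = predict_backoff(combined)
    return {"ue_id": ue_id, "backoff_s": predicted}  # transmitted to SMF

def predict_backoff(combined):
    # Placeholder for the AI engine: scale the mean historical back-off by a
    # current congestion-load factor (0..1). Purely illustrative.
    past = combined.get("past_backoffs") or [4]
    load = combined.get("congestion_load", 0.5)
    return round(sum(past) / len(past) * load, 1)

ops = {"ue-1": {"congestion_load": 0.8}}
hist = {"ue-1": {"past_backoffs": [10, 20, 30]}}
resp = handle_backoff_request("ue-1", ops, hist)
# resp == {"ue_id": "ue-1", "backoff_s": 16.0}
```

A request for an unknown UE falls through the validation check and yields the error response, mirroring the unsuccessful-validation path described in the summary.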
[0031] In some embodiments, the SMF causes the at least one UE to wait for at least one duration associated with the at least one back-off time value before reattempting a connection with the network.
[0032] In some embodiments, the NWDAF engine sends at least one error
response message for an unsuccessful validation of the at least one received request.
[0033] In some embodiments, the at least one set of operational data
includes radio access network (RAN) data retrieved from a plurality of base stations, session log data retrieved from the SMF, and a plurality of key performance indicator (KPI) metric values retrieved from an intelligent performance management (IPM).
[0034] In some embodiments, the at least one set of historical data includes
a plurality of historical back-off time values associated with the at least one UE.
[0035] In some embodiments, the at least one received request is validated
by the NWDAF engine.
[0036] In some embodiments, an AI engine determines the at least one back-
off time value for the at least one UE.
[0037] In an exemplary embodiment, the present invention discloses a
system for network congestion management. The system is configured to receive,
at an NWDAF engine, at least one request for determining at least one back-off
time value for at least one user equipment (UE) from a session management
function (SMF). The system is configured to retrieve, by the NWDAF engine, at
least one set of operational data associated with the at least one UE. The system is
configured to combine the retrieved at least one set of operational data with at least
one set of historical data associated with the at least one UE to form at least one set
of combined data. The system is configured to predict, based on the at least one set
of combined data, at least one back-off time value for the at least one UE. The
system is configured to transmit the predicted at least one back-off time value to
the SMF.
[0038] In accordance with one embodiment of the present disclosure, a user equipment that is communicatively coupled with a network is disclosed. The coupling comprises receiving, at an NWDAF engine, at least one request for determining at least one back-off time value for at least one user equipment (UE) from a session management function (SMF), retrieving, by the NWDAF engine, at least one set of operational data associated with the at least one UE, combining the retrieved at least one set of operational data with at least one set of historical data associated with the at least one UE to form at least one set of combined data, determining, based on the at least one set of combined data, the at least one back-off time value for the at least one UE, and communicating the determined at least one back-off time value to the SMF.
BRIEF DESCRIPTION OF DRAWINGS
[0039] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components
or circuitry commonly used to implement such components.
[0040] FIG. 1 illustrates an architecture for network congestion
management, in accordance with embodiments of the present disclosure.
[0041] FIG. 2 illustrates a block diagram of a system, in accordance with
embodiments of the present disclosure.
[0042] FIG. 3 illustrates an implementation of the system, in accordance
with embodiments of the present disclosure.
[0043] FIG. 4 illustrates a flowchart of a method for network congestion
management, in accordance with embodiments of the present disclosure.
[0044] FIG. 5 illustrates another flowchart of a method for network
congestion management, in accordance with embodiments of the present disclosure.
[0045] FIG. 6 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.
[0046] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Architecture for network congestion management
102-1, 102-2 - A plurality of users
104-1, 104-2 - A plurality of user equipments
112-1, 112-2 - A plurality of base stations
106 - Network
108 - System
110-1, 110-2 - A plurality of network entities
114 - Monitoring unit
200 - Block diagram
202 - A plurality of processors
204 - Memory
206 - A plurality of interface(s)
208 - Processing engine
210 - Database
212 - Network data analytics function (NWDAF) engine
214 - Artificial Intelligence (AI) engine
216 - Other unit(s)
300 - An implementation of the system (108)
302 - SMF
304 - IPM
600 - A computer system
610 - External storage device
620 - Bus
630 - Main memory
640 - Read only memory
650 - Mass storage device
660 - Communication port(s)
670 - Processor
DETAILED DESCRIPTION OF DISCLOSURE
[0047] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0048] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0049] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0050] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling
function or the main function.
[0051] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0052] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the
phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment.
Furthermore, the particular features, structures, or characteristics may be combined
in any suitable manner in one or more embodiments.
[0053] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations,
elements, and/or components, but do not preclude the presence or addition of one
or more other features, integers, steps, operations, elements, components, and/or
groups thereof. As used herein, the term “and/or” includes any and all combinations
of one or more of the associated listed items.
[0054] The present disclosure relates to a system and a method for network
congestion management. Various embodiments throughout the disclosure will be explained in more detail with reference to FIGS. 1-6.
[0055] Referring to FIG. 1, the network architecture (100) may include one
or more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of
ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as
the users (102). Similarly, a person of ordinary skill in the art will understand that
one or more user equipments (104-1, 104-2…104-N) may be individually referred
to as the user equipment (104) and collectively referred to as the user equipment
(104). A person of ordinary skill in the art will appreciate that the terms “computing
device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although two user equipments (104) are depicted in FIG. 1, any number of the user equipments (104) may be included without departing from
the scope of the ongoing description.
[0056] In an embodiment, the user equipment (104) may include, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a global positioning system (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing
device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices,
laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, where the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102)
or the entity such as touch pad, touch enabled screen, electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used. The architecture (100) may include a monitoring unit (114) having a user interface that provides audio-visual indications to the user based on a set of signals
transmitted by the system (108). In an embodiment, the monitoring unit (114) may be implemented on a UE (104) and may be used by operators of the network (106).
[0057] In an embodiment, the user equipment (104) may include smart
devices operating in a smart environment, for example, an internet of things (IoT) system. In such an embodiment, the user equipment (104) may include, but is not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof.
A person of ordinary skill in the art will appreciate that the user equipment (104)
may include, but is not limited to, intelligent, multi-sensing, network-connected
devices that can integrate seamlessly with each other and/or with a central server or
a cloud-computing system or any other device that is network-connected.
[0058] Referring to FIG. 1, the user equipment (104) may communicate
with a system (108) through a network (106). In an embodiment, the network (106) may include at least one of a Fifth Generation (5G) network, a 6G network, or the like. The network (106) may enable the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to
facilitate this communication. In another embodiment, the network (106) may be implemented as or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like. In an embodiment, each
of the UEs (104) may have a unique identifier attribute associated therewith. In an embodiment, the unique identifier attribute may be indicative of a Mobile Station International Subscriber Directory Number (MSISDN), an International Mobile Equipment Identity (IMEI) number, an International Mobile Subscriber Identity (IMSI), a Subscriber Permanent Identifier (SUPI), and the like.
[0059] In an embodiment, the network (106) may include one or more base
stations (112), which the UEs (104) may connect to and request services from. The
base station (112) may be a network infrastructure that provides wireless access to one or more terminals associated therewith. The base station (112) may have coverage defined to be a predetermined geographic area based on the distance over which a signal may be transmitted. The base station (112) may include, but not be limited to, a wireless access point, evolved NodeB (eNodeB), 5G node or next generation NodeB (gNB), wireless point, transmission/reception point (TRP), and the like. In an embodiment, the base station (112) may include one or more operational units that enable telecommunication between two or more UEs (104). In an embodiment, the one or more operational units may include, but not be limited
to, transceivers, baseband unit (BBU), remote radio unit (RRU), antennae, mobile switching centres, radio network control units, one or more processors associated thereto, and a plurality of network entities (110) such as access and mobility management function (AMF) unit, session management function (SMF) unit, network exposure function (NEF) units, or any custom built functions executing
one or more processor-executable instructions, but not limited thereto.
[0060] In an embodiment, the base stations (112) and the one or more
network entities (110) associated therewith may generate a set of operational data while providing services to the one or more UEs (104). In an embodiment, the set of operational data may include, but is not limited to, radio access network (RAN)
data, session logs, back-off time values of other UEs (104), Key Performance Indicator (KPI) metrics or KPI-related data. In an embodiment, the RAN data may be generated as the base stations (112) interact with each other and the UE (104) to provide services thereto. In an embodiment, the RAN data, generated by the base stations (112) such as evolved NodeBs (eNBs) or next-generation NodeBs (gNBs), may
include one or more attributes that may be used to derive performance and health metrics of the network (106). In an embodiment, the one or more attributes may include, but not be limited to, radio summary logs, timestamps, the UE (104) information such as unique identifier attributes, configuration details, device type, etc., call event details, signal strength metrics, throughput metrics, unique attributes
associated with the base stations (112), alarms and fault details, error codes, and the like.
[0061] In an embodiment, the session logs may be generated as one or more
network entities (110) create and maintain sessions between or with one or more UEs (104) for providing network services. For example, the network entities (110) may create one or more packet data unit (PDU) sessions to enable the exchange of data packets between the UEs (104) and the packet data networks (PDNs), such as the Internet. In an embodiment, the base station (112) may include an Intelligent Performance Management (IPM) (304, such as the IPM shown in FIG. 3) that determines one or more Key Performance Indicators (KPIs) that indicate the health and performance of the network (106). In an embodiment, the operational data may
be used to compute the back-off times.
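Assembling the per-UE data set from the sources named above can be sketched as follows. All field names (e.g., `rsrp_dbm`, `prb_utilization`) are hypothetical stand-ins for the RAN logs, session logs, and IPM KPI metrics the disclosure describes.

```python
# Minimal sketch: merge a UE's operational data (RAN logs, session logs,
# network-wide KPIs) with its historical back-off values to form the
# combined data set used for back-off computation. Field names are assumed.

def build_combined_record(ue_id, ran_logs, session_logs, kpis, history):
    """Merge per-UE operational data with historical back-off values."""
    return {
        "ue_id": ue_id,
        "ran": [r for r in ran_logs if r["ue_id"] == ue_id],
        "sessions": [s for s in session_logs if s["ue_id"] == ue_id],
        "kpis": kpis,                          # network-wide KPI metrics
        "past_backoffs": history.get(ue_id, []),
    }

record = build_combined_record(
    "ue-7",
    ran_logs=[{"ue_id": "ue-7", "rsrp_dbm": -95}],
    session_logs=[{"ue_id": "ue-7", "pdu_sessions": 3}],
    kpis={"prb_utilization": 0.92},
    history={"ue-7": [8, 12]},
)
# record["past_backoffs"] == [8, 12]
```

Filtering the logs by the UE's unique identifier mirrors the correlation of operational data with unique identifier attributes described later in this section.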
[0062] In an embodiment, the system (108) may be coupled to a monitoring
unit (114) that may provide an audio-visual interface to the user (102) for monitoring and analyzing data. In an embodiment, the monitoring unit (114) may provide an interface, including, but not limited to, a graphical user interface (GUI),
an application programming interface (API) or a command line interface (CLI). In an embodiment, the monitoring unit (114) may provide a dashboard for analyzing and monitoring the network congestion and back-off time values generated in real-time. In an embodiment, the monitoring unit (114) may be used by users (102) or operators of the network (106).
[0063] In an embodiment, the system (108) may receive a request for
determining a back-off time value for one or more UEs (104) from the one or more network entities (110). In an embodiment, the system (108) may validate the request and transmit an error response when the validation is unsuccessful. In an embodiment, the system (108) may retrieve a set of operational data associated with
the one or more UEs (104). In such embodiments, the validation of the requests
may be successful. In an embodiment, the set of operational data may be retrieved
from the database or the one or more network entities (110).
[0064] In an embodiment, the system (108) may correlate the retrieved
operational data or correlated data with the one or more UEs (104) requesting
services. In an embodiment, the operational data may be correlated with the unique
be correlated to determine the number of times the UE (104) has requested to
establish or modify a PDU session previously, the importance level or priority
attribute of the UE (104), and whether it is necessary to allow the UE (104) to
reattempt.
[0065] In an embodiment, the system (108) may transmit the predicted
back-off time values to the one or more network entities (110). In such
embodiments, the network entities (110) may cause the one or more UEs (104)
requesting for services to wait for the duration of the back-off time values before
reattempting connection with the network (106).
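The waiting behavior the network entities impose on a UE can be sketched as a simple gate on the reattempt time. The clock representation and function names are illustrative assumptions.

```python
# Hedged sketch of the behavior above: the UE's next connection attempt is
# permitted no earlier than the time the back-off was applied plus the
# predicted back-off duration. Times in seconds; names are hypothetical.

def schedule_reattempt(now_s, backoff_s):
    """Return the earliest time (s) at which the UE may reattempt."""
    return now_s + backoff_s

def may_reattempt(now_s, earliest_s):
    """True once the back-off duration has fully elapsed."""
    return now_s >= earliest_s

earliest = schedule_reattempt(now_s=100.0, backoff_s=16.0)
# may_reattempt(110.0, earliest) -> False; may_reattempt(120.0, earliest) -> True
```

Because the back-off is predicted to expire when the congestion is expected to clear, the gate opens close to the moment the network can actually serve the UE, rather than after a blindly incremented wait.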
[0066] FIG. 2 illustrates an exemplary block diagram (200) of the system
(108), in accordance with embodiments of the present disclosure.
[0067] In an aspect, the system (108) may include one or more processor(s)
(202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0068] Referring to FIG. 2, the system (108) may include an interface(s)
(206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication to/from the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such
components include, but are not limited to, processing unit/engine(s) (208) and a database (210).
[0069] In an embodiment, the processing unit/engine(s) (208) may be
implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage
medium, and the hardware for the processing engine(s) (208) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may include the
machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0070] In an embodiment, the database (210) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engines (208). In an embodiment, the database (210) may be separate from the system (108). In an embodiment, the database (210) may include, but is not limited to, a relational database, a distributed database, a distributed file sharing system, a cloud-based database, or the like.
[0071] In an exemplary embodiment, the processing engine (208) may
include one or more engines selected from any of a Network Data Analytics Function (NWDAF) engine (212), the AI engine (214), and other engines (216)
having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as a wireless communication unit for remote operation, and
the like, as described in FIG. 3.
[0072] In an embodiment, the system (108) may use artificial intelligence (AI), such as the AI engine (214), to predict the back-off time values for the requested one or more UEs (104). In an embodiment, the AI engine (214) may include, but is not limited to, artificial intelligence (AI) or pre-trained machine learning (ML) models, expert systems, and the like. In an embodiment, the AI engine (214) may be trained on historical operational data, as well as the back-off time values previously computed for the UEs (104) during periods of congestion. The back-off time value may be predicted based on a set of parameters associated with the retrieved data. The set of parameters is derived from the combined operational and historical data. In an embodiment, the set of parameters may include, but are not limited to, previous usage patterns and priority attribute values of one or more UEs (104), the number of reattempts by UEs (104) on failure, network congestion clearance forecasts, and the like. In an embodiment, the AI engine (214) may determine the back-off time value such that the one or more UEs (104) may reattempt connection with the network (106) as soon as the congestion clears.
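As a rough illustration of how a prediction could combine these parameters, consider the following heuristic stand-in for the AI engine. The weighting factors, parameter names, and priority levels are assumptions chosen for illustration, not the disclosed model.

```python
def predict_back_off(clearance_forecast_s, priority, reattempts):
    """Heuristic stand-in for the AI engine's back-off prediction (sketch).

    Anchors the back-off near the forecast congestion clearance time
    (seconds), shortens it for high-priority UEs, and lengthens it for UEs
    that have already reattempted repeatedly. All factors are illustrative.
    """
    priority_factor = {"high": 0.5, "normal": 1.0, "low": 1.5}[priority]
    reattempt_penalty = 1.0 + 0.1 * reattempts
    return clearance_forecast_s * priority_factor * reattempt_penalty
```

A trained ML model would replace this closed-form rule, but the inputs and the direction of each effect mirror the parameters listed above.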
[0073] In an embodiment, each of the processing engines (208) may be
communicatively coupled to implement the system (108) and method of the present
disclosure.
[0074] FIG. 3 illustrates an exemplary implementation (300) of the system
(108), in accordance with embodiments of the present disclosure.
[0075] In an embodiment, the system (108) may receive a request for
determining a back-off time value for one or more UEs (104) from a consumer
Network Function, or the one or more network entities (110) such as the SMF (302). The system (108) validates the request and transmits an error response when the validation is unsuccessful. In an embodiment, the NWDAF engine (212) of the system (108) may receive and process said request. In an embodiment, the NWDAF engine (212) may retrieve a set of operational data associated with the one or more
UEs (104). In an embodiment, the set of operational data may be retrieved from the database (210) or the one or more network entities (110). In an embodiment, the
NWDAF engine (212) may retrieve the RAN data from the one or more base
stations (112), session logs or PDU related data from the SMF (302), historical
back-off time values from the database (210), and KPI metric values from the IPM (304).
[0076] In an aspect, the RAN data may refer to the detailed records
generated by RAN components, such as base stations or cell towers. These logs capture information about network traffic, signal strength, and user activities. For example, the RAN data might record the number of active connections and signal quality metrics for a specific cell tower over a given time period. This data helps analyze network performance and identify congestion points.
[0077] In an aspect, the session log data may include records of the UE
(104) sessions managed by network elements like the SMF (302). These logs detail session start and end times, data usage, and session errors. For example, the session log data may show the duration of a data session of the UE (104), the amount of data transferred, and any interruptions or failures experienced during the session. This
information is crucial for monitoring session quality and managing network resources.
[0078] In an aspect, the KPI related data are key performance indicators
used to assess network performance. The KPI metric values may include parameters like latency, throughput, and/or packet loss. For example, the KPI metric values
may include the average latency experienced by the user (102), measured in
milliseconds, or the throughput of data transferred through a network segment,
measured in megabits per second. The KPI metric values may help in evaluating
network efficiency and performance.
[0079] In an embodiment, the at least one set of historical data includes a
25 plurality of historical back-off time values associated with the at least one UE (104). In an embodiment, the at least one set of historical data is used to improve the prediction of back-off times for the UE (104) experiencing congestion. For example, the at least one set of historical data on connection attempts indicates that the UEs (104) frequently attempt to connect during peak hours, specifically
30 between 6 PM and 9 PM, with a 70% success rate. This contrasts with a 90% success rate during off-peak times. This data may be utilized to predict and manage
congestion effectively by adjusting network resources to accommodate higher loads during peak hours. To mitigate repeated attempts and alleviate congestion, the system (108) may increase the back-off time for the UEs (104) attempting to reconnect during these high-traffic periods. In another example, data on the UE (104) reattempts shows that after a failed connection attempt, the UE (104) may make reattempts within 1 minute, averaging 3 reattempts per incident. The success rate of connections significantly drops after the third reattempt, informing the system (108) to incrementally increase the back-off time after each failure, helping to reduce network congestion caused by frequent reattempts. The data may also be
used to analyze and address persistent failure patterns, guiding targeted network optimizations to enhance overall performance.
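The incremental increase described in this example can be sketched as a capped exponential back-off. The base value, the doubling factor, and the cap are assumptions chosen for illustration; the disclosure only requires that the back-off grow after each failure.

```python
def next_back_off(base_s, failures, cap_s=300):
    """Grow the back-off after each failed attempt, up to a cap (sketch).

    Doubling per failure spaces out reattempts, which matters because the
    example above notes success rates drop sharply after the third
    reattempt. base_s and cap_s are illustrative values in seconds.
    """
    return min(base_s * (2 ** failures), cap_s)
```

The cap keeps a persistently failing UE from being deferred indefinitely, a common design choice in back-off schemes.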
[0080] In an embodiment, the NWDAF engine (212) may correlate the
retrieved operational data with the one or more UEs (104) requesting services. In an embodiment, the operational data may be correlated to determine the number
of times the UE (104) has requested to establish or modify a PDU session, the priority value of the UE (104), and whether it is necessary to allow the UEs (104) to reattempt. In an embodiment, the NWDAF engine (212) may, using the AI engine (214), predict the back-off time values for the requested one or more UEs (104). The back-off time value may be predicted based on a set of parameters associated
with the retrieved data. In an embodiment, the set of parameters may include, but are not limited to, previous usage patterns and priority attribute values of one or more UEs (104), network congestion clearance forecasts, and the like. The NWDAF engine (212) may transmit the predicted back-off time value to the SMF (302). In such embodiments, the SMF (302) may cause one or more UEs (104)
requesting services to wait for the duration of the back-off time value before reattempting connection with the network (106).
[0081] In an aspect, the NWDAF engine (212) may first collect operational
data related to the UE (104), such as KPI related data, the RAN data and PDU related data from the IPM (304), the base station (112) and the SMF (302)
simultaneously. This operational data may include real-time metrics on network performance, the UE (104) activity, and current network conditions.
[0082] In an aspect, the NWDAF engine (212) may also receive historical
data associated with the UE (104) that may be stored in the database (210). The
historical data may include past usage patterns, previous connection attempts, and
any previously assigned back-off time.
[0083] In an aspect, the NWDAF engine (212) then combines the
operational data with the historical data to create a comprehensive dataset. This
combined dataset provides a detailed picture of both current network conditions and
past behaviour of the UE (104).
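The combination step can be sketched as a per-UE merge of the two record sets. The dictionary shapes and field names below are illustrative assumptions; the disclosure only requires that operational and historical data be combined per UE before analysis.

```python
def combine_datasets(operational, historical):
    """Merge per-UE operational and historical records into one dataset.

    Each input maps a UE identifier to its records; a UE present in only
    one input still appears in the combined dataset, with an empty record
    for the missing side.
    """
    return {
        ue_id: {
            "operational": operational.get(ue_id, {}),
            "historical": historical.get(ue_id, {}),
        }
        for ue_id in set(operational) | set(historical)
    }
```

Keeping both sides under separate keys preserves the provenance of each field, so the downstream model can weight current conditions and past behaviour differently.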
[0084] In an aspect, the AI engine (214) analyses the combined data to identify patterns and trends using the AI/ML model. This analysis helps predict the at least one back-off time value for the UE (104),
considering factors such as the severity of current network congestion, the priority
level of the UE (104), and past reattempt behaviour of the UE (104).
[0085] In one aspect, the AI/ML model uses data about the back-off timer
value set for the UE (104) in a specific location and a set of parameters to predict at least one back-off time value. These parameters are derived from the combined operational and historical data.
[0086] In an aspect, the determined at least one back-off time value is inserted back into the database (210); the database (210) sends the back-off time value to the NWDAF engine (212), which in turn forwards the determined back-off time value to the SMF (302). The SMF (302) may then instruct the UE (104) to apply the back-off time before making another connection attempt.
[0087] FIG. 4 illustrates an exemplary flowchart of a method (400) for
network congestion management, in accordance with embodiments of the present
disclosure.
[0088] At step 402, the SMF (302), which is responsible for managing
sessions and handling the connectivity of User Equipment (UE) to the network, sends a request to the Network Data Analytics Function (NWDAF) engine (212). This request is aimed at determining at least one back-off time value for one or
more UEs (104). The back-off time indicates how long the UE (104) should wait before reattempting to connect or send data, which helps in avoiding network
congestion and ensuring smoother network operations.
[0089] At step 404, the NWDAF engine (212) may validate the received
request. This involves examining the request to check for the presence of necessary
parameters and verifying the legitimacy of the request.
[0090] In an aspect, the NWDAF engine (212) may first verify that the request is complete and well formed. This involves checking that all required fields are present and correctly formatted, such as the unique identifiers of the user equipment (UE) (104), session details, and the parameters needed for determining the back-off time value. Secondly, the NWDAF
engine (212) may verify the legitimacy of the request by ensuring the requesting entity is authorized and authenticated. This involves checking the credentials of the session management function (SMF) (302) that submitted the request and verifying that it is a recognized and trusted entity within the network. The NWDAF engine (212) may also confirm that the request pertains to an active and recognized session or UE (104), ensuring that the request is relevant and appropriate for processing. Additionally, the NWDAF engine (212) may check for the presence of necessary parameters that are crucial for accurate back-off time prediction. These parameters may include combined operational and
20 historical data of the UE (104), and any other contextual information required for
the analysis. If any required information is missing or incorrect, the NWDAF
engine (212) identifies these deficiencies and flags the request as invalid.
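The validation logic described above might be sketched as follows. The required field names and the set of trusted requesters are hypothetical, since the disclosure does not enumerate them.

```python
REQUIRED_FIELDS = ("ue_ids", "session_id", "requester")  # assumed field names
TRUSTED_REQUESTERS = {"SMF"}  # assumed set of recognized network functions

def validate_request(request):
    """Return (ok, reason) for an incoming back-off determination request.

    First checks the presence of required fields, then checks that the
    requesting entity is a recognized network function, mirroring the
    two-stage validation described above.
    """
    missing = [field for field in REQUIRED_FIELDS if field not in request]
    if missing:
        return False, "missing fields: " + ", ".join(missing)
    if request["requester"] not in TRUSTED_REQUESTERS:
        return False, "requester not authorized"
    return True, "ok"
```

Returning a reason string alongside the verdict matches the behaviour at step 408, where the error response tells the SMF why validation failed.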
[0091] At step 406, the method checks whether the validation is successful
or not.
[0092] At step 408, if the validation is unsuccessful,
an error response is sent back to the NF that originated the request, in this case, the SMF (302). This error response includes details about why the validation failed, allowing the SMF (302) to correct the issues and resend the request if necessary. For example, if the initial request lacked necessary UE identifiers, the error
response would indicate this, prompting the SMF (302) to include the missing identifiers in a new request.
[0093] At step 410, if the validation is successful, the NWDAF engine (212)
retrieves relevant operational data from various sources. This includes RAN data, Key Performance Indicator (KPI) metrics, historical back-off time values, and other pertinent data stored in the database (210). For instance, the NWDAF engine (212) might access logs detailing recent activities, signal strengths, and previous congestion events related to the UE (104).
[0094] At step 412, the AI engine (214) analyses the combined data to identify patterns and trends using the AI/ML model. This analysis helps predict the at least one back-off time value for the UE (104),
considering factors such as the severity of current network congestion, the priority level of the UE (104), and past reattempt behaviour of the UE (104). The AI/ML model uses data about the back-off timer value set for the UE (104) in a specific location and a set of parameters to predict at least one back-off time value. These parameters are derived from the combined operational and historical data.
[0095] Finally, at step 414, the predicted at least one back-off time value is
sent back to the consumer NF or the SMF (302). This forecast data includes the recommended wait times for each UE (104) before they should reattempt to connect or send data. For instance, the NWDAF engine (212) might determine that a high-priority UE should wait 30 seconds before reattempting, while a lower-priority UE
should wait 60 seconds, thereby managing network congestion more effectively and maintaining overall network performance.
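Echoing the 30-second/60-second example above, the per-priority recommendation could be sketched as a simple mapping. The two figures come from the example in the text; the 45-second default for other priorities is an assumption added for completeness.

```python
def recommend_waits(ue_priorities):
    """Map each UE to a recommended wait in seconds (illustrative sketch).

    High-priority UEs get the shorter 30 s wait and low-priority UEs the
    longer 60 s wait from the example above; 45 s for any other priority
    is an assumed middle ground.
    """
    wait_by_priority = {"high": 30, "low": 60}
    return {ue: wait_by_priority.get(p, 45) for ue, p in ue_priorities.items()}
```

In the actual method these values would come from the AI engine's prediction rather than a fixed table; the sketch only shows the shape of the forecast data returned to the SMF (302).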
[0096] FIG. 5 illustrates a flowchart of a method (500) for network
congestion management, in accordance with embodiments of the present disclosure.
[0097] At step 502 of the method (500), the NWDAF engine (212) is set up
to receive a request from the SMF (302) aimed at determining appropriate back-off time values for one or more UEs (104) within the network (106). For example, if the SMF (302) detects increased congestion due to high data traffic or network overload, it sends a request to the NWDAF engine (212) to assist in optimizing
transmission delays for the UEs (104). This interaction initiates a critical phase where the NWDAF engine (212) begins its data retrieval and analysis processes.
The NWDAF engine (212) retrieves current operational data from various network sources, such as real-time traffic volumes from base stations and session logs from network elements. This step is pivotal as it forms the basis for subsequent decisions regarding back-off time calculations, ensuring that network resources are efficiently managed to maintain optimal performance and user experience during periods of heightened demand or congestion.
[0098] At step 504, the NWDAF engine (212) proceeds by retrieving at least
one set of operational data associated with the identified UEs (104). This operational data encompasses real-time metrics such as current network traffic
loads, session durations, and performance indicators gathered from various network elements like base stations and routers. For example, the NWDAF engine (212) may pull the RAN data to assess current traffic patterns and session logs from the SMF to understand ongoing user sessions and their demands on network resources.
[0099] At step 506, after retrieving operational data in step 504, the
NWDAF engine (212) proceeds to merge this real-time information with historical data pertaining to the identified UEs (104). The historical data encompasses a range of information crucial for understanding network behaviour over time, such as previous instances of back-off time values implemented during congested periods,
historical network utilization patterns, and long-term performance trends gathered from comprehensive network management systems. For instance, if historical data shows that during peak usage hours certain UEs (104) consistently required longer back-off times to manage congestion effectively, the NWDAF engine (212) integrates this insight with current operational metrics to make informed decisions.
By combining these datasets, the method (500) enhances its ability to calculate optimized back-off times tailored to current network conditions, thereby improving overall network efficiency and user experience.
[00100] At step 508, based on the at least one set of combined data, the at
least one back-off time value for the at least one UE (104) is determined. The
method (500) utilizes an AI engine (214) that may use a predictive algorithm or machine learning model to analyse the at least one set of combined data. The at
least one set of combined data may include both the operational data and the historical data associated with the UE (104), such as recent network usage trends, historical congestion patterns, and the UE (104) specific behaviour. Specifically, the AI engine (214) may assess patterns and correlations within the data and estimate the at least one back-off time value for the at least one UE (104). For example, if the at least one set of combined data indicates a high frequency of connection attempts by the UE (104) and a significant level of network congestion, the AI engine (214) might predict a longer back-off time to mitigate network strain. The predictions of the AI engine (214) may be derived from training data that reflect
typical network conditions and behaviour of the UE (104), ensuring that the back-off time values are modified according to current network dynamics and historical usage patterns.
[00101] In an aspect, the determined back-off time value plays a crucial role
in network congestion management by effectively regulating the frequency and
timing of connection attempts by the UE (104). By assigning a specific back-off time value, the system (108) ensures that the UE (104) experiencing network congestion waits for a calculated period before reattempting to establish or modify a session to prevent multiple simultaneous connection attempts that may lead to further congestion.
[00102] At step 510, the NWDAF engine (212) communicates the
determined back-off time values back to the SMF (302). This communication ensures that the SMF (302) is promptly informed of the calculated back-off times, enabling the SMF to relay these instructions to the respective UEs (104). For instance, if a UE (104) is instructed to wait longer before reattempting
communication due to detected congestion, the SMF (302) ensures this instruction
is conveyed accurately, thereby contributing to effective network congestion
management and improved overall network performance.
[00103] In an embodiment, the SMF (302) may send at least one back-off
time value to the at least one UE (104). The SMF (302) may need to establish
communication protocols and interfaces with the UE (104). Prior to sending back-
values based on inputs received from the NWDAF engine (212). This calculation
relies on data retrieved from the NWDAF engine (212), including operational and historical data, and on algorithms or rules defined within the SMF (302) to process this data and determine an appropriate back-off time value.
[00104] In an embodiment, the NWDAF engine (212) sends at least one error
response message for an unsuccessful validation of the at least one received request.
[00105] In an embodiment, the at least one set of operational data includes
the RAN data retrieved from a plurality of base stations (112), a session log data
retrieved from the SMF (302), and a plurality of key performance indicator (KPI)
metric values retrieved from an intelligence performance management (IPM) (304).
[00106] In an embodiment, the at least one set of historical data includes a
plurality of historical back-off time values associated with the at least one UE (104).
[00107] In an embodiment, the at least one received request is validated by
the NWDAF engine (212).
[00108] FIG. 6 is an illustration (600) of a non-limiting example of details of
computing hardware used in the system (108), in accordance with an embodiment of the present disclosure. As shown in FIG. 6, the system (108) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), a communication port (660), and a processor (670). A person skilled in the art will appreciate that the system (108) may include more than one processor (670) and communication ports (660). Processor (670) may include various modules associated with embodiments of the present disclosure.
[00109] In an embodiment, the communication port (660) is any of an RS-
232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a
Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or
other existing or future ports. The communication port (660) is chosen depending
on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or
any network to which the system (108) connects.
[00110] In an embodiment, the memory (630) is Random Access Memory
(RAM), or any other dynamic storage device commonly known in the art. Read-
only memory (640) is any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information
e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor
(670).
[00111] In an embodiment, the mass storage (650) is any current or future
mass storage solution, which is used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having
Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[00112] In an embodiment, the bus (620) communicatively couples the
processor(s) (670) with the other memory, storage, and communication blocks. The
bus (620) is, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB) or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the system (108).
[00113] Optionally, operator and administrative interfaces, e.g., a display,
keyboard, joystick, and a cursor control device, may also be coupled to the bus (620) to support direct operator interaction with the system (108). Other operators and administrative interfaces are provided through network connections connected through the communication port (660). The components described above are meant
only to exemplify various possibilities. In no way should the aforementioned
exemplary illustration (600) limit the scope of the present disclosure.
[00114] In accordance with one embodiment of the present disclosure, a user
equipment that is communicatively coupled with a network is disclosed. The coupling comprises receiving, at the NWDAF engine, at least one request for
determining at least one back-off time value for at least one UE from the SMF, retrieving, by the NWDAF engine, at least one set of operational data associated
with the at least one UE, combining the retrieved at least one set of operational data with at least one set of historical data associated with the at least one UE to form at least one set of combined data, determining, based on the at least one set of combined data, the at least one back-off time value for the at least one UE, and communicating the determined at least one back-off time value to the SMF.
[00115] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred
embodiments of the disclosure will be apparent to those skilled in the art from the
disclosure herein, whereby it is to be distinctly understood that the foregoing
descriptive matter is to be implemented merely as illustrative of the disclosure and
not as limitation.
[00116] In an aspect, the present disclosure provides a system and a method
that determines the back-off time value based on which the network congestion is
likely to be cleared. The present disclosure provides a system and a method that
predicts the back-off time value for the UE such that the UE reattempts establishing
a session with the network entities as soon as the congestion clears.
[00117] In an aspect, the present disclosure can be implemented within a 5G
communication network or with various network elements that may involve various
algorithms, protocols, or mechanisms for network congestion management.
[00118] The present disclosure provides technical advancement related to
network congestion management. This advancement addresses the limitations of existing solutions by incorporating the system and the method for dynamically
predicting and determining back-off time values for the UE to manage network load effectively. The present disclosure involves inventive aspects such as the integration of real-time operational data with historical usage patterns, the use of advanced data analytics to predict network congestion and the automated communication of optimized back-off times to the SMF. These aspects offer
significant improvements in performance and efficiency by reducing network congestion, improving user experience through reduced connection attempts, and
optimizing network resource utilization. By implementing these specific
techniques, the disclosed invention enhances the overall management of network
traffic, resulting in smoother network operations, reduced latency, and better
quality of service for end-users.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00119] The present disclosure provides a system and a method for network
congestion management.
[00120] The present disclosure provides the system and the method that
determines a back-off time value using an artificial intelligence (AI) engine, thereby
reducing the number of times the back-off time value is computed.
[00121] The present disclosure provides the system and the method that
determines the back-off time value based on which the network congestion is likely
to be cleared.
[00122] The present disclosure provides the system and the method that
determines the back-off time value using RAN data, session logs, one or more key
performance indicator (KPI) data, back-off time values of other user equipment
(UE), and the like.
[00123] The present disclosure provides the system and the method that
predicts the back-off time value for the UE such that said UE reattempts to establish
a session with the network entities as soon as the congestion clears.
We Claim:
1. A method (500) for network congestion management, the method (500)
comprising:
receiving (502), at a network data analytics function (NWDAF)
engine (212), at least one request for determining at least one back-off time value for at least one user equipment (UE) (104) from a session management function (SMF) (302);
retrieving (504), by the NWDAF engine (212), at least one set of
operational data associated with the at least one UE (104);
combining (506) the retrieved at least one set of operational data with at least one set of historical data associated with the at least one UE (104) to form at least one set of combined data;
determining (508), based on the at least one set of combined data,
the at least one back-off time value for the at least one UE (104); and
communicating (510) the determined at least one back-off time value to the SMF (302).
2. The method (500) as claimed in claim 1, wherein the SMF sends at least
one back-off time value to the at least one UE (104).
3. The method (500) as claimed in claim 1, wherein the NWDAF engine
(212) sends at least one error response message for an unsuccessful
validation of the at least one received request.
4. The method (500) as claimed in claim 1, wherein the at least one set of
operational data includes a radio access network (RAN) data retrieved from
a plurality of base stations (112), a session log data retrieved from the SMF
(302), and a plurality of key performance indicator (KPI) metric values
retrieved from an intelligence performance management (IPM) (304).
5. The method (500) as claimed in claim 1, wherein the at least one set of
historical data includes a plurality of historical back-off time values
associated with the at least one UE (104).
6. The method (500) as claimed in claim 1, wherein the at least one received
request is validated by the NWDAF engine (212).
7. The method (500) as claimed in claim 1, wherein an AI engine (214) determines the at least one back-off time value for the at least one UE (104).
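The flow of the claimed method (500) can be sketched as follows. This is a minimal illustration only: the class and method names, the data fields, and the heuristic standing in for the AI engine (214) are hypothetical assumptions, not the claimed implementation.

```python
from dataclasses import dataclass


@dataclass
class BackoffRequest:
    """Hypothetical request from an SMF (302) for back-off time values."""
    ue_ids: list


class NWDAFEngine:
    """Illustrative sketch of method (500); all names are assumptions."""

    def __init__(self, historical_data):
        # Historical back-off values per UE (claim 5), e.g. {"ue-1": [2.0, 4.0]}.
        self.historical_data = historical_data

    def retrieve_operational_data(self, ue_id):
        # Step 504: stand-in for fetching RAN data, session logs, and KPI
        # metrics (claim 4); a fixed load value is used for illustration.
        return {"ue_id": ue_id, "kpi_load": 0.8}

    def determine_backoff(self, request):
        results = {}
        for ue_id in request.ue_ids:
            operational = self.retrieve_operational_data(ue_id)       # step 504
            combined = {**operational,
                        "history": self.historical_data.get(ue_id, [])}  # step 506
            # Step 508: placeholder for the AI engine (214); here a simple
            # heuristic scales a base delay by load and averages in history.
            base = 1.0 + 4.0 * combined["kpi_load"]
            if combined["history"]:
                avg = sum(combined["history"]) / len(combined["history"])
                base = (base + avg) / 2
            results[ue_id] = round(base, 2)
        return results  # step 510: communicated back to the SMF (302)


engine = NWDAFEngine({"ue-1": [2.0, 4.0]})
backoffs = engine.determine_backoff(BackoffRequest(ue_ids=["ue-1", "ue-2"]))
```

The SMF would then forward each predicted value to the corresponding UE (claim 2), which reattempts session establishment once the back-off period elapses.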
8. A system (108) for network congestion management, the system (108) comprising:
a processing engine (208) configured to:
receive, at a network data analytics function (NWDAF) engine (212), at least one request for determining at least one back-off time value for at least one user equipment (UE) (104) from a session management function (SMF) (302);
retrieve, by the NWDAF engine (212), at least one set of operational data associated with the at least one UE (104);
combine the retrieved at least one set of operational data with at least one set of historical data associated with the at least one UE (104) to form at least one set of combined data;
determine, based on the at least one set of combined data, the at least one back-off time value for the at least one UE (104); and
communicate the determined at least one back-off time value to the SMF (302).
9. The system as claimed in claim 8, wherein the SMF (302) sends the at least one back-off time value to the at least one UE (104).
10. The system as claimed in claim 8, wherein the NWDAF engine (212) sends at least one error response message for an unsuccessful validation of the at least one received request.
11. The system as claimed in claim 8, wherein the at least one set of operational data includes radio access network (RAN) data retrieved from a plurality of base stations (112), session log data retrieved from the SMF (302), and a plurality of key performance indicator (KPI) metric values retrieved from an intelligence performance management (IPM) (304).
12. The system as claimed in claim 8, wherein the at least one set of historical data includes a plurality of historical back-off time values associated with the at least one UE (104).
13. The system as claimed in claim 8, wherein the at least one received request is validated by the NWDAF engine (212).
14. The system as claimed in claim 8, wherein an AI engine (214)
determines the at least one back-off time value for the at least one UE (104).
15. A user equipment (UE) (104) communicatively coupled with a network (106), wherein the coupling comprises the steps of:
receiving, at a network data analytics function (NWDAF) engine (212), at least one request for determining at least one back-off time value for at least one user equipment (UE) (104) from a session management function (SMF) (302);
retrieving, by the NWDAF engine (212), at least one set of operational data associated with the at least one UE (104);
combining the retrieved at least one set of operational data with at least one set of historical data associated with the at least one UE (104) to form at least one set of combined data;
determining, based on the at least one set of combined data, the at least one back-off time value for the at least one UE (104); and
communicating the determined at least one back-off time value to the SMF (302).
| # | Name | Date |
|---|---|---|
| 1 | 202321050215-STATEMENT OF UNDERTAKING (FORM 3) [25-07-2023(online)].pdf | 2023-07-25 |
| 2 | 202321050215-PROVISIONAL SPECIFICATION [25-07-2023(online)].pdf | 2023-07-25 |
| 3 | 202321050215-FORM 1 [25-07-2023(online)].pdf | 2023-07-25 |
| 4 | 202321050215-DRAWINGS [25-07-2023(online)].pdf | 2023-07-25 |
| 5 | 202321050215-DECLARATION OF INVENTORSHIP (FORM 5) [25-07-2023(online)].pdf | 2023-07-25 |
| 6 | 202321050215-FORM-26 [25-10-2023(online)].pdf | 2023-10-25 |
| 7 | 202321050215-FORM-26 [26-04-2024(online)].pdf | 2024-04-26 |
| 8 | 202321050215-FORM 13 [26-04-2024(online)].pdf | 2024-04-26 |
| 9 | 202321050215-FORM-26 [30-04-2024(online)].pdf | 2024-04-30 |
| 10 | 202321050215-Request Letter-Correspondence [03-06-2024(online)].pdf | 2024-06-03 |
| 11 | 202321050215-Power of Attorney [03-06-2024(online)].pdf | 2024-06-03 |
| 12 | 202321050215-Covering Letter [03-06-2024(online)].pdf | 2024-06-03 |
| 13 | 202321050215-CORRESPONDENCE(IPO)-(WIPO DAS)-10-07-2024.pdf | 2024-07-10 |
| 14 | 202321050215-ORIGINAL UR 6(1A) FORM 26-100724.pdf | 2024-07-15 |
| 15 | 202321050215-FORM-5 [24-07-2024(online)].pdf | 2024-07-24 |
| 16 | 202321050215-DRAWING [24-07-2024(online)].pdf | 2024-07-24 |
| 17 | 202321050215-CORRESPONDENCE-OTHERS [24-07-2024(online)].pdf | 2024-07-24 |
| 18 | 202321050215-COMPLETE SPECIFICATION [24-07-2024(online)].pdf | 2024-07-24 |
| 19 | 202321050215-FORM 18 [03-10-2024(online)].pdf | 2024-10-03 |
| 20 | Abstract-1.jpg | 2024-10-04 |
| 21 | 202321050215-FORM 3 [11-11-2024(online)].pdf | 2024-11-11 |