
System And Method For Policy Computation For User Data Congestion In Wireless Network

Abstract: The present disclosure relates to a system (108) and a method (600) for analyzing user data congestion in a network (106). The method (600) comprises receiving, by a network data analytics function (NWDAF) engine, a request for user data congestion analysis from a network function (NF) consumer. The method (600) comprises collecting, by the NWDAF engine, data related to the received request from a user plane function (UPF). The method (600) comprises analyzing, by the NWDAF engine, the collected data to determine whether at least one policy is applicable to the collected data. The method (600) comprises, after analyzing, applying, by the NWDAF engine, the at least one policy on the collected data. The method (600) comprises sending, by the NWDAF engine (212), at least one notification comprising at least one of the collected data and the applied policy to the NF consumer. Figure 6


Patent Information

Application #
Filing Date
25 July 2023
Publication Number
47/2024
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2025-08-12
Renewal Date

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. SHOBHARAM, Meenakshi
2B-62, Narmada, Kalpataru, Riverside, Takka, Panvel, Raigargh - 410206, Maharashtra, India.
4. AICH, Ajitabh
House No. 513, Ward 15, Lichu Bagan, Rubber Bagan, Tezpur, Assam - 784001, India.
5. SINGH, Vivek
16/81, Kachhpura Yamuna Bridge, Agra, Uttar Pradesh - 282006, India.
6. PATEL, Darpan Mahendra
Building No 4, Flat 602, Wimbledon Park, Opp Singhania School, Samata Nagar, Next to Cadbury Co, Thane West - 400606, Maharashtra, India
7. DEB, Chiranjeeb
Ambicapatty, Silchar, Assam - 788004, India.
8. BAGAV, Akash Vinayak
B/16, Nishigandh Soc, Deendayal Road, Near GM Garage, Vishnunagar, Dombivli (W) - 421202, Maharashtra, India.
9. VISHAWAKARMA, Rishee Kumar
D1-35, Greenfiels Rocks Jogeshwari East Mumbai - 400060, Maharashtra, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR POLICY COMPUTATION FOR USER DATA CONGESTION IN WIRELESS NETWORK
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains
material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE DISCLOSURE
[0002] The present disclosure generally relates to a wireless
telecommunications network. More particularly, the present disclosure relates to a system and a method for policy computation and forecasting for user data
congestion in a wireless network.
DEFINITIONS
[0003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression "Network Data Analytics Function (NWDAF)" used
hereinafter in the specification refers to a network function that provides analytics
information to other network functions in a 5G core network.
[0005] The expression "Single Network Slice Selection Assistance Information (S-NSSAI)" used hereinafter in the specification refers to information used to assist in
the selection of a network slice instance for a user equipment.
[0006] The expression "Cell Identifier (ID)" used hereinafter in the specification refers to a numeric or alphanumeric identifier that uniquely identifies a specific cell (base station) within a mobile network. Each cell typically covers a geographical area and provides wireless connectivity to UEs within its coverage

range.
[0007] The term SMF as used herein, refers to Session Management
Function. The SMF plays a crucial role in establishing, managing, and terminating
communication sessions between User Equipment (UE) and 5G network services.
[0008] The expression “Tracking Area Identities (TAIs)” used hereinafter
in the specification refers to identifiers used in mobile telecommunications networks, particularly in the context of LTE (Long Term Evolution) and 5G networks. TAIs play a crucial role in network management and mobility tracking of the UEs as they move within the network coverage areas.
[0009] The expression “User Plane Function (UPF)” used hereinafter in the
specification refers to a key component in 5G and LTE (Long Term Evolution)
mobile networks. It plays a crucial role in handling and processing user data traffic
as it travels between the User Equipment (UE) and external networks or services.
[0010] The expression “Access and Mobility Management Function
(AMF)” used hereinafter in the specification refers to a key network function within the 5G core network architecture. It plays a crucial role in managing the initial access of User Equipment (UE) to the 5G network, as well as handling mobility-related functions as UEs move within the network.
BACKGROUND OF THE DISCLOSURE
[0011] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0012] User data congestion is experienced while transferring user data over
a control plane, a user plane, or both. User data congestion occurs when the number of packets being transmitted through the network exceeds the packet-handling capacity of the network. Currently, a network data analytics function (NWDAF) provides user data congestion-related analytics on the basis of selected user

equipment (UE), a specific area including a list of tracking area identity (TAI) or cell identifiers (IDs), and single network slice selection assistance information (S-NSSAI).
[0013] Conventional systems identify the number of UEs connected to a cell site or a network in a particular location. Further, the conventional systems send a notification to a network operations team when the number of UEs connected to the cell site or the network in that particular location crosses a predefined threshold limit. This may lead to congestion at that particular location for a time period. However, the conventional systems fall short in providing customized policies for the data analysis and in performing future predictions to avoid the congestion at the particular location.
[0014] Conventional systems suffer from one or more drawbacks hindering their adoption. The conventional systems face difficulty in managing data congestion effectively, particularly in providing customized policies and future predictions. There is, therefore, a need in the art for a system and a method that mitigate the problems associated with the prior art.
SUMMARY OF THE DISCLOSURE
[0015] In an exemplary embodiment, the present invention discloses a method for analyzing user data congestion in a network. The method comprises receiving, by a network data analytics function (NWDAF) engine, a request for user data congestion analysis from a network function (NF) consumer. The method comprises collecting, by the NWDAF engine, data related to the received request from a user plane function (UPF). The method comprises analyzing, by the NWDAF engine, the collected data to determine whether at least one policy is applicable to the collected data. The method comprises, after analyzing, applying, by the NWDAF engine, the at least one policy on the collected data. The method comprises sending, by the NWDAF engine, at least one notification comprising at least one of the collected data and the applied policy to the NF consumer.
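The five-step flow described above can be sketched in Python. This is a minimal illustration only: the class, field, and metric names (`NwdafEngine`, `packet_rate`, etc.) are assumptions for this sketch, not the disclosure's or 3GPP's API.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Hypothetical policy: a rule that applies when a metric breaches a threshold.
    metric: str
    threshold: float

    def applies_to(self, data: dict) -> bool:
        return data.get(self.metric, 0.0) > self.threshold

@dataclass
class NwdafEngine:
    # Illustrative stand-in for the NWDAF engine; not a 3GPP-defined interface.
    policies: list = field(default_factory=list)

    def handle_request(self, request: dict, upf_data: dict) -> dict:
        # Steps 1-2: a congestion-analysis request arrives from the NF consumer,
        # and the data named in the request is collected from the UPF.
        collected = {k: upf_data[k] for k in request.get("metrics", upf_data)}
        # Step 3: determine which policies, if any, apply to the collected data.
        applicable = [p for p in self.policies if p.applies_to(collected)]
        # Step 4: apply the policies (here: record each breached metric).
        breaches = {p.metric: collected[p.metric] for p in applicable}
        # Step 5: notify the NF consumer with the data and the applied policies.
        return {"data": collected, "breaches": breaches}

engine = NwdafEngine(policies=[Policy("packet_rate", 1000.0)])
note = engine.handle_request({"metrics": ["packet_rate"]}, {"packet_rate": 1500.0})
print(note["breaches"])  # {'packet_rate': 1500.0}
```

In this sketch the "notification" is simply the returned dictionary; a real NWDAF would deliver it over the service-based interface to the subscribed NF consumer.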
[0016] In an embodiment, the at least one policy is created by the NF consumer based on the collected data, and the at least one policy comprises a set of rules for the NF consumer when the collected data fails to meet predefined criteria.
[0017] In an embodiment, the NF consumer comprises at least an access and mobility management function (AMF) and a session management function (SMF).
[0018] In an embodiment, the method further comprises providing at least one action, by the NF consumer, based on the at least one notification.
[0019] In an exemplary embodiment, the present invention discloses a system for analyzing user data congestion in a network. The system comprises a memory and a processing engine configured to execute a set of instructions stored in the memory to receive a request for user data congestion analysis from a network function (NF) consumer. The processing engine is configured to collect, by a network data analytics function (NWDAF) engine, data related to the received request from a user plane function (UPF). The processing engine is configured to analyze, by the NWDAF engine, the collected data to determine whether at least one policy is applicable to the collected data. The processing engine is configured to, after analyzing, apply, by the NWDAF engine, the at least one policy on the collected data. The processing engine is configured to send, by the NWDAF engine, at least one notification comprising at least one of the collected data and the applied policy to the NF consumer.
[0020] In an embodiment, the system is further configured to provide at least one action, by the NF consumer, based on the at least one notification.
[0021] In an exemplary embodiment, the present invention discloses a user equipment (UE) communicatively coupled with a network. The coupling comprises the steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE, and transmitting a plurality of signals in response to the connection request. The user data congestion in the network is analyzed by a method comprising receiving, by a network data analytics function (NWDAF) engine, a request for user data congestion analysis from a network function (NF) consumer. The method comprises collecting, by the NWDAF engine, data related to the received request from a user plane function (UPF). The method comprises analyzing, by the NWDAF engine, the collected data to determine whether at least one policy is applicable to the collected data. The method comprises, after analyzing, applying, by the NWDAF engine, the at least one policy on the collected data. The method comprises sending, by the NWDAF engine, at least one notification comprising at least one of the collected data and the applied policy to the NF consumer.
OBJECTS OF THE DISCLOSURE
[0022] It is an object of the present disclosure to provide a system and a
method for policy computation and forecasting for user data congestion in a
wireless network.
[0023] It is an object of the present disclosure to provide a system and a
method that includes a network data analytics function (NWDAF) equipped with automatic mechanisms and Artificial Intelligence/Machine Learning (AI/ML) to perform policy computation and future prediction for user data congestion.
[0024] It is an object of the present disclosure to provide a system and a
method that provides a customized policy for data analysis to a user, based on which the user may take actions to avoid data congestion problems.
[0025] It is an object of the present disclosure to provide a system and a
method that generates predictions for possible data congestion for the user
equipment (UE) or a group of UEs.
[0026] It is an object of the present disclosure to provide a system and a
method that efficiently maintains load on specific locations and avoids user data congestion.
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] In the figures, similar components and/or features may have the
same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the
specification, the description is applicable to any one of the similar components

having the same first reference label irrespective of the second reference label.
[0028] The diagrams are for illustration only and thus do not limit the present disclosure, wherein:
[0029] FIG. 1 illustrates an exemplary network architecture for
implementing a system for analyzing user data congestion in a network, in
accordance with an embodiment of the present disclosure.
[0030] FIG. 2 illustrates an exemplary block diagram of the system, in
accordance with an embodiment of the present disclosure.
[0031] FIG. 3 illustrates an exemplary architecture of the system, in
accordance with an embodiment of the present disclosure.
[0032] FIG. 4 illustrates an exemplary flow chart of a method for analyzing
user data congestion in a network, in accordance with an embodiment of the present
disclosure.
[0033] FIG. 5 illustrates an exemplary computer system in which or with
which the embodiments of the present disclosure may be implemented.
[0034] FIG. 6 illustrates an exemplary flow chart of the method, in
accordance with an embodiment of the present disclosure.
[0035] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User Equipments
106 – Network
108 – System
112 – Centralized Server
200 – Block diagram
202 – Processor(s)
204 – Memory
206 – Interface(s)
208 – Processing Engine
210 – Database
212 – Network data analytics function (NWDAF) engine
214 – Artificial intelligence/machine learning (AI/ML) Engine
216 – Other Engine(s)
218 – Forecasting Engine
300 – System architecture
302 – UPF (User Plane Function)
400 – Flow Chart
500 – Computing Hardware
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
600 – Flow diagram
DETAILED DESCRIPTION OF THE DISCLOSURE
[0036] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in

which like reference numerals refer to the same parts throughout the different drawings.
[0037] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0038] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0039] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a
procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0040] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt,
the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not

necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0041] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature,
structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined
in any suitable manner in one or more embodiments.
[0042] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other

equivalent terms or variations thereof may be used interchangeably without
departing from the scope of the invention as defined herein.
[0043] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical, and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing functions, communicating with other user devices, and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, mainframe computer, or any other device as may be obvious to a
person skilled in the art for implementation of the features of the present disclosure.
[0044] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a digital signal processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other
functionality that enables the working of the system according to the present
disclosure. More specifically, the processor is a hardware processor.
[0045] As portable electronic devices and wireless technologies continue to
improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of

technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0046] While considerable emphasis has been placed herein on the
components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the
disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0047] The various embodiments throughout the disclosure will be explained in more detail with reference to FIG. 1 to FIG. 6.
[0048] FIG. 1 illustrates an exemplary network architecture for
implementing a system for analyzing user data congestion in a network, in accordance with an embodiment of the present disclosure.
[0049] Referring to FIG. 1, the network architecture (100) includes one or
more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as
the users (102). Similarly, a person of ordinary skill in the art will understand that one or more user equipments (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the
disclosure. Although three user equipments (104) are depicted in FIG. 1, any number of user equipments (104) may be included without departing from the scope of the ongoing description.
[0050] In an embodiment, the user equipment (104) includes smart devices
operating in a smart environment, for example, an Internet of Things (IoT) system.
In such an embodiment, the user equipment (104) may include, but is not limited
to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal,
electrical, magnetic, etc.), networked appliances, networked peripheral devices,
networked lighting system, communication devices, networked vehicle accessories,
networked vehicular devices, smart accessories, tablets, smart television (TV),
computers, smart security system, smart home system, other devices for monitoring
or interacting with or for the users (102) and/or entities, or any combination thereof.
A person of ordinary skill in the art will appreciate that the user equipment (104)
may include, but is not limited to, intelligent, multi-sensing, network-connected
devices, that can integrate seamlessly with each other and/or with a central server
or a cloud-computing system or any other device that is network-connected.
[0051] In an embodiment, the user equipment (104) includes, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) includes, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) such as touch

pad, touch enabled screen, electronic pen, and the like. A person of ordinary skill
in the art will appreciate that the user equipment (104) may not be restricted to the
mentioned devices and various other devices may be used.
[0052] Referring to FIG. 1, the user equipment (104) communicates with a system (108) through a network (106). The UE (104) may be communicatively coupled with the network (106). The communicative coupling comprises receiving, from the UE (104), a connection request by the network (106), sending an acknowledgment of the connection request, and transmitting a plurality of signals in response to the connection request. In an embodiment, the network (106)
includes at least one of a Fifth Generation (5G) network, a Sixth Generation (6G) network, or the like. The network (106) enables the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) includes a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network
(106) is implemented as, or includes any of a variety of different communication
technologies such as a wide area network (WAN), a local area network (LAN), a
wireless network, a mobile network, a Virtual Private Network (VPN), the Internet,
the Public Switched Telephone Network (PSTN), or the like.
[0053] In another exemplary embodiment, the centralized server (112) includes or comprises, by way of example but not limitation, one or more of: a stand-alone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
[0054] In an embodiment, the system (108) may include a network data analytics function (NWDAF) engine, as shown in FIG. 2, equipped with automatic mechanisms and an Artificial Intelligence/Machine Learning (AI/ML) engine to perform policy computation and future prediction for user data congestion. The NWDAF engine may integrate with a network function (NF), for example, a user plane function (UPF), to get user data congestion-related data for a UE (104) or a

group of UEs for a location upon receiving a subscription request from the UPF. The UPF is a network function implemented for handling and routing user data traffic between user equipment (UE) and external data networks. The UPF manages the data plane, ensuring that data packets are efficiently forwarded from the user equipment (104) to the appropriate destination and vice versa.
[0055] The NWDAF engine integrates with the UPF to collect data related
to user data congestion. The NWDAF engine may use the data collected over a long time and feed it to an AI/ML engine. The AI/ML engine may be trained using the historical data collected from the NWDAF engine and provide possible data
congestion details about the location to the NWDAF engine. The NWDAF engine may then provide the possible data congestion details to an NF consumer to avoid a heavy load situation.
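As a rough illustration of the forecasting step, a trailing moving average can stand in for the trained AI/ML engine, since the disclosure does not specify a particular model; the function name, window size, and data values below are all invented for this sketch.

```python
# Illustrative only: the disclosure does not name a specific AI/ML model, so a
# simple trailing moving average acts as a stand-in forecaster.
def forecast_congestion(history, window=3):
    """Predict the next congestion level from the average of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical hourly packet counts collected by the NWDAF engine for one location.
history = [800.0, 900.0, 1000.0, 1100.0]
print(forecast_congestion(history))  # 1000.0
```

A production AI/ML engine would be trained on the long-term historical data mentioned above rather than averaging a short window, but the interface is the same: history in, predicted congestion level out.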
[0056] In one aspect, the UE (104) is configured for initiating management of user data congestion in a network environment through an application interface of the user equipment (104). A connection between the UE (104) and a nearest server is established. The UE (104) also displays a notification and the possible data congestion via the application interface.
[0057] In an embodiment, if the NF consumer, for example, an Access and Mobility Management Function (AMF) consumer, needs to create a policy for a user data congestion model along with a standard subscription, the NWDAF engine may analyse the data received from the UPF and notify the AMF consumer after applying their policy conditions in case of any threshold breaches. Based on the notification, the AMF consumer may efficiently maintain the load on such locations. In an embodiment, the NWDAF engine may be configured to apply any
25 custom policies to notify forecasted data congestions to users (102). In an example,
the NWDAF engine may be configured to notify operators of the network (106) of
forecasted data congestion when a network function parameter exceeds a threshold
5 times, or any predetermined number of times, within a given time-interval.
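The breach-counting rule above, where a notification fires when a parameter exceeds a threshold a predetermined number of times within a given time interval, can be sketched in Python as a sliding-window counter. This is an illustrative sketch only; the class name, defaults, and window mechanics are assumptions, not part of the disclosure.

```python
from collections import deque


class BreachNotifier:
    """Track threshold breaches of a network parameter within a sliding
    time window and flag when the breach count reaches a limit.

    Hypothetical helper; names and defaults are illustrative only.
    """

    def __init__(self, threshold, max_breaches=5, window_seconds=60.0):
        self.threshold = threshold
        self.max_breaches = max_breaches
        self.window = window_seconds
        self._breaches = deque()  # timestamps of recorded breaches

    def record(self, value, timestamp):
        """Record a sample; return True if a notification should be sent."""
        if value > self.threshold:
            self._breaches.append(timestamp)
        # Drop breaches that fell outside the sliding window.
        while self._breaches and timestamp - self._breaches[0] > self.window:
            self._breaches.popleft()
        return len(self._breaches) >= self.max_breaches
```

A notification would then be emitted only when `record` returns True, keeping operator alerts rate-limited to genuine repeated breaches.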
[0058] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0059] FIG. 2 illustrates an exemplary block diagram of the system, in
accordance with an embodiment of the present disclosure. The system (108) is configured to manage and analyze user data congestion within network environments, facilitating the mitigation of heavy load situations and enhancing overall network efficiency.
[0060] In an embodiment, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read-only memory (EPROM), flash memory, and the like.
[0061] In an embodiment, the system (108) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210).
[0062] In an embodiment, the processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0063] Further, the processing engine(s) (208), as described in FIG. 1, may include an NWDAF engine (212), an AI/ML engine (214), and other engine(s) (216). In an embodiment, the other engine(s) may include, but are not limited to, an input/output engine and a notification engine.
[0064] The processing engine (208) is a component within the network infrastructure that collects, analyzes, and provides insights from network data. The NWDAF engine (212) uses advanced analytics and machine learning algorithms to understand network conditions and predict future states. The NWDAF engine (212) may receive user data congestion-related data for a UE (104) or a group of UEs for a specific location. The AI/ML engine (214) may be trained using a threshold value, time points, and the received data. The AI/ML engine (214) may determine possible data congestion details about the location and provide the data congestion information about the location to the system (108). The system (108) may then provide the possible data congestion details to the NF consumer to avoid a heavy load situation.
[0065] In an aspect, the NWDAF engine (212) is tasked with gathering data related to user congestion, which is information specific to individual user equipment (UE) or groups of UEs within a certain location. User data congestion, typically, is a situation where the volume of user data traffic exceeds the network's capacity to handle it efficiently, leading to reduced performance and degraded quality of service. For example, during a live-streamed sports event, the sudden surge in data traffic can cause user data congestion, resulting in buffering and delays for viewers. The user data congestion information collected includes metrics on network usage, traffic patterns, bandwidth consumption, and other relevant parameters that contribute to understanding congestion patterns.
[0066] In one aspect, the NWDAF engine (212) uses the AI/ML engine (214) configured with artificial intelligence and machine learning algorithms to process the collected data. The AI/ML engine (214) employs models trained using predefined thresholds, temporal data points, and the congestion-related data received from the data collection engine. The purpose of the training is to identify patterns and predict potential congestion before it happens.
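As a simplified stand-in for the trained models described above, the following sketch forecasts the next interval's load as a moving average over recent temporal data points and flags an expected breach of a predefined threshold. The function name and parameters are illustrative assumptions, not part of the disclosure.

```python
def forecast_congestion(history, threshold, window=3):
    """Predict the next interval's traffic load as a moving average of
    the most recent samples and flag whether the prediction breaches
    `threshold`.

    `history` is a chronological list of per-interval load samples.
    Illustrative stand-in for a trained AI/ML model.
    """
    if len(history) < window:
        raise ValueError("not enough temporal data points")
    predicted = sum(history[-window:]) / window
    return predicted, predicted > threshold
```

A real deployment would replace the moving average with a trained model, but the contract stays the same: historical samples in, a prediction and a breach flag out.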
[0067] Utilizing the insights generated by the AI/ML engine (214), the system (108) can forecast data congestion issues within the network. By analysing historical and real-time data, the AI/ML engine (214) can anticipate congested areas and periods, enabling pre-emptive actions to alleviate or avoid network strain. The historical and real-time data is utilized to train the AI/ML engine (214).
[0068] In an embodiment, the processing engine (208) is configured to collect, by the NWDAF engine, data related to the received request from a user plane function (UPF). The processing engine (208) is configured to analyze, by the NWDAF engine, the collected data to determine whether at least one policy is applicable to the collected data. The processing engine (208) is configured to, after analyzing, apply, by the NWDAF engine, the at least one policy on the collected data. The processing engine (208) is configured to send, by the NWDAF engine, at least one notification comprising at least one of the collected data and the applied policy to the NF consumer.
[0069] In an embodiment, the at least one policy is created by the NF consumer based on the collected data, and the at least one policy comprises a set of rules for the NF consumer when the collected data fails to meet predefined criteria. In an embodiment, the NF consumer comprises an Access and Mobility Management Function (AMF).
[0070] In an embodiment, the processing engine (208) is further configured to provide at least one action, by the NF consumers, based on the at least one notification. Once the system has determined possible congestion scenarios, it communicates these details to the network function (NF) consumer. The NF consumer represents the operational entities that manage network resources and can take informed steps to mitigate impending congestion, such as rerouting traffic, scaling resources, or implementing traffic shaping policies.
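A hypothetical sketch of how an NF consumer might map such a notification to one of the mitigation steps named above; the `severity` field and the action names are assumptions made for illustration, not interfaces defined by the disclosure.

```python
def handle_congestion_notification(notification):
    """Map a congestion notification to a mitigation action and return
    the action's name.

    The severity levels and action names are illustrative assumptions.
    """
    severity = notification.get("severity", "low")
    if severity == "critical":
        return "reroute_traffic"        # move flows to less loaded paths
    if severity == "high":
        return "scale_resources"        # allocate additional capacity
    if severity == "medium":
        return "apply_traffic_shaping"  # throttle non-critical flows
    return "monitor_only"               # no action beyond observation
```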
[0071] FIG. 3 illustrates an exemplary system architecture (300), in which the system (108), as described in FIG. 1 and FIG. 2, is implemented, in accordance with an embodiment of the present disclosure. The system architecture (300) includes, but may not be limited to, the user equipment (UE) (104), a user plane function (UPF) (302), the NWDAF engine (212), and the AI/ML engine (214).
[0072] In one aspect, the system architecture (300) is configured to collect user data congestion-related data for the UE (104) or a group of UEs (104) for a location from the UPF (302). Data congestion information, collected from the UPF (302), and the data congestion analytics, obtained through the UE (104), are shared with the NWDAF engine (212) for processing the data congestion. The UE (104) is in bidirectional communication with the system (108).
[0073] In one aspect, the NWDAF relies on automated mechanisms and the AI/ML engine (214) configured for the intricate analysis and policy formulation necessary for optimal data flow regulation and congestion prevention.
[0074] The AI/ML engine (214) processes historical and real-time network traffic data, employing algorithm-based techniques to discern patterns and establish predictive models. Through continuous learning, the AI/ML engine (214) refines its understanding of traffic dynamics, enhancing its predictive accuracy over time.
[0075] The NWDAF engine (212) utilizes these insights to conduct complex policy processing that dynamically responds to varying network conditions. Such policies are informed by parameters including traffic volume thresholds, service quality metrics, and user demand projections. These policies are adaptive, subject to modification as the AI/ML engine (214) integrates new data insights, allowing for a proactive and responsive network management stance. The policies may be stored in the database (210).
[0076] The system (108), in one aspect, may be integrated with one or more data consumers. The one or more data consumers may include, but are not limited to, a session management function (SMF) and the AMF. The SMF is a component within the 5G core network architecture responsible for managing sessions between the UE (104) and networks (106). The SMF is tasked with session establishment, modification, and release, along with the allocation and management of internet protocol (IP) addresses. The SMF also handles policy enforcement and interacts with other network functions, such as the AMF and the UPF (302), to ensure efficient and secure data session management in a 5G network. The AMF is a component within the 5G core network architecture responsible for managing access and mobility procedures, including registration, connection management, and mobility management functions. It interacts with other network functions to ensure seamless connectivity and mobility for the UE in a 5G network.
[0077] In an aspect, the system (108) is configured to communicate with the UPF (302) to obtain user data congestion-related data for the UE (104) or a group of UEs (104) for a location. The system (108) may use the data collected over a long time, such as historical data, and feed it to the AI/ML engine (214). The AI/ML engine (214) may be trained using the historical data. The AI/ML engine (214) may provide possible data congestion details about the location after superimposing the historical data collected from the NWDAF engine (212) into the AI/ML engine (214). The system (108) may provide the details to the NF consumer to avoid a heavy load situation.
[0078] In an embodiment, the NF consumer, for example a user, can create a custom policy. The custom policy may be a set of rules, defined by the NF consumer, which can be applicable to manage the user data congestion. In one example, when the NF consumer creates the custom policy using the AMF for the user data congestion model, along with a standard subscription to the user data congestion analytics, the NWDAF engine (212) may analyse the data received from the UPF (302) and notify the consumer after applying the custom policy conditions in case of any threshold breaches. Based on the notification, the AMF may efficiently manage the load on such locations. Thereby, the NWDAF engine (212) first determines whether a policy, stored in the database (210), is applicable for managing the user data congestion. If the stored policies are not applicable, the user may create a custom policy applicable for managing the user data congestion.
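The stored-policy check with a custom-policy fallback described above can be sketched as follows, representing each policy as a dictionary with an `applies` predicate. This shape is an assumption made purely for illustration.

```python
def select_policy(stored_policies, collected_data, custom_policy=None):
    """Return the first stored policy applicable to the collected data;
    fall back to the NF consumer's custom policy when none applies.

    A policy here is a dict carrying an `applies` predicate over the
    collected data; this layout is an illustrative assumption.
    """
    for policy in stored_policies:
        if policy["applies"](collected_data):
            return policy
    # No stored policy matched: defer to the consumer-defined policy.
    return custom_policy
```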
[0079] FIG. 4 illustrates an exemplary flow chart of a method for analyzing
user data congestion in a network, in accordance with an embodiment of the present
disclosure.
[0080] Referring to FIG. 4, the method (400) may start processing at step (402).
[0081] At step (404), the method (400) may include receiving a request for data congestion analytics from an NF consumer or a user (102) by the system (108). The NF consumer can be an entity like the SMF or AMF, sending a request to the system (108) seeking analysis on data congestion. The request is received through the interface(s) (206), which facilitate communication between the NF consumer or user and the system.
[0082] At step (406), the method (400) may include collecting the user data congestion-related data from the NF consumer or the user (102). The NWDAF engine (212) gathers this data, which may include metrics such as network usage, traffic patterns, and bandwidth consumption from various sources like the UPF (302) and the UE (104). This data is crucial for understanding the current state of the network and identifying congestion patterns.
[0083] The method (400) may include analyzing the collected data. The AI/ML engine (214) processes the collected data. Predefined thresholds and temporal data points are used to train machine learning models. These models analyze the collected data to identify patterns and predict potential data congestion scenarios. This analysis helps in understanding the severity and impact of the congestion.
[0084] At step (408), the method (400) may include determining if at least one policy is applied to the analyzed data. This involves checking if there are any pre-existing policies that need to be enforced based on the analyzed data. Policies might include rules for managing traffic, rerouting data, or prioritizing certain types of data during congestion.
[0085] On determining that at least one policy is applied to the analyzed data, the method (400) may include applying a custom policy on the analyzed data in case of any threshold breaches, at step (410). This means that if the analysis indicates that certain thresholds (like traffic volume or bandwidth usage) are exceeded, the system (108) applies custom policies designed to mitigate the impact. These policies are stored in the memory (204) and can include actions like rerouting traffic, scaling network resources, or implementing traffic shaping measures.
[0086] At step (412), the method (400) includes providing a notification after applying the custom policy and possible data congestion details to the NF consumer. Once the custom policy is applied, the system (108) sends a notification to the NF consumer, informing them of the actions taken and the current status of the network congestion. This notification helps the NF consumer stay informed and take further actions if necessary.
[0087] On determining, at step (408), that at least one policy is not applied to the analyzed data, the method (400) may include providing the possible data congestion details to the NF consumer, at step (412). If no specific policy is required, the system (108) still communicates the analyzed data and possible congestion details to the NF consumer, allowing them to make informed decisions. The process ends at step (414).
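The overall flow of the method (400), steps (402) through (414), can be sketched with the collaborating functions passed in as callables. All names here are illustrative assumptions, not interfaces defined by the disclosure.

```python
def analyze_user_data_congestion(request, collect, analyze, policy_applies,
                                 apply_custom_policy, notify):
    """Sketch of the FIG. 4 flow with collaborators injected as callables.

    All parameter names are illustrative assumptions.
    """
    data = collect(request)                 # steps (404)-(406): gather metrics
    analysis = analyze(data)                # analyze the collected data
    if policy_applies(analysis):            # step (408): policy check
        result = apply_custom_policy(analysis)  # step (410): enforce policy
        notify({"analysis": analysis, "policy_result": result})  # step (412)
    else:
        notify({"analysis": analysis})      # step (412): details only
    return analysis                         # step (414): end
```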
[0088] FIG. 5 is an illustration (500) of a non-limiting example of details of
computing hardware used in the system (108), in accordance with an embodiment of the present disclosure. As shown in FIG. 5, the system (108) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port (560), and a processor (570). A person skilled in the art will appreciate that the system (108) may include more than one processor (570) and communication ports (560). The processor (570) may include various modules associated with embodiments of the present disclosure.
[0089] In an embodiment, the communication port (560) is any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) is chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the system (108) connects.
[0090] In an embodiment, the memory (530) is Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory (540) is any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
[0091] In an embodiment, the mass storage (550) is any current or future mass storage solution, which is used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[0092] In an embodiment, the bus (520) communicatively couples the processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) is, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the system (108).
[0093] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the system. Other operator and administrative interfaces are provided through network connections connected through the communication port (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary illustration (500) limit the scope of the present disclosure.
[0094] The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
[0095] FIG. 5 in view of FIG. 2 illustrates an exemplary embodiment of a Network Data Analytics Function (NWDAF) system (108), which is adept at managing user data congestion within a network environment. This system is composed of several key components, each playing a pivotal role in the overall functionality and effectiveness of the invention.
[0096] An NWDAF engine (212) is tasked with the function of gathering data related to user data congestion. The NWDAF engine (212) obtains this information for individual user equipment (UE) (104) or a collective group of UEs situated within a particular geographic location. The data is sourced from a user plane function (UPF) (302), ensuring that the information is both relevant and timely.
[0097] To process the collected data, the system (108) incorporates a forecasting engine (218). This engine is designed to analyze the data over an extended period, applying sophisticated algorithms to project potential data congestion scenarios for the location in question. The use of historical data is a cornerstone of the forecasting engine (218), allowing it to make accurate predictions about future network conditions.
[0098] Communication within the system (108) is facilitated through an interface (206). This interface serves the dual purpose of enabling subscription to the UPF (302) and providing a conduit for interaction with one or more network function (NF) consumers. The versatility of the interface (206) is essential for maintaining a dynamic and responsive network analytics system.
[0099] The processor (202) is functionally linked to both the NWDAF engine (212) and the forecasting engine (218). It is responsible for executing an AI/ML engine that has been meticulously trained on historical data, allowing the system (108) to adapt and respond to the evolving patterns of network usage effectively.
[00100] The system (108) also includes an AI/ML engine (214). The AI/ML
engine (214) takes on the role of refining the AI/ML model with the historical data that has been collated. Its ability to learn and forecast potential data congestion scenarios is vital for pre-emptive network management strategies.
[00101] The system (108) is engineered not only to collate and analyze data but also to provide actionable insights to NF consumers. These insights are instrumental in aiding NF consumers to manage network loads and avert situations that could lead to heavy congestion.
[00102] The system (108) is configured to support NF consumers, such as the AMF. It empowers these consumers to develop policies for managing user data congestion. These policies are grounded in standard user data congestion analytics but can be customized to address specific network management needs.
[00103] In instances where network data thresholds are exceeded, indicating potential congestion, the system (108) is equipped to apply custom policies to the analyzed data. Upon application of these policies, the system is configured to issue notifications and relay details regarding the data congestion to the AMF. This
capability ensures that the system (108) is capable of delivering critical information to facilitate swift and informed decision-making.
[00104] FIG. 6 illustrates an exemplary flow chart of the method (600), in
accordance with an embodiment of the present disclosure.
[00105] At step (602), the method (600) comprises receiving, by a network data analytics function (NWDAF) engine (212), a request for user data congestion analysis from a network function (NF) consumer. User data congestion analysis is essential for maintaining optimal performance and ensuring seamless user experiences. By continuously monitoring and analyzing data traffic patterns, network operators can identify potential congestion points and take proactive measures to manage and alleviate them. Through sophisticated analytics tools and algorithms, such as those employed by the NWDAF (Network Data Analytics Function), operators can detect anomalies in data flow, spikes in usage, or bottlenecks in network capacity. This analysis allows for timely intervention, such as dynamically reallocating resources, adjusting Quality of Service (QoS) parameters, or prioritizing critical applications during peak usage periods. The NF consumer is an entity within the network infrastructure that utilizes the services and data provided by other network functions to perform one or more operations. The NF (Network Function) consumer, in the context of telecommunications networks, refers to an essential component that plays a pivotal role in the service-based architecture of 5G and similar advanced network frameworks. NF consumers are entities within the network architecture, including applications or network elements, that utilize services and data provided by other network functions. These consumers interact with various NFs to orchestrate and manage network resources effectively. In an embodiment, the NF consumer comprises an Access and Mobility Management Function (AMF) and a session management function (SMF). The request is associated with user data congestion analysis for the network. The request likely includes parameters such as specific User Equipment (UE) identifiers, selected areas (such as Tracking Area Identities or Cell IDs), and possibly Single Network Slice Selection Assistance Information (S-NSSAI).
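The request parameters mentioned above can be pictured as a simple data structure. The field names below are illustrative assumptions and not the standardized 3GPP attribute names.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CongestionAnalyticsRequest:
    """Illustrative shape of an NF consumer's analytics request.

    Field names are assumptions for this sketch, not 3GPP-defined names.
    """
    consumer_id: str                                         # requesting AMF/SMF
    ue_ids: List[str] = field(default_factory=list)          # specific UE identifiers
    tracking_areas: List[str] = field(default_factory=list)  # TAIs or Cell IDs
    s_nssai: Optional[str] = None                            # requested network slice
```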
[00106] At step (604), the method includes collecting, by the NWDAF engine (212), data related to the received request from a user plane function (UPF). The data collected includes metrics on network usage, traffic patterns, bandwidth consumption, and other relevant parameters that contribute to understanding congestion patterns.
[00107] At step (606), the method includes analyzing, by the NWDAF engine (212), the collected data to determine whether at least one policy is applicable to the collected data. In an embodiment, the at least one policy is created by the NF consumer based on the collected data, and the at least one policy comprises a set of rules for the NF consumer when the collected data fails to meet predefined criteria. The analysis is performed using sophisticated algorithms and historical data to assess the current network conditions against predefined policies. Upon receiving data from various network components, such as the UPF, the NWDAF engine employs advanced analytics to scrutinize network performance metrics, user activity patterns, and quality of service indicators. This analysis aims to detect anomalies, identify trends, and evaluate whether predefined policies, ranging from congestion thresholds to quality thresholds, are met (the predefined criteria). The predefined policies encompass a wide range of rules and parameters, each ensuring efficient resource allocation, optimal service delivery, and robust security measures. For example, Quality of Service (QoS) policies define specific performance metrics, like latency and bandwidth, that must be maintained to meet service level agreements (SLAs) with customers. Security policies outline protocols for data encryption, access control, and threat mitigation to safeguard against unauthorized access and cyber threats. Traffic management policies regulate the prioritization and shaping of network traffic to prevent congestion and ensure smooth data flow. Upon analyzing the data, the NF consumer identifies scenarios where network performance or user experience falls below expected standards. In response, the NF consumer formulates policies that define specific rules and conditions under which corrective actions are automatically initiated. For example, if data traffic exceeds a specified congestion threshold, the policy might dictate prioritizing critical services or implementing traffic management measures to alleviate strain. These policies are comprehensive, encompassing conditions for activation, parameters that trigger responses, and actions to be taken in specific scenarios.
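A policy comprising activation conditions, trigger parameters, and resulting actions can be evaluated against collected metrics roughly as follows; the rule layout (metric/limit pairs mapped to actions) is an assumed, simplified shape used only for illustration.

```python
def evaluate_policy(policy, metrics):
    """Check a policy's trigger conditions against collected metrics and
    return the list of actions to initiate (empty if nothing triggers).

    The policy layout is an illustrative assumption: each rule names a
    metric, a limit, and the action to take when the limit is exceeded.
    """
    actions = []
    for rule in policy["rules"]:
        value = metrics.get(rule["metric"], 0)
        if value > rule["limit"]:       # trigger parameter exceeded
            actions.append(rule["action"])
    return actions
```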
[00108] At step (608), the method (600) comprises, after analyzing, applying, by the NWDAF engine (212), the at least one policy on the collected data. The NWDAF engine applies the at least one policy on the data in the event of any threshold breaches. Once the NWDAF engine completes its analysis of the collected data, it proceeds to apply the predefined policies to manage and optimize network operations based on the identified insights. This involves utilizing the results of data analytics to implement specific rules and actions aimed at addressing various network conditions and ensuring desired performance levels. When threshold breaches occur, such as exceeding predefined limits for network congestion, latency, or other performance metrics, the NWDAF engine triggers the corresponding policies. These policies are specifically designed to dictate how the network should react in such scenarios. For example, if the data analysis indicates that a particular area of the network is experiencing congestion beyond acceptable levels, the NWDAF engine may apply a policy to prioritize critical traffic, reroute non-essential traffic, or dynamically allocate additional resources to alleviate the congestion. For instance, if the analysis reveals a surge in data traffic that threatens to exceed network capacity, the NWDAF engine may activate policies designed to prioritize critical services or allocate additional resources dynamically.
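One way to picture the dynamic resource-allocation response to a forecast breach is a fixed-step scale-up until the predicted load fits under capacity with some headroom. The scaling rule, the headroom fraction, and the defaults are illustrative assumptions, not part of the disclosure; real policies would combine this with rerouting and prioritization.

```python
def apply_congestion_policy(current_capacity, predicted_load,
                            headroom=0.2, step=100):
    """On a forecast breach, grow capacity in fixed steps until the
    predicted load fits under capacity with the given headroom.

    All parameters are illustrative assumptions for this sketch.
    """
    required = predicted_load * (1 + headroom)  # load plus safety margin
    capacity = current_capacity
    while capacity < required:
        capacity += step                        # allocate one more unit
    return capacity
```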
[00109] At step (610), the method (600) comprises sending, by the NWDAF engine (212), at least one notification comprising at least one of the collected data and the applied policy to the NF consumer.
[00110] In an embodiment, the method further comprises providing at least one action, by the NF consumers, based on the at least one notification. Upon receiving these notifications, the NF consumers take proactive actions to address or capitalize on the insights provided. For example, if the NWDAF engine alerts about potential network congestion exceeding acceptable limits, an NF consumer responsible for traffic management may dynamically reroute traffic flows to less congested paths or prioritize critical services to maintain Quality of Service (QoS) levels.
[00111] In an exemplary embodiment, the present invention discloses a user equipment (UE) communicatively coupled with a network. The coupling comprises the steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE, and transmitting a plurality of signals in response to the connection request. The user data congestion in the network is analyzed by a method comprising receiving, by a network data analytics function (NWDAF) engine, a request for user data congestion analysis from a network function (NF) consumer. The method comprises collecting, by the NWDAF engine, data related to the received request from a user plane function (UPF). The method comprises analyzing, by the NWDAF engine, the collected data to determine whether at least one policy is applicable to the collected data. The method comprises, after analyzing, applying, by the NWDAF engine, the at least one policy on the collected data. The method comprises sending, by the NWDAF engine, at least one notification comprising at least one of the collected data and the applied policy to the NF consumer.
[00112] The present disclosure provides a technical advancement related to analyzing user data congestion in a network. This advancement addresses the limitations of existing solutions: by continuously monitoring and analyzing data traffic patterns, network operators can identify potential congestion points and take proactive measures to manage and alleviate them. Through sophisticated analytics tools and algorithms, such as those employed by the NWDAF, operators can detect anomalies in data flow, spikes in usage, or bottlenecks in network capacity. This analysis allows for timely intervention, such as dynamically reallocating resources, adjusting Quality of Service (QoS) parameters, or prioritizing critical applications during peak usage periods. Thus, the present disclosure optimizes network efficiency and reliability while enhancing service delivery.
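The spike detection mentioned above can be illustrated with a minimal rolling-statistics sketch. The window size and the deviation factor `k` are assumed values; the specification does not prescribe any particular detection algorithm.

```python
from collections import deque
from statistics import mean, stdev

# Illustrative spike detector: flags a sample as anomalous when it deviates
# from the recent rolling mean by more than k standard deviations.

def make_spike_detector(window: int = 10, k: float = 3.0):
    history = deque(maxlen=window)

    def is_spike(sample_mbps: float) -> bool:
        if len(history) >= 3:
            mu, sigma = mean(history), stdev(history)
            spiky = sigma > 0 and abs(sample_mbps - mu) > k * sigma
        else:
            spiky = False  # not enough history yet
        history.append(sample_mbps)
        return spiky

    return is_spike

detect = make_spike_detector()
loads = [100, 102, 99, 101, 98, 100, 480]   # last sample is a usage spike
flags = [detect(x) for x in loads]
print(flags)  # → [False, False, False, False, False, False, True]
```

A detector of this kind would run inside the NWDAF's analytics loop, with flagged samples feeding the policy-application step.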
[00113] While the foregoing describes various embodiments of the present disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00114] The present disclosure provides a system and a method that includes a network data analytics function (NWDAF) equipped with automatic mechanisms and Artificial Intelligence/Machine Learning (AI/ML) to efficiently perform policy computation and future prediction for user data congestion.
[00115] The present disclosure provides a system and a method that provides a customized policy for the data analysis to a user, based on which the user may take efficient actions to avoid data congestion problems.
[00116] The present disclosure provides a system and a method that generates predictions for possible data congestion for a user equipment (UE) or a group of UEs and notifies the user in advance to avoid data congestion problems.
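The advance-notification advantage can be illustrated with a simple trend extrapolation: fit a line to recent load samples and warn when the predicted next value would cross a congestion threshold. The linear model, window, and threshold are assumptions for the sketch; the disclosure leaves the AI/ML prediction model open.

```python
# Illustrative congestion prediction; model choice and threshold are assumed.

def predict_next(samples: list[float]) -> float:
    """Least-squares linear extrapolation of the next sample."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, samples)) / denom
    intercept = y_mean - slope * x_mean
    return slope * n + intercept  # extrapolated value at the next time step

def congestion_warning(samples: list[float], threshold_mbps: float) -> bool:
    """True when the predicted next load exceeds the congestion threshold."""
    return predict_next(samples) > threshold_mbps

loads = [500, 560, 620, 680, 740]          # steadily rising load
print(congestion_warning(loads, 780.0))    # → True (predicted ≈ 800)
```

In the disclosed system such a warning would be delivered to the UE or group of UEs as a notification before congestion actually occurs.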

We Claim:
1. A method (600) for analyzing user data congestion in a network (106), the
method (600) comprising:
receiving (602), by a network data analytics function (NWDAF)
engine (212), a request for user data congestion analysis from a network
function (NF) consumer;
collecting (604), by the NWDAF engine (212), data related to the received request from a user plane function (UPF);
analyzing (606), by the NWDAF engine (212), the collected data to
determine whether at least one policy is applicable to the collected data;
after analyzing, applying (608), by the NWDAF engine (212), the at least one policy on the collected data; and
sending (610), by the NWDAF engine (212), at least one notification
comprising at least one of the collected data and the applied policy to the
NF consumer.
2. The method (600) of claim 1, wherein the at least one policy is created by
the NF consumer based on the collected data and wherein the at least one
policy comprises a set of rules to be followed by the NF consumer when the
collected data fails to meet a predefined criterion.
3. The method (600) of claim 1, wherein the NF consumer comprises at least
an access and mobility management function (AMF) and a session
management function (SMF).
4. The method of claim 1, further comprising performing at least one action
by the NF consumers, based on the at least one notification.
5. A system (108) for analyzing user data congestion in a network (106), the
system (108) comprising:
a memory (204);
a processing engine (208) configured to execute a set of instructions stored in the memory (204) to:
receive, by a network data analytics function (NWDAF) engine
(212), a request for user data congestion analysis from a network function
(NF) consumer;
collect, by the NWDAF engine (212), data related to the received request from a user plane function (UPF);
analyze, by the NWDAF engine (212), the collected data to
determine whether at least one policy is applicable to the collected data;
after analyzing, apply, by the NWDAF engine (212), the at least one policy on the collected data; and
send, by the NWDAF engine (212), at least one notification
comprising at least one of the collected data and the applied policy to the
NF consumer.
6. The system (108) of claim 5, wherein the at least one policy is created by
the NF consumer based on the collected data and wherein the at least one
policy comprises a set of rules to be followed by the NF consumer when the
collected data fails to meet a predefined criterion.
7. The system (108) of claim 5, wherein the NF consumer comprises at least
an access and mobility management function (AMF) and a session
management function (SMF).
8. The system (108) of claim 5, further configured to provide at least one
action, by the NF consumers, based on the at least one notification.
9. A user equipment (UE) (104) communicatively coupled with a network
(106), wherein the coupling comprises the steps of:
receiving, by the network (106), a connection request from the UE (104);
sending, by the network (106), an acknowledgment of the
connection request to the UE (104); and
transmitting a plurality of signals in response to the connection
request, wherein user data congestion in the network (106) is analyzed by a method as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321050217-STATEMENT OF UNDERTAKING (FORM 3) [25-07-2023(online)].pdf 2023-07-25
2 202321050217-PROVISIONAL SPECIFICATION [25-07-2023(online)].pdf 2023-07-25
3 202321050217-FORM 1 [25-07-2023(online)].pdf 2023-07-25
4 202321050217-DRAWINGS [25-07-2023(online)].pdf 2023-07-25
5 202321050217-DECLARATION OF INVENTORSHIP (FORM 5) [25-07-2023(online)].pdf 2023-07-25
6 202321050217-FORM-26 [25-10-2023(online)].pdf 2023-10-25
7 202321050217-FORM-26 [26-04-2024(online)].pdf 2024-04-26
8 202321050217-FORM 13 [26-04-2024(online)].pdf 2024-04-26
9 202321050217-FORM-26 [30-04-2024(online)].pdf 2024-04-30
10 202321050217-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321050217-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321050217-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321050217-CORRESPONDENCE(IPO)-(WIPO DAS)-10-07-2024.pdf 2024-07-10
14 202321050217-ORIGINAL UR 6(1A) FORM 26-100724.pdf 2024-07-15
15 202321050217-FORM-5 [24-07-2024(online)].pdf 2024-07-24
16 202321050217-DRAWING [24-07-2024(online)].pdf 2024-07-24
17 202321050217-CORRESPONDENCE-OTHERS [24-07-2024(online)].pdf 2024-07-24
18 202321050217-COMPLETE SPECIFICATION [24-07-2024(online)].pdf 2024-07-24
19 Abstract-1.jpg 2024-10-04
20 202321050217-FORM-9 [23-10-2024(online)].pdf 2024-10-23
21 202321050217-FORM 18A [24-10-2024(online)].pdf 2024-10-24
22 202321050217-FORM 3 [12-11-2024(online)].pdf 2024-11-12
23 202321050217-Proof of Right [08-02-2025(online)].pdf 2025-02-08
24 202321050217-FER.pdf 2025-02-13
25 202321050217-ORIGINAL UR 6(1A) FORM 1-270225.pdf 2025-03-01
26 202321050217-OTHERS [11-04-2025(online)].pdf 2025-04-11
27 202321050217-FORM 3 [11-04-2025(online)].pdf 2025-04-11
28 202321050217-FORM 3 [11-04-2025(online)]-1.pdf 2025-04-11
29 202321050217-FER_SER_REPLY [11-04-2025(online)].pdf 2025-04-11
30 202321050217-DRAWING [11-04-2025(online)].pdf 2025-04-11
31 202321050217-CLAIMS [11-04-2025(online)].pdf 2025-04-11
32 202321050217-PatentCertificate12-08-2025.pdf 2025-08-12
33 202321050217-IntimationOfGrant12-08-2025.pdf 2025-08-12

Search Strategy

1 202321050217_SearchStrategyNew_E_NWDAFE_07-02-2025.pdf

ERegister / Renewals

3rd: 06 Nov 2025

From 25/07/2025 - To 25/07/2026