Abstract: The present disclosure relates to a system (108, 300) and a method (450) for quality of service (QoS) analytics in a network environment. The method (450) includes receiving a request to determine QoS metrics from network entities (110), validating the request and transmitting a negative response when validation is unsuccessful, retrieving, when validation is successful, a set of operational data associated with user equipment (UEs) (104), forecasting one or more QoS metrics for the requested one or more UEs (104) based on the set of operational data, determining whether any of the forecasted one or more QoS metrics breach a corresponding threshold range, and transmitting the predicted one or more QoS metrics to the one or more network entities (110) when the one or more threshold ranges are anticipated to be breached. FIGURE 3
FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR DETERMINING QUALITY OF SERVICE METRICS IN A
NETWORK
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad -
380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE DISCLOSURE
[002] The embodiments of the present disclosure generally relate to
communication networks. In particular, the present disclosure relates to a system and a method for determining quality-of-service in a network.
DEFINITION
[003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[004] The expression ‘Intelligent Performance Management (IPM)’ used hereinafter in the specification refers to a system for dynamically optimizing the performance of network resources based on real-time data and advanced analytics. The IPM system integrates various components such as monitoring tools, predictive algorithms, and automated decision-making processes to continuously assess and adjust network configurations. This enables proactive identification and resolution of performance bottlenecks, congestion points, and other issues that could impact service quality.
[005] The expression ‘Quality of Service (QoS) metrics’ used hereinafter
in the specification refers to measurable parameters used to evaluate the performance of a service according to predefined standards or user expectations.
[006] The expression ‘Network Data Analytics Function (NWDAF)’ used hereinafter in the specification refers to a key component in 5G networks, specifically within the Service-Based Architecture (SBA) defined by the 3GPP (3rd Generation Partnership Project). NWDAF is responsible for collecting, analyzing, and processing network data to provide insights that enable various network functions and services to operate more efficiently and effectively.
[007] The expression ‘Radio Access Network (RAN) logs’ used hereinafter in the specification refers to detailed records or data generated by the components and devices within a Radio Access Network, which is a critical part of a mobile telecommunications system. The RAN logs capture various operational and performance-related information about the network elements, including base stations (NodeBs, eNodeBs in LTE/4G, gNodeBs in 5G), antennas, and other equipment responsible for wireless communication with mobile devices.
[008] These definitions are in addition to those expressed in the art.
BACKGROUND OF THE DISCLOSURE
[009] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0010] Operators of telecommunication networks often quantify the quality of service provided over the network. Although networks today generate and store high volumes of data, it is still difficult to analyse quality of service with acceptable accuracy. Further, the manner in which existing networks implement network
entities, such as the Network Data Analytics Function (NWDAF), severely limits the network’s ability to measure or quantify the quality of service provided to users. Further, existing solutions often do not provide predictive analytics on the quality of service of the network.
[0011] Monitoring the quality of service provided by the network may be important for ensuring that subscribers are provided with a standard quality of service, while also allowing operators to optimize for network performance and cost. Additionally, existing solutions struggle to provide insights by combining one or more quality of service metrics. For instance, existing solutions do not allow for analyzing aggregates of quality-of-service metrics associated with a plurality of users. Furthermore, existing solutions do not forecast or notify operators of degradation or over-performance of the network to enable preventative maintenance or proactive expansion of the network.
[0012] Conventional systems and methods face difficulties in accurately analyzing and predicting the quality of service over telecommunication networks. These challenges include the inability to combine and analyze multiple quality of service metrics effectively, as well as the lack of predictive analytics for proactive network management. There is, therefore, a need in the art to provide a method and system that can overcome the shortcomings of the existing prior art.
SUMMARY OF THE DISCLOSURE
[0013] The present disclosure discloses a system for determining quality of service (QoS) metrics in a network environment. The system includes a processing engine configured to receive one or more requests to determine one or more QoS metrics from one or more network entities. The system is further configured to retrieve a set of operational data associated with user equipment requesting QoS analysis from a memory. The set of operational data is data generated by base stations and the one or more network entities providing services to the user equipment and includes Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics. The system transmits the retrieved set of
operational data to an Artificial Intelligence (AI) engine, with the AI engine trained on historical operational data and historical QoS metrics. The system is further configured to forecast one or more QoS metrics associated with the user equipment based on the set of operational data and determine if the forecasted one or more QoS metrics fail to exceed a predefined threshold range, with the failure to exceed the predefined threshold indicative of malfunctioning of at least one service. The system then sends a notification to the user equipment upon determining malfunctioning of the at least one service. The system also determines whether any of the operational data received breaches a corresponding threshold range and transmits the predicted one or more QoS values to the one or more network entities when the one or more threshold ranges are determined to be breached.
[0014] In one embodiment, the system is further configured to identify one or more network errors causing the one or more QoS metrics to breach the corresponding predefined threshold ranges and generate one or more recommendations to maintain the at least one QoS metric within the corresponding threshold range.
[0015] In one embodiment, the retrieved operational data is pre-processed before being transmitted to the AI engine.
[0016] In one embodiment, the AI engine is configured to dynamically adjust the predefined threshold range based on current network conditions.
[0017] In one embodiment, the system includes an interface for presenting
the forecasted QoS metrics and any threshold breaches to a network administrator.
[0018] In one embodiment of the present disclosure, a method for determining quality of service (QoS) metrics in a network environment is disclosed. The method includes receiving a request to determine one or more QoS metrics from one or more network entities. The method further includes retrieving a set of operational data associated with one or more user equipment requesting QoS analysis from a memory based on the one or more requests. The set of operational data is data generated by base stations and the one or more network entities
providing services to the user equipment and includes Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics. The method further includes transmitting the retrieved set of operational data to an Artificial Intelligence (AI) engine, with the AI engine trained on historical operational data and historical QoS metrics. The method further includes forecasting one or more QoS metrics associated with the user equipment based on the set of operational data, determining if the forecasted one or more QoS metrics fail to exceed a predefined threshold range, with the failure to exceed the predefined threshold indicative of malfunctioning of at least one service, and sending a notification to the user equipment upon determining malfunctioning of the at least one service.
[0019] The method further comprises identifying one or more network errors causing the one or more QoS metrics to breach the corresponding predefined threshold ranges and generating one or more recommendations to maintain the at least one QoS metric within the corresponding threshold range.
[0020] The method further comprises pre-processing the operational data before transmitting it to the AI engine.
[0021] The method further comprises dynamically adjusting the predefined
threshold range based on current network conditions.
[0022] The method further comprises validating the one or more requests to determine one or more QoS metrics and transmitting to the user equipment a negative response when validation is unsuccessful.
[0023] The present disclosure further discloses a user equipment that is communicatively coupled to a system. The coupling comprises steps of receiving a connection request, sending an acknowledgment of the connection request to the system, and transmitting a plurality of signals in response to the connection request. The system is configured to determine quality of service (QoS) metrics in a network environment. The system includes a processing engine configured to receive one or more requests to determine one or more QoS metrics from one or more network
entities. The system is further configured to retrieve a set of operational data associated with user equipment requesting QoS analysis from a memory. The set of operational data is data generated by base stations and the one or more network entities providing services to the user equipment and includes Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics. The system transmits the retrieved set of operational data to an Artificial Intelligence (AI) engine, with the AI engine trained on historical operational data and historical QoS metrics. The system is further configured to forecast one or more QoS metrics associated with the user equipment based on the set of operational data and determine if the forecasted one or more QoS metrics fail to exceed a predefined threshold range, with the failure to exceed the predefined threshold indicative of malfunctioning of at least one service. The system then sends a notification to the user equipment upon determining malfunctioning of the at least one service.
OBJECTS OF THE DISCLOSURE
[0024] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as listed herein below.
[0025] An object of the present disclosure is to provide a system and a
method for providing quality of service analytics.
[0026] Another object of the present disclosure is to provide a system and a method that uses artificial intelligence (AI) engines for forecasting one or more quality of service metrics.
[0027] Another object of the present disclosure is to provide a system and a method that generates one or more recommendations to maintain quality of service metrics within a predetermined range.
[0028] Another object of the present disclosure is to provide a system and a method that allows for predictive analytics and enables preventive maintenance or proactive expansion of the network.
[0029] Another object of the present disclosure is to provide a dashboard
for monitoring and analyzing quality of service metrics.
[0030] Another object of the present disclosure is to provide a system and a method that notifies operators of networks when any of the quality-of-service metrics breach any corresponding threshold ranges.
[0031] Another object of the present disclosure is to provide a system and a
method that minimizes call drops, lowers latency and raises throughput.
[0032] Another object of the present disclosure is to provide a system and a
method that predicts abnormalities faced by user equipment in the network.
BRIEF DESCRIPTION OF THE DRAWINGS
[0033] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0034] FIG. 1 illustrates an exemplary network architecture implementing
a system for determining quality-of-service metrics in a network, in accordance with embodiments of the present disclosure.
[0035] FIG. 2 illustrates a block diagram of the system, in accordance with embodiments of the present disclosure.
[0036] FIG. 3 illustrates an exemplary implementation of the system, in
accordance with embodiments of the present disclosure.
[0037] FIG. 4A illustrates a flowchart of a method for determining quality-
of-service metrics, in accordance with embodiments of the present disclosure.
[0038] FIG. 4B illustrates exemplary steps of a method for determining quality-of-service metrics in the network, in accordance with embodiments of the present disclosure.
[0039] FIG. 5 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.
[0040] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User Equipments (UEs)
106 - Network
108 - System
110 - Network Entity
112 - Base Station
202 - Processor
204 - Memory
206 - Communication Module/Interface
208 - Processing Unit/Engine
210 - Database (DB)
212 - Network Data Analytics Function (NWDAF) Engine
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
DETAILED DESCRIPTION OF THE DISCLOSURE
[0041] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0042] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0043] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0044] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0045] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0046] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0047] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0048] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical, and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing functions, communicating with other user devices, and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0049] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0050] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0051] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0052] Traditionally, network systems have struggled to predict and manage
QoS metrics effectively, often leading to service degradation marked by increased call drops, latency issues, and inconsistent throughput. The existing methods lacked the foresight to pre-emptively adjust network parameters to mitigate these issues.
[0053] To overcome such issues, a sophisticated approach to forecasting QoS Key Performance Indicators (KPIs) (QoS metrics) using the Network Data Analytics Function (NWDAF) is implemented. NWDAF involves collecting diverse data sets from various network components, such as RAN throughput, user equipment behavior from the Access and Mobility Management Function (AMF), and KPIs from Intelligent Performance Management (IPM). By analyzing this data, the present disclosure can predict when the network is likely to breach predefined QoS thresholds.
[0054] The present disclosure enables network operators to proactively adjust and optimize network settings based on predictive insights, thereby enhancing overall service quality and user experience. This proactive approach to managing and adjusting network parameters in real-time marks a significant advancement over the existing reactive methods in network management.
[0055] Various embodiments throughout the disclosure will be explained in
more detail with reference to FIGs. 1-5.
[0056] FIG. 1 illustrates an exemplary network architecture (100) implementing a system (108) for determining quality-of-service metrics in a network, in accordance with embodiments of the present disclosure.
[0057] Referring to FIG. 1, the network architecture (100) may include one or more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that one or more user equipments (104-1, 104-2…104-N) may be individually referred
to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although two user equipments (104) are depicted in FIG. 1, any number of user equipments (104) may be included without departing from the scope of the ongoing description.
[0058] In an embodiment, the user equipment (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, where the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity such as touch pad, touch enabled screen, electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used.
[0059] In an embodiment, the user equipment (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the user equipment (104) may include, but is not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal,
electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0060] Referring to FIG. 1, the user equipment (104) may communicate with the system (108) through the network (106). In an embodiment, the network (106) may include at least one of a Fifth Generation (5G) network, a 6G network, or the like. The network (106) may enable the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like. In an embodiment, each of the UEs (104) may have a unique identifier attribute associated therewith. In an embodiment, the unique identifier attribute may be indicative of a Mobile Station International Subscriber Directory Number (MSISDN), an International Mobile Equipment Identity (IMEI) number, an International Mobile Subscriber Identity (IMSI), a Subscriber Permanent Identifier (SUPI), and the like.
[0061] The UE (104) is configured to initiate a process of performing quality of service (QoS) analytics in a network environment through an interface (206) of the user equipment (104). The QoS analytics may refer to a process of measuring, analyzing, and optimizing the performance of network services and
applications to ensure they meet predefined service level agreements (SLAs) and user expectations. QoS analytics involves monitoring various performance metrics and parameters in real-time or near-real-time to assess the quality and reliability of services delivered over a network. During QoS analytics, the system is configured to determine one or more QoS metrics of the network associated with the UE. This involves establishing a connection between the user equipment (104) and a server. Once the connection is established, the system (108) performs the QoS analytics by collecting and analyzing the relevant data. The results of one or more QoS values are then displayed to the one or more network entities (110) when the one or more threshold ranges are anticipated to be breached. The result is displayed via the interface (206), ensuring that the network entities (110) are informed about the current and forecasted QoS metrics.
[0062] According to one aspect, the network (106) may include one or more
base stations (112), which the UEs (104) may connect to and request services from.
The base station (112) may be a network infrastructure that provides wireless access to one or more terminals associated therewith. The base station (112) may have coverage defined to be a predetermined geographic area based on the distance over which a signal may be transmitted. The base station (112) may include, but not be limited to, wireless access point, evolved NodeB (eNodeB), 5G node or next generation NodeB (gNB), wireless point, transmission/reception point (TRP), and the like. In an embodiment, the base station (112) may include one or more operational units that enable telecommunication between two or more UEs (104). In an embodiment, the one or more operational units may include, but not be limited to, transceivers, baseband unit (BBU), remote radio unit (RRU), antennae, mobile switching centres, radio network control units, one or more processors associated thereto, and a plurality of network entities (110) such as Access and Mobility Management Function (AMF) unit, Session Management Function (SMF) unit, Network Exposure Function (NEF) units, or any custom built functions executing one or more processor-executable instructions, but not limited thereto.
[0063] The base stations (112) and the one or more network entities (110)
associated therewith may generate a set of operational data while providing services to the one or more UEs (104). In an embodiment, the set of operational data may include, but not limited to, Radio Access Network (RAN) logs, session logs, historical QoS data, Key Performance Indicators (KPIs) metrics, and the like. In an embodiment, the RAN logs may be generated as the base stations (112) interact with each other and the UE (104) for providing services thereto. In an embodiment, the RAN logs may include one or more attributes that may be used to derive performance and health metrics of the network (106). In an embodiment, the one or more attributes may include, but not be limited to, radio summary logs, timestamps, UE details such as unique identifier attributes, configuration details, device type, etc., call event details, signal strength metrics, throughput metrics, unique attributes associated with the base stations (112), alarms and fault details, error codes, and the like.
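By way of a non-limiting illustration, a single RAN log entry carrying the attributes described above might be modelled as follows. This is a minimal sketch; the field names are assumptions chosen for illustration and do not represent a normative log format of the present disclosure:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RanLogRecord:
    """One illustrative RAN log entry (field names are assumed)."""
    timestamp: datetime               # when the base station recorded the event
    cell_id: str                      # unique attribute of the base station/cell
    ue_id: str                        # unique identifier attribute of the UE (e.g., IMSI/SUPI)
    device_type: str                  # UE configuration/device-type detail
    call_event: str                   # call event detail, e.g., "setup", "handover", "drop"
    signal_strength_dbm: float        # signal strength metric
    throughput_mbps: float            # throughput metric observed for the UE
    error_code: Optional[int] = None  # alarm/fault detail, if any
```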
[0064] In an embodiment, the session logs may be generated as the one or more network entities (110) create and maintain sessions between or with the one or more UEs (104) for providing network services. In an example, the network entities (110) are AMFs (Access and Mobility Management Functions) that may store and make available, in the session logs, the unique identifier attributes of the one or more UEs (104) that have used or are using the network (106). In an embodiment, the base station (112) may include an Intelligent Performance Management (IPM) system (such as the IPM shown in FIG. 3) that determines one or more Key Performance Indicators (KPIs) that indicate the health and performance of the network (106). In operation, the IPM system collects and analyzes data from network elements, user devices, and application interactions to generate insights into network performance trends and patterns. Using these insights, the IPM system dynamically allocates resources, adjusts traffic prioritization, and implements quality-of-service measures to ensure optimal performance under varying conditions. In an embodiment, the operational data may be used for computing one or more QoS metrics. In an embodiment, the QoS metrics may quantify the quality of the services provided by the network (106).
[0065] In an embodiment, the system (108) may be coupled to a monitoring unit (114) that may provide an audio-visual interface to the user (102) for monitoring and analysing data. In an embodiment, the monitoring unit (114) may provide an interface, including, but not limited to, a Graphical User Interface (GUI), an Application Programming Interface (API) or a Command Line Interface (CLI). In an embodiment, the monitoring unit (114) may provide a dashboard for analysing and monitoring the QoS metrics in real time. In an embodiment, the monitoring unit (114) may be used by users (102) or operators of the network (106). In an embodiment, the monitoring unit (114) may be implemented on the UE (104).
[0066] In an embodiment, the system (108) may receive a request for determining one or more QoS metrics from one or more network entities (110). In an example, the one or more QoS metrics may encompass various performance parameters associated with network services, including, but not limited to, latency, throughput, jitter, packet loss, and reliability. These QoS metrics indicate the quality and efficiency of data transmission and reception within the network environment. Latency, also known as delay, measures the time taken for a data packet to travel from the source to the destination across the network. Low latency is essential for real-time applications like voice over IP (VoIP) and online gaming. Jitter refers to the variation in packet arrival times. High jitter can cause disruptions in audio and video communications. It should be minimized for applications requiring smooth, continuous data flow.
[0067] Packet loss measures the percentage of data packets that are lost during transmission. High packet loss can degrade the quality of applications like video streaming and file transfers. Throughput measures the rate at which data is successfully transmitted over the network. It is typically measured in bits per second (bps) or its multiples (kbps, Mbps, Gbps). Higher throughput indicates a more efficient network.
[0068] Bandwidth refers to the maximum data transfer capacity of the network. It is usually measured in bits per second (bps) or its multiples. Adequate bandwidth is necessary to support multiple users and high-data-rate applications.
Mean Opinion Score (MOS) is a subjective measure of the quality of voice and video services, usually rated on a scale from 1 to 5, with 5 being excellent quality. It is used to evaluate the user experience of multimedia services.
[0069] Connection establishment time measures the time taken to establish a network connection, such as call setup time in VoIP. Shorter connection establishment times are preferred for a better user experience. Error rate measures the frequency of errors in data transmission, typically expressed as a percentage. Lower error rates indicate a more reliable network. Availability measures the percentage of time the network is operational and available for use. Availability is expressed as a percentage, with higher availability indicating better reliability. Service response time measures the time taken for the network to respond to a service request. These QoS metrics are essential for assessing and optimizing network performance, ensuring enhanced user experience and efficient resource utilization.
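For illustration only, several of the QoS metrics described above (latency, jitter, packet loss, and throughput) could be derived from per-packet records roughly as sketched below. The record layout is an assumption for illustration and is not the disclosed system’s implementation:

```python
from statistics import mean

def qos_metrics(packets):
    """Derive illustrative QoS metrics from per-packet records.

    Each record is a dict {"sent": t_sent_s, "recv": t_recv_s or None, "bytes": n};
    a record with recv=None is treated as a lost packet.
    """
    delivered = [p for p in packets if p["recv"] is not None]
    if not delivered:
        return {"packet_loss_pct": 100.0}

    latencies = [p["recv"] - p["sent"] for p in delivered]
    # Jitter as the mean absolute difference between successive packet latencies.
    jitter_ms = (mean(abs(a - b) for a, b in zip(latencies, latencies[1:])) * 1000
                 if len(latencies) > 1 else 0.0)
    # Throughput: delivered bits over the observation window, in Mbps
    # (guard against a zero-length window for a single packet).
    window_s = max(1e-9, max(p["recv"] for p in delivered)
                   - min(p["sent"] for p in delivered))

    return {
        "latency_ms": mean(latencies) * 1000,
        "jitter_ms": jitter_ms,
        "packet_loss_pct": 100.0 * (len(packets) - len(delivered)) / len(packets),
        "throughput_mbps": sum(p["bytes"] for p in delivered) * 8 / window_s / 1e6,
    }
```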
[0070] In an embodiment, the system (108) may validate the request based on a set of validation rules and transmit an error response when the validation is unsuccessful. In an embodiment, the set of validation rules includes data types, a format, a minimum value, a maximum value, a minimum length, and a maximum length. In an example, the data type specifies the type of data expected for a particular field or input. For example, a field might be expected to contain integers, strings, dates, or other specific data types. Validating data types ensures that only compatible data is accepted. In another example, the format defines the format or structure that the input data should adhere to. This could include requirements such as specific patterns, regular expressions, or formatting conventions. For instance, a phone number field might require input in a specific format like “(XXX) XXX-XXXX”. The minimum value specifies the minimum acceptable value for numerical inputs. For example, a minimum value might be enforced for fields representing quantities, active established sessions, or other numeric attributes. The maximum value specifies the maximum acceptable value for numerical inputs. This ensures that inputs do not exceed certain limits, preventing errors or overflow
situations. The minimum length of the input (query) specifies the minimum number of characters or elements required for the inputs. This is commonly used for fields like names, addresses, or descriptions, ensuring that inputs are not too short. The maximum length of the input specifies the maximum number of characters or elements allowed for string inputs. This prevents excessively long inputs, which could cause issues with data storage, display, or processing.
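A minimal sketch of how such a rule set might be applied to an incoming request follows; the rule keys and the example MSISDN rule are assumptions for illustration:

```python
import re

def validate_request(fields, rules):
    """Validate request fields against rules of the kind described above.

    `rules` maps a field name to constraints: type, format (regex),
    min/max value, and min/max length. Returns a list of error strings;
    an empty list means validation succeeded.
    """
    errors = []
    for name, rule in rules.items():
        value = fields.get(name)
        if value is None:
            errors.append(f"{name}: missing")
            continue
        if "type" in rule and not isinstance(value, rule["type"]):
            errors.append(f"{name}: expected {rule['type'].__name__}")
            continue
        if "format" in rule and not re.fullmatch(rule["format"], str(value)):
            errors.append(f"{name}: bad format")
        if "min_value" in rule and value < rule["min_value"]:
            errors.append(f"{name}: below minimum value")
        if "max_value" in rule and value > rule["max_value"]:
            errors.append(f"{name}: above maximum value")
        if "min_length" in rule and len(value) < rule["min_length"]:
            errors.append(f"{name}: too short")
        if "max_length" in rule and len(value) > rule["max_length"]:
            errors.append(f"{name}: too long")
    return errors

# Example: an MSISDN field must be a string of 10-15 digits.
errors = validate_request(
    {"msisdn": "9199990000"},
    {"msisdn": {"type": str, "format": r"\d{10,15}",
                "min_length": 10, "max_length": 15}},
)
assert errors == []  # a negative/error response would be sent if non-empty
```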
[0071] In an embodiment, the system (108) may retrieve a set of operational data associated with the one or more UEs (104). In such embodiments, the validation of the requests may be successful. In an embodiment, the set of operational data may be retrieved from a database, such as the database (210) of FIG. 2, or the one or more network entities (110).
[0072] In an embodiment, the system (108) is configured to correlate the retrieved operational data, such as network traffic statistics and latency metrics, with one or more User Equipments (UEs) (104) currently requesting services from the network (106). For instance, the operational data may include data packets sent and received, signal strength fluctuations, and connection stability indicators. In another embodiment, the operational data is specifically correlated with unique identifier attributes of the UEs (104). For example, these unique identifiers could include International Mobile Subscriber Identity (IMSI) numbers, International Mobile Equipment Identity (IMEI) numbers, or Internet Protocol (IP) addresses associated with the UEs. Furthermore, in yet another embodiment, the system (108) may correlate the operational data to assess the quality of services associated with the UEs (104). This assessment may involve analyzing the set of operational data and deriving one or more Quality of Service (QoS) metrics, such as throughput rates, packet loss percentages, and latency times experienced by the UEs during their interactions with the network.
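A minimal sketch of this correlation step, under the assumption that each operational record carries the UE’s unique identifier under a "ue_id" key, might look as follows:

```python
from collections import defaultdict

def correlate_by_ue(records, requested_ue_ids):
    """Group operational records by the UE's unique identifier attribute
    (e.g., an IMSI, IMEI number, or IP address), keeping only those UEs
    that are currently requesting QoS analysis."""
    per_ue = defaultdict(list)
    for record in records:
        ue_id = record["ue_id"]        # assumed key for the unique identifier
        if ue_id in requested_ue_ids:
            per_ue[ue_id].append(record)
    return per_ue

# Per-UE metrics (throughput rates, loss percentages, latency times) can
# then be derived from each group, e.g., with a routine like qos_metrics().
```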
[0073] In an embodiment, the system (108) may, using the AI engine (214), forecast the one or more QoS metrics for the network (106) based on the retrieved set of operational data. In an embodiment, the AI engine (214) may be indicative of including, but not limited to, pretrained
machine learning (ML) models, expert systems, and the like. In an embodiment, the AI engine may be trained on historical data of the one or more operational data, as well as historical data of the QoS metrics. In an embodiment, the QoS metrics may be determined by combining and computing one or more attributes of the operational data based on a predetermined set of functions. In an embodiment, the system (108) may use the AI engine (214) based on one or more query parameters in the request. In some embodiments, the one or more query parameters may indicate the use of non-AI forecasting models for forecasting the one or more QoS metrics. In such embodiments, the system (108) may forecast the one or more QoS metrics using the historical data of QoS metrics.
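The disclosure leaves the choice of forecasting model open (pretrained ML models, expert systems, or non-AI models selected via query parameters). As a hedged, minimal illustration of the non-AI fallback path, a QoS metric could be projected one step ahead from its recent history with a least-squares linear trend:

```python
def forecast_next(history, window=12):
    """Forecast the next value of a QoS metric from its recent history
    using a least-squares linear trend over the last `window` samples.
    This is a stand-in illustration, not the disclosed AI engine."""
    recent = history[-window:]
    n = len(recent)
    if n < 2:
        return recent[-1] if recent else None
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(recent) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, recent))
             / sum((x - x_mean) ** 2 for x in xs))
    return y_mean + slope * (n - x_mean)   # extrapolate one step ahead

# Example: hourly latency samples in milliseconds trending upward.
print(forecast_next([80, 82, 85, 88, 90, 95]))  # ~96.9 ms, past the last sample
```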
[0074] In an embodiment, the AI engine (214) may determine whether any of the one or more QoS metrics breach a corresponding threshold range. If the AI engine detects that one or more QoS metrics (for example, latency exceeding a certain threshold or packet loss rate being too high) fall below acceptable levels as defined by the threshold range, it may indicate issues with network performance (current network conditions). For instance, if latency exceeds a certain threshold, it might mean delays in data transmission, affecting user experience. In an example, the issues may include latency issues, packet loss issues, bandwidth issues, jitter issues, and throughput issues. In an example, the AI engine monitors latency, which is the delay between the sending and receiving of data packets. High latency may lead to poor user experience in real-time applications like online gaming or video conferencing, where timely data delivery is crucial. In an embodiment, if any one or more of the QoS metrics falls below the threshold range, it may indicate that the network (106) is underperforming or malfunctioning. In an embodiment, when any of the one or more QoS metrics exceed the predetermined range, it may indicate that the network (106) is operating with substantial costs. In such embodiments, operators of the network (106) may perform preventive maintenance or proactively expand the network (106), thereby bringing the QoS metrics within the predetermined range. In an embodiment, the system (108) may identify network issues (network errors) causing QoS metrics to breach the corresponding threshold ranges and generate one or more recommendations to maintain the QoS metrics
within the corresponding threshold range. In an example, where a set of UEs (104) may experience degraded performance due to incompatibility between operating system versions of said UEs (104) and the network (106), the system (108), using the AI engine (214), may generate a recommendation and notify said set of UEs (104) to update their operating systems. In an example, the system (108) continuously monitors and collects data on various QoS metrics such as latency, packet loss, throughput, and jitter. Using advanced analytics and machine learning algorithms, patterns and trends in QoS metric deviations are identified. For example, if latency spikes are consistently observed during peak hours, the system recognizes this pattern as a potential issue affecting user experience. Upon detecting a breach in QoS metric thresholds, the system performs root cause analysis. The AI engine (214) identifies underlying network conditions or configurations contributing to the issue, such as network congestion, hardware limitations, or suboptimal routing paths. For example, the network conditions may include traffic load, bandwidth utilization, latency, error rates, and congestion levels. The AI engine (214) continuously monitors these QoS metrics and adjusts the predefined thresholds accordingly to optimize network performance. In an example, during periods of high traffic or congestion, the AI engine (214) may dynamically increase the threshold for acceptable latency. Suppose the predefined latency threshold is set at 100 milliseconds under normal conditions. If the monitored latency exceeds this threshold due to increased traffic or congestion, the AI engine (214) may adjust the threshold to 150 milliseconds to accommodate the higher demand. This adjustment prevents unnecessary alerts or performance degradation warnings while maintaining an acceptable user experience. Conversely, in times of low network usage and optimal conditions, the AI engine (214) may tighten the latency threshold to 80 milliseconds to ensure a high level of service quality. By dynamically adapting the predefined threshold range based on real-time network conditions, the AI engine (214) enhances network reliability and responsiveness. Based on the analysis of detected issues and their root causes, the system generates one or more recommendations. Recommendations are formulated to address specific network conditions and restore QoS metrics within optimal ranges. Examples of
recommendations include traffic optimization strategies (e.g., rerouting traffic to less congested paths), adjusting QoS configurations (e.g., prioritizing critical applications), upgrading network hardware (e.g., increasing bandwidth capacity), or optimizing network protocols (e.g., implementing packet prioritization techniques).
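The dynamic threshold adjustment described above might be sketched as follows, reusing the illustrative latency figures from this paragraph (100 ms nominal, relaxed to 150 ms under congestion, tightened to 80 ms when load is light). The load-classification boundaries are assumptions for illustration:

```python
def adjusted_latency_threshold_ms(traffic_load_pct):
    """Return the acceptable latency threshold for the current load,
    mirroring the worked example above (values are illustrative)."""
    if traffic_load_pct >= 80:      # peak hours / congestion
        return 150.0
    if traffic_load_pct <= 20:      # low usage, optimal conditions
        return 80.0
    return 100.0                    # normal conditions

def check_breach(forecast_latency_ms, traffic_load_pct):
    """Flag a breach only against the dynamically adjusted threshold,
    so transient peak-hour latency does not raise spurious alerts."""
    threshold = adjusted_latency_threshold_ms(traffic_load_pct)
    return forecast_latency_ms > threshold, threshold

breached, limit = check_breach(forecast_latency_ms=120.0, traffic_load_pct=85)
print(breached, limit)  # False, 150.0 at peak load; True under normal load
```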
[0075] In an embodiment, the system (108) may transmit the forecasted QoS metrics to the one or more network entities (110) when the corresponding threshold ranges are breached. In such embodiments, the operators or users (102) may resolve the network issues such that the one or more QoS metrics are brought within the corresponding threshold ranges.
[0076] FIG. 2 illustrates a block diagram (200) of the system (108) for
determining QoS metrics in the network environment, in accordance with embodiments of the present disclosure.
[0077] In an aspect, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0078] Referring to FIG. 2, the system (108) may include an interface(s)
(interface module) (206). The interface(s) (206) may include a variety of interfaces,
for example, interfaces for data input and output devices, referred to as I/O devices,
storage devices, and the like. The interface(s) (206) may facilitate communication to/from the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing unit/engine(s) (208) and a database (210). The interface module (206) is configured to present the operational data and threshold breaches to a network administrator.
[0079] In an embodiment, the processing unit/engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may include a processing resource, for example, one or more processors, to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0080] In an embodiment, the database (210) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engines (208). In an embodiment, the database (210) may be separate from the system (108). In an embodiment, the database (210) may be indicative of including, but not limited to, a relational database, a distributed database, a distributed file sharing system, a cloud-based database, or the like.
[0081] In an exemplary embodiment, the processing engine (208) may include one or more engines selected from any of a Network Data Analytics Function (NWDAF) engine (212), the AI engine (214), and other engines (216) having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as a wireless communication unit for remote operation, and the like, as described in FIG. 3.
[0082] In an embodiment, each of the processing engines (208) may be
communicatively coupled to implement the system (108) and method of the present disclosure.
[0083] FIG. 3 illustrates an exemplary system architecture (300), in which the system (108) is implemented, in accordance with embodiments of the present disclosure. The system (108) includes the NWDAF engine (212). The NWDAF engine (212) is a system architected for the intelligent management of user data traffic within network infrastructures. The NWDAF relies on automated mechanisms and an Artificial Intelligence/Machine Learning (AI/ML) framework designed for the intricate analysis and policy formulation necessary for optimal data flow regulation and congestion prevention.
[0084] As shown in FIG. 3, the NWDAF engine (212) is configured to receive the KPI related data from the IPM (220), RAN data from the base stations (112), and UE related data from the AMF (218). The NWDAF engine (212) is configured to correlate the received data and generate correlated data. The NWDAF engine (212) is configured to store the generated data to the AI engine (214).
[0085] The network entity (110) is configured to communicate with the NWDAF engine (212). The network entity (110) initiates communication by sending a subscription to the NWDAF engine (212). The subscription may refer to a request indicating interest in receiving specific information or updates, for example, a throughput QoS metric. After sending the subscription, the network entity (110) is configured to receive a response from the NWDAF engine (212). This response could contain information relevant to the subscription request, such
as analytics data, performance metrics, or network insights.
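For illustration, the subscription exchange between a network entity (110) and the NWDAF engine (212) could be represented as a simple request/response pair. The message fields and handler below are assumptions for illustration and are not 3GPP-normative structures:

```python
def handle_subscription(subscription, predict_metric):
    """Sketch of NWDAF-side handling: accept a subscription expressing
    interest in a QoS metric and return a response with current insights.
    Field names are assumed for illustration."""
    analytics_id = subscription["analytics_id"]       # e.g., "qos_throughput"
    prediction = predict_metric(analytics_id, subscription["target_ue"])
    return {
        "analytics_id": analytics_id,
        "target_ue": subscription["target_ue"],
        "predicted_value": prediction,                # analytics data / metric
        "notify_on_breach": subscription.get("notify_on_breach", True),
    }

# Example usage with a stubbed predictor standing in for the AI engine:
response = handle_subscription(
    {"analytics_id": "qos_throughput", "target_ue": "ue-104-1"},
    predict_metric=lambda metric, ue: 42.5,           # stub: 42.5 Mbps forecast
)
print(response["predicted_value"])  # 42.5
```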
[0086] Based on the received subscription, the NWDAF engine (212) is configured to communicate with the database and the AI engine. The NWDAF engine (212) is configured to insert data into and receive data from the database. The NWDAF engine (212) receives data from various sources within the network, processes this data, and inserts relevant information into the database. The NWDAF engine (212) shares aggregated and processed data with the AI engine (214). The AI engine (214) uses machine learning models to analyze this data, predict QoS metrics, and recommend proactive measures to optimize network performance. In an example, the AI engine (214) may store the predicted QoS metrics in the database. In an example, the NWDAF engine (212) may also retrieve data (predicted QoS metrics) from the database to perform further analysis or generate reports. For example, the NWDAF engine (212) may retrieve historical performance data to identify trends or patterns in network behaviour.
[0087] The AI/ML framework used in the embodiment is designed for the management of user data traffic within network infrastructures. The AI/ML framework is a component of the Network Data Analytics Function (NWDAF) engine, which relies on automated mechanisms and advanced AI/ML techniques to perform analysis on QoS and formulate recommendations necessary for regulation of the services.
[0088] The AI/ML engine processes both historical and real-time network traffic data, using algorithms to identify patterns and establish predictive models. These algorithms analyze various data points to understand traffic dynamics and predict future network conditions. Through continuous learning, the AI/ML engine refines its predictive models, enhancing its accuracy over time by integrating new data insights.
[0089] Utilizing the insights derived from the AI/ML engine, the NWDAF conducts computations that dynamically respond to changing network conditions. These computations relate to several parameters, including traffic volume thresholds, service quality metrics, and user demand projections. The AI/ML framework ensures that computations and forecasting of QoS metrics are adaptive, dynamic, and can be modified based on new data insights, allowing for a proactive and responsive approach to network management.
[0090] The AI/ML engine processes, in accordance with the embodiment, historical and real-time network traffic data, employing sophisticated algorithms to discern patterns and establish predictive models. Through continuous learning, the AI/ML engine refines its understanding of traffic dynamics, enhancing its predictive accuracy over time.
[0091] The NWDAF utilizes these insights to conduct complex policy computations that dynamically respond to varying network conditions. Such policies are informed by parameters including traffic volume thresholds, service quality metrics, and user demand projections. These policies are adaptive, subject to modification as the AI/ML engine integrates new data insights, allowing for a proactive and responsive network management stance.
[0092] FIG. 3 illustrates an embodiment where the NWDAF engine (212) of the system (108) receives the request for determining the one or more QoS metrics from one or more network entities (110). Further, the system (108) may include an IPM (220) for providing Key Performance Indicator (KPI) data, an AMF (218) to provide one or more attributes associated with the UEs (104), such as a unique identification number, and a database (DB), such as the database (210) of FIG. 2. In an example, KPI data encompasses various metrics, including the following (a hypothetical container mirroring these categories is sketched after the list):
• Network Performance Metrics: Metrics such as throughput (data transfer rate), latency (response time), packet loss rate, and jitter (variation in packet delay) assess the efficiency and reliability of network operations.
• Service Availability and Reliability: Metrics measuring service uptime, downtime incidents, and overall service reliability ensure continuous availability and operational stability.
• Quality of Service (QoS) Metrics: Parameters evaluating user experience quality, encompassing service responsiveness, consistency, and adherence to service level agreements (SLAs).
• Resource Utilization: Metrics tracking network resource usage, including bandwidth, CPU usage, memory utilization, and storage capacity optimization.
• Security and Compliance Metrics: Metrics addressing security incidents, vulnerabilities, regulatory compliance, and adherence to established security policies safeguard system integrity and user data.
• User Experience Metrics: Metrics gauging end-user satisfaction and service usability derived from QoS parameters and user feedback.
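The sketch below groups these KPI categories into a single hypothetical record; every field name is an assumption introduced purely for illustration.

```python
# Hypothetical container mirroring the KPI categories listed above.
from dataclasses import dataclass

@dataclass
class KpiSnapshot:
    # Network performance metrics
    throughput_mbps: float = 0.0
    latency_ms: float = 0.0
    packet_loss_pct: float = 0.0
    jitter_ms: float = 0.0
    # Service availability and reliability
    uptime_pct: float = 100.0
    downtime_incidents: int = 0
    # Resource utilization
    bandwidth_util_pct: float = 0.0
    cpu_util_pct: float = 0.0
    memory_util_pct: float = 0.0
    # Security and compliance
    security_incidents: int = 0
    # User experience (derived from QoS parameters and user feedback)
    user_satisfaction: float = 0.0
```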
[0093] The IPM (220) aggregates these KPIs from diverse system or network sources, applies analytical processes to derive actionable insights, and delivers findings to stakeholders such as network administrators, service providers, or system operators. This information is integral to informed decision-making, proactive troubleshooting, capacity planning, and ongoing enhancement of system and network performance.
[0094] In one example embodiment, the one or more attributes associated with user equipment (UEs) may include location, session duration, data usage metrics, quality of service (QoS) parameters, and service subscription information. The location attribute may comprise latitude and longitude coordinates indicating the geographical position of the UE at a given time. Session duration refers to the elapsed time since the UE established its current connection to the network. Data usage metrics denote the volume of data transferred by the UE during the ongoing session, measured in megabytes (MB) or gigabytes (GB). Quality of Service (QoS) parameters encompass metrics such as throughput, latency, and jitter, which characterize the performance experienced by the UE on the network. Service subscription information specifies details regarding the type of service subscribed to by the UE, such as voice, video, or data services. These attributes are typically supplied by network components such as the Access and Mobility Management Function (AMF) within the architecture of a 5th Generation (5G) network.
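These attributes could be represented, in a non-limiting manner, as a simple record such as the hypothetical one below; the field names are assumptions chosen to mirror the attributes just described.

```python
# Hypothetical record for the UE attributes described above, as might be
# supplied by the AMF (218); all field names are assumptions.
from dataclasses import dataclass

@dataclass
class UeAttributes:
    supi: str                      # unique identifier of the UE
    latitude: float                # location of the UE
    longitude: float
    session_duration_s: int        # time since the current connection was set up
    data_usage_mb: float           # volume transferred in the ongoing session
    throughput_mbps: float         # QoS parameters experienced by the UE
    latency_ms: float
    jitter_ms: float
    subscription: str              # e.g. "voice", "video", or "data"
```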
[0095] The NWDAF engine (212) may validate the request and transmit a negative response when the validation is unsuccessful. In an example, the validation of the request by the NWDAF engine (212) involves ensuring that the request meets predefined criteria or conditions before proceeding with further actions. The predefined criteria may include authentication, authorization, data format, completeness of required parameters, or adherence to network policies (a validation sketch follows the list below). In an embodiment, the NWDAF engine (212) may retrieve the set of operational data associated with the one or more UEs (104), when the validation is successful. Examples of the operational data include Radio Access Network (RAN) logs, session logs, and KPI metrics. In an embodiment, the set of operational data may be retrieved from the database (210) or the one or more network entities (110). In an example, the NWDAF engine (212) may retrieve RAN logs from the one or more base stations (112), the one or more attributes of the UE (104) from the AMF (218), and KPI metric values from the IPM (220). In an example, the RAN logs may include:
• Call Detail Records (CDRs): Information about calls made and received, including call duration, location, and quality metrics.
• Signal Strength and Quality: Measurements of radio signal strength, Signal-to-Noise Ratio (SNR), and other RF (Radio Frequency) parameters.
• Handover Events: Records of when a mobile device switches from one base station to another to maintain connectivity as it moves.
• Alarms and Events: Notifications and alerts generated by network equipment for anomalies, faults, or performance issues.
• Performance Metrics: Data on throughput, latency, packet loss, and other network performance indicators.
• Subscriber Activity: Information on subscriber connections, session durations, and data usage patterns.
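A minimal sketch of the validation step referenced above follows; the required fields and the individual checks are hypothetical placeholders standing in for the disclosed authentication, authorization, format, completeness, and policy criteria.

```python
# Minimal sketch of request validation; all field names are assumptions.
REQUIRED_FIELDS = {"requester_id", "auth_token", "ue_ids", "metrics"}

def validate_request(request: dict, authorized_entities: set) -> bool:
    if not REQUIRED_FIELDS <= request.keys():                # completeness of parameters
        return False
    if not isinstance(request["ue_ids"], list):              # data format
        return False
    if request["requester_id"] not in authorized_entities:   # authorization
        return False
    return True                                              # adheres to policy

def handle(request, authorized_entities):
    if not validate_request(request, authorized_entities):
        return {"status": "NEGATIVE_RESPONSE"}   # transmitted on failed validation
    return {"status": "ACCEPTED"}                # proceed to retrieve operational data
```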
[0096] According to one aspect, the NWDAF engine (212) may correlate the retrieved operational data with the one or more UEs (104) requesting services, and compute one or more QoS metrics therefrom. In an example, the signal strength attribute in the RAN logs may be correlated with a hardware specification attribute of the UEs (104), such as a unique identification number of the UE (104). The hardware specification attributes of UEs encompass critical components defining their performance within telecommunications networks. These attributes include the processor (CPU) type, clock speed, and core count; memory (RAM) capacity for concurrent application execution; storage size and type; display characteristics such as size, resolution, and technology; camera specifications; supported connectivity options including wireless standards and protocols; and the operating system version and user interface details. These attributes collectively determine device capabilities, network compatibility, and user experience, crucial for network operators in optimizing device performance and enhancing user satisfaction. In such examples, the operators of the network (106) may be able to determine a correlation value, which may be a probabilistic value (a correlation coefficient, i.e., a statistical measure of the strength of a linear relationship between two variables) between certain hardware specification attributes of UEs (104) and the operational data that has been retrieved. The correlation value may be used by the system (108) to correlate the operational data with one or more UEs (104) having similar hardware specifications. Correlation values quantify the strength and direction of the relationship between hardware specification attributes and QoS metrics. In an example, the AI/ML framework employs regression techniques to model the relationship between hardware specifications and QoS metrics more precisely. This allows for the development of predictive models that estimate QoS metrics based on given hardware specifications. For example, a high positive correlation between CPU speed and throughput might indicate that higher CPU speeds generally lead to better data transfer rates. For example, if historical data shows that UEs with a certain CPU type and RAM capacity tend to experience specific signal strengths, this information can be used to forecast the likely signal strength for new UEs with similar hardware. When the one or more QoS metrics is forecasted based on the operational data, the one or more QoS metrics can be applicable to the UEs (104) having substantially similar hardware specification attributes. In an example, once the correlation values are established and validated, they can be used to forecast future QoS metrics for UEs with similar hardware specifications.
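As a concrete, non-limiting illustration of such a correlation value, the sketch below computes a Pearson coefficient between a hardware attribute (CPU clock speed) and an observed QoS metric (throughput); the sample figures are fabricated purely for the example.

```python
# Sketch of the correlation value described above: a Pearson correlation
# coefficient between CPU clock speed and throughput. Data is fabricated.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cpu_ghz = [1.8, 2.0, 2.4, 2.8, 3.0]          # hardware specification attribute
throughput = [38.0, 45.0, 52.0, 61.0, 66.0]  # observed QoS metric (Mbps)
r = pearson(cpu_ghz, throughput)             # close to +1 => strong linear relationship
print(r)
```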
[0097] The operators of the network (106) may appropriately take proactive
steps to resolve any network issues to improve signal strengths while providing services to UEs (104) with said hardware specifications.
[0098] According to one aspect, the NWDAF engine (212) may, using the AI engine (214) (AI forecasting model), forecast the one or more QoS metrics for the network (106), based on the retrieved set of operational data and the one or more query parameters. In an example, the AI forecasting model encompasses diverse approaches to predict future outcomes from historical data. For example, the AI forecasting model employs time series methods, such as ARIMA (AutoRegressive Integrated Moving Average) and Exponential Smoothing, which capture temporal dependencies in sequential data. The AI forecasting model may employ machine learning techniques, such as Linear Regression, Decision Trees, and Gradient Boosting Machines, which leverage historical patterns to predict future trends. In other embodiments, the one or more query parameters may indicate the use of non-AI forecasting models for forecasting the one or more QoS metrics, such as by using the historical QoS data retrieved from the set of operational data. The non-AI forecasting model may be a computational model configured to process operational data to forecast one or more QoS metrics. The non-AI forecasting model may rely on comparing the historical QoS data, which is known and stored in a local database, with the retrieved operational data, by utilizing the computational capabilities of the processor (202). In an implementation of the non-AI forecasting model, the QoS metrics are forecasted through this comparison. In an example, the non-AI forecasting model may include the exponential smoothing method, moving average, naive forecasting, and so on. In an embodiment, the NWDAF engine (212) may determine whether any of the one or more QoS metrics breach a corresponding threshold range. The NWDAF engine (212) may transmit the forecasted QoS metrics to the one or more network entities (110) when said one or more threshold ranges are breached.
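A minimal sketch of the non-AI path just described (a moving-average forecast followed by the threshold check) is given below; the window size and the threshold range are illustrative assumptions.

```python
# Sketch of a non-AI forecasting model (moving average) plus the threshold check.
def moving_average_forecast(history, window=3):
    """Forecast the next QoS value as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def breaches(value, threshold_range):
    low, high = threshold_range
    return not (low <= value <= high)

latency_history = [90, 95, 110, 120, 135]             # historical QoS data (ms)
forecast = moving_average_forecast(latency_history)
alert = breaches(forecast, threshold_range=(0, 100))  # True => transmit to entities
```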
[0099] FIG. 4A illustrates an exemplary flowchart of a method (400) for
QoS analytics, in accordance with embodiments of the present disclosure.
[00100] At step (402), the method (400) includes receiving, by the processing
engine (208), a request for determining one or more QoS metrics from one or more network functions (consumer NF).
[00101] The request is processed through a Network Data Analytics Function Front End (NWDAF FE) of the NWDAF engine (212), at step (404). The NWDAF FE serves as the interface between the network infrastructure and a Network Data Analytics Function Back End (NWDAF BE). The NWDAF FE facilitates the collection and transmission of operational data from various network entities and user equipment (UEs) (104) to the NWDAF BE. The NWDAF FE is configured for gathering data such as Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics. The NWDAF BE is configured for performing advanced data processing and analytics. The AI/ML engine (214), which processes both historical and real-time network traffic data, is implemented within the NWDAF BE. The NWDAF BE is implemented to identify patterns, establish predictive models, and refine these models through continuous learning.
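The FE/BE split could be pictured, in a non-limiting manner, as below; the class and method names are assumptions introduced for the sketch, and the back end's model is a placeholder.

```python
# Conceptual sketch of the NWDAF FE/BE split; names are assumptions.
class NwdafBackEnd:
    def analyze(self, request, data):
        # Stand-in for pattern identification and predictive modelling,
        # refined through continuous learning in the disclosure.
        return {"ue_ids": request["ue_ids"], "forecast": {}, "inputs": sorted(data)}

class NwdafFrontEnd:
    """Interface between the network infrastructure and the NWDAF BE."""
    def __init__(self, back_end):
        self.back_end = back_end

    def handle_request(self, request, sources):
        # Gather RAN logs, session logs, and KPI metrics from each source,
        # then forward the collected operational data to the back end.
        data = {name: fetch(request["ue_ids"]) for name, fetch in sources.items()}
        return self.back_end.analyze(request, data)

fe = NwdafFrontEnd(NwdafBackEnd())
result = fe.handle_request(
    {"ue_ids": ["ue-1"]},
    sources={"ran_logs": lambda ids: [], "kpi_metrics": lambda ids: []},
)
```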
[00102] The method step (406) includes validating, by the processing engine (208), the request. In an example, the validating involves determining that the request meets predefined criteria or conditions before proceeding with further actions. The predefined criteria may include authentication, authorization, data format, completeness of required parameters, or adherence to network policies. A negative response is transmitted to the consumer network function when the validation is unsuccessful, at step (408).
[00103] If the validation is successful, then, through the Network Data Analytics Function Back End (NWDAF BE) of the NWDAF engine (212), it is determined whether the request for the one or more QoS metrics is received, at step (412). If not, the method at step (414) includes retrieving, by the processing engine (208), a set of operational data associated with the one or more UEs.
[00104] The operational data may include, but is not limited to, RAN logs, session logs, historical QoS data, KPI metrics, and the like, from a database or the one or more network entities.
[00105] The operational data is pre-processed and fed to the NWDAF-AI model, at step (416). Preprocessing operational data before feeding it to the NWDAF-AI model involves several steps to ensure that the data is in a suitable format and quality for effective analysis and modelling. In an aspect, the preprocessing of data encompasses several stages essential for preparing data for analysis, comprising data cleaning, data transformation, data integration, feature selection, and data splitting. Data cleaning involves removing noise and handling missing values, ensuring data quality. Data transformation standardizes formats and scales numerical data, while data integration merges diverse data sources into a unified dataset. Feature selection selects pertinent data attributes for analysis, optimizing model efficiency. Finally, data splitting divides the dataset into training, validation, and testing subsets to validate and enhance model performance.
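A compact sketch of three of these stages (cleaning, transformation, and splitting) follows; the drop-missing rule, min-max scaling, and 70/15/15 split are illustrative stand-ins, and the integration and feature-selection stages are omitted for brevity.

```python
# Compact sketch of preprocessing stages; rules and ratios are illustrative.
import random

def clean(rows):
    # Data cleaning: discard records with missing (None) values.
    return [r for r in rows if None not in r.values()]

def transform(rows, keys):
    # Data transformation: min-max scale each numeric field to [0, 1].
    bounds = {k: (min(r[k] for r in rows), max(r[k] for r in rows)) for k in keys}
    return [
        {k: (r[k] - lo) / (hi - lo) if hi > lo else 0.0
         for k, (lo, hi) in bounds.items()}
        for r in rows
    ]

def split(rows, train=0.7, val=0.15):
    # Data splitting into training, validation, and testing subsets.
    rows = rows[:]
    random.shuffle(rows)
    a, b = int(len(rows) * train), int(len(rows) * (train + val))
    return rows[:a], rows[a:b], rows[b:]

raw = [{"latency_ms": 90, "throughput_mbps": 40},
       {"latency_ms": None, "throughput_mbps": 55},
       {"latency_ms": 120, "throughput_mbps": 35},
       {"latency_ms": 100, "throughput_mbps": 50}]
train_set, val_set, test_set = split(
    transform(clean(raw), ["latency_ms", "throughput_mbps"]))
```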
[00106] The method step (420) includes forecasting, by the processing
engine (208), the one or more QoS metrics for the requested one or more UEs.
[00107] At step (418), the method may include using an AI/ML engine to determine the one or more QoS metrics based on one or more query parameters in the request. The AI/ML engine processes both historical and real-time operational data, using algorithms to identify patterns and establish predictive models. Based on operational data associated with the UE (104) requesting the QoS, the one or more QoS metrics are determined by the AI/ML engine.
[00108] The method includes determining, by the processor, whether any of
the one or more QoS metrics breach a corresponding threshold range. The method step (422) includes transmitting, by the processing engine (208), the predicted one
or more QoS values to the one or more network entities when said one or more threshold ranges are breached.
[00109] FIG. 4B illustrates exemplary steps of a method (450) for
determining quality-of-service metrics in the network, in accordance with an embodiment of the present disclosure.
[00110] At step (452), the method includes receiving a request to determine one or more QoS metrics from one or more network entities (110). This request is received by the system (108) through various communication channels, such as an API call, a GUI interaction, or other interface means provided by the monitoring unit. The request typically includes specific parameters or identifiers, such as the unique identifier attributes (e.g., Mobile Station International Subscriber Directory Number (MSISDN), International Mobile Equipment Identity (IMEI), International Mobile Subscriber Identity (IMSI), Subscription Permanent Identifier (SUPI)) associated with the user equipment (104).
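Such a request might, by way of a non-limiting example, take a shape like the hypothetical structure below; the field names are assumptions, and the identifier values are masked placeholders rather than real numbers.

```python
# Hypothetical shape of an incoming request carrying the identifier
# attributes listed above; field names and values are illustrative only.
request = {
    "requested_metrics": ["throughput", "latency"],
    "ue_identifiers": [
        {"msisdn": "91XXXXXXXXXX"},          # Mobile Station ISDN number (masked)
        {"imei": "XX-XXXXXX-XXXXXX-X"},      # equipment identity (masked)
        {"imsi": "404XXXXXXXXXXXX"},         # subscriber identity (masked)
        {"supi": "imsi-404XXXXXXXXXXXX"},    # permanent identifier (masked)
    ],
    "channel": "API",                        # could also arrive via a GUI interaction
}
```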
[00111] At step (454), the method includes retrieving a set of operational data associated with the user equipment (104) requesting QoS analysis from a memory (204), communicatively coupled to the processor (202), based on the one or more requests. The set of operational data is generated by base stations (112) and the one or more network entities (110) providing services to the user equipment (104). The operational data includes Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics.
[00112] At step (456), the method includes transmitting the retrieved set of
operational data to an Artificial Intelligence (AI) engine (214). The AI engine (214) is trained on historical operational data and historical QoS metrics.
[00113] At step (458), the method includes forecasting one or more QoS metrics associated with the user equipment (104) based on the set of operational data. The AI engine (214) determines if the forecasted one or more QoS metrics fail to exceed a predefined threshold range. The failure to exceed the predefined threshold is indicative of malfunctioning of at least one service. In an aspect, the AI engine (214) is configured to dynamically adjust the predefined threshold range based on current network conditions. For example, the current network conditions may include traffic load, bandwidth utilization, latency, error rates, and congestion levels. The AI engine (214) continuously monitors these QoS metrics and adjusts the predefined thresholds accordingly to optimize network performance. In an example, during periods of high traffic or congestion, the AI engine (214) may dynamically increase the threshold for acceptable latency. Suppose the predefined latency threshold is set at 100 milliseconds under normal conditions. If the monitored latency exceeds this threshold due to increased traffic or congestion, the AI engine (214) may adjust the threshold to 150 milliseconds to accommodate the higher demand. This adjustment prevents unnecessary alerts or performance degradation warnings while maintaining an acceptable user experience. Conversely, in times of low network usage and optimal conditions, the AI engine (214) may tighten the latency threshold to 80 milliseconds to ensure a high level of service quality. By dynamically adapting the predefined threshold range based on real-time network conditions, the AI engine (214) enhances network reliability and responsiveness.
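The 100 ms / 150 ms / 80 ms example above can be captured in a few lines; the load-percentage cut-offs used to detect congestion are assumptions introduced for the sketch.

```python
# Sketch of the dynamic latency-threshold adjustment described above, using
# the 100 ms / 150 ms / 80 ms figures from the example. The load cut-offs
# (80% and 20%) are illustrative assumptions.
def latency_threshold_ms(traffic_load_pct):
    """Return the acceptable latency threshold for the current load."""
    if traffic_load_pct >= 80:      # high traffic or congestion
        return 150
    if traffic_load_pct <= 20:      # low usage, optimal conditions
        return 80
    return 100                      # normal conditions

def check(latency_ms, traffic_load_pct):
    threshold = latency_threshold_ms(traffic_load_pct)
    return latency_ms > threshold   # True => raise an alert

assert check(120, traffic_load_pct=85) is False   # tolerated under congestion
assert check(120, traffic_load_pct=50) is True    # breach under normal load
```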
[00114] At step (460), the method includes sending a notification to the user equipment (104) upon determining malfunctioning of the at least one service. The malfunctioning may manifest as abnormalities in the services.
[00115] FIG. 5 is an illustration (500) of a non-limiting example of details of computing hardware used in the system (108), in accordance with an embodiment of the present disclosure. As shown in FIG. 5, the system (108) may include an external storage device (510), a bus (520), a main memory (530), a read only memory (540), a mass storage device (550), a communication port (560), and a processor (570). A person skilled in the art will appreciate that the system (108) may include more than one processor (570) and communication ports (560). The processor (570) may include various modules associated with embodiments of the present disclosure.
[00116] In an embodiment, the communication port (560) is any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) is chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the system (108) connects.
[00119] In an embodiment, the main memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
[00120] In an embodiment, the mass storage (550) may be any current or future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, or Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[00121] In an embodiment, the bus (520) communicatively couples the processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB) or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[00122] The exemplary computer system (500) incorporates the system for
Quality of Service (QoS) analytics, having a sophisticated assembly of interconnected modules and components designed to maintain and enhance the service quality experienced by users. The system is integrated within the network architecture and functions seamlessly with other network entities to ensure optimal performance.
[00123] The processor of the system is configured for receiving and handling
requests related to QoS metrics. The processor is configured for interfacing with various components of the network, such as user equipment, base stations, and management functions, and for the initial processing of incoming requests to ascertain their relevance and accuracy.
[00124] The validation module is software-driven and is activated by the
processor to scrutinize each request against established network protocols and parameters. Should the request be deemed incompatible or erroneous, the validation module prompts the processor to reject it, thus maintaining the integrity of the QoS analytical process.
[00125] The system memory component is a repository of vast operational
data, essential for QoS determination. The memory includes comprehensive logs, which include data from the RAN sessions, a historical backlog of QoS metrics,
and an array of KPIs pertinent to network performance. This reservoir of information is critical for the system to perform accurate and predictive analytics.
[00126] The system includes a forecasting module to project future QoS metrics. Operated by the processor, this module utilizes the data stored in memory to analyze trends and patterns, thereby enabling it to predict potential service disruptions or degradation before they occur.
[00127] Complementing the forecasting capabilities is a determination
module. This module is programmed to compare the predicted QoS metrics against predefined thresholds that are indicative of the service quality standards of the network. In the event of an anticipated threshold breach, the determination module signals an alert status, prompting pre-emptive actions to avert service quality deterioration.
[00128] A communication module forms the conduit through which the system interacts with the broader network. When potential QoS issues are identified, this module ensures that pertinent information is disseminated to the relevant network entities. The communication module communicates notifications of threshold breaches and also conveys detailed analytics that can guide network operators in implementing strategic interventions.
[00129] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces may be provided through network connections connected through the communication port (560). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[00130] The present disclosure further discloses a user equipment that is
communicatively coupled to a system. The coupling comprises steps of receiving a connection request, sending an acknowledgment of connection request to the
system, and transmitting a plurality of signals in response to the connection request. The system is configured to determine quality of service (QoS) metrics in a network environment. The system includes a processing engine configured to receive one or more requests to determine one or more QoS metrics from one or more network entities. The system is further configured to retrieve a set of operational data associated with user equipment requesting QoS analysis from a memory. The set of operational data is data generated by base stations and the one or more network entities providing services to the user equipment and includes Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics. The system transmits the retrieved set of operational data to an Artificial Intelligence (AI) engine, with the AI engine trained on historical operational data and historical QoS metrics. The system is further configured to forecast one or more QoS metrics associated with the user equipment based on the set of operational data and determine if the forecasted one or more QoS metrics fail to exceed a predefined threshold range, with the failure to exceed the predefined threshold indicative of malfunctioning of at least one service. The system then sends a notification to the user equipment upon determining malfunctioning of the at least one service.
[00131] The present disclosure introduces technological advancements in
delivering Quality of Service (QoS) metrics within the network. This innovation resolves existing limitations by anticipating future QoS Key Performance Indicators (KPIs) proactively. By predicting potential QoS KPIs, the disclosure enables network operators to optimize network settings promptly, ensuring optimal service delivery to customers. This capability empowers network operators to adjust network parameters based on forecasts generated by the AI/ML engine, enhancing network quality by reducing call drops, minimizing latency, and maximizing throughput.
[00132] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many other embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00133] The present disclosure provides a system and a method for providing
quality of service analytics.
[00134] The present disclosure provides a system and a method that uses
artificial intelligence (AI) engines for forecasting one or more quality of service metrics.
[00135] The present disclosure provides a system and a method that generates one or more recommendations to maintain quality of service metrics within a predetermined range.
[00136] The present disclosure provides a system and a method that allows for predictive analytics and performs preventive maintenance or proactively expands the network.
[00137] The present disclosure provides a dashboard for monitoring and
analyzing quality of service metrics.
[00138] The present disclosure provides a system and a method that notifies
operators of networks when any of the quality-of-service metrics breach any corresponding threshold ranges.
[00139] The present disclosure provides a system and a method that
minimizes call drops, lowers latency and raises throughput.
[00140] The present disclosure provides a system and a method that predicts
abnormalities faced by user equipment in the network.
We Claim:
1. A system (108) for determining quality of service (QoS) metrics in a
network environment, the system (108) is configured to:
receive, by a processing engine (208), one or more requests to determine one or more QoS metrics from one or more network entities (110);
retrieve, by the processing engine (208), a set of operational data associated with a user equipment (UE) (104) from a memory (204) based on the one or more requests;
transmit the retrieved set of operational data to an Artificial Intelligence (AI) engine (214), wherein the AI engine is trained on historical operational data and one or more historical QoS metrics;
forecast, by the AI engine (214), the one or more QoS metrics associated with the UE (104) based on the retrieved set of operational data;
determine, by the AI engine (214), if the forecasted one or more QoS metrics fail to exceed a predefined threshold range, wherein the failure to exceed the predefined threshold is indicative of malfunctioning of at least one service; and
send a notification, by the AI engine (214), to the UE (104) upon determining malfunctioning of the at least one service.
2. The system (108) of claim 1 is further configured to identify one or more network errors causing the one or more QoS metrics to breach the corresponding predefined threshold ranges and generate one or more recommendations to maintain the at least one QoS metric within the corresponding threshold range.
3. The system (108) of claim 1, wherein the set of operational data is data generated by a base station (112) and the one or more network entities (110) providing services to the UE (104), and wherein the set of operational data
includes at least one of, Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics.
4. The system (108) of claim 1, wherein the AI engine (214) is configured to dynamically adjust the predefined threshold range based on current network conditions.
5. The system (108) of claim 1, further comprising an interface module (206) for presenting the operational data and threshold breaches to a network administrator.
6. A method (450) for determining quality of service (QoS) metrics in a network environment, the method comprising:
receiving (452), by a processing engine (208), a request to determine one or more QoS metrics from one or more network entities (110);
retrieving (454), by the processing engine (208), a set of operational data associated with a user equipment (UE) (104) from a memory (204) based on the one or more requests;
transmitting (456), by the processing engine (208), the retrieved set of operational data to an Artificial Intelligence (AI) engine (214), wherein the AI engine is trained on historical operational data and one or more historical QoS metrics;
forecasting (458), by the AI engine (214), the one or more QoS metrics associated with the UE (104) based on the retrieved set of operational data;
determining (460), by the AI engine (214), if the forecasted one or more QoS metrics fail to exceed a predefined threshold range, wherein the failure to exceed the predefined threshold is indicative of malfunctioning of at least one service; and
sending (462) a notification, by the AI engine (214), to the UE (104) upon determining malfunctioning of the at least one service.
7. The method (450) of claim 6, further comprises identifying, by the AI engine (214), one or more network errors causing the one or more QoS metrics to breach the corresponding predefined threshold ranges and generating one or more recommendations to maintain the at least one QoS metric within the corresponding threshold range.
8. The method (450) of claim 6, wherein the set of operational data is data generated by a base station (112) and the one or more network entities (110) providing services to the UE (104), and wherein the set of operational data includes Radio Access Network (RAN) logs, session logs, and key performance indicator (KPI) metrics.
9. The method (450) of claim 6, further comprises dynamically adjusting the predefined threshold range based on current network conditions.
10. The method (450) of claim 6, further comprises:
validating, by the processing engine (208), the one or more requests based on a set of validation rules; and
transmitting, by the processing engine (208), to the UE (104), a negative response when validation is unsuccessful.
11. A user equipment (104) communicatively coupled to a system (108), said
coupling comprises steps of:
receiving a connection request;
sending an acknowledgment of connection request to the system (108); and
transmitting a plurality of signals in response to the connection request, wherein the system (108) is the system for determining quality of service (QoS) metrics in a network environment as claimed in claim 1.