Abstract: The present disclosure relates to a method and a system [100] for providing service experience analytics, the method comprising: receiving, by a transceiver unit [102], service experience information from a specific data source; determining, by a determinator unit [104], analytics as per one or more consumer-defined policies based on the received service experience information; detecting, by an identifier unit [106], a breach in at least one of a service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the one or more consumer-defined policies; conveying, by an analyser unit [108], a closed loop report to one or more end consumers upon detection of the breach; and visualizing, by a display unit [110], the determined analytics on a user-friendly user interface (UI) based on network data and the set of SLAs. [FIG. 3]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR PROVIDING SERVICE EXPERIENCE ANALYTICS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR PROVIDING SERVICE EXPERIENCE
ANALYTICS
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure relate generally to the field of wireless communication systems. More particularly, embodiments of the present disclosure relate to methods and systems for providing service experience analytics.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication
technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] Traditional approaches to service experience analytics often lack the depth of analysis required to fully understand the factors influencing user experience. This includes granular details about network performance, application behaviour, device-specific issues, and the impact of geographic location on service quality. Without this level of detail, it is challenging for network operators to identify and address the root causes of service degradation. Many existing systems do not provide a mechanism for automatically adjusting service parameters based on the analysis of network data and SLAs. This means that even when issues are identified, manual intervention is often required to implement changes, leading to delays in resolving problems and improving service quality. The ability to visualize service level agreements in real-time is crucial for effective network management. However, existing solutions often fall short in this area, providing static reports or dashboards that do not offer the dynamic, filter-based views necessary for quickly identifying and addressing issues as they arise. The use of artificial intelligence/machine learning (AI/ML) models for forecasting service level agreements is not well integrated into many existing systems. As a result, network operators are often reactive rather than proactive in their approach to network management, missing opportunities to anticipate and prevent service issues before they impact end users. Managing service experience policies can be cumbersome in existing systems, with limited flexibility to add, modify, view, or delete policies easily. This complexity hinders the ability of network operators to adapt to changing network conditions and user requirements.
[0005] Thus, there exists an imperative need in the art to provide improved service experience analytics, which the present disclosure aims to address.
OBJECTS OF THE INVENTION
[0006] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0007] It is an object of the present disclosure to provide a system and a method for providing service experience analytics.
[0008] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that offer comprehensive analytics capabilities to evaluate network data and Service Level Agreements (SLAs) related to user experience, application performance, device groups, and geographic locations, facilitating deeper insights into service quality.
[0009] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that implement a mechanism that utilizes AI/ML to automatically suggest adjustments to service level agreements based on the analysis, enabling proactive management of network performance.
[0010] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that offer a dynamic, user-friendly interface for visualizing service level agreements in real-time, allowing network operators to quickly identify and address service quality issues based on various filters.
[0011] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that integrate AI/ML models for forecasting service level agreements, enabling network operators to anticipate and mitigate potential service issues before they impact end users.
[0012] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that streamline the management of service experience policies, allowing network operators to easily add, modify,
view, or delete policies to adapt to changing network conditions and user requirements.
[0013] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that provide analytics and insights that enable network operators to make informed decisions and take proactive measures to ensure optimal service experience for end users.
[0014] It is another object of the present disclosure to provide a system and a method for providing service experience analytics that by improving network performance and service quality, contribute to a better overall experience for end users, leading to increased satisfaction and loyalty.
SUMMARY
[0015] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] An aspect of the present disclosure relates to a method for providing service experience analytics. The method comprises receiving, by a transceiver unit, service experience information from a specific data source. The method further comprises determining, by a determinator unit, analytics as per one or more consumer-defined policies based on the received service experience information. Further, the method encompasses detecting, by an identifier unit, a breach in at least one of a service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the consumer-defined policies. Furthermore, the method comprises conveying, by an analyser unit, a closed loop report to end consumers upon detection of the breach; and visualizing, by a display unit, the determined analytics on a user-friendly user interface (UI) based on network data and the defined set of SLAs.
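For illustration only, and not as part of the claimed subject matter, the breach-detection and closed-loop-reporting steps above can be sketched as follows. All names, metrics, and thresholds here are hypothetical and are not defined by the specification.

```python
# Illustrative sketch of the claimed pipeline: compare received service
# experience metrics against consumer-defined policy bounds and build a
# closed-loop report when any bound is breached.
from dataclasses import dataclass

@dataclass
class Policy:
    """A consumer-defined policy: a metric name and an acceptable upper bound."""
    metric: str
    max_value: float  # a breach occurs if the observed metric exceeds this bound

def detect_breaches(experience_info: dict, policies: list[Policy]) -> list[str]:
    """Detect breaches of SLA/QoS/KPI bounds in the received experience data."""
    breaches = []
    for p in policies:
        observed = experience_info.get(p.metric)
        if observed is not None and observed > p.max_value:
            breaches.append(f"{p.metric}: observed {observed} exceeds {p.max_value}")
    return breaches

def closed_loop_report(breaches: list[str]) -> dict:
    """Build the closed-loop report conveyed to end consumers on a breach."""
    return {"breach_detected": bool(breaches), "details": breaches}

# Example: the latency KPI breaches its bound; packet loss does not.
info = {"latency_ms": 120.0, "packet_loss_pct": 0.2}
policies = [Policy("latency_ms", 100.0), Policy("packet_loss_pct", 1.0)]
report = closed_loop_report(detect_breaches(info, policies))
print(report["breach_detected"])  # True
```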
[0017] In an aspect, the method further comprises training, by the processing unit, an NWDAF model based on the determined analytics.
[0018] In an aspect, the method comprises forecasting, by the processing unit using the trained NWDAF model, a set of service level agreements based on end user requirements.
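The forecasting step above relies on a trained AI/ML NWDAF model; as an illustration only, a simple least-squares linear trend is substituted below for such a model. The function name and the sample latency series are hypothetical.

```python
# Illustrative stand-in for SLA forecasting: fit y = a*t + b to the observed
# series by least squares and extrapolate one step ahead. A production NWDAF
# model would be a trained AI/ML model, not this toy trend line.
def forecast_next(history: list[float]) -> float:
    """Predict the next value of an SLA-relevant metric from its history."""
    n = len(history)
    ts = list(range(n))
    t_mean = sum(ts) / n
    y_mean = sum(history) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, history))
    var = sum((t - t_mean) ** 2 for t in ts)
    slope = cov / var if var else 0.0
    intercept = y_mean - slope * t_mean
    return slope * n + intercept  # predicted value at the next time step

# A rising latency trend flags a likely future SLA breach proactively.
latency_history = [80.0, 85.0, 90.0, 95.0]
print(forecast_next(latency_history))  # 100.0
```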
[0019] In an aspect, the method further comprises managing service experience policies on the user-friendly UI, wherein the managing comprises at least one of adding, modifying, viewing, or deleting the service experience policies.
[0020] In an aspect, the specific data source comprises at least one of network functions and data consumers.
[0021] In an aspect, the method further comprises visualizing, by the processing unit, the set of service level agreements on the user-friendly user interface (UI) based on one or more filters, wherein the one or more filters comprise at least one of a service experience of a network slice, an application, a user equipment (UE), and a geographical location.
[0022] In an aspect, the service experience analytics include details pertaining to a service experience of a network slice for a user equipment (UE) or a group of UEs; a variance and/or average of an observed Service Mean Opinion Score (MOS) reported to a network slice selection function (NSSF); and a suggestion of new QoS parameters to a policy control function (PCF) after correlating current QoS and traffic KPIs information.
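For illustration only, the average and variance of observed Service MOS values that paragraph [0022] describes reporting to the NSSF can be computed as below. MOS values lie on a 1-to-5 scale; the sample values are hypothetical.

```python
# Illustrative sketch: aggregate observed Service MOS samples into the
# average and (population) variance reported to the NSSF.
def mos_statistics(mos_samples: list[float]) -> tuple[float, float]:
    """Return (average, population variance) of observed MOS values."""
    n = len(mos_samples)
    avg = sum(mos_samples) / n
    var = sum((m - avg) ** 2 for m in mos_samples) / n
    return avg, var

avg, var = mos_statistics([4.0, 3.5, 4.5, 4.0])
print(avg, var)  # 4.0 0.125
```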
[0023] In an aspect, the method comprises facilitating, by the processing unit, a session management function (SMF) in (re)selecting user plane (UP) path, which includes user plane function (UPF) and Data Network Access Identifier (DNAI) selections, by providing observed service experience analytics based on the UP path.
[0024] Another aspect of the present disclosure relates to a system for providing service experience analytics. The system includes a transceiver unit configured to receive service experience information from a specific data source. The system further includes a determination unit configured to determine analytics as per consumer-defined policies based on the received service experience information. The system further includes a detection unit configured to detect a breach in at least one of a set of service level agreements (SLAs), a quality of service (QoS), and traffic key performance indicators (KPIs) defined per the consumer-defined policies. The system further includes a conveying unit configured to convey a closed loop report to end consumers upon detection of the breach. The system further includes a visualizing unit configured to visualize the determined analytics on a user-friendly user interface (UI) based on network data and the defined set of SLAs.
[0025] Yet another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for providing service experience analytics. The instructions include executable code which, when executed by a processor, may cause the processor to receive service experience information from a specific data source; determine analytics as per consumer-defined policies based on the received service experience information; detect a breach in at least one of a set of service level agreements (SLAs), a quality of service (QoS), and traffic key performance indicators (KPIs) defined per the consumer-defined policies; convey a closed loop report to end consumers upon detection of
the breach; and visualize the determined analytics on a user-friendly user interface (UI) based on network data and the set of SLAs.
BRIEF DESCRIPTION OF DRAWINGS
[0026] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0027] FIG. 1 illustrates an exemplary block diagram of a system for providing service experience analytics, in accordance with exemplary embodiments of the present disclosure.
[0028] FIG. 2 illustrates an exemplary block diagram of an architecture for implementing a system for providing service experience analytics in accordance with exemplary embodiments of the present disclosure.
[0029] FIG. 3 illustrates an exemplary flow diagram indicating the process for providing service experience analytics, in accordance with exemplary embodiments of the present disclosure.
[0030] FIG. 4 illustrates an exemplary block diagram of a computing device upon which an embodiment of the present disclosure may be implemented.
[0031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DESCRIPTION
[0032] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0033] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0034] It should be noted that the terms "mobile device", "user equipment", "user device", “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for
convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0035] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0036] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0038] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical, and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices, and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input-means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0039] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0040] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology are also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0041] Radio Access Technology (RAT) refers to the technology used by mobile devices/user equipment (UE) to connect to a cellular network. It refers to the specific protocols and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to different types of networks and provide optimal performance based on the available network resources.
[0042] The Session Management Function (SMF) is a core network element responsible for managing the sessions between user devices and the network. Acting as a conductor of sorts, SMF orchestrates the flow of data traffic, ensuring seamless connectivity and efficient resource utilization. One of its primary tasks involves managing the separation of the control plane, which handles connection setup and management, and the user plane, which is responsible for actual data transmission. SMF plays a crucial role in enforcing various policies, such as data usage limits and quality of service requirements, to optimize network performance and adhere to service agreements. Additionally, SMF is tasked with dynamically assigning IP addresses to user devices, enabling them to be identified and routed effectively within the network. Overall, SMF acts as a cornerstone in 5G networks, facilitating robust session management and enhancing the overall user experience.
[0043] The Network Data Analytics Function (NWDAF) is a key component in 5G networks that performs data analytics to provide valuable insights and intelligence. NWDAF collects and analyses network data to enhance various aspects of the network, such as performance, quality of service, and user experience. It helps in making informed decisions for network optimization, predictive maintenance, and overall network efficiency. NWDAF enables telecom operators to leverage data-driven approaches for better service delivery and customer satisfaction in 5G networks.
[0044] The Network Slice Selection Function (NSSF) stands as an element within the architecture of 5G networks, tasked with the selection and management of network slices. Network slicing is a revolutionary capability of 5G, allowing for the creation of multiple virtualized network instances, each customized to specific use cases or service requirements. NSSF plays a vital role in this process by employing policy-based decision-making algorithms to determine the most suitable network slice for a given user or application. This selection is based on factors such as service demands, network conditions, and operator policies. By ensuring efficient resource allocation and optimal utilization of network resources, NSSF facilitates the delivery of diverse services with varying performance objectives over a shared infrastructure.
[0045] The Mean Opinion Score (MOS) serves as a quantitative measure of perceived quality for real-time communication services, such as voice and video calls. Typically rated on a scale of 1 to 5, with higher scores indicating better quality, MOS provides valuable insights into user satisfaction and helps network operators and service providers identify and address quality issues.
[0046] The Quality of Service (QoS) encompasses a range of techniques and mechanisms aimed at ensuring reliable and predictable performance for communication services over a network. These techniques prioritize and allocate network resources based on service requirements, enabling support for various applications and use cases while maintaining performance guarantees and service differentiation. Together, NSSF, MOS, and QoS play integral roles in shaping the performance and user experience within 5G networks.
[0047] The User Plane Function (UPF) is a cornerstone of 5G network architecture, primarily tasked with managing user data traffic. Its responsibilities include efficiently routing, forwarding, and processing data packets between user devices and external networks. UPF implements various functions such as packet inspection, modification, and termination based on network policies and service requirements, ensuring smooth and reliable data transmission. Notably, UPF supports network slicing, enabling the creation of virtualized network instances tailored to specific use cases or service needs. Moreover, it plays a vital role in implementing quality of service (QoS) mechanisms to prioritize and optimize data traffic based on factors like latency and reliability.
[0048] As used herein, Data Network Access Identifier (DNAI) refers to a unique identifier used within telecommunications networks to designate specific data networks and facilitate the routing of user traffic. DNAI allows the network to select and manage the optimal data path for user sessions, enhancing service delivery and performance. It plays a crucial role in ensuring that user data is directed through the appropriate network segments, optimizing quality of service (QoS), and maintaining compliance with service level agreements (SLAs). By leveraging DNAI, networks can dynamically respond to varying traffic conditions, thus maintaining efficient and reliable communication services.
[0049] All modules, units, components used herein may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. Furthermore, all the units in the system are interconnected, so they can communicate and work together smoothly.
[0050] As discussed in the background section, traditional approaches to service experience analytics often lack the depth of analysis required to fully understand the factors influencing user experience. This includes granular details about network performance, application behaviour, device-specific issues, and the impact of geographic location on service quality. Without this level of detail, it is challenging for network operators to identify and address the root causes of service degradation. Many existing systems do not provide a mechanism for automatically adjusting service parameters based on the analysis of network data and SLAs. This means that even when issues are identified, manual intervention is often required to implement changes, leading to delays in resolving problems and improving service quality. The ability to visualize service level agreements in real-time is crucial for effective network management. However, existing solutions often fall short in this area, providing static reports or dashboards that do not offer the dynamic, filter-based views necessary for quickly identifying and addressing issues as they arise. The use of AI/ML based models for forecasting service level agreements is not well integrated into many existing systems. As a result, network operators are often reactive rather than proactive in their approach to network management, missing opportunities to anticipate and prevent service issues before they impact end users. Managing service experience policies can be cumbersome in existing systems, with limited flexibility to add, modify, view, or delete policies easily. This complexity hinders the ability of network operators to adapt to changing network conditions and user requirements.
[0051] The present disclosure provides a comprehensive solution to address the limitations in the art through several key advancements. Firstly, it enhances the drill-down capabilities by enabling a detailed analysis of service experience information. This includes capturing granular details related to user experience, application performance, device groups, and geographic locations. As a result, network operators can better understand and address the root causes of service degradation. Secondly, the disclosed system automates closed-loop reporting by leveraging AI/ML based NWDAF models. This allows for automatic adjustments of service parameters based on the analysis of network data and SLAs, eliminating the need for manual intervention, and reducing the time to resolve problems and improve service quality. Thirdly, the system provides real-time visualization of service level agreements through a user-friendly interface. This enables dynamic, filter-based views that help network operators quickly identify and address issues as they arise. Fourthly, the disclosure integrates predictive analytics using AI/ML based NWDAF models for forecasting service level agreements. This allows network operators to be proactive in their approach to network management, anticipating and preventing service issues before they impact end users. Lastly, the system simplifies the management of service experience policies by offering an intuitive interface for adding, modifying, viewing, or deleting policies with ease. This flexibility allows network operators to quickly adapt to changing network conditions and user requirements, ensuring that service quality is maintained at the highest level.
[0052] It would be appreciated by the person skilled in the art that the present disclosure addresses the problems in the art by providing a more detailed, automated, and user-friendly approach to service experience analytics in the 5G network and beyond.
[0053] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0054] Referring to FIG. 1, an exemplary block diagram of a system [100] for providing service experience analytics is shown, in accordance with the exemplary embodiments of the present disclosure. The system [100] is configured to provide service experience analytics, i.e., analysis of data related to the quality and performance of a service, with the help of the interconnection between the components/units of the system [100]. The system comprises at least a transceiver unit [102], at least a determinator unit [104], at least an identifier unit [106], at least an analyser unit [108], at least a display unit [110], and at least a storage unit [112].
[0055] A transceiver unit [102] of the system [100] is configured to receive service experience information from a specific data source. In an implementation of the present disclosure, the specific data source comprises at least one of network functions and data consumers. Further, the service experience is requested/subscribed by a consumer. Examples of network functions and data consumers include, but are not limited to, an application function (AF), an access and mobility management function (AMF), a session management function (SMF), a network slice selection function (NSSF), a policy control function (PCF), and the like, that generate or hold data relevant to service quality and user experience. For example, in a telecommunications network, the transceiver unit [102] might receive data from network functions like base stations or servers that monitor user activity and network performance. The data can include, but is not limited to, metrics such as signal strength, connection quality, and user behaviour patterns for assessing service experience and identifying any issues.
[0056] As used herein, service experience refers to the overall perception and satisfaction of a user while interacting with a service or network. The service experience encompasses various factors such as the quality of service (QoS), the speed and reliability of connections, user interface friendliness, and the responsiveness of customer support. Service experience is determined by both objective metrics, like network latency and uptime, and subjective feedback from users regarding their satisfaction and ease of use.
[0057] The service experience information comprises various data points that reflect the quality and effectiveness of a service as perceived by the user. The service experience information includes metrics such as network latency, connection stability, signal strength, data throughput, and any occurrences of service disruptions or breaches in service level agreements (SLAs). Additionally, it may contain user feedback, service usage patterns, and key performance indicators (KPIs) that provide insights into the user's interaction with the service. Further, the service experience is requested/subscribed by a consumer, meaning that users actively seek out or opt in to have their service experience monitored and analysed to ensure optimal performance and satisfaction.
[0058] The system comprises the determinator unit [104], which is communicatively coupled to the transceiver unit [102]. The determinator unit [104] is configured for determining analytics in accordance with one or more consumer-defined policies, which are based on service experience information received from the transceiver unit [102]. The analytics process involves an in-depth analysis of the quality of service provided, assessment of SLAs, and evaluation of traffic KPIs, all through the lens of the specified consumer policies. For example, a telecommunications service provider may receive service experience information indicating various performance metrics, such as data speed, connection stability, and user feedback. The consumer-defined policy might prioritize the analysis of data speed and user satisfaction during peak usage times. The determinator unit [104] processes this information to generate analytics. The analytics could reveal that data speed significantly drops between 6 PM and 9 PM in urban areas, correlating with increased user complaints about slow internet.
[0059] In an example, analytics includes the collection and analysis of data related to the service experience within a WLAN. This encompasses various performance metrics and key performance indicators (KPIs) such as connection stability, throughput, latency, signal strength, and packet loss. The analytics process identifies trends, abnormalities, and deviations from expected performance standards. For example, the analytics may include measuring the average signal strength and connection stability for user devices in a specific geographical area over a week. If a significant drop in signal strength is detected during peak hours, this would be highlighted as a potential abnormality requiring further investigation.
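By way of a non-limiting illustration, the peak-hour abnormality check described in the foregoing example may be sketched as follows; the sampling scheme, the -70 dBm policy threshold, and the sample values are hypothetical and chosen only to show the comparison against an expected performance standard:

```python
from statistics import mean

# Hypothetical per-hour signal-strength samples (dBm) for one coverage area;
# hours 18-21 model the peak-hour degradation from the example above.
samples = {hour: [-75.0 if 18 <= hour <= 21 else -65.0] * 4 for hour in range(24)}

def find_abnormal_hours(samples, threshold_dbm=-70.0):
    """Flag hours whose mean signal strength falls below the policy threshold."""
    return [hour for hour, values in sorted(samples.items())
            if mean(values) < threshold_dbm]

print(find_abnormal_hours(samples))  # → [18, 19, 20, 21]
```

Hours flagged by such a check would be surfaced as potential abnormalities requiring further investigation, as described above.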
[0060] Moreover, the determinator unit [104] facilitates training of the NWDAF model. It utilizes the derived analytics to train the NWDAF model [206], ensuring that the model can accurately interpret the data and make predictions or suggestions for service level parameter adjustments. The training process involves the application of machine learning algorithms to the analysed data, which refines the model's ability to forecast future service conditions and identify potential service level breaches before they occur.
[0061] Examples of NWDAF models include a traffic prediction model, a user behaviour analysis model, anomaly detection models, and a quality of experience (QoE) optimization model. The traffic prediction model forecasts network congestion by analysing historical user activity patterns, enabling pre-emptive resource allocation during peak times. The user behaviour analysis model tracks and analyses user trends, such as application usage and session durations, to optimize bandwidth for popular services like video streaming during peak hours. Anomaly detection models identify unusual patterns indicating potential issues like security threats or equipment malfunctions, prompting timely interventions. The quality of experience (QoE) optimization model enhances user satisfaction by analysing metrics like latency and packet loss, suggesting adjustments to improve service quality.
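As a non-limiting sketch of the traffic prediction idea described above, a trivial moving-average forecast is shown below; a deployed NWDAF model would use a trained machine-learning model rather than this stand-in, and the load figures are hypothetical:

```python
def forecast_next_load(history_mbps, window=3):
    """Naive moving-average forecast of the next interval's traffic load (Mbps).

    A stand-in for the traffic prediction model described above, used only to
    illustrate forecasting from historical activity patterns.
    """
    recent = history_mbps[-window:]
    return sum(recent) / len(recent)

hourly_load_mbps = [120, 135, 150, 310, 340, 360]  # hypothetical ramp toward peak
print(round(forecast_next_load(hourly_load_mbps), 1))  # → 336.7
```

A high forecast such as this would justify pre-emptive resource allocation before the next peak interval.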
[0062] Once trained, the NWDAF model [206] becomes a predictive tool under the purview of the determinator unit [104]. This model is then used to forecast service level agreements, taking into account end-user requirements and the array of collected data points. The capability to forecast is crucial for proactive network management, as it allows the network to adapt to changes in user behaviour, application demands, and other factors that influence network performance and user experience. The predictive insights generated are crucial for maintaining an optimum service level and ensuring that user experiences remain within the defined thresholds of the consumer-defined policies.
[0063] The system comprises the identifier unit [106] communicatively coupled to the determinator unit [104]. The identifier unit [106] is configured to detect a breach in at least one service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the one or more consumer-defined policies, which facilitates the monitoring and enforcement of network performance standards. For example, an SLA for a cloud storage service might guarantee a certain level of data retrieval speed. The SLA might state that 95% of data retrieval requests will be processed within 2 seconds. If the cloud storage service fails to meet this standard, the customer might be entitled to compensation or enhanced support services.
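The SLA check in the cloud-storage example above reduces to a simple percentile test, which may be sketched as follows as a non-limiting illustration; the latency samples are hypothetical:

```python
def sla_compliant(latencies_s, target_s=2.0, required_fraction=0.95):
    """Check the example SLA: at least 95% of retrievals complete within 2 s."""
    within = sum(1 for t in latencies_s if t <= target_s)
    return within / len(latencies_s) >= required_fraction

# Hypothetical retrieval latencies (seconds); two of ten exceed the 2 s target.
latencies = [0.8, 1.2, 1.9, 2.5, 1.1, 0.9, 1.4, 3.1, 1.0, 1.6]
print(sla_compliant(latencies))  # → False, i.e. a breach
```

Only 80% of the samples fall within the 2-second target, so the identifier unit [106] would treat this as a breach of the 95% threshold.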
[0064] In an example, the consumer-defined policies refer to customizable rules set by network administrators or service providers to manage and optimize network performance according to specific needs and preferences. These policies dictate how the network resources are allocated, how traffic is prioritized, and what performance thresholds are maintained.
[0065] For example, a network administrator might define a policy to prioritize traffic for video conferencing applications during business hours to ensure high-quality calls. The policy could specify minimum bandwidth and latency requirements for these applications to prevent disruptions.
[0066] In an example, the system comprises the identifier unit [106] to facilitate a session management function (SMF) in (re)selecting a user plane (UP) path, which includes a user plane function (UPF) and a Data Network Access Identifier (DNAI) selection, by providing observed service experience analytics based on the UP path. The observed service experience analytics refers to the real-time data collected and analysed to assess the actual performance experienced by users. This includes monitoring parameters like connection quality, throughput, latency, and user satisfaction to ensure compliance with SLAs and consumer-defined policies.
[0067] Upon analysing the service experience information processed by the determinator unit [104], the identifier unit [106] compares the current network performance against the predefined standards set within the SLAs, QoS, and traffic KPIs. Should there be a discrepancy, such as a performance metric falling below the agreed-upon thresholds, the identifier unit [106] recognizes this as a breach. This detection is vital for maintaining the integrity of the service delivery to the end-users and is crucial for the system's closed-loop reporting mechanism. Once a breach is detected, the identifier unit [106] triggers the analyser unit [108] to generate and convey reports or take corrective actions to address the issue.
[0068] Service Level Agreements (SLAs) are formal contracts between service providers and customers that outline the specific performance standards and service quality metrics that must be met. SLAs typically include parameters such as uptime, response time, resolution time, and service availability, among others. These agreements ensure that customers receive a consistent and reliable level of service, while also holding service providers accountable for maintaining these standards.
[0069] For example, an SLA between an internet service provider (ISP) and a business might specify that the ISP must provide 99.9% uptime each month. This means that the internet service can only be down for a total of approximately 43.8 minutes in any given month. If the service falls below this threshold, the SLA may include penalties, such as service credits or financial compensation to the business.
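The 43.8-minute figure above follows directly from the 99.9% uptime target applied to an average calendar month, as the following non-limiting arithmetic sketch shows:

```python
# Downtime budget implied by a 99.9% monthly uptime SLA, using the average
# calendar month (365.25 days / 12) as in the example above.
avg_minutes_per_month = 365.25 * 24 * 60 / 12   # ≈ 43830 minutes
allowed_downtime = avg_minutes_per_month * (1 - 0.999)
print(round(allowed_downtime, 1))  # → 43.8
```

A 30-day month would give a slightly tighter budget of 43.2 minutes, which is why the specification says "approximately".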
[0070] The system comprises the analyser unit [108] communicatively coupled to the identifier unit [106]. The analyser unit [108] is configured to convey a closed-loop report to one or more end consumers upon detection of a breach. The analyser unit [108] helps ensure that any deviation from the set service level agreements (SLAs), quality of service (QoS), and traffic key performance indicators (KPIs) is promptly addressed. When the identifier unit [106] detects a breach in service levels based on the consumer-defined policies, it triggers the analyser unit [108] to generate a detailed report outlining the specifics of the breach. This report includes information such as the nature of the breach, the affected service parameters, and the potential impact on end-user experience.
[0071] The closed-loop report is then communicated to the relevant end consumers, which could be network operators, service providers, or even the end-users themselves. This report serves as a feedback mechanism, enabling the stakeholders to take appropriate corrective actions to rectify the breach and restore service levels. The timely conveyance of these reports is essential for maintaining high service quality and ensuring customer satisfaction in the telecom network.
[0072] As used herein, a closed-loop report is a feedback mechanism that ensures continuous monitoring, reporting, and resolution of service issues. It involves detecting service breaches, analysing the cause, implementing corrective actions, and verifying the effectiveness of these actions.
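As a non-limiting sketch, the information such a closed-loop report might carry (nature of the breach, affected parameter, and impact, as described above) could be structured as follows; the field names are illustrative assumptions and are not mandated by any specification:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ClosedLoopReport:
    """Illustrative shape of the breach report described above."""
    breach_type: str   # e.g. "SLA", "QoS", or "traffic-KPI"
    parameter: str     # the affected service parameter
    measured: float    # observed value of the parameter
    threshold: float   # agreed-upon value per the consumer-defined policy
    impact: str        # potential impact on end-user experience
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

report = ClosedLoopReport("SLA", "data-retrieval latency", 3.1, 2.0,
                          "retrievals slower than the 2 s guarantee")
print(report.breach_type, report.parameter)
```

A report in this form could then be conveyed to the relevant end consumers and used to verify, on the next monitoring cycle, that corrective actions were effective.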
[0073] Thereafter, the display unit [110] is configured to visualise the determined analytics on a user-friendly user interface (UI) based on network data and defined SLAs. The display unit [110] further enables managing service experience policies on the user-friendly UI, wherein the management includes at least one of adding, modifying, viewing, or deleting the service experience policies. In an implementation of the present disclosure, providing a user-friendly interface for policy management enables users to efficiently customize and adjust policies.
[0074] In an implementation of the present disclosure, the system is further configured to visualize the set of service level agreements on the user-friendly user interface (UI) based on one or more filters, wherein the one or more filters comprises at least one of a service experience of a network slice, an application, a user equipment (UE) or group of UEs, and a geographical location. In an implementation of the present disclosure, geographical location refers to the physical location where network services are accessed. Further, the network slice is a virtual network instance that provides specific network functionalities for a particular service, and filters are applied using identifiers for each.
[0075] In an example, network slice identifiers may include specific virtual network segments dedicated to certain types of traffic or services. The application identifiers may be specific application types such as video streaming, online gaming, or video conferencing. The user equipment (UE) identifiers may be unique device IDs or user profiles. The geographical location identifiers may be specific areas or regions where the network performance is monitored and managed.
[0076] In an implementation of the present disclosure, the system recommends adjustments or updates to quality of service (QoS) parameters, which refer to the overall performance and reliability of a network or service, based on the correlation between current QoS metrics and traffic key performance indicators (KPIs) used to evaluate the performance of network traffic; however, the present disclosure is not limited thereto.
[0077] Referring to FIG. 2, an exemplary block diagram of an architecture for implementing a system for providing service experience analytics is shown, in accordance with the exemplary embodiments of the present invention. The architecture [200] comprises a plurality of data consumers [DC-A to DC-N] [202A to 202N] (collectively referred to as DCs [202] or individually referred to as DC [202] hereinafter). The DCs [202] may include, but are not limited to, an Access and Mobility Management Function (AMF), a Network Slice Selection Function (NSSF), and/or the like. Additionally, the architecture [200] comprises an NWDAF [204], which is responsible for performing data analytics on the network function data received from various DCs [202]. The architecture [200] further comprises the NWDAF model [206] and the NWDAF UI [208]. Also, in FIG. 2 only a few units are shown; however, the architecture [200] may comprise multiple such units or the architecture [200] may comprise any such number of said units, as required to implement the features of the present disclosure.
[0078] The architecture [200] comprises a plurality of data consumers DC-A to DC-N [202A to 202N], which may include various network functions such as an Access and Mobility Management Function (AMF), a Network Slice Selection Function (NSSF), and similar components. The data consumers serve as sources of service experience information which are essential for the analytics performed by the system. The NWDAF [204] performs data analytics on the information received from the DCs [202]. The analytics process involves determining, detecting, and forecasting based on consumer-defined policies, which includes detecting breaches in service level agreements (SLAs), quality of service (QoS), and traffic key performance indicators (KPIs).
[0079] Furthermore, the architecture [200] includes the NWDAF model [206], which is trained on the determined analytics to forecast service level agreements for the service experience as per end-user requirements. The NWDAF UI [208] acts as a user-friendly interface for real-time visualization and management of service experience policies. It allows users to add, modify, view, or delete service experience policies and to visualize service level agreements filtered by various criteria, including network slice, application, user equipment (UE), or geographical location. The system provides significant advantages over the existing standards by offering a nuanced view of service experiences and the ability to suggest QoS adjustments proactively. The unique aspect of this invention is the auto-detection of user experience breaches based on user-defined policies and the real-time reporting of service experience analytics with the aid of a user-friendly interface.
[0080] In a preferred implementation, the NWDAF [204] is configured to gather service experience information from a specific data source. The NWDAF [204] is further configured to compute analytics as per the consumer-defined policies. The NWDAF [204] is furthermore configured to convey closed-loop reporting to end consumers, via the NWDAF UI [208], whenever there is a breach in SLAs or QoS or traffic KPIs defined as per a consumer policy. The NWDAF [204] is furthermore configured to feed service experience analytics to the NWDAF model [206] for training the model. Based on the training, the NWDAF model [206] is configured to forecast service level agreements as per end-user needs. The NWDAF UI [208] is furthermore configured to manage (i.e., add/modify/view/delete) the service experience policies from the UI. It is pertinent to note that the system is capable of implementing the features that are obvious to a person skilled in the art in light of the disclosure as disclosed above, and the implementation of the system is not limited to the above disclosure.
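The gather, compute, detect, and report sequence of the preferred implementation described above may be summarized in the following non-limiting sketch; the function names, the latency metric, and the 50 ms policy threshold are illustrative assumptions only:

```python
# Minimal end-to-end sketch of the flow in the preceding paragraph:
# gather -> compute analytics -> detect breach -> convey closed-loop report.

def gather(sources):
    """Collect metric samples from several data sources into one list."""
    return [metric for source in sources for metric in source]

def compute_analytics(metrics):
    """Reduce raw samples to an analytics summary (here, average latency)."""
    return {"avg_latency_ms": sum(metrics) / len(metrics)}

def detect_breach(analytics, policy):
    """Compare analytics against the consumer-defined policy threshold."""
    return analytics["avg_latency_ms"] > policy["max_latency_ms"]

def closed_loop_report(analytics, policy):
    """Produce the report conveyed to end consumers upon a breach."""
    if detect_breach(analytics, policy):
        return (f"BREACH: avg latency {analytics['avg_latency_ms']:.1f} ms "
                f"exceeds {policy['max_latency_ms']} ms")
    return "OK"

policy = {"max_latency_ms": 50}
samples = gather([[40, 55, 70], [62, 48]])  # hypothetical samples from two sources
print(closed_loop_report(compute_analytics(samples), policy))
```

With the hypothetical samples above, the average latency of 55.0 ms exceeds the 50 ms policy and a breach report is produced; with compliant samples the function returns "OK", closing the loop.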
[0081] Referring to FIG. 3, an exemplary method flow diagram [300] for providing service experience analytics, i.e., analysis of data related to the quality and performance of a service, in accordance with exemplary embodiments of the present invention, is shown. In an implementation, the method [300] is performed by the system [100]. As shown in FIG. 3, the method [300] starts at step [302].
[0082] At step [304], the method [300] as disclosed by the present disclosure comprises receiving, by a transceiver unit [102] configured to transmit and receive signals, the service experience information from a specific data source. In an implementation of the present disclosure, the specific data source comprises at least one of network functions and data consumers. Further, the service experience is requested/subscribed by a consumer. Examples of network functions and data consumers include, but are not limited to, an application function (AF), an access and mobility management function (AMF), a session management function (SMF), a network slice selection function (NSSF), a policy control function (PCF) and the like that generate or hold data relevant to service quality and user experience. For example, in a telecommunications network, the transceiver unit [102] might receive data from network functions like base stations or servers that monitor user activity and network performance. The data can include, but is not limited to, metrics such as signal strength, connection quality, and user behaviour patterns for assessing service experience and identifying any issues.
[0083] As used herein, service experience refers to the overall perception and satisfaction of a user while interacting with a service or network. The service experience encompasses various factors such as the quality of service (QoS), the speed and reliability of connections, user interface friendliness, and the responsiveness of customer support. Service experience is determined by both objective metrics, like network latency and uptime, and subjective feedback from users regarding their satisfaction and ease of use.
[0084] The service experience information comprises various data points that reflect the quality and effectiveness of a service as perceived by the user. The service experience information includes metrics such as network latency, connection stability, signal strength, data throughput, and any occurrences of service disruptions or breaches in service level agreements (SLAs). Additionally, it may contain user feedback, service usage patterns, and key performance indicators (KPIs) that provide insights into the user's interaction with the service. Further, the service experience is requested/subscribed by a consumer, meaning that users actively seek out or opt in to have their service experience monitored and analysed to ensure optimal performance and satisfaction.
[0085] Now at step [306], the method [300] as disclosed by the present disclosure encompasses determining, by a determinator unit [104], analytics as per one or more consumer-defined policies based on the received service experience information. The analytics process involves an in-depth analysis of the quality of service provided, assessment of SLAs, and evaluation of traffic KPIs, all through the lens of the specified consumer policies. For example, a telecommunications service provider may receive service experience information indicating various performance metrics, such as data speed, connection stability, and user feedback. The consumer-defined policy might prioritize the analysis of data speed and user satisfaction during peak usage times. The determinator unit [104] processes this information to generate analytics. The analytics could reveal that data speed significantly drops between 6 PM and 9 PM in urban areas, correlating with increased user complaints about slow internet.
[0086] In an example, analytics includes the collection and analysis of data related to the service experience within a WLAN. This encompasses various performance metrics and key performance indicators (KPIs) such as connection stability, throughput, latency, signal strength, and packet loss. The analytics process identifies trends, abnormalities, and deviations from expected performance standards. For example, the analytics may include measuring the average signal strength and connection stability for user devices in a specific geographical area over a week. If a significant drop in signal strength is detected during peak hours, this would be highlighted as a potential abnormality requiring further investigation.
[0087] Moreover, the determinator unit [104] facilitates training of the NWDAF model. It utilizes the derived analytics to train the NWDAF model [206], ensuring that the model can accurately interpret the data and make predictions or suggestions for service level parameter adjustments. The training process involves the application of machine learning algorithms to the analysed data, which refines the model's ability to forecast future service conditions and identify potential service level breaches before they occur.
[0088] Examples of NWDAF models include a traffic prediction model, a user behaviour analysis model, anomaly detection models, and a quality of experience (QoE) optimization model. The traffic prediction model forecasts network congestion by analysing historical user activity patterns, enabling pre-emptive resource allocation during peak times. The user behaviour analysis model tracks and analyses user trends, such as application usage and session durations, to optimize bandwidth for popular services like video streaming during peak hours. Anomaly detection models identify unusual patterns indicating potential issues like security threats or equipment malfunctions, prompting timely interventions. The quality of experience (QoE) optimization model enhances user satisfaction by analysing metrics like latency and packet loss, suggesting adjustments to improve service quality.
[0089] Once trained, the NWDAF model [206] becomes a predictive tool under the purview of the determinator unit [104]. This model is then used to forecast service level agreements, taking into account end-user requirements and the array of collected data points. The capability to forecast is crucial for proactive network management, as it allows the network to adapt to changes in user behaviour, application demands, and other factors that influence network performance and user experience. The predictive insights generated are crucial for maintaining an optimum service level and ensuring that user experiences remain within the defined thresholds of the consumer-defined policies.
[0090] Next at step [308], the method [300] as disclosed by the present disclosure encompasses detecting, by an identifier unit [106], a breach in at least one service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the one or more consumer-defined policies.
[0091] For example, an SLA for a cloud storage service might guarantee a certain level of data retrieval speed. The SLA might state that 95% of data retrieval requests will be processed within 2 seconds. If the cloud storage service fails to meet this standard, the customer might be entitled to compensation or enhanced support services.
[0092] In an example, the consumer-defined policies refer to customizable rules set by network administrators or service providers to manage and optimize network performance according to specific needs and preferences. These policies dictate how the network resources are allocated, how traffic is prioritized, and what performance thresholds are maintained.
[0093] For example, a network administrator might define a policy to prioritize traffic for video conferencing applications during business hours to ensure high-quality calls. The policy could specify minimum bandwidth and latency requirements for these applications to prevent disruptions.
[0094] In an example, the system comprises the identifier unit [106] to facilitate a session management function (SMF) in (re)selecting a user plane (UP) path, which includes a user plane function (UPF) and a Data Network Access Identifier (DNAI) selection, by providing observed service experience analytics based on the UP path. The observed service experience analytics refers to the real-time data collected and analysed to assess the actual performance experienced by users. This includes monitoring parameters like connection quality, throughput, latency, and user satisfaction to ensure compliance with SLAs and consumer-defined policies.
[0095] Upon analysing the service experience information processed by the determinator unit [104], the identifier unit [106] compares the current network performance against the predefined standards set within the SLAs, QoS, and traffic KPIs. Should there be a discrepancy, such as a performance metric falling below the agreed-upon thresholds, the identifier unit [106] recognizes this as a breach. This detection is vital for maintaining the integrity of the service delivery to the end-users and is crucial for the system's closed-loop reporting mechanism. Once a breach is detected, the identifier unit [106] triggers the analyser unit [108] to generate and convey reports or take corrective actions to address the issue.
[0096] Service Level Agreements (SLAs) are formal contracts between service providers and customers that outline the specific performance standards and service quality metrics that must be met. SLAs typically include parameters such as uptime, response time, resolution time, and service availability, among others. These agreements ensure that customers receive a consistent and reliable level of service, while also holding service providers accountable for maintaining these standards.
[0097] For example, an SLA between an internet service provider (ISP) and a business might specify that the ISP must provide 99.9% uptime each month. This means that the internet service can only be down for a total of approximately 43.8 minutes in any given month. If the service falls below this threshold, the SLA may include penalties, such as service credits or financial compensation to the business.
[0098] Further, at step [310], the method [300] as disclosed by the present disclosure encompasses conveying, by an analyser unit [108], a closed loop report to one or more end consumers upon detection of the breach.
[0099] The analyser unit [108] helps ensure that any deviation from the set service level agreements (SLAs), quality of service (QoS), and traffic key performance indicators (KPIs) is promptly addressed. When the identifier unit [106] detects a breach in service levels based on the consumer-defined policies, it triggers the analyser unit [108] to generate a detailed report outlining the specifics of the breach. This report includes information such as the nature of the breach, the affected service parameters, and the potential impact on end-user experience.
[0100] The closed-loop report is then communicated to the relevant end consumers, which could be network operators, service providers, or even the end-users themselves. This report serves as a feedback mechanism, enabling the stakeholders to take appropriate corrective actions to rectify the breach and restore service levels. The timely conveyance of these reports is essential for maintaining high service quality and ensuring customer satisfaction in the telecom network.
[0101] As used herein, a closed-loop report is a feedback mechanism that ensures continuous monitoring, reporting, and resolution of service issues. It involves detecting service breaches, analysing the cause, implementing corrective actions, and verifying the effectiveness of these actions.
[0102] Furthermore, at step [312], the method [300] as disclosed by the present disclosure comprises visualizing, by a display unit [110], the determined analytics on a user-friendly user interface (UI) based on network data and the set of SLAs.
[0103] The method [300] further comprises managing service experience policies on the user-friendly UI, wherein the management includes at least one of adding, modifying, viewing, or deleting the service experience policies. In an implementation of the present disclosure, providing a user-friendly interface for policy management enables users to efficiently customize and adjust policies.
[0104] In an implementation of the present disclosure, the system is further configured to visualize the set of service level agreements on the user-friendly user interface (UI) based on one or more filters, wherein the one or more filters comprises at least one of a service experience of a network slice, an application, a user equipment (UE) or group of UEs, and a geographical location. In an implementation of the present disclosure, geographical location refers to the physical location where network services are accessed. Further, the network slice is a virtual network instance that provides specific network functionalities for a particular service, and filters are applied using identifiers for each.
[0105] In an example, network slice identifiers may include specific virtual network segments dedicated to certain types of traffic or services. The application identifiers may be specific application types such as video streaming, online gaming, or video conferencing. The user equipment (UE) identifiers may be unique device IDs or user profiles. The geographical location identifiers may be specific areas or regions where the network performance is monitored and managed.
[0106] Thereafter, the method terminates at step [314].
[0107] In an example, a telecom operator provides video streaming services to its customers. With the rise of online video platforms, ensuring a high-quality streaming experience is crucial for customer satisfaction, and the NWDAF can play a vital role in optimizing this service. The NWDAF collects data from various sources in real time, including network nodes, user devices, and streaming servers. This data includes metrics such as video resolution, buffering rate, network latency, and user location. Further, the NWDAF analyses this data in real time. By processing information about video quality, loading times, and interruptions, it identifies patterns and discrepancies in the streaming service. This enables identification of bottlenecks by pinpointing the specific network bottlenecks or server issues causing video buffering or quality degradation. For instance, it might identify that certain geographical areas experience slower streaming speeds during peak hours due to network congestion. Most importantly, the NWDAF uses predictive analytics to foresee potential issues. By identifying trends, it can anticipate when and where network congestion might occur, allowing operators to take proactive measures.
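The bottleneck identification step in this example can be sketched as a simple aggregation over buffering-rate samples per region and hour. The sample format and the threshold value are illustrative assumptions, not parameters specified by the disclosure.

```python
from collections import defaultdict

def find_congested_spans(samples, buffering_threshold=0.05):
    """Group (region, hour, buffering_rate) samples per region and hour of
    day, and flag every (region, hour) pair whose mean buffering rate
    exceeds the threshold, i.e. a likely congestion bottleneck."""
    buckets = defaultdict(list)
    for region, hour, rate in samples:
        buckets[(region, hour)].append(rate)
    return sorted(
        key for key, rates in buckets.items()
        if sum(rates) / len(rates) > buffering_threshold
    )
```

A flagged pair such as `("zone-a", 20)` would correspond to the example above of a geographical area experiencing slower streaming during a peak evening hour.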
[0108] With continuous monitoring and analysis, the operator can iteratively improve the streaming service. They can roll out updates and optimizations, ensuring that users receive the best possible video streaming experience. In this use case, the NWDAF's service experience analytics significantly contributes to customer satisfaction by ensuring high-quality video streaming, reducing buffering issues, and providing a seamless viewing experience for subscribers.
[0109] In an implementation, the present disclosure also encompasses a computer readable medium comprising instructions executable by a processor, wherein upon execution of said instructions, the processor is configured to: receive a service experience information from a specific data source; determine analytics as per one or more consumer-defined policies based on the received service experience information; detect a breach in at least one service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the one or more consumer-defined policies; convey a closed loop report to one or more end consumers upon detection of the breach; and visualize, via a display unit, the determined analytics on a user-friendly user interface (UI) based on a network data and the set of SLAs. As is evident from the above, the present disclosure provides a technically advanced solution for service experience analytics.
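The receive, determine, detect, convey, and visualize sequence performed by the processor can be sketched as below. The metric names, the policy representation as per-metric upper bounds, and the callback interfaces are assumptions made purely for illustration of the control flow.

```python
def detect_breaches(analytics, policies):
    """Compare each determined metric against its consumer-defined policy
    bound; return the names of the breached metrics (empty if none)."""
    breached = []
    for metric, bound in policies.items():
        value = analytics.get(metric)
        if value is not None and value > bound:
            breached.append(metric)
    return breached

def run_pipeline(service_info, policies, convey, visualize):
    """Receive -> determine -> detect -> convey -> visualize."""
    # Determine analytics as per the consumer-defined policies: keep only
    # the metrics the policies actually govern.
    analytics = {k: v for k, v in service_info.items() if k in policies}
    breached = detect_breaches(analytics, policies)
    if breached:
        # Convey a closed loop report upon detection of the breach.
        convey({"breached": breached, "analytics": analytics})
    # Visualize the determined analytics on the UI.
    visualize(analytics)
    return breached
```

The `convey` and `visualize` callbacks stand in for the analyser unit's reporting channel and the display unit's rendering, respectively.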
[0110] FIG. 4 illustrates an exemplary block diagram of a computing device [400] (also referred to herein as a computer system [400]) upon which an embodiment of the present disclosure may be implemented. In an implementation, the computing device implements the method for providing service experience analytics using the system [100]. In another implementation, the computing device itself implements the method for providing service experience analytics by using one or more units configured within the computing device, wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0111] The computing device [400] may include a bus [402] or other communication mechanism for communicating information, and a processor [404] coupled with the bus [402] for processing information. The processor [404] may be, for example, a general-purpose microprocessor. The computing device [400] may also include a main memory [406], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [402] for storing information and instructions to be executed by the processor [404]. The main memory [406] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [404]. Such instructions, when stored in non-transitory storage media accessible to the processor [404], render the computing device [400] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [400] further includes a read only memory (ROM) [408] or other static storage device coupled to the bus [402] for storing static information and instructions for the processor [404].
[0112] A storage device [410], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [402] for storing information and instructions. The computing device [400] may be coupled via the bus [402] to a display [412], such as a cathode ray tube (CRT), for displaying information to a computer user. An input device [414], including alphanumeric and other keys, may be coupled to the bus [402] for communicating information and command selections to the processor [404]. Another type of user input device may be a cursor controller [416], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [404], and for controlling cursor movement on the display [412]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0113] The computing device [400] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which in combination with the computing device [400] causes or programs the computing device [400] to be a special-purpose machine. According to one embodiment, the techniques herein are performed by the computing device [400] in response to the processor [404] executing one or more sequences of one or more instructions contained in the main memory [406]. Such instructions may be read into the main memory [406] from another storage medium, such as the storage device [410]. Execution of the sequences of instructions contained in the main memory [406] causes the processor [404] to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
[0114] The computing device [400] also may include a communication interface [418] coupled to the bus [402]. The communication interface [418] provides a two-way data communication coupling to a network link [420] that is connected to a local network [422]. For example, the communication interface [418] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [418] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [418] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0115] The computing device [400] can send messages and receive data, including program code, through the network(s), the network link [420] and the communication interface [418]. In the Internet example, a server [430] might transmit a requested code for an application program through the Internet [428], the Internet Service Provider (ISP) [426], the host [424], the local network [422] and the communication interface [418]. The received code may be executed by the processor [404] as it is received, and/or stored in the storage device [410], or other non-volatile storage for later execution.
[0116] An aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for providing service experience analytics. The instructions include executable code which, when executed by a processor, may cause the processor to: receive service experience information from a specific data source; determine analytics as per consumer-defined policies based on the received service experience information; detect a breach in at least one of a set of service level agreements (SLAs), a quality of service (QoS), and traffic key performance indicators (KPIs) defined as per the consumer-defined policies; convey a closed loop report to end consumers upon detection of the breach; and visualize the determined analytics on a user-friendly user interface (UI) based on network data and the set of SLAs.
[0117] The present disclosure provides a comprehensive solution to address the limitations in the art through several key advancements. Firstly, it enhances the drill-down capabilities by enabling a detailed analysis of service experience information. This includes capturing granular details related to user experience, application performance, device groups, and geographic locations. As a result, network operators can better understand and address the root causes of service degradation. Secondly, the disclosed system automates closed-loop reporting by leveraging AI/ML based NWDAF models. This allows for automatic adjustments of service parameters based on the analysis of network data and SLAs, eliminating the need for manual intervention and reducing the time to resolve problems and improve service quality. Thirdly, the system provides real-time visualization of service level agreements through a user-friendly interface. This enables dynamic, filter-based views that help network operators quickly identify and address issues as they arise. Fourthly, the disclosure integrates predictive analytics using AI/ML based NWDAF models for forecasting service level agreements. This allows network operators to be proactive in their approach to network management, anticipating and preventing service issues before they impact end users. Lastly, the system simplifies the management of service experience policies by offering an intuitive interface for adding, modifying, viewing, or deleting policies with ease. This flexibility allows network operators to quickly adapt to changing network conditions and user requirements, ensuring that service quality is maintained at the highest level.
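The add, modify, view, and delete policy operations described above can be sketched as a small in-memory store. The class and method names, and the representation of a policy as a dictionary of metric bounds, are illustrative assumptions rather than components of the disclosed system.

```python
class PolicyStore:
    """Minimal in-memory store for service experience policies, supporting
    the add/modify/view/delete management operations."""

    def __init__(self):
        self._policies = {}

    def add(self, name, bounds):
        """Register a new policy; refuse to overwrite an existing one."""
        if name in self._policies:
            raise ValueError(f"policy {name!r} already exists")
        self._policies[name] = dict(bounds)

    def modify(self, name, **changes):
        """Update selected bounds of an existing policy."""
        self._policies[name].update(changes)

    def view(self, name):
        """Return a copy of the policy so callers cannot mutate the store."""
        return dict(self._policies[name])

    def delete(self, name):
        """Remove the policy entirely."""
        del self._policies[name]
```

In a full system, such a store would sit behind the UI's policy management screens, with each screen action mapping to one of the four methods.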
[0118] As is evident from the above, the present disclosure provides a technically advanced solution for delivering comprehensive service experience analytics in a telecommunications network. By utilizing a transceiver unit [102] to receive service experience information from specific data sources, the system ensures accurate data collection. The determinator unit [104], configured to determine analytics based on consumer-defined policies, processes this information to generate actionable insights. The identification of breaches in service level agreements (SLAs) by the identifier unit [106] further enhances service reliability. The analyser unit [108] conveys closed-loop reports to end consumers upon detection of these breaches, facilitating prompt corrective actions. Additionally, the display unit [110] visualizes the determined analytics on a user-friendly interface, allowing consumers to manage service experience policies effectively. The incorporation of the NWDAF model training by the determinator unit [104] enables advanced forecasting of service level agreements, based on end-user requirements, enhancing predictive maintenance and proactive service management. The visualization of service level agreements with filters, such as network slice, application, user equipment (UE), and geographical location, by the display unit [110], allows for tailored insights and targeted optimizations. The service experience analytics include detailed information on network slice performance, observed mean opinion scores (MoS), and suggestions for new quality of service (QoS) parameters, correlating current QoS with traffic key performance indicators (KPIs). The system's ability to facilitate session management function (SMF) in reselecting user plane paths, including user plane function (UPF) and Data Network Access Identifier (DNAI) selections, based on observed service experience analytics, further optimizes network efficiency. 
By integrating these advanced analytics and management capabilities, the system presents a robust, efficient, and user-centric solution for enhancing telecommunications network performance and service quality, leading to significant operational benefits and improved user satisfaction.
[0119] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units, as disclosed in the disclosure, should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0120] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
I/We Claim:
1. A method for providing service experience analytics, the method comprising:
receiving, by a transceiver unit [102], a service experience information from a specific data source;
determining, by a determinator unit [104], analytics as per one or more consumer-defined policies based on the received service experience information;
detecting, by an identifier unit [106], a breach in at least one service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the one or more consumer-defined policies;
conveying, by an analyser unit [108], a closed loop report to one or more end consumers upon detection of the breach; and
visualizing, by a display unit [110], the determined analytics on a user-friendly user interface (UI) based on a network data and the set of SLAs.
2. The method as claimed in claim 1, further comprising training, by the determinator unit [104], an NWDAF model based on the determined analytics.
3. The method as claimed in claim 2, wherein the method comprises forecasting, by the identifier unit [106] using the trained NWDAF model, the set of service level agreements based on end user requirements.
4. The method as claimed in claim 1, further comprising managing, by the display unit [110], service experience policies on the user-friendly UI, wherein the management comprises at least one of adding, modifying, viewing, or deleting the service experience policies.
5. The method as claimed in claim 1, wherein the specific data source comprises at least one of a network function and a data consumer.
6. The method as claimed in claim 1, further comprising visualizing, by the display unit [110], the set of service level agreements on the user-friendly user interface (UI) based on one or more filters, wherein the one or more filters comprises at least one of a service experience of a network slice, an application, a user equipment (UE), and a geographical location.
7. The method as claimed in claim 6, wherein the service experience analytics include details pertaining to:
the service experience of the network slice for the user equipment (UE) or a group of UEs;
a variance and/or an average of an observed service mean opinion score (MoS) reported to a network slice selection function (NSSF); and
a suggestion of one or more new QoS parameters to a policy control function (PCF) after correlating current QoS and traffic KPIs information.
8. The method as claimed in claim 1, wherein the method comprises facilitating, by the identifier unit [106], a session management function (SMF) in (re)selecting a user plane (UP) path, which includes user plane function (UPF) and Data Network Access Identifier (DNAI) selections, by providing observed service experience analytics based on the UP path.
9. A system for providing service experience analytics, the system comprises:
a transceiver unit [102], configured to receive a service experience information from a specific data source;
a determinator unit [104], configured to determine analytics as per one or more consumer-defined policies based on the received service experience information;
an identifier unit [106], configured to detect a breach in at least one service level agreement (SLA) from a set of SLAs, a quality of service (QoS), and one or more traffic key performance indicators (KPIs) defined as per the one or more consumer-defined policies;
an analyser unit [108], configured to convey a closed loop report to one or more end consumers upon detection of the breach; and
a display unit [110], configured to visualize the determined analytics on a user-friendly user interface (UI) based on a network data and the set of SLAs.
10. The system as claimed in claim 9, wherein the system comprises the determinator unit [104] configured to train an NWDAF model based on the determined analytics.
11. The system as claimed in claim 10, wherein the system comprises the determinator unit [104] configured to forecast, using the trained NWDAF model, a set of service level agreements based on end user requirements.
12. The system as claimed in claim 9, wherein the system comprises the display unit [110] configured to manage service experience policies on the user-friendly UI, wherein the management comprises at least one of adding, modifying, viewing, or deleting the service experience policies.
13. The system as claimed in claim 9, wherein the specific data source comprises at least one of a network function and a data consumer.
14. The system as claimed in claim 9, wherein the system comprises the display unit [110] configured to visualize the set of service level agreements on the user-friendly user interface (UI) based on one or more filters, wherein the one or more filters comprises at least one of a service experience of a network slice, an application, a user equipment (UE), and a geographical location.
15. The system as claimed in claim 14, wherein the service experience analytics include details pertaining to:
the service experience of the network slice for the user equipment (UE) or a group of UEs;
a variance and/or an average of an observed service mean opinion score (MoS) reported to a network slice selection function (NSSF); and
a suggestion of one or more new QoS parameters to a policy control function (PCF) after correlating current QoS and traffic KPIs information.
16. The system as claimed in claim 9, wherein the system comprises the identifier unit [106] configured to facilitate a session management function (SMF) in (re)selecting a user plane (UP) path, which includes user plane function (UPF) and Data Network Access Identifier (DNAI) selections, by providing observed service experience analytics based on the UP path.