Abstract: The present disclosure relates to a system and a method (600) for enhancing network performance using customer data clustering. The method includes receiving (602) a user request. The method includes analyzing (604) the received user request by an analyzer engine. The analyzer engine operates in conjunction with an artificial intelligence/machine learning (AI/ML) engine. The method includes forming (606) a plurality of user clusters based on one or more parameters by analyzing the data records. The method includes identifying (608), using the AI/ML engine, one or more trends based on the plurality of formed user clusters. The identified trends indicate service usage patterns of the users of respective clusters. The method includes forecasting (610) one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance. Figure 6
FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
CLUSTERING
APPLICANT
380006, Gujarat, India; Nationality : India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure generally relates to enhancement of the
network performance. More particularly, the present disclosure relates to a system and a method for enhancing the network performance using data clustering.
BACKGROUND OF THE INVENTION
[0003] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0004] In the field of telecommunications, each subscriber has a different
usage pattern for data consumption in the network. The resources such as bandwidth required by the subscriber can exceed the capacity of hardware installed in the serving area. In such cases, the network gets overloaded, resulting in poor network performance. The variation in customer usage patterns due to factors such as sector of work, age, and locality can lead to unforeseen circumstances where the overall usage in a certain region might exceed the installed hardware's capacity. This
overload can result in errors and disconnections, affecting the reliability of the network.
[0005] There have been various approaches to address network overloads.
Techniques for network usage analysis and forecasting have been developed. However, none of the existing techniques fully address the challenges of enhancing network capabilities based on advanced customer data clustering and forecasting without manual intervention.
[0006] There is, therefore, a need in the art to provide a system and a method
that can mitigate the problems associated with the prior arts by forming clusters of customers based on usage patterns and leveraging AI/ML engines to predict and enhance network performance proactively.
OBJECTS OF THE INVENTION
[0007] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0008] An object of the present disclosure is to provide a system and a
method for enhancing network performance with data clustering.
[0009] Another object of the present disclosure is to allow dynamic
execution of one or more user subscriber plans for data usage.
[0010] Another object of the present disclosure is to prevent failures occurring in the network due to the limited capabilities of the hardware.
[0011] Another object of the present disclosure is to provide a system and a
method that are economical and easy to implement.
SUMMARY
[0012] The present disclosure discloses a system for enhancing network
performance using customer data clustering. The system includes a user interface and a processing unit. The user interface is configured to receive, from a user, a user request pertaining to network performance. The processing unit is coupled to the user interface. The processing unit includes an analyzer engine configured to analyze the received user request from the user interface. The analyzer engine operates in conjunction with an artificial intelligence/machine learning (AI/ML) engine. The analyzer engine is configured to form a plurality of user clusters based on one or more parameters by analyzing a plurality of normalized data records fetched from a distributed data lake (DDL) and a distributed file system (DFS). The AI/ML engine is configured to identify one or more trends based on the plurality of formed user clusters, where the one or more identified trends are indicative of service usage patterns of the users of respective clusters, and to forecast one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance.
[0013] In an embodiment, the system includes a receiving unit configured to receive a plurality of data records from at least one data source.
[0014] In an embodiment, the processing unit includes an ingestion layer configured to process the plurality of received data records to generate a plurality of processed data records, and a normalization layer configured to normalize the plurality of processed data records to generate the plurality of normalized data records.
[0015] In an embodiment, the system includes a streaming engine
configured to monitor the plurality of data records continuously in real-time.
[0016] In an embodiment, the distributed data lake (DDL) and the distributed file system (DFS) are configured to store the forecasted KPI values and the normalized data records.
[0017] In an embodiment, the AI/ML engine is configured to be trained
using the normalized data records.
[0018] In an embodiment, the one or more parameters include call usage,
data usage, and geographic location.
[0019] In an embodiment, the system includes an analyzer microservice configured to gather cumulative observations from both the streaming engine and the AI/ML engine.
[0020] In an embodiment, the user interface facilitates user requests and
interactions with the analyzer engine.
[0021] The present disclosure discloses a method for enhancing network
performance using customer data clustering. The method includes receiving, by a user interface, a user request pertaining to network performance from a user. The method includes analyzing, by an analyzer engine, the received user request. The analyzer engine operates in conjunction with a streaming engine and an artificial intelligence/machine learning (AI/ML) engine. The method includes forming, by the analyzer engine, a plurality of user clusters based on one or more parameters by analyzing a plurality of normalized data records fetched from a distributed data lake (DDL) and a distributed file system (DFS). The method includes identifying, using the AI/ML engine, one or more trends based on the plurality of formed user clusters. The one or more identified trends are indicative of service usage patterns of the users of respective clusters. The method includes forecasting, using the AI/ML engine, one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance.
[0022] In an aspect, the method further includes receiving, by a receiving
unit, a plurality of data records from at least one data source.
[0023] In an aspect, the method further includes processing, by an ingestion
layer, the plurality of received data records to generate a plurality of processed data records. The method further includes normalizing, by a normalization layer, the plurality of processed data records to generate the plurality of normalized data records.
[0024] In an aspect, the method further includes monitoring, using a
streaming engine, the plurality of data records continuously in real-time.
[0025] In an aspect, the method further includes storing the forecasted KPI
values and the normalized data records in the distributed data lake (DDL) and in the distributed file system (DFS).
[0026] In an aspect, the method further includes training the AI/ML engine
using the normalized data records.
[0027] In an aspect, the one or more parameters include call usage, data usage, and geographic location.
[0028] The present disclosure discloses a user equipment configured to
enhance network performance using customer data clustering. The user equipment includes a processor, and a computer readable storage medium storing programming instructions for execution by the processor. Under the programming instructions, the processor is configured to receive, by a user interface, a user request pertaining to network performance from a user. Under the programming instructions, the processor is configured to analyze, by an analyzer engine, the received user request, wherein the analyzer engine operates in conjunction with an artificial intelligence/machine learning (AI/ML) engine. Under the programming instructions, the processor is configured to form, by the analyzer engine, a plurality of user clusters based on one or more parameters by analyzing a plurality of normalized data records fetched from a distributed data lake (DDL) and a distributed file system (DFS). Under the programming instructions, the processor is configured to identify, using the AI/ML engine, one or more trends based on the plurality of formed user clusters, wherein the one or more identified trends are indicative of service usage patterns of the users of respective clusters. Under the programming instructions, the processor is configured to forecast, using the AI/ML engine, one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance.
[0029] The foregoing general description of the illustrative embodiments
and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF DRAWINGS
[0030] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0031] FIG. 1 illustrates an exemplary network architecture in which or with
which embodiments of the present disclosure may be implemented, in accordance with an embodiment of the present disclosure.
[0032] FIG. 2 illustrates an example block diagram of a system for
enhancing network performance using customer data clustering, in accordance with an embodiment of the present disclosure.
[0033] FIG. 3 illustrates a flow diagram representing an architecture
depicting the operations of the system, in accordance with some embodiments of the present disclosure.
[0034] FIGS. 4A and 4B illustrate exemplary representations of flow
diagrams representing a method for enhancing network performance in a network, in accordance with some embodiments of the present disclosure.
[0035] FIG. 5 illustrates an example computer system in which or with
which the embodiments of the present disclosure may be implemented.
[0036] FIG. 6 illustrates an exemplary flow diagram representing a method
for enhancing network performance using customer data clustering, in accordance with some embodiments of the present disclosure.
[0037] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCES –
100 – Network architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User equipments
106 – Network
108 – System
112 – Centralized server
202 – Receiving Unit
204 – Memory
206 – User Interface
208 – Processing Unit
210 – Database
212, 308, 422 – Analyzer Engine (Analyzer)
214, 318, 408, 426 – AI/ML Engine
216, 312, 424 – Streaming Engine
218 – Other Modules
220, 314, 412, 430 – Distributed File System (DFS)
222, 310, 410, 428 – Distributed Data Lake (DDL)
302, 402 – Data Records
304, 404 – Ingestion Layer
306, 406 – Normalization Layer
316, 420– User Interface
500 – Computer System
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read-Only Memory
550 – Mass Storage Device
560 – Communication Port(s)
570 – Processor
DETAILED DESCRIPTION
[0038] In the following description, for explanation, various specific details
are outlined in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0039] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0040] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0041] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0042] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0043] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0044] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular
forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0045] Embodiments herein relate to a system and a method for enhancing network performance with data clustering. In particular, the system monitors the usage pattern for a region. While monitoring the usage pattern, the system gathers the user data and clusters the users into a set of clusters, which are stored in a categorized manner for analysis. The clustering is performed based on a set of parameters such as call usage, data usage, and geographic location. Based on the clusters so formed, one or more trends can be identified to derive useful insights.
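The clustering described above can be sketched, for illustration only, as a minimal k-means over two of the named parameters (call usage and data usage); the sample values, cluster count, and naive initialization are assumptions and not part of this disclosure.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over (call_usage, data_usage) points.

    Naive deterministic initialization: the first k points serve as centers.
    """
    centers = list(points[:k])
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        assign = [
            min(range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            for p in points
        ]
        # Recompute each center as the mean of its assigned points.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return centers, assign

# Hypothetical normalized records: (call usage in minutes, data usage in GB) per user.
users = [(10, 1), (12, 2), (200, 50), (210, 55), (15, 1.5), (205, 52)]
centers, labels = kmeans(users, k=2)
# Users with similar usage patterns receive the same cluster label.
```

Trends such as those in the following paragraphs can then be read from how users move between the resulting clusters over successive runs.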
[0046] In an exemplary embodiment, a set of users may shift from a
category belonging to lower call usage to another category belonging to higher call usage. Accordingly, user behaviours can be estimated based on the changes made in the subscriber plans.
[0047] In another exemplary embodiment, based on the clusters formed, the
trends can indicate that the existing hardware capabilities will not be sufficient to meet future requirements and therefore, the hardware capabilities need to be increased to meet the increasing user demand. In this manner, errors and any sort of disconnections can be prevented, thereby enhancing the network performance as well as improving the user experience. It also helps in executing or launching the subscriber plans for different usage categories.
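The capacity check described above can be sketched, purely as an illustrative example, with a least-squares trend line fitted to a cluster's aggregate usage history and extrapolated forward; the usage series and capacity figure below are hypothetical.

```python
def forecast_usage(history, periods_ahead):
    """Fit y = a + b*x by least squares and extrapolate the usage trend."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Hypothetical monthly aggregate data usage (TB) for one cluster of users.
usage_tb = [40, 44, 49, 53, 58, 62]
capacity_tb = 80  # assumed capacity of the installed hardware
projected = forecast_usage(usage_tb, periods_ahead=6)
needs_upgrade = projected > capacity_tb  # flag that capacity must be increased
```

When the projection exceeds the installed capacity, the hardware capabilities can be increased proactively, before errors or disconnections occur.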
[0048] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGS. 1-6.
[0049] FIG. 1 illustrates an exemplary network architecture in which or with
which a system (108) for enhancing network performance using customer data clustering is implemented, in accordance with embodiments of the present disclosure.
[0050] Referring to FIG. 1, the network architecture (100) includes one or
more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more user equipments (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipments (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipments (104) are depicted in FIG. 1, any number of user equipments (104) may be included without departing from the scope of the ongoing description.
[0051] In an embodiment, the user equipment (104) includes smart devices
operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the user equipment (104) may include, but is not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0052] In an embodiment, the user equipment (104) includes, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) includes, but is not limited to, any electrical, electronic, electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity (110), such as a touch pad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used.
[0053] Referring to FIG. 1, the user equipment (104) communicates with
the system (108), through the network (106). In an embodiment, the network (106) includes at least one of a Fifth Generation (5G) network, a 6G network, or the like. The network (106) enables the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) includes a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) is implemented as, or includes, any of a variety of different communication technologies, such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0054] In another exemplary embodiment, the centralized server (112) includes, by way of example but not limitation, one or more of a stand-alone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
[0055] In an embodiment, the network (106) is further configured with the
centralized server (112) including a database, where all output is stored as part of the operational records. It can be retrieved whenever there is a need to reference this output in the future.
[0056] In an embodiment, the computing device (104) associated with one
or more users (102) may transmit the at least one captured data packet over a point-to-point or point-to-multipoint communication channel or network (106) to the system (108).
[0057] In an embodiment, the computing device (104) may involve
collection, analysis, and sharing of data received from the system (108) via the communication network (106).
[0058] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0059] FIG. 2 illustrates an example block diagram (200) of the system
(108), in accordance with an embodiment of the present disclosure.
[0060] The system includes a receiving unit (202), a user interface (206),
and a processing unit (208). The receiving unit (202) is configured to receive a plurality of data records from at least one data source. In an example, the receiving unit (202) is configured to receive the data records of the plurality of user equipments. In an aspect, the receiving unit (202) is configured to receive the data records directly from the user equipment, from a plurality of network modules, or from any other sources (e.g., third-party sources). In an example, the data records may also be stored in cloud-based services, either provided by the network operator or by third-party service providers. These could include storage services, databases, and content delivery networks. In another example, the user records may be received from subscriber data management (SDM) systems. The SDM systems manage subscriber data across different generations of networks and may integrate with 5G core network functions to ensure seamless service continuity. In an example, the at least one data source includes a base station (eNodeB), Evolved Packet Core (EPC) components such as the Serving Gateway (SGW) and Packet Data Network Gateway (PGW), the Policy and Charging Rules Function (PCRF), the Home Subscriber Server (HSS), a gNodeB base station, the Access and Mobility Management Function (AMF), the Session Management Function (SMF), the User Plane Function (UPF), the Network Slice Selection Function (NSSF), and the Authentication Server Function (AUSF).
[0061] The data records include various pieces of information pertaining to
network activities, usage, subscribers, and more. The plurality of data records is essential for network management, optimization, billing, security, and troubleshooting purposes. In an example, the plurality of data records may include session records, UE records, traffic records, billing records, event logs records, authentication and authorization records, and location records. Session records include information about active user sessions, including start time, duration, data usage, and session termination details. UE records include data related to individual user devices connected to the network, such as device type, capabilities, location, and connection history. Traffic records include details about data traffic flow within the network, including volume, source, destination, protocol, current data usage,
call metrics, and quality of service (QoS) parameters. Billing records include information necessary for billing and accounting purposes, such as usage details, service plans, tariffs, and subscriber identifiers. Event logs include records of network events, alarms, errors, and system activities for monitoring, troubleshooting, and auditing purposes. Location records include data related to the geographical location of users or devices, which may be used for location-based services, network optimization, or emergency services. Authentication and authorization records include information about user authentication attempts, authorization status, and security-related events for ensuring network security and access control. In an aspect, the one or more data sources may include user equipments, databases, file systems, APIs, sensors, or any other systems capable of providing data.
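Purely as an illustrative sketch, one of the record types enumerated above (a session record) might be modeled as follows; the field names are assumptions, not a schema defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    """Hypothetical session record carrying the fields described above."""
    subscriber_id: str
    start_time: str         # session start, e.g. an ISO 8601 timestamp
    duration_s: int         # session duration in seconds
    data_usage_mb: float    # data consumed during the session
    termination_cause: str  # e.g. "normal" or "network_error"

rec = SessionRecord("sub-001", "2024-01-15T09:30:00Z", 3600, 512.5, "normal")
```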
[0062] The user interface is configured to receive, from a user, a user request pertaining to network performance. For example, the user request may include a number of data sources selected by the user. In another example, the user request may include one or more Key Performance Indicators (KPIs) that the user wishes to assess or monitor regarding network performance. For instance, the users may request data from specific sources or indicate KPIs such as latency, throughput, reliability, or other metrics to evaluate network performance effectively.
[0063] The processing unit (208) may be implemented as one or more
microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the processing unit (208) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-
access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0064] In an embodiment, the user interface (206) may comprise a variety
of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, a processing unit (208) and a database (210). Further, the processing unit (208) may include one or more engine(s) such as, but not limited to, an input/output engine, an identification engine, and an optimization engine.
[0065] In an embodiment, the processing unit (208) may be implemented as
a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing unit (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing unit (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing unit (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing unit (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing unit (208) may be implemented by electronic circuitry.
[0066] The processing unit (208) is coupled to the receiving unit (202) and
receives the plurality of data records. The processing unit (208) includes an
ingestion layer, a normalization layer, an analyzer engine (212), an AI/ML engine (214), a streaming engine (216), and other modules (218). The ingestion layer is configured to collect, process, and organize incoming data into a cohesive format for storage and analysis. The ingestion layer systematically retrieves data from diverse data sources, ranging from databases and APIs to streaming platforms and IoT devices. Following collection, the data undergoes transformation, where it is cleansed, standardized, and enriched to ensure consistency and compatibility with downstream processes. Subsequently, the transformed data is loaded into designated data systems, such as data warehouses or lakes, facilitating accessibility and utilization for analytical insights and decision-making.
[0067] The normalization layer is configured to systematically organize and
standardize the received data to eliminate redundancies, enhance integrity, and streamline accessibility. The normalization layer is configured to employ principles of normalization to minimize duplication and remove inconsistencies, thereby mitigating data anomalies. The normalization layer incorporates data cleaning procedures to rectify errors, reconcile discrepancies, and address missing values, ensuring the integrity and completeness of the dataset. Furthermore, the normalization layer orchestrates the integration of data from heterogeneous sources, harmonizing disparate datasets to establish a unified and coherent representation. Through rigorous validation processes, the normalization layer verifies data accuracy, completeness, and adherence to predefined quality standards and business rules, culminating in a refined dataset ready for analysis and decision-making processes.
[0068] The at least one data source and the user interface (206) are coupled
to the ingestion layer of the system. The ingestion layer is configured to process the plurality of received data records. The ingestion layer is responsible for collecting, ingesting, and initially processing raw data from data sources before it is further transformed, stored, or analyzed. In an aspect, the ingestion layer acts as an entry point for the data records from the data source(s) into the system (108). In an aspect, the ingestion layer facilitates a seamless and efficient flow of data from the data
sources to downstream processing pipelines. The ingestion layer performs various operations such as data collection, data ingestion, data validation, and data routing.
[0069] The ingestion layer within the processing engine gathers data from
various sources and forwards it to the data processing systems. The ingestion layer processes incoming data by validating it and routing it to the normalization layer and streaming engine.
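By way of a non-limiting illustration, the validate-and-route behaviour of the ingestion layer may be sketched as follows; the field names, the validity rules, and the queue structure are assumptions of this sketch rather than features of the disclosure.

```python
# Hypothetical sketch of the ingestion layer's validate-and-route step.
# Field names and validity rules are illustrative assumptions only.

REQUIRED_FIELDS = {"user_id", "call_minutes", "data_mb", "region"}

def validate(record):
    """Accept a record only if all required fields are present and the
    numeric usage values are non-negative."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    return record["call_minutes"] >= 0 and record["data_mb"] >= 0

def ingest(records):
    """Route each valid record to both the normalization queue and the
    streaming queue; collect invalid records separately."""
    to_normalization, to_streaming, rejected = [], [], []
    for rec in records:
        if validate(rec):
            to_normalization.append(rec)
            to_streaming.append(rec)
        else:
            rejected.append(rec)
    return to_normalization, to_streaming, rejected

good = {"user_id": 1, "call_minutes": 30, "data_mb": 512.0, "region": "west"}
bad = {"user_id": 2, "call_minutes": -5, "data_mb": 10.0, "region": "east"}
norm_q, stream_q, rejects = ingest([good, bad])
```

In this sketch, the valid record reaches both downstream queues, mirroring the dual routing to the normalization layer and the streaming engine described above.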
[0070] The normalization layer is configured to normalize a plurality of
processed data records to generate a plurality of normalized data records.
[0071] The normalization layer ensures that the data is processed uniformly,
making it suitable for analysis. The streaming engine (216) receives data from connected subsystems and streams the received data to the user interface (206) in support of the distributed data lake (DDL) (222). This real-time monitoring of data flows by the streaming engine helps promptly identify and respond to any irregular data usage patterns.
[0072] The analyzer engine (212) is configured to analyze the user requests
that come through the user interface (206). The analyzer engine (212) operates in conjunction with the streaming engine (216) and the AI/ML engine (214). On receiving the user request, the analyzer engine is configured to fetch the plurality of normalized data records from the distributed data lake (DDL) and the distributed file system (DFS). On receiving the plurality of normalized data records, the analyzer engine is configured to form a plurality of user clusters based on one or more parameters by analyzing the plurality of fetched normalized data records. In an example, the one or more parameters include current call usage patterns, call metrics (including call duration, frequency, and type), data usage volumes, and geographical locations of users. The analyzer engine forms meaningful clusters that group users exhibiting similar behaviors and characteristics.
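By way of a non-limiting illustration, the cluster-forming operation described above may be sketched with a plain k-means routine. The two-feature representation (call minutes, data megabytes), the choice of k-means, and all data values are assumptions of this sketch and do not limit the disclosure.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Hypothetical per-user records: (call_minutes, data_mb).
usage = [(10, 200), (12, 250), (11, 230), (300, 9000), (320, 8800), (310, 9100)]
clusters = kmeans(usage, k=2)
# The routine separates light users from heavy users into two clusters.
```

Additional parameters such as call frequency, call type, and geographical location could be appended as further coordinates of each point without changing the routine.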
[0073] The AI/ML engine is configured to receive the plurality of formed user clusters from the analyzer engine. The AI/ML engine is further configured to identify one or more trends based on the plurality of formed user clusters and to forecast one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance. The one or more identified trends are indicative of service usage patterns of the users of the respective clusters. In an example, the AI/ML engine deploys advanced algorithms for predicting future trends based on the plurality of formed user clusters. For example, the AI/ML engine can forecast hardware capabilities needed to meet future requirements by analyzing patterns in current data usage and call metrics. This predictive analysis is utilized for anticipating potential issues, allowing for proactive measures to be taken to enhance network performance.
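As a simplified, non-limiting sketch of the forecasting step, a least-squares linear trend may be fitted to a cluster's historical KPI series and extrapolated; the series values and the choice of a linear model are assumptions made for illustration.

```python
def fit_trend(series):
    """Ordinary least squares for y = a + b*t over t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a, b

def forecast(series, steps_ahead):
    """Extrapolate the fitted trend `steps_ahead` periods past the series."""
    a, b = fit_trend(series)
    return a + b * (len(series) - 1 + steps_ahead)

# Hypothetical monthly average data usage (GB) for one user cluster.
history = [10.0, 11.0, 12.0, 13.0, 14.0]
next_month = forecast(history, 1)  # → 15.0 for this perfectly linear series
```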
[0074] Cumulative results from both the streaming engine and the AI/ML
engine are gathered on the analyzer engine. Such integration simplifies the monitoring process for the user, providing a comprehensive overview of data trends and enabling efficient decision-making.
[0075] The normalized data records are securely stored in the distributed
data lake (DDL) (222) and the distributed file system (DFS) (220). These storage solutions ensure that data is readily accessible for further analysis and forecasting. The database (210) is another storage component that holds processed data records and forecasted one or more Key Performance Indicator (KPI) values, which represent data usage and call usage. Key Performance Indicators (KPIs) in the network are metrics used to evaluate and measure the performance, efficiency, and effectiveness of various aspects of the network's operations. These KPIs provide insights into the network's health, quality of service, user experience, and adherence to service level agreements. The stored information is utilized for generating insights into customer behaviour and network performance, facilitating the development of targeted strategies to improve service delivery.
[0076] The AI/ML engine can identify trends indicating a surge in data
usage in a particular region by analyzing historical data patterns, current usage statistics, and various other relevant factors. The AI/ML engine collects and aggregates data related to network usage, such as the volume of data transmitted,
the number of active users, peak usage times, and geographical information about network traffic. Relevant features are extracted or engineered from the collected data. These features may include temporal patterns (e.g., hourly, daily, weekly trends), spatial patterns (e.g., geographical location), demographic information (e.g., population density), and external factors (e.g., events, holidays).
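The feature-engineering step named above may, purely by way of example, derive temporal features from a record timestamp as follows; the record layout and the use of ISO 8601 timestamps are assumptions of this sketch.

```python
from datetime import datetime

def temporal_features(timestamp_iso):
    """Derive hour-of-day, day-of-week, and a weekend flag from an
    ISO 8601 timestamp string."""
    ts = datetime.fromisoformat(timestamp_iso)
    return {
        "hour": ts.hour,                  # hourly usage pattern
        "weekday": ts.weekday(),          # 0 = Monday ... 6 = Sunday
        "is_weekend": ts.weekday() >= 5,  # coarse weekly pattern
    }

features = temporal_features("2024-06-15T18:30:00")  # a Saturday evening
```

Spatial, demographic, and external-factor features would be derived analogously from the corresponding record fields.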
[0077] By analyzing data usage trends in real-time, the AI/ML engine can
detect patterns indicating a surge in demand in a particular region. Once identified, the system can then forecast the required bandwidth and hardware adjustments needed to accommodate this increase in demand. This proactive approach helps prevent potential overloads and ensures consistent network performance for users in that region.
[0078] The user interface (UI) allows the users to interact with the system,
submit queries, and receive analyzed data and forecasts. This empowers network administrators and operators to make informed decisions and take proactive measures to optimize network performance and user experience.
[0079] By integrating AI/ML technology with a user-friendly interface, the
system is able to provide a comprehensive network management solution that enables efficient resource allocation and proactive capacity planning and, ultimately, enhances the overall reliability and quality of service for subscribers.
[0080] Although FIG. 2 shows exemplary components of the system (108),
in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
[0081] FIG. 3 illustrates an exemplary block diagram (300) of the system
(108), showing the detailed workflow of data processing and analysis, in accordance with an embodiment of the present disclosure.
[0082] In an aspect, the system may include several key components that
interact to enhance network performance using customer data clustering.
[0083] The system receives the data records (302), which are the initial data
inputs received from various data sources. These records can include information about call usage, data usage, and other relevant parameters. The data records (302) are sent to the ingestion layer for further processing. The data records (302) are the raw data entities or records collected from the one or more data sources. The data records (302) may be logs, sensor readings, transaction data, or any other form of data generated by systems or users.
[0084] The ingestion layer (304) is responsible for gathering data from
different sources and forwarding it to the data processing systems. This layer processes incoming data by validating it and routing it to the normalization layer (306) and the streaming engine (312). The ingestion layer (304) ensures that data is accurately captured and prepared for subsequent processing steps. The normalization layer (306) facilitates the integration of diverse data types and sources, such as data from distributed data lakes (DDL) and distributed file systems (DFS), into a unified format that can be effectively analyzed by subsequent components like the analyzer engine (308) and AI/ML engine (318). By normalizing data, the normalization layer (306) enhances the accuracy and reliability of insights derived from analytics processes.
[0085] The normalization layer (306) ensures that the data is processed
uniformly. This involves transforming the raw data into a standardized format that can be efficiently analyzed. Normalization is implemented to maintain data consistency and quality, enabling reliable analysis. The normalized data records are then stored in the distributed data lake (DDL) (310) and the distributed file system (DFS) (314).
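A minimal, non-limiting sketch of such standardization is per-field z-score normalization; the assumption of numeric usage fields, the field names, and the values are illustrative only.

```python
import math

def zscore_normalize(records, fields):
    """Rescale each named field to zero mean and unit variance across
    the given records, leaving the inputs untouched."""
    out = [dict(r) for r in records]
    for f in fields:
        vals = [r[f] for r in records]
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) or 1.0
        for r in out:
            r[f] = (r[f] - mean) / std
    return out

raw = [{"call_minutes": 10, "data_mb": 100},
       {"call_minutes": 30, "data_mb": 300}]
norm = zscore_normalize(raw, ["call_minutes", "data_mb"])
# Each listed field now has zero mean and unit variance across the records.
```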
[0086] The DDL (310) serves as a storage repository for the normalized data
records. It allows for efficient data management and retrieval, supporting large-
scale data storage and analysis. The DDL (310) ensures that the data is organized and accessible for real-time and historical analysis.
[0087] The distributed file system (DFS) (314) provides a distributed
storage solution that ensures data redundancy and availability. It works in conjunction with the DDL to store and manage the normalized data records. The DFS (314) ensures that the data is protected against loss and is available for analysis even in the event of hardware failures.
[0088] The analyzer engine (308) is a component that analyzes user requests
from the user interface. The user requests may pertain to actions or queries related to network performance, service quality, or specific data insights. In an example, the user requests may be performance monitoring requests, service quality assessment requests, usage analytics requests, and predictive insights requests. The analyzer engine (308) operates in conjunction with a streaming engine (312) and an AI/ML engine (318). The analyzer engine (308) analyzes user behavior by forming clusters based on parameters such as call usage, data consumption, and geographic location. By forming the user clusters with similar patterns and behaviors, the analyzer engine can identify significant trends within each cluster. By clustering users, the analyzer engine (308), in conjunction with the AI/ML engine (318), can detect patterns and trends that inform network management strategies.
[0089] The streaming engine (312) continuously monitors data flows in real
time. It receives data from connected subsystems and streams the received data to the user interface (316), in support of the DDL (310). This real-time monitoring helps in promptly identifying and responding to any irregularities in data usage patterns. The streaming engine (312) ensures that the system can react swiftly to changes in network conditions. The streaming engine (312) handles real-time data processing and analysis tasks within the system. The streaming engine (312) operates in conjunction with other components to continuously ingest, process, and analyze streaming data as it flows through the network. The streaming engine (312) supports functionalities such as real-time monitoring of network performance
metrics, immediate detection of anomalies or patterns in data streams, and timely response to dynamic changes in user behavior or network conditions. The streaming engine (312) complements batch processing capabilities provided by other components, ensuring that both real-time and historical data insights contribute to proactive network management and decision-making.
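One non-limiting way to realize the immediate anomaly detection described above is a rolling z-score check over recent samples; the window size, the threshold, and the traffic values below are illustrative assumptions.

```python
import math
from collections import deque

class RollingAnomalyDetector:
    """Flag a sample that deviates strongly from a rolling window of
    recently observed values."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to the window,
        then add it to the window."""
        anomalous = False
        if len(self.window) >= 5:  # require a few samples before judging
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1.0
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

det = RollingAnomalyDetector()
stream = [100, 102, 99, 101, 100, 98, 103, 500]  # sudden surge at the end
flags = [det.observe(v) for v in stream]  # only the final sample is flagged
```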
[0090] The AI/ML engine (318) deploys advanced algorithms to predict
future trends and optimize network performance. This engine is crucial in transforming raw data into actionable insights through machine learning and artificial intelligence techniques.
[0091] The AI/ML engine (318) connects to several key components of the
system, including the distributed data lake (DDL) (310), the distributed file system (DFS) (314), and the normalization layer (306). These connections ensure that the AI/ML engine has access to a continuous stream of high-quality, standardized data necessary for training and making accurate predictions.
[0092] The normalization layer (306) processes raw data records received
from the ingestion layer (304). The normalization process involves cleaning, transforming, and standardizing the data to ensure consistency and reliability. The normalized data is then stored in both the DDL (310) and the DFS (314). The DDL (310) serves as a centralized repository that supports efficient data management and retrieval, while the DFS (314) ensures data redundancy and availability. The AI/ML engine (318) accesses the normalized data records stored in the DDL (310) and DFS (314) for training purposes. By leveraging the large volumes of standardized data, the AI/ML engine can apply machine learning algorithms to identify patterns and trends that are not immediately apparent. This process involves several steps:
[0093] The AI/ML engine (318) retrieves relevant data from the DDL (310)
and the DFS (314). This data includes various parameters such as call usage, data usage, and other factors that influence network performance. Once the data is retrieved, the AI/ML engine (318) processes it to create features that can be used in machine learning models. This step involves selecting, modifying, and constructing
variables that capture the underlying patterns in the data. Using the features derived from the normalized data, the AI/ML engine (318) trains machine learning models. This involves feeding the data into algorithms that learn the relationships between input features and target variables, such as key performance indicators (KPIs) representing data usage and call usage. Common algorithms used for this purpose include regression models, decision trees, and neural networks. The trained models are evaluated using validation techniques to ensure accuracy and reliability. This evaluation helps in fine-tuning the models and selecting the best-performing algorithms for deployment.
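The train-and-evaluate loop of this paragraph may be sketched, under simplifying assumptions, as fitting a linear model on a training split of a KPI series and validating it on a held-out split with mean absolute error; the data and the split sizes are invented for illustration.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def mean_absolute_error(model, xs, ys):
    """Average absolute gap between predictions and held-out values."""
    a, b = model
    return sum(abs((a + b * x) - y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical KPI series: months 0..7, near-linear growth in data usage (GB).
months = list(range(8))
kpi = [5.0, 6.1, 6.9, 8.0, 9.1, 9.9, 11.0, 12.1]
model = fit_linear(months[:6], kpi[:6])                   # train on 6 months
error = mean_absolute_error(model, months[6:], kpi[6:])   # validate on 2 months
```

A small validation error on the held-out months indicates, for this toy series, that the fitted trend generalizes; richer models such as decision trees or neural networks would be evaluated the same way.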
[0094] The AI/ML engine (318) continuously updates the models as new
data becomes available, ensuring that the predictions remain accurate and relevant. By analyzing historical data and identifying trends, the AI/ML engine (318) can forecast future network demands and potential issues. This proactive approach enables the system to optimize network resources, improve service quality, and anticipate user needs.
[0095] By leveraging the analytical capabilities of the AI/ML engine (318)
in conjunction with the clustering performed by the analyzer engine, the system can detect new patterns and trends in user activities. These insights are crucial for informing network management strategies, including capacity planning, resource allocation, service optimization, and proactive maintenance. By understanding how different user segments interact with the network, operators can enhance service delivery, anticipate demand fluctuations, and improve overall network efficiency and reliability. This collaborative approach between the analyzer engine, streaming engine, and AI/ML engine enables data-driven decision-making and supports continuous improvement in network performance and user experience.
[0096] The user interface (316) facilitates user requests and interactions
with the analyzer engine (308). It provides a platform for users to submit queries and receive analyzed data and forecasts, making it an integral part of the overall network management solution. The user interface (316) enables users to interact
with the system, access insights, and make informed decisions based on the analyzed data.
[0097] The database stores the processed data records and the forecasted
KPI values. This stored information is vital for generating insights into customer behaviour and network performance, enabling the development of targeted strategies to improve service delivery. The database ensures that historical data is available for trend analysis and strategic planning.
[0098] The components are interconnected as follows – Data Records (302)
are sent to the ingestion layer (304), which forwards them to the normalization layer (306). The normalized data is then stored in the DDL (310) and the DFS (314). The analyzer engine (308) works with the streaming engine (312) and AI/ML engine (318) to analyze data and forecast trends. The processed data and predictions are stored in the database and made available to users through the user interface (316).
[0099] FIGS. 4A and 4B illustrate exemplary representations of flow
diagrams (400A, 400B, respectively) representing a method for enhancing network performance in a network, in accordance with some embodiments of the present disclosure.
[00100] As illustrated in FIG. 4A, the method begins at step (434) where the
data records (402) are received from the at least one data source. In step (436), the data records are sent to the ingestion layer (404), which processes the data. Following processing, in step (438), the data is routed to the normalization layer (406) for standardizing. The normalized data records are then stored, as shown in steps (438) and (440), in the distributed data lake (DDL) (410) and the distributed file system (DFS) (412), respectively.
[00101] In step (442), the AI/ML engine (408) fetches the required data for
fine-tuning from the DDL (410) and DFS (412). This involves extracting data and sending the necessary information back for training the AI/ML models. This loop of fetching data (steps 442, 446), sending required data (steps 444, 448), and
training (step 450) ensures that the AI/ML engine is continuously updated with the latest data trends, thereby enhancing the accuracy of predictions.
[00102] FIG. 4B illustrates an interaction of a user (418) with the system
(108) through the analyzer (422). In step (420), the user (418) initiates a request for generating trends via the user interface. This request is transmitted in step (454) to the analyzer engine, which then engages with the streaming engine in step (456) and the AI/ML engine in step (458). If the streaming engine (424) is set to extract real-time data, the streaming engine (424) collects (step 458) the necessary data from the DDL (428) and DFS (430) and sends the raw data for processing (step 460).
[00103] In a reverse flow, when only forecasting or AI/ML data is required,
the method proceeds to step (462), where the AI/ML engine fetches the relevant trends. The request is processed in step (464), and the results are sent back to the analyzer engine in step (468). Finally, the processed results are passed to the user interface in step (470) and displayed to the user (step 472).
[00104] The method depicted in FIGS. 4A and 4B integrates multiple steps to
enhance network performance through real-time data monitoring and advanced analytics. The method ensures that data is consistently standardized and readily available for machine learning purposes, leading to accurate and reliable predictions. By continuously training the AI/ML models with updated data, the system's predictive capabilities are improved, enabling proactive network management. User interactions through the user interface facilitate efficient trend analysis and network optimization based on real-time and forecasted data. This comprehensive approach ensures the optimization of network resources and overall performance improvement.
[00105] FIG. 5 illustrates an example computer system 500 in which or with
which the embodiments of the present disclosure may be implemented.
[00106] As shown in FIG. 5, the computer system 500 may include an
external storage device 510, a bus 520, a main memory 530, a read-only memory 540, a mass storage device 550, a communication port(s) 560, and a processor 570. A person skilled in the art will appreciate that the computer system 500 may include more than one processor and communication ports. The processor 570 may include various modules associated with embodiments of the present disclosure. The communication port(s) 560 may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) 560 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system 500 connects.
[00107] In an embodiment, the main memory 530 may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory 540 may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor 570. The mass storage device 550 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[00108] In an embodiment, the bus 520 may communicatively couple the
processor(s) 570 with the other memory, storage, and communication blocks. The bus 520 may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor 570 to the computer system 500.
[00109] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus 520 to support direct operator interaction with the computer system 500. Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) 560. Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system 500 limit the scope of the present disclosure.
[00110] FIG. 6 illustrates an exemplary flow diagram representing a method
(600) for enhancing network performance using customer data clustering, in accordance with some embodiments of the present disclosure.
[00111] As illustrated in FIG. 6, the method (600) begins at step (602), where a user request pertaining to network performance is received by a user interface (206). In an example, the user request may be received from a user or a network operator. In an aspect, the method (600) further includes receiving a plurality of data records from at least one data source by a receiving unit (202).
[00112] At step (604), the method (600) involves analyzing the received user
request by an analyzer engine. The analyzer engine operates in conjunction with an artificial intelligence/machine learning (AI/ML) engine and a streaming engine. The AI/ML engine is trained on the normalized data records to enhance its predictive capabilities. The step (604) further includes processing the received data records by an ingestion layer to generate a plurality of processed data. The ingestion layer gathers the data and prepares it for further processing by validating and organizing it. In step (604), the method involves normalizing the plurality of processed data records using a normalization layer and generating a plurality of normalized data records. The normalization layer standardizes the data to ensure consistency and reliability. Once normalized, the data records are stored in a distributed data lake (DDL) and a distributed file system (DFS).
[00113] At step (606), the method (600) involves forming, by the analyzer
engine, a plurality of user clusters based on one or more parameters by analyzing the plurality of normalized data records fetched from a distributed data lake (DDL) (222) and a distributed file system (DFS) (220). In an example, the one or more parameters include call usage, data usage, and geographic location. This clustering helps in identifying patterns and trends within different user groups.
[00114] At step (608), the method (600) involves identifying, using the
AI/ML engine (214), one or more trends based on the plurality of formed user clusters. The one or more identified trends are indicative of service usage patterns of the users of respective clusters. By analyzing these trends, the system can gain insights into user behaviour and network demands.
[00115] At step (610), the method (600) involves forecasting, using the
AI/ML engine (214), one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance. The forecasted KPI values are stored in the DDL and DFS, enabling proactive network management and optimization.
[00116] In an aspect, the method (600) further includes monitoring, using the
streaming engine (216), the plurality of data records continuously in real-time.
[00117] In an aspect, the method (600) further includes training the AI/ML
engine (214) using the normalized data records.
[00118] The present disclosure provides technical advancements related to
monitoring cell phone usage based on users' work sector, age, locality, and other attributes. This advancement aims to address the limitations in network analysis of user data for network enhancement. The present disclosure emphasizes the need to analyze user data to improve the network using customer data clustering and to enhance hardware capabilities to prevent errors and disconnections. Additionally, the present disclosure helps in executing plans for different usage categories. The present disclosure collects and stores customer data in a categorized manner, enabling analysis using an AI/ML model to forecast future trends, which significantly reduces response time in various situations.
[00119] In an exemplary aspect, the present disclosure discloses a user
equipment that is configured to enhance network performance using customer data clustering. The user equipment includes a processor and a computer-readable storage medium storing programming instructions for execution by the processor. Under the programming instructions, the processor is configured to receive a plurality of data records from at least one data source. Under the programming instructions, the processor is configured to process the plurality of received data records. Under the programming instructions, the processor is configured to normalize the plurality of processed data records to generate a plurality of normalized data records. Under the programming instructions, the processor is configured to analyze, by an analyzer engine, a user request received from a user interface, wherein the analyzer engine operates in conjunction with a streaming engine and an artificial intelligence/machine learning (AI/ML) engine. Under the programming instructions, the processor is configured to form clusters of a plurality of users by the analyzer engine, based on one or more parameters. Under the programming instructions, the processor is configured to identify one or more trends based on the clusters, wherein the one or more trends are indicative of service usage patterns of the users of respective clusters. Under the programming instructions, the processor is configured to forecast one or more key performance indicator (KPI) values based on the one or more identified trends using the AI/ML engine for enhancing network performance.
[00120] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing
descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[00121] The present disclosure provides a system and a method for
enhancing network performance with data clustering.
[00122] The present disclosure provides a system and a method to allow
dynamic execution of one or more user subscriber plans for data usage.
[00123] The present disclosure provides a system and a method to prevent
fatal failures occurring in the network due to the limited capabilities of the hardware.
[00124] The present disclosure provides a system and a method that are
economical and easy to implement.
We Claim:
1. A system (108) for enhancing network performance using customer data
clustering, the system comprising:
a user interface (206) configured to receive a user request pertaining to network performance received from a user;
a processing unit (208) coupled to the user interface (206), the processing unit (208) comprises:
an analyzer engine (212) configured to analyze the received user request, wherein the analyzer engine (212), operating in conjunction with an artificial intelligence/machine learning (AI/ML) engine (214), is configured to form a plurality of user clusters based on one or more parameters by analyzing a plurality of normalized data records fetched from a distributed data lake (DDL) (222) and a distributed file system (DFS) (220); and
the AI/ML engine (214) is further configured to:
identify one or more trends based on the plurality of formed user clusters, wherein the one or more identified trends are indicative of service usage patterns of the users of respective clusters; and
forecast one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance.
2. The system (108) as claimed in claim 1, further comprising a receiving unit (202) configured to receive a plurality of data records from at least one data source.
3. The system (108) as claimed in claim 2, wherein the processing unit (208) is configured to cooperate with the receiving unit (202) to receive the plurality of data records, and wherein the processing unit (208) includes:
an ingestion layer configured to process the plurality of received data records to generate a plurality of processed data records; and
a normalization layer configured to normalize the plurality of processed data records to generate the plurality of normalized data records.
4. The system (108) as claimed in claim 2, further comprising a streaming engine (216) configured to continuously monitor the plurality of data records in real-time.
5. The system (108) as claimed in claim 1, wherein the distributed data lake (DDL) (222) and the distributed file system (DFS) (220) are configured to store the forecasted KPI values and the plurality of normalized data records.
6. The system (108) as claimed in claim 1, wherein the AI/ML engine (214) is configured to be trained using the plurality of normalized data records.
7. The system (108) as claimed in claim 1, wherein the one or more parameters include call usage, data usage, and geographic location.
8. The system (108) as claimed in claim 1, further comprising an analyzer microservice configured to gather cumulative observations from the streaming engine (216) and the AI/ML engine (214).
9. The system (108) as claimed in claim 1, wherein the user interface (206) facilitates user requests and interactions with the analyzer engine (212).
10. A method (600) for enhancing network performance using customer data clustering, the method comprising:
receiving (602), by a user interface (206), from a user, a user request pertaining to network performance;
analyzing (604), by an analyzer engine (212), the received user request, wherein the analyzer engine operates in conjunction with an artificial intelligence/machine learning (AI/ML) engine;
forming (606), by the analyzer engine (212), a plurality of user clusters based on one or more parameters by analyzing a plurality of normalized data records fetched from a distributed data lake (DDL) (222) and a distributed file system (DFS) (220);
identifying (608), using the AI/ML engine (214), one or more trends based on the plurality of formed user clusters, wherein the one or more identified trends are indicative of service usage patterns of the users of respective clusters; and
forecasting (610), using the AI/ML engine (214), one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance.
11. The method (600) as claimed in claim 10, further comprising receiving, by a receiving unit (202), a plurality of data records from at least one data source.
12. The method (600) as claimed in claim 11, further comprising:
processing, by an ingestion layer, the plurality of received data records to generate a plurality of processed data records; and
normalizing, by a normalization layer, the plurality of processed data records to generate the plurality of normalized data records.
13. The method (600) as claimed in claim 11, further comprising monitoring, using a streaming engine (216), the plurality of data records continuously in real-time.
14. The method (600) as claimed in claim 10, further comprising storing the forecasted KPI values and the plurality of normalized data records in the distributed data lake (DDL) (222) and in the distributed file system (DFS) (220).
15. The method (600) as claimed in claim 10, further comprising training the AI/ML engine (214) using the plurality of normalized data records.
16. The method (600) as claimed in claim 10, wherein the one or more parameters include call usage, data usage, and geographic location.
17. The method (600) as claimed in claim 10, further comprising gathering, by an analyzer microservice, cumulative observations from the streaming engine (216) and the AI/ML engine (214).
18. A user equipment configured to enhance network performance using customer data clustering, the user equipment comprising:
a processor; and
a computer readable storage medium storing programming instructions for execution by the processor, the programming instructions to:
receive, by a user interface, from a user, a user request pertaining to network performance;
analyze, by an analyzer engine, the received user request, wherein the analyzer engine operates in conjunction with an artificial intelligence/machine learning (AI/ML) engine;
form, by the analyzer engine, a plurality of user clusters based on one or more parameters by analyzing a plurality of normalized data records fetched from a distributed data lake (DDL) and a distributed file system (DFS);
identify, using the AI/ML engine, one or more trends based on the plurality of formed user clusters, wherein the one or more identified trends are indicative of service usage patterns of the users of respective clusters; and
forecast, using the AI/ML engine, one or more key performance indicator (KPI) values based on the one or more identified trends for enhancing network performance.
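The claimed pipeline of forming user clusters from normalized parameter vectors, extracting per-cluster usage trends, and forecasting KPI values can be sketched as follows. The specification does not mandate any particular algorithm; k-means clustering and a least-squares trend line are used here purely as illustrative stand-ins, and the record values are hypothetical, not drawn from the specification.

```python
import math
import random

# Hypothetical normalized records: (call usage, data usage) per subscriber.
RECORDS = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)]

def kmeans(points, k, iters=50, seed=0):
    """Form user clusters from normalized parameter vectors (the forming step)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster; keep the old
        # centroid if a cluster received no points this iteration.
        centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

def forecast_kpi(series, horizon=1):
    """Fit a least-squares trend to a per-cluster KPI series and extrapolate."""
    n = len(series)
    mx, my = (n - 1) / 2, sum(series) / n
    slope = (sum((x - mx) * (y - my) for x, y in enumerate(series))
             / sum((x - mx) ** 2 for x in range(n)))
    intercept = my - slope * mx
    return [intercept + slope * (n - 1 + h) for h in range(1, horizon + 1)]

clusters = kmeans(RECORDS, k=2)
# e.g. a weekly data-usage KPI observed for one cluster, forecast one step ahead
print(forecast_kpi([1.0, 2.0, 3.0]))  # -> [4.0]
```

In a deployment matching the claims, the records would be the normalized data records fetched from the DDL/DFS, and the forecast values would be written back to those stores as described in claims 5 and 14.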
| # | Name | Date |
|---|---|---|
| 1 | 202321047674-STATEMENT OF UNDERTAKING (FORM 3) [14-07-2023(online)].pdf | 2023-07-14 |
| 2 | 202321047674-PROVISIONAL SPECIFICATION [14-07-2023(online)].pdf | 2023-07-14 |
| 3 | 202321047674-FORM 1 [14-07-2023(online)].pdf | 2023-07-14 |
| 4 | 202321047674-DRAWINGS [14-07-2023(online)].pdf | 2023-07-14 |
| 5 | 202321047674-DECLARATION OF INVENTORSHIP (FORM 5) [14-07-2023(online)].pdf | 2023-07-14 |
| 6 | 202321047674-FORM-26 [13-09-2023(online)].pdf | 2023-09-13 |
| 7 | 202321047674-POA [29-05-2024(online)].pdf | 2024-05-29 |
| 8 | 202321047674-FORM 13 [29-05-2024(online)].pdf | 2024-05-29 |
| 9 | 202321047674-AMENDED DOCUMENTS [29-05-2024(online)].pdf | 2024-05-29 |
| 10 | 202321047674-Power of Attorney [04-06-2024(online)].pdf | 2024-06-04 |
| 11 | 202321047674-Covering Letter [04-06-2024(online)].pdf | 2024-06-04 |
| 12 | 202321047674-ORIGINAL UR 6(1A) FORM 26-120624.pdf | 2024-06-20 |
| 13 | 202321047674-ENDORSEMENT BY INVENTORS [02-07-2024(online)].pdf | 2024-07-02 |
| 14 | 202321047674-DRAWING [02-07-2024(online)].pdf | 2024-07-02 |
| 15 | 202321047674-CORRESPONDENCE-OTHERS [02-07-2024(online)].pdf | 2024-07-02 |
| 16 | 202321047674-COMPLETE SPECIFICATION [02-07-2024(online)].pdf | 2024-07-02 |
| 17 | 202321047674-CORRESPONDENCE(IPO)-(WIPO DAS)-12-07-2024.pdf | 2024-07-12 |
| 18 | Abstract-1.jpg | 2024-08-05 |
| 19 | 202321047674-FORM 18 [27-09-2024(online)].pdf | 2024-09-27 |
| 20 | 202321047674-FORM 3 [04-11-2024(online)].pdf | 2024-11-04 |