Method And System For Fetching And Analyzing A Plurality Of Metrics In A Network

Abstract: The present disclosure relates to a system (120) and a method (500) for fetching and analyzing a plurality of metrics in a network (105). The method (500) includes the step of retrieving a list of processes hosted on at least one container. The method (500) includes the step of receiving a request corresponding to selection of at least one process from the list of processes. The method (500) includes the step of adding the at least one selected process to an Agent Manager (AM) unit (230). The method (500) includes the step of fetching the plurality of metrics corresponding to the at least one selected process. The method (500) includes the step of analyzing the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics. Ref. Fig. 2


Patent Information

Application #: 202321047835
Filing Date: 15 July 2023
Publication Number: 42/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Gaurav Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
2. Gourav Gurbani
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
3. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
4. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
5. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
6. Chandra Kumar Ganveer
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
7. Jugal Kishore Kolariya
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
8. Sunil Meena
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
9. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
10. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
11. Kishan Sahu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
12. Sanjana Chaudhary
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
13. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR FETCHING AND ANALYZING A PLURALITY OF METRICS IN A NETWORK
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication networks, and more particularly relates to a method and a system for fetching and analyzing a plurality of metrics in the communication networks.
BACKGROUND OF THE INVENTION
[0002] In existing legacy monitoring methods and legacy monitoring systems, the system does not have the capability to fetch process-wise metrics from one or more container(s), nor does it have advanced artificial intelligence (AI)/machine learning (ML) techniques to analyze a hysteresis pattern in the process-wise metrics in a cloud environment. In an example, a fifth generation (5G) server has a hundred containers, and each container runs 10 processes. The existing system does not have the capability to fetch process-wise metrics from each container.
[0003] Hence, there is a need for a system and a method that fetch the process-wise metrics from the one or more container(s) and that employ advanced AI/ML techniques to analyze a hysteresis pattern in the process-wise metrics in the cloud environment in an effective manner.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and a system for fetching and analyzing a plurality of metrics in a network.
[0005] In one aspect of the present invention, the method for fetching and analyzing the plurality of metrics in the network is disclosed. The method includes the step of retrieving, by one or more processors, a list of processes hosted on at least one container. The method includes the step of receiving, by the one or more processors, a request corresponding to selection of at least one process from the list of processes. The method includes the step of adding, by the one or more processors, the at least one selected process to an Agent Manager (AM) unit. The method includes the step of fetching, by the one or more processors, the plurality of metrics corresponding to the at least one selected process. The method includes the step of analyzing, by the one or more processors, the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics.
[0006] In one embodiment, upon fetching the plurality of metrics, the method includes the step of creating, by the one or more processors, a file corresponding to the plurality of metrics corresponding to the at least one selected process. The method further includes the step of validating and enriching, by the one or more processors, the plurality of metrics provided in at least one of the file. The method further includes the step of storing, by the one or more processors, the plurality of metrics provided in at least one of the file in a distributed data lake.
[0007] In another embodiment, the one or more processors is configured to receive the request via at least one of a User Interface (UI) and a Command Line Interface (CLI).
[0008] In yet another embodiment, the AM unit is managed by a dedicated Virtual Machine for the at least one respective container.
[0009] In yet another embodiment, the plurality of metrics comprises at least one of a Central Processing Unit (CPU) utilization by each process in the list of processes, network usage, Operating System (OS) usage, and memory usage.
[0010] In yet another embodiment, on detection of anomalies, the method comprises the step of transmitting, by the one or more processors, a notification to a user equipment in response to one of the detection of the anomalies and the forecasting of the anomalies based on the fetched plurality of metrics.
[0011] In another aspect of the present invention, the system for fetching and analyzing the plurality of metrics in the network is disclosed. The system includes a retrieval unit configured to retrieve a list of processes hosted on at least one container. The system includes an interface unit configured to receive a request corresponding to selection of at least one process from the list of processes. The system includes a management unit configured to add the at least one selected process to an Agent Manager (AM) unit. The system includes a fetching unit configured to fetch the plurality of metrics corresponding to the at least one selected process. The system includes an analyzing unit configured to analyze the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics.
[0012] In another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor is disclosed. The processor is configured to retrieve a list of processes hosted on at least one container. The processor is configured to receive a request corresponding to selection of at least one process from the list of processes. The processor is configured to add the at least one selected process to an Agent Manager (AM) unit. The processor is configured to fetch the plurality of metrics corresponding to the at least one selected process. The processor is configured to analyze the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics.
[0013] In another aspect of the present invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors and a memory. The one or more primary processors communicatively coupled to one or more processors. The memory stores instructions which when executed by the one or more primary processors causes the UE to select at least one process from a list of processes.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for fetching and analyzing a plurality of metrics in a network, according to one or more embodiments of the present disclosure;
[0017] FIG. 2 is an exemplary block diagram of a system for fetching and analyzing the plurality of metrics in the network, according to one or more embodiments of the present disclosure;
[0018] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to one or more embodiments of the present disclosure;
[0019] FIG. 4 illustrates a schematic flow diagram illustrating fetching and analyzing the plurality of metrics in the network, according to one or more embodiments of the present disclosure; and
[0020] FIG. 5 is a flow diagram illustrating a method for fetching and analyzing the plurality of metrics in the network, according to one or more embodiments of the present disclosure.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] FIG. 1 illustrates an exemplary block diagram of an environment 100 for fetching and analyzing a plurality of metrics in the network 105, according to one or more embodiments of the present disclosure. The environment 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 to select at least one process from a list of processes hosted on at least one container. In an embodiment, the user is, for example, a network operator. The plurality of metrics is used to evaluate system performance, efficiency, and reliability of the network 105. In an embodiment, the plurality of metrics includes at least one of a Central Processing Unit (CPU) utilization, network usage, Operating System (OS) usage, and memory usage. The plurality of metrics is collected for the containers as well as for the first and the second hosts 405, 410 (as shown in FIG. 4) that host the containers.
[0026] For the purpose of description and explanation, the disclosure will be explained with respect to the UE 110, and more specifically with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105.
[0027] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0028] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0029] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may further carry Voice over Internet Protocol (VoIP) traffic.
[0030] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, an entity associated with the server 115 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0031] The environment 100 further includes the system 120 communicably coupled to the server 115 and to each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is adapted to be embedded within the server 115 or to be deployed as an individual entity. However, for the purpose of description, the system 120 is illustrated as remotely coupled with the server 115, without deviating from the scope of the present disclosure.
[0032] The system 120 is further configured to employ Transmission Control Protocol (TCP) connection to identify any connection loss in the network 105 and thereby improving overall efficiency. The TCP connection is a communication standard enabling applications and the system 120 to exchange information over the network 105.
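By way of a non-limiting illustration only, a TCP reachability check of the kind described above may be sketched as follows. Python is used purely for illustration; the host address, port, and timeout shown are assumptions and do not form part of the disclosure.

import socket

def tcp_connection_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established.
    A timeout or refused connection is treated as connection loss."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative use: periodically poll an endpoint exposed by a container host.
# tcp_connection_alive("10.0.0.15", 9100)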
[0033] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0034] FIG. 2 illustrates an exemplary block diagram of the system 120 for fetching and analyzing the plurality of metrics in the network 105, according to one or more embodiments of the present disclosure. The system 120 includes one or more processors 205, a memory 210, and a distributed data lake 260. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 205. However, it is to be noted that the system 120 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0035] Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0036] The distributed data lake 260 is configured to store the plurality of metrics provided in at least one of a file. Further, the distributed data lake 260 provides structured storage, support for complex queries, and enables efficient data retrieval and analysis. The distributed data lake 260 is a data repository providing storage and computing for structured and unstructured data, such as for machine learning, streaming, or data science. The distributed data lake 260 allows the user and/or an organization to ingest and manage large volumes of data in an aggregated storage solution for business intelligence or data products. The distributed data lake 260 may be implemented using different technologies.
[0037] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0038] In order for the system 120 to fetch and analyze the plurality of metrics in the network 105, the processor 205 includes a retrieval unit 215, an interface unit 220, a management unit 225, an Agent Manager (AM) unit 230, a fetching unit 235, a creation unit 240, an enrichment unit 245, an analyzing unit 250, and a transmitting unit 255 communicably coupled to each other for fetching and analyzing the plurality of metrics in the network 105.
[0039] The retrieval unit 215, the interface unit 220, the management unit 225, the AM unit 230, the fetching unit 235, the creation unit 240, the enrichment unit 245, the analyzing unit 250, and the transmitting unit 255, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0040] The retrieval unit 215 is configured to retrieve the list of processes hosted on at least one container. In an embodiment, the list of processes is related to one or more services that are running on the at least one container at a server. In one embodiment, the one or more services include but are not limited to, authentication service, mobility management, policy management, subscriber data management, security service, orchestration service, monitoring service, managing service, file storage service, and search engine service. The at least one container is configured for building, deploying, and managing cloud-native applications. The at least one container includes software components that pack a microservice code and other required files in cloud-native systems. By containerizing the microservices, the cloud-native applications run independently of the underlying operating system and hardware. In a cloud-native infrastructure stack (CNIS), the at least one container is typically orchestrated using tools like Kubernetes, which manage the deployment, scaling, and networking of the at least one container. In an embodiment, the containerization technologies include docker and containerd.
[0041] As per the above one embodiment, the docker is a platform for developing, shipping, and running containerized applications. The containerd is an industry-standard container runtime that manages the complete container lifecycle on a host system. The containerd is a core component in a container ecosystem, providing the fundamental capabilities needed to run containers. The retrieval unit 215 is configured to retrieve information about the list of processes, such as process IDs (PIDs), process names, resource usage, and other relevant details.
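As a minimal, non-limiting sketch of the retrieval described above, assuming the Docker Engine as the container runtime and the docker SDK for Python being available, the list of processes hosted on each running container (including PIDs and process names) could be obtained as follows; the retrieval unit 215 is not limited to this particular runtime or API.

import docker  # docker SDK for Python (assumed available)

def list_container_processes() -> dict:
    """Return {container_name: [process rows]} for all running containers.
    Each row is keyed by the column titles reported by `docker top`
    (typically PID, USER, TIME, CMD and similar fields)."""
    client = docker.from_env()
    processes = {}
    for container in client.containers.list():
        top = container.top()  # {'Titles': [...], 'Processes': [[...], ...]}
        titles = top.get("Titles", [])
        rows = top.get("Processes") or []
        processes[container.name] = [dict(zip(titles, row)) for row in rows]
    return processes

# Illustrative use:
# for name, procs in list_container_processes().items():
#     print(name, [p.get("PID") for p in procs])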
[0042] Upon retrieving the information about the list of processes hosted on the at least one container, the interface unit 220 is configured to receive a request corresponding to selection of the at least one process from the list of processes. The interface unit 220 is configured to allow the user to provide an input which corresponds to the selection of the at least one process from the list of processes. The interface unit 220 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The interface unit 220 is configured to receive the request from the UE 110 via at least one of a User Interface (UI) and the CLI. The interface unit 220 allows the user to communicate with the system 120. In one embodiment, the interface unit 220 provides a communication pathway for one or more components of the system 120.
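For the CLI path mentioned above, a minimal sketch of receiving a selection request is given below; the argument names (--container, --pid) are illustrative assumptions only.

import argparse

def parse_selection(argv=None) -> dict:
    """Parse a process-selection request, e.g.
    select_process --container cn-monitor-1 --pid 4312 --pid 4313"""
    parser = argparse.ArgumentParser(description="Select processes to monitor")
    parser.add_argument("--container", required=True, help="target container name")
    parser.add_argument("--pid", action="append", type=int, required=True,
                        help="PID of a process to monitor (repeatable)")
    args = parser.parse_args(argv)
    return {"container": args.container, "pids": args.pid}

# Illustrative use: parse_selection(["--container", "cn-monitor-1", "--pid", "4312"])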
[0043] Upon receiving the request corresponding to the selection of the at least one process from the list of processes, the management unit 225 is configured to add the at least one selected process to the AM unit 230. The management unit 225 is a central component of the system 120 which interacts with the interface unit 220 on a southbound and the AM unit 230 on a northbound via a Hypertext Transfer Protocol (HTTP) interface. The management unit 225 is configured to allocate host Internet Protocols (IPs) to the AM unit 230.
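The specification states only that the management unit 225 communicates with the AM unit 230 over an HTTP interface and allocates host IPs to it; the endpoint path (/hosts) and the payload fields in the sketch below are hypothetical placeholders used for illustration.

import json
import urllib.request

def allocate_host_to_am(am_base_url: str, host_ip: str, pids: list) -> int:
    """POST a host-IP allocation and the selected PIDs to an AM endpoint.
    The URL and JSON schema are illustrative, not part of the disclosure."""
    payload = json.dumps({"host_ip": host_ip, "pids": pids}).encode("utf-8")
    request = urllib.request.Request(
        url=am_base_url.rstrip("/") + "/hosts",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

# Illustrative use: allocate_host_to_am("http://10.0.0.20:8080", "10.0.0.15", [4312])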
[0044] Upon adding the at least one selected process to the AM unit 230, the fetching unit 235 is configured to fetch the plurality of metrics corresponding to the at least one selected process. The fetching unit 235 is configured to fetch the plurality of metrics from the processes and containers hosted on the server 115. The at least one process is identified by using the PIDs and by reaching out to the at least one container via an IP address set up at the AM unit 230. In one embodiment, the AM unit 230 is also configured to collect container, docker image, volume, network, and daemon-type service statistics, along with Kubernetes statistics. The one or more AMs 308 are allocated to each of the first and second hosts 405, 410 by providing the IP addresses of the first and second hosts 405, 410.
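A minimal sketch of per-process metric collection of the kind the fetching unit 235 performs is shown below, assuming the psutil library and direct visibility of the PIDs on the host or VM running the AM unit 230; the metric set shown is a subset chosen for illustration.

import psutil  # assumed available on the host/VM running the AM unit

def fetch_process_metrics(pid: int) -> dict:
    """Collect a small set of per-process metrics for a selected PID."""
    proc = psutil.Process(pid)
    with proc.oneshot():  # batch the underlying system calls
        return {
            "pid": pid,
            "name": proc.name(),
            "cpu_percent": proc.cpu_percent(interval=0.1),
            "memory_percent": proc.memory_percent(),
            "num_threads": proc.num_threads(),
            "inet_connections": len(proc.connections(kind="inet")),
        }

# Illustrative use:
# import os; fetch_process_metrics(os.getpid())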
[0045] Upon fetching the plurality of metrics corresponding to the at least one selected process, the creation unit 240 is configured to create at least one of a file corresponding to the plurality of metrics corresponding to the at least one selected process. In an embodiment, the at least one of the file includes a Comma-Separated Values (CSV) file. The CSV file is processed by the enrichment unit 245. The fetching unit 235 is configured to fetch/pull the at least one of the file to the enrichment unit 245 for validating and enriching the plurality of metrics provided in the at least one of the file.
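A minimal sketch of the CSV creation performed by the creation unit 240 is given below; the column set is simply whatever metric fields were fetched, and the file path is an assumption.

import csv

def write_metrics_csv(path: str, metric_rows: list) -> None:
    """Write a list of metric dictionaries (one per sample) to a CSV file."""
    if not metric_rows:
        return
    fieldnames = sorted({key for row in metric_rows for key in row})
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(metric_rows)

# Illustrative use: write_metrics_csv("metrics.csv", [{"pid": 4312, "cpu_percent": 12.5}])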
[0046] Thereafter, the enrichment unit 245 is configured to validate and enrich the plurality of metrics provided in the at least one of the file. The enrichment unit 245 is configured to validate whether the plurality of metrics fetched has errors, or is corrupted, incomplete, or not in a proper format. If the plurality of metrics fetched has errors, or is corrupted, incomplete, or not in a proper format, then the plurality of metrics is ignored. Subsequently, the enrichment unit 245 is configured to enrich the plurality of metrics by creating dynamic fields, obtained by modifying existing fields in the data, for example by split, append, concatenation, transform, and the like operations. Once the enrichment unit 245 has enriched the plurality of metrics provided in the at least one of the file, the distributed data lake 260 is configured to store the plurality of metrics provided in the at least one of the file.
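The validation and enrichment described above may be sketched as follows; the required fields, the 80% CPU band boundary, and the derived field names are illustrative assumptions only.

def validate_row(row: dict) -> bool:
    """Reject rows that are incomplete, corrupted, or not in a numeric format."""
    try:
        return (int(row["pid"]) > 0
                and float(row["cpu_percent"]) >= 0.0
                and float(row["memory_percent"]) >= 0.0)
    except (KeyError, TypeError, ValueError):
        return False

def enrich_row(row: dict, host_ip: str, container: str) -> dict:
    """Add dynamic fields derived from existing fields (concatenate/transform)."""
    enriched = dict(row)
    enriched["host_container"] = f"{host_ip}:{container}"           # concatenation
    enriched["cpu_load_band"] = ("high" if float(row["cpu_percent"]) > 80.0
                                 else "normal")                      # transform
    return enriched

# Rows failing validate_row() are ignored; valid rows are enriched and then stored.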
[0047] Upon enriching the plurality of metrics provided in the at least one of the file, the analyzing unit 250 is configured to analyze the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics. The analyzing unit 250 is configured to continuously monitor and analyze the plurality of metrics fetched by the fetching unit 235. Further, the analyzing unit 250 is configured to detect the anomalies within the plurality of metrics by utilizing the historical data to recognize normal and abnormal patterns and by identifying deviations from expected trends. The anomalies are typically defined as a deviation from the normal behavior or expected range of values. In an embodiment, the anomalies include, but are not limited to, increased bandwidth usage, high CPU utilization, packet loss, latency, or unusual traffic patterns that deviate from the established baseline.
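One simple, non-limiting way of flagging a deviation from the historical baseline, as described above, is a standard-deviation test; the 3-sigma threshold and the minimum history length below are assumptions, and the analyzing unit 250 may equally rely on the AI/ML model 420 described with reference to FIG. 4.

from statistics import mean, pstdev

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` as anomalous when it deviates from the historical baseline
    by more than `threshold` standard deviations."""
    if len(history) < 10:            # not enough history to form a baseline
        return False
    baseline, spread = mean(history), pstdev(history)
    if spread == 0.0:
        return latest != baseline
    return abs(latest - baseline) / spread > threshold

# Illustrative use: is_anomalous([10.0] * 30, 95.0) flags a CPU-utilization spike.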
[0048] Upon detecting and forecasting the anomalies, the transmitting unit 255 is configured to transmit a notification to the UE 110 in response to one of the detection of the anomalies and the forecasting of the anomalies. Once the anomalies are detected, a pre-empting action is taken by an Artificial Intelligence (AI)/Machine Learning (ML) model 420 (shown in FIG. 4) based on the fetched plurality of metrics. In an embodiment, the pre-empting action can be, for example, but is not limited to, a downgrade of the server, an upgrade of the server, stopping a process thread, providing a priority to the process thread, or the like. By doing so, the system 120 incorporates the agentless, unique architectural design for efficiently fetching the plurality of metrics corresponding to the at least one selected process in the cloud environment.
[0049] FIG. 3 is a schematic representation of the system 120 in which the operations of various entities are explained, according to one or more embodiments of the present disclosure. FIG. 3 describes the system 120 for live monitoring of one or more Key Performance Indicators (KPIs). It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0050] As mentioned earlier in FIG. 1, in an embodiment, the first UE 110a may encompass electronic apparatuses. These devices are illustrative of, but not restricted to, personal computers, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first UE 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first UE 110a can be associated with multiple users. Each of the UEs 110 is communicatively coupled with the processor 205 via the network 105.
[0051] The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to select the at least one process from the list of processes.
[0052] Furthermore, the one or more primary processors 305 within the UE 110 are uniquely configured to execute a series of steps as described herein. This configuration underscores the capability of the processor 205 to perform live monitoring of the one or more KPIs. The operational synergy between the one or more primary processors 305 and the additional processors, guided by the executable instructions stored in the memory 310, facilitates seamless live monitoring of the one or more KPIs.
[0053] As mentioned earlier in FIG.2, the system 120 includes the one or more processors 205, the memory 210, and the distributed data lake 260. The operations and functions of the one or more processors 205, the memory 210, and the distributed data lake 260 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0054] Further, the processor 205 includes the retrieval unit 215, the interface unit 220, the management unit 225, the AM unit 230, the fetching unit 235, the creation unit 240, the enrichment unit 245, the analyzing unit 250, and the transmitting unit 255, which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3 should be read with the description provided for the system 120 in FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0055] FIG. 4 illustrates a schematic flow diagram illustrating fetching and analyzing the plurality of metrics in the network 105, according to one or more embodiments of the present disclosure.
[0056] At 402, the management unit 225 is the central component of the system 120, which interacts with the interface unit 220 on the southbound and the AM unit 230 on the northbound via a Hypertext Transfer Protocol (HTTP) interface. The management unit 225 is configured to allocate the host IPs to the AM unit 230. The management unit 225 is configured to provide support for a set of Application Programming Interfaces (APIs) through which the first and second hosts 405 and 410 can easily be provisioned as well. Further, the management unit 225 can add and remove the first and second hosts 405 and 410 based on the requirement.
[0057] In an embodiment, the AM unit 230 included in the first host 405 and the second host 410 interacts with the containers or Network Functions (NFs) on the southbound interface. The first and second hosts 405 and 410 are configured to integrate over the TCP interface with the at least one container. The AM unit 230 is managed by a dedicated Virtual Machine (VM) for the at least one respective container. The VM is a software-based emulation of a physical computer. The VM runs an operating system and applications like a physical computer, but it operates within the first and second hosts 405 and 410, sharing the physical hardware resources of the first and second hosts 405 and 410.
[0058] A mapping between the first and the second hosts 405 and 410 and the containers is completed based on the plurality of metrics, which ensures that the plurality of metrics of each individual container running on a specific host server is collected effectively. The AM unit 230 is configured to transmit the collected data pertaining to the plurality of metrics from the at least one of the plurality of NFs to the fetching unit 235.
[0059] At 404, the fetching unit 235 is configured to fetch the plurality of metrics corresponding to the at least one selected process. In an embodiment, the plurality of metrics includes at least one of a Central Processing Unit (CPU) utilization by each process in the list of processes, network usage, Operating System (OS) usage, memory usage, network connections, disk activity, a runtime of the server, and an operation of the server (e.g., downgrading of the server, upgrading of the server, or the like). The list of processes is defined at the AM unit 230 to process the data of the plurality of metrics from the at least one container running on the VM. In an embodiment, the fetching unit 235 refers to a broker topic, which is a messaging channel that distributes the plurality of metrics, based on the subscribed/polled topic, to other components of the system 120.
[0060] The fetching unit 235 is configured to fetch the plurality of metrics from the NFs hosted on the server 115. The processes are identified by using the PIDs and by reaching out to the at least one container/NF via an IP address set up at the AM unit 230. The AM unit 230 is also configured to collect container, docker image, volume, network, and daemon-type service statistics, along with Kubernetes statistics.
[0061] Upon fetching the plurality of metrics corresponding to the at least one selected process, the creation unit 240 is configured to create at least one of the file corresponding to the plurality of metrics corresponding to the at least one selected process. The fetching unit 235 is configured to fetch/pull the at least one of the file to the enrichment unit 245 for validating and enriching the plurality of metrics provided in the at least one of the file. The fetching unit 235 consumes the plurality of metrics from the broker topics and creates a Comma-Separated Values (CSV) file for the same, which is easy to process by the enrichment unit 245. The broker topics are the channels that transmit the plurality of metrics to the other components of the system 120. The fetching unit 235 also performs some data cleansing, such as removing duplicate records and correcting typos, which ensures that the server metrics sent to the enrichment unit 245 are accurate and consistent.
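The specification refers to broker topics generically; purely as an illustration, and assuming an Apache Kafka broker with the kafka-python client and a topic named process-metrics, the consume-then-serialize flow described above could be sketched as follows.

import csv
import json
from kafka import KafkaConsumer  # kafka-python client; Kafka itself is an assumption

def consume_topic_to_csv(topic: str, bootstrap: str, path: str, max_rows: int = 1000) -> None:
    """Consume metric records from a broker topic and write them to a CSV file."""
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap,
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    rows = []
    for message in consumer:              # blocks until records arrive
        rows.append(message.value)
        if len(rows) >= max_rows:
            break
    if rows:
        fieldnames = sorted({key for row in rows for key in row})
        with open(path, "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

# Illustrative use: consume_topic_to_csv("process-metrics", "broker:9092", "metrics.csv")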
[0062] At 406, the enrichment unit 245 is configured to validate and enrich the plurality of metrics provided in the at least one of the file. In an embodiment, the at least one of the file includes the CSV file. The enrichment unit 245 is configured to validate whether the plurality of metrics fetched has errors, or is corrupted, incomplete, or not in a proper format. If the plurality of metrics fetched has errors, or is corrupted, incomplete, or not in a proper format, then the plurality of metrics is ignored. Upon validation of the plurality of metrics, the enrichment unit 245 is configured to enrich the plurality of metrics by creating dynamic fields, obtained by modifying existing fields in the data, for example by split, append, concatenate, transform, and the like operations. Once the enrichment unit 245 has enriched the plurality of metrics provided in the at least one of the file, the distributed data lake 260 is configured to store the plurality of metrics provided in the at least one of the file.
[0063] At 408, an infra normalizer 415 is responsible for normalizing the fetched metrics that are received from the enrichment unit 245. The infra normalizer 415 is configured to convert the data into a standard format and to remove any outliers or anomalies. The normalized data is then stored in the distributed data lake 260, which is a repository for storing large amounts of data. The infra normalizer 415 also performs some basic data mining on the data, such as identifying patterns and correlations. This information is used by the other components of the system 120 to make better decisions about network resource provisioning and scaling of new instances of the plurality of NFs. The infra normalizer 415 communicates with the distributed data lake 260.
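A minimal sketch of the normalization and outlier removal described above is given below; the inter-quartile-range rule and the 1.5 factor are common conventions assumed for illustration and are not mandated by the specification.

def normalize(rows: list, key: str = "cpu_percent") -> list:
    """Convert raw metric rows to a standard numeric format and drop outliers
    using a simple inter-quartile-range (IQR) rule."""
    values = sorted(float(row[key]) for row in rows)
    if len(values) < 4:
        return rows
    q1 = values[len(values) // 4]
    q3 = values[(3 * len(values)) // 4]
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [
        {**row, key: round(float(row[key]), 2)}        # standard format
        for row in rows
        if lower <= float(row[key]) <= upper           # outlier filter
    ]

# Rows surviving the filter are then stored in the distributed data lake.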
[0064] At 410, the AI/ML model 420 is configured to analyze the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics. The AI/ML model 420 is configured to continuously monitor and analyze the plurality of metrics fetched by the fetching unit 235. Further, the AI/ML model 420 is configured to detect the anomalies within the plurality of metrics by utilizing the historical data to recognize normal and abnormal patterns and by identifying deviations from expected trends. The anomalies are typically defined as a deviation from the normal behavior or expected range of values. In an embodiment, the anomalies include, but are not limited to, increased bandwidth usage, high CPU utilization, packet loss, or unusual traffic patterns that deviate from the established baseline.
[0065] The AI/ML model 420 is responsible for running the AI/ML techniques on the plurality of metrics that are stored in the distributed data lake 260. In one embodiment, supervised learning is a type of machine learning algorithm which is trained on a labeled dataset, that is, each training example is paired with an output label, and the supervised learning algorithm learns to map inputs to the correct output. In one embodiment, unsupervised learning is a type of machine learning algorithm which is trained on data without any labels; the unsupervised learning algorithm tries to learn the underlying structure or distribution in the data in order to discover patterns or groupings. In one embodiment, reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize a cumulative reward. The agent receives feedback in the form of rewards or penalties based on the actions it takes, and it learns a policy that maps states of the environment to the best actions.
[0066] The AI/ML model 420 is trained utilizing historical data pertaining to the plurality of metrics. The historical data is used to analyze past network performance and identify trends or patterns. The trained AI/ML model 420 learns trends/patterns related to the plurality of metrics and is configured to analyze the trends over time, such as gradual increases in bandwidth usage or recurring patterns of downtime, which aids in understanding the long-term behavior of the network 105. The detected anomalies within the plurality of metrics are stored in the distributed data lake 260.
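The specification does not mandate a particular learning algorithm; as one unsupervised example of the kind described above, an isolation-forest detector (scikit-learn and NumPy are assumed available) could be trained on historical metric vectors as sketched below, the contamination value being an assumption.

import numpy as np
from sklearn.ensemble import IsolationForest  # scikit-learn assumed available

def train_anomaly_detector(history: np.ndarray) -> IsolationForest:
    """Fit an unsupervised detector on historical metric vectors
    (e.g. columns: cpu_percent, memory_percent, network_usage)."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(history)
    return model

def detect(model: IsolationForest, samples: np.ndarray) -> np.ndarray:
    """Return True where a sample deviates from the learned baseline."""
    return model.predict(samples) == -1   # -1 marks anomalies, 1 marks normal samples

# Illustrative use:
# history = np.random.default_rng(0).normal(loc=[20, 40, 5], scale=2, size=(500, 3))
# model = train_anomaly_detector(history)
# detect(model, np.array([[95.0, 40.0, 5.0]]))   # flagged as anomalous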
[0067] Upon analyzing the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics, a feedback engine 425 is configured to take one or more actions accordingly. The feedback engine 425 is configured to provide one or more actions, pertaining to the detection of the anomalies and the forecasting of the anomalies based on the fetched plurality of metrics, to the server 115, and the one or more actions are stored in the distributed data lake 260.
[0068] FIG. 5 is a flow diagram illustrating a method 500 for fetching and analyzing the plurality of metrics in the network 105, according to one or more embodiments of the present disclosure. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0069] At step 505, the method 500 includes the step of retrieving the list of processes hosted on the at least one container by the retrieval unit 215. The at least one container is configured for building, deploying, and managing applications. The retrieval unit 215 is configured to retrieve information about the list of processes, such as process IDs (PIDs), process names, resource usage, and other relevant details.
[0070] At step 510, the method 500 includes the step of receiving the request corresponding to selection of at least one process from the list of processes by the interface unit 220. The interface unit 220 is configured to receive the request via at least one of the UI and the CLI. The interface unit 220 facilitates communication of the system 120. In one embodiment, the interface unit 220 provides the communication pathway for one or more components of the system 120.
[0071] At step 515, the method 500 includes the step of adding the at least one selected process to the AM unit 230 by the management unit 225. The management unit 225 is the central component of the system 120, which interacts with the interface unit 220 on the southbound and the AM unit 230 on the northbound via the HTTP interface. The management unit 225 is configured to allocate host IPs to the AM unit 230. The management unit 225 is configured to provide support for a set of Application Programming Interfaces (APIs) through which the first and second hosts 405 and 410 can easily be provisioned as well. Further, the management unit 225 can add and remove the first and second hosts 405 and 410 based on the requirement.
[0072] At step 520, the method 500 includes the step of fetching the plurality of metrics corresponding to the at least one selected process by the fetching unit 235. The fetching unit 235 is configured to fetch the plurality of metrics from the processes hosted on the server 115, using the PIDs to identify the at least one process, and reaching out to the at least one container/NF via an IP address set up at the AM unit 230. The AM unit 230 is also configured to collect container, docker image, volume, network, and daemon-type service statistics, along with Kubernetes statistics.
[0073] Upon fetching the plurality of metrics corresponding to the at least one selected process, the creation unit 240 is configured to create at least one of the file corresponding to the plurality of metrics corresponding to the at least one selected process. The fetching unit 235 is configured to fetch/pull the at least one of the file to the enrichment unit 245 for validating and enriching the plurality of metrics provided in the at least one of the file.
[0074] Upon creating the file corresponding to the plurality of metrics, the enrichment unit 245 is configured to validate and enrich the plurality of metrics provided in the at least one of the file. The enrichment unit 245 is configured to validate whether the plurality of metrics fetched has errors, or is corrupted, incomplete, or not in a proper format. The enrichment unit 245 is configured to enrich the plurality of metrics by creating dynamic fields, obtained by modifying existing fields in the data, for example by split, append, concatenate, transform, and the like operations. Once the enrichment unit 245 has enriched the plurality of metrics provided in the at least one of the file, the distributed data lake 260 is configured to store the plurality of metrics provided in the at least one of the file.
[0075] At step 525, the method 500 includes the step of analyzing the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics by the analyzing unit 250. The analyzing unit 250 is configured to continuously monitor and analyze the plurality of metrics fetched by the fetching unit 235. Further, the analyzing unit 250 is configured to detect the anomalies within the plurality of metrics by utilizing the historical data to recognize normal and abnormal patterns and by identifying deviations from expected trends. The anomalies are typically defined as a deviation from the normal behavior or expected range of values. In an embodiment, the anomalies include, but are not limited to, increased bandwidth usage, high CPU utilization, packet loss, or unusual traffic patterns that deviate from the established baseline.
[0076] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to retrieve a list of processes hosted on at least one container. The processor 205 is configured to receive a request corresponding to selection of at least one process from the list of processes. Further, the processor 205 is configured to add the at least one selected process to an Agent Manager (AM) unit 230. The processor 205 is configured to fetch the plurality of metrics corresponding to the at least one selected process. The processor 205 is configured to analyze the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics.
[0077] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0078] The present disclosure incorporates the technical advancement of an agentless, unique architectural design for efficiently fetching the plurality of metrics corresponding to the at least one selected process in the cloud environment. The present invention is configured for fetching the plurality of metrics to detect anomalies so as to improve CPU performance and the processing speed of the processor, avoid downgrading of the server, and reduce the memory-space requirement.
[0079] The present invention offers multiple advantages over the prior art, and the advantages listed above are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0080] Environment – 100;
[0081] Network – 105;
[0082] User Equipment – 110;
[0083] Server – 115;
[0084] System – 120;
[0085] One or more processors – 205;
[0086] Memory – 210;
[0087] Retrieval unit – 215;
[0088] Interface unit – 220;
[0089] Management unit – 225;
[0090] Agent Manager unit – 230;
[0091] Fetching unit – 235;
[0092] Creation unit – 240;
[0093] Enrichment unit – 245;
[0094] Analyzing unit – 250;
[0095] Transmitting unit – 255;
[0096] Distributed data lake – 260;
[0097] One or more primary processors – 305;
[0098] Memory – 310;
[0099] First host – 405;
[00100] Second host – 410;
[00101] Infra normalizer – 415;
[00102] AI/ML model – 420;
[00103] Feedback engine – 425.



CLAIMS
We Claim:
1. A method (500) for fetching and analyzing a plurality of metrics in a network (105), the method (500) comprising the steps of:
retrieving (505), by one or more processors (205), a list of processes hosted on at least one container;
receiving (510), by the one or more processors (205), a request corresponding to selection of at least one process from the list of processes;
adding (515), by the one or more processors (205), the at least one selected process to an Agent Manager (AM) unit (230);
fetching (520), by the one or more processors (205), the plurality of metrics corresponding to the at least one selected process; and
analyzing (525), by the one or more processors (205), the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics.

2. The method (500) as claimed in claim 1, wherein upon fetching the plurality of metrics, the method (500) comprises the step of:
creating, by the one or more processors (205), at least one of a file corresponding to the plurality of metrics corresponding to the at least one selected process;
validating and enriching, by the one or more processors (205), the plurality of metrics provided in at least one of the file; and
storing, by the one or more processors (205), the plurality of metrics provided in at least one of the file in a distributed data lake (260).

3. The method (500) as claimed in claim 1, wherein the one or more processors (205) is configured to receive the request via at least one of a User Interface (UI) and a Command Line Interface (CLI).

4. The method (500) as claimed in claim 1, wherein the AM unit (230) is managed by a dedicated Virtual Machine for the at least one respective container.

5. The method (500) as claimed in claim 1, wherein the plurality of metrics comprises at least one of a Central Processing Unit (CPU) utilization by each process in the list of processes, network usage, Operating System (OS) usage, and memory usage.

6. The method (500) as claimed in claim 1, wherein on detection of anomalies, the method (500) comprises the step of:
transmitting, by the one or more processors (205), a notification to a user equipment in response to one of the detection of the anomalies and the forecasting of the anomalies based on the fetched plurality of metrics.

7. A system (120) for fetching and analyzing a plurality of metrics in a network (105), the system (120) comprising:
a retrieval unit (215) configured to retrieve a list of processes hosted on at least one container;
an interface unit (220) configured to receive a request corresponding to selection of at least one process from the list of processes;
a management unit (225) configured to add the at least one selected process to an Agent Manager (AM) unit (230);
a fetching unit (235) configured to fetch the plurality of metrics corresponding to the at least one selected process; and
an analyzing unit (250) configured to analyze the plurality of metrics to detect anomalies or forecast anomalies based on the fetched plurality of metrics.

8. The system (120) as claimed in claim 7, wherein the system (120) further comprises:
a creation unit (240) configured to create at least one of a file corresponding to the plurality of metrics corresponding to the at least one selected process; and
an enrichment unit (245) configured to validate and enrich the plurality of metrics provided in at least one of the file.

9. The system (120) as claimed in claim 7, wherein the plurality of metrics provided in at least one of the file is stored in a distributed data lake (260).

10. The system (120) as claimed in claim 7, wherein the interface unit (220) is configured to receive the request via at least one of a User Interface (UI) and a Command Line Interface (CLI).

11. The system (120) as claimed in claim 7, wherein the AM unit (230) is managed by a dedicated Virtual Machine for the at least one respective container.

12. The system (120) as claimed in claim 7, wherein the plurality of metrics comprises at least one of a Central Processing Unit (CPU) utilization by each process in the list of processes, network usage, Operating System (OS) usage, and memory usage.

13. The system (120) as claimed in claim 7, further comprising:
a transmitting unit (255) configured to transmit a notification to a user equipment in response to one of the detection of the anomalies and the forecasting of the anomalies based on the fetched plurality of metrics.

14. A User Equipment (UE) (110) comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (110) to:
select at least one process from a list of processes,
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321047835-STATEMENT OF UNDERTAKING (FORM 3) [15-07-2023(online)].pdf 2023-07-15
2 202321047835-PROVISIONAL SPECIFICATION [15-07-2023(online)].pdf 2023-07-15
3 202321047835-FORM 1 [15-07-2023(online)].pdf 2023-07-15
4 202321047835-FIGURE OF ABSTRACT [15-07-2023(online)].pdf 2023-07-15
5 202321047835-DRAWINGS [15-07-2023(online)].pdf 2023-07-15
6 202321047835-DECLARATION OF INVENTORSHIP (FORM 5) [15-07-2023(online)].pdf 2023-07-15
7 202321047835-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321047835-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321047835-DRAWING [13-07-2024(online)].pdf 2024-07-13
10 202321047835-COMPLETE SPECIFICATION [13-07-2024(online)].pdf 2024-07-13
11 Abstract-1.jpg 2024-08-28
12 202321047835-FORM-9 [15-10-2024(online)].pdf 2024-10-15
13 202321047835-FORM 18A [16-10-2024(online)].pdf 2024-10-16
14 202321047835-Power of Attorney [21-10-2024(online)].pdf 2024-10-21
15 202321047835-Form 1 (Submitted on date of filing) [21-10-2024(online)].pdf 2024-10-21
16 202321047835-Covering Letter [21-10-2024(online)].pdf 2024-10-21
17 202321047835-CERTIFIED COPIES TRANSMISSION TO IB [21-10-2024(online)].pdf 2024-10-21
18 202321047835-FORM 3 [02-12-2024(online)].pdf 2024-12-02
19 202321047835-FER.pdf 2025-09-30
20 202321047835-FORM 3 [11-11-2025(online)].pdf 2025-11-11
21 202321047835-FER_SER_REPLY [11-11-2025(online)].pdf 2025-11-11
22 202321047835-COMPLETE SPECIFICATION [11-11-2025(online)].pdf 2025-11-11

Search Strategy

1 Search048735E_28-11-2024.pdf