ABSTRACT
METHOD AND SYSTEM OF SAVING POWER CONSUMPTION OF AT LEAST ONE OF SERVERS IN NETWORK
The present disclosure relates to a system (120) and a method (400) of saving power consumption of at least one of a plurality of servers (115) in a network (105). The system (120) includes an analysis manager (215) configured to fetch server metrics of at least one of the plurality of servers (115) stored in a database (240). The analysis manager (215) is further configured to analyze, utilizing a trained model, the server metrics fetched from the database (240) to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold. The system (120) includes a feedback engine (220) configured to, in response to determining that the fetched server metrics exceed the predefined threshold, trigger one or more actions in order to save power consumption of at least one of the plurality of servers (115) in the network (105). Ref. Fig. 2
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM OF SAVING POWER CONSUMPTION OF AT LEAST ONE OF SERVERS IN NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication networks, and more particularly relates to a method and system for saving power consumption of at least one of a plurality of servers in the wireless communication network.
BACKGROUND OF THE INVENTION
[0002] Currently, there exists no mechanism that can save server power consumption by dynamically monitoring the power consumption of server CPUs in a cloud network. Often, based on workloads, one or more servers tend to consume power beyond normal threshold parameters. There is a need for a system and method that can dynamically monitor server CPU utilization, analyze the hysteresis patterns in the CPU utilization and power-related metrics, and provide the required actions to save power.
[0003] Hence, an improved system and method for saving server power consumption is proposed.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and a system of saving power consumption of at least one of a plurality of servers in a network.
[0005] In one aspect of the present invention, the method of saving power consumption of at least one of a plurality of servers in a network is disclosed. The method includes the step of fetching, by one or more processors, server metrics of at least one of the plurality of servers stored in a database. The method includes the step of analyzing, by the one or more processors, utilizing a trained model, the server metrics fetched from the database to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold. The method includes the step of, in response to determining that the fetched server metrics exceed the predefined threshold, triggering, by the one or more processors, one or more actions in order to save power consumption of at least one of the plurality of servers in the network.
[0006] In one embodiment, the server metrics of at least one of the plurality of servers pertain to power-related metrics pertaining to at least one of Central Processing Unit (CPU) utilization, CPU power clock cycles, power consumption, and power efficiency.
[0007] In another embodiment, the one or more actions include at least one of: dynamically adjusting the server metrics of the at least one of the plurality of servers based on a workload demand; and enforcing power management policies at the server level, including at least one of placing idle CPUs into low-power states or adjusting power-saving settings of the at least one of the plurality of servers, and setting at least one of quotas and limits pertaining to the power consumption of the plurality of servers.
[0008] In yet another embodiment, the model is an Artificial Intelligence/Machine Learning (AI/ML) model.
[0009] In yet another embodiment, the model is trained utilizing historical data pertaining to the server metrics of the at least one of the plurality of servers.
[0010] In yet another embodiment, the trained model learns trends/patterns related to the server metrics of the at least one of the plurality of servers.
[0011] In yet another embodiment, the predefined threshold is set by the one or more processors based on the trends/patterns of the historical data pertaining to server metrics of the at least one of the plurality of servers.
[0012] In yet another embodiment, the step of analyzing, by the one or more processors, utilizing a trained model, the server metrics fetched from the database to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold includes the steps of: comparing, by the one or more processors, the server metrics fetched from the database with the predefined threshold; and, in response to determining, by the one or more processors, a deviation in at least one of the fetched server metrics exceeding the predefined threshold based on the comparison, inferring, by the one or more processors, high power consumption or inefficient resource utilization of the at least one of the plurality of servers.
[0013] In yet another embodiment, the one or more processors is configured to generate at least one of, an alarm based on a predefined threshold value and a report pertaining to the power consumption of at least one of the plurality of servers.
[0014] In yet another embodiment, the one or more processors is configured to detect an anomaly pertaining to high power consumption in at least one of the plurality of servers.
[0015] In yet another embodiment, the one or more processors is further configured to take one or more pre-emptive actions using a predefined threshold value.
[0016] In another aspect of the present invention, the system of saving power consumption of at least one of a plurality of servers in a network is disclosed. The system includes an analysis manager configured to fetch server metrics of at least one of the plurality of servers stored in a database. The analysis manager is further configured to analyze, utilizing a trained model, the server metrics fetched from the database to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold. The system includes a feedback engine configured to, in response to determining that the fetched server metrics exceed the predefined threshold, trigger one or more actions in order to save power consumption of at least one of the plurality of servers in the network.
[0017] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions executable by a processor is disclosed. The processor is configured to fetch server metrics of at least one of the plurality of servers stored in a database. The processor is configured to analyze, utilizing a trained model, the server metrics fetched from the database to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold. The processor is configured to, in response to determining that the fetched server metrics exceed the predefined threshold, trigger one or more actions in order to save power consumption of at least one of the plurality of servers in the network.
[0018] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0020] FIG. 1 is an exemplary block diagram of an environment of saving power consumption of at least one of a plurality of servers in a network, according to one or more embodiments of the present disclosure;
[0021] FIG. 2 is an exemplary block diagram of a system of saving power consumption of at least one of the plurality of servers in the network, according to one or more embodiments of the present disclosure;
[0022] FIG. 3 is an exemplary block diagram of an architecture that can be implemented in the system of FIG. 2, according to one or more embodiments of the present disclosure;
[0023] FIG. 4 is a flow diagram illustrating a method of saving power consumption of at least one of the plurality of servers in the network, according to one or more embodiments of the present disclosure; and
[0024] FIG. 5 is a flow diagram illustrating the method of analyzing server metrics fetched from a database to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold, according to one or more embodiments of the present disclosure.
[0025] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0026] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0027] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed here below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0028] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0029] FIG. 1 illustrates an exemplary block diagram of an environment 100 of saving power consumption of at least one of a plurality of servers 115 in a network 105, according to one or more embodiments of the present disclosure. The environment 100 includes the network 105, a User Equipment (UE) 110, the at least one of the plurality of servers 115, and a system 120. The UE 110 aids a user to interact with the system 120 for saving power consumption of the at least one of the plurality of servers 115 in the network 105. In an embodiment, the user includes, at least one of, a network operator.
[0030] The term “server” and “at least one of the plurality of servers” can be used interchangeably herein.
[0031] As per the illustrated embodiment and for the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c connected to the network 105, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0032] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0033] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0034] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0035] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, an entity associated with the server 115 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0036] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is adapted to be embedded within the server 115 or to be deployed as an individual entity.
[0037] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0038] FIG. 2 illustrates an exemplary block diagram of the system 120 of saving power consumption of the at least one of the plurality of servers 115 in the network 105, according to one or more embodiments of the present disclosure. The system 120 includes one or more processors 205, a memory 210, and a database 240. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 205. However, in alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105 and without deviating from the scope of the present disclosure.
[0039] Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0040] The database 240 is a data repository providing storage and computing for structured and unstructured data, such as for machine learning, streaming, or data science. The database 240 allows the user and/or an organization to ingest and manage large volumes of data in an aggregated storage solution for business intelligence or data products. The database 240 may be implemented and utilize different technologies.
[0041] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0042] In order for the system 120 to save power consumption of the at least one of the plurality of servers 115 in the network 105, the processor 205 includes an analysis manager 215, a feedback engine 220, an anomaly detection engine 225, a reporting and alarm engine 230, and a forecasting engine 235 communicably coupled to each other for saving power consumption of the at least one of the plurality of servers 115 in the network 105. In an embodiment, operations and functionalities of the analysis manager 215, the feedback engine 220, the anomaly detection engine 225, the reporting and alarm engine 230, and the forecasting engine 235 can be used in combination or interchangeably.
[0043] The analysis manager 215, the feedback engine 220, the anomaly detection engine 225, the reporting and alarm engine 230, and the forecasting engine 235, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0044] The analysis manager 215 is configured to fetch server metrics of the at least one of the plurality of servers 115 stored in the database 240. In one embodiment, the server metrics of the at least one of the plurality of servers 115 pertain to power-related metrics pertaining to at least one of a Central Processing Unit (CPU) utilization, CPU power clock cycles, power consumption, and power efficiency. In another embodiment, the metrics are collected for containers and the containers are hosted on the server 115. One or more Agent Managers (AMs) 308 (shown in FIG. 3) are hosted on a first host 310 and a second host 315 (shown in FIG. 3). The one or more AMs 308 are allocated to each of the first and second hosts 310, 315 by providing the IP addresses of the first and second hosts 310, 315.
[0045] In another embodiment, the analysis manager 215 is also configured to collect container, Docker, image, volume, and daemon-type service statistics, along with Kubernetes statistics. A mapping of the containers to the first and second hosts 310, 315 is completed based on the server metrics. The server metrics to be pulled are defined at the one or more AMs 308 to ensure that the server metrics of each individual container running on a specific host server are captured effectively.
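By way of illustrative example only, the following Python sketch shows one possible way for the analysis manager 215 to fetch the power-related server metrics stored in the database 240. The SQLite storage, the table name server_metrics, and the column names are illustrative assumptions only.

```python
import sqlite3


def fetch_server_metrics(db_path, server_id, limit=100):
    """Fetch the most recent power-related metrics for one server.

    The table "server_metrics" and its columns (CPU utilization, CPU clock,
    power consumption, power efficiency) are assumed purely to illustrate
    the fetch step performed by the analysis manager 215.
    """
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(
            "SELECT timestamp, cpu_utilization, cpu_clock_mhz, "
            "power_watts, power_efficiency "
            "FROM server_metrics WHERE server_id = ? "
            "ORDER BY timestamp DESC LIMIT ?",
            (server_id, limit),
        )
        columns = [c[0] for c in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        conn.close()


# Example usage, assuming the database file and table already exist:
# recent = fetch_server_metrics("metrics.db", server_id="host-310")
```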
[0046] Upon fetching the server metrics of the at least one of the plurality of servers 115, the analysis manager 215 is configured to analyze the server metrics fetched from the database 240 to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold by utilizing a trained model. In an embodiment, the trained model is an Artificial Intelligence/Machine Learning (AI/ML) model 335 (shown in FIG. 3).
[0047] The AI/ML model 335 is responsible for running the AI/ML techniques on the metrics that are stored in the database 240. In one embodiment, supervised learning is a type of machine learning algorithm that is trained on a labeled dataset, in which each training example is paired with an output label. The supervised learning algorithm learns to map inputs to the correct output. In one embodiment, unsupervised learning is a type of machine learning algorithm that is trained on data without any labels. The unsupervised learning algorithm tries to learn the underlying structure or distribution of the data in order to discover patterns or groupings. In one embodiment, reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize cumulative reward. The agent receives feedback in the form of rewards or penalties based on the actions it takes, and it learns a policy that maps states of the environment to the best actions.
[0048] The AI/ML model 335 is trained utilizing historical data pertaining to the server metrics of the at least one of the plurality of servers 115. The historical data is used to analyze past network performance and identify trends or patterns. The trained AI/ML model 335 learns trends/patterns related to the server metrics of the at least one of the plurality of servers 115 and is configured to analyze the trends over time, such as gradual increases in bandwidth usage or recurring patterns of downtime, which aids in understanding the long-term behavior of the network 105.
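By way of illustrative example only, the sketch below shows one simple way the predefined threshold could be derived from historical server metrics; the baseline of mean plus a multiple of the standard deviation is merely a stand-in for the trend/pattern learning performed by the trained AI/ML model 335.

```python
from statistics import mean, stdev


def learn_thresholds(history, k=3.0):
    """Derive a per-metric threshold from historical server metrics.

    `history` is a list of dicts mapping metric name to value; the rule
    "mean + k * standard deviation" is an illustrative stand-in for the
    trained model's learned trends/patterns.
    """
    thresholds = {}
    for name in (history[0].keys() if history else []):
        values = [sample[name] for sample in history]
        spread = stdev(values) if len(values) > 1 else 0.0
        thresholds[name] = mean(values) + k * spread
    return thresholds


# Example: historical CPU utilization (%) and power draw (W) for one server.
history = [
    {"cpu_utilization": 42.0, "power_watts": 180.0},
    {"cpu_utilization": 47.5, "power_watts": 195.0},
    {"cpu_utilization": 44.2, "power_watts": 188.0},
]
print(learn_thresholds(history))
```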
[0049] Further, the analysis manager 215 is configured to determine whether the fetched server metrics are within the predefined threshold or exceeding the predefined threshold. In an embodiment, the predefined threshold is set by the one or more processors 205 based on the trends/patterns of the historical data pertaining to the server metrics of the at least one of the plurality of servers 115. In this regard, the analysis manager 215 is configured to compare the server metrics fetched from the database 240 with the predefined threshold.
[0050] Upon comparing the server metrics fetched from the database 240 with the predefined threshold, the analysis manager 215 is configured to determine a deviation in at least one of the fetched server metrics exceeding the predefined threshold based on the comparison. If the fetched server metrics exceed the predefined threshold, the analysis manager 215 is configured to infer high power consumption or inefficient resource utilization of the at least one of the plurality of servers 115.
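The comparison and inference step may be illustrated, purely as a sketch, by the following Python fragment; the metric names and threshold values are assumed for illustration.

```python
def check_deviation(fetched, thresholds):
    """Compare fetched metrics with the predefined thresholds.

    Returns the metrics that exceed their threshold; a non-empty result is
    interpreted here as high power consumption or inefficient resource
    utilization of the server (illustrative logic only).
    """
    return {
        name: value
        for name, value in fetched.items()
        if name in thresholds and value > thresholds[name]
    }


fetched = {"cpu_utilization": 91.0, "power_watts": 260.0}
thresholds = {"cpu_utilization": 80.0, "power_watts": 220.0}
deviation = check_deviation(fetched, thresholds)
if deviation:
    print("High power consumption or inefficient utilization inferred:", deviation)
```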
[0051] As per one or more embodiments, in response to determining that the fetched server metrics exceed the predefined threshold, the feedback engine 220 is configured to trigger one or more actions in order to save power consumption of the at least one of the plurality of servers 115 in the network 105. In an embodiment, the one or more actions include dynamically adjusting the server metrics of the at least one of the plurality of servers 115 based on a workload demand. In another embodiment, the one or more actions include enforcing power management policies at the server 115 level, including at least one of placing idle CPUs into low-power states or adjusting power-saving settings of the at least one of the plurality of servers 115, and setting at least one of quotas and limits pertaining to the power consumption of the plurality of servers 115.
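As a non-limiting sketch of one such action, the following Python fragment switches the CPU frequency governor of a Linux host to a power-saving mode through the cpufreq sysfs interface; the choice of this particular interface and the governor name are assumptions made for illustration, and the operation requires root privileges.

```python
import glob


def set_cpu_governor(governor="powersave"):
    """Place CPUs into a power-saving frequency governor.

    Writing to the Linux cpufreq sysfs interface is shown as one possible
    server-level power-management action; it is illustrative only and
    requires root privileges on a kernel with cpufreq support.
    """
    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")
    for path in paths:
        try:
            with open(path, "w") as handle:
                handle.write(governor)
        except PermissionError:
            print(f"insufficient privileges to update {path}")


def trigger_power_saving_actions(deviation):
    """Feedback-engine style behaviour: act only when a deviation exists."""
    if deviation:
        set_cpu_governor("powersave")
```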
[0052] As per one or more embodiments, when the fetched server metrics exceed the predefined threshold, the anomaly detection engine 225 is configured to detect an anomaly pertaining to high power consumption in the at least one of the plurality of servers 115. The anomaly detection engine 225 is configured to utilize the AI/ML model 335, including at least one of machine learning models, statistical analysis, or predefined rules, to identify and confirm the presence of the anomaly. Upon identifying the anomaly related to high power consumption, the anomaly detection engine 225 triggers the one or more actions to reduce power usage, for example, modifying the power settings or configurations of the at least one of the plurality of servers 115 to operate in a more energy-efficient mode.
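One simple statistical realisation of such anomaly detection is sketched below; the z-score rule is only an illustrative stand-in for the machine learning models, statistical analysis, or predefined rules mentioned above.

```python
from statistics import mean, stdev


def detect_power_anomalies(power_readings, z_limit=2.0):
    """Flag power readings whose z-score exceeds `z_limit`.

    A simple statistical rule standing in for the techniques the anomaly
    detection engine 225 may use to confirm high power consumption.
    """
    if len(power_readings) < 2:
        return []
    mu, sigma = mean(power_readings), stdev(power_readings)
    if sigma == 0:
        return []
    return [
        (index, value)
        for index, value in enumerate(power_readings)
        if abs(value - mu) / sigma > z_limit
    ]


# The fourth reading (420 W) stands out against the ~180 W baseline.
print(detect_power_anomalies([180, 185, 182, 420, 184, 181]))
```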
[0053] Upon detecting the anomaly pertaining to the high power consumption in the at least one of the plurality of servers 115, the reporting and alarm engine 230 is configured to generate at least one of an alarm based on a predefined threshold value and a report pertaining to the power consumption of the at least one of the plurality of servers 115. Consider, for example, a scenario in which idle CPUs consume more power. At that moment, the reporting and alarm engine 230 is configured to generate the alarm based on the consumption of power by the idle CPUs. Owing to this, the analysis manager 215 is configured to set at least one of quotas and limits pertaining to the power consumption of the plurality of servers 115.
[0054] Upon triggering the one or more actions in order to save the power consumption of the at least one of the plurality of servers 115 in the network 105, the forecasting engine 235 is configured to take one or more pre-emptive actions using a predefined threshold value. The forecasting engine 235 is configured to predict future conditions and take proactive measures based on the predictions.
[0055] Furthermore, the forecasting engine 235 is configured to operate using the predefined threshold value, which is established based on the historical data and the trends/patterns related to the server metrics of the at least one of the plurality of servers 115. The predefined threshold value aids in anticipating future anomalies or inefficiencies before they occur. By doing so, the system 120 incorporates a unique agentless architectural design that can efficiently fetch the server metrics of the CPU utilization, CPU power clock cycles, and power consumption, and accordingly execute one or more actions, such as raising the alarm or adjusting CPU power based on workload demands, to save power consumption of the at least one of the plurality of servers 115 in the network 105.
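By way of illustration, the following sketch shows how the forecasting engine 235 could predict a future power value with a simple linear trend and take a pre-emptive action when the forecast exceeds the predefined threshold value; the least-squares trend and the numeric values are assumptions made for illustration.

```python
def forecast_next(values, steps_ahead=1):
    """Least-squares linear trend forecast over equally spaced samples.

    A deliberately simple stand-in for the forecasting engine 235; any
    time-series model could be used in its place.
    """
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / denom
        if denom
        else 0.0
    )
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)


power_history = [180, 190, 205, 215, 230]   # watts per sampling interval
POWER_THRESHOLD_W = 250                     # predefined threshold value
predicted = forecast_next(power_history, steps_ahead=2)
if predicted > POWER_THRESHOLD_W:
    print(f"forecast {predicted:.0f} W exceeds threshold; taking pre-emptive action")
```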
[0056] FIG. 3 is an exemplary block diagram of an architecture 300 that can be implemented in the system 120 of FIG. 2, according to one or more embodiments of the present disclosure.
[0057] In an example, the system 120 includes an infrastructure manager 305 that is a central component of the system 120. The infrastructure manager 305 interacts with a Graphical User Interface (GUI)/dashboard on the southbound interface and with one or more Agent Managers (AMs) 308 on the northbound interface via a Hypertext Transfer Protocol (HTTP) interface. The infrastructure manager 305 allocates host Internet Protocols (IPs) to the one or more AMs 308.
[0058] In an example, the system 120 includes the one or more AMs 308, included in the first host 310 and the second host 315, that interact with the Network Functions (NFs) on the southbound interface. The one or more AMs 308 are configured to receive the server metrics from the at least one of the plurality of servers 115 and to transmit the server metrics to a metric ingestion layer 320. In an embodiment, the metric ingestion layer 320 is referred to as a broker topic, which is a messaging system that distributes the server metrics to other components of the system 120. Further, the one or more AMs 308 running at the first and second hosts 310, 315 fetch metrics of the servers 115. Each of the one or more AMs 308 is defined with the server 115/container/process to collect the metrics from each allocated server/container/process. The one or more AMs 308 are allocated the server 115/container/process using one or more identifiers. In an embodiment, the one or more identifiers include, but are not limited to, an IP address, a process ID, a container ID, and the like.
[0059] The one or more AMs 308 are responsible for collecting the server metrics from the at least one of the plurality of servers 115. The one or more AMs 308 use a variety of methods to collect the server metrics, such as polling, sampling, and event-based collection. The one or more AMs 308 are configured to transmit the collected server metrics to the broker in a format that is easy for the other components of the system 120 to understand.
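A minimal sketch of such an agent manager is given below; it assumes the third-party psutil package for host polling and represents transmission to the broker topic with a placeholder publish function, since the actual messaging client is not specified here.

```python
import json
import socket
import time

import psutil  # third-party package assumed for host metric polling


def collect_cpu_metrics():
    """Poll CPU-related metrics of the host on which the agent manager runs."""
    freq = psutil.cpu_freq()
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_utilization": psutil.cpu_percent(interval=1),
        "cpu_clock_mhz": freq.current if freq else None,
    }


def publish(topic, record):
    """Placeholder for transmission to the broker topic consumed by the
    metric ingestion layer 320; a real deployment would use its own client."""
    print(topic, json.dumps(record))


if __name__ == "__main__":
    publish("server-metrics", collect_cpu_metrics())
```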
[0060] Further, the infrastructure manager 305 is responsible for saving power consumption of the at least one of the plurality of servers 115. The infrastructure manager 305 is configured to allocate the host IPs to the one or more AMs 308. The infrastructure manager 305 interacts with the GUI/dashboard on the southbound interface, which allows a user to view and manage the system 120. The infrastructure manager 305 also interacts with the one or more AMs 308 on the northbound interface, which allows the one or more AMs 308 to communicate with the other components of the system 120. The infrastructure manager 305 is also configured to provide support for a set of Application Programming Interfaces (APIs) through which the first and second hosts 310, 315 can be easily provisioned.
[0061] In an example, the system 120 further includes the metric ingestion layer 320 that consumes the server metrics from the broker topics and creates a Comma-Separated Value (CSV) file for the same. The CSV file is processed by an infrastructure enrichment layer 325. The metric ingestion layer 320 is responsible for consuming the server metrics. The broker topics are the channels which transmit the metrics to the other components of the system 120. The metric ingestion layer 320 creates the CSV file for the metrics, which is easy to process by the infrastructure enrichment layer 325. The metric ingestion layer 320 also performs some data cleansing, such as removing duplicate records and correcting typos, which ensures that the server metrics sent to the infrastructure enrichment layer 325 are accurate and consistent.
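A minimal sketch of the ingestion step is shown below; the records list stands in for messages consumed from the broker topic, and the column names and de-duplication key are illustrative assumptions.

```python
import csv


def ingest_to_csv(records, csv_path):
    """Write broker records to a CSV file, dropping duplicate records.

    De-duplication on (host, timestamp) is a simple stand-in for the data
    cleansing performed by the metric ingestion layer 320.
    """
    fieldnames = ["host", "timestamp", "cpu_utilization", "cpu_clock_mhz", "power_watts"]
    seen = set()
    with open(csv_path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=fieldnames)
        writer.writeheader()
        for record in records:
            key = (record.get("host"), record.get("timestamp"))
            if key in seen:
                continue
            seen.add(key)
            writer.writerow({name: record.get(name) for name in fieldnames})


records = [
    {"host": "host-310", "timestamp": 1, "cpu_utilization": 42.0, "cpu_clock_mhz": 2400, "power_watts": 180},
    {"host": "host-310", "timestamp": 1, "cpu_utilization": 42.0, "cpu_clock_mhz": 2400, "power_watts": 180},
    {"host": "host-315", "timestamp": 1, "cpu_utilization": 65.0, "cpu_clock_mhz": 3100, "power_watts": 210},
]
ingest_to_csv(records, "metrics.csv")
```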
[0062] Upon transmitting the metrics from the metric ingestion layer 320, the infrastructure enrichment layer 325 is configured to fetch/pull the CSV files which are being created by the metric ingestion layer 320. The CSV files are pushed to an infrastructure normalizer 330 for processing. The infrastructure enrichment layer 325 is responsible for enriching the server metrics that are received from the metric ingestion layer 320. The infrastructure enrichment layer 325 adds additional information to the metrics, such as timestamps and metadata. The infrastructure enrichment layer 325 also performs some basic data analysis on the data, such as identifying trends and anomalies. The information is used by the other components of the system 120 to make better decisions to save power consumption of the at least one of the plurality of servers 115.
[0063] Upon receiving the CSV files from the infrastructure enrichment layer 325, the infrastructure normalizer 330, which is a data normalization platform, intelligently processes the server metrics, filters them, and stores the filtered server metrics in the database 240. The infrastructure normalizer 330 is responsible for normalizing the server metrics that are received from the infrastructure enrichment layer 325. The infrastructure normalizer 330 is configured to convert the data into a standard format and remove any outliers or anomalies. The normalized data is then stored in the database 240, which is a repository for storing large amounts of data. The infrastructure normalizer 330 also performs some basic data mining on the data, such as identifying patterns and correlations. This information is used by the other components of the system 120 to make better decisions to save power consumption of the at least one of the plurality of servers 115.
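The normalization and storage step may be sketched as follows; the reduced schema, the z-score outlier rule, and the use of SQLite as a stand-in for the database 240 are illustrative assumptions.

```python
import csv
import sqlite3
from statistics import mean, stdev


def normalize_and_store(csv_path, db_path, z_limit=3.0):
    """Convert CSV rows to a standard schema, drop outliers, and store them.

    SQLite is used here only as a stand-in for the database 240, and the
    z-score rule as a stand-in for the normalizer's outlier removal.
    """
    with open(csv_path, newline="") as handle:
        rows = [
            {
                "host": row["host"],
                "timestamp": float(row["timestamp"]),
                "power_watts": float(row["power_watts"]),
            }
            for row in csv.DictReader(handle)
        ]
    if len(rows) > 1:
        values = [row["power_watts"] for row in rows]
        mu, sigma = mean(values), stdev(values)
        if sigma > 0:
            rows = [r for r in rows if abs(r["power_watts"] - mu) / sigma <= z_limit]
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS normalized_metrics "
            "(host TEXT, timestamp REAL, power_watts REAL)"
        )
        conn.executemany(
            "INSERT INTO normalized_metrics VALUES (?, ?, ?)",
            [(r["host"], r["timestamp"], r["power_watts"]) for r in rows],
        )
    conn.close()


# Example usage, continuing from the CSV produced by the ingestion sketch:
# normalize_and_store("metrics.csv", "metrics.db")
```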
[0064] Upon processing of the server metrics by the infrastructure normalizer 330, the AI/ML model 335 runs on the server metrics to find any anomalies or triggers forecasting for the server metrics. The AI/ML model 335 is configured to transmit the results to the forecasting engine 235, the reporting & alarm engine 230, and the anomaly detection engine 225. The AI/ML model 335 is responsible for running the AI/ML techniques on the server metrics that are stored in the database 240. The AI/ML techniques are used to identify any anomalies in the server metrics or to forecast future trends. The results of the AI/ML techniques are sent to the forecasting engine 235, the reporting & alarm engine 230, and the anomaly detection engine 225.
[0065] Upon receiving the results of the AI/ML techniques, the system 120 further includes the forecasting engine 235 that receives a request from the AI/ML model 335 to take one or more actions using the pre-defined threshold. The forecasting engine 235 has the capability to trigger network expansion based on data trends from the AI/ML techniques. The forecasting engine 235 is responsible for taking the one or more actions based on the results of the AI/ML techniques. The forecasting engine 235 uses the metrics from the AI/ML model 335 to forecast future demand for network resources. The forecasting engine 235 compares the forecast to the current demand and takes one or more actions if the forecast indicates that the demand will exceed the current capacity.
[0066] Further, the system 120 includes the reporting & alarm engine 230 that receives the request from the AI/ML model 335 to generate the alarms based on the pre-defined threshold value. The reporting & alarm engine 230 is configured to generate the alarms based on the results of the AI/ML techniques. The reporting & alarm engine 230 also sends reports to the user about the server performance. The reports include information such as the current demand for the one or more resources, the forecast for future demand, and the actions that have been taken by the forecasting engine 235.
[0067] Further, the system 120 further includes the anomaly detection engine 225 that receives the anomaly request from the AI/ML model 335 to take action in a closed loop. The anomaly detection engine 225 is responsible for detecting anomalies in the metrics. The anomaly detection engine 225 sends reports to the user about the anomalies that have been detected. The reports include information such as the type of anomaly, the time at which it occurred, and the impact of the anomaly on the system 120.
[0068] FIG. 4 is a flow diagram illustrating a method 400 of saving power consumption of the at least one of the plurality of servers 115 in the network 105, according to one or more embodiments of the present disclosure. For the purpose of description, the method 400 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0069] At step 405, the method 400 includes the step of fetching the server metrics of the at least one of the plurality of servers 115 stored in the database 240 by the analysis manager 215. In one embodiment, the server metrics of the at least one of the plurality of servers 115 pertain to power-related metrics pertaining to at least one of a Central Processing Unit (CPU) utilization, CPU power clock cycles, power consumption, and power efficiency. In another embodiment, the metrics are collected for containers and the containers are hosted on the first and the second hosts 310, 315. The analysis manager 215 is allocated to each of the first and second hosts 310, 315 by providing the IP addresses of the first and second hosts 310, 315.
[0070] At step 410, the method 400 includes the step of analyzing the server metrics fetched from the database 240 to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold by utilizing the trained model by the analysis manager 215. In an embodiment, the trained model is an Artificial Intelligence/Machine Learning (AI/ML) model 335.
[0071] At step 415, the method 400 includes the step of triggering one or more actions in order to save power consumption of the at least one of the plurality of servers 115 in the network 105 by the feedback engine 220. In an embodiment, the one or more actions include dynamically adjusting the server metrics of the at least one of the plurality of servers 115 based on a workload demand. In another embodiment, the one or more actions include enforcing power management policies at the server 115 level, including at least one of placing idle CPUs into low-power states or adjusting power-saving settings of the at least one of the plurality of servers 115, and setting at least one of quotas and limits pertaining to the power consumption of the plurality of servers 115.
[0072] As per one or more embodiments, when the fetched server metrics exceed the predefined threshold, the anomaly detection engine 225 is configured to detect the anomaly pertaining to high power consumption in the at least one of the plurality of servers 115. The anomaly detection engine 225 is configured to utilize the AI/ML model 335, including at least one of machine learning models, statistical analysis, or predefined rules, to identify and confirm the presence of the anomaly. Upon identifying the anomaly related to high power consumption, the anomaly detection engine 225 triggers the one or more actions to reduce power usage.
[0073] Upon detecting the anomaly pertaining to the high power consumption in the at least one of the plurality of servers 115, the reporting and alarm engine 230 is configured to generate at least one of an alarm based on a predefined threshold value and a report pertaining to the power consumption of the at least one of the plurality of servers 115. Consider, for example, a scenario in which idle CPUs consume more power. At that moment, the reporting and alarm engine 230 is configured to generate the alarm based on the consumption of power by the idle CPUs. Owing to this, the analysis manager 215 is configured to set at least one of quotas and limits pertaining to the power consumption of the plurality of servers 115.
[0074] Upon triggering the one or more actions in order to save the power consumption of the at least one of the plurality of servers 115 in the network 105, the forecasting engine 235 is configured to take one or more pre-emptive actions using a predefined threshold value. The forecasting engine 235 is configured to predict future conditions and take proactive measures based on the predictions.
[0075] Furthermore, the forecasting engine 235 is configured to operate using the predefined threshold value, which is established based on the historical data and the trends/patterns related to the server metrics of the at least one of the plurality of servers 115. The predefined threshold value aids in anticipating future anomalies or inefficiencies before they occur. By doing so, the method 400 incorporates a unique agentless architectural design that can efficiently fetch the server metrics of the CPU utilization, CPU power clock cycles, and power consumption, and accordingly execute one or more actions, such as raising the alarm or adjusting CPU power based on workload demands, to save power consumption of the at least one of the plurality of servers 115 in the network 105.
[0076] FIG. 5 is a flow diagram illustrating the method 500 of analyzing server metrics fetched from the database 240 to determine whether the fetched server metrics are within the predefined threshold or exceeding the predefined threshold, according to one or more embodiments of the present disclosure.
[0077] At step 505, the method 500 includes the step of comparing the server metrics fetched from the database 240 with the predefined threshold. In an embodiment, the predefined threshold is set by the one or more processors 205 based on the trends/patterns of the historical data pertaining to server metrics of the at least one of the plurality of servers 115.
[0078] At step 510, the method 500 includes the step of determining, by the analysis manager 215, a deviation in at least one of the fetched server metrics exceeding the predefined threshold based on the comparison with the predefined threshold. If the fetched server metrics exceed the predefined threshold, the analysis manager 215 is configured to infer high power consumption or inefficient resource utilization of the at least one of the plurality of servers 115.
[0079] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to fetch server metrics of at least one of the plurality of servers 115 stored in a database 240. The processor 205 is configured to analyze, utilizing a trained model, the server metrics fetched from the database 240 to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold. The processor 205 is configured to, in response to determining that the fetched server metrics exceed the predefined threshold, trigger one or more actions in order to save power consumption of at least one of the plurality of servers 115 in the network 105.
[0080] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0081] The present disclosure incorporates a technical advancement wherein, in response to determining that the fetched server metrics exceed the predefined threshold, the one or more actions are triggered in order to save power consumption of the at least one of the plurality of servers 115 in the network 105. By doing so, the present disclosure incorporates a unique agentless architectural design that can efficiently fetch the server metrics of the CPU utilization, CPU power clock cycles, and power consumption, and accordingly execute one or more actions, such as raising the alarm or adjusting CPU power based on workload demands, to save power consumption of the at least one of the plurality of servers 115 in the network 105.
[0082] The present invention offers multiple advantages over the prior art, and the above are a few examples emphasizing some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0083] Environment – 100;
[0084] Network – 105;
[0085] UE – 110;
[0086] Server – 115;
[0087] System – 120;
[0088] Processor – 205;
[0089] Memory – 210;
[0090] Analysis manager – 215;
[0091] Feedback engine – 220;
[0092] Anomaly detection engine – 225;
[0093] Reporting and alarm engine – 230;
[0094] Forecasting engine – 235;
[0095] Database – 240;
[0096] Infrastructure manager – 305;
[0097] First host – 310;
[0098] Second host – 315;
[0099] Metric ingestion layer – 320;
[00100] Infrastructure enrichment layer – 325;
[00101] Infrastructure normalizer – 330;
[00102] AI/ML model – 335.
CLAIMS
We Claim:
1. A method (400) of saving power consumption of at least one of a plurality of servers (115) in a network (105), the method (400) comprising the steps of:
fetching, by one or more processors (205), server metrics of at least one of the plurality of servers (115) stored in a database (240);
analyzing, by the one or more processors (205), utilizing a trained model, the server metrics fetched from the database (240) to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold; and
in response to determining that the fetched server metrics are exceeding the predefined threshold, triggering, by the one or more processors (205), one or more actions in order to save power consumption of at least one of the plurality of servers (115) in the network (105).
2. The method (400) as claimed in claim 1, wherein the server metrics of the at least one of the plurality of servers (115) pertain to power related metrics pertaining to at least one of, Central Processing Unit (CPU) utilization, CPU power clock cycles, power consumption, and power efficiency.
3. The method (400) as claimed in claim 1, wherein the one or more actions includes at least one of:
dynamically adjusting the server metrics of the at least one of the plurality of servers (115) based on a workload demand; and
enforcing power management policies at the server level, including, at least one of, enabling idle CPUs into low-power states or adjusting power-saving settings of the at least one of the plurality of servers, setting at least one of, quotas and limits pertaining to the power consumption of the plurality of servers (115).
4. The method (400) as claimed in claim 1, wherein the model is at least one of an Artificial Intelligence/Machine Learning (AI/ML) model (335).
5. The method (400) as claimed in claim 1, wherein the model is trained utilizing historical data pertaining to the server metrics of the at least one of the plurality of servers (115).
6. The method (400) as claimed in claim 1, wherein the trained model learns trends/patterns related to the server metrics of the at least one of the plurality of servers (115).
7. The method (400) as claimed in claim 6, wherein the predefined threshold is set by the one or more processors (205) based on the trends/patterns of the historical data pertaining to server metrics of the at least one of the plurality of servers (115).
8. The method (400) as claimed in claim 1, wherein the step of analyzing, by the one or more processors (205), utilizing a trained model, the server metrics fetched from the database (240) to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold, includes the steps of:
comparing, by the one or more processors (205), the server metrics fetched from the database (240) with the predefined threshold;
in response to determining, by the one or more processors (205), a deviation in at least one of the fetched server metrics related to exceeding the predefined threshold based on the comparison with the predefined threshold, inferring, by the one or more processors (205), high power consumption or inefficient resource utilization of the at least one of the plurality of servers (115).
9. The method (400) as claimed in claim 1, wherein the one or more processors (205) is configured to generate at least one of, an alarm based on a predefined threshold value and a report pertaining to the power consumption of at least one of the plurality of servers (115).
10. The method (400) as claimed in claim 1, wherein the one or more processors (205) is configured to detect an anomaly pertaining to high power consumption in at least one of the plurality of servers (115).
11. The method (400) as claimed in claim 1, wherein the one or more processors (205) is further configured to take one or more pre-emptive actions using a predefined threshold value.
12. A system (120) for saving power consumption of at least one of a plurality of servers (115) in a network (105), the system (120) comprises:
an analysis manager (215), configured to, fetch, server metrics of at least one of the plurality of servers (115) stored in a database (240);
the analysis manager (215), configured to, analyze, utilizing a trained model, the server metrics fetched from the database (240) to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold; and
in response to determining that the fetched server metrics are exceeding the predefined threshold, a feedback engine (220), configured to, trigger, one or more actions in order to save power consumption of at least one of the plurality of servers (115) in the network (105).
13. The system (120) as claimed in claim 12, wherein the server metrics of at least one of the plurality of servers (115) pertains to power related metrics pertaining to at least one of, Central Processing Unit (CPU) utilization, CPU power clock cycles, power consumption, and power efficiency.
14. The system (120) as claimed in claim 12, wherein the one or more actions includes at least one of:
dynamically adjusting the server metrics of the at least one of the plurality of servers (115) based on a workload demand; and
enforcing power management policies at the server level, including, at least one of, enabling idle CPUs into low-power states or adjusting power-saving settings of the at least one of the plurality of servers, setting at least one of, quotas and limits pertaining to the power consumption of the plurality of servers (115).
15. The system (120) as claimed in claim 12, wherein the trained model is at least one of an Artificial Intelligence/Machine Learning (AI/ML) model (335).
16. The system (120) as claimed in claim 12, wherein the model is trained utilizing historical data pertaining to the server metrics of the at least one of the plurality of servers (115).
17. The system (120) as claimed in claim 12, wherein the trained model learns trends/patterns related to the server metrics of the at least one of the plurality of servers (115).
18. The system (120) as claimed in claim 17, wherein the predefined threshold is set by the one or more processors (205) based on the trends/patterns of the historical data pertaining to server metrics of the at least one of the plurality of servers (115).
19. The system (120) as claimed in claim 12, wherein the analysis manager (215) analyzes, utilizing a trained model, the server metrics fetched from the database (240) to determine whether the fetched server metrics are within a predefined threshold or exceeding the predefined threshold, by:
comparing, the server metrics fetched from the database (240) with the predefined threshold;
in response to determining a deviation in at least one of the fetched server metrics related to exceeding the predefined threshold based on the comparison with the predefined threshold, inferring, by the one or more processors (205), high power consumption or inefficient resource utilization of the at least one of the plurality of servers (115).
20. The system (120) as claimed in claim 12, wherein a reporting and alarm engine (230) is configured to generate at least one of, an alarm based on a predefined threshold value and a report pertaining to the power consumption of at least one of the plurality of servers (115).
21. The system (120) as claimed in claim 12, wherein an anomaly detection engine (225) is configured to detect an anomaly pertaining to high power consumption in at least one of the plurality of servers (115).
22. The system (120) as claimed in claim 12, wherein a forecasting engine (235) is configured to take one or more pre-emptive actions using a predefined threshold value.