Abstract: An improved power and cooling optimization system is envisaged in accordance with this invention. In accordance with an embodiment of this invention, the power and cooling optimization system includes a mapping system which maps the servers in a pre-determined area; in that it memorises their locations, their input/output power and performance capabilities, the upper and lower temperature thresholds for optimum working, and the allowance for errors in each of the servers with respect to fluctuations in power and heat dissipation, and continuously monitors the aforementioned parameters.
FORM-2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
PROVISIONAL
Specification
(See section 10 and rule 13)
A SYSTEM FOR MANAGING POWER AND OPTIMIZING COOLING FOR COMPUTER SERVERS
TATA CONSULTANCY SERVICES LIMITED
an Indian Company
of Bombay House, 24, Sir Homi Mody Street, Mumbai 400 001,
Maharashtra, India
THE FOLLOWING SPECIFICATION DESCRIBES THE INVENTION.
Field of the Invention:
This invention relates to computer servers.
Particularly, this invention envisages a system for managing power and optimizing cooling for computer servers in a data centre.
Background of the Invention:
A server is defined as a multi-user computer that provides a service (e.g. database access, file transfer, remote access) or resources (e.g. file space) over a network connection. A data centre is a facility used to house a plurality of servers, computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communication connections, environmental controls (air conditioning, fire suppression, and the like), and special security devices.
As equipment becomes faster and more powerful, and as storage and similar capabilities grow, it generates more heat. Cooling this equipment is essential for better performance and for safeguarding it from damage by heat. Typically, convectional and conductional cooling occurs at two levels: global cooling, which is the cooling of the entire data centre, and local cooling, which occurs at a specific rack of servers or at a specific server.
In a typical data centre, most of the energy is used for the cooling and working of computer servers and allied equipment. The power usage of a server is directly proportional to the amount of data processed by the server, typically the load on the server. The cooling of the server is directly proportional to the speed of the fan, which in turn is directly proportional to the power supplied to the fan. Server usage may be continuous, intermittent, or infrequent. This means that there are always servers in the data centre whose average utilization is low. Still, those underutilized servers consume a lot of power because their usage is unmonitored. Cooling these underutilized servers also results in a wastage of energy.
Prior Art:
In today's data centres, the cooling system is not tightly coupled with the Information Technology equipment; it works independently of the equipment it services. In zone-based or rack-based cooling, the cooling system will continue to cool as long as the servers are running, even though the servers may not be utilized at all.
For most organizations, the pressure to remodel or build new data centres can be alleviated through improved server and storage hygiene. As certain racks become more densely populated with servers and blade systems, using perforated floor tiles on a raised floor no longer supplies enough cold air for the systems in the rack. For facilities built in the last decade, raised-floor cooling systems can typically exhaust up to 7 kilowatts per rack. Most data centres don't use that much power per rack, but in certain instances they can use far more. For example, a fully loaded rack of blade servers can draw 30 kilowatts or more; only specialized, localized cooling systems can handle that sort of per-rack load.
It has been suggested to spread out the load: to put blade servers and other high-powered gear in combination with lower-consumption storage and networking systems, or to simply leave the racks partially empty. The geometry of the data centre, however, hinders this approach, and spreading out the load can push the average power draw per rack beyond what most data centres can deliver.
It was hence conceived to pull those high-demand systems back together and use rack-based or row-based cooling systems to augment the room-based air-conditioning.
Rack-based cooling systems are available from a number of vendors. Two with very different approaches are IBM and HP.
IBM's 'eServer Rear Door Heat eXchanger' replaces the back door of a standard IBM rack. The door uses a building's chilled water supply to remove up to 55 percent of the heat generated by the racked systems. The benefit of this approach is its simplicity: the system removes heat before it enters the data centre. By lowering the thermal footprint of the racked equipment, the IBM system can move the high-water mark from 7 kilowatts per rack to about 15 kilowatts. The downside is that the IBM solution requires a water pressure of 60 PSI. Not all building systems can supply that much pressure, particularly if a lot of these racks are to be deployed.
HP's solution is more comprehensive, takes more floor space, and costs considerably more. Its 'Modular Cooling System' also uses the existing chilled water supply but adds self-enclosed fans and pumps. The result is a self-contained unit that can typically remove 30 kilowatts of heat with no impact on the room-based cooling system. Isolating the hottest-running, most-power-hungry systems and segregating them into a rack that removes 100 percent of their generated heat aids in extending the life of a data centre.
If a data centre already owns its racks and simply wants a method for extracting large amounts of heat, Liebert offers systems that mount on or above those racks. Its 'XD systems' typically remove up to 30 kilowatts per rack.
Row-based systems such as Advanced Power Conversion's 'Infrastruxure' and Liebert's 'XDH' use half-rack-width heat exchangers between racks of equipment. The heat exchangers pull exhaust from the back (or hot-aisle side) of the racks and blow conditioned air out of the front. Because these systems substantially limit the mixing of hot exhaust air with cooled air (with Advanced Power Conversion's product, a roof and doors can be put on the hot aisle for full containment), they can be much more efficient than typical computer room air conditioning (CRAC) units. Where CRAC units can draw as much as 60 percent of the power required by the systems they are meant to cool, Advanced Power Conversion's systems can draw as little as 40 percent.
An improved power and cooling optimization system is therefore needed, as saving energy is essential for the data centre.
Objectives of the Invention:
An object of this invention is to provide an improved cooling system for servers in a data centre.
Another object of this invention is to provide an optimized cooling system for a data centre.
Yet another object of this invention is to optimize the management of power in a data centre.
Summary of the Invention:
An improved power and cooling optimization system is envisaged in accordance with this invention.
In accordance with an embodiment of this invention, the power and cooling optimization system includes a mapping system which maps the servers in a pre-determined area; in that it memorises their locations, their input/output power and performance capabilities, the upper and lower temperature thresholds for optimum working, and the allowance for errors in each of the servers with respect to fluctuations in power and heat dissipation, and continuously monitors the aforementioned parameters. Server clusters are used in a data centre to share workloads and to increase availability and reliability. The system in accordance with this invention includes a means to set at least an upper and a lower threshold for the loads to be processed by a server.
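By way of illustration only, the mapped per-server parameters could be held as a simple record. The following is a minimal sketch in Python; every field name, type, and value is hypothetical, since the specification does not prescribe any particular data format:

```python
from dataclasses import dataclass

@dataclass
class ServerRecord:
    """One mapped server; every field name here is illustrative."""
    server_id: str
    location: tuple           # e.g. (rack, slot) within the pre-determined area
    input_power_w: float      # rated input power, in watts
    perf_capacity: float      # performance capability, in arbitrary units
    temp_lower_c: float       # lower temperature threshold for optimum working
    temp_upper_c: float       # upper temperature threshold for optimum working
    power_tolerance_w: float  # allowed fluctuation in power draw
    heat_tolerance_c: float   # allowed fluctuation in heat dissipation
    util_lower_pct: float     # lower load threshold set by the user
    util_upper_pct: float     # upper load threshold set by the user

# An invented two-server cluster map, for illustration only:
cluster_map = {
    "srv-01": ServerRecord("srv-01", ("rack-1", 1), 450.0, 1.0,
                           18.0, 27.0, 25.0, 2.0, 30.0, 70.0),
    "srv-02": ServerRecord("srv-02", ("rack-1", 2), 450.0, 1.0,
                           18.0, 27.0, 25.0, 2.0, 30.0, 70.0),
}
```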
In accordance with an embodiment of this invention, the power and cooling optimization system includes a real time monitoring system which monitors the mapped parameters of the servers in a data centre. The system proactively monitors the server clusters for utilization as it is tightly coupled with the clusters.
In accordance with another embodiment of this invention, the power and cooling optimization system includes a controlling system which has a preset and adaptive governing means. Based on the utilization level, the system will manage the cooling requirements and control the number of servers in the cluster based on inputs to the governing means. The governing means is adapted to run on any industry-standard single board computer which consumes very little power, typically in the range of a few watts. Predefined parameters for the governing means include the upper and lower thresholds at which the server clusters operate; the cooling speeds, typically the time required to cool in relation to the threshold levels, the number of servers, and the amount of cooling time; and the frequency of checks to monitor continuous and efficient usage. The governing means is designed to make the system generic and usable with any vendor's equipment in the data centre.
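These predefined parameters could, for instance, be grouped into a single configuration object for the governing means. The sketch below assumes hypothetical names and default values chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class GovernorConfig:
    """Predefined parameters for the governing means; names and values
    are hypothetical, not prescribed by this specification."""
    util_lower_pct: float    # lower utilization threshold for the cluster
    util_upper_pct: float    # upper utilization threshold for the cluster
    cooling_time_s: float    # time required to cool in relation to thresholds
    min_servers: int         # minimum servers kept on (at least two for a cluster)
    poll_interval_s: float   # frequency of checks, typically every 20 seconds
    decision_delay_s: float  # delay before acting, to ride out spikes

config = GovernorConfig(util_lower_pct=30.0, util_upper_pct=70.0,
                        cooling_time_s=60.0, min_servers=2,
                        poll_interval_s=20.0, decision_delay_s=10.0)
```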
The data centre is initially operated on a minimum number of servers which are adapted to cater to the envisaged load. If the utilization of any server crosses the upper threshold set by the user for that server, the monitoring means in accordance with this invention alerts the governing means to switch on the remaining servers one by one till the load gets distributed and the utilization is between the thresholds. The governing means in accordance with this invention will stop acting after switching on all the servers, even if the utilization remains above the upper threshold. The monitoring means in accordance with this invention continues to monitor the utilization of the cluster. If the utilization falls below the lower threshold set by the user for that server, it alerts the governing means to start switching off servers one by one till the utilization is between the thresholds and the load is concentrated on the minimum number of servers required to service that load. The system in accordance with this invention is adapted to ensure that, at any given instance, at least two servers are in the switched-on state so that they form a cluster. It can also be set to maintain a larger minimum number of servers to preserve service levels or availability.
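The one-by-one switching policy described above can be summarized as a simple decision function. The following Python sketch is illustrative only; the function name and the threshold defaults are assumptions, not prescribed by this specification:

```python
def scale_decision(avg_util_pct, servers_on, total_servers,
                   util_lower_pct=30.0, util_upper_pct=70.0, min_servers=2):
    """Return +1 to switch one server on, -1 to switch one off, 0 for no action.

    Servers are toggled one by one, never dropping below min_servers and
    never exceeding the installed total; threshold defaults are invented.
    """
    if avg_util_pct > util_upper_pct and servers_on < total_servers:
        return +1  # distribute the load over one more server
    if avg_util_pct < util_lower_pct and servers_on > min_servers:
        return -1  # concentrate the load on one server fewer
    return 0       # utilization is between the thresholds: no action

# Example: at 85% utilization with 2 of 4 servers on, switch one more on.
assert scale_decision(85.0, servers_on=2, total_servers=4) == +1
```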
The governing means also incorporates an intelligent decision-making routine. This is done to avoid taking decisions during spikes in the cluster utilization, preventing unnecessary actions on server or cooling control. Even if the optimization system fails, the electronic controller will turn on all the servers and set the cooling to its maximum level, which effectively prevents any cooling or equipment non-availability.
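The spike-avoidance routine and the failsafe behaviour might be sketched as follows; the delay value and all callback names are hypothetical:

```python
import time

def confirmed_out_of_range(read_util, lower, upper, delay_s=10.0):
    """Re-check utilization after a pause so that a transient spike
    (e.g. a momentary voltage or load fluctuation) triggers no decision."""
    if lower <= read_util() <= upper:
        return False                             # in range: nothing to decide
    time.sleep(delay_s)                          # wait out a possible spike
    return not (lower <= read_util() <= upper)   # act only if still out of range

def failsafe(turn_all_servers_on, set_cooling_to_max):
    """On failure of the optimization system, switch every server on and
    drive the cooling to maximum, so that neither computing capacity nor
    cooling becomes unavailable."""
    turn_all_servers_on()
    set_cooling_to_max()
```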
Brief Description of the Accompanying Drawings:
The invention will now be described in relation to the accompanying drawings in which:
Figure 1 illustrates the implementation of power and cooling optimization system; and
Figure 2 illustrates a flowchart for decision making of power and cooling optimization system.
Detailed Description of the Accompanying Drawings:
Figure 1 illustrates the implementation of the power and cooling optimization system in accordance with this invention. A server cluster (CLUSTER) is cooled by a Rack Cooling Fan (RCF), and a server power Controller (C) controls the Rack Cooling Fan (RCF). A Power and Cooling Controller (PCS) houses the governing means, and monitors and governs the cooling of the server clusters (CLUSTER) in accordance with this invention.
Figure 2 illustrates a flowchart for the decision making of the power and cooling optimization system, typically resident in the governing means. The server cluster is mapped initially (10) by obtaining inputs relating to the number of servers, the number of cooling fans, the lower and upper threshold temperatures, the speed levels of the fans, the minimum cooling required, and the delay for passing on decisions to the controller. This input is stored in a storing means (20) and is continually updated at a preset frequency. A user may view (30) these stored parameters (20) and decide to turn all servers on or to set the speed of the fans to the requisite level (40). In cases where the user does not act, the controller means gets server cluster utilization data from the server cluster (50), which is polled, typically every 20 seconds. The server cluster utilization data (50) and the user action details (30) are stored in a collating means (60). User data is also polled, typically every 20 seconds.

The monitoring means (70) decides whether the server cluster utilization is within optimum limits. In the event that the server cluster utilization is within the optimum limits, it again retrieves updated information from the collating means (60). In the event that the server cluster utilization is not within the optimum limits, a delaying means (80) pauses the action for a specific delay, typically for 's' seconds. This is to authenticate the validity of the decision and to disallow errors due to voltage or power spikes. The monitoring means further checks (90) whether the utilization of the servers, typically the load on the server that it is monitoring, is still out of the threshold range. In the event that the load on the server has returned within the threshold range, the control returns to the monitoring means (70). In the event that the server cluster utilization is still not within the optimum limits, the monitoring means has to determine whether the server cluster utilization is above or below the specified limits (100) for the servers that it is monitoring. If it is above the specified limits, i.e. the load on the server is more than the specified upper threshold, the monitoring means alerts the governing means to turn ON one server after another in the server cluster (no action if all servers are ON) till the load gets distributed, to adjust the rack cooling fan speed in proportion to the number of servers that are working, as represented by reference numeral (110), and to return control to the monitoring means (70). If it is below the specified limits, i.e. the load on the server is less than the specified lower threshold, the monitoring means alerts the governing means to turn OFF one server after another in the server cluster (no action if only two servers are ON) till the load gets concentrated within the number of operational servers, to adjust the rack cooling fan speed in proportion to the number of servers that are working, as represented by reference numeral (130), and to return control to the monitoring means (70).
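Taken together, the flowchart of Figure 2 amounts to a polling loop. The sketch below is one possible Python rendering, assuming hypothetical callbacks for reading cluster utilization and for driving the servers and fans; the reference numerals in the comments correspond to the steps described above:

```python
import time

def governor_loop(get_cluster_util, set_server_count, set_fan_speed,
                  total_servers, lower=30.0, upper=70.0,
                  poll_s=20.0, delay_s=10.0, min_servers=2):
    """Illustrative rendering of the Figure 2 decision loop; all callback
    names and default values are assumptions."""
    servers_on = min_servers
    while True:
        util = get_cluster_util()             # (50) poll cluster utilization
        if lower <= util <= upper:
            time.sleep(poll_s)                # (70) within limits: keep monitoring
            continue
        time.sleep(delay_s)                   # (80) pause for 's' seconds
        util = get_cluster_util()             # (90) re-check after the delay
        if lower <= util <= upper:
            continue                          # transient spike: back to monitoring
        if util > upper and servers_on < total_servers:
            servers_on += 1                   # (110) turn one more server ON
        elif util < lower and servers_on > min_servers:
            servers_on -= 1                   # (130) turn one server OFF
        set_server_count(servers_on)
        set_fan_speed(servers_on / total_servers)  # fan speed proportional to
        time.sleep(poll_s)                         # the number of running servers
```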
Advantages of the invention:
• The power and cooling optimization system in accordance with this invention provides real-time monitoring of the server utilization in a cluster.
• The power and cooling optimization system in accordance with this invention provides a cooling system coupled with the computer equipment and hence provides proactive power savings.
• The power and cooling optimization system in accordance with this invention provides an intelligent and adaptive governing means for proactive power and cooling management.
• The power and cooling optimization system in accordance with this invention works for all major server clusters as it depends on the utilization parameters.
• The power and cooling optimization system in accordance with this invention works with all major cooling systems as the system has generic controls.
• The power and cooling optimization system in accordance with this invention necessitates no major modification to the data centre and no major technology refresh.
• The power and cooling optimization system in accordance with this invention can be developed and implemented at a very low cost.
• The power and cooling optimization system in accordance with this invention includes a monitoring system which consumes very low power as it can be installed on a single board computer.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the
preferred embodiments without departing from the principles of the invention. These and other changes in the preferred embodiment as well as other embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.