
System And Method For Providing Overload Protection For A Server In A Network

Abstract: The present disclosure relates to a system (106) and a method (500) for providing overload protection for a server (108) in a network (104). The system (106) includes a stacking module (212) configured to stack one or more incoming requests to the server (108) in a queue. A retrieving module (214) is configured to retrieve one or more health parameters of the server (108). A configuration module (216) is configured to configure a threshold model based on the retrieved one or more health parameters. The threshold model includes at least one of a soft limit, a hard limit, and a max limit. A comparing module (218) is configured to compare a queue size of the incoming requests with the threshold limits. A rerouting module (220) is configured to reroute future incoming requests to a proxy network element until the load on the server (108) is resolved. Ref. Fig. 2


Patent Information

Application #
202321060146
Filing Date
07 September 2023
Publication Number
14/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Sandeep Bisht
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Jyothi Durga Prasad Chillapalli
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Ezaj Ansari
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Ravindra Yadav
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR PROVIDING OVERLOAD PROTECTION FOR A SERVER IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present disclosure relates to communication networks, and more particularly to a system and method for providing overload protection for a server in a network.
BACKGROUND OF THE INVENTION
[0002] A communication network is subjected to a massive exchange of information over a given time frame. Multiple users send information commands to network servers, which may lead to system failure through command/request overloading, i.e., a large number of requests within a single time frame. A server attending to such a massive influx may end up freezing, or may even restart as a means of auto-recovery, resulting in loss of data. Without a mechanism to protect a server from overloading itself with requests that it is unable to process, the server may eventually shut down, freeze, or restart. In a large-scale network where each network component has a vital role to play, such an abrupt shutdown disrupts network services and may lead to cascading failures of multiple servers in the network, leading to a catastrophic outcome and poor quality of service.
[0003] Presently, there is no solution to alert the server about overloading. Moreover, if the server freezes, the user requesting the information may have to wait for an indefinite time because there is no appropriate system in place to inform the user about the error; time and resources are therefore wasted.
[0004] Therefore, it becomes necessary to implement a system and method that sets a threshold on request processing for an alert system while addressing overloading errors in the network, to prevent server shutdowns and keep the network element operational till the overload condition subsides. However, currently available solutions do not offer an optimized alert system with provision to address the overloading issue at hand.
[0005] Therefore, there arises a need for a system and method for overload protection that optimally reduces the effort and time consumed during a heavy influx of requests. In particular, there is a need for solutions that can alert both the server and the user, and reroute the influx of requests to multiple servers during peak load, while solving minor errors automatically. In other words, there is a need for a solution with overload prevention measures.
BRIEF SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a system and method for providing overload protection for a server in networks.
[0007] In one aspect of the present invention, a method for providing overload protection for a server in a network is disclosed. The method includes the step of stacking, by one or more processors, one or more incoming requests to the server in a queue. The method includes the step of retrieving, by the one or more processors, one or more health parameters of the server. The method includes the step of configuring, by the one or more processors, a threshold model based on the retrieved one or more health parameters. The threshold model includes one or more threshold limits. The method includes the step of comparing, by the one or more processors, a queue size of the one or more incoming requests with the one or more threshold limits to check if the server is in an overloaded state pertaining to the incoming requests. The method includes the step of rerouting, by the one or more processors, future incoming requests to a proxy network element until the load on the server is resolved, thereby reducing the impact of the overload on the server.
[0008] In one embodiment, the threshold limits include at least one of, a soft limit, a hard limit, and a max limit.
[0009] In one embodiment, the method further includes the step of transmitting, by the one or more processors, the one or more incoming requests to the server without generating an alert if the queue size is less than the soft limit.
[0010] In one embodiment, the method includes the steps of transmitting, by the one or more processors, the one or more incoming requests to the server and subsequently raising an alert to the server when the server is about to reach the overloaded state. The server is about to reach the overloaded state when the queue size is increasing, i.e., when the queue size is less than the hard limit and greater than the soft limit.
[0011] In one embodiment, the method includes the steps of providing, by the one or more processors, an error response to the user while rerouting the future incoming requests to the server, and raising an alert to the server pertaining to a request queue indicating that the request queue is returning to a normal operating condition. The server returns to a normal operating condition when the queue size is decreasing, i.e., when the queue size is less than the hard limit and greater than the soft limit.
[0012] In one embodiment, the method includes the step of denying, by the one or more processors, the one or more incoming requests to the server and providing an error response to the user, while engaging the proxy network element to reroute any future incoming requests and raise an alert to the server, when the queue size is more than the hard limit and less than the max limit.
[0013] In one embodiment, the method further includes the step of rejecting, by the one or more processors, the incoming requests when the queue size is more than the max limit and subsequently raising an alert to the server.
[0014] In one embodiment, the threshold model is configured based on available historic and current information of the one or more health parameters of the server.
[0015] In one embodiment, the one or more processors is configured to update the threshold model dynamically based on statistics of the one or more health parameters of the server.
[0016] In one embodiment, the step of comparing, by the one or more processors, the queue size pertaining to the incoming requests is performed subsequent to receiving a new incoming request.
[0017] In one embodiment, the one or more health parameters of the server are retrieved based on monitoring, by the one or more processors, one or more health parameters of the server within a predefined time period.
[0018] In one embodiment, the one or more processors reroute future incoming requests to the proxy network element, which includes at least a proxy server.
[0019] In one embodiment, the server is at least one of an application server.
[0020] In one embodiment, the one or more processors, by providing the hard limit and the soft limit, provide healing time to the server.
[0021] In another aspect of the invention, a system for providing overload protection for a server in a network is disclosed. The system includes a stacking module. The stacking module is configured to stack one or more incoming requests to the server in a queue. The system includes a retrieving module. The retrieving module is configured to retrieve one or more health parameters of the server. The system includes a configuration module. The configuration module is configured to configure a threshold model based on the retrieved one or more health parameters. The threshold model includes one or more threshold limits. The system includes a comparing module. The comparing module is configured to compare a queue size of the one or more incoming requests with the one or more threshold limits to check if the server is in an overloaded state pertaining to the incoming requests. The system includes a rerouting module. The rerouting module is configured to reroute future incoming requests to a proxy network element until the load on the server is resolved, thereby reducing the impact of the overload on the server.
[0022] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0024] FIG. 1 is an exemplary block diagram of an environment for providing overload protection for a server in a network, according to various embodiments of the present invention;
[0025] FIG. 2 is an exemplary block diagram of a system for providing overload protection for the server in the network, according to various embodiments of the present invention;
[0026] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2, according to various embodiments of the present invention;
[0027] FIG. 4 is a graphical illustration indicating threshold limits against performance of the server, of the system of FIG. 2, according to various embodiments of the present invention;
[0028] FIG. 5 shows a flow diagram of a method for providing overload protection for the server in the network, according to various embodiments of the present invention; and
[0029] FIG. 6 is an exemplary block diagram of a system architecture for providing overload protection for the server in the network, according to one or more embodiments of the present invention.

[0030] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0031] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0032] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0033] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0034] As per various embodiments depicted, the present invention discloses systems and methods for providing overload protection for a server in a network. The system and method perform the required action by setting threshold limits and implementing a queue stack to reroute or reject requests as per the set limits. The present system provides a mechanism to protect the server from overloading itself with requests, which may lead to freezing or shutdown of the server, and maintains the network element operational till the overload condition subsides. The system is also capable of performing required error solving, rerouting the excessive requests, and generating an error response to notify the user, so that the user has the flexibility to redirect the request to another server or opt for an alternative without wasting time.
[0035] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for providing overload protection for a server 108 in a network 104, according to various embodiments of the present invention. The environment 100 includes at least one User Equipment (UE) 102 configured to generate and transmit a request to the server 108 in the network 104. In one embodiment, the at least one UE 102 is at least one of a first UE 102a, a second UE 102b, and a third UE 102c. In one embodiment, each of the at least first UE 102a, the second UE 102b, and the third UE 102c is configured to at least transmit the request from the at least one UE 102 to avail one or more services. In one embodiment, the one or more services include, but are not limited to, accessing the server 108.
At least one UE 102a from among the first UE 102a, the second UE 102b, and the third UE 102c is communicatively connected to a system 106 via the network 104. The first UE 102a, the second UE 102b, and the third UE 102c will henceforth collectively and individually be referred to as “the UE 102”, without limiting or deviating from the scope of the present disclosure.
[0037] In one embodiment, the UE 102 includes, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a tablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
[0038] The environment 100 further includes the server 108 communicably coupled to the UE 102 via the network 104. The server 108 includes, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity operating the server may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0039] The network 104 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 104 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0040] The network 104 includes, by the way of example but not limitation, one or more wireless interfaces/protocols such as, for example, 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as CDMA, CDMA2000, WCDMA, Radio Frequency (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0041] The environment 100 further includes the system 106 communicably coupled to the server 108 and the UE 102 via the network 104. The system 106 is configured to provide the overload protection for the server 108 in the network 104. Further, the system 106 is adapted to be embedded within the server 108 or deployed as an individual entity. However, for the purpose of description, the system 106 is described as an integral part of the server 108, without deviating from the scope of the present disclosure.
[0042] Operational and construction features of the system 106 will be explained in detail with respect to the following figures.
[0043] Referring to FIG. 2, FIG. 2 illustrates an exemplary block diagram of the system 106 of FIG. 1, according to various embodiments of the present invention. As per the illustrated embodiment, the system 106 includes one or more processors 202, a memory 204, an Input/Output (I/O) user interface 206, a display unit 208, an input device 210 and a database 222. The one or more processors 202, hereinafter referred to as the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 106 includes one processor 202. However, it is to be noted that the system 106 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0044] In one embodiment, the memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like. In an embodiment, the I/O user interface 206 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. The I/O user interface 206 facilitates communication of the system 106. In one embodiment, the I/O user interface 206 provides a communication pathway for one or more components of the system 106.
[0045] The I/O user interface 206 may include functionality similar to at least a portion of the functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The I/O user interface 206 may be rendered on the display unit 208, implemented using LCD display technology, OLED display technology, and/or other types of conventional display technology. The display unit 208 is integrated within the system 106 or connected externally. Further, the system 106 may be configured to receive requests, queries, or information from the user by using the input device 210. The input device 210 may include, but is not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0046] The system 106, further comprises the database 222. The database 222 is communicably connected to the processor 202, and the memory 204. The database 222 is configured to store and retrieve the data. Further, the processor 202, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 202 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 106 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 106 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0047] In order for the system 106 to provide the overload protection for the server 108 in the network 104, the processor 202 includes a stacking module 212, a retrieving module 214, a configuration module 216, a comparing module 218, a rerouting module 220, and an updating unit 224, communicably coupled to each other.
[0048] The stacking module 212 of the processor 202 is communicably connected to the UE 102 via the network 104. Accordingly, the stacking module 212 is configured to stack the one or more incoming requests to the server 108 in the queue. In one embodiment, the server 108 includes at least one of an application server. The one or more incoming requests include, but are not limited to, the request to access the server 108 from the UE 102, such as Hypertext Transfer Protocol (HTTP) requests, database queries, file requests, service calls, and network traffic. Further, the stacking module 212 transmits the stack of the one or more incoming requests to the retrieving module 214.
[0049] Upon receiving the stack of the one or more incoming requests from the stacking module 212, the retrieving module 214 is configured to retrieve one or more health parameters of the server 108. The retrieving module 214 monitors the retrieved one or more health parameters of the server 108 within a predefined time period. In one embodiment, the predefined time period includes the time frame of the historic and the current information of the one or more health parameters of the server 108. In one embodiment, the one or more health parameters of the server 108 include the available historic and current information of, but not limited to, a request queue, stack snapshots, a memory allocation, a Central Processing Unit (CPU) consumption, a blocked time, a latency, a metric that measures the number of requests the server processes per second, and a metric that measures the average time it takes for the server to respond to a request. The available historic and current information refers to data collected over time about the server's performance and health. The available historic information includes, but is not limited to, past performance data, past incidents, and long-term trends. The current information includes, but is not limited to, real-time monitoring, immediate state data, and instantaneous events. The retrieving module 214 transmits the one or more health parameters of the application server 108 to the configuration module 216 for further processing.
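To make the retrieval step concrete, the following Java sketch shows one plausible shape for such a health snapshot and a poller that keeps a bounded window of historic samples. The record fields, class names, and sampling sources are assumptions for illustration, not the implementation disclosed in this specification.

```java
import java.lang.management.ManagementFactory;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical snapshot of the health parameters named above: queue depth,
// memory allocation, CPU consumption, latency, and requests per second.
record ServerHealth(Instant at, int queueDepth, long heapUsedBytes,
                    double cpuLoad, double avgLatencyMs, double requestsPerSec) {}

class HealthRetriever {
    private final Deque<ServerHealth> history = new ArrayDeque<>(); // historic information
    private final int windowSize;                                   // predefined window, in samples

    HealthRetriever(int windowSize) { this.windowSize = windowSize; }

    // Capture the current information; queue depth, latency, and throughput
    // would come from the stacking module's queue and request metrics.
    ServerHealth sample(int queueDepth, double avgLatencyMs, double requestsPerSec) {
        long heapUsed = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
        double cpu = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        ServerHealth s = new ServerHealth(Instant.now(), queueDepth, heapUsed,
                                          cpu, avgLatencyMs, requestsPerSec);
        history.addLast(s);
        if (history.size() > windowSize) history.removeFirst(); // keep only the window
        return s;
    }

    Deque<ServerHealth> window() { return history; }
}
```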
[0050] On receipt of the one or more health parameters, the configuration module 216 is configured to generate a threshold model based on the one or more health parameters of the server 108. The threshold model is generated based on the available historic and current information of the one or more health parameters of the server 108. In one embodiment, the threshold model includes one or more threshold limits. The one or more threshold limits include at least one of a soft limit, a hard limit, and a max limit. In alternate embodiments, there may be multiple limits as per the requirement of the service provider. In an embodiment, the soft limit is the first threshold, indicating that the server 108 is starting to experience a significant load but is still within a safe and manageable range. For example, if the server's CPU usage consistently runs below 60%, the soft limit might be set at 70%; when CPU usage reaches this limit, the system acknowledges the increase in load. The hard limit indicates that the server 108 is under significant stress and may soon become overloaded if the load continues to increase. For example, if CPU usage consistently reaches 85%, the hard limit might be set at 90%; when this limit is reached, the system could start rerouting requests to a proxy server to reduce the load on the server. The max limit is the uppermost threshold, representing the absolute maximum capacity the server can handle before it is at risk of failing or becoming unresponsive. For example, the max limit might be set at 95% CPU usage; when this limit is crossed, the system may begin rejecting new requests entirely and engage emergency protocols to protect the server from crashing.
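A minimal sketch of such a threshold model follows, expressed as queue-size bounds. The 70%/90%/100% split mirrors the CPU-usage examples above, but the derivation rule (fixed fractions of an observed safe capacity) is an assumption, not the formula disclosed here.

```java
// Hypothetical threshold model holding the soft, hard, and max limits.
record ThresholdModel(int softLimit, int hardLimit, int maxLimit) {

    // Illustrative derivation: place the limits at fixed fractions of the
    // largest queue depth the server has handled safely.
    static ThresholdModel fromObservedCapacity(int safeCapacity) {
        return new ThresholdModel(
                (int) (safeCapacity * 0.70), // soft limit: load rising but manageable
                (int) (safeCapacity * 0.90), // hard limit: overload is imminent
                safeCapacity);               // max limit: absolute capacity
    }
}
```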
[0051] In one embodiment, as the number of incoming requests increases, the load on the server 108 increases correspondingly. The load on the server 108 is classified into a normal load operating state and an overload operating state. In one embodiment, the normal load operating state covers the soft limit and the hard limit of the threshold limits. In one embodiment, the overload state corresponds to the max limit of the threshold limits.
[0052] In one embodiment, the threshold model is updated dynamically based on the statistics of the one or more health parameters of the server 108. In another embodiment, the updating unit 224 dynamically updates the threshold model based on the current information of the one or more health parameters of the server 108. Subsequent to updating, the updating unit 224 transmits the updated threshold model to the comparing module 218. In one embodiment, the current information of the one or more health parameters of the server 108 includes, but is not limited to, the request queue, the stack snapshots, the memory allocation, the CPU consumption, the blocked time, the latency, the metric that measures the number of requests the server processes per second, and the metric that measures the average time it takes for the server to respond to a request.
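One way such a dynamic update could work is sketched below, reusing the ThresholdModel record from the previous sketch: the updating unit maintains a smoothed estimate of safe capacity and re-derives the limits as statistics arrive. The exponential-smoothing rule and the latency-ratio heuristic are assumed policies for illustration only.

```java
// Hypothetical updating unit: dynamically re-derives the threshold model
// from a smoothed estimate of the server's safe capacity.
class UpdatingUnit {
    private double estimatedCapacity;
    private static final double ALPHA = 0.2; // smoothing factor, an assumed tuning value

    UpdatingUnit(int initialCapacity) { this.estimatedCapacity = initialCapacity; }

    // Called with the latest latency statistics: shrink the capacity estimate
    // when observed latency exceeds the target, grow it back as latency recovers.
    ThresholdModel update(double avgLatencyMs, double targetLatencyMs) {
        double ratio = targetLatencyMs / Math.max(avgLatencyMs, 1e-9);
        double bounded = Math.min(Math.max(ratio, 0.5), 2.0); // limit per-update swing
        estimatedCapacity = (1 - ALPHA) * estimatedCapacity
                          + ALPHA * estimatedCapacity * bounded;
        return ThresholdModel.fromObservedCapacity((int) estimatedCapacity);
    }
}
```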
[0053] On receipt, the comparing module 218 performs at least one of the following steps of comparison. The comparison of the queue size pertaining to the incoming requests is performed subsequent to receiving a new incoming request. The comparing module 218 compares the queue size of the incoming requests with the one or more threshold limits. In one embodiment, the queue size of the one or more incoming requests refers to the number of incoming requests in the queue. In one embodiment, when the queue size is less than the soft limit, the one or more incoming requests are transmitted to the server 108 without generating an alert to the server 108.
[0054] In another embodiment, when the one or more incoming requests are increasing, the comparing module 218 determines that the server 108 is about to reach the overload state. The server 108 is considered to be reaching the overloaded state when the size of the queue of incoming requests is approaching the hard limit. The one or more requests are referred to as increasing when the queue size of the one or more incoming requests is greater than the soft limit and less than the hard limit. Thereby, the comparing module 218 transmits the one or more incoming requests to the server 108 along with the alert response to the UE 102.
[0055] In one embodiment, when the one or more incoming requests are decreasing, the comparing module 218 determines that the server 108 is not in the overload state. The one or more requests are referred to as decreasing when the queue size of the one or more incoming requests is less than the hard limit and greater than the soft limit. Thereby, the comparing module 218 transmits the one or more incoming requests to the server 108 along with the alert to the server 108. The alert refers to specific notifications or warnings that the system 106 generates based on the server's current load status and the thresholds it has crossed (soft limit, hard limit, max limit). The alerts include, but are not limited to, a pre-overloaded alert, an overloaded alert, a critical overloaded alert, and a recovery alert.
[0056] In one embodiment, when the queue size of the one or more incoming requests is more than the hard limit and less than the max limit, the comparing module 218 does not transmit the one or more incoming requests to the server 108. The comparing module 218 reroutes the one or more incoming requests to the proxy server. In one embodiment, the proxy server is an alternate application server. The comparing module 218 reroutes the one or more incoming requests to the alternate application server and provides an error response to the UE 102. Thereby, the comparing module 218 is configured to raise the alert to the server 108, while engaging with the alternate application server to reroute any future incoming requests. A future incoming request refers to any new request that arrives at the server 108 after the system 106 has detected that the server 108 is approaching or has exceeded a threshold (such as the hard or max limit) and has taken action to protect the server 108 from overload.
[0057] In one embodiment, when the queue size of the one or more incoming requests is more than the max limit, the comparing module 218 does not transmit the one or more incoming requests to the server 108. The comparing module 218 raises the alert to the server 108 and transmits, to the rerouting module 220, the comparison data of the queue size of the one or more incoming requests performed subsequent to receiving the new incoming request.
[0058] On receipt of the comparison of the queue size of the incoming requests with the one or more threshold limits from the comparing module 218, the rerouting module 220 reroutes the one or more incoming requests to the alternate server. The rerouting module 220 is configured to reroute the one or more incoming requests to the alternate server until the queue size of the one or more incoming requests on the server 108 is reduced. In an embodiment, the rerouting refers to the process of redirecting incoming requests that are intended for a specific server 108 to a proxy network element such as a proxy server. The proxy network element refers to an alternative network component that temporarily takes over the handling of incoming requests when the server 108 is overloaded or unable to process them efficiently. The system 106 improves the server 108 performance through a unique approach of monitoring the number of incoming requests and creating the threshold model to alert the server 108 regarding the overload state. The system 106 also stores the incoming request references and removes the completed incoming request references, which reduces the space usage in the memory 204.
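Putting paragraphs [0053] to [0058] together, the per-request decision might be sketched as below, using the ThresholdModel record from earlier. The Decision enum, the alert hook, and the use of the previous queue size to distinguish a rising queue from a falling one are illustrative assumptions, not the disclosed implementation.

```java
// Hypothetical per-request admission decision combining the comparing and
// rerouting behaviour described for FIG. 2.
enum Decision { FORWARD, FORWARD_WITH_ALERT, REROUTE_TO_PROXY, REJECT }

class OverloadGuard {
    private final ThresholdModel model; // soft/hard/max limits
    private int previousQueueSize = 0;

    OverloadGuard(ThresholdModel model) { this.model = model; }

    // Evaluated on every new incoming request: compare the current queue
    // size against the three limits.
    Decision onNewRequest(int queueSize) {
        boolean increasing = queueSize > previousQueueSize;
        previousQueueSize = queueSize;

        if (queueSize < model.softLimit()) {
            return Decision.FORWARD;                  // normal state, no alert
        } else if (queueSize < model.hardLimit()) {
            // Between the soft and hard limits: a rising queue signals an
            // approaching overload, a falling queue signals recovery.
            raiseAlert(increasing ? "pre-overloaded" : "recovery");
            return Decision.FORWARD_WITH_ALERT;
        } else if (queueSize < model.maxLimit()) {
            raiseAlert("overloaded");                 // engage the proxy network element
            return Decision.REROUTE_TO_PROXY;
        } else {
            raiseAlert("critical overloaded");        // above the max limit
            return Decision.REJECT;
        }
    }

    private void raiseAlert(String kind) {
        System.out.println("ALERT to server: " + kind); // placeholder alert channel
    }
}
```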
[0059] Referring to FIG. 3, FIG. 3 is a schematic representation of a workflow of the system 106 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to each of the at least first UE 102a, the second UE 102b, and the third UE 102c and the system 106 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure. As mentioned earlier in FIG. 1, each of the at least first UE 102a, the second UE 102b, and the third UE 102c is configured to at least transmit the request from the at least one UE 102 to avail one or more services. The first UE 102a includes one or more primary processors 302 coupled with a memory 304 storing instructions which are executed by the one or more primary processors 302. Execution of the stored instructions by the one or more primary processors 302 enables the first UE 102a to transmit the request from each of the at least first UE 102a, the second UE 102b, and the third UE 102c.
[0060] The at least first UE 102a, the second UE 102b, and the third UE 102c may comprise a memory 304 such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory. In one implementation, the memory 304 might be configured or designed to store data. The data may pertain to the transmitted request and the access rights specifically defined for each of the at least first UE 102a, the second UE 102b, and the third UE 102c. Each of the at least first UE 102a, the second UE 102b, and the third UE 102c is accessed by the user to initiate the request to access the server 108. The at least first UE 102a, the second UE 102b, and the third UE 102c are configured to connect with the server 108 through the network 104.
[0061] For the sake of brevity, it is to be noted that similar description related to the working and operation of the system 106 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 106 in FIG. 3, should be read with the description as provided for the system 106 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0062] Referring to FIG. 4, FIG. 4 illustrates a graphical illustration indicating the threshold limits against the performance of the server 108, of the system 106 of FIG. 2, according to various embodiments of the present invention. As mentioned in FIG. 2, the system 106 is communicably coupled to the server 108 and the UE 102 via the network 104. When the UE 102 initiates the request to access the application server 108, the processor 202 of the system 106 receives the one or more incoming requests from the UE 102. Further, the stacking module 212 is configured to stack the one or more incoming requests in the queue. On receiving the stack of the one or more incoming requests from the stacking module 212, the retrieving module 214 retrieves the one or more health parameters of the server 108. Further, the configuration module 216 generates the threshold model based on the available historic and current information of the retrieved one or more health parameters of the server 108. The threshold model includes the soft limit, the hard limit, and the max limit, as shown in FIG. 4. The comparing module 218 receives the threshold model from the configuration module 216 for comparing the queue size of the one or more incoming requests subsequent to receiving the new incoming request.
[0063] The comparison module 218 compares the queue size of the incoming request with the one or more threshold limits. In one embodiment, the queue size of the one or more incoming requests refers to the number of incoming requests.
[0064] In one embodiment, for example, the threshold limits range from zero to thirty requests, as shown in Table 1. If the number of incoming requests is up to ten, the one or more requests fall within the soft limit. If the number of incoming requests is more than ten and up to twenty, the one or more requests fall within the hard limit. If the number of requests is more than twenty and up to thirty, the one or more requests fall within the max limit. This implies that the server 108 can accept a maximum of thirty requests.
[0065] For example, as per Table 1:

Queue size of the one or more incoming requests | Threshold limit | Operating state of the server
0-10  | Soft limit | Normal operating state
10-20 | Hard limit | Normal operating state
20-30 | Max limit  | Overload state

Table 1
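Applied to the Table 1 values (soft limit 10, hard limit 20, max limit 30), the OverloadGuard sketch from above would classify requests as follows; this is a usage illustration of the assumed classes, not output from the disclosed system.

```java
// Usage illustration with the Table 1 limits.
public class Table1Demo {
    public static void main(String[] args) {
        OverloadGuard guard = new OverloadGuard(new ThresholdModel(10, 20, 30));
        System.out.println(guard.onNewRequest(5));  // FORWARD: within the soft limit
        System.out.println(guard.onNewRequest(15)); // FORWARD_WITH_ALERT: rising, between soft and hard
        System.out.println(guard.onNewRequest(25)); // REROUTE_TO_PROXY: between hard and max
        System.out.println(guard.onNewRequest(35)); // REJECT: above the max limit
    }
}
```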
[0066] In one embodiment, when the one or more incoming requests are within the range of the soft limit, the one or more incoming requests are transmitted to the server 108 without generating the alert to the server 108. Further, the system 106 categorizes the server 108 as being in the normal operating state.
[0067] In another embodiment, when the one or more incoming requests are increasing, the comparing module 218 determines that the server 108 is about to reach the overload state. The one or more requests are referred to as increasing when the queue size of the one or more incoming requests is greater than the soft limit and less than the hard limit. Thereby, the comparing module 218 transmits the one or more incoming requests to the server 108 along with the alert response to the UE 102.
[0068] In one embodiment, when the one or more incoming requests are decreasing, the comparing module 218 determines that the server 108 is not in the overload state. The one or more requests are referred to as decreasing when the queue size of the one or more incoming requests is less than the hard limit and greater than the soft limit. Thereby, the comparing module 218 transmits the one or more incoming requests to the server 108 along with the alert to the server 108.
[0069] In one embodiment, when the queue size of the one or more incoming requests is more than the hard limit and less than the max limit, the comparing module 218 does not transmit the one or more incoming requests to the server 108. The comparing module 218 reroutes the one or more incoming requests to the proxy server. In one embodiment, the proxy server is the alternate application server. The comparing module 218 reroutes the one or more incoming requests to the alternate application server and provides the error response to the UE 102. Thereby, the comparing module 218 is configured to raise the alert to the server 108, while engaging with the alternate application server to reroute any future incoming requests.
[0070] In one embodiment, when the queue size of the one or more incoming requests is more than the max limit, the comparing module 218 does not transmit the one or more incoming requests to the server 108. The comparing module 218 subsequently raises the alert to the server 108. Thereby, the comparing module 218 transmits, to the rerouting module 220, the comparison data of the queue size of the one or more incoming requests performed subsequent to receiving the new incoming request.
[0071] Furthermore, the rerouting module 220 is configured to reroute the future one or more incoming requests to the alternate server until the load on the server 108 is balanced.
[0072] Referring to FIG. 5, FIG. 5 is a flow diagram of a method 500 for providing overload protection for the server 108 in the network 104, according to various embodiments of the present invention. The method 500 is adapted to provide the overload protection for the server 108 in the network 104. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 1 and should nowhere be construed as limiting the scope of the present disclosure.
[0073] At step 501, the method 500 includes the step of stacking the one or more incoming requests to the server 108 in the queue by the stacking module 212.
[0074] At step 502, the method 500 includes the step of retrieving the one or more health parameters of the server 108 by the retrieving module 214. The retrieving module 214 retrieves the one or more health parameters of the server 108 based on the monitoring of the one or more health parameters of the server 108 within the predefined time period. Further, the retrieving module 214 transmits the one or more health parameters of the server 108 to the configuration module 216 for further processing. In one embodiment, the predefined time period includes the time frame of the historic and the current information of the one or more health parameters of the server 108.
[0075] At step 503, the method 500 includes the step of configuring the threshold model by the configuration module 216. The configuration module 216 generates the threshold model based on the available historic and current information of the one or more health parameters of the server 108. The threshold model includes the threshold limits, i.e., the soft limit, the hard limit, and the max limit.
[0076] At step 504, the method 500 includes the step of comparing the queue size of the one or more incoming requests with the one or more threshold limits, to check the state of the server 108.
[0077] In one embodiment, the comparing module 218 compares the queue size of the one or more incoming requests with the one or more threshold limits.
[0078] In one embodiment, when the queue size is less than the soft limit, the one or more incoming requests are transmitted to the server 108 without generating the alert to the server 108.
[0079] In another embodiment, when the one or more incoming requests are increasing, the comparing module 218 determines that the server 108 is about to reach the overload state. The one or more requests are referred to as increasing when the queue size of the one or more incoming requests is greater than the soft limit and less than the hard limit. Thereby, the comparing module 218 transmits the one or more incoming requests to the server 108 along with the alert response to the UE 102.
[0080] In one embodiment, when the one or more incoming requests are decreasing, the comparing module 218 determines that the server 108 is not in the overload state. The one or more requests are referred to as decreasing when the queue size of the one or more incoming requests is less than the hard limit and greater than the soft limit. Thereby, the comparing module 218 transmits the one or more incoming requests to the server 108 along with the alert to the server 108.
[0081] In one embodiment, when the queue size of the one or more incoming requests is more than the hard limit and less than the max limit, the comparing module 218 does not transmit the one or more incoming requests to the server 108. The comparing module 218 reroutes the one or more incoming requests to the alternate application server and provides the error response to the UE 102. Thereby, the comparing module 218 is configured to raise the alert to the application server 108, while engaging with the alternate server to reroute any future incoming requests.
[0082] In one embodiment, when the queue size of the one or more incoming requests is more than the max limit, the comparing module 218 does not transmit the one or more incoming requests to the server 108. The comparing module 218 subsequently raises the alert to the server 108. Thereby, the comparing module 218 transmits, to the rerouting module 220, the comparison data of the queue size of the one or more incoming requests performed subsequent to receiving the new incoming request.
[0083] At step 505, the method 500 includes the step of rerouting the future one or more incoming requests to the alternate server until the one or more incoming requests on the server 108 are reduced, thereby reducing the impact of the overload on the server 108.
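A minimal sketch of the rerouting in step 505 follows, assuming the proxy network element is an alternate server reachable over HTTP; the endpoint, class name, and client wiring are illustrative assumptions, not the disclosed mechanism.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical rerouting module: forwards a request to an alternate
// (proxy) server while the primary server's queue drains.
class RerouterSketch {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI proxyEndpoint; // assumed address of the proxy network element

    RerouterSketch(URI proxyEndpoint) { this.proxyEndpoint = proxyEndpoint; }

    // Forward a GET for the given path to the alternate server and return
    // its body, so the UE is served transparently during the overload.
    String reroute(String path) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(proxyEndpoint.resolve(path)).GET().build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        return resp.body();
    }
}
```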
[0084] In an embodiment, the present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 602. The processor 602 is configured to stack one or more incoming requests to the server 108 in a queue. The processor 602 is further configured to retrieve one or more health parameters of the server 108. The processor 602 is further configured to configure a threshold model based on the retrieved one or more health parameters. The threshold model includes one or more threshold limits including at least one of a soft limit, a hard limit, and a max limit. The processor 602 is further configured to compare a queue size of the one or more incoming requests with the one or more threshold limits to check if the server 108 is in an overloaded state pertaining to the incoming requests. The processor 602 is configured to reroute the future incoming requests to the proxy network element until the load on the server 108 is resolved, thereby reducing the impact of the overload on the server 108.
[0085] FIG. 6 is an exemplary block diagram of an architecture 600 of the system 106 for providing overload protection for the server 108 in the network 104, according to one or more embodiments of the present invention.
[0086] As per the illustrated embodiment, the architecture 600 of the system 106 includes a virtual machine such as, but not limited to, a Virtual Machine (VM) 602. The VM 602 enables the UE 102 to run programs written in Java, as well as programs written in other languages that are compiled to bytecode. In the illustrated embodiment, the VM 602 is the environment in which an application 604 and a protocol stack 606 run.
[0087] In one embodiment, the application 604 is at least one of, but not limited to, a Java application which utilizes the protocol stack 606 to communicate with another node via HTTP 2.0. The application 604 includes, but is not limited to, desktop applications, web applications, mobile applications, and enterprise applications.
[0088] In an embodiment of the present invention, the protocol stack 606 is a library based on one or more programming languages which interacts with the network 104 to communicate with another node via HTTP 2.0.
[0089] The protocol stack 606 provides abstracted APIs (Application Programming Interfaces) for developers to build an application around it, with inbuilt features such as connection management, log management, transport of HTTP/2 messages, overload protection, rate limit protection, etc.
[0090] Further, the protocol stack 606 provides the overload protection for the server 108 by rerouting the future incoming requests to a proxy network element until the load on the server 108 is resolved. The overload protection of the server 108 is performed by one or more modules of a processor 202 (as shown in FIG. 2). The one or more modules of the processor include at least a rerouting module 220, which reroutes the future incoming requests to the proxy network element until the load on the server 108 is resolved. The rerouting is performed based on comparing the queue size of the one or more incoming requests with the one or more threshold limits to check if the server 108 is in the overloaded state pertaining to the incoming requests, as explained in FIG. 2.
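The specification does not disclose the stack's actual API, so the following sketch only gestures at how a dispatcher inside the protocol stack 606 might tie the earlier sketches together: every inbound request is enqueued, checked against the guard, and then served locally, rerouted, or rejected. Every identifier here is invented for illustration.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical glue inside the protocol stack: each inbound request passes
// through the OverloadGuard and is handled locally, rerouted, or rejected.
class StackDispatcherSketch {
    private final OverloadGuard guard;
    private final RerouterSketch rerouter;
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();

    StackDispatcherSketch(OverloadGuard guard, RerouterSketch rerouter) {
        this.guard = guard;
        this.rerouter = rerouter;
    }

    String handle(String path) throws Exception {
        queue.add(path);                               // stacking: enqueue the incoming request
        Decision d = guard.onNewRequest(queue.size()); // compare queue size with the limits
        queue.poll();                                  // request leaves the queue once decided
        if (d == Decision.REJECT) return "503 Service Unavailable"; // error response to the UE
        if (d == Decision.REROUTE_TO_PROXY) return rerouter.reroute(path);
        return "200 OK for " + path;                   // FORWARD and FORWARD_WITH_ALERT
    }
}
```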
[0091] Further, the system 106 architecture 600 includes a network layer 608. The network layer 608 is responsible for the actual transmission of data over the network 104.
[0092] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0093] The present disclosure incorporates a technical advancement by preventing the server from crashing, freezing, or shutting down, and maintaining its continuous operation in an optimized manner under overload conditions. The method provides an alert for the server, based on the user requests, before the server goes into the overload state, enabling the application server to take preventive measures such as up-scaling and load balancing. The server health and performance are improved by modifying or overriding the server overload condition with the threshold limit model at runtime. The system protects the server from overloading itself with one or more incoming requests, which may lead to freezing or shutdown of the server, and maintains the network element operational till the overload condition subsides.
[0094] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.


REFERENCE NUMERALS
[0095] Environment - 100;
[0096] User Equipment - 102;
[0097] Network - 104;
[0098] System - 106;
[0099] Server - 108;
[00100] Processor(s) - 202;
[00101] Memory - 204;
[00102] Input/Output (I/O) user interface - 206;
[00103] Display - 208;
[00104] Input device - 210;
[00105] Stacking module - 212;
[00106] Retrieving module - 214;
[00107] Configuration module - 216;
[00108] Comparing module - 218;
[00109] Rerouting module - 220;
[00110] Database - 222;
[00111] Updating unit - 224;
[00112] Primary processor - 302;
[00113] Memory unit of User Equipment - 304.

CLAIMS
We Claim:
1. A method (500) for providing overload protection for a server (108) in a network (104), the method (500) comprises the steps of:
stacking, by one or more processors (202), one or more incoming requests to the server (108) in a queue;
retrieving, by the one or more processors (202), one or more health parameters of the server (108);
configuring, by the one or more processors (202), a threshold model based on the retrieved one or more health parameters, wherein the threshold model includes one or more threshold limits;
comparing, by the one or more processors (202), a queue size of the one or more incoming requests with the one or more threshold limits to check if the server (108) is in an overloaded state pertaining to the incoming requests; and
rerouting, by the one or more processors (202), future incoming requests to a proxy network element until the load on the server (108) is resolved, thereby reducing the impact of the overload on the server (108).

2. The method (500) as claimed in claim 1, wherein the threshold limits include at least one of, a soft limit, a hard limit, and a max limit.

3. The method (500) as claimed in claim 1, wherein the method (500) further comprises at least one of the steps of:
transmitting, by the one or more processors (202), the one or more incoming requests to the server (108) without generating an alert if the queue size is less than a soft limit;
transmitting, by the one or more processors (202), the one or more incoming requests to the server (108) and subsequently raising an alert to the server (108) when about to reach the overloaded state, when the queue size is less than a hard limit and greater than the soft limit, wherein the queue size is increasing;
providing, by the one or more processors (202), an error response to the user while rerouting the future incoming requests to the server (108) and raising an alert to the server (108) pertaining to a request queue indicating the request queue is returning to a normal operating condition, when the queue size is less than the hard limit and greater than the soft limit, wherein the queue size is decreasing;
denying, by the one or more processors (202), the one or more incoming requests to the server (108) and providing an error response to the user, while engaging the proxy network element to reroute any future incoming requests and raise an alert to the server (108), when the queue size is more than the hard limit and less than the max limit; and
rejecting, by the one or more processors (202), the incoming requests when the queue size is more than the max limit and subsequently raising an alert to the server (108).

4. The method (500) as claimed in claim 1, wherein the threshold model is configured based on available historic and current information of the one or more health parameters of the server (108).

5. The method (500) as claimed in claim 1, wherein the one or more processors (202) is configured to update the threshold model dynamically based on statistics of the one or more health parameters of the server (108).

6. The method (500) as claimed in claim 1, wherein the step of comparing, by the one or more processors (202), the queue size pertaining to the incoming requests is performed subsequent to receiving a new incoming request.

7. The method (500) as claimed in claim 1, wherein the one or more health parameters of the server (108) are retrieved based on monitoring, by the one or more processors (202), one or more health parameters of the server (108) within a predefined time period.

8. The method (500) as claimed in claim 1, wherein the one or more processors (202) reroutes future incoming requests to a proxy network element including at least one of, a proxy server.

9. The method (500) as claimed in claim 1, wherein the server (108) is at least one of, an application server.

10. The method (500) as claimed in claim 1, wherein the one or more processors (202), by providing the hard limit and the soft limit, provides healing time to the server (108).

11. A system (106) for providing overload protection for a server (108) in a network (104), the system (106) comprising:
a stacking module (212) configured to, stack, one or more incoming requests to the server (108) in a queue;
a retrieving module (214) configured to, retrieve, one or more health parameters of the server (108);
a configuration module (216) configured to, configure, a threshold model based on the retrieved one or more health parameters, wherein the threshold model includes one or more threshold limits;
a comparing module (218) configured to, compare, a queue size of the one or more incoming requests with the one or more threshold limits to check if the server (108) is in an overloaded state pertaining to the incoming requests; and
a rerouting module (220) configured to, reroute, future incoming requests to a proxy network element until the load on the server (108) is resolved, thereby reducing the impact of the overload on the server (108).

12. The system (106) as claimed in claim 11, wherein the threshold limits include at least one of, a soft limit, a hard limit, and a max limit.

13. The system (106) as claimed in claim 12, wherein the hard limit and the soft limit provide healing time to the server (108).

14. The system (106) as claimed in claim 11, wherein an operations manager of the system (106) is further configured to perform at least one of the steps of:
transmitting the one or more incoming requests to the server (108) without generating an alert if the queue size is less than the soft limit;
transmitting, the one or more incoming requests to the server (108) and subsequently raising an alert to the server (108) when about to reach the overloaded state, when the queue size is less than a hard limit and greater than the soft limit, wherein the queue size is increasing;
providing, an error response to the user while rerouting the future incoming requests to the server (108) and raising an alert to the server (108) pertaining to a request queue indicating the request queue is returning to a normal operating condition, when the queue size is less than the hard limit and greater than the soft limit, wherein the queue size is decreasing;
denying, the one or more incoming requests to the server (108) and providing an error response to the user, while engaging the proxy network element to reroute any future incoming requests and raise an alert to the server (108), when the queue size is more than the hard limit and less than the max limit; and
rejecting, the incoming requests when the queue size is more than the max limit and subsequently raising an alert to the server (108).

15. The system (106) as claimed in claim 11, wherein the threshold model is configured based on available historic and current information of the one or more health parameters of the server (108).

16. The system (106) as claimed in claim 11, wherein an updating unit is configured to update the threshold model dynamically based on statistics of the one or more health parameters of the server (108).

17. The system (106) as claimed in claim 11, wherein the comparing module (218) compares, the queue size pertaining to the incoming requests subsequent to receiving a new incoming request.

18. The system (106) as claimed in claim 11, wherein the one or more health parameters of the server (108) are retrieved based on monitoring, one or more health parameters of the server (108) within a predefined time period.

19. The system (106) as claimed in claim 11, wherein the rerouting module (220) reroutes future incoming requests to a proxy network element including at least one of, a proxy server.

20. The system (106) as claimed in claim 11, wherein the server (108) is at least one of, an application server (108).

21. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (302) causes the UE to:
transmit, one or more incoming requests to a server (108) in order to avail one or more services; and
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321060146-STATEMENT OF UNDERTAKING (FORM 3) [07-09-2023(online)].pdf 2023-09-07
2 202321060146-PROVISIONAL SPECIFICATION [07-09-2023(online)].pdf 2023-09-07
3 202321060146-FORM 1 [07-09-2023(online)].pdf 2023-09-07
4 202321060146-FIGURE OF ABSTRACT [07-09-2023(online)].pdf 2023-09-07
5 202321060146-DRAWINGS [07-09-2023(online)].pdf 2023-09-07
6 202321060146-DECLARATION OF INVENTORSHIP (FORM 5) [07-09-2023(online)].pdf 2023-09-07
7 202321060146-FORM-26 [17-10-2023(online)].pdf 2023-10-17
8 202321060146-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321060146-DRAWING [02-09-2024(online)].pdf 2024-09-02
10 202321060146-COMPLETE SPECIFICATION [02-09-2024(online)].pdf 2024-09-02
11 Abstract 1.jpg 2024-09-24
12 202321060146-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321060146-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321060146-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321060146-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321060146-FORM 3 [29-01-2025(online)].pdf 2025-01-29
17 202321060146-FORM 18 [20-03-2025(online)].pdf 2025-03-20