Abstract: METHOD AND SYSTEM FOR MANAGING SERVICE REQUESTS IN A NETWORK. The present disclosure envisages a method (400) and a system (108) for managing a service request in a network (106). The method comprises receiving at least one service request for at least one application. The method comprises identifying, by a processing engine (208), at least one throttling rule configured for the at least one received service request. The at least one throttling rule comprises at least one of a request count, a predetermined threshold value, and a preconfigured time unit value. Based on identifying, incrementing, by the processing engine (208), the request count for the at least one received service request. The method comprises determining, by the processing engine (208), if the at least one received service request fulfills at least one predefined criterion based on the at least one identified configured throttling rule. Based on determining, processing, by the processing engine (208), the at least one received service request. Ref. Fig. 4
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR MANAGING SERVICE REQUESTS IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure generally relates to the field of wireless communication systems. More particularly, the present disclosure relates to a system and a method for managing a service request in a network.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression ‘Service API request’ used hereinafter in the specification refers to an application programming interface (API) call made to a server by an external client or a system, requesting the execution of a specific service or function related to one or more applications hosted on the server.
[0005] The expression ‘Throttling rule’ used hereinafter in the specification refers to a set of predefined parameters or conditions used to control and limit the rate at which service API requests are processed by the server. The throttling rule typically includes a request count, a predetermined threshold value, and a preconfigured time unit value over which the limits are applied.
[0006] The expression ‘Threshold value’ used hereinafter in the specification refers to a pre-configured numerical value representing the maximum allowable number of service API requests that can be processed within a specified time unit before throttling mechanisms are activated to limit further requests.
[0007] The expression ‘Time unit value’ used hereinafter in the specification refers to a specified duration (e.g., seconds, minutes, or hours) over which the service API requests are monitored and compared against the corresponding threshold value to determine if throttling should occur.
[0008] The expression ‘Request count’ used hereinafter in the specification refers to a counter that tracks the number of service API requests the server receives for a specific application within a defined time unit. This count is compared against the predetermined threshold value to assess whether throttling needs to be applied.
[0009] The expression ‘Delta time’ used hereinafter in the specification refers to the calculated time difference between a current time associated with the received service request and a predefined last refresh time interval. This time difference is used to evaluate whether the requests fall within the throttling limits.
[0010] The expression ‘Subsequent service API request’ used hereinafter in the specification refers to a service API request received by the server after an initial request has been processed, which is subject to evaluation under the throttling rules.
[0011] The expression ‘flag’ used hereinafter in the specification refers to an indicator that denotes whether further incoming service requests should be blocked or allowed.
[0012] The expression ‘Startup time’ used hereinafter in the specification refers to the point in time when an application or server is initialized, during which throttling rules and thresholds may be defined or configured to manage subsequent service API requests.
[0013] These definitions are in addition to those expressed in the art.
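For illustration only, the defined terms map onto a small data record plus one derived quantity. The following Python sketch is a non-limiting aid to the reader; the names ThrottlingRule and delta_time are hypothetical and simply mirror the definitions above.

import time
from dataclasses import dataclass, field

@dataclass
class ThrottlingRule:
    """Illustrative record mirroring the defined terms (hypothetical names)."""
    threshold: int          # 'Threshold value': maximum requests per time unit
    time_unit_s: float      # 'Time unit value', expressed here in seconds
    request_count: int = 0  # 'Request count' within the current time unit
    last_refresh: float = field(default_factory=time.monotonic)  # last refresh time

def delta_time(rule: ThrottlingRule, now: float) -> float:
    """'Delta time': current time minus the predefined last refresh time."""
    return now - rule.last_refresh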
BACKGROUND
[0014] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0015] Wireless communication technology has rapidly evolved over the past few decades. The first generation of wireless communication technology was analog technology that offered only voice services. Further, when the second-generation (2G) technology was introduced, text messaging and data services became possible. The 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, improved network coverage, and security. Currently, the fifth-generation (5G) technology is being deployed, with even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. These advancements represent a significant leap forward from previous generations, enabling enhanced mobile broadband, improved Internet of Things (IoT) connectivity, and more efficient use of network resources. The sixth-generation (6G) technology promises to build upon these advancements, pushing the boundaries of wireless communication even further. While the 5G technology is still being rolled out globally, research and development into 6G is rapidly progressing, with the aim of revolutionizing the way people connect and interact with technology.
[0016] As wireless technologies are advancing, there is a need to cope with the 5G requirements and deliver a high level of network services to the users. Further, network service providers face the continual challenge of responding to users' demands for reliable, secure, and fast network services. Satisfying the customers' demands is imperative to maintaining a competitive edge in an intensely competitive market. The vast user base has heightened service providers' and their customers' susceptibility to security threats. In the past, network security responsibilities have largely rested with the end users. However, service providers have recognized the commercial viability of offering security services. Security attacks and breaches impose a heavy cost on the service providers and their customers.
[0017] In distributed systems, microservices are commonly employed to handle specific tasks within larger applications. The microservices often interact with various client applications through service Application Programming Interfaces (APIs), which allow external systems to request services or data. However, managing the rate of incoming service API requests is a critical challenge in ensuring the stability and reliability of the microservices. One major issue arises when a microservice becomes overwhelmed with a large volume of requests. In such scenarios, excessive requests can lead to increased response times or, in severe cases, total service failures. As the microservice struggles to process the influx of requests, this can degrade the overall performance of the system, reducing the effectiveness of the microservice and negatively impacting user experience. Furthermore, handling numerous simultaneous requests can potentially consume excessive server resources. Critical resources such as CPU, memory, and network bandwidth can be drained rapidly, leading to performance bottlenecks and rendering the microservice inefficient. Unchecked resource consumption can also compromise the ability of the service to handle other critical tasks or serve additional users, limiting scalability. In addition to performance degradation, distributed systems are vulnerable to malicious actors. These attackers may launch Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks, flooding the microservice with overwhelming requests. The DDoS attacks can disrupt the service, rendering it inaccessible to legitimate users and causing serious service outages, particularly in mission-critical applications. As a result, the overall system experience becomes inconsistent, with some users facing degraded quality while others consume disproportionate amounts of service resources.
[0018] Considering these challenges, managing the rate of incoming API requests is crucial for maintaining system stability. By limiting the number of requests that can be processed within a defined time period, organizations can ensure that no single client or group of clients overwhelms the system. This management helps protect server resources and prevents service disruptions, ensuring the service remains accessible to all users. Such measures are essential for maintaining an equitable distribution of service capacity and ensuring that excessive requests or potential malicious attacks do not negatively impact legitimate traffic.
[0019] Therefore, there is a need to overcome the disadvantages of the prior art.
OBJECTIVES
[0020] Some of the objectives of the present disclosure, which are addressed by at least one embodiment described herein, are as follows:
[0021] An objective of the present disclosure is to provide a configuration system and a method for managing a rate of service Application Programming Interface (API) requests by a server.
[0022] Another objective of the present disclosure is to provide a configuration system and a method that ensures availability and fair resource allocation in a network.
[0023] Another objective of the present disclosure is to provide a configuration system that improves the performance of a network by ensuring the stability of the various applications running on the network.
[0024] Another objective of the present disclosure is to provide a configuration system that detects and defends against various types of attacks in a network.
[0025] Additional objectives and advantages of the present disclosure will be more fully understood from the following detailed description, which is intended to illustrate and not to limit the scope of the present disclosure.
SUMMARY
[0026] In an exemplary embodiment, a method for managing a service request in a network is described. The method includes receiving, by a receiving unit, at least one service request for at least one application. The method includes identifying, by a processing engine, at least one throttling rule configured for the at least one received service request. The at least one identified configured throttling rule comprises at least one of a request count, a predetermined threshold value, and a preconfigured time unit value. Based on identifying, incrementing, by the processing engine, the request count for the at least one received service request. The method includes determining, by the processing engine, if the at least one received service request fulfills at least one predefined criterion based on the at least one identified configured throttling rule. Based on determining, processing, by the processing engine, the at least one received service request.
[0027] In an embodiment, the determining comprises calculating a delta time between a current time associated with the at least one received service request and a predefined last refresh time interval.
[0028] In an embodiment, the at least one predefined criterion includes determining whether the calculated delta time is less than the preconfigured time unit value and, based on determining, checking if the request count is less than the predetermined threshold value.
[0029] In an embodiment, the predefined last refresh time interval is updated based on the preconfigured time unit value.
[0030] In an embodiment, the request count and the predefined last refresh time interval are reset when the calculated delta time is greater than the preconfigured time unit value.
[0031] Another exemplary embodiment of the present disclosure describes a system for managing a service request in a network. The system comprises a receiving unit configured to receive at least one service request for at least one application. The system comprises a memory and a processing engine coupled with the receiving unit to receive the at least one service request and further coupled with the memory to execute a set of instructions stored in the memory. The processing engine is configured to identify at least one throttling rule configured for the at least one received service request. The at least one identified configured throttling rule comprises at least one of a request count, a predetermined threshold value, and a preconfigured time unit value. Based on identifying, the processing engine is configured to increment the request count for the at least one received service request. The processing engine is configured to determine if the at least one received service request fulfills at least one predefined criterion based on the at least one identified configured throttling rule. Based on determining, the processing engine is configured to process the at least one received service request.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
[0032] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0033] FIG. 1 illustrates an exemplary network architecture for managing a service request in a network, in accordance with an embodiment of the present disclosure.
[0034] FIG. 2A illustrates an exemplary block diagram for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0035] FIG. 2B illustrates a system architecture of the system for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0036] FIG. 3 illustrates an exemplary flow diagram illustrating a method for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0037] FIG. 4 illustrates another exemplary flow diagram illustrating the method for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0038] FIG. 5 illustrates an example computer system in which or with which the embodiments of the present disclosure may be implemented.
[0039] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Network Architecture
102-1, 102-2…102-N – Plurality of Users
104-1, 104-2…104-N – Plurality of User Equipments (UEs)
106 – Network
108 – System
200A – Block diagram
202 - Processors
204 - Memory
206 - Interface(s)
208 - Processing engine
209 - Receiving Unit
210 - Database
200B – System Architecture
212-1, 212-2, 212-3 - A Plurality of clients
216 - Event Routing Manager (ERM) Application Programming Interface (API) throttling module
300 - Flow Diagram
400 - Flow Diagram
500– Computer System
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
DETAILED DESCRIPTION
[0040] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0041] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0042] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0043] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0044] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0045] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0046] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0047] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment, as well as other embodiments of the disclosure, will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0048] As wireless technologies are advancing, there is a need to cope with the 5G requirements and deliver a high level of network service to the customers. Further, the network service providers are facing the continual challenge of responding to customers' demands for reliable, secure, and fast network services. Satisfying the customers' demands is imperative to maintaining a competitive edge in an intensely competitive market. In conventional techniques, it is difficult to detect the malicious actors behind distributed denial of service (DDoS) attacks and brute force attacks. Further, even if such an attack is detected, it is difficult to defend against these attacks appropriately and efficiently. Further, the conventional techniques fail to ensure fair resource allocation to a particular microservice provided by the distributed system.
[0049] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by disclosing a method for service Application Programming Interface (API) request rate capping. The method may protect the network against attacks and abuse. The method may provide an effective measure to defend against DDoS attacks and brute-force attacks. The method may help ensure the stability and availability of various applications on the server of the network. The method may further ensure fair resource allocation for a particular microservice provided by the distributed system.
[0050] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0051] FIG. 1 illustrates an exemplary network architecture (100) of a system (108) for managing a service request in a network (106), in accordance with an embodiment of the present disclosure.
[0052] Referring to FIG. 1, an exemplary network architecture for managing the service request in the network is illustrated. The network architecture (100) may include one or more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more user equipments (UEs) (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipments (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipments (104) are depicted in FIG. 1, any number of user equipments (104) may be included without departing from the scope of the ongoing description.
[0053] In an embodiment, the user equipment (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a global positioning system (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, where the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity, such as a touchpad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used.
[0054] In an embodiment, the user equipment (104) may include smart devices operating in a smart environment, for example, an internet of things (IoT) system. In such an embodiment, the user equipment (104) may include but is not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0055] Referring to FIG. 1, the user equipment (104) may communicate with a system (108) through a network (106). In an embodiment, the network (106) may include at least one of a Fifth Generation (5G) network, a Sixth Generation (6G) network, or the like. The network (106) may enable the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like. In an embodiment, each of the UEs (104) may have a unique identifier attribute associated therewith. In an embodiment, the unique identifier attribute may be indicative of a Mobile Station International Subscriber Directory Number (MSISDN), an International Mobile Equipment Identity (IMEI) number, an International Mobile Subscriber Identity (IMSI), a Subscription Permanent Identifier (SUPI), and the like.
[0056] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0057] FIG. 2A illustrates a block diagram (200A) of the system (108) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure.
[0058] In an aspect, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0059] Referring to FIG. 2A, the system (108) may include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication to/from the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include but are not limited to, processing unit/engine(s) (208) and a database (210).
[0060] In an embodiment, the processing unit/engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) (208) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry. In an embodiment, the database (210) includes data that may be either stored or generated because of functionalities implemented by any of the components of the processor (202) or the processing engine (208).
[0061] The receiving unit (209) is configured to receive a service request, which may be referred to as a service Application Programming Interface (API) request, for at least one application running on a server. The receiving unit (209) is configured to receive a service request from a plurality of clients or other systems such as web applications, mobile applications, etc. The service request may be sent to the server to invoke specific functions or services related to the applications hosted on the server. The receiving unit (209) is the initial point of contact for all incoming service requests, ensuring they are directed to the appropriate application. The receiving unit (209) pre-processes the requests to validate their conformity to the required API specifications and interfaces with the application to forward the service request for further processing. The receiving unit (209) ensures proper routing and efficient processing within the server system by managing the intake and initial handling of the service request.
[0062] In an embodiment, the memory (204) is configured to store various data and instructions essential for the operation of the server and its applications. This includes the configuration data for the throttling rules, such as predefined thresholds and time unit values (e.g., seconds, minutes, hours) that are utilized to manage the rate of service API requests. The memory (204) also maintains the mapping of these throttling rules to their respective limits, which can be established either at startup or dynamically during runtime.
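As a non-limiting sketch of such a mapping, the throttling rules could be held per application and materialized at startup; the application names and limit values below are hypothetical and merely stand in for configuration data held in the memory (204).

import time

# Hypothetical per-application throttling configuration; values are
# illustrative and would normally be loaded from the stored configuration.
THROTTLE_CONFIG = {
    "billing-api": {"threshold": 1_000, "time_unit_s": 1.0},    # 1,000 requests/second
    "search-api":  {"threshold": 10_000, "time_unit_s": 60.0},  # 10,000 requests/minute
}

def load_rules() -> dict:
    """Build mutable runtime rule state (count plus window start) from the config."""
    return {
        app: {**cfg, "request_count": 0, "last_refresh": time.monotonic()}
        for app, cfg in THROTTLE_CONFIG.items()
    }

rules = load_rules()  # established at startup; may also be adjusted at runtime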
[0063] The processing engine (208) is configured to couple with the receiving unit (209) and the memory (204). The processing engine (208) is configured to receive data and instructions essential for the operation of the server and its applications from the memory (204). The processing engine (208) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the processing engine (208) may be configured to fetch and execute computer-readable instructions stored in the memory (204).
[0064] In an embodiment, the processing engine (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine (208) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine (208) may be implemented by electronic circuitry.
[0065] In an embodiment, the processing engine (208) is configured to receive at least one service request directed to at least one application running on a server. The processing engine (208) may identify and capture incoming service requests from various clients or systems, ensuring that the application can process multiple requests efficiently while tracking the details of each service API request for analysis.
[0066] In an embodiment, the processing engine (208) is configured to identify at least one throttling rule that is configured for the at least one application. The processing engine (208) may query the database (210) or configuration file where the throttling rules are stored. The database (210) may contain information about each application, its associated throttling rules, and the corresponding parameters (request count limit, threshold value, time unit value). The at least one identified configured throttling rule includes at least one of a request count, at least one predetermined threshold value, and at least one preconfigured time unit value. The at least one identified configured throttling rule defines the limits for processing service API requests. The at least one configured throttling rule specifies the maximum number of service requests allowed within a specific time period (request count limit and threshold value) and the duration of that time period (time unit value). The parameters work together to control the rate at which requests are accepted and processed. The processing engine (208) assesses the predefined rules that govern how many service requests can be processed within a certain time frame, ensuring that the application does not exceed its operational limits and maintains stability.
[0067] In an embodiment, the processing engine (208) is configured to process the at least one received service request by incrementing the request count and recording the time of receipt of the at least one received service request. When a new service request is received, the processing engine (208) increments the request count for the corresponding application. This keeps track of the total number of requests within the current time window. The processing engine (208) tracks the number of received service requests and records the time of receipt of the at least one received service request to manage subsequent requests and enforce throttling rules effectively.
[0068] In an embodiment, the processing engine (208) is configured to determine if the at least one received service request fulfills at least one predefined criterion based on the at least one identified configured throttling rule. In an aspect, the processing engine (208) calculates the delta time, which is the difference between the current time and the predefined last refresh time interval. The calculated delta time indicates whether the at least one received service request falls within the limits of the at least one identified configured throttling rule.
[0069] In an embodiment, if the calculated delta time is less than the preconfigured time unit value, the at least one received service request is considered to be within the allowed time frame. Additionally, the request count is compared against the predetermined threshold value. If the request count is less than the predetermined threshold value, it indicates that the number of service requests within the current window is still below the limit. Only when both conditions are met (delta time within the time unit and the request count below the threshold) is the service request accepted and processed. Otherwise, the request is rejected to prevent overloading the system (108).
[0070] In an embodiment, if both conditions are met, i.e., the calculated delta time is less than the preconfigured time unit value and the request count is below the predetermined threshold value, the processing engine (208) proceeds to process the at least one received service request. This indicates that the at least one received service request is within the allowed limits and can be executed. However, if either condition is not met, the request is rejected. For instance, if the request count exceeds the threshold or if the delta time falls outside the specified time window, the at least one received service request is deemed excessive and is denied. To prevent further overloading, a flag may be set to temporarily block additional requests from the same client or application for a predetermined period. This helps ensure fair resource allocation and prevents system instability.
[0071] In an embodiment, the processing engine (208) is configured to reset the request count and the last refresh time value when it is determined that the calculated delta time is more than the preconfigured time unit value. After the time window defined by the at least one identified configured throttling rule expires, the processing engine (208) resets the request count and the last refresh time value, allowing new requests to be processed under the refreshed conditions.
[0072] In an embodiment, the processing engine (208) is configured to reject the at least one received service request when it is determined that the calculated delta time is less than the preconfigured time unit value and the request count is greater than the predetermined threshold value. If a subsequent request arrives within the allowed time frame but exceeds the request limit, the processing engine (208) rejects it, preventing potential overload and maintaining system performance.
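Paragraphs [0065] to [0072] together describe a fixed-window admission check. The following Python sketch is one illustrative reading of those steps, assuming monotonic timestamps and in-process state; it is not the actual implementation, and the class and method names are hypothetical.

import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FixedWindowThrottle:
    """Minimal sketch of the admission logic described above (illustrative only)."""
    threshold: int      # predetermined threshold value
    time_unit_s: float  # preconfigured time unit value, in seconds
    request_count: int = 0
    last_refresh: float = field(default_factory=time.monotonic)
    blocked: bool = False  # 'flag' set to temporarily block further requests

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        delta = now - self.last_refresh        # delta time
        if delta > self.time_unit_s:           # window expired: reset the
            self.request_count = 0             # request count and the last
            self.last_refresh = now            # refresh time, clear the flag
            self.blocked = False
        self.request_count += 1                # increment on receipt
        if self.request_count <= self.threshold:
            return True                        # within limits: process
        self.blocked = True                    # over threshold: set the flag
        return False                           # and reject the request

# Usage: accept at most 3 requests per second from one client.
throttle = FixedWindowThrottle(threshold=3, time_unit_s=1.0)
print([throttle.allow() for _ in range(5)])    # [True, True, True, False, False]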
[0073] In an embodiment, the processing engine (208) may be configured to define the predetermined threshold value for the at least one identified configured throttling rule during the startup time or the runtime of the at least one application. This flexibility allows the system (108) to adapt to varying operational needs by setting initial limits at startup or adjusting them dynamically during runtime, ensuring optimal performance and resource management.
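For illustration, defining the threshold at startup and adjusting it during runtime could look like the following; set_threshold is a hypothetical helper, not a disclosed interface.

# Hypothetical helper for adjusting a rule's threshold while the application runs.
def set_threshold(rule: dict, new_threshold: int) -> None:
    """Update the predetermined threshold value for an existing rule."""
    if new_threshold <= 0:
        raise ValueError("threshold must be a positive request count")
    rule["threshold"] = new_threshold

rule = {"threshold": 1_000, "time_unit_s": 1.0}  # defined at startup time
set_threshold(rule, 5_000)                       # adjusted during runtime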
[0074] In an embodiment, the database (210) may store data generated due to functionalities implemented by any of the components of the configuration system (108). In an embodiment, the database (210) may include, but is not limited to, a relational database, a distributed database, a cloud-based database, or the like. In an exemplary embodiment, the processing engine (208) may include one or more units having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as a wireless communication unit for remote operation and the like.
[0075] FIG. 2B illustrates another exemplary system architecture (200B) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure. The system architecture (200B) may include a plurality of clients (212-1, 212-2, 212-3).
[0076] In an aspect, at step (214), the plurality of clients (212-1, 212-2, 212-3) may transmit a service request to an Event Routing Manager (ERM) Application Programming Interface (API) throttling module (216).
[0077] In an aspect, at (216), the ERM API throttling module may receive the service request for at least one application running at the server from the plurality of clients (212-1, 212-2, 212-3). In an aspect, the at least one application may be configured to perform a mapping of a throttling rule. In an aspect, the throttling rule may include at least one condition for accepting or rejecting the at least one received service request. In an aspect, a threshold value for the throttling rule may be defined at startup time or in the runtime of the at least one application. In an aspect, the throttling rule is configured with at least one of a request count, a predetermined threshold value, and a preconfigured time unit value.
[0078] In an aspect, for the at least one received service request, the API rate limit method of the present disclosure is called to check whether the threshold value for the specific identified configured throttling rule is crossed or not.
[0079] In an aspect, at (218), the at least one received service request is accepted (processed) when the number of received service requests on the at least one application is less than a predetermined threshold value. In an aspect, the request count is incremented after accepting the at least one received service request. In an aspect, the at least one received service request is rejected when the number of received requests on the at least one application is greater than the predetermined threshold value. In an aspect, a flag to block further service API requests for the at least one application running at the server is set after rejecting the at least one received service API request. In an aspect, when the time unit value configured by the at least one application reaches the configured value, the request count and the last refresh time are reset. In an aspect, the API rate limit method, as disclosed by the present disclosure, can be used to configure multiple API throttling rules (thresholds) within a single application with different time unit values (seconds/minutes/hours).
[0080] FIG. 3 illustrates an exemplary flow diagram illustrating a method (300) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure.
[0081] In an aspect, at step (302), it is determined whether service request throttling is enabled for the service request that is received at a server.
[0082] In an aspect, at step (304), when it is determined that throttling is enabled for the at least one received service request, at least one throttling rule for service API request throttling is checked. The throttling rule may comprise at least one of a request count (the number of requests processed), a predetermined threshold value (the maximum number of allowed requests), and a preconfigured time unit value (such as per second, minute, or hour). For instance, multiple throttling rules can be configured within the system (108). The at least one received service request may originate from various servers, and capping may be necessary per server. Public clients may send requests over a Hypertext Transfer Protocol Secure (HTTPS) connection, while the plurality of clients (212-1, 212-2, 212-3) may utilize standard Hypertext Transfer Protocol (HTTP) communication. Given that HTTPS typically requires more processing time than HTTP, a rule can be established whereby 3,000 requests from the HTTPS clients are accepted, while 5,000 requests from the HTTP clients (212-1, 212-2, 212-3) are allowed. This configuration allows for the application of two distinct rules within the system (108), accommodating the varying performance levels of different request types.
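The two-rule example above might be configured as follows; the one-second window and the dictionary layout are assumptions made purely for illustration.

# Two distinct rules within one system: HTTPS requests cost more to
# process, so they receive the lower cap (3,000 versus 5,000 per window).
RULES = {
    "https": {"threshold": 3_000, "time_unit_s": 1.0},  # public HTTPS clients
    "http":  {"threshold": 5_000, "time_unit_s": 1.0},  # HTTP clients
}

def rule_for(transport: str) -> dict:
    """Select the throttling rule matching the client's transport."""
    return RULES[transport]

assert rule_for("https")["threshold"] == 3_000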
[0083] In an aspect, at step (306), when it is determined that throttling is not enabled for the at least one received service request, the server processes the at least one received service request.
[0084] In an aspect, at step (308), it is determined whether a throttling rule is configured for the at least one received service request.
[0085] In an aspect, at step (310), when it is determined that a throttling rule is configured for the at least one received service request, the request count is incremented. For example, one rule allowing 1,000 requests per second is configured. Once the first request arrives, the request counter is incremented to trace how many requests are coming in at the user end.
[0086] In an aspect, at step (312), when it is determined that no throttling rule is configured for the at least one received service request, the request is rejected. For instance, if the ERM API throttling module is set to handle 5,000 requests per second, the rule stipulates that only 5,000 requests will be accepted within each second. Any requests exceeding this limit during a defined time interval will be rejected.
[0087] In an aspect, at step (314), a delta time is calculated for the at least one received service request. The system (108) dynamically manages API traffic by calculating delta time using the formula: Delta Time = Current Time - Last Refresh Interval.
[0088] In an aspect, at step (316), the method determines whether the calculated delta time exceeds the configured time limit for the at least one identified configured throttling rule. The delta time is essentially the difference between the current time and the last refresh interval when the instance of the API request throttling class is created. During instantiation, the user (102) sets variables such as the time unit (second, minute, hour), threshold, and last refresh time. For instance, if the threshold is set to 10,000 requests per minute, and the time unit is 1 minute, the system begins tracking API requests accordingly. For example, if the system (108) starts at 15:40:00, and the first request comes in at 15:40:20, the delta time is the difference between 15:40:20 and 15:40:00, which is 20 seconds. The system (108) checks whether this delta time is within the configured time unit (in this case, 1 minute).
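The worked example above reduces to simple clock arithmetic; the sketch below reproduces it with timestamps expressed in seconds since midnight, chosen only for illustration.

# Worked example: the window opens at 15:40:00, the first request at 15:40:20.
last_refresh = 15 * 3600 + 40 * 60 + 0   # 15:40:00 in seconds since midnight
current_time = 15 * 3600 + 40 * 60 + 20  # 15:40:20 in seconds since midnight
delta = current_time - last_refresh      # Delta Time = Current Time - Last Refresh
time_unit = 60                           # configured time unit: 1 minute
print(delta, delta < time_unit)          # 20 True -> within the window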
[0089] In an aspect, at step (318), when it is determined that the calculated delta time is more than the time limit value that is configured for the rule, the request count and the last refresh time are reset, effectively starting a new throttling period. For example, if the request count is exactly 5,000 in one second and only 1,000 in the following second, the counter is reset for the next time interval (the following second). The system will then check again whether the threshold value has been reached. In a scenario where request number 1,000 arrives at 0.1 seconds and request number 1,001 arrives at 0.2 seconds, the interval between these two requests is 0.1 seconds. Since the predetermined threshold value has not been exceeded, the counter is reset to zero (as the system can handle more requests). The method (300) will then check the rule again; if the predetermined threshold value is still not exceeded, the at least one received service request is processed. Conversely, if the threshold value has been reached, the at least one received service request will be rejected. The counter may reset every second.
[0090] In an aspect, at step (320), when it is determined that the calculated delta time is less than the time limit value that is configured for the rule, it is then checked whether the request count is more than the threshold value set for the rule.
[0091] In an aspect, at step (322), when the request count and the last refresh time are reset, it is then checked whether the request count is more than the threshold value set for the rule.
[0092] In an aspect, at step (324), when it is determined that the request count is not more than the threshold value set for the rule, the at least one received service request is processed by the server, and at step (330) the method (300) may end. If, however, the delta time is greater than the time limit value configured for the rule, the request count is incremented and the request is processed. The system (108) will continue processing requests until the count exceeds 10,000 within that one-minute window. In cases where the delta time is less than the configured time unit but the request count exceeds the threshold (e.g., more than 10,000 requests in one minute), the API request is rejected with an appropriate HTTP response, such as a “Too Many Requests” (HTTP 429) status. This ensures that requests are throttled based on both the rate and the time window set by the user, preventing overload while maintaining efficient request handling.
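Mapping the throttling decision onto an HTTP status could look like this minimal sketch; the handler shape and the payload are hypothetical and independent of any particular web framework.

# Illustrative mapping of the admission decision to an HTTP response.
def respond(allowed: bool, payload: str) -> tuple:
    if not allowed:
        return 429, "Too Many Requests"    # reject excess traffic
    return 200, "processed: " + payload    # serve the request normally

print(respond(True, "service-call"))   # (200, 'processed: service-call')
print(respond(False, "service-call"))  # (429, 'Too Many Requests')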
[0093] In an aspect, at step (326), when it is determined that the request count is more than the threshold value set for the rule, the at least one received service request is rejected by the server, and at step (328) the method (300) may end.
[0094] Thus, the present method (300) may help mitigate server overload and can ensure the stability and availability of the API by limiting the number of service API requests that the server can receive. Further, by controlling the rate at which requests are processed, throttling protects the API from being overwhelmed with a high volume of service API requests, which can lead to degraded performance or even crashes.
[0095] FIG. 4 illustrates an exemplary flow diagram illustrating the method (400) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure.
[0096] At step 402, the method (400) includes receiving at least one service request for at least one application running at a server. The service request originates from client applications and pertains to specific functions or services that the application needs to execute.
[0097] At step 404, the method (400) includes identifying at least one throttling rule configured for the at least one received service request by a processing engine (208), wherein the at least one identified configured throttling rule comprises at least one of a request count (the number of service requests processed), a predetermined threshold value (the maximum number of allowed service requests), and a preconfigured time unit value (such as per second, minute, or hour). The request count may track the number of received service requests. The predetermined threshold value may define the maximum number of requests that can be processed within the designated time window, and the preconfigured time unit value sets the time frame (e.g., per second, per minute, per hour) for monitoring the incoming requests. Identifying the correct throttling rule for the at least one received request allows the system (108) to regulate traffic, ensuring that the server is not overwhelmed by excessive service requests.
[0098] At step 406, the method (400) includes incrementing, by the processing engine (208), the request count for the at least one received service request. This tracking mechanism ensures that the rate of incoming service requests can be monitored in line with the at least one identified configured throttling rule. This step is essential to keep track of how many service requests have been processed within the defined time window. Each time a new service request is received, the request count is updated to reflect the current load on the server. By tracking the request count, the method (400) ensures that it adheres to the at least one identified configured throttling rule and remains within the permissible limits. This mechanism allows the method (400) to manage multiple client requests while avoiding over-consumption of server resources.
[0099] At step 408, the method (400) includes determining, by the processing engine (208), if the at least one received service request fulfills at least one predefined criterion based on the at least one identified configured throttling rule. The at least one predefined criterion includes determining whether the calculated delta time is less than the preconfigured time unit value and, based on determining, checking if the request count is less than the predetermined threshold value.
[00100] For determining, the method (400) calculates a delta time, which is the time difference between a current time associated with the at least one received service request and a predefined last refresh time interval. This step ensures the server knows how frequently requests are arriving. In a first condition, the method (400) checks if the calculated delta time is less than the preconfigured time unit value to verify that the at least one received service request falls within the permissible time window. If the calculated delta time is less than the preconfigured time unit value, the method (400) checks if the at least one of a request count is less than the predetermined threshold value; if it is determined that the at least one of a request count is less than the predetermined threshold value, the at least one received service request is processed and the method (400) stops. Otherwise, the at least one received service request is rejected, as described at step (326).
[00101] In a second condition, when the calculated delta time is more than the preconfigured time unit value, the current time window has elapsed. The method (400) then resets the at least one of a request count and the last refresh time to mark the start of a new time window for subsequent requests. This reset keeps the throttling mechanism accurate and up to date, preventing inconsistencies or errors in managing incoming service requests, and the updated refresh time provides a new reference point for calculating the delta time for future service requests. The at least one received service request is then processed and the method (400) stops.
[00102] Only when both conditions, the time condition and the request count condition, are met does the system (108) consider the at least one received service request eligible for further processing. This step is critical for determining whether to allow or reject the incoming service request based on the current state of the system (108).
[00103] In an embodiment, the delta time is the time difference between when the current service request is received and the last recorded refresh time. This calculation is vital for monitoring the rate at which requests arrive and for determining whether the at least one received service request falls within the allowed time window set by the at least one identified configured throttling rule.
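The determination of step 408 can be sketched as below, continuing the illustrative code above. It follows the two claimed conditions: the count and the last refresh time are reset when the calculated delta time exceeds the preconfigured time unit value (cf. claim 5), and the request qualifies only while the request count is below the predetermined threshold value (cf. claim 3). All identifiers remain hypothetical.

```python
def fulfills_criteria(rule: ThrottlingRule, now: float | None = None) -> bool:
    """Step 408: check the predefined criteria of the identified rule."""
    if now is None:
        now = time.monotonic()

    # Delta time: difference between the current request time and the
    # predefined last refresh time (cf. claim 2).
    delta = now - rule.last_refresh

    if delta > rule.time_unit_s:
        # Second condition: the time window has elapsed, so reset the count
        # and the last refresh time to open a new window (cf. claim 5). The
        # count is reset to 1 on the assumption that the current request was
        # already counted at step 406.
        with _state_lock:
            rule.request_count = 1
            rule.last_refresh = now

    # First condition: within the window, the request qualifies only while
    # the request count is below the predetermined threshold value.
    return rule.request_count < rule.threshold
```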
[00104] At step 410, based on determining, the method (400) includes processing the at least one received service request. Once the method (400) determines that the at least one received service request meets the at least one predefined criteria, the at least one received service request is processed by the processing engine (208). Processing involves executing the necessary functions associated with the at least one received service request, which could range from retrieving data to performing complex operations as instructed by the client application. This step ensures that the at least one received service request is handled efficiently while respecting the limits imposed by the at least one identified configured throttling rule.
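Tying the illustrative sketches together, a hypothetical handler could read as follows; `process_request` and the string outcomes are placeholders for the application's own service logic, not part of the disclosure.

```python
def process_request(payload: dict) -> str:
    # Placeholder for the application-specific service function (step 410).
    return "processed"

def handle_request(api_name: str, payload: dict) -> str:
    rule = find_rule(api_name)             # step 404: identify the rule
    if rule is None:
        return process_request(payload)    # no rule configured: pass through

    increment_count(rule)                  # step 406: increment the count
    if fulfills_criteria(rule):            # step 408: predefined criteria
        return process_request(payload)    # step 410: process the request
    return "rejected"                      # over the threshold: reject (cf. step 326)
```

Note that incrementing before the check, as the order of steps 406 and 408 suggests, makes the threshold an exclusive limit; an implementation could equally check before incrementing, a detail the disclosure leaves open.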
[00105] FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented.
[00106] As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects.
[00107] The main memory (530) may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage devices (550) include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks.
[00108] The bus (520) communicatively couples the processor (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect / Peripheral Component Interconnect Extended bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system.
[00109] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[00110] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
[00111] The present disclosure provides technical advancement related to service API request rate throttling in network applications. The present disclosure addresses the limitations of existing solutions by introducing a dynamic method for managing API request rates through configurable throttling rules. The inventive aspects include defining multiple throttling thresholds, adjusting rules during runtime, and controlling request acceptance based on real-time calculations. The present disclosure improves performance and efficiency by enabling applications to handle varying loads, prevent server overloads, and maintain optimal performance, resulting in a more reliable and scalable application environment.
TECHNICAL ADVANTAGES
[00113] The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a method for service Application Programming Interface (API) request rate capping.
[00114] Thus, the present invention can help mitigate server overload and can ensure the stability and availability of the API by limiting the number of service API requests that the server can receive. Further, by controlling the rate at which requests are processed, throttling protects the API from being overwhelmed with a high volume of service API requests, which can lead to degraded performance or even crashes. The present invention further ensures fair resource allocation among the API consumers.
[00115] Further, by imposing threshold rate limits, the APIs can prevent a single client from monopolizing the available resources and ensure that other clients have a chance to access the API's services. The present invention improves API performance by spreading out the service API requests over time, helping the API maintain a consistent level of service and avoid bottlenecks. Further, the present invention can also help reduce response times and improve the latency for all API consumers.
[00116] The present invention can protect the server against attacks and abuse and can provide an effective measure to defend against various types of attacks, such as distributed denial-of-service (DDoS) attacks or brute-force attacks. Further, by limiting the number of service API requests an attacker can make, throttling helps mitigate the impact of such malicious activities and safeguards the API infrastructure.
[00117] The present invention can assist in managing the costs associated with API usage. By enforcing rate limits, organizations can control the resources consumed by each client and avoid excessive bandwidth or processing costs. This is particularly relevant for APIs that charge clients based on the number of service API requests or data transferred.
CLAIMS
We claim:
1. A method (400) for managing a service request in a network (106), the method (400) comprising:
receiving (402), by a receiving unit (209), at least one service request for at least one application;
identifying (404), by a processing engine (208), at least one throttling rule configured for the at least one received service request, wherein the at least one identified configured throttling rule comprises at least one of a request count, a predetermined threshold value, and a preconfigured time unit value;
based on identifying, incrementing (406), by the processing engine (208), the at least one of a request count for the at least one received service request;
determining (408), by the processing engine (208), if the at least one received service request fulfills at least one predefined criteria based on the at least one identified configured throttling rule; and
based on determining, processing (410), by the processing engine (208), the at least one received service request.
2. The method (400) as claimed in claim 1, wherein determining further comprises calculating a delta time between a current time associated with the at least one received service request and a predefined last refresh time interval.
3. The method (400) as claimed in claim 2, wherein the at least one predefined criteria includes:
determining whether the calculated delta time is less than the preconfigured time unit value; and
based on determining, checking if the at least one of a request count is less than the predetermined threshold value.
4. The method (400) as claimed in claim 2, further comprising updating the predefined last refresh time interval based on the preconfigured time unit value.
5. The method (400) as claimed in claim 2, further comprising resetting the at least one of a request count and the predefined last refresh time interval when the calculated delta time is greater than the preconfigured time unit value.
6. A system (108) for managing a service request in a network (106), the system (108) comprises:
a receiving unit (209) configured to receive at least one service request for at least one application;
a memory (204); and
a processing engine (208) coupled with the receiving unit (209) to receive the at least one service request and is further coupled with the memory (204) to execute a set of instructions stored in the memory (204), the processing engine (208) is configured to:
identify at least one throttling rule configured for the at least one received service request, wherein the at least one identified configured throttling rule comprises at least one of a request count, a predetermined threshold value, and a preconfigured time unit value;
based on identifying, increment the at least one of a request count for the at least one received service request;
determine if the at least one received service request fulfills at least one predefined criteria based on the at least one identified configured throttling rule; and
based on determining, process the at least one received service request.
7. The system (108) as claimed in claim 6, wherein the processing engine (208) is further configured to calculate a delta time between a current time associated with the at least one received service request and a predefined last refresh time interval.
8. The system (108) as claimed in claim 7, wherein, for the at least one predefined criteria, the processing engine (208) is further configured to:
determine whether the calculated delta time is less than the preconfigured time unit value; and
based on determining, check if the at least one of a request count is less than the predetermined threshold value.
9. The system (108) as claimed in claim 7, wherein the processing engine (208) is further configured to update the predefined last refresh time interval based on the preconfigured time unit value.
10. The system (108) as claimed in claim 7, wherein the processing engine (208) is further configured to reset the at least one of a request count and the predefined last refresh time interval when the calculated delta time is greater than the preconfigured time unit value.