
Method and System for Application Programming Interface (API) Traffic Management

Abstract: A system (120) and a method (500) for Application Programming Interface (API) traffic management are disclosed. The system (120) includes a configuration unit (220) configured to configure, in real time, one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic. The system includes a transceiver configured to transmit a ping request to the at least one of the plurality of APIs experiencing high traffic in at least one of a synchronous and an asynchronous mode, and to receive a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic. The transceiver is further configured to transmit a final response configuration report to the consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs. Ref. Fig. 2


Patent Information

Application #
Filing Date
07 September 2023
Publication Number
14/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Sandeep Bisht
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Suman Singh Kanwer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Ankur Mishra
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Yogendra Pal Singh
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Pankaj Kshirsagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Anurag Sinha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Mangesh Shantaram Kale
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Supriya Upadhye
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Ravindra Yadav
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Abhiman Jain
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Ezaj Ansari
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Lakhichandra Sonkar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Himanshu Sharma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Rohit Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR APPLICATION PROGRAMMING INTERFACE (API) TRAFFIC MANAGEMENT
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication networks, and more particularly to a method and a system for Application Programming Interface (API) traffic management.
BACKGROUND OF THE INVENTION
[0002] Nowadays, traffic management in a communication network has become a major concern across the world. The traffic generally comprises the requests transmitted from a consumer to a server via a Common API Framework (CAPIF). The CAPIF provides a framework for accessing northbound Application Programming Interfaces (APIs). An API is an application or a set of defined rules that enables different applications or computer programs to communicate with each other.
[0003] In a normal functioning scenario between the consumer and the server, the consumer generally transmits the requests to a load balancer. The load balancer distributes the requests among one or more servers via the APIs. During this process, owing to multiple requests from the consumer to the server, there is a tendency for traffic to accumulate at the APIs. Due to this accumulated traffic, termed API traffic, the ability of the APIs to enable different applications or computer programs to communicate with each other is substantially hampered.
[0004] In order to manage the API traffic, the configuration rules and parameters of an API generally have to be changed manually by changing the algorithm or code of the API. The configuration rules are rules which govern the processing of the requests transmitted from the consumer to the server. Further, the parameters of the API may include at least one of, but not limited to, the capacity of the API, the time consumed by the API, and the rate limiting of the API.
[0005] The process of handling the API traffic by manually changing the configuration rules, algorithm, or code of the API, as well as the parameters of the API, is time consuming and cumbersome. Moreover, owing to the manual intervention, errors may occur during the process, thereby affecting the efficiency of the system.
[0006] In view of the above, there is a need for a system and method for API traffic management, which ensures that time consumed for handling the traffic of the API is substantially reduced.
SUMMARY OF THE INVENTION
[0007] One or more embodiments of the present disclosure provide a method and a system for Application Programming Interface (API) traffic management.
[0008] In one aspect of the present invention, the method for the Application Programming Interface (API) traffic management is disclosed. The method includes the step of configuring in real time, by one or more processors, one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic based on detection of high traffic in at least one of the plurality of APIs. The method includes the step of transmitting, by the one or more processors, a ping request to the at least one of the plurality of APIs experiencing high traffic in at least one of, a synchronous mode and an asynchronous mode. The method includes the step of receiving, by the one or more processors, a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic. The method includes the step of transmitting, by the one or more processors, a final response configuration report to a consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs.
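The four steps recited above can be sketched in code, purely for illustration. The `Api` class, its fields, and the 1,000-requests-per-minute threshold are assumptions made for this example and are not limitations of the claimed method:

```python
from dataclasses import dataclass, field

@dataclass
class Api:
    name: str
    request_rate: int              # observed requests per minute
    threshold: int = 1000          # assumed high-traffic threshold
    rules: dict = field(default_factory=dict)

    def is_high_traffic(self) -> bool:
        # High traffic: the request rate reaches or exceeds the threshold.
        return self.request_rate >= self.threshold

    def ping(self) -> dict:
        # Stand-in for a real health-check request to the API.
        return {"api": self.name, "status": "ok", "rules": dict(self.rules)}

def manage_api_traffic(apis):
    """Step 1: configure rules for APIs under high traffic;
    steps 2-3: ping them and collect the responses;
    step 4: return the final response configuration report."""
    report = {}
    for api in apis:
        if api.is_high_traffic():
            api.rules["rate_limit"] = api.threshold   # step 1
            report[api.name] = api.ping()             # steps 2-3
    return report                                     # step 4 (to consumer)

busy = Api("orders", request_rate=1500)
idle = Api("status", request_rate=200)
print(manage_api_traffic([busy, idle]))   # only the high-traffic API appears
```

Only the API whose traffic crosses the threshold is reconfigured and pinged; APIs under normal load are left untouched.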
[0009] In one embodiment, the traffic pertains to multiple requests received by at least one of the plurality of APIs from the consumer.
[0010] In another embodiment, the multiple requests received from the consumer pertain to utilizing one or more services provided by a server.
[0011] In yet another embodiment, the high traffic at one of the plurality of APIs is detected when the number of requests handled by the at least one API reaches or exceeds a predefined threshold.
[0012] In yet another embodiment, the one or more API traffic rules and parameters are configured in real time using a training model.
[0013] In yet another embodiment, the one or more API traffic rules includes one or more predefined policies which are configured based on historical data pertaining to routing of the multiple requests from the consumer to the server.
[0014] In yet another embodiment, the parameters pertaining to the plurality of APIs include at least one of a rate limit, a geo-location, a bandwidth, a request payload (e.g., headers), a user/access token, and an OAuth token.
[0015] In yet another embodiment, while transmitting a ping request to the at least one of the plurality of APIs experiencing high traffic in the synchronous mode, the one or more processors refrain from executing any further requests until the response pertaining to the ping request is returned by the API.
[0016] In yet another embodiment, while transmitting the ping request to the at least one of the plurality of APIs experiencing high traffic in the asynchronous mode, the one or more processors execute multiple requests at the same time without waiting for a previous request to complete.
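The contrast between the two ping modes can be illustrated with a short sketch. The `ping` function and the API names are placeholders standing in for real health-check requests; only the blocking-versus-concurrent behaviour is the point:

```python
import concurrent.futures
import time

def ping(api_name: str) -> str:
    # Placeholder for a real ping/health-check request.
    time.sleep(0.05)
    return f"{api_name}: ok"

apis = ["billing", "catalog", "search"]

# Synchronous mode: each ping blocks until its response is returned,
# so no further request is executed in the meantime.
start = time.perf_counter()
sync_responses = [ping(a) for a in apis]
sync_elapsed = time.perf_counter() - start

# Asynchronous mode: the pings run concurrently, without waiting
# for the previous request to complete.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    async_responses = list(pool.map(ping, apis))
async_elapsed = time.perf_counter() - start

print(sync_responses == async_responses)  # same responses, less wall time
```

The responses are identical in both modes; only the total elapsed time differs, which is why the asynchronous mode suits APIs already under high traffic.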
[0017] In another aspect of the present invention, the system for the Application Programming Interface (API) traffic management is disclosed. The system includes a configuration unit configured to configure, in real time, one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic based on detection of high traffic in at least one of the plurality of APIs. The system includes a transceiver configured to transmit a ping request to the at least one of the plurality of APIs experiencing high traffic in at least one of a synchronous and an asynchronous mode. The transceiver is configured to receive a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic. The transceiver is further configured to transmit a final response configuration report to the consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs.
[0018] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, is disclosed. The processor is configured to configure, in real time, one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic based on detection of high traffic in at least one of the plurality of APIs. The processor is configured to transmit a ping request to the at least one of the plurality of APIs experiencing high traffic in at least one of a synchronous and an asynchronous mode. The processor is configured to receive a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic. The processor is configured to transmit a final response configuration report to the consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs.
[0019] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0021] FIG. 1 is an exemplary block diagram of an environment for Application Programming Interface (API) traffic management, according to one or more embodiments of the present disclosure;
[0022] FIG. 2 is an exemplary block diagram of a system for the API traffic management, according to one or more embodiments of the present disclosure;
[0023] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2 communicably coupled with a User Equipment (UE), according to one or more embodiments of the present disclosure;
[0024] FIG. 4 is a block diagram of an architecture that can be implemented in the system of FIG.2, according to one or more embodiments of the present disclosure;
[0025] FIG. 5 is a block diagram of an architecture of a CAPIF framework, according to one or more embodiments of the present disclosure; and
[0026] FIG. 6 is a flow diagram illustrating a method for the API traffic management, according to one or more embodiments of the present disclosure.
[0027] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0028] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0029] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0030] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0031] The present invention relates to a system and a method for Application Programming Interface (API) traffic management. The present invention is able to ensure that the time consumed for handling the API traffic is substantially reduced. The disclosed system and method aim at enhancing efficiency of the system by configuring the one or more API traffic rules and parameters depending on the API traffic in the network. In other words, the present invention provides a unique approach of implementing the one or more traffic rules for determining a preferred route to handle and process the multiple requests from a consumer to a server based on the configurations provided by a Common API Framework (CAPIF) rule engine and a training model.
[0032] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for Application Programming Interface (API) traffic management, according to one or more embodiments of the present invention. The environment 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user in interacting with the system 120 for transmitting multiple requests to one or more processors 205 (shown in FIG. 2) in order to avail one or more services provided by the server 115. In an embodiment, the user is one of, but not limited to, a network operator or a service provider. In an embodiment, the multiple requests include, but are not limited to, data requests, service requests, authentication requests, configuration requests, notification requests, query requests, and the like. In an embodiment, the one or more services include, but are not limited to, data management services, communication services, transaction services, user authentication and security services, and the like. The API traffic management refers to the process of controlling and optimizing the flow of data between a consumer and the server 115, which ensures that the multiple requests are handled efficiently, securely, and reliably. In an embodiment, the consumer includes, but is not limited to, the user, a client application, or another service. The consumer sends the multiple requests to access or interact with the one or more services hosted on the server 115. The terms “user” and “consumer” are used interchangeably herein, without limiting the scope of the disclosure.
[0033] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific, with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, a virtual reality (VR) device, an augmented reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0034] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 is associated with an entity, which may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0036] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is configured for API traffic management. The system 120 is adapted to be embedded within the server 115 or is embedded as the individual entity, as per multiple embodiments of the present invention.
[0037] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0038] FIG. 2 is an exemplary block diagram of a system 120 for API traffic management, according to one or more embodiments of the present disclosure.
[0039] The system 120 includes a processor 205, a memory 210, a user interface 215, and a database 240. For the purpose of description and explanation, the description will be explained with respect to one or more processors 205, or to be more specific, with respect to the processor 205, and should nowhere be construed as limiting the scope of the present disclosure. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0040] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0041] The user interface 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the user equipment 110, and the database 240.
[0042] The database 240 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 240 types are non-limiting and may not be mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0043] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0044] In order for the system 120 to perform API traffic management, the processor 205 includes a configuration unit 220, a transceiver 225, an execution unit 230, and a training model 235 communicably coupled to each other. In an embodiment, operations and functionalities of the configuration unit 220, the transceiver 225, the execution unit 230, and the training model 235 can be used in combination or interchangeably.
[0045] The configuration unit 220 is configured to configure one or more API traffic rules and parameters for at least one of a plurality of APIs in real time based on detection of high traffic in at least one of the plurality of APIs. The one or more API traffic rules are applied at strategic points in an API's architecture 400 (as shown in FIG. 4), mainly in the API gateway 420 (as shown in FIG. 4). The API gateway 420 is a centralized server or service that manages incoming API requests and routes them to an API service repository 455 (as shown in FIG. 4). The one or more API traffic rules are triggered on real-time monitoring of the API's performance based on one or more metrics. In an embodiment, the one or more metrics include, but are not limited to, request rate, response time, error rate, and server load. Based on the one or more metrics, the one or more API traffic rules are applied at the API gateway 420.
[0046] In an embodiment, the one or more API traffic rules include, but are not limited to, rate limiting, throttling, prioritization of certain requests, or rerouting of traffic to underutilized APIs. Rate limiting is the process of controlling the number of requests the API handles over a period of time. Rate limiting ensures that a certain threshold is not exceeded (e.g., 1,000 requests per minute). If the limit is reached, additional requests are denied or delayed, preventing overload. Throttling refers to deliberately slowing down the processing of requests when the traffic exceeds a certain limit, rather than outright rejecting them (as in rate limiting). Throttling regulates the speed of the incoming requests to ensure that the system remains operational without being overwhelmed. The prioritization of certain requests involves handling some requests with a higher level of importance. For example, critical API requests (such as payment processing or emergency alerts) may be given priority over less urgent requests, ensuring that such requests are processed faster. Rerouting of traffic is the process of redirecting traffic from overburdened APIs to other APIs or services that are not fully utilized. Rerouting helps balance the load across multiple APIs or servers, reducing the risk of performance degradation in heavily used APIs and making better use of resources.
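The rate-limiting rule described above can be sketched, purely as an illustration, with a sliding-window limiter. The class, its limit of 3 requests, and the 60-second window are assumptions for the example, not the claimed implementation:

```python
import time
from collections import deque

class SlidingWindowRateLimiter:
    """Deny requests beyond `limit` per `window` seconds (rate limiting)."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.timestamps = deque()   # arrival times inside the window

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop arrival times that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False   # limit reached: deny (or delay) the request

limiter = SlidingWindowRateLimiter(limit=3, window=60.0)
decisions = [limiter.allow(now=t) for t in (0, 1, 2, 3)]
print(decisions)   # [True, True, True, False]
```

The fourth request within the window is denied; once older arrivals age out of the window, capacity frees up again, which is the behaviour the rule relies on to prevent overload.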
[0047] In an embodiment, the one or more parameters pertaining to the plurality of APIs include at least one of a geo-location, a bandwidth, a request payload (e.g., headers), a user/access token, and an OAuth token. The geo-location refers to the geographic location from where the API request originates. The APIs use the geo-location to apply region-specific rules, such as prioritizing or restricting traffic based on the location. The bandwidth indicates the amount of network bandwidth being consumed by the API requests. The bandwidth parameter enables managing the API traffic to prevent bandwidth overload and to ensure efficient data transmission. The request payload refers to the data (such as HTTP headers) that accompanies the API request. The headers contain information like the content type, request origin, or authentication details. The request payload parameter is used to prioritize or filter requests based on the type of data being sent. The user/access token refers to the token associated with a user or application, typically used for authentication and authorization purposes. The one or more API traffic rules are configured based on the user's access level or token validity. The OAuth tokens are used in authorization protocols, enabling secure access to resources without exposing the user credentials. The OAuth tokens allow the APIs to manage the traffic, potentially applying rules based on the token's scope or expiration status. The configuration unit 220 is configured to specifically target the APIs experiencing high traffic based on real-time traffic monitoring and historical data analysis.
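A rule that dispatches on these request parameters might look as follows. The field names (`geo`, `access_token`, `X-Priority`) and the region value are hypothetical, chosen only to show how geo-location, token validity, and payload headers could each drive a handling decision:

```python
def classify_request(request: dict) -> str:
    """Return a handling decision based on illustrative request parameters."""
    # Geo-location: region-specific restriction.
    if request.get("geo") == "restricted-region":
        return "reject"
    # User/access (OAuth) token: missing or expired tokens are rejected.
    token = request.get("access_token")
    if not token or token.get("expired", False):
        return "reject"
    # Request payload headers: critical calls (e.g. payments, emergency
    # alerts) are prioritized over less urgent requests.
    headers = request.get("headers", {})
    if headers.get("X-Priority") == "critical":
        return "prioritize"
    return "accept"

print(classify_request({"geo": "IN",
                        "access_token": {"expired": False},
                        "headers": {"X-Priority": "critical"}}))
```

Each branch corresponds to one of the parameters listed above; a production gateway would of course combine many more such checks.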
[0048] As per the above embodiment, the configuration unit 220 is configured to monitor the API traffic continuously and respond instantaneously to changes in traffic conditions. The changes in traffic conditions refer to any variations or fluctuations in the behavior or performance of the APIs. In an exemplary embodiment, if the traffic conditions are normal, the API is receiving a consistent and expected volume of requests, with response times and error rates within acceptable limits. In another exemplary embodiment, if the traffic conditions are high, the API is experiencing a high volume of requests, possibly due to peak usage times. The API traffic pertains to the multiple requests received by at least one of the plurality of APIs from the consumer. In an embodiment, the multiple requests include, but are not limited to, API calls, HTTP requests (GET, POST, PUT, DELETE), or service invocations. The multiple requests received from the consumer pertain to utilizing one or more services provided by the server 115. In an embodiment, the one or more services include, but are not limited to, web services, data services, authentication and authorization services, communication services, and the like.
[0049] In an embodiment, the configuration unit 220 is configured to configure the one or more API traffic rules and parameters in real time using a training model 235. In an embodiment, the training model 235 includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. The configuration unit 220 is configured to utilize the training model 235 to assess traffic patterns and behaviors across the APIs. In an exemplary embodiment, a news website's API notices a surge in traffic from a particular geographical region during a major local event or breaking news. The API may need to reroute traffic or allocate more resources to handle the regional spike, ensuring that global users are not affected. The training model 235 enables the system 120 to make real-time decisions and adjustments, ensuring optimal performance and resource allocation, and to predict potential issues such as traffic surges, bottlenecks, or anomalies before they occur. The configuration unit 220 automatically tweaks the one or more API traffic rules based on the insights gained from the AI/ML analysis. In an embodiment, the insights include, but are not limited to, traffic patterns, performance metrics, anomaly detection, resource utilization, and the like.
[0050] The training model 235 utilizes a variety of ML techniques, such as supervised learning, unsupervised learning, and reinforcement learning. In one embodiment, a supervised learning model is a type of machine learning algorithm which is trained on a labeled dataset, meaning that each training example is paired with an output label. The goal of supervised learning is to learn a mapping from inputs to outputs, so that the supervised learning model predicts the output for new, unseen data. In one embodiment, unsupervised learning is a type of machine learning algorithm which is trained on data without any labels. The unsupervised learning algorithm tries to learn the underlying structure or distribution in the data in order to discover patterns or groupings. In one embodiment, reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize cumulative reward. The agent receives feedback in the form of rewards or penalties based on the actions taken and learns a policy that maps the states of the environment to the best actions.
[0051] In an embodiment, the one or more API traffic rules include one or more predefined policies which are configured based on historical data pertaining to routing of the multiple requests from the consumer to the server 115. In an embodiment, the one or more predefined policies are sets of rules or configurations that govern the API traffic. The one or more predefined policies are configured based on the historical data pertaining to routing of the multiple requests. The historical data refers to past records or logs of API traffic, including request rates, response times, error rates, and routing patterns. In an embodiment, the training model 235 learns the trends/patterns pertaining to routing of the multiple requests from the consumer to the server 115. The training model 235 is configured to analyze the trends over time, such as peak usage times or high-demand endpoints in API usage.
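As a minimal sketch of deriving a predefined policy from historical routing data as described above, the snippet below sets a rate limit from past per-minute request counts. The input format, the 95th-percentile rule, and the `headroom` factor are illustrative assumptions, not part of the disclosure.

```python
from statistics import quantiles

def derive_rate_limit_policy(history_rpm, headroom=1.2):
    """Derive a predefined rate-limit policy from historical
    per-minute request counts (a hypothetical log format)."""
    # Take the 95th percentile of observed load and add headroom
    # so that normal traffic spikes do not trip the limit.
    p95 = quantiles(history_rpm, n=100)[94]
    return {
        "rate_limit_rpm": int(p95 * headroom),
        "peak_observed_rpm": max(history_rpm),
    }
```

A consumer of such a policy would feed it to the configuration unit as one of the "predefined policies" governing API traffic.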
[0052] In one embodiment, the high traffic at one of the plurality of APIs is detected when the number of requests handled by the at least one API reaches or exceeds a predefined threshold. In an exemplary embodiment, the API has a predefined threshold of 1,000 requests per minute. The predefined threshold is based on the historical data, which indicates that the API handles up to 1,000 requests per minute before performance starts to degrade. The predefined threshold is adjusted dynamically based on the historical data analysis. The system 120 continuously monitors the API's performance and uses the historical data and real-time traffic patterns to adapt the threshold over time, improving as the API usage evolves. For example, if the system 120 learns that the API can handle more traffic during certain periods (due to hardware improvements or better optimizations), it might increase the threshold to 1,200 requests per minute. The configuration unit 220 is configured to continuously monitor the number of requests in real time. If the number of requests approaches or exceeds 1,000 requests per minute, the high traffic is detected. The configuration unit 220 is configured to take actions to manage the high traffic, such as rate limiting, redirecting requests to alternative servers 115, or scaling up resources to handle the increased load.
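The threshold logic above can be sketched as a sliding-window counter. This is an illustrative implementation, not the claimed mechanism: the window size, the `adjust()` tuning rule, and the 1,000 requests/minute starting value are assumptions taken from the example.

```python
import time
from collections import deque

class HighTrafficDetector:
    """Detects high traffic when requests per minute reach a
    predefined threshold, with dynamic threshold adjustment."""

    def __init__(self, threshold_rpm=1000, window_seconds=60):
        self.threshold_rpm = threshold_rpm
        self.window = window_seconds
        self.timestamps = deque()

    def record_request(self, now=None):
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop requests that fell outside the one-minute window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()

    def is_high_traffic(self):
        return len(self.timestamps) >= self.threshold_rpm

    def adjust(self, observed_capacity_rpm):
        # Raise the threshold when history shows the API copes with
        # more load, e.g. 1,000 -> 1,200 req/min after optimizations.
        self.threshold_rpm = max(self.threshold_rpm, observed_capacity_rpm)
```

When `is_high_traffic()` fires, the configuration unit would apply rate limiting, redirection, or scaling as described.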
[0053] Upon configuring the one or more API traffic rules and parameters for the at least one of the plurality of APIs, the transceiver 225 is configured to transmit a ping request to the at least one of the plurality of APIs. In an embodiment, the at least one of the plurality of APIs experiencing the high traffic operates in at least one of a synchronous mode and an asynchronous mode. In an embodiment, the ping request is intended to ascertain the current configurations of the APIs pursuant to configuring the one or more API traffic rules. The ping request is a network utility used to test the reachability of the UE 110 or a service and to check the response time. In this scenario, the status and configurations of the APIs are checked to ensure that the APIs are operating as expected and to gather current configuration information before applying or updating the API traffic rules.
[0054] In one embodiment, upon transmitting the ping request to the at least one of the plurality of APIs experiencing high traffic in the synchronous mode, the execution unit 230 is configured to forbid execution of any further request until a response pertaining to the ping request is returned by the API. In the synchronous mode, the execution unit 230 is configured to handle the multiple requests in a synchronized manner, pausing additional request processing until the ping response is received. While the one or more processors 205 of the system 120 are waiting for the ping response, the execution unit 230 is programmed to forbid or prevent the execution of any further requests to the API. During the waiting period, the execution unit 230 holds off on sending any new requests to the API, which ensures that the API is not overwhelmed with additional request processing. Further, the waiting period for the ping response ensures that the system 120 makes decisions based on up-to-date information about the API’s capacity and status.
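The synchronous-mode behavior above can be sketched as a gate that holds requests while a ping is outstanding. This is a minimal single-process illustration; `send_ping` and `request_fn` are hypothetical callables standing in for real network calls.

```python
import threading

class SyncPingGate:
    """Synchronous mode: while a ping to the API is in flight,
    further requests are held until the response returns."""

    def __init__(self):
        self._idle = threading.Event()
        self._idle.set()  # no ping outstanding initially

    def ping(self, send_ping):
        self._idle.clear()        # forbid further requests
        try:
            return send_ping()    # block until the API responds
        finally:
            self._idle.set()      # release any held requests

    def execute(self, request_fn):
        self._idle.wait()         # held off while a ping is in flight
        return request_fn()
```

A caller would route every outgoing request through `execute()`, so that requests issued during a ping automatically wait for up-to-date status information.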
[0055] In another embodiment, upon transmitting the ping request to the at least one of the plurality of APIs experiencing high traffic in the asynchronous mode, the execution unit 230 is configured to execute the multiple requests at the same time, without waiting for the previous request to be executed. In the asynchronous mode, the multiple requests are executed independently of each other. The execution unit 230 does not wait for one request to complete before starting the next one. When the ping request is in progress, the execution unit 230 is configured to process multiple additional requests to the API concurrently. The system 120 uses asynchronous programming techniques or concurrency mechanisms to handle multiple requests at the same time. In an embodiment, the asynchronous programming techniques include, but are not limited to, callbacks, reactive programming, event-driven programming, and the Task-based Asynchronous Pattern (TAP).
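The asynchronous mode can be sketched with `asyncio`: the ping and the pending requests run concurrently, and no request waits for the ping or for another request to complete. `ping` and each entry of `requests` are hypothetical coroutines standing in for real API calls.

```python
import asyncio

async def ping_and_serve(ping, requests):
    """Asynchronous mode: run the ping concurrently with all
    queued requests instead of blocking on it."""
    # gather() starts everything at once and preserves result order.
    results = await asyncio.gather(ping(), *(r() for r in requests))
    return results[0], list(results[1:])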
[0056] Upon transmitting the ping request to the at least one of the plurality of APIs, the transceiver 225 is configured to receive a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic. The one or more processors 205 of the system 120 are configured to obtain information about the API's current status or configuration. In this regard, the one or more processors 205 of the system 120 are used to make decisions or updates regarding the traffic management for the plurality of APIs. The transceiver 225 is configured to receive the response to the ping request from the API. The response contains data about the API’s operational state, current load, or configuration details. The one or more processors 205 of the system 120 are configured to process the response to determine how to handle future requests or manage traffic to the API.
[0057] Upon receiving the response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic, the transceiver 225 is configured to transmit a final response configuration report to the consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs. The transceiver 225 is configured to process the received response to generate the final response configuration report. In an embodiment, the final response configuration report includes, but is not limited to, API availability status, response time, request throughput, error rates, current configuration settings, load balancer/server information, rerouting information, and historical performance data. In an exemplary embodiment, the API name is a payment API, the API status is up, the response time is 150 milliseconds, the current throughput is 950 requests/min, the predefined threshold is 1,000 requests/min, the error rate is 2%, the rate limit is 1,000 requests/min, throttling is enabled for bulk requests, the priority is high for payment processing requests, the load balancer status is active, and the historical response time includes an average response time of 150 milliseconds over the last 10 minutes and an average response time of 130 milliseconds over the last one hour. In this regard, the final response configuration report provides up-to-date information about the API’s status or configuration, enabling the consumer to make informed decisions or take appropriate actions based on the information.
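The exemplary report can be sketched as a simple assembly step over the ping response. The input dictionary shape and the derived `headroom_rpm` field are assumptions for illustration; the disclosure does not fix a wire format.

```python
def build_final_report(ping_response):
    """Assemble a final response configuration report from a ping
    response (hypothetical field names mirroring the example)."""
    return {
        "api_name": ping_response["name"],
        "status": "up" if ping_response["reachable"] else "down",
        "response_time_ms": ping_response["latency_ms"],
        "current_throughput_rpm": ping_response["throughput_rpm"],
        "threshold_rpm": ping_response["threshold_rpm"],
        "error_rate_pct": ping_response["error_rate_pct"],
        # Derived field: remaining capacity before the threshold trips.
        "headroom_rpm": ping_response["threshold_rpm"]
                        - ping_response["throughput_rpm"],
    }
```

For the exemplary payment API (950 of 1,000 requests/min), the derived headroom would be 50 requests/min.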
[0058] By doing so, the system 120 advantageously enables the configured APIs to be used for a longer period of time, substantially reducing the time consumed in handling the traffic. The system 120 streamlines the process of handling the API traffic, which improves the processing speed of the processor 205 and reduces the memory space requirement.
[0059] FIG. 3 is a schematic representation of a workflow of the system 120 of FIG. 2 communicably coupled with a User Equipment (UE), according to one or more embodiments of the present disclosure. More specifically, FIG. 3 illustrates the system 120 configured for API traffic management. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0060] As mentioned earlier in FIG. 1, in an embodiment, the first UE 110a may encompass electronic apparatuses. These devices are illustrative of, but not restricted to, modems, routers, switches, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first UE 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first UE 110a can be associated with multiple users. The first UE 110a is communicatively coupled with the processor 205.
[0061] The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to transmit multiple requests to the one or more processors 205 in order to avail one or more services provided by the server 115.
[0062] Furthermore, the one or more primary processors 305 within the first UE 110a are uniquely configured to execute a series of steps as described herein. This configuration underscores the capability of the processor 205 to stitch the subscriber profile with the trace data. The coordinated functioning of the one or more primary processors 305 and the additional processors is directed by the executable instructions stored in the memory 310. The executable instructions facilitate effective task distribution and management among the one or more primary processors 305, optimizing performance and resource use.
[0063] As mentioned earlier in FIG.2, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 240. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 240 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0064] Further, the processor 205 includes the configuration unit 220, the transceiver 225, and the execution unit 230. The operations and functions of the configuration unit 220, the transceiver 225, and the execution unit 230 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0065] FIG. 4 is a block diagram of an architecture 400 that can be implemented in the system of FIG. 2, according to one or more embodiments of the present disclosure. The architecture 400 of the system includes an API consumer 405, an Edge Load Balancer (ELB) 410, an Identity and Access Management (IAM) 415, an API gateway 420, and an API service repository 455.
[0066] The architecture 400 of the system includes an API consumer 405. In an embodiment, the API consumer 405 develops applications or websites using the APIs. In particular, the API consumers 405 are the users of the APIs. In an embodiment, the API consumer 405 may transmit the request to a load balancer such as the ELB 410. The ELB 410 is configured to automatically distribute the incoming API traffic from entities such as the API consumer 405 across multiple targets, such as the servers 115. In the present invention, the request may be sent from the API consumer 405 to the server 115, requesting network resources from the server 115.
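The ELB's role of distributing incoming API traffic across multiple targets can be sketched with a round-robin rotation. Real load balancers also weigh server health and current load; this minimal illustration ignores both.

```python
import itertools

class EdgeLoadBalancer:
    """Round-robin sketch of the ELB: each incoming consumer
    request is routed to the next target server in turn."""

    def __init__(self, servers):
        self._targets = itertools.cycle(servers)

    def route(self, request):
        # Return (chosen server, request) so a caller can forward it.
        return next(self._targets), request
```

With two servers, successive requests alternate between them, spreading the load evenly.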
[0067] The architecture 400 further includes the IAM 415. In an embodiment, the IAM 415 is a web service that facilitates secure control of access to system resources. The IAM 415 is configured to centrally manage permissions and provide authentication to the API consumer 405.
[0068] The ELB 410 transmits the API traffic to the API gateway 420. The API gateway 420 is a data-plane entry point for API calls that represent consumer requests to target applications and services. The API gateway 420 typically performs request processing based on one or more predefined policies, including authentication, authorization, access control, Secure Sockets Layer (SSL)/ Transport Layer Security (TLS) offloading, routing, and load balancing.
[0069] The API gateway 420 further includes an API orchestration configuration 425, an API traffic rule configuration 430, an API sync call 435, an API response collection 440, an API traffic management 445, and an API async call 450.
[0070] The API orchestration configuration 425 is the process of integrating multiple APIs from different sources and reorganizing them into a consolidated set of outputs. The orchestration involves coordinating the communication and interaction between the APIs, as well as managing data flow, security, and other aspects of API integration. The API orchestration configuration 425 allows multiple ways of routing from the eastbound API calls to multiple westbound API calls.
[0071] In the present invention, a Common API Framework (CAPIF) rule engine is configured to perform dynamic traffic management using the API traffic rule configuration 430 and the API traffic management 445. Depending on the traffic, the one or more API traffic rules and parameters or requirements of the API are dynamically configured in real time in order to optimally handle the traffic flow. In an embodiment, the parameters of the API include, but are not limited to, at least one of the capacity of the API, the time consumption by the API, and the rate limiting of the API.
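A minimal sketch of the rule engine's dynamic reconfiguration, under the assumption (not stated in the disclosure) that exceeding the capacity parameter tightens the rate limit; the dictionary keys mirror the parameters named above and are illustrative only.

```python
def configure_rules(traffic_rpm, params):
    """Hypothetical CAPIF rule-engine step: tighten the rate limit
    when current traffic exceeds the API's capacity parameter."""
    updated = dict(params)
    if traffic_rpm > params["capacity_rpm"]:
        # Throttle down to capacity and flag the overload so the
        # traffic manager can reroute or scale.
        updated["rate_limit_rpm"] = params["capacity_rpm"]
        updated["overloaded"] = True
    else:
        updated["overloaded"] = False
    return updated
```

The traffic management component 445 would then act on the `overloaded` flag, for example by redirecting requests to alternative servers 115.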
[0072] In an embodiment, the API call may refer to the API request that allows one application to request data or services from another application. If the API call is in the synchronous mode, the code execution blocks (or waits) for the API call to return before continuing. In particular, the API call in the synchronous mode indicates that, until the response is returned by the API, the application forbids the execution of any further functions, which is perceived by the user as latency or a performance lag in the application.
[0073] The API response collection 440 may be the location where the data or information returned from the server 115 is stored when the API request is transmitted. Further, when the API call is made to the one or more APIs in the asynchronous mode, the web service may allow the users to conduct multiple requests at the same time without waiting for the previous request to be executed. In this regard, the server 115 may process multiple requests at the same time, decreasing the APIs' overall response time.
[0074] The architecture 400 of the system 120 further includes the API service repository 455. The API service repository 455 includes, but is not limited to, databases, data lakes, data containers, persistence cache, distributed cache, and the like. The API service repository 455 may be a catalogue in which all the services available on the network 105 are stored.
[0075] FIG. 5 is a block diagram of an architecture of the Common API Framework (CAPIF) 500, according to one or more embodiments of the present disclosure.
[0076] The CAPIF 500 includes a Service Capability Server/Application Server (SCS/AS) 505, the CAPIF 510, an analysis platform 515, a Service Capability Exposure Function (SCEF) 520, a Network Exposure Function (NEF) 525, a domain adapter 530, a 4G Evolved Packet System (EPS) 535, a 5G core Service Based Architecture (SBA) 540, a private 5G network 550, massive Machine Type Communication (mMTC)/Internet of Things (IoT) services 555, an Enhanced Mobile Broadband (eMBB) service provider 560, the API gateway for mobile applications 565, and an Element Management System (EMS) 570.
[0077] The SCS/AS 505 enables seamless integration between external applications and core network capabilities. The SCS abstracts network capabilities and offers them as standardized APIs, while the AS contains the application logic that uses these APIs to provide services to users. The SCS exposes and manages network capabilities (like sending SMS, checking user location, etc.) to external application servers through the APIs. The AS handles application-specific tasks (like user requests), processes the tasks, and calls the necessary APIs (often provided by the SCS) to access the network services required to fulfill those requests. Further, the SCS/AS 505 is connected with the SCEF 520 and the CAPIF 510 via a Hypertext Transfer Protocol (HTTP)/ WebSocket (WS).
[0078] The CAPIF 510 is a framework designed by a Third Generation Partnership Project (3GPP) to provide a standardized way of exposing network services as the APIs, making it easier for application developers to access the network functions. The CAPIF ensures that the APIs are exposed securely, protecting the underlying network infrastructure. The CAPIF mainly utilizes the API gateway 420 to act as an intermediary that handles the API requests from third-party applications, ensuring proper routing, security, and policy enforcement. In another aspect, the CAPIF provides the API gateway 420 to transform the request to consumer, destination entities, or host applications.
[0079] The analysis platform 515 refers to a module or system responsible for monitoring, analyzing, and managing the data related to the APIs exposed by network functions. The analysis platform 515 ensures optimal performance, security, and usage efficiency of the APIs and provides insights into their operations.
[0080] The SCEF 520 includes an application server 520a and a distributed and clustered data system 520b. The SCEF 520 is a key component that allows third-party applications and external systems to securely access and interact with the services and capabilities provided by a mobile network operator via HTTP 2.0. The SCEF 520 acts as an intermediary between the external applications and the network's core functions, offering a secure and standardized way to expose network services such as IoT management, location services, and communication APIs. The SCEF 520 acts as a secure interface between the application server 520a and the mobile network, handling security (authentication and authorization), traffic management (throttling, rate limiting), and protocol translation. The distributed and clustered data system 520b is deployed across multiple physical or virtual servers located in different geographic locations. This ensures that the system can handle a large number of API requests from the application server 520a spread over different regions and improve response times by routing requests to the nearest server.
[0081] The SCEF 520 handles protocol translations and abstracts the complexity of internal 3GPP protocols (e.g., Diameter, NAS) from external applications, providing them with simplified API-based access. Further, the SCEF 520 handles the 4G EPS 535. The 4G EPS 535 is the core architecture used in 4G networks to deliver high-speed data, voice, and multimedia services. The 4G EPS 535 is part of an Evolved Packet Core (EPC), which is the backbone of a Long Term Evolution (LTE) network, and it supports both packet-switched (PS) services, such as internet browsing and video streaming, and voice over LTE (VoLTE).
[0082] The NEF 525 serves as a key interface that securely exposes network capabilities and services of the 5G system to external applications or third-party services, such as Application Servers (AS) 525a. The NEF 525 allows authorized access to various network functions and data through APIs while ensuring secure and controlled communication between external parties and the core network. The NEF 525 often works in environments with the distributed and clustered data systems 525b. The NEF 525 is necessary to manage the scalability and reliability needs of the network. Further, the NEF 525 handles the 5G core SBA 540.
[0083] The domain adapter 530 connects the CAPIF 510 with the Application Server 530a, enabling communication between the CAPIF 510 and the applications running on these servers. The domain adapter 530 ensures that requests and responses between the CAPIF 510 and the AS 530a are correctly formatted and understood. The domain adapter 530 further includes the private 5G network 550, the mMTC/IoT services 555, the eMBB service provider 560, and the API gateway for applications 565.
[0084] The EMS 570 is a system within the CAPIF 510 that handles the creation, management, and lifecycle of various entities, which can include users, applications, services, and other components necessary for the operation of the CAPIF framework 500.
[0085] FIG. 6 is a flow diagram illustrating a method 600 for API traffic management, according to one or more embodiments of the present disclosure. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0086] At step 605, the method 600 includes the step of configuring the one or more API traffic rules and parameters for at least one of the plurality of APIs in real time by the configuration unit 220. The configuration unit 220 is configured to specifically target the APIs experiencing high traffic based on the real-time traffic monitoring, and the historical data analysis. In an embodiment, the configuration unit 220 is configured to configure the one or more API traffic rules and parameters in real time using the training model 235.
[0087] At step 610, the method 600 includes the step of transmitting the ping request to the at least one of the plurality of APIs by the transceiver 225. In an embodiment, the at least one of the plurality of APIs experiencing the high traffic operates in at least one of the synchronous mode and the asynchronous mode. In an embodiment, the ping request is intended to ascertain the current configurations of the APIs pursuant to configuring the one or more API traffic rules.
[0088] At step 615, the method 600 includes the step of receiving the response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic by the transceiver 225. The processor 205 of the system 120 is configured to obtain information about the API's current status or configuration. In this regard, the processor 205 of the system 120 is used to make decisions or updates regarding the traffic management for the plurality of APIs. The transceiver 225 is configured to receive the response to the ping request from the API.
[0089] At step 620, the method 600 includes the step of transmitting the final response configuration report to the consumer by the transceiver 225. The transceiver 225 is configured to process the received response to generate the final response configuration report. In an embodiment, the final response configuration report provides up-to-date information about the API’s status or configuration to the consumer to make informed decisions or take appropriate actions based on the information.
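The four steps of method 600 (605 through 620) can be tied together in a single end-to-end sketch. All five callables are hypothetical stand-ins for the configuration unit, transceiver, and execution unit described above.

```python
def manage_api_traffic(api, configure, send_ping, build_report,
                       notify_consumer):
    """End-to-end sketch of method 600: configure rules, ping the
    API, process the response, and report to the consumer."""
    rules = configure(api)            # step 605: real-time rule config
    response = send_ping(api, rules)  # steps 610/615: ping + response
    report = build_report(response)   # step 620: build final report
    notify_consumer(report)           # step 620: transmit to consumer
    return report
```

In practice each callable would be backed by the corresponding unit of the system 120; the sketch only fixes the ordering of the four claimed steps.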
[0090] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to configure in real time one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic based on detection of high traffic in at least one of the plurality of APIs. The processor 205 is configured to transmit a ping request to the at least one of plurality of APIs experiencing high traffic in at least one of, a synchronous and an asynchronous mode. The processor 205 is configured to receive a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic. The processor 205 is configured to transmit a final response configuration report to the consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs.
[0091] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0092] The present disclosure provides technical advancement for configuring the API depending on the API traffic in the communication network. In other words, the present invention provides a unique approach of implementing the one or more API traffic rules for determining a preferred route to handle and process the requests from the consumer to the server based on the configurations provided by the CAPIF rule engine and the training model. Further, the APIs may be used for longer period of time. The present invention substantially reduces time consumption for handling the API traffic.
[0093] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0094] Environment - 100
[0095] Network-105
[0096] User equipment- 110
[0097] Server - 115
[0098] System -120
[0099] Processor - 205
[00100] Memory - 210
[00101] User interface-215
[00102] Configuration unit – 220
[00103] Transceiver– 225
[00104] Execution unit – 230
[00105] Training model- 235
[00106] Database– 240
[00107] Primary processor- 305
[00108] Memory– 310
[00109] Architecture- 400
[00110] API consumer- 405
[00111] Edge Load Balancer- 410
[00112] Identity and Access Management- 415
[00113] API gateway- 420
[00114] API orchestration configuration- 425
[00115] API traffic rule configuration- 430
[00116] API sync call- 435
[00117] API response collection- 440
[00118] API traffic management- 445
[00119] API async call- 450
[00120] API service repository- 455
[00121] SCS/AS- 505
[00122] CAPIF- 510
[00123] Analysis platform- 515
[00124] SCEF- 520
[00125] Application server- 520a
[00126] Distributed and clustered data system- 520b
[00127] NEF-525
[00128] Application server- 525a
[00129] Distributed and clustered data system- 525b
[00130] Domain adapter- 530
[00131] Application server- 530a
[00132] Distributed and clustered data system- 530b
[00133] 4G EPS- 535
[00134] 5G core SBA- 540
[00135] Private 5G network- 550
[00136] mMTC/IoT services- 555
[00137] eMBB service provider- 560
[00138] API gateway for mobile applications- 565
[00139] EMS- 570

CLAIMS
We Claim:
1. A method (600) for Application Programming Interface (API) traffic management, the method (600) comprises the steps of:
configuring in real time, by one or more processors (205), one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic based on detection of high traffic in at least one of the plurality of APIs;
transmitting, by the one or more processors (205), a ping request to the at least one of the plurality of APIs experiencing high traffic in at least one of, a synchronous mode and an asynchronous mode;
receiving, by the one or more processors (205), a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic; and
transmitting, by the one or more processors (205), a final response configuration report to a consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs.

2. The method (600) as claimed in claim 1, wherein the traffic pertains to multiple requests received at the at least one of the plurality of APIs from the consumer.

3. The method (600) as claimed in claim 1, wherein the multiple requests received from the consumer pertain to utilizing one or more services provided by a server (115).

4. The method (600) as claimed in claim 1, wherein the high traffic at one of the plurality of APIs is detected when the capacity of handling number of requests by the at least one API reaches or exceeds a predefined threshold.

5. The method (600) as claimed in claim 1, wherein the step of configuring the one or more API traffic rules and parameters in real time using a training model (235).

6. The method (600) as claimed in claim 1, wherein the one or more API traffic rules includes one or more predefined policies which are configured based on historical data pertaining to routing of the multiple requests from the consumer to the server (115).

7. The method (600) as claimed in claim 1, wherein the parameters pertaining to the plurality of APIs include at least one of, rate limit, geo-location, bandwidth, request payload (e.g. headers), user/access token, OAuth Token.

8. The method (600) as claimed in claim 1, wherein during transmitting the ping request to the at least one of the plurality of APIs experiencing high traffic in the synchronous mode, the one or more processors (205) forbid execution of any further requests until the response pertaining to the ping request is returned by the API.

9. The method (600) as claimed in claim 1, wherein while transmitting the ping request to the at least one of plurality of APIs experiencing high traffic in the asynchronous mode, the one or more processors (205) executes multiple requests at the same time without waiting for the previous request to be executed.

10. A system (120) for Application Programming Interface (API) traffic management, the system (120) comprises:
a configuration unit (220), configured to, configure in real time, one or more API traffic rules and parameters for at least one of a plurality of APIs experiencing high traffic based on detection of high traffic in at least one of the plurality of APIs; and
a transceiver (225), configured to:
transmit, a ping request to the at least one of the plurality of APIs experiencing high traffic in at least one of, a synchronous mode and an asynchronous mode;
receive, a response pertaining to the ping request from the at least one of the plurality of APIs experiencing high traffic; and
transmit, a final response configuration report to the consumer based on the received response pertaining to the ping request from the at least one of the plurality of APIs.

11. The system (120) as claimed in claim 10, wherein the traffic pertains to multiple requests received by the at least one of the plurality of APIs from the consumer.

12. The system (120) as claimed in claim 10, wherein the multiple requests received from the consumer pertain to utilizing one or more services provided by a server (115).

13. The system (120) as claimed in claim 10, wherein high traffic at the at least one of the plurality of APIs is detected when the number of requests handled by the at least one API reaches or exceeds a predefined threshold.
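
The detection condition of claim 13 reduces to a simple comparison of current load against a threshold derived from the API's capacity. The sketch below is a hypothetical rendering of that check; the 90% threshold ratio is an assumed value, as the claims only require that some predefined threshold exist.

```python
def is_high_traffic(current_requests, capacity, threshold_ratio=0.9):
    """Flag high traffic when load reaches or exceeds the predefined threshold.

    current_requests: requests currently being handled by the API
    capacity: maximum requests the API is provisioned to handle
    threshold_ratio: fraction of capacity treated as the trigger point (assumed)
    """
    return current_requests >= capacity * threshold_ratio

# With a capacity of 100 and a 0.9 ratio, the trigger point is 90 requests.
print(is_high_traffic(95, 100))  # → True
print(is_high_traffic(50, 100))  # → False
```

When this check returns true, the configuration unit (220) would configure the traffic rules and parameters for the affected API in real time.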

14. The system (120) as claimed in claim 10, wherein the configuration unit (220) configures the one or more API traffic rules and parameters in real time using a training model (235).

15. The system (120) as claimed in claim 10, wherein the one or more API traffic rules include one or more predefined policies which are configured based on historical data pertaining to routing of the multiple requests from the consumer to the server (115).

16. The system (120) as claimed in claim 10, wherein the parameters pertaining to the plurality of APIs include at least one of a rate limit, a geo-location, a bandwidth, a request payload (e.g., headers), a user/access token, and an OAuth token.
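
Several of the parameters listed in claim 16 (rate limit, geo-location, tokens) naturally combine into a single admission check on each incoming request. The following is a minimal sketch of such a check under assumed semantics; the sliding-window rate limiting, the region codes, and the class and field names are all hypothetical choices, not taken from the specification.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TrafficRule:
    """Hypothetical bundle of the per-API parameters named in claim 16."""
    rate_limit: int                      # max requests per window per consumer
    window_seconds: int = 60
    allowed_regions: set = field(default_factory=lambda: {"IN", "US"})

class AdmissionChecker:
    """Applies a TrafficRule to incoming requests (sliding-window rate limit)."""

    def __init__(self, rule):
        self.rule = rule
        self.hits = {}  # consumer_id -> timestamps of recent requests

    def allow(self, consumer_id, region, now=None):
        now = time.time() if now is None else now
        # Geo-location check: reject requests from disallowed regions.
        if region not in self.rule.allowed_regions:
            return False
        # Rate-limit check: count requests inside the sliding window.
        window_start = now - self.rule.window_seconds
        recent = [t for t in self.hits.get(consumer_id, []) if t > window_start]
        if len(recent) >= self.rule.rate_limit:
            self.hits[consumer_id] = recent
            return False
        recent.append(now)
        self.hits[consumer_id] = recent
        return True
```

Token validation (user/access token, OAuth token) would slot in as a further check alongside the region test, but is omitted here since the claims do not specify its mechanics.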

17. The system (120) as claimed in claim 10, wherein while transmitting the ping request to the at least one of the plurality of APIs experiencing high traffic in the synchronous mode, an execution unit (230) refrains from executing any further requests until the response pertaining to the ping request is returned by the API.

18. The system (120) as claimed in claim 10, wherein while transmitting the ping request to the at least one of the plurality of APIs experiencing high traffic in the asynchronous mode, the execution unit (230) executes multiple requests at the same time without waiting for a previous request to complete.

19. A User Equipment (UE) (110), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which, when executed by the one or more primary processors (305), cause the UE (110) to:
transmit, multiple requests to the one or more processors (205) in order to avail one or more services provided by the server (115); and
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321060142-STATEMENT OF UNDERTAKING (FORM 3) [07-09-2023(online)].pdf 2023-09-07
2 202321060142-PROVISIONAL SPECIFICATION [07-09-2023(online)].pdf 2023-09-07
3 202321060142-FORM 1 [07-09-2023(online)].pdf 2023-09-07
4 202321060142-FIGURE OF ABSTRACT [07-09-2023(online)].pdf 2023-09-07
5 202321060142-DRAWINGS [07-09-2023(online)].pdf 2023-09-07
6 202321060142-DECLARATION OF INVENTORSHIP (FORM 5) [07-09-2023(online)].pdf 2023-09-07
7 202321060142-FORM-26 [17-10-2023(online)].pdf 2023-10-17
8 202321060142-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321060142-DRAWING [07-09-2024(online)].pdf 2024-09-07
10 202321060142-COMPLETE SPECIFICATION [07-09-2024(online)].pdf 2024-09-07
11 Abstract 1.jpg 2024-10-03
12 202321060142-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321060142-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321060142-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321060142-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321060142-FORM 3 [29-01-2025(online)].pdf 2025-01-29
17 202321060142-FORM 18 [20-03-2025(online)].pdf 2025-03-20