Abstract: ABSTRACT SYSTEM AND METHOD FOR HANDLING API CALL FLOWS The present disclosure relates to a method of handling API call flows by one or more processors (202). The method includes configuring a plurality of configuration parameters based on rules defined in an API Quality of Service (QoS) engine. The method further includes receiving an API call from a user equipment (102) and determining one or more configuration parameters to apply to the API call. Further, the method includes identifying a type of the API call upon applying the one or more configuration parameters to the API call, wherein the type can be one of a sync type or an async type. Finally, the method includes preparing an API response based on the type of the API call. Ref. FIG. 6
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR HANDLING API CALL FLOWS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to telecommunications and network management, and more particularly to handling Application Programming Interface (API) call flows.
BACKGROUND OF THE INVENTION
[0002] Applications and services rely heavily on API call flows to communicate with each other. These API call flows enable diverse functionalities from data retrieval and processing to interconnecting various software components. Ensuring the QoS of the API call flows is crucial, as it directly impacts the user experience and the overall effectiveness of applications and services.
[0003] Traditionally, the QoS management for the API call flows has been a manual and rule-based process. Network administrators and engineers configure QoS parameters based on expected traffic patterns and predefined rules.
[0004] However, the existing approach has several limitations. For example, rule-based QoS configurations do not adapt well to dynamic changes in network traffic and service demands, and often require continuous manual adjustments, making it challenging to maintain optimal performance. Another limitation of the existing art is that managing the QoS for large-scale, complex network environments with multiple APIs and varying service levels is a challenging and error-prone task.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a system and a method for handling Application Programming Interface (API) call flows.
[0006] In one aspect of the present invention, the method of handling service API call flows is provided. The method includes configuring, by one or more processors, a plurality of configuration parameters based on rules defined in an API Quality of Service (QoS) engine; receiving an API call from a user equipment; determining, by the one or more processors, one or more configuration parameters to apply to the API call; and identifying, by the one or more processors, a type of the API call upon applying the one or more configuration parameters to the API call. Typically, the type of the API call can be either sync or async. The method further includes preparing, by the one or more processors, an API response based on the type of the API call, and sending the API response to a consumer.
[0007] In an embodiment, the plurality of configuration parameters for the API are created in run-time. The plurality of configuration parameters includes at least one of: a response time parameter, a network optimization parameter, a request handling parameter, a concurrency parameter, an error handling parameter, a failover mechanism parameter, a secure request parameter, and a data integrity parameter.
[0008] In an embodiment, the predefined rules are related to at least one of: a format of the API call, a size of the API call, an execution time of the API call, an encryption format of the API call and a decryption format of the API call. The API response defines a response structure, wherein the response structure comprises a status code, a response body, a mask sensitive data, and an access control data.
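The response structure described in paragraph [0008] can be sketched as follows. This is a minimal illustration, not part of the specification: the field names (`status_code`, `body`, `access_control`), the helper `mask`, and the choice of sensitive keys are all hypothetical assumptions for the example.

```python
# Illustrative sketch (hypothetical names): an API response carrying the
# elements named in the specification - a status code, a response body,
# masked sensitive data, and access control data.

def mask(value: str, keep: int = 4) -> str:
    """Mask all but the last `keep` characters of a sensitive value."""
    return "*" * max(0, len(value) - keep) + value[-keep:]

def build_response(body: dict, sensitive_keys=("msisdn", "imsi")) -> dict:
    """Build a response structure, masking any sensitive fields in the body."""
    masked = {k: mask(v) if k in sensitive_keys else v for k, v in body.items()}
    return {
        "status_code": 200,
        "body": masked,
        "access_control": {"roles": ["consumer"]},
    }
```

Masking at response-preparation time, as sketched here, keeps sensitive data out of every downstream consumer regardless of transport.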
[0009] In an embodiment, further, the method includes determining, by the one or more processors, whether the API response is to be communicated for a single session or a plurality of sessions, based on the plurality of configuration parameters.
[0010] In an embodiment, further, the method includes identifying that the API call is the sync type, when the API response is received from an API provider within a same session between the user and the API provider.
[0011] In an embodiment, further, the method includes identifying that the API call is the async type, when the API response is received, from the API provider, irrespective of a session, correlated by a session identifier.
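The sync/async distinction of paragraphs [0010] and [0011] can be sketched as a small classifier. This is an illustrative sketch only; the names (`ApiResponse`, `classify_call`) and the boolean flag are hypothetical assumptions, not terms from the specification.

```python
# Illustrative sketch (hypothetical names): a call is "sync" when the
# provider's response arrives within the same session as the request, and
# "async" when it arrives out of session and is correlated by session ID.

from dataclasses import dataclass

@dataclass
class ApiResponse:
    session_id: str                 # identifier used to correlate async responses
    received_in_same_session: bool  # True if delivered within the request session

def classify_call(request_session_id: str, response: ApiResponse) -> str:
    """Return 'sync' or 'async' per the session-based rule sketched above."""
    if response.session_id != request_session_id:
        raise ValueError("response does not correlate with the request session")
    return "sync" if response.received_in_same_session else "async"
```

Under this sketch, correlation always happens by session identifier; the only difference between the two types is whether delivery occurs inside the originating session.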
[0012] In another aspect of the present invention, the system for handling API call flows is provided. The system includes a receiving unit, a processing unit, an API engine and a communicating unit. The receiving unit is configured to receive an API call from a consumer. The API engine is configured to create a plurality of configuration parameters in run-time based on rules defined in the API engine. The processing unit is configured to determine one or more configuration parameters to apply to the API call; and identify a type of the API call upon applying the one or more configuration parameters to the API call, wherein the type can be one of a sync type or an async type. The communicating unit is configured to prepare an API response based on the type of the API call and send the API response to the consumer.
[0013] In another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is provided. The computer-readable instructions, when executed by a processor, cause the processor to configure a plurality of configuration parameters based on rules defined in an API Quality of Service (QoS) engine. Further, the processor is configured to receive an API call from a user equipment, determine one or more configuration parameters to apply to the API call, identify a type of the API call upon applying the one or more configuration parameters to the API call, wherein the type can be one of a sync type or an async type; and prepare an API response based on the type of the API call.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for handling API call flows, according to various embodiments of the present disclosure.
[0017] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
[0018] FIG. 3 is an example schematic representation of the system of FIG. 1 in which various entities' operations are explained, according to various embodiments of the present system.
[0019] FIG. 4 shows an exemplary embodiment illustrating a system architecture, in accordance with the exemplary embodiment of the present subject matter.
[0020] FIG. 5 shows a sequence flow diagram illustrating a method for handling API call flows, according to various embodiments of the present disclosure.
[0021] FIG. 6 is an exemplary flow diagram illustrating the method for handling API call flows, according to various embodiments of the present disclosure.
[0022] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] Before discussing example, embodiments in more detail, it is to be noted that the drawings are to be regarded as being schematic representations and elements that are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose becomes apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software or a combination thereof.
[0028] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be rearranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0029] Further, the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections; it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0030] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0031] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0032] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0033] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0034] The proposed system and proposed method can be used to enable onboarding of an API consumer and an API provider. Further, the system and the method enable optimization of Quality of Service, consumer onboarding, API traffic management, response time of service, and API service discovery, and manage multiple requests in a single API call in an effective manner. The system and method enable run-time configuration of the parameters and rules, and further enable creation of rules dynamically.
[0035] In an aspect of the exemplary embodiment, the system may be deployed in a framework. The framework may be a form of a Common API Framework (CAPIF) configured to provide unified and standardized access to the API for northbound API calls. The framework, deployed as a system, may comprise a QoS engine. The QoS engine may be enabled to manage or configure, during start-up time or run-time, at least one parameter from API consumer onboarding, API subscription creation, service API discovery, API traffic distribution, clustering requests from a single API call, or logging and error detection. The at least one parameter can be, for example, but not limited to, an easy and smooth API access parameter, an API subscription parameter, an API consumer onboarding parameter, a service API discovery parameter, an API traffic distribution parameter, a grouping of multiple requests parameter, and a logging and error detection parameter.
[0036] Further, Artificial Intelligence (AI)/Machine Learning (ML) based techniques, implemented and integrated with the framework, may be configured to recommend and implement changes to the QoS engine to optimize the parameters based on rules and historical data. The parameters optimized by the AI/ML based techniques may be at least one of easy and smooth API access, API subscription, API consumer onboarding, service API discovery, API traffic distribution, grouping of multiple requests, and logging and error detection.
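The run-time creation of configuration parameters from rules, described in paragraphs [0035]-[0036], can be sketched as a minimal engine. This is an illustrative sketch only; the class name `QosEngine`, its methods, and the two example rules are hypothetical assumptions, not elements of the specification.

```python
# Illustrative sketch (hypothetical names): a minimal QoS engine that derives
# configuration parameters at run-time from rules registered in it.

class QosEngine:
    def __init__(self):
        # rule name -> function mapping an incoming call to parameter values
        self.rules = {}

    def define_rule(self, name, rule_fn):
        """Register a rule dynamically; no restart is needed."""
        self.rules[name] = rule_fn

    def configure(self, api_call):
        """Apply every registered rule to the incoming call and collect the
        resulting configuration parameters."""
        params = {}
        for name, rule_fn in self.rules.items():
            params.update(rule_fn(api_call))
        return params

# Example rules corresponding to parameter kinds named in the specification
engine = QosEngine()
engine.define_rule("response_time", lambda call: {"response_time_ms": 200})
engine.define_rule(
    "concurrency",
    lambda call: {"max_concurrent": 10 if call.get("priority") == "high" else 2},
)
```

Because rules are plain callables registered at run-time, an AI/ML recommender of the kind described above could replace or re-tune a rule via `define_rule` without redeploying the engine.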
[0037] FIG. 1 illustrates an exemplary block diagram of an environment (100) for handling API call flows, according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) (102-1, 102-2, ..., 102-n). At least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via a communication network (106). Hereafter, the plurality of UEs or the one or more UEs are labelled 102.
[0038] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108). The wireless device or the UE (102) may include, but are not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or Voice Over Internet Protocol (VoIP) capabilities. In an embodiment, the UEs may include, but are not limited to, any electrical, electronic, electro-mechanical or an equipment or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as camera, audio aid, a microphone, a keyboard, input devices for receiving input from a user such as touch pad, touch enabled screen, electronic pen and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, a landline with assigned extension within the communication network (106).
[0039] The communication network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0040] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0041] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0042] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3rd Generation Partnership Project (3GPP) or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
[0043] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defense facility side, or any other facility) that provides service.
[0044] The environment (100) further includes the system (108) communicably coupled to the server (e.g., remote server or the like) (104) and each UE of the plurality of UEs (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0045] The system (108) is adapted to be embedded within the remote server (104) or is embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to update/create/delete one or more parameters of the requests for the API call flows, which get reflected in real time independent of the complexity of the network.
[0046] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprises, ecommerce, finance to update/create/delete information related to the requests in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the API call flows and perform real-time analysis in the system (108).
[0047] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an implementation, system (108) may operate at various entities or single entity (for example include, but is not limited to, a vendor side, service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, ecommerce side, finance side, a defense facility side, or any other facility) that provides service.
[0048] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0049] FIG. 2 illustrates a block diagram of the system (108) provided for handling API call flows, according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes one or more processors (202), a memory (204), a user interface (206), a display (208), an input device (210), and a database (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0050] The information related to the request may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0051] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0052] The information related to the requests may further be configured to render on the user interface (206). The user interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0053] The database (214) may be communicably connected to the processor (202) and the memory (204). The database (214) may be configured to store and retrieve the request pertaining to features, or services or API call flows of the system (108), access rights, attributes, approved list, and authentication data provided by an administrator. Further, the remote server (104) may allow the system (108) to update/create/delete one or more parameters of their information related to the API call flows, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the database (214) may be outside the system (108) and communicated with through a wired medium or a wireless medium.
[0054] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0055] In order for the system (108) to handle the API call flows, the processor (202) includes a receiving unit (216), a processing unit (218), an API engine (220) and a communicating unit (224). The receiving unit (216), the processing unit (218), the API engine (220) and the communicating unit (224) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by the electronic circuitry.
[0056] In order for the system (108) to handle service API call flows, the receiving unit (216), the processing unit (218), the API engine (220) and the communicating unit (224) are communicably coupled to each other.
[0057] The API engine (220) creates a plurality of configuration parameters at run-time based on rules defined in the API engine (220). The rules are set by the API consumer (402) or a service provider. In an example, the rules are related to a format of the API call. In another example, the rules are related to a size of the API call. In another example, the rules are related to an execution time of the API call. In another example, the rules are related to an encryption format and a decryption format of the API call.
[0058] In an example, the API engine (220) enables the QoS performance of the API based on performance metrics, a suitable API endpoint, caching strategies, and scalability (for example) by using AI/ML-based techniques. A suitable API endpoint means that, instead of making multiple requests to fetch user details, an endpoint is created that accepts multiple user IDs and returns details for all of them in one response. The caching strategies include client-side caching, server-side caching, and a content delivery network (CDN). Client-side caching uses HTTP caching headers to indicate to clients which responses may be cached. Server-side caching caches frequent responses on the server using in-memory caches like Redis or Memcached to reduce the load on backend systems. The CDN reduces latency and improves response times. Scalability includes load balancing, horizontal scaling, and auto-scaling. Load balancing distributes incoming requests across multiple servers to ensure no single server becomes a bottleneck. Horizontal scaling adds more instances of the API servers as demand grows, which can help handle increased load and ensure better performance. Auto-scaling uses cloud services that provide auto-scaling based on traffic load, which automatically adjusts the number of servers based on demand.
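The server-side caching strategy described above can be sketched as a minimal in-memory cache with a time-to-live (TTL). This is an illustrative stand-in for a production store such as Redis or Memcached; the class name, parameters, and cache keys are hypothetical.

```python
import time

class ResponseCache:
    """Minimal in-memory server-side cache with a time-to-live (TTL);
    an illustrative stand-in for Redis or Memcached."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, cached_response)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, response = entry
        if time.time() > expiry:
            del self._store[key]  # evict the stale entry
            return None
        return response

    def put(self, key, response):
        self._store[key] = (time.time() + self.ttl, response)

# Cache a frequent response keyed by the request line.
cache = ResponseCache(ttl_seconds=30)
cache.put("GET /users?ids=1,2", {"users": [{"id": 1}, {"id": 2}]})
hit = cache.get("GET /users?ids=1,2")   # served from cache
miss = cache.get("GET /orders")         # not cached, returns None
```

A real deployment would also bound the cache size and choose keys that account for authentication context, but the TTL mechanism is the essence of the strategy.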
[0059] In an example embodiment, the receiving unit (216) receives an API call from a consumer (e.g., API consumer (402)) for a subscription. In an example, an API call is an API request made by the consumer to an API endpoint to perform operations like retrieving or sending data. Optimizing QoS for the API call involves ensuring that the API calls are handled efficiently, reliably, and within acceptable performance metrics.
[0060] The plurality of configuration parameters can be, for example, but not limited to a response time parameter, a network optimization parameter, a request handling parameter, a concurrency parameter, an error handling parameter, a failover mechanism parameter, a secure request parameter, and a data integrity parameter.
[0061] In an embodiment, the response time parameter optimizes how quickly the API call responds to the requests. This involves minimizing the time from when the request is sent until the response is received. The network optimization parameter minimizes data transfer to reduce network latency.
[0062] In an embodiment, the request handling parameter improves the rate at which API calls are processed. This includes optimizing server resources, load balancing, and scaling services to handle high volumes of requests. The concurrency parameter ensures that the system can handle multiple simultaneous requests efficiently without significant performance degradation.
[0063] In an embodiment, the error handling parameter implements robust error handling and retry mechanisms to deal with failed requests and ensure reliable service. The failover mechanisms parameter uses redundancy and failover strategies to maintain service availability in case of server or network failures.
[0064] The secure requests parameter ensures that the API calls are authenticated and authorized properly to prevent unauthorized access and data breaches. The data integrity parameter verifies that requests and responses are not tampered with during transmission.
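The data integrity parameter described above can be illustrated with a request-signing sketch using an HMAC over a canonicalized payload. The shared key and field names are hypothetical and stand in for whatever integrity scheme a deployment actually uses.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"shared-secret"  # hypothetical key shared between consumer and provider

def sign_payload(payload: dict) -> str:
    # Canonicalize the payload so both sides hash identical bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()

def verify_payload(payload: dict, signature: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign_payload(payload), signature)

request = {"account_no": "00123456", "transfer_amt": 500}
signature = sign_payload(request)
intact = verify_payload(request, signature)                              # True
tampered = verify_payload({**request, "transfer_amt": 9999}, signature)  # False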
[0065] The processing unit (218) determines one or more configuration parameters to be applied to the API call based on the plurality of configuration parameters and the API call received. The processing unit (218) further identifies a type of the API call upon applying the one or more configuration parameters to the API call. For example, the API call can be identified as a sync API call or an async API call.
[0066] The processing unit (218) determines whether the API call is synchronous (sync) or asynchronous (async) by understanding how the API call is executed and how it affects the flow of the API call. In an embodiment, if the API provider returns a result such as an API response within a same session between the API provider and the user (API consumer), then the API call is determined as the sync type. In an embodiment, the same session refers to the establishment and maintenance of a connection between the API provider and the user for exchange of requests and responses.
[0067] Further, the API call is determined as the async type when the API response is received from the API provider irrespective of the session; here, the API response is correlated by a session identifier. In general, an asynchronous API call allows the API consumer to make a request and then proceed without waiting for the API response. The API provider processes the API request in its own time and sends back the API response when ready, so the system can handle other requests in the meantime. Herein, the API request and the API response are bound by a session identifier; in other words, based on the session identifier of the API request, the corresponding API response is sent back. The API response can be, in an example, an HTTP response code 200: the OK response. The 200 OK response indicates that the API call or API request has been accepted by the server of the API provider, and the task, i.e., transmitting the API response, is being processed and will be completed shortly. In an embodiment, the processing unit (218) tracks the API response when the API call is of the async type by correlating the API response with the API call using the session identifier. The session identifier represents the corresponding session between the user and the API provider in which the API call is transmitted to the API provider.
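The session-identifier correlation described above can be sketched as follows. The tracker class and its method names are hypothetical illustrations of the tracking performed by the processing unit (218).

```python
import uuid

class AsyncCallTracker:
    """Correlates an async API response with its originating call
    via a session identifier."""

    def __init__(self):
        self._pending = {}  # session_id -> original API request

    def register_call(self, request: dict) -> str:
        session_id = str(uuid.uuid4())
        self._pending[session_id] = request
        # In the flow above, the provider would acknowledge with 200 OK
        # and deliver the real response later, tagged with this identifier.
        return session_id

    def correlate_response(self, session_id: str, response: dict) -> dict:
        request = self._pending.pop(session_id, None)
        if request is None:
            raise KeyError(f"no pending call for session {session_id}")
        return {"request": request, "response": response}

tracker = AsyncCallTracker()
sid = tracker.register_call({"op": "getUser", "id": 7})
pair = tracker.correlate_response(sid, {"status": 200, "body": {"id": 7}})
```

Because each pending request is removed once correlated, a duplicate or stray response for the same identifier is detected rather than silently double-processed.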
[0068] In an embodiment, the communicating unit (222) sends the API response to the consumer (402) based on the type of the API call and the one or more configuration parameters applied. The API response defines a response structure. The response structure includes a status code, a response body, mask sensitive data, and access control data. The status code uses appropriate HTTP status codes to indicate the result of the API call. In an example, 404 Not Found means the resource was not found, 200 OK means success, and 500 Internal Server Error means an unexpected error occurred. The response body provides necessary information (e.g., information about the created subscription or the current state of the subscription, or error details). The mask sensitive data avoids including sensitive information in the response, for example, not returning passwords or personal details. The access control data ensures that the response data adheres to the access control policies and only includes information a consumer is authorized to view.
[0069] Further, the processing unit (218) suppresses at least one parameter during preparation of the API response based on the plurality of configuration parameters. In an example, for a banking call, the API validates the customer for payment validation and collection through a virtual code. A ValidateAccount API response has many parameters, such as customer_code, account_no, transfer_type, transfer_unique_no, transfer_timestamp, transfer_ccy, transfer_amt and many more, but the bank may not want all of these parameters in the API response; say only valid and account_no are required. The system (108) can therefore configure the required parameters and eliminate, mask, or suppress the unwanted parameters.
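The parameter suppression above can be sketched as a simple filter over the full provider response. The ValidateAccount field values shown are illustrative.

```python
def suppress_parameters(full_response: dict, required: set) -> dict:
    """Keep only the configured parameters; drop (suppress) the rest."""
    return {k: v for k, v in full_response.items() if k in required}

# Full provider response with many parameters (illustrative values).
validate_account_response = {
    "valid": True,
    "customer_code": "C-881",
    "account_no": "00123456",
    "transfer_type": "NEFT",
    "transfer_unique_no": "TX-9",
    "transfer_timestamp": "2024-09-07T10:00:00Z",
    "transfer_ccy": "INR",
    "transfer_amt": 1500.0,
}

# The bank's configuration requires only `valid` and `account_no`.
trimmed = suppress_parameters(validate_account_response, {"valid", "account_no"})
# trimmed == {"valid": True, "account_no": "00123456"}
```

Masking (replacing a value with, say, `"****"`) rather than eliminating a key would follow the same pattern with a substitution instead of a filter.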
[0070] Further, the processing unit (218) determines whether the API response is to be communicated for a single session or a plurality of sessions, based on the plurality of configuration parameters. If the API call pertains to the single session, the response should include details relevant only to that session. This is often the case for operations that deal with user-specific actions or resources that are scoped to a single interaction. The single session is useful for operations like retrieving, updating, or deleting a specific session related to the API call. If the API call involves multiple sessions or is querying a collection of sessions, the response should be structured to handle and convey information about multiple sessions. This often involves returning a list or a collection of session objects. The plurality of sessions are useful for listing, searching, or performing batch operations on multiple sessions.
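The single-session versus plurality-of-sessions distinction above can be sketched as a response-shaping helper; the wrapper field names are hypothetical.

```python
def build_session_response(sessions: list) -> dict:
    """Return a single session object for one session, or a
    collection wrapper for a plurality of sessions."""
    if len(sessions) == 1:
        return {"session": sessions[0]}
    return {"sessions": sessions, "count": len(sessions)}

# Single session: details relevant only to that interaction.
single = build_session_response([{"id": "s1", "state": "active"}])

# Plurality of sessions: a structured list for batch or search operations.
many = build_session_response([{"id": "s1"}, {"id": "s2"}])
```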
[0071] The example for handling service API call flows is explained in FIG. 4.
[0072] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present system. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration, and should in no way be construed as limiting the scope of the present disclosure.
[0073] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) causes the UE (102-1) to transmit an API call to the one or more processors (202).
[0074] As mentioned earlier, the one or more processors (202) is configured to transmit a response content related to the API call flows to the UE (102-1). More specifically, the one or more processors (202) of the system (108) is configured to transmit the response content to at least one of the UE (102-1). A kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106). The resources include one of a Central Processing Unit (CPU), memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0075] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the user interface (206), the display (208), and the input device (210), whose operations and functions are already explained with reference to FIG. 2. Further, the processor (202) includes the receiving unit (216), the processing unit (218), the API engine (220) and the communicating unit (224), whose operations and functions are likewise already explained with reference to FIG. 2. For the sake of brevity, the same operations (or repeated information) are not explained again in the present disclosure.
[0076] FIG. 4 shows an exemplary embodiment illustrating a system architecture (400) in accordance with the exemplary embodiment of the present subject matter. The system architecture (400) may include an API consumer (402). The API consumer (402) may be an application, a developer, or an enterprise accessing a framework (e.g., CAPIF or the like). The API consumer (402) may be communicably connected to an Edge Load Balancer (ELB) unit (404a, 404b). The ELB unit (404a, 404b) may be configured to route the API call request from the API consumer (402) to the destination application based on rules like round robin, context-based routing, header-based routing, or TCP-based routing.
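The ELB routing rules above can be sketched as follows, combining header-based routing with a round-robin fallback. The backend names and the routing header are hypothetical.

```python
import itertools

class EdgeLoadBalancer:
    """Routes each API call: a header-based rule takes priority,
    otherwise backends are picked round robin."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request: dict) -> str:
        pinned = request.get("headers", {}).get("X-Route-To")
        if pinned:
            return pinned          # header-based routing
        return next(self._cycle)   # round-robin fallback

elb = EdgeLoadBalancer(["app-1", "app-2", "app-3"])
targets = [elb.route({"path": "/api"}) for _ in range(4)]
# round robin wraps around: ["app-1", "app-2", "app-3", "app-1"]
pinned = elb.route({"headers": {"X-Route-To": "app-2"}})
```

Context-based or TCP-based routing would slot in as additional rules ahead of the round-robin fallback in the same `route` method.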
[0077] Further, an IAM module (406) may be configured for identity and access management. In accordance with an exemplary embodiment, the common API gateway (408) may be provided for enabling the API consumer (402) to access the framework. Further, the API service repository (426) may be provided for API providers. The API gateway (408) may comprise a data transformation module (410). The data transformation module (410) may be configured for data transformation on the request and on the response data. The API gateway (408) is coupled with a database system (428).
[0078] The API gateway (408) may further comprise an API template configuration module (412). The API template configuration module (412) may be configured for template-based API integration with all API configurations. Further, an API traffic management module (414) may be provided in the API gateway (408). The API traffic management module (414) may be configured for traffic management, routing algorithm and traffic distribution, request aggregation etc.
[0079] A data manipulation module (416) may be provided in the API gateway (408), configured for data manipulation based on configuration. Further, an API integration module (424) may be configured to enable integration of a new API by using a template at runtime. All rules and integration logic can be defined in the template itself. The API gateway (408) may further comprise a protocol translation module (422). The protocol translation module (422) is configured to convert a request protocol to a different protocol, enabling interoperability and ensuring that the service API calls can be made seamlessly across different systems.
[0080] An API throttling and rate limit rule engine (418) provided in the API gateway (408) controls access and applies the rate limit rules applicable to all consumer requests based on their access plan and validity. In the API throttling and rate limit rule engine (418), API throttling can be applied to each and every request based on the request volume. Further, a data ingestion module (420) may be provided in the API gateway (408). API throttling and rate limiting are critical components of managing API QoS and ensuring that services remain performant and reliable. The API throttling and rate limit rule engine (418) controls the amount and rate of API calls that can be made by clients, helping to prevent abuse, ensure fair usage, and maintain service stability.
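One common realisation of the throttling described above is a token bucket, sketched below. The engine (418) itself is not limited to this scheme, and the rate and capacity values are illustrative.

```python
import time

class TokenBucket:
    """Token-bucket throttle: a consumer may make `rate` calls per
    second on average, with bursts of up to `capacity` calls."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to the elapsed time.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request throttled

bucket = TokenBucket(rate=5.0, capacity=2.0)
results = [bucket.allow() for _ in range(3)]
# With a burst capacity of 2, the first two back-to-back calls pass
# and the third is throttled (they arrive faster than tokens refill).
```

Per-consumer limits follow by keeping one bucket per access plan, which is how "access plan and validity" would map onto this sketch.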
[0081] In an aspect of the present system, a configurable API orchestration may be provided to allow multiple ways of routing eastbound API calls to multiple westbound API calls. Further, dynamic transformation and manipulation of API data enables the capability of transforming the request as per the destination application and also transforming the response as required by the sender.
[0082] FIG. 5 shows a sequence flow diagram (500) illustrating a method for handling API call flows, according to various embodiments of the present disclosure.
[0083] At step 502, the method includes configuring the API QoS by using the API throttling and rate limit rule engine (418). The API throttling and rate limit rule engine (418) may be configured to optimize the QoS performance by configuring the plurality of configuration parameters at run-time based on the predefined rules defined in the API throttling and rate limit rule engine (418) for the specified API requested by the consumer (402).
[0084] In an exemplary aspect, the API throttling and rate limit rule engine (418) may be configured to suppress the at least one parameter, based on the configuration and attributes (e.g., latency, throughput, bandwidth, load balancing, error rate or the like) defined for the specific API. Further, additional parameters may also be configured for any specific request to be completed or fulfilled. The configuration and the attributes may be defined in a template. Further, these templates may be readily re-used for configuring parameters for other APIs. In another example, the API throttling and rate limit rule engine (418) may enable configuration for transforming data from one format to another, or for routing one request to multiple destinations.
[0085] Further, at step 504, the method includes initiating the API call by the consumer (402). At step 506, the method includes applying the AI/ML techniques, by using the API throttling and rate limit rule engine (418), to determine the optimized application and QoS rule to be applied for the API call. In an aspect of the present step, the Artificial Intelligence (AI)/Machine Learning (ML) techniques may be applied to the API throttling and rate limit rule engine (418), to implement the predefined rules defined in the API throttling and rate limit rule engine (418). The AI/ML techniques avoid sending the API calls to those API providers which may be facing resource crunch or any network fluctuation or any other problem, therefore advantageously ensuring efficient utilization of network resources.
[0086] Applying the AI/ML techniques to determine optimized application and QoS rules for the API calls involves leveraging data-driven insights to dynamically adjust rate limits, and throttling based on observed patterns and predictive analytics. The AI/ML techniques involve a data collection operation, a data preprocessing operation, a model training operation, a real-time analytics and optimization operation, and continuous improvement operation.
[0087] In the data collection operation, the data (e.g., timestamps, endpoints accessed, user IDs, and response times) is collected from a data source (e.g., website, or the like). The data preprocessing operation involves extracting relevant features from the collected data, such as request rates, average response times, peak usage times, and error rates. Further, the data preprocessing operation normalizes the data to ensure consistency, such as scaling request rates and response times to a common range.
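The normalization described above can be sketched as min-max scaling of each extracted feature to the common range [0, 1]; the sample request rates and response times are illustrative.

```python
def min_max_normalize(values):
    """Scale a feature column to the common range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant feature carries no signal
    return [(v - lo) / (hi - lo) for v in values]

request_rates = [10, 55, 100]     # requests per minute (illustrative)
response_times = [120, 300, 480]  # milliseconds (illustrative)

norm_rates = min_max_normalize(request_rates)
norm_times = min_max_normalize(response_times)
# norm_rates == [0.0, 0.5, 1.0]
```

Scaling both features to the same range keeps one feature (e.g., millisecond-scale response times) from dominating distance computations in the clustering step that follows.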
[0088] The model training operation uses a clustering algorithm (e.g., K-means or the like) to group users or requests into segments with similar behaviors. This helps in identifying patterns and setting specific QoS rules for each segment. The clustering model groups users based on their API call patterns into different segments: low, moderate, and high usage.
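A tiny one-dimensional k-means, written from scratch for illustration, shows the segmentation into low, moderate, and high usage. The call-rate data and initial centroids are hypothetical; a real deployment would likely use a library implementation over multiple features.

```python
def kmeans_1d(values, centroids, iterations=20):
    """Tiny 1-D k-means: segments API consumers by call rate."""
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assignment step: each value joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for v in values:
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Calls per hour for nine consumers (illustrative).
rates = [3, 5, 4, 60, 55, 58, 400, 420, 410]
centroids, segments = kmeans_1d(rates, centroids=[0, 100, 500])
# segments correspond to the low, moderate, and high usage groups
```

Each resulting segment can then be given its own QoS rule (e.g., a stricter rate limit for the high-usage group), which is the mapping from model output to the rule engine described in the surrounding steps.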
[0089] The real-time analytics and optimization operation continuously monitors the API usage and system performance in real-time. Further, the real-time analytics and optimization operation uses the trained models to dynamically adjust QoS rules based on current conditions and predictions. For example, the real-time analytics and optimization operation increases rate limits during predicted peak times for high-usage segments while reducing limits during off-peak times. The continuous improvement operation regularly retrains the models with new data to adapt to changing usage patterns and improve predictions. The continuous improvement operation implements a feedback mechanism to assess the effectiveness of QoS adjustments and refine the models based on real-world performance and user feedback.
[0090] Further, the method includes identifying whether the API call is sync or async. In an embodiment, the communicating unit (222) enables determination of whether the API call is in a single session or a single flow and whether it is sync or async. If the API call is async, then the response will be provided later on, once the API provider sends a response back.
[0091] Further, at step 508, the method includes processing and collecting the API response. At step 510, the method includes preparing the API response. Further, the method includes sending the API response to the consumer (402).
[0092] FIG. 6 is an example flow diagram (600) illustrating the method of handling API call flows, according to various embodiments of the present disclosure.
[0093] At step 602, the method includes creating the plurality of configuration parameters for the API call based on the predefined rules. In an embodiment, the method allows the processing unit (218) to create the plurality of configuration parameters for the API based on the predefined rules. The predefined rules are set by the API consumer (402) or the service provider. In an example, the predefined rules are related to the format of the API call.
[0094] At step 604, the method includes receiving an API call from a user equipment. Upon receiving the API call, the API call is initiated. The API call may be received at the framework. In an embodiment, the method includes receiving the API call from the UE (102) for the API subscription creation. In an embodiment, the method allows the receiving unit (216) to receive the API call from the UE (102) for the API subscription creation. The API call is the call made by the consumer (402) to the API endpoint to perform operations like retrieving or sending data. Optimizing QoS for the API call involves ensuring that the API calls are handled efficiently, reliably, and within acceptable performance metrics.
Further, at step 606, the method includes determining one or more configuration parameters to apply to the API call.
[0095] At step 608, a type of the API call is identified upon applying the one or more configuration parameters to the API call, wherein the type can be either sync or async. In an embodiment, the method allows the processing unit (218) to determine whether the API call is sync or async. Further, at step 610, the method includes processing the API response based on the type of the API call, and sending the API response to the consumer (402).
[0096] Below is the technical advancement of the present invention:
[0097] The system and the method further enable optimization of Quality of Service, consumer onboarding, API traffic management, response time of service, API service discovery, and management of multiple requests in a single API call in an effective manner. The system and method enable run-time configuration of the parameters and rules. The system further enables creation of rules dynamically.
[0098] Based on the proposed method, template-based Service API provisioning allows APIs to be created and managed on-demand. This on-demand approach improves the agility, flexibility, and cost-efficiency of the API development and management process, as the API is integrated dynamically.
[0099] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00101] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[00102] Environment - 100
[00103] UEs– 102, 102-1-102-n
[00104] Server - 104
[00105] Communication network – 106
[00106] System – 108
[00107] Processor – 202
[00108] Memory – 204
[00109] User Interface – 206
[00110] Display – 208
[00111] Input device – 210
[00112] Database – 214
[00113] Receiving unit– 216
[00114] Processing unit – 218
[00115] API engine – 220
[00116] Communicating unit – 224
[00117] System - 300
[00118] Primary processors -305
[00119] Memory– 310
[00120] Kernel– 315
[00121] Example system – 400
[00122] API consumer – 402
[00123] ELB unit - 404a, 404b
[00124] IAM unit – 406
[00125] API gateway – 408
[00126] Data transformation module – 410
[00127] API template configuration module – 412
[00128] API Traffic Management module – 414
[00129] Data manipulation module – 416
[00130] API throttling and rate limit rule engine – 418
[00131] Data ingestion module – 420
[00132] Protocol translation module – 422
[00133] API integration module – 424
[00134] API service repository – 426
[00135] Database system - 428
CLAIMS:
We Claim:
1. A method of handling Application Programming Interface (API) call flows, the method comprising the steps of:
configuring, by one or more processors (202), a plurality of configuration parameters based on rules defined in an API Quality of Service (QoS) engine;
receiving, by the one or more processors (202), an API call from a user equipment (102);
determining, by the one or more processors (202), one or more configuration parameters to apply to the API call;
identifying, by the one or more processors (202), a type of the API call upon applying the one or more configuration parameters to the API call, wherein the type can be one of a sync type or an async type; and
preparing, by the one or more processors (202), an API response based on the type of the API call.
2. The method as claimed in claim 1, wherein the plurality of configuration parameters for the API request are created in run-time, wherein the plurality of configuration parameters comprises at least one of: a response time parameter, a network optimization parameter, a request handling parameter, a concurrency parameter, an error handling parameter, a failover mechanism parameter, a secure request parameter, and a data integrity parameter.
3. The method as claimed in claim 1, further comprising:
determining, by the one or more processors (202), whether the API response is to be communicated for a single session or a plurality of sessions, based on the plurality of configuration parameters.
4. The method as claimed in claim 1, wherein the predefined rules are related to at least one of: a format of the API call, a size of the API call, an execution time of the API call, an encryption format of the API call and a decryption format of the API call.
5. The method as claimed in claim 1, wherein the API response defines a response structure, wherein the response structure comprises a status code, a response body, a mask sensitive data, and an access control data.
6. The method as claimed in claim 1, wherein the one or more processors, identifies that the API call is the sync type, when the API response is received from an API provider within a same session between the user and the API provider.
7. The method as claimed in claim 1, wherein the one or more processors, identifies that the API call is the async type, when the API response is received, from the API provider, irrespective of a session, correlated by a session identifier.
8. A system (108) for handling Application Programming Interface (API) call flows, wherein the system (108) comprises:
an API engine (220) configured to:
create a plurality of configuration parameters in run-time based on rules defined in the API engine (220);
a receiving unit (216) configured to:
receive an API call from a user equipment (102);
a processing unit (218) configured to:
determine one or more configuration parameters to apply to the API call; and
identify a type of the API call upon applying the one or more configuration parameters to the API call, wherein the type can be one of a sync type or an async type; and
a communicating unit (222) configured to:
prepare an API response based on the type of the API call.
9. The system (108) as claimed in claim 8, wherein the processing unit (218) is further configured to:
determine whether the API response is to be communicated for a single session or a plurality of sessions, based on the plurality of configuration parameters.
10. The system (108) as claimed in claim 8, wherein the plurality of configuration parameters comprises at least one of: a response time parameter, a network optimization parameter, a request handling parameter, a concurrency parameter, an error handling parameter, a failover mechanism parameter, a secure request parameter, and a data integrity parameter.
11. The system (108) as claimed in claim 8, wherein the predefined rules are related to at least one of: a format of the API call, a size of the API call, an execution time of the API call, an encryption format of the API call and a decryption format of the API call.
12. The system (108) as claimed in claim 8, wherein the API response defines a response structure, wherein the response structure comprises a status code, a response body, a mask sensitive data, and an access control data.
13. The system as claimed in claim 8, wherein the processing unit (218) identifies that the API call is the sync type, when the API response is received from an API provider within a same session between the user and the API provider.
14. The system as claimed in claim 8, wherein the processing unit (218) identifies that the API call is the async type, when the API response is received, from the API provider, irrespective of a session, correlated by a session identifier.
15. A User Equipment (UE) (102-1), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (202) of a system (108), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (102-1) to:
transmit an API call to the one or more processors;
wherein the one or more processors (202) are configured to perform the steps as claimed in claim 1.
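The sync/async distinction of claims 13 and 14, and the response structure of claim 12, can be sketched in Python as follows. This is a minimal illustration only: the function names (`classify_api_call`, `prepare_api_response`), the field names, and the masking rules are hypothetical assumptions, not part of the claimed system.

```python
import uuid

def classify_api_call(response_session_id: str, request_session_id: str) -> str:
    """Identify the call type per claims 13-14:
    sync  -> the provider's response arrives within the same session;
    async -> the response arrives independently of the original session
             and is correlated later via the session identifier."""
    return "sync" if response_session_id == request_session_id else "async"

def prepare_api_response(call_type: str, session_id: str, body: dict) -> dict:
    """Build a response structure per claim 12: a status code, a response
    body, masked sensitive data, and access control data.
    The sensitive-field list and scope value are illustrative."""
    response = {
        "status_code": 200,
        "body": {k: ("***" if k in {"token", "password"} else v)
                 for k, v in body.items()},  # mask sensitive fields
        "access_control": {"scope": "user"},
    }
    if call_type == "async":
        # Async responses carry the session id so the caller can
        # correlate them outside the original session (claim 14).
        response["correlation_id"] = session_id
    return response

# Usage: a response returned within the same session is classified sync.
sid = str(uuid.uuid4())
kind = classify_api_call(sid, sid)
resp = prepare_api_response(kind, sid, {"user": "alice", "token": "secret"})
```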
| # | Name | Date |
|---|---|---|
| 1 | 202321060143-STATEMENT OF UNDERTAKING (FORM 3) [07-09-2023(online)].pdf | 2023-09-07 |
| 2 | 202321060143-PROVISIONAL SPECIFICATION [07-09-2023(online)].pdf | 2023-09-07 |
| 3 | 202321060143-FORM 1 [07-09-2023(online)].pdf | 2023-09-07 |
| 4 | 202321060143-FIGURE OF ABSTRACT [07-09-2023(online)].pdf | 2023-09-07 |
| 5 | 202321060143-DRAWINGS [07-09-2023(online)].pdf | 2023-09-07 |
| 6 | 202321060143-DECLARATION OF INVENTORSHIP (FORM 5) [07-09-2023(online)].pdf | 2023-09-07 |
| 7 | 202321060143-FORM-26 [17-10-2023(online)].pdf | 2023-10-17 |
| 8 | 202321060143-Proof of Right [12-02-2024(online)].pdf | 2024-02-12 |
| 9 | 202321060143-DRAWING [07-09-2024(online)].pdf | 2024-09-07 |
| 10 | 202321060143-COMPLETE SPECIFICATION [07-09-2024(online)].pdf | 2024-09-07 |
| 11 | Abstract 1.jpg | 2024-10-03 |
| 12 | 202321060143-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 13 | 202321060143-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 14 | 202321060143-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 15 | 202321060143-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321060143-FORM 3 [29-01-2025(online)].pdf | 2025-01-29 |
| 17 | 202321060143-Power of Attorney [03-02-2025(online)].pdf | 2025-02-03 |
| 18 | 202321060143-Form 1 (Submitted on date of filing) [03-02-2025(online)].pdf | 2025-02-03 |
| 19 | 202321060143-Covering Letter [03-02-2025(online)].pdf | 2025-02-03 |
| 20 | 202321060143-CERTIFIED COPIES TRANSMISSION TO IB [03-02-2025(online)].pdf | 2025-02-03 |
| 21 | 202321060143-FORM 18 [20-03-2025(online)].pdf | 2025-03-20 |