
Method and System for Performing Dynamic Application Programming Interface (API) Orchestration

Abstract: The present disclosure relates to a method of performing dynamic API orchestration by one or more processors (202). The method includes comparing a new API with each API existing in a list of APIs. Further, the method includes identifying that the new API matches an existing API in the list of APIs based on the comparison. Further, the method includes determining a type of the new API based on the identified match of the new API with the existing API. Further, the method includes generating one or more API responses based on the type of the new API. Further, the method includes sending a final API response to the user equipment (102), subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file. Ref. FIG. 5


Patent Information

Application #
Filing Date
14 July 2023
Publication Number
03/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
2. Sandeep Bisht
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
3. Suman Singh Kanwer
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
4. Ankur Mishra
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR PERFORMING DYNAMIC APPLICATION PROGRAMMING INTERFACE (API) ORCHESTRATION

2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of application programming interfaces (APIs) and more particularly, the invention pertains to a system and method for performing runtime API orchestration.
BACKGROUND OF THE INVENTION
[0002] Generally, systems integrate a plurality of applications into a single service using API orchestration. In operation, a system incorporating API orchestration involves responding to a single API request by making multiple calls to multiple different services or microservices. An API gateway is provided to perform the function of an intermediary between a client and a backend of the system. The API gateway is configured to perform various operations such as orchestration, caching, monitoring and transformations.
[0003] A problem with the API orchestration occurs when there is a need to add new APIs or change existing APIs at run time. Currently, when a new API is required to be added, or an existing API needs to be modified, or multiple API calls need to be managed, one needs to make changes to a code or logic in the system, get the same approved and get the functionality tested before making it a part of the system. This is a long and tedious process which results in loss of time and resources.
[0004] In order to avoid such loss of time and resources, there is a need for a system and method that can add new APIs and manage multiple API calls dynamically in real time. Accordingly, a system and method for performing runtime API orchestration is proposed.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a system and a method for performing a dynamic application programming interface orchestration.
[0006] In one aspect of the present invention, a method of performing dynamic API orchestration is provided. The method includes receiving, by one or more processors, an API call from a user equipment, where the API call relates to integration of a new API. Further, the method includes determining, by the one or more processors, existence of an API configuration file, wherein the API configuration file comprises a list of APIs. Further, the method includes comparing, by the one or more processors, the new API with each API existing in the list of APIs. Further, the method includes identifying, by the one or more processors, that the new API matches with the existing API in the list of APIs based on the comparison. Further, the method includes determining, by the one or more processors, a type of the new API based on identified match of the new API with the existing API. Further, the method includes generating, by the one or more processors, one or more API responses based on the type of the new API. Further, the method includes sending, by the one or more processors, a final API response to the user equipment, subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file.
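The claimed flow (receive the call, check the configuration file, compare, determine the type, generate responses, assemble the final response) can be sketched in Python as follows. The configuration layout, field names, and matching rule are assumptions made purely for illustration; the specification does not prescribe any of them:

```python
# A minimal sketch of the orchestration flow described above. The config
# layout, field names, and matching rule are illustrative assumptions only.

def orchestrate(new_api, config):
    """Compare an incoming API against the configured list, generate one
    response per match, and assemble the final response."""
    responses = []
    for existing in config["apis"]:              # compare with each API in the list
        if existing["name"] == new_api["name"]:  # identify a match
            api_type = existing["type"]          # determine the sync/async type
            responses.append(call_backend(existing, api_type))
    # The final response combines the generated responses with the
    # parameters carried in the configuration file.
    return {"parameters": config.get("parameters", {}), "responses": responses}

def call_backend(api_entry, api_type):
    """Placeholder for the actual backend invocation."""
    return {"api": api_entry["name"], "type": api_type, "status": "ok"}

config = {
    "apis": [{"name": "getUser", "type": "sync"},
             {"name": "getUser", "type": "async"}],
    "parameters": {"timeout_ms": 2000},
}
final = orchestrate({"name": "getUser"}, config)
```

Here every matching entry yields one response, mirroring the "repeat for the other match" step of the claims.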
[0007] In an embodiment, the method includes receiving, by the one or more processors, the new API for integration via the API call. Further, the method includes configuring, by the one or more processors, the new API at runtime.
[0008] In an embodiment, comparing, by the one or more processors, the new API with each API existing in the list of APIs, includes the steps of determining, by the one or more processors, similarities between the new API and each of the API existing in the list of APIs.
[0009] In an embodiment, the type of the new API includes at least one of: an asynchronous (async) type and a synchronous (sync) type.
[0010] In an embodiment, the step of generating the one or more API responses based on an async type of the new API, further includes initiating, by the one or more processors, an API call, collecting, by the one or more processors, an API response for preparing the final API response, checking, by the one or more processors, for another match for the new API from the list of APIs, and repeating, by the one or more processors, the step of generating an API response for the other match.
[0011] In an embodiment, the step of generating the one or more API response based on a sync type of the new API, further includes initiating, by the one or more processors, an API call, collecting, by the one or more processors, an API response for preparing the final API response, wherein the API response is awaited before the collection of the API response, checking, by the one or more processors, for another match for the new API from the list of APIs, and repeating, by the one or more processors, the step of generating an API response for the other match.
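The sync/async distinction drawn in paragraphs [0010] and [0011] can be sketched with Python's asyncio: for a sync-type API the response is awaited before collection, while async-type calls are initiated immediately and their responses collected later. The function and field names are assumed for illustration; the specification is implementation-agnostic:

```python
import asyncio

async def invoke(api_name):
    """Stand-in for a network call to a backend service."""
    await asyncio.sleep(0)  # simulate I/O
    return {"api": api_name, "status": "ok"}

async def generate_responses(matches):
    """matches: (name, type) pairs matched from the API configuration file."""
    responses, pending = [], []
    for name, api_type in matches:
        if api_type == "sync":
            # Sync type: the API response is awaited before collection.
            responses.append(await invoke(name))
        else:
            # Async type: initiate the call now; collect the response later.
            pending.append(asyncio.create_task(invoke(name)))
    responses.extend(await asyncio.gather(*pending))
    return responses

result = asyncio.run(generate_responses(
    [("profile", "sync"), ("audit", "async"), ("orders", "sync")]))
```

All responses, sync and async alike, end up in one collection for preparing the final API response, as paragraph [0012] describes.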
[0012] In an embodiment, the final API response is a collection of one or more API responses generated, wherein an API response is generated whenever the new API matches with the API in the list of APIs.
[0013] In another aspect of the present invention, a system for performing dynamic API orchestration is provided. The system includes an API gateway configured to receive an API call from a user equipment, where the API call relates to integration of a new API. Further, the API gateway is configured to determine existence of an API configuration file, wherein the API configuration file comprises a list of APIs. Further, the API gateway is configured to compare the new API with each API existing in the list of APIs. Further, the API gateway is configured to identify that the new API matches with the existing API in the list of APIs based on the comparison. Further, the API gateway is configured to determine a type of the new API based on identified match of the new API with the existing API. Further, the API gateway is configured to generate one or more API responses based on the type of the new API. Further, the API gateway is configured to send a final API response to the user equipment, subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file.
[0014] In an embodiment, the API gateway is configured to read the API configuration file upon receiving the API call related to the new API. Further, the API gateway is configured to iterate through the list of APIs present in the API configuration file. Further, the API gateway is configured to initiate an API call when the new API matches with an API in the list of APIs.
[0015] In an embodiment, the API gateway is configured to collect an API response upon initiating the API call, wherein the API response is awaited before the collection of the API response, when the type of the new API is sync, and wherein the API response is based on a type of the new API. Further, the API gateway is configured to store the API response for processing the final API response. Further, the API gateway is configured to check for another match for the new API from the list of APIs. Further, the API gateway is configured to repeat the step of collecting an API response for the other match.
[0016] In an embodiment, the API gateway is configured to collect an API response upon initiating the API call, when the type of the new API is async, and wherein the API response is based on a type of the new API. Further, the API gateway is configured to store the API response for processing the final API response. Further, the API gateway is configured to check for another match for the new API from the list of APIs. Further, the API gateway is configured to repeat the step of collecting an API response for the other match.
[0017] In an embodiment, an API orchestration module is activated at run time upon receiving the API call at the API gateway, wherein the activation comprises determining a match of the new API within the list of APIs stored in the API configuration file.
[0018] In an embodiment, an API orchestration module is configured to store the list of APIs in an API configuration file.
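As an illustration only, the API configuration file holding the list of APIs could be a JSON document; the field names below are hypothetical, since the specification does not fix a format:

```python
import json

# A hypothetical API configuration file (field names are illustrative).
API_CONFIG = """
{
  "apis": [
    {"name": "getUserProfile", "type": "sync",  "endpoint": "http://users/profile"},
    {"name": "sendAuditEvent", "type": "async", "endpoint": "http://audit/events"}
  ],
  "parameters": {"timeout_ms": 2000, "merge_strategy": "collect_all"}
}
"""

config = json.loads(API_CONFIG)

def find_matches(api_name, config):
    """Return every configured API entry matching the incoming API name."""
    return [api for api in config["apis"] if api["name"] == api_name]
```

Storing the list this way lets the gateway iterate through the entries at run time without any code changes.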
[0019] In an embodiment, the final API response is generated subsequent to collecting the one or more API responses by an API response collection module of the API gateway.
[0020] In another aspect of the present invention, a non-transitory computer-readable medium is provided, having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: receive an API call from a user equipment, wherein the API call relates to integration of a new API; determine existence of an API configuration file, wherein the API configuration file comprises a list of APIs; compare the new API with each API existing in the list of APIs; identify that the new API matches with the existing API in the list of APIs based on the comparison; determine a type of the new API based on the identified match of the new API with the existing API; generate one or more API responses based on the type of the new API; and send a final API response to the user equipment, subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file.
[0021] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0023] FIG. 1 is an exemplary block diagram of an environment for performing a dynamic application programming interface orchestration, according to various embodiments of the present disclosure.
[0024] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
[0025] FIG. 3 is an example schematic representation of the system of FIG. 1, in which the operations of various entities are explained, according to various embodiments of the present disclosure.
[0026] FIG. 4 illustrates a system for performing dynamic API orchestration, according to various embodiments of the present disclosure.
[0027] FIG. 5 illustrates a method for performing the dynamic API orchestration, according to various embodiments of the present disclosure.
[0028] FIG. 6 is an example flow diagram illustrating the method for performing the dynamic application programming interface orchestration, according to various embodiments of the present disclosure.
[0029] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention, so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
[0030] The foregoing shall be more apparent from the following detailed description of the invention.

DETAILED DESCRIPTION OF THE INVENTION
[0031] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0032] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated, but is to be accorded the widest scope consistent with the principles and features described herein.
[0033] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0034] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as schematic representations, and elements are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software or a combination thereof.
[0035] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0036] Further, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0037] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0038] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0039] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0040] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0041] Various embodiments of the invention provide a method of performing dynamic API orchestration. The method includes receiving, by one or more processors, an API call from a user equipment, where the API call relates to integration of a new API. Further, the method includes determining, by the one or more processors, existence of an API configuration file, where the API configuration file comprises a list of APIs. Further, the method includes comparing, by the one or more processors, the new API with each API existing in the list of APIs. Further, the method includes identifying, by the one or more processors, that the new API matches with the existing API in the list of APIs based on the comparison. Further, the method includes determining, by the one or more processors, a type of the new API based on identified match of the new API with the existing API. Further, the method includes generating, by the one or more processors, one or more API responses based on the type of the new API. Further, the method includes sending, by the one or more processors, a final API response to the user equipment, subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file.
[0042] Based on the proposed method and system, an API gateway is configured to edit or add new APIs and change the behaviour of a plurality of APIs without having to make any changes to the code. For example, changing the behaviour of an API from sync to async, and vice versa, is manageable by the API gateway.
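To illustrate why no code change is needed, under an assumed configuration layout (field names hypothetical) the handling mode of an API is just a data field the gateway reads at run time, so a sync-to-async switch is a one-field configuration edit:

```python
# Illustrative only: the API's handling mode lives in configuration, so a
# sync-to-async switch is a data change, not a code change.
entry = {"name": "getOrderStatus", "type": "sync"}

def dispatch_mode(entry):
    """The gateway reads the handling mode from configuration at run time."""
    return "await-before-collect" if entry["type"] == "sync" else "collect-later"

before = dispatch_mode(entry)
entry["type"] = "async"  # configuration edit only; dispatch_mode is untouched
after = dispatch_mode(entry)
```

The same gateway code path now treats the API as async, because the branch is driven entirely by the configured type.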
[0043] FIG. 1 illustrates an exemplary block diagram of an environment (100) for performing a dynamic application programming interface (API) orchestration, according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) 102-1, 102-2, …, 102-n. At least one UE (102-n) from the plurality of UEs (102-1, 102-2, …, 102-n) is configured to connect to a system (108) via a communication network (106). Hereafter, the plurality of UEs, or one or more UEs, is labelled 102.
[0044] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be wireless devices or communication devices that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities. In an embodiment, the UEs may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices (or an input unit) for receiving input from a user, such as a touch pad, a touch-enabled screen, an electronic pen, and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices, and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, or a landline with an assigned extension, within the communication network (106).
[0045] The plurality of UEs (102) may comprise a memory such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory. In one implementation, the memory might be configured or designed to store data. The data may pertain to attributes and access rights specifically defined for the plurality of UEs (102). The UE (102) may be accessed by the user to receive the requests related to an order determined by the system (108). The communication network (106) may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0046] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
[0047] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0048] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0049] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3GPP or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
[0050] 3GPP: The term “3GPP” refers to the 3rd Generation Partnership Project, a collaborative project between a group of telecommunications associations with the initial goal of developing globally applicable specifications for Third Generation (3G) mobile systems. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications. The 3GPP specifications also provide hooks for non-radio access to the core network, and for networking with non-3GPP networks.
[0051] The system (108) may include one or more processors (202) coupled with a memory (204), wherein the memory (204) may store instructions which, when executed by the one or more processors (202), may cause the system (108) to execute requests in the communication network (106) or the server (104). An exemplary representation of the system (108) for such purpose, in accordance with embodiments of the present disclosure, is shown in FIG. 2 as system (108). In an embodiment, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
[0052] The environment (100) further includes the system (108) communicably coupled to the remote server (104) and each UE of the plurality of UEs (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0053] The system (108) is adapted to be embedded within the remote server (104) or is embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for a workflow associated with an API call, which is reflected in real time independent of the complexity of the network.
[0054] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprise entities, ecommerce entities, and finance entities to update/create/delete information related to the API call in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the workflow and perform real-time analysis in the system (108).
[0055] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an implementation, the system (108) may operate at various entities or a single entity (for example, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an ecommerce side, a finance side, a defence facility side, or any other facility) that provides a service.
[0056] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0057] FIG. 2 illustrates a block diagram of the system (108) provided for performing dynamic API orchestration, according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), an interface (e.g., input/output interface unit, user interface or the like) (206), a display (208), an input unit (210), and a centralized database (or database) (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0058] The information related to the API call may be provided or stored in the memory (204) of the system (108). The information in the API call refers to the data, parameters, headers, or metadata that a client application includes when making a request to an API endpoint. This information is essential for the API provider to process the request correctly and generate an appropriate response. Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, flash memory, unalterable memory, and the like.
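The kinds of information carried in an API call, as enumerated above, can be illustrated with a minimal sketch. All endpoint names, header values, and parameters below are assumptions for illustration only, not part of the disclosed system.

```python
# A hypothetical illustration of the information a client application may
# include when making a request to an API endpoint: endpoint metadata,
# headers, query parameters, and a request body. All names and values
# here are assumptions for illustration only.
api_call = {
    "endpoint": "/v1/orders",            # target API endpoint (hypothetical)
    "method": "POST",                    # HTTP method of the request
    "headers": {
        "Authorization": "Bearer <token>",    # authentication metadata
        "Content-Type": "application/json",   # body encoding
    },
    "params": {"locale": "en-IN"},       # query parameters
    "body": {"item_id": 42, "qty": 1},   # request payload
}

def validate_call(call: dict) -> bool:
    """Check that the call carries the pieces an API provider needs to
    process the request correctly and generate an appropriate response."""
    required = {"endpoint", "method", "headers"}
    return required.issubset(call)
```

A provider-side component would typically run such a validation before dispatching the request further.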
[0059] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0060] The information related to the API call may further be configured to render on the user interface (206). The user interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input unit (210) may include, but is not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0061] The centralized database (214) may be communicably connected to the processor (202) and the memory (204). The centralized database (214) may be configured to store and retrieve the requests pertaining to features, services, or workflows of the system (108), access rights, attributes, an approved list, and authentication data provided by an administrator. Further, the remote server (104) may allow the system (108) to update/create/delete one or more parameters of the information related to the API call, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the centralized database (214) may be located outside the system (108) and communicably coupled through a wired or wireless medium.
[0062] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0063] In order for the system (108) to perform the dynamic application programming interface (API) orchestration, the processor (202) may include an API gateway (216) and an API orchestration module (218) (for example). The API gateway (216) and the API orchestration module (218) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by the electronic circuitry.
[0064] In order for the system (108) to perform the dynamic application programming interface (API) orchestration, the API gateway (216) and the API orchestration module (218) are communicably coupled to each other. In an example embodiment, the API gateway (216) receives an API call from the UE (102), where the API call relates to integration of a new API. Further, the API gateway (216) determines existence of an API configuration file, where the API configuration file includes a list of APIs. The list of APIs in the API configuration file is stored in the API orchestration module (218).
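The API configuration file described above, holding the list of APIs, might be structured as follows. This is a minimal sketch: the field names ("apis", "name", "type", "parameters") and file format are assumptions for illustration, not the actual format used by the system (108).

```python
import json
import os
import tempfile

# A hypothetical shape for the API orchestration configuration file,
# listing each API together with its type (sync/async) and parameters.
CONFIG = {
    "apis": [
        {"name": "getCustomer", "type": "sync",
         "parameters": {"log_level": "INFO"}},
        {"name": "notifyCRM", "type": "async",
         "parameters": {"log_file": "/var/log/crm.log"}},
    ]
}

def load_api_list(path: str):
    """Determine the existence of the configuration file and return the
    list of APIs, or None when the file is absent (error case)."""
    if not os.path.exists(path):
        return None  # the gateway would prepare an error message here
    with open(path) as fh:
        return json.load(fh)["apis"]

# Demonstration: write the sketch config to a temporary file and load it.
_tmp = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump(CONFIG, _tmp)
_tmp.close()
loaded = load_api_list(_tmp.name)
```

Returning None for a missing file mirrors the error branch of the flowchart of FIG. 6, where an error message is sent when the configuration file is unavailable.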
[0065] Further, the API gateway (216) compares the new API with each API existing in the list of APIs. In an embodiment, the API gateway (216) compares the new API with each API existing in the list of APIs by determining similarities between the new API and each of the APIs existing in the list of APIs. In an example, the similarities are determined based on the API orchestration configuration file entries of the new API and each of the APIs existing in the list of APIs. In another example, the similarities are determined based on a parameter of the new API and each of the APIs existing in the list of APIs. The parameter can be, for example, but not limited to, logging levels, log file locations, integration with monitoring tools, error message information, or the like. Further, the API gateway (216) identifies that the new API matches with the existing API in the list of APIs based on the comparison. Further, the API gateway (216) determines a type of the new API based on the identified match of the new API with the existing API. In an embodiment, the type of the new API includes at least one of: an asynchronous (async) type and a synchronous (sync) type. The sync API (or synchronous API) operates in a straightforward manner where each request made by a client waits for a response from the server (104) before proceeding further. In other words, when a client sends a request to a synchronous API endpoint, it blocks and waits until it receives a complete response from the server (104). The async API (or asynchronous API) operates differently by allowing clients to send requests without waiting for an immediate response. Instead of blocking, the client can continue with other tasks or processes while waiting for the response from the server (104).
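The comparison and type-determination steps above can be sketched as follows, assuming a simple name-based similarity check over illustrative data shapes; the actual comparison may use configuration-file entries or parameters as described.

```python
# A sketch, under assumed data shapes, of comparing a new API against
# each API in the configured list and determining its type (sync/async)
# from the matched entry. Names are illustrative assumptions.
API_LIST = [
    {"name": "getCustomer", "type": "sync"},
    {"name": "notifyCRM", "type": "async"},
]

def match_api(new_api: str, api_list):
    """Compare the new API with each existing API; return the matched
    entry, or None when no similarity is found."""
    for entry in api_list:
        if entry["name"] == new_api:  # similarity check (name-based here)
            return entry
    return None

def api_type(new_api: str, api_list):
    """Determine the type of the new API based on the identified match."""
    match = match_api(new_api, api_list)
    return match["type"] if match else None
```

The returned type then selects between the sync and async response-generation paths described in the following paragraphs.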
[0066] Further, the API gateway (216) generates one or more API responses based on the type of the new API. In an embodiment, the one or more API responses, based on the async type of the new API, are generated by initiating the API call, collecting an API response for preparing the final API response, checking for another match for the new API from the list of APIs, and repeating the process of generating an API response for the other match. The final API response is generated based on the one or more API responses and the plurality of parameters provided in the API configuration file.
[0067] In an embodiment, the one or more API responses, based on a sync type of the new API, are generated by initiating an API call, collecting an API response for preparing the final API response, wherein the API response is awaited before its collection, checking for another match for the new API from the list of APIs, and repeating the process of generating an API response for the other match.
[0068] Further, the API gateway (216) sends a final API response to the user equipment, subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file. The parameters, in the context of the service API provided by the API provider, refer to specific settings or options that can be customized or configured by developers or users of the API. These parameters typically influence the behavior, functionality, or performance of the API and may vary depending on the API provider and the specific API. The parameters may include settings related to logging levels, log file locations, integration with monitoring tools, or specifying how detailed error messages should be. In an embodiment, the final API response is a collection of the one or more API responses generated, where an API response is generated whenever the new API matches with an API in the list of APIs. For example, a successful response gets recorded when the new API matches with an API in the list of APIs. Further, an unsuccessful response gets recorded when the new API does not match with any API in the list of APIs. A successful response means all the required configurations or scripts for the API orchestration are present in the system (108). An unsuccessful response means that an admin or the user of the UE (102) has to perform all the configuration or provide all the required scripts for orchestration of the API.
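The assembly of the final API response as a collection of per-API responses, with successful and unsuccessful outcomes recorded as described above, can be sketched as follows. The data shapes and status labels are illustrative assumptions.

```python
# A hedged sketch of preparing the final API response: a successful
# response is recorded for each match against the list of APIs, and an
# unsuccessful response otherwise. Field names are assumptions.
def build_final_response(new_apis, api_list, parameters):
    responses = []
    for api in new_apis:
        if api in api_list:
            # required configurations/scripts are present in the system
            responses.append({"api": api, "status": "success"})
        else:
            # admin/user must supply the configuration or scripts
            responses.append({"api": api, "status": "unsuccessful"})
    # the plurality of parameters from the configuration file (logging
    # levels, error detail, etc.) shapes the final response
    return {"responses": responses, "parameters": parameters}
```

The resulting collection is what would be sent back to the user equipment as the final API response.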
[0069] In an embodiment, the API gateway (216) reads the API configuration file upon receiving the API call related to the new API. Further, the API gateway (216) iterates through the list of APIs present in the API configuration file. Further, the API gateway (216) initiates the API call when the new API matches with an API in the list of APIs.
[0070] In an embodiment, the API gateway (216) collects an API response upon initiating the API call, where the API response is awaited before the collection of the API response, when the type of the new API is sync, and where the API response is based on a type of the new API. The list of APIs in the API configuration file is stored in the API orchestration module (218). Further, the API gateway (216) stores the API response for processing the final API response. Further, the API gateway (216) checks for another match for the new API from the list of APIs. Further, the API gateway (216) repeats the step of collecting an API response for the other match.
[0071] In an embodiment, the API gateway (216) collects an API response upon initiating the API call, when the type of the new API is async, and where the API response is based on a type of the new API. Further, the API gateway (216) stores the API response for processing the final API response. Further, the API gateway (216) checks for another match for the new API from the list of APIs. Further, the API gateway (216) repeats the step of collecting an API response for the other match.
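The collection behaviour of the two embodiments above can be sketched with Python's asyncio: for a sync-type API the response is awaited before collection, while for an async-type API the call is initiated and the response is collected later without blocking the next call. The call itself is simulated; all names are illustrative assumptions, not the actual implementation.

```python
import asyncio

async def call_api(name):
    """Stand-in for initiating an API call over the network."""
    await asyncio.sleep(0)  # simulated round trip
    return f"{name}-response"

async def collect_responses(apis):
    """apis: iterable of (name, type) pairs, type being 'sync' or 'async'."""
    collected, pending = [], []
    for name, kind in apis:
        if kind == "sync":
            # sync: the response is awaited before its collection
            collected.append(await call_api(name))
        else:
            # async: initiate the call and continue to the next API
            pending.append(asyncio.create_task(call_api(name)))
    if pending:
        # gather the async responses for processing the final response
        collected.extend(await asyncio.gather(*pending))
    return collected
```

Running `asyncio.run(collect_responses([("a", "sync"), ("b", "async")]))` illustrates that the sync call blocks in place while the async call's result is gathered afterwards.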
[0072] In an embodiment, the API orchestration module (218) is activated at run time upon receiving the API call at the API gateway (216), and where the activation includes determining a match of the new API within the list of APIs stored in the API configuration file.
[0073] In an embodiment, the API orchestration module (218) is configured to store the list of APIs in an API configuration file.
[0074] In an embodiment, the API gateway (216) receives the new API for integration via the API call and configures the new API at runtime.
[0075] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of the various entities are explained, according to various embodiments of the present system. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration and should not be construed as limiting the scope of the present disclosure.
[0076] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to execute the requests in the communication network (106).
[0077] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the request to the UE (102-1). More specifically, the one or more processors (202) of the system (108) are configured to transmit the response content to at least one of the UEs, such as the UE (102-1). A kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106). The resources include at least one of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read-Only Memory (ROM).
[0078] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210). The operations and functions of the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210) are already explained in FIG. 2. For the sake of brevity, we are not explaining the same operations (or repeated information) in the patent disclosure. Further, the processor (202) includes the API gateway (216) and the API orchestration module (218). The operations and functions of the API gateway (216) and the API orchestration module (218) are already explained in FIG. 2. For the sake of brevity, we are not explaining the same operations (or repeated information) in the patent disclosure.
[0079] FIG. 4 depicts a system (400), in which various embodiments of the present invention can be practiced. The system (400) includes an API service repository (402), an API gateway (216), one or more Elastic Load Balancers (ELBs) (416a, 416b), and an Identity and Access Management (IAM) (418). The API service repository (402) includes a plurality of Service APIs (404a-404n). The API gateway (216) includes an API response collector module (408), an API orchestration configuration module (410), an API Sync call module (412), and an API Async call module (414). The ELBs (416a, 416b) automatically distribute incoming application traffic across multiple targets and virtual appliances in one or more availability zones. The IAM (418) is used for authentication and authorization of third-party consumers.
[0080] In order to achieve runtime or dynamic API orchestration, all the API calls, along with the type of each API call, such as an Async or Sync API call, are configured in the API orchestration configuration file (hereinafter referred to as the “configuration file”). In order to achieve such dynamic API orchestration, all the API calls, whether Sync or Async, are configured one by one into the API orchestration configuration module (410). The API orchestration functionality is automatically activated at run time upon receiving an API call from the API consumer (420) by mapping the received API call to the list of API calls stored in the API orchestration configuration module (410).
[0081] The API orchestration is automatically reflected at run time when the API call is initiated from an API consumer (420). The API gateway (216) reads the API orchestration configuration module (410) and iterates through all the APIs present in the configuration file, and each API call is initiated sequentially. In case an API is async, the call is initiated and the API response is automatically stored for processing the final API response. The next API call is then processed. In case the API is sync, the API call is initiated and an API response is awaited. In synchronous (sync) APIs, the response is awaited because of the fundamental design principle that each request made by a client blocks and waits until it receives a complete response from the server before proceeding further. This waiting or blocking behavior is intrinsic to how synchronous APIs operate and is influenced by several factors (e.g., sequential processing, or the like). The API response is processed and stored to prepare the final API response. When all the API calls mentioned in the API orchestration configuration file are completed, all the responses from the APIs are collected, and the final response of all the APIs is prepared and stored in the API response collector module (408). The final API response is then provided to the API consumer (420).
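The end-to-end run-time flow just described, reading the configuration, iterating through the configured APIs sequentially, initiating each call, and collecting every response to prepare the final response, can be sketched as follows. The function and field names are illustrative assumptions.

```python
# A sketch of the run-time orchestration loop: iterate the configuration
# file, initiate each configured API call sequentially, and store every
# response in a collector to prepare the final API response.
def orchestrate(config, initiate_call):
    """config: list of {"name": ..., "type": ...} entries from the
    configuration file; initiate_call: callable simulating an API call
    that returns a response. Both shapes are assumptions."""
    collector = []                        # API response collector
    for entry in config:                  # iterate the configuration file
        response = initiate_call(entry["name"])
        # sync: the response is awaited before moving on; async: stored
        # and the next call is processed (both collapse to a store here,
        # since initiate_call is synchronous in this sketch)
        collector.append({"api": entry["name"], "response": response})
    return {"final_response": collector}  # provided to the API consumer
```

In the described system the collector's role corresponds to the API response collector module (408), and the final dictionary to the final API response returned to the API consumer (420).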
[0082] Dynamic addition, deletion, and updating of APIs in the configuration file is done in real time depending on the type of the API call (e.g., Sync or Async). A method of performing the API orchestration is described in FIG. 6.
[0083] FIG. 5 is a flow chart (500) illustrating a method for performing the dynamic application programming interface orchestration, according to various embodiments of the present system. At step 502, the method includes receiving the API call from the user equipment (102), where the API call relates to integration of the new API. At step 504, the method includes determining existence of the API configuration file, where the API configuration file includes the list of APIs. At step 506, the method includes comparing the new API with each API existing in the list of APIs.
[0084] At step 508, the method includes identifying that the new API matches with the existing API in the list of APIs based on the comparison. At step 510, the method includes determining the type of the new API based on identified match of the new API with the existing API. At step 512, the method includes generating the one or more API responses based on the type of the new API.
[0085] At step 514, the method includes sending the final API response to the user equipment, subsequent to generating the final API response based on the one or more API responses generated and the plurality of parameters provided in the API configuration file.
[0086] FIG. 6 is a flowchart 600 depicting a method of performing dynamic run time API orchestration.
[0087] At step 602, the new API for integrating or updating into the system (108) is received. At 604, the API to be configured is received by the API gateway (216) at runtime.
[0088] At 606, the API call is initiated by the API consumer (420). At 608, the API gateway (216) checks if an API orchestration configuration file is present. At 610, an error message is prepared and sent to the user, in case the API orchestration configuration file is not available. The method then comes to a halt.
[0089] At 612, in case the configuration file is present, then all the APIs that have been configured in the API orchestration configuration file are listed, and the API gateway (216) iterates through each API one by one.
[0090] At 614, the new API to be integrated is compared against each API present in the API orchestration configuration file. A check is made to determine if the new API already exists in the configuration file. If the new API does not exist in the configuration file, then the final API response is sent to the consumer at step 622.
[0091] At 616, in case the new API matches with an existing API in the configuration file, a check is made to identify if the API is async.
[0092] If the API is not async (i.e., the API is sync), the method proceeds to step 618. At step 618, the API call is initiated, and the response is awaited before collecting the API response. The API response is used to prepare the final API response as per the parameters provided in the orchestration configuration file.
[0093] At 620, in case the new API is async, the API call is initiated and an API response is collected for preparing the final API response.
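The decision flow of steps 606-622 above can be sketched as a single dispatch function, assuming simple data shapes; the error branch mirrors step 610, and the response strings stand in for real API responses.

```python
# A sketch of the flowchart of FIG. 6 as a decision flow. Data shapes
# and response strings are illustrative assumptions only.
def handle_call(new_api, config_file):
    if config_file is None:                        # steps 608/610: no file
        return {"error": "configuration file not available"}
    match = next((a for a in config_file["apis"]
                  if a["name"] == new_api), None)  # steps 612/614: iterate
    if match is None:                              # step 622: no match
        return {"final": []}
    if match["type"] == "async":                   # step 616: async check
        response = f"{new_api}-initiated"          # step 620: collect later
    else:
        response = f"{new_api}-response"           # step 618: awaited
    return {"final": [response]}
```

Calling `handle_call` with a missing configuration file returns the error message, matching the halt at step 610.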
[0094] The advantage of the disclosed process is that dynamic API orchestration is achieved at run time. New APIs can be added into the system (108) without having to change the code. Further, existing APIs can also be modified and updated in the configuration file. For example, if the API call changes from sync to async, the response is noted in the configuration file. In a subsequent API call, the response that was stored in the configuration file for the modified API is provided to the user.
[0095] As a result, productivity of the system is improved and time management is better, as code changes to the system to accommodate new APIs are eliminated. The deployment and microservice complexities are overcome. Overall, the process of adding anything new into the API orchestration is made feasible and convenient.
[0096] The API gateway (216) is configured to edit or add new APIs and change the behaviour of the plurality of APIs by making changes to the API orchestration configuration file. The dynamic and runtime API orchestration is achieved without making changes to the code.
[0097] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0098] Method steps: A person of ordinary skill in the art will readily ascertain that the illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0099] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[00100] Environment - 100
[00101] UEs– 102, 102-1-102-n
[00102] Server - 104
[00103] Communication network – 106
[00104] System – 108
[00105] Processor – 202
[00106] Memory – 204
[00107] Interface – 206
[00108] Display – 208
[00109] Input unit – 210
[00110] Centralized Database – 214
[00111] API gateway – 216
[00112] API orchestration module - 218
[00113] System - 300
[00114] Primary processors -305
[00115] Memory– 310
[00116] Kernel– 315
[00117] System - 400
[00118] API services repository – 402
[00119] Service APIs – 404
[00120] API response collector – 408
[00121] API orchestration configuration module – 410
[00122] API Sync call module – 412
[00123] API ASync call module - 414
[00124] ELB – 416a, 416b
[00125] IAM – 418
[00126] API consumer – 420

CLAIMS:
We Claim
1. A method of performing dynamic Application Programming Interface (API) orchestration, the method comprising the steps of:
receiving, by one or more processors (202), an API call from a user equipment (UE) (102), wherein the API call relates to integration of a new API;
determining, by the one or more processors (202), existence of an API configuration file, wherein the API configuration file comprises a list of APIs;
comparing, by the one or more processors (202), the new API with each API existing in the list of APIs;
identifying, by the one or more processors (202), that the new API matches with the existing API in the list of APIs based on the comparison;
determining, by the one or more processors (202), a type of the new API based on identified match of the new API with the existing API;
generating, by the one or more processors (202), one or more API responses based on the type of the new API; and
sending, by the one or more processors (202), a final API response to the user equipment (102), subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file.

2. The method as claimed in claim 1, further comprising:
receiving, by the one or more processors (202), the new API for integration via the API call; and
configuring, by the one or more processors (202), the new API at runtime.

3. The method as claimed in claim 1, wherein the step of comparing, by the one or more processors (202), the new API with each API existing in the list of APIs, includes the steps of:
determining, by the one or more processors (202), similarities between the new API and each of the API existing in the list of APIs.

4. The method as claimed in claim 1, wherein the type of the new API includes at least one of: an asynchronous (async) type and a synchronous (sync) type.

5. The method as claimed in claim 1, wherein the step of generating the one or more API responses based on an async type of the new API, further comprises:
initiating, by the one or more processors (202), an API call;
collecting, by the one or more processors (202), an API response for preparing the final API response;
checking, by the one or more processors (202), for another match for the new API from the list of APIs; and
repeating, by the one or more processors (202), the step of generating an API response for the other match.

6. The method as claimed in claim 1, wherein the step of generating the one or more API response based on a sync type of the new API, further comprises:
initiating, by the one or more processors (202), an API call;
collecting, by the one or more processors (202), an API response for preparing the final API response, wherein the API response is awaited before the collection of the API response;
checking, by the one or more processors (202), for another match for the new API from the list of APIs; and
repeating, by the one or more processors (202), the step of generating an API response for the other match.
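The sync and async behaviors recited in claims 5 and 6 can be sketched together: for a sync-type API the response is awaited before collection, while for an async-type API the call is initiated and the response collected later. This is a minimal illustration, not the claimed implementation; the function names and data shapes are assumptions:

```python
import asyncio

async def call_api(api):
    """Simulate initiating an outbound API call and returning its response."""
    await asyncio.sleep(0)  # placeholder for network I/O
    return {"api": api["name"], "status": "ok"}

async def collect_responses(new_api, api_list):
    """Collect one response per matching entry, honoring each entry's
    sync/async type, and repeat the check for further matches."""
    responses = []
    pending = []  # async calls initiated without waiting
    for api in api_list:
        if api["name"] != new_api:  # check for a match in the list of APIs
            continue
        if api["type"] == "sync":
            # Sync: the API response is awaited before it is collected.
            responses.append(await call_api(api))
        else:
            # Async: initiate the call now, collect the response later.
            pending.append(asyncio.create_task(call_api(api)))
    responses.extend(await asyncio.gather(*pending))
    return responses
```

Running the collector over a list containing two matches yields one response per match, regardless of type.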

7. The method as claimed in claim 1, wherein the final API response is a collection of one or more API responses generated, wherein an API response is generated whenever the new API matches with the API in the list of APIs.
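Claim 7's final API response, a collection of the per-match responses combined with parameters from the API configuration file, might be assembled as follows. The key names are illustrative assumptions, not terms defined by the claims:

```python
def build_final_response(responses, config_params):
    """Assemble the final API response from the collected per-match
    responses and the parameters provided in the API configuration file."""
    final = dict(config_params)           # parameters from the configuration file
    final["responses"] = list(responses)  # one entry per matched API
    return final
```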

8. A system (108) for performing dynamic Application Programming Interface (API) orchestration, the system (108) comprising:
an API gateway (216) configured to:
receive an API call from a user equipment (102), wherein the API call relates to integration of a new API;
determine existence of an API configuration file, wherein the API configuration file comprises a list of APIs;
compare the new API with each API existing in the list of APIs;
identify that the new API matches with the existing API in the list of APIs based on the comparison;
determine a type of the new API based on the identified match of the new API with the existing API;
generate one or more API responses based on the type of the new API; and
send a final API response to the user equipment (102), subsequent to generating the final API response based on the one or more API responses generated and a plurality of parameters provided in the API configuration file.

9. The system (108) as claimed in claim 8, wherein the API gateway (216) is further configured to:
read the API configuration file upon receiving the API call related to the new API;
iterate through the list of APIs present in the API configuration file; and
initiate an API call when the new API matches with an API in the list of APIs.
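The API configuration file of claim 9, which holds the list of APIs the gateway iterates through, could plausibly take a form like the fragment below. The field names and structure are assumptions for illustration; the claims do not prescribe a file format:

```json
{
  "parameters": { "timeout_ms": 500, "retries": 2 },
  "apis": [
    { "name": "getUserProfile", "type": "sync",  "endpoint": "/v1/users/{id}" },
    { "name": "notifyBilling",  "type": "async", "endpoint": "/v1/billing/notify" }
  ]
}
```

On receiving an API call, the gateway would read this file, iterate over `apis`, initiate a call for every entry matching the new API, and use each entry's `type` to decide whether the response is awaited before collection.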

10. The system (108) as claimed in claim 8, wherein the API gateway (216) is further configured to:
collect an API response upon initiating the API call, wherein, when the type of the new API is sync, the API response is awaited before being collected, and wherein the API response is based on the type of the new API;
store the API response for processing the final API response;
check for another match for the new API from the list of APIs; and
repeat the step of collecting an API response for the other match.

11. The system (108) as claimed in claim 8, wherein the API gateway (216) is further configured to:
collect an API response upon initiating the API call, when the type of the new API is async, and wherein the API response is based on the type of the new API;
store the API response for processing the final API response;
check for another match for the new API from the list of APIs; and
repeat the step of collecting an API response for the other match.

12. The system (108) as claimed in claim 8, wherein an API orchestration module (218) is activated at run time upon receiving the API call at the API gateway (216), and wherein the activation comprises determining a match of the new API within the list of APIs stored in the API configuration file.

13. The system (108) as claimed in claim 8, wherein an API orchestration module (218) is configured to store the list of APIs in an API configuration file.

14. The system (108) as claimed in claim 8, wherein the final API response is generated subsequent to collecting the one or more API responses by an API response collection module (218) of the API gateway (216).

15. A User Equipment (UE) (102-1), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (202) of a system (108), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which, when executed by the one or more primary processors (305), cause the UE (102-1) to:
transmit a message to the one or more processors (202);
wherein the one or more processors (202) are configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321047703-STATEMENT OF UNDERTAKING (FORM 3) [14-07-2023(online)].pdf 2023-07-14
2 202321047703-PROVISIONAL SPECIFICATION [14-07-2023(online)].pdf 2023-07-14
3 202321047703-FORM 1 [14-07-2023(online)].pdf 2023-07-14
4 202321047703-FIGURE OF ABSTRACT [14-07-2023(online)].pdf 2023-07-14
5 202321047703-DRAWINGS [14-07-2023(online)].pdf 2023-07-14
6 202321047703-DECLARATION OF INVENTORSHIP (FORM 5) [14-07-2023(online)].pdf 2023-07-14
7 202321047703-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321047703-Proof of Right [04-01-2024(online)].pdf 2024-01-04
9 202321047703-FORM-5 [13-07-2024(online)].pdf 2024-07-13
10 202321047703-DRAWING [13-07-2024(online)].pdf 2024-07-13
11 202321047703-COMPLETE SPECIFICATION [13-07-2024(online)].pdf 2024-07-13
12 Abstract-1.jpg 2024-09-02
13 202321047703-Power of Attorney [11-11-2024(online)].pdf 2024-11-11
14 202321047703-Form 1 (Submitted on date of filing) [11-11-2024(online)].pdf 2024-11-11
15 202321047703-Covering Letter [11-11-2024(online)].pdf 2024-11-11
16 202321047703-CERTIFIED COPIES TRANSMISSION TO IB [11-11-2024(online)].pdf 2024-11-11
17 202321047703-FORM 3 [28-11-2024(online)].pdf 2024-11-28
18 202321047703-FORM 18 [20-03-2025(online)].pdf 2025-03-20