Abstract: METHOD AND SYSTEM FOR SCALING ONE OR MORE INSTANCES OF APPLICATIONS IN A NETWORK. The present disclosure relates to a method for dynamically scaling one or more instances of applications in a network (106). The method includes instantiating the instances of applications based on a requirement of the instances of the applications for handling load in the network. Further, the method includes transmitting service API details to the instantiated instances of the applications. Further, the method includes transferring an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details. Further, the method includes configuring one or more parameters pertaining to the transferred existing service API at the instantiated instances of the applications. Further, the method includes receiving a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters. Ref. FIG. 7
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR SCALING ONE OR MORE INSTANCES OF APPLICATIONS IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates generally to the field of communication technology, and in particular to a system and a method for dynamically scaling an Application Programming Interface (API) capacity for efficiently configuring and distributing loads in 3rd Generation Partnership Project (3GPP) and non-3GPP applications.
BACKGROUND OF THE INVENTION
[0002] A demand for wireless data traffic has seen a significant increase since the deployment of fourth generation (4G) communication systems. As a result, there has been a concerted effort to develop an improved communication system known as the 5th Generation (5G) network. This innovative technology aims to address the escalating demand for data connectivity and offers enhanced features and capabilities.
[0003] An implementation of the 5G communication system focuses on higher frequency bands, specifically mmWave (millimeter-wave) bands, such as the 60 GHz bands. By utilizing these higher frequency bands, a 5G system can achieve significantly higher data rates compared to previous generations. To optimize performance, techniques such as beamforming, massive multiple-input multiple-output (MIMO), Full Dimensional MIMO (FD-MIMO), array antennas, analog beamforming, and large-scale antenna systems are being explored within the realm of 5G communication systems. These advancements aim to reduce propagation loss of radio waves and increase the transmission distance, ensuring efficient and seamless wireless connectivity.
[0004] Moreover, the development of 5G communication systems also encompasses network improvements to meet the evolving needs of users. These improvements include advanced small cells, cloud Radio Access Networks (RANs), ultra-dense networks, device-to-device (D2D) communication, wireless backhaul, moving networks, cooperative communication, Coordinated Multi-Points (CoMP), and reception-end interference cancellation. These advancements collectively work towards creating a robust and reliable network infrastructure capable of handling the growing demands of wireless data traffic.
[0005] In terms of coding and access technologies, the 5G system introduces several innovative techniques. Hybrid Frequency Shift Keying (FSK) and quadrature amplitude modulation (QAM) Modulation (FQAM) and sliding window superposition coding (SWSC) have been developed as advanced coding modulation (ACM) schemes. These coding techniques enhance data transmission efficiency and reliability. Additionally, the 5G system employs filter bank multi-carrier (FBMC), non-orthogonal multiple access (NOMA), and sparse code multiple access (SCMA) as advanced access technologies. These techniques optimize spectrum utilization, support diverse use cases, and improve overall network performance.
[0006] The 3GPP plays a crucial role in setting the standards for mobile communication systems. In 3GPP, scaling an API service to increase capacity while ensuring zero downtime can be complex, particularly when migrating from one endpoint to another and from one Edge Load Balancer (ELB) to another. These challenges include synchronizing data between endpoints, managing Domain Name System (DNS) propagation delays, preserving session state, and handling in-flight connections during the transition. Additionally, addressing potential issues like database replication lag, ensuring that both old and new endpoints remain in synchronization (sync), and minimizing the impact on users can be demanding, requiring meticulous planning and testing to achieve a seamless transition without service interruptions.
[0007] Thus, there exists a need for a system and a method for dynamically scaling an API capacity for efficiently configuring and distributing loads in both 3GPP and non-3GPP applications, providing a highly scalable system with zero downtime, service API hosting and distribution for better service performance, and run-time changes on endpoints as per system and business team requirements.
SUMMARY OF THE INVENTION
[0008] One or more embodiments of the present disclosure provide a system and a method for dynamically scaling one or more instances of applications in a network.
[0009] In one aspect of the present invention, the method for dynamically scaling one or more instances of applications in a network is disclosed. The method includes instantiating, by one or more processors, the one or more instances of applications based on a requirement (e.g., business team requirement or the like) of the one or more instances of the applications for handling load in the network. Further, the method includes transmitting, by the one or more processors, service API details to the instantiated one or more instances of the applications. Further, the method includes transferring, by the one or more processors, an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details. Further, the method includes configuring, by the one or more processors, one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications. Further, the method includes receiving, by the one or more processors, a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters.
[0010] In an embodiment, the instantiated one or more instances of applications are added to one or more clusters of applications.
[0011] In an embodiment, the one or more processors distributes an incoming load among the instantiated one or more instances of the applications in the network.
[0012] In an embodiment, scaling the one or more instances of applications pertains to scaling of an API capacity for efficiently configuring and distributing loads among the instantiated one or more instances of the applications.
[0013] In an embodiment, the service API details are transmitted to the instantiated one or more instances of the applications by the one or more processors using at least one of, a Command Line Interface (CLI) and a User Interface (UI).
[0014] In an embodiment, the configured one or more parameters include at least one of a routing rule and a Secure Sockets Layer (SSL) certificate.
[0015] In an embodiment, the one or more processors is configured to remove the service API, blacklist the service API, block the service API and update the service API in real time.
[0016] In an embodiment, the one or more instances of the applications includes at least one Edge Load Balancer (ELB).
[0017] In another aspect of the present invention, the system for dynamically scaling one or more instances of applications in the network is disclosed. The system includes an instantiation unit, a transceiver unit, a sharing unit, and a configuration unit. The instantiation unit is configured to instantiate the one or more instances of applications based on a requirement of the one or more instances of the applications for handling load in the network. The transceiver unit is configured to transmit service API details to the instantiated one or more instances of the applications. The sharing unit is configured to transfer an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details. The configuration unit is configured to configure one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications. The transceiver unit is configured to receive a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters.
[0018] In another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is provided. The computer-readable instructions, when executed by a processor, cause the processor to instantiate the one or more instances of applications based on a requirement of the one or more instances of the applications for handling load in the network. The processor is configured to transmit service API details to the instantiated one or more instances of the applications. Further, the processor is configured to transfer an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details. Further, the processor is configured to configure one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications. Further, the processor is configured to receive a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters.
[0019] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0021] FIG. 1 is an exemplary block diagram of an environment for dynamically scaling one or more instances of applications in a network, according to various embodiments of the present disclosure.
[0022] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
[0023] FIG. 3 is an example schematic representation of the system of FIG. 1 in which operations of various entities are explained, according to various embodiments of the present disclosure.
[0024] FIG. 4 is an example block diagram illustrating a system architecture for dynamically scaling one or more instances of applications in the network, according to various embodiments of the present disclosure.
[0025] FIG. 5 shows a sequence flow diagram illustrating a method for dynamically scaling the one or more instances of applications in the network, according to various embodiments of the present disclosure.
[0026] FIG. 6 is an exemplary flow diagram illustrating the method for dynamically scaling an API capacity with a new instance added to an existing system and updating new service API endpoints, according to various embodiments of the present disclosure.
[0027] FIG. 7 is another exemplary flow diagram illustrating the method for dynamically scaling one or more instances of applications in the network, according to various embodiments of the present disclosure.
[0028] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
[0029] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0030] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0031] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0032] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0033] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as being schematic representations, and elements are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software, or a combination thereof.
[0034] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0035] Further, the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections; it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0036] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0037] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0038] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0039] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0040] Below is the glossary of terms used in the present disclosure.
- CLI : Command Line Interface.
- UI : User Interface.
- CAPIF: Common Application Programming Interface Framework.
- AMS : Application Managed Services, which outsources the task of providing ongoing support for applications to an external provider that specializes in this type of maintenance and monitoring.
- IAM : Identity and Access Management (IAM) is used for authentication and authorization of the third-party consumers.
- ELB : Edge Load Balancer (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones.
[0041] Various embodiments of the invention provide a system and a method for dynamically scaling an API capacity for efficiently configuring and distributing loads in 3GPP and non-3GPP applications. The present invention provides a solution with zero downtime, and service API hosting and load distribution with better service performance. The present invention provides high scalability: based on the system and business requirements, the system capacity can be increased or decreased at runtime. A new node can be added into an application cluster and a load can be distributed among the available application instances. Further, the system provides run-time changes on endpoints based on the requirements (e.g., business requirements or the like). In the present invention, an API can be efficiently distributed at run-time from one instance to another instance, or from one endpoint to another. Further, the system can remove, blacklist, or block a particular service call/service API at runtime and update the service APIs. Further, the system can add another instance of ELB and distribute the load among the ELBs.
[0042] In an embodiment of the present invention, a system for dynamically scaling an API capacity for efficiently configuring and distributing loads in 3GPP and non-3GPP is disclosed. The system comprises at least one common API gateway, at least one API consumer, a plurality of API providers, and at least one API publisher. The plurality of API providers communicates with the common API gateway through a network. In one implementation of the present invention, the plurality of API providers may be authorized API providers. In another implementation, the plurality of API providers may be 3rd party external API providers. The common API gateway is a provisioning server for 3GPP to onboard and offboard API consumers, register and release APIs.
[0043] FIG. 1 illustrates an exemplary block diagram of an environment (100) for dynamically scaling one or more instances of applications in a communication network (106), according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) (102-1, 102-2, ..., 102-n). At least one UE (102-n) from the plurality of UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via the communication network (106). Hereinafter, the plurality of UEs or the one or more UEs are labelled 102.
[0044] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be wireless devices or communication devices that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities. In an embodiment, the UEs may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user such as a touch pad, a touch-enabled screen, an electronic pen, and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, or a landline with an assigned extension, within the communication network (106).
[0045] The communication network (106) may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0046] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0047] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0048] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3GPP or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
[0049] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to, a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, a home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
[0050] The environment (100) further includes the system (108) communicably coupled to the remote server (104) and each UE of the plurality of UEs (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0051] The system (108) is adapted to be embedded within the remote server (104) or embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for an API call, which gets reflected in real time independent of the complexity of the network.
[0052] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprises, e-commerce, and finance entities to update/create/delete information related to the requests in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the API call and perform real-time analysis in the system (108).
[0053] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an implementation, the system (108) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defence facility side, or any other facility) that provides service.
[0054] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0055] FIG. 2 illustrates a block diagram of the system (108) provided for dynamically scaling one or more instances of applications in a network, according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), an input/output interface unit (206), a display (208), an input device (210), and the database (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0056] The information related to the request may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0057] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0058] The information related to the requests may further be configured to render on the user interface (206). The user interface (206) may include functionality similar to at least a portion of the functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0059] The database (214) may be communicably connected to the processor (202) and the memory (204). The database (214) may be configured to store and retrieve the requests pertaining to features, services, or API calls of the system (108), access rights, attributes, an approved list, and authentication data provided by an administrator. Further, the remote server (104) may allow the system (108) to update/create/delete one or more parameters of the information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the database (214) may be outside the system (108) and communicate through a wired medium or a wireless medium.
[0060] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0061] In order for the system (108) to dynamically scale one or more instances of applications in the network (106), the processor (202) includes an instantiation unit (216), a transceiver unit (218), a sharing unit (220), a configuration unit (222), and a distribution unit (224). The instantiation unit (216), the transceiver unit (218), the sharing unit (220), the configuration unit (222), and the distribution unit (224) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by the electronic circuitry.
[0062] In order for the system (108) to dynamically scale the one or more instances of applications in the network (106), the instantiation unit (216), the transceiver unit (218), the sharing unit (220), the configuration unit (222), and the distribution unit (224) are communicably coupled to each other. In an example embodiment, scaling the one or more instances of applications pertains to scaling of the API capacity for efficiently configuring and distributing loads among the instantiated one or more instances of the applications. The one or more instances of the applications include at least one Edge Load Balancer (ELB). Scaling applications refers to the process of increasing or decreasing the number of application instances to handle varying loads. This can be achieved through vertical scaling (adding more resources like CPU or memory to a single instance) or horizontal scaling (adding more instances to distribute the load). Scaling API capacity involves adjusting the infrastructure that supports API requests to handle higher volumes of traffic or provide better performance. This can include scaling the backend services, optimizing resource allocation, and improving the efficiency of data handling and response times. By scaling both the application instances and the API capacity, the user ensures that the application remains responsive and can handle high volumes of traffic efficiently. In an example, based on the proposed method, many cloud platforms offer auto-scaling features that automatically add or remove instances based on real-time traffic or load metrics. For example, during peak traffic times, the system (108) might automatically spin up additional instances, and during low traffic periods, it might reduce the number of instances to save costs.
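By way of a non-limiting illustration of the auto-scaling behaviour described above, the following Python sketch shows how a desired instance count might be derived from real-time load metrics. The thresholds and names (ClusterState, desired_instance_count) are hypothetical assumptions for illustration only and do not represent the claimed implementation.

    # Illustrative auto-scaling decision (hypothetical thresholds and names).
    from dataclasses import dataclass

    @dataclass
    class ClusterState:
        instance_count: int
        avg_cpu_percent: float      # current average CPU load across instances

    def desired_instance_count(state: ClusterState,
                               max_cpu: float = 70.0,
                               min_cpu: float = 25.0,
                               min_instances: int = 1,
                               max_instances: int = 10) -> int:
        """Scale out above max_cpu, scale in below min_cpu, else keep the count."""
        count = state.instance_count
        if state.avg_cpu_percent > max_cpu:
            count += 1              # spin up an additional instance at peak load
        elif state.avg_cpu_percent < min_cpu:
            count -= 1              # release an instance during low traffic
        return max(min_instances, min(max_instances, count))

    if __name__ == "__main__":
        peak = ClusterState(instance_count=3, avg_cpu_percent=85.0)
        print(desired_instance_count(peak))   # prints 4: one instance is added

In practice, such a decision function would be evaluated periodically against metrics reported by the running instances.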
[0065] The instantiation unit (216) instantiates the one or more instances of applications based on the requirement (e.g., business requirements, service provider requirements, or the like) of the one or more instances of the applications for handling load in the communication network (106). In an embodiment, the instantiated one or more instances of applications are added to one or more clusters of applications. The cluster of applications is a set of application instances that are deployed and managed together to perform a specific API service. These instances are usually distributed across multiple servers or containers and are designed to work collaboratively. The cluster of applications is responsible for handling requests, processing data, and maintaining the service's performance and reliability. In an embodiment, the instantiation unit (216) handles creating instances and storing instances. When creating instances, the instantiation unit (216) uses API requests or CLI commands to launch new instances based on specified configurations (e.g., load handling, network management, or the like). When storing instances, a cloud provider (for example) handles the storage and management of the instances (e.g., virtual instances). For example, in a cloud computing context with a popular service like Web Services (WS), when the system (108) creates an instance, it is essentially setting up a virtual server. This involves choosing a Machine Image (MI) that defines an operating system (OS) and software, selecting instance types (e.g., t2.micro, m5.large, or the like), and configuring network settings.
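Purely as a non-limiting sketch of the instance-creation step described above, the following Python fragment models launching a new application instance from a configuration and registering it in a cluster. The names InstanceConfig, launch_instance, and Cluster are hypothetical stand-ins for a cloud provider's API or CLI, which a real deployment would invoke instead.

    # Hypothetical sketch: create an instance and add it to the cluster.
    import itertools
    from dataclasses import dataclass, field
    from typing import List

    _ids = itertools.count(1)

    @dataclass
    class InstanceConfig:
        image: str           # machine image defining the OS and software
        instance_type: str   # e.g., "t2.micro" (example type from the description)
        network: str         # network/subnet the instance attaches to

    @dataclass
    class Instance:
        instance_id: str
        config: InstanceConfig

    @dataclass
    class Cluster:
        name: str
        instances: List[Instance] = field(default_factory=list)

    def launch_instance(config: InstanceConfig) -> Instance:
        # A real system would issue an API request or CLI command here;
        # this stand-in merely allocates an identifier.
        return Instance(instance_id=f"app-{next(_ids)}", config=config)

    def scale_out(cluster: Cluster, config: InstanceConfig) -> Instance:
        """Instantiate a new application instance and register it in the cluster."""
        instance = launch_instance(config)
        cluster.instances.append(instance)
        return instance

    if __name__ == "__main__":
        cluster = Cluster(name="app-cluster")
        cfg = InstanceConfig(image="app-image-v1", instance_type="t2.micro",
                             network="subnet-a")
        print(scale_out(cluster, cfg).instance_id)   # prints "app-1"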
[0066] The transceiver unit (218) transmits service API details to the instantiated one or more instances of the applications. The service API details could include API endpoints, authentication credentials, data schemas, or other relevant configuration details needed for the applications to properly interact with the APIs. The API endpoints are specific URLs or URIs (Uniform Resource Identifiers) through which an API interacts with other services or applications. Each endpoint typically corresponds to a specific functionality provided by the API. The authentication credentials are the mechanisms used to verify the identity of a user or application accessing the API. They ensure that only authorized users or applications can interact with the API. The data schemas define the structure of data that is exchanged between a client and the server (104) through the API. They specify how data should be formatted, including the fields and their types. The service API details are transmitted to the instantiated one or more instances of the applications by the transceiver unit (218) using at least one of: a CLI (not shown) and the user interface (206). Interacting with the service API via the CLI or the UI typically involves sending commands or configurations to manage or query resources.
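The following Python sketch illustrates, under assumed names and values, how the service API details enumerated above (endpoints, authentication credentials, data schemas) might be packaged and pushed to newly instantiated instances; the transport shown is a stand-in for the CLI or UI channel.

    # Hypothetical sketch: share service API details with new instances.
    import json

    def build_service_api_details() -> dict:
        # Illustrative values only; real endpoints, credentials, and schemas
        # would come from the existing service configuration.
        return {
            "endpoints": ["https://api.example.com/v1/orders"],
            "credentials": {"auth_type": "oauth2", "client_id": "client-123"},
            "schemas": {"Order": {"id": "string", "amount": "number"}},
        }

    def transmit_details(instances: list, details: dict) -> None:
        """Push the service API details to each newly instantiated instance."""
        message = json.dumps(details)
        for instance in instances:
            # A real deployment would send this over the CLI or UI channel;
            # printing stands in for that transport here.
            print(f"-> {instance}: {message}")

    if __name__ == "__main__":
        transmit_details(["app-4"], build_service_api_details())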
[0067] The sharing unit (220) transfers an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details. Managing the existing service APIs involves a combination of API gateways, application servers, orchestration tools, and service discovery systems. The API gateways, application servers, orchestration tools, and service discovery systems work together to ensure that API requests are correctly routed and that new instances are properly integrated into an existing API infrastructure. The configuration unit (222) configures one or more parameters (e.g., a routing rule, an SSL certificate, or the like) pertaining to the transferred existing service API at the instantiated one or more instances of the applications. In an example, the system (108) uses an Edge Load Balancer (ELB) to distribute incoming API requests to the new application instances. After deploying new instances, the system (108) needs to update routing rules to include these new instances. To secure the API traffic, the system (108) needs to configure SSL certificates on the load balancer. Further, the configuration unit (222) removes the service API, blacklists the service API, blocks the service API, and updates the service API in real time. Consider that the system (108) decides to discontinue support for a legacy fraud detection API because it is outdated and no longer meets performance requirements. Then, the configuration unit (222) identifies that the fraud detection API is no longer needed. The configuration unit (222) removes the API details (e.g., endpoints, credentials, or the like) from the database (214). Hence, all instances of the fraud detection service across the platform are updated in real time to stop using the discontinued API. The microservices that relied on this API are reconfigured to either use an alternative service or operate without it. When the API is identified as a security risk or is being used for malicious activities, the configuration unit (222) blacklists the API details. When the API needs to be temporarily unavailable for maintenance or updates, the configuration unit (222) blocks the API details.
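As a non-limiting sketch of the parameter configuration and the real-time remove/blacklist/block operations described above, the following Python fragment models a small registry of routing rules, an SSL certificate slot, and per-API states; RoutingRule, ApiRegistry, and the state names are illustrative assumptions, not the claimed implementation.

    # Hypothetical sketch: configure routing rules/SSL and control API state.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List

    class ApiState(Enum):
        ACTIVE = "active"
        BLOCKED = "blocked"            # temporarily unavailable (e.g., maintenance)
        BLACKLISTED = "blacklisted"    # security risk; calls are rejected

    @dataclass
    class RoutingRule:
        path_prefix: str
        targets: List[str]             # instance ids the rule routes to

    @dataclass
    class ApiRegistry:
        rules: Dict[str, RoutingRule] = field(default_factory=dict)
        states: Dict[str, ApiState] = field(default_factory=dict)
        ssl_certificate: str = ""

        def configure(self, api: str, rule: RoutingRule, cert_pem: str) -> None:
            """Attach a routing rule and SSL certificate for a transferred API."""
            self.rules[api] = rule
            self.ssl_certificate = cert_pem
            self.states[api] = ApiState.ACTIVE

        def block(self, api: str) -> None:
            self.states[api] = ApiState.BLOCKED

        def blacklist(self, api: str) -> None:
            self.states[api] = ApiState.BLACKLISTED

        def remove(self, api: str) -> None:
            self.rules.pop(api, None)
            self.states.pop(api, None)

    if __name__ == "__main__":
        reg = ApiRegistry()
        reg.configure("fraud-detection",
                      RoutingRule(path_prefix="/fraud", targets=["app-1", "app-4"]),
                      cert_pem="-----BEGIN CERTIFICATE-----...")
        reg.remove("fraud-detection")   # discontinue the legacy API in real time
        print(reg.rules)                # prints {}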
[0068] The transceiver unit (218) receives a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters. Further, the distribution unit (224) distributes an incoming load among the instantiated one or more instances of the applications in the network (106). Distributing the incoming load among the instantiated application instances is crucial for maintaining performance, reliability, and scalability. The criteria for the distribution are determined by a load-balancing technique used by the load balancer. The distribution criteria can be, for example, but not limited to, least connections, least response time, round robin, or the like. The round robin criterion distributes requests sequentially across all available instances; each instance receives a request in turn, in a cyclical order. The least connections criterion routes traffic to the instance with the fewest active connections, which helps balance the load more evenly based on current traffic. The least response time criterion directs requests to the instance with the lowest response time, which can help minimize latency for users.
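The three distribution criteria named above may be illustrated with the following Python sketch; the Target metrics are assumed stand-ins for live measurements, and the class is illustrative rather than the claimed load balancer.

    # Hypothetical sketch: round robin, least connections, least response time.
    import itertools
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Target:
        name: str
        active_connections: int
        avg_response_ms: float

    class LoadBalancer:
        def __init__(self, targets: List[Target]):
            self.targets = targets
            self._rr = itertools.cycle(targets)

        def round_robin(self) -> Target:
            """Each instance receives a request in turn, in a cyclical order."""
            return next(self._rr)

        def least_connections(self) -> Target:
            """Route to the instance with the fewest active connections."""
            return min(self.targets, key=lambda t: t.active_connections)

        def least_response_time(self) -> Target:
            """Route to the instance with the lowest response time."""
            return min(self.targets, key=lambda t: t.avg_response_ms)

    if __name__ == "__main__":
        lb = LoadBalancer([Target("app-1", 12, 40.0), Target("app-2", 3, 55.0)])
        print(lb.round_robin().name)           # app-1 (then app-2, app-1, ...)
        print(lb.least_connections().name)     # app-2 (fewest connections)
        print(lb.least_response_time().name)   # app-1 (lowest latency)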
[0069] The example for dynamically scaling the one or more instances of applications in the network (106) is explained in FIG. 4 to FIG. 6.
[0070] FIG. 3 is an example schematic representation (300) of the system of FIG. 1 in which operations of various entities are explained, according to various embodiments of the present disclosure. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0071] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to operate, and further enables the UE (102-1) to execute the requests in the communication network (106).
[0072] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the request to the UE (102-1). More specifically, the one or more processors (202) of the system (108) are configured to transmit the response content to at least the UE (102-1). A kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) with access to resources available in the communication network (106). The resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0073] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210). The operations and functions of the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210) are already explained with reference to FIG. 2 and, for the sake of brevity, are not repeated herein. Further, the processor (202) includes the instantiation unit (216), the transceiver unit (218), the sharing unit (220), the configuration unit (222), and the distribution unit (224), whose operations and functions are likewise already explained with reference to FIG. 2 and are not repeated herein.
[0074] FIG. 4 shows a block diagram of a system architecture (400) for dynamically scaling the one or more instances of the applications in the communication network (106), in accordance with an exemplary embodiment of the present invention. The system architecture (400) can be used for dynamically scaling the API capacity for efficiently configuring and distributing loads in a 3GPP system and a non-3GPP system. The system architecture (400) includes a common API gateway (440), an API consumer (402a) communicably connected to the common API gateway (440) via the communication network (106), a plurality of API providers (414a-414n) communicably connected to the common API gateway (440) via the communication network (106), and an API publisher (402b) communicably connected to the common API gateway (440) via the communication network (106). In an embodiment, the common API gateway (440) may be a part of a subscriber system. The common API gateway (440) may be used to expose, secure, and manage backend applications, infrastructure, and/or network systems as published APIs. The API consumer (402a) may comprise an API developer, an API playground, and an API enterprise architecture. The API developer may communicate with the common API gateway (440) and may be used to provide access to and resources for the published APIs to internal and external developers. For example, the internal and external developers may communicate with the API developer to access published APIs that the developers may use to build applications against, such as web and mobile applications. The API enterprise architecture is used to connect enterprise applications and backend resources. In the API enterprise architecture, the APIs are critical as businesses adopt new technologies and applications. The API playground is a tool allowing developers to browse and explore all APIs. The API playground exposes all API endpoints and provides a convenient testbed to perform queries.
[0075] The API publisher (402b) may comprise an API developer, an API playground, and an API enterprise architecture, which are similar to those of the API consumer (402a). Further, the API publisher (402b) may comprise a marketplace platform (406) and a subscription engine (404). In an embodiment, the API consumer (402a) can be any application, developer, or enterprise that wants to use the APIs for its use cases. In an embodiment, the API provider (414a-414n) is an application that hosts the APIs and uses the common API gateway (440) to expose its APIs. The marketplace platform (406) is a platform available on the public domain, in which the API consumer (402a) can create an account and log in with credentials. After the login, the user can explore an API repository (not shown) and purchase the APIs. The subscription engine (404) is a backend application of the marketplace platform (406) where all user-related data along with subscription details are stored.
[0076] The common API gateway (440) is a provisioning server hosting application logic to create/modify/display/delete subscription information, authentication information, and equipment information. The common API gateway (440) supports NETCONF/SSH and Restful/HTTP interfaces. The common API gateway (440) supports both client-side and server-side validation of input parameters for syntax and semantic checks. The common API gateway (440) provides a lightweight CLI for all provisioning requirements. The common API gateway (440) comprises a CAPIF module (426), an Application Managed Services (AMS) module (428), an IAM module (432), and an Edge Load Balancer (ELB) module (430).
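The syntax and semantic checks mentioned above may be sketched, under assumed field names and rules, as follows; the request shape and the supported verbs are hypothetical illustrations of such validation, not the gateway's actual interface.

    # Hypothetical sketch: syntax and semantic checks on a provisioning request.
    import re

    def validate_provisioning_request(req: dict) -> list:
        """Return a list of validation errors (empty when the request is valid)."""
        errors = []
        # Syntax check: subscriber id must match an expected pattern.
        if not re.fullmatch(r"[A-Za-z0-9_-]{3,32}", req.get("subscriber_id", "")):
            errors.append("subscriber_id: invalid syntax")
        # Syntax check: operation must be one of the supported verbs.
        if req.get("operation") not in {"create", "modify", "display", "delete"}:
            errors.append("operation: unsupported")
        # Semantic check: a delete must not carry new subscription data.
        if req.get("operation") == "delete" and req.get("subscription"):
            errors.append("delete must not include subscription data")
        return errors

    if __name__ == "__main__":
        print(validate_provisioning_request(
            {"subscriber_id": "sub-001", "operation": "create",
             "subscription": {"plan": "gold"}}))   # prints []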
[0077] The CAPIF module (426) is a complete 3GPP API framework that covers functionality related to on-boarding and off-boarding API consumers (402a), registering and releasing APIs that need to be exposed, and discovery of APIs by third entities, as well as authorization and authentication. The CAPIF module (426) acts as a common API gateway for the API consumers (402a) and as the API repository for the API providers (402b). The AMS module (428) outsources the task of providing ongoing support for the apps to an external provider that specializes in this type of maintenance and monitoring. In other words, the AMS module (428) performs plan checks, access control policy checks, and enrichment of an API call. The IAM module (432) is used for authentication and authorization of the third-party consumers. The IAM module (432) is particularly responsible for identity and access management of consumers. The ELB module (430) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more availability zones. An Edge Load Balancer (not shown) may route the request to the destination application based on rules like round robin, context-based routing, header-based routing, or Transmission Control Protocol (TCP) based routing. The common API gateway (440) securely communicates with the API publisher. In an embodiment, the common API gateway (440) generates an SSL certificate and then uses its public key to verify and communicate with the API publisher (402b). The common API gateway (440) may communicate with the API publisher (402b) through the SSL certificate and key exchange and validate the respective API publisher (402b).
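The rule-based routing performed by the Edge Load Balancer (round robin, context-based, header-based, or TCP-based routing) may be sketched as follows; the request shape, rules, and destination names are assumptions for illustration.

    # Hypothetical sketch: context/header-based routing with a default target.
    from typing import Dict

    ROUTES = [
        # (match predicate, destination application)
        (lambda req: req["path"].startswith("/capif"), "capif-service"),
        (lambda req: req["headers"].get("X-Api-Version") == "2", "app-v2"),
    ]
    DEFAULT = "app-v1"

    def route(request: Dict) -> str:
        """Return the destination chosen by the first matching rule,
        falling back to the default target."""
        for matches, destination in ROUTES:
            if matches(request):
                return destination
        return DEFAULT

    if __name__ == "__main__":
        req = {"path": "/orders", "headers": {"X-Api-Version": "2"}}
        print(route(req))   # prints "app-v2" (header rule matches)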
[0078] The common API gateway (440) further includes a persistent database (434) for storing all persistent records and a distributed cache (436). The persistent database (434) is a scalable, document-oriented, and schema-free database. The distributed cache (436) is an in-memory data structure store, used as a database, cache, and message broker. The distributed cache (436) supports almost all types of data structures for storing data.
[0079] The system architecture (400) further includes a troubleshooting platform (410) for optimizing the management and access of traffic based on Artificial Intelligence (AI) and Machine Learning (ML) processes. The troubleshooting platform (410) is communicably connected with the API publishers (402b) and to the common API gateway (440) via the communication network (106). The troubleshooting platform (410) may comprise an AI/ML module (not shown) configured for automatically managing and accessing traffic in an optimum way. The AI/ML module may learn and manage the process of managing and accessing the traffic and decide the process based on run-time data and traffic.
[0080] The system architecture (400) further includes a User Interface (UI) API dashboard (424). A user interface of the CAPIF module (426) is configured as a rich user interface available to visualize the data and perform configuration. The common API gateway (440) is configured to display data on the UI API dashboard (424) via the network (106). The UI API dashboard (424) stores and maintains the data related to the plurality of API publishers (402b) and the API consumers (402a), their subscriptions, usage, usage history, and balance subscription data for each API provider. The UI API dashboard (424) may use a data analytics engine (not shown) to present the data of the API publishers (402b) in the form of charts and tables. In an embodiment, the data analytics engine is an API analytics unit (408) connected with the troubleshooting platform (410). The UI API dashboard (424) further shows data related to the number of times an API is invoked and the number of click-to-call subscriptions initiated and used. The system architecture (400) further includes an Element Management System (EMS) module (412) for fault, configuration, accounting, performance, and security (FCAPS) management, and a unified data cache/data store to enable multiple GSMA CAMARA use cases, including SIM swap. A unified gateway connects with various components of the system to host the SIM-swap APIs.
[0081] In an embodiment, the API consumer (402a) may typically be a third-party application provider having a service agreement with the complete 3GPP and non-3GPP API framework of the common API gateway (440). The API provider (414a-414n) hosts one or more service APIs and has a service API arrangement with the CAPIF module (426) to offer the service APIs to the API consumer (402a). The CAPIF module (426) and the API provider (414a-414n) may be part of the same organization, in which case the business relationship between the two is internal to a single organization. Alternatively, the CAPIF module (426) and the API provider (414a-414n) may be part of different organizations, in which case a business relationship between the two must exist. The system architecture (400) provides an interactive and user-friendly CLI for subscriber provisioning, operation, and maintenance purposes, and a web-based, intuitive graphical user interface (UI) for bulk provisioning, operation, and maintenance. The CLI and the UI are used to onboard API providers. In an embodiment, the CLI and the UI are part of the subscriber system.
[0082] FIG. 5 shows a sequence flow diagram (500) illustrating a method for dynamically scaling the one or more instances of applications in the network (106), according to various embodiments of the present disclosure. The method may be implemented by the system architecture (400). The system architecture (400) may comprise the at least one common API gateway (440), the at least one API consumer (402a), the plurality of API providers (414a-414n), and the at least one API publisher (402b), as described in the previous embodiments (refer to FIG. 4). At step 502, the method includes instantiating a new instance of an application (e.g., CAPIF, ELB, AMS, or the like) using a scaling function, based on the requirements. The scaling function of the one or more instances of applications pertains to scaling the API capacity for efficiently configuring and distributing loads among the instantiated one or more instances of the applications. At step 504, the method includes running the application instances available in the cluster. At step 506, the method includes sharing the available service API details with the newly added application using the CLI or the UI (206). At step 508, the method includes moving the already existing service API from the existing instance to the new instance.
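For purposes of illustration only, the following Python sketch traces steps 502-508 under hypothetical names: a scaling function creates a new instance, the available service API details are shared with it, and an existing service API is moved from the existing instance to the new one.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    name: str
    service_apis: list = field(default_factory=list)

# The running cluster (step 504); one existing instance hosts an API.
cluster = [Instance("capif-1", ["sim-swap-v1"])]

def scale_out(app_name: str) -> Instance:
    new = Instance(f"{app_name}-{len(cluster) + 1}")  # step 502: instantiate
    cluster.append(new)
    shared = list(cluster[0].service_apis)            # step 506: share details
    if shared:                                        # step 508: move the
        new.service_apis.append(cluster[0].service_apis.pop(0))  # existing API
    return new

print(scale_out("capif"))
# -> Instance(name='capif-2', service_apis=['sim-swap-v1'])
```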
[0083] At step 510, the method includes adding a routing rule and a request configuration for the mapped service API. The routing rule and the request configuration are mechanisms that determine how incoming API requests are distributed across different instances or endpoints of the API service. They help in managing traffic to ensure efficient use of resources, high availability, and load balancing. At step 512, the method includes collecting and logging every service API call. At step 514, the method includes configuring the SSL certificates and sharing all service-API-related data with the new instances. At step 516, the method includes receiving the service API call through the newly added instance.
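For purposes of illustration only, the following Python sketch covers steps 510-516: a routing rule is added for the mapped service API, every call is logged, a TLS context is prepared for the new instance, and the call is then dispatched through the new route. The instance names and certificate paths are hypothetical placeholders.

```python
import logging
import ssl

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api-gateway")

routing_rules = {}

def add_routing_rule(service_api: str, target_instance: str) -> None:
    """Step 510: map a service API to the instance that should serve it."""
    routing_rules[service_api] = target_instance

def handle_call(service_api: str) -> str:
    """Steps 512 and 516: log every service API call, then dispatch it."""
    log.info("service API call: %s", service_api)
    return routing_rules[service_api]

# Step 514: a TLS server context for the new instance. The certificate and
# key paths are hypothetical, so the load call is left commented out.
tls = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# tls.load_cert_chain("new-instance.crt", "new-instance.key")

add_routing_rule("sim-swap-v1", "capif-2")
print(handle_call("sim-swap-v1"))  # logs the call, returns "capif-2"
```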
[0084] FIG. 6 is an exemplary flow diagram (600) illustrating the method for dynamically scaling the API capacity when a new instance is added to the existing system and updating the new service API endpoints, in accordance with the present invention.
[0085] At step 602, the method includes adding the new instance to the existing system. At step 604, the new instance is registered with a CAPIF core function. At step 606, if the data is validated, then a positive response is sent. At step 608, based on the service type, the list of service endpoints is prepared. At step 612, the method includes preparing the service API details.
[0086] At step 614, the service API endpoint and context at the newly added application are configured. At step 616, the application is initiated to accept the service API call. If the validation fails, a negative response is sent to the user at step 610.
[0087] In an embodiment, the system (108) can add, update, or remove service API endpoints using the Universal Command Line Interface (UCLI) or the UI (206) at step 618. At step 620, the data is validated; if the data is valid, then a positive response is sent and, based on the service type, a list of all API endpoints with the updated data is prepared at step 622. At step 624, the API endpoint details are shared with the application, and the application configures itself with the updated and new details at step 626.
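For purposes of illustration only, the following Python sketch mirrors steps 618-626: an endpoint change request is validated, a negative response is returned on failure (as at step 610), and on success the updated endpoint list is prepared and shared with the application. The endpoint names and the context syntax rule are hypothetical assumptions.

```python
import re

endpoints = {"sim-swap-v1": "/sim-swap/v1"}
CONTEXT_RE = re.compile(r"^/[a-z0-9\-/]+$")  # hypothetical syntax rule

def upsert_endpoint(name: str, context: str) -> dict:
    """Validate, then add or update an endpoint and return the full list."""
    if not CONTEXT_RE.match(context):        # step 620: data validation
        return {"status": "negative", "reason": f"bad context: {context!r}"}
    endpoints[name] = context                # step 622: updated endpoint list
    return {"status": "positive",            # steps 624-626: share details
            "endpoints": dict(endpoints)}

print(upsert_endpoint("sim-swap-v2", "/sim-swap/v2"))  # positive response
print(upsert_endpoint("bad", "NOT A PATH"))            # negative response
```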
[0088] FIG. 7 is an exemplary flow diagram (700) illustrating the method for dynamically scaling the one or more instances of applications in the communication network (106), according to various embodiments of the present disclosure.
[0089] At step 702, the method includes instantiating the one or more instances of applications based on the requirement of the one or more instances of the applications for handling load in the communication network (106). In an embodiment, the method allows the instantiation unit (216) to instantiate the one or more instances of applications based on the requirement of the one or more instances of the applications for handling load in the communication network (106).
[0090] At step 704, the method includes transmitting the service API details to the instantiated one or more instances of the applications. In an embodiment, the method allows the transceiver unit (218) to transmit the service API details to the instantiated one or more instances of the applications.
[0091] At step 706, the method includes transferring the existing service API from the existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details. In an embodiment, the method allows the sharing unit (220) to transfer the existing service API from the existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details.
[0092] At step 708, the method includes configuring the one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications. In an embodiment, the method allows the configuration unit (224) to configure the one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications.
[0093] At step 710, the method includes receiving the service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters. In an embodiment, the method allows the transceiver unit (218) to receive the service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters.
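For purposes of illustration only, the following Python sketch strings steps 702-710 together under hypothetical names, with plain dictionaries standing in for the instantiation unit (216), transceiver unit (218), sharing unit (220), and configuration unit (224).

```python
# One existing, fully configured instance hosting a service API.
instances = [{"name": "app-1", "apis": ["sim-swap-v1"], "configured": True}]

def scale_one_instance() -> dict:
    new = {"name": f"app-{len(instances) + 1}", "apis": [], "configured": False}
    instances.append(new)                                  # step 702: instantiate
    new["api_details"] = {"api": instances[0]["apis"][0]}  # step 704: transmit
    new["apis"].append(instances[0]["apis"].pop(0))        # step 706: transfer
    new["configured"] = True                               # step 708: configure
    return new

def receive_call(api: str) -> str:
    """Step 710: the configured new instance serves the service API call."""
    for inst in instances:
        if inst["configured"] and api in inst["apis"]:
            return inst["name"]
    raise LookupError(api)

scale_one_instance()
print(receive_call("sim-swap-v1"))  # -> app-2
```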
[0094] The technical advancements of the present invention are set out below:
[0095] The proposed system (108) provides a highly scalable API infrastructure with zero downtime that can expand seamlessly without causing service interruptions. The proposed system (108) efficiently distributes and hosts the available service APIs while allowing run-time configuration of API service provisioning and deprovisioning of endpoints. The proposed system (108) helps keep an up-to-date, centralized API list accessible from a single instance, ensuring streamlined management. Based on the proposed method, capacity management is achieved effectively by configuring and distributing service APIs among cluster instances, ensuring resource optimization and uninterrupted service delivery.
[0096] The proposed system (108) scales on demand, based on requirements at runtime. Based on the proposed method, a new node can be added to the application cluster and the load can be distributed among the available application instances. Based on the proposed method, an API can be efficiently redistributed at run time from one instance to another, or from one endpoint to another. Further, the system (108) can remove, blacklist, or block a particular service call/service API at runtime, update the service APIs, and add another instance of the ELB and distribute the load among the ELBs.
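For purposes of illustration only, the following Python sketch shows the runtime blacklisting/blocking of a particular service API mentioned above; the API names and the rejection response are hypothetical.

```python
blacklist = set()

def block_api(name: str) -> None:
    """Blacklist a service API at runtime so further calls are rejected."""
    blacklist.add(name)

def dispatch(name: str) -> str:
    if name in blacklist:
        return "403: service API blocked"  # hypothetical rejection response
    return f"routed: {name}"

block_api("sim-swap-v1")
print(dispatch("sim-swap-v1"))  # -> 403: service API blocked
print(dispatch("qod-v1"))       # -> routed: qod-v1
```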
[0097] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-7) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0099] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[00100] Environment - 100
[00101] UEs – 102, 102-1 to 102-n
[00102] Server - 104
[00103] Communication network – 106
[00104] System – 108
[00105] Processor – 202
[00106] Memory – 204
[00107] User Interface – 206
[00108] Display – 208
[00109] Input device – 210
[00110] Database – 214
[00111] Instantiation unit – 216
[00112] Transceiver unit – 218
[00113] Sharing unit – 220
[00114] Configuration unit – 224
[00115] Distribution unit – 226
[00116] System - 300
[00117] Primary processors -305
[00118] Memory – 310
[00119] Kernel – 315
[00120] System architecture – 400
[00121] API consumer – 402a; API publisher – 402b
[00122] Subscription engine – 404
[00123] Marketplace platform – 406
[00124] API analytics unit – 408
[00125] Troubleshooting platform – 410
[00126] EMS – 412
[00127] API provider(s) – 414a, 414b, 414n
[00128] SSA Exposure function – 416
[00129] SSA DC MS – 418
[00130] Streaming platform – 420
[00131] UI API dashboard – 424
[00132] CAPIF module – 426
[00133] AMS module – 428
[00134] ELB module – 430
[00135] IAM module – 432
[00136] Persistent database – 434
[00137] Distributed cache – 436
[00138] Common API gateway - 440
CLAIMS
We Claim
1. A method for scaling one or more instances of applications in a network (106), the method comprising the steps of:
instantiating, by one or more processors (202), the one or more instances of applications based on a requirement of the one or more instances of the applications for handling load in the network (106);
transmitting, by the one or more processors (202), service Application Programming Interface (API) details to the instantiated one or more instances of the applications;
transferring, by the one or more processors (202), an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details;
configuring, by the one or more processors (202), one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications; and
receiving, by the one or more processors (202), a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters.
2. The method as claimed in claim 1, wherein the instantiated one or more instances of applications are added in one or more clusters of applications.
3. The method as claimed in claim 1, wherein the one or more processors (202) distributes an incoming load among the instantiated one or more instances of the applications in the network (106).
4. The method as claimed in claim 1, wherein scaling the one or more instances of applications pertains to scaling of an API capacity for configuring and distributing loads among the instantiated one or more instances of the applications.
5. The method as claimed in claim 1, wherein the service API details are transmitted to the instantiated one or more instances of the applications by the one or more processors using at least one of: a Command Line Interface (CLI) and a User Interface (UI) (206).
6. The method as claimed in claim 1, wherein the configured one or more parameters includes at least one of: a routing rule, and a Secure Sockets Layer (SSL) certificate.
7. The method as claimed in claim 1, wherein the one or more processors (202) is further configured to remove the service API, blacklist the service API, block the service API and update the service API in real time.
8. The method as claimed in claim 1, wherein the one or more instances of the applications includes at least one of: an Edge Load Balancer (ELB).
9. A system (108) for scaling one or more instances of applications in a network (106), the system (108) comprising:
an instantiation unit (216) configured to instantiate the one or more instances of applications based on a requirement of the one or more instances of the applications for handling load in the network (106);
a transceiver unit (218) configured to transmit service Application Programming Interface (API) details to the instantiated one or more instances of the applications;
a sharing unit (220) configured to transfer an existing service API from an existing one or more instances of the applications to the instantiated one or more instances of the applications subsequent to the transmission of the service API details;
a configuration unit (224) configured to configure one or more parameters pertaining to the transferred existing service API at the instantiated one or more instances of the applications; and
the transceiver unit (218) configured to receive a service API call at the instantiated one or more instances of the applications subsequent to configuring the one or more parameters.
10. The system (108) as claimed in claim 9, wherein the instantiated one or more instances of applications are added in one or more clusters of applications.
11. The system (108) as claimed in claim 9, wherein a distribution unit (226) distributes an incoming load among the instantiated one or more instances of the applications in the network.
12. The system (108) as claimed in claim 9, wherein scaling the one or more instances of applications pertains to scaling of an API capacity for configuring and distributing loads among the instantiated one or more instances of the applications.
13. The system (108) as claimed in claim 9, wherein the service API details are transmitted to the instantiated one or more instances of the applications by the transceiver unit using at least one of, a Command Line Interface (CLI) and a User Interface (UI).
14. The system (108) as claimed in claim 9, wherein the configured one or more parameters includes at least one of: a routing rule, and a Secure Sockets Layer (SSL) certificate.
15. The system (108) as claimed in claim 9, wherein the configuration unit (224) is further configured to remove the service API, blacklist the service API, block the service API and update the service API in real time.
16. The system (108) as claimed in claim 9, wherein the one or more instances of the applications includes at least one of: an Edge Load Balancer (ELB).