Abstract: METHOD AND SYSTEM FOR MANAGING RESOURCES OF A NETWORK FUNCTION IN A NETWORK The present disclosure relates to a system (108) and a method (500) for managing resources of a network function (222) in a network (106). The system (108) includes a receiving unit (210) configured to receive a resource allocation request from a User Equipment (UE) (102). The resource allocation request indicates one or more resources required to operate the network function (222) as per a network requirement. The system (108) further includes an adding unit (212) configured to add the one or more resources at a container host (220) based on the resource allocation request. The system (108) further includes an updating unit (214) configured to update an inventory database (218) related to the network function (222) on addition of the one or more resources at the container host (220). Ref. Fig. 2
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR MANAGING RESOURCES OF A NETWORK FUNCTION IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to network design, and more particularly to a method and a system for managing resources of a network function in a network.
BACKGROUND OF THE INVENTION
[0002] With an increasing number of users in a network, the exchange of information over the network also increases. The server has to cater to multiple requests at a time: receiving the requests, processing them, connecting with other servers, and sending the results to the user end. All of these processes take place within a specified time frame and over the network thread, which may lead to heavy traffic, memory shortage, temporary stalling of operations, and similar problems on the network. To relieve the network of such technical issues, there may be a need to redesign the network function, such as a CNF (Container Network Function) or a VNF (Virtual Network Function), so that request commands and processing can be performed smoothly. Redesigning a CNF involves tedious data analysis and defining various flavors, such as storage memory, CPU (Central Processing Unit) parameters, RAM (Random Access Memory), and the like.
[0003] Usually, including and assigning flavors to a CNF/CNFC (Containerized Network Function Component) is accompanied by defining various parameters, such as how much RAM may be required and how much CPU capacity may be utilized, shutting down the CNF/CNFC that is being redesigned, and then restarting the entire network system to reflect and assimilate the changes. This is time-consuming and requires manual intervention to complete the redesign.
[0004] However, shutting down a running CNFC comes with possible downtime during which no service can be accessed. Every time a redesign is required, the flavor parameters have to be defined again from the start, which is a redundant use of resources and a waste of time. This impacts the quality of service provided. There is a need for an approach that eliminates manual intervention, repetitive tinkering with flavor settings, and downtime.
[0005] Presently, no such mechanism is available. There is a need to develop a system and method to redesign a CNF by adding computing flavors that are easily customizable and that can be applied to a running CNFC.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and system for managing resources of a network function in a network.
[0007] In one aspect of the present invention, the system for managing the resources of the network function in the network is disclosed. The system includes a receiving unit configured to receive one or more resource allocation requests from a User Equipment (UE). The one or more resource allocation requests indicate one or more resources required to operate the network function as per a network requirement. The system further includes an adding unit configured to add the one or more resources at a container host based on the one or more resource allocation requests. The system further includes an updating unit configured to update an inventory database related to the network function on addition of the one or more resources at the container host.
[0008] In an embodiment, the one or more resource allocation requests are received dynamically during run time of the network function to change a configuration of the network function.
[0009] In an embodiment, on addition of the one or more resources at the container host, the system comprises a transmitting unit configured to transmit a confirmation to the UE that the one or more resources are added to the container host.
[0010] In an embodiment, the inventory database receives a create request from the UE to create required resources as indicated in the create request at the inventory database.
[0011] In an embodiment, the one or more resources of the network function are managed at one of a designing stage and a run time stage of the network function.
[0012] In an embodiment, the one or more resources are at least one of a Central Processing Unit (CPU), Random Access Memory (RAM), disk space, and a combination thereof, and the network requirement is based at least on network traffic.
[0013] In an embodiment, upon creating the required resources as indicated in the create request at the inventory database, the inventory database transmits an acknowledgment to the UE pertaining to completion of the creation of the required resources.
[0014] In an embodiment, the system communicates with the inventory database and the container host via a communication channel. In an embodiment, the communication channel is an interface between a container orchestrator and the inventory database. In an embodiment, the interface is at least an IM_DA interface.
[0015] In another aspect of the present invention, the method of managing the resources of the network function in the network is disclosed. The method includes the step of receiving one or more resource allocation requests from a User Equipment (UE). The one or more resource allocation requests indicate one or more resources required to operate a network function as per a network requirement. The method further includes the step of adding the one or more resources at a container host based on the one or more resource allocation requests. The method further includes the step of updating an inventory database related to the network function on addition of the one or more resources at the container host.
[0016] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive one or more resource allocation requests from a User Equipment (UE). The one or more resource allocation requests indicate one or more resources required to operate the network function as per a network requirement. The processor is configured to add the one or more resources at a container host based on the one or more resource allocation requests. The processor is configured to update an inventory database related to the network function on addition of the one or more resources at the container host.
[0017] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors being coupled with a memory. The one or more primary processors cause the UE to transmit one or more resource allocation requests to the one or more processors. The one or more resource allocation requests indicate one or more resources required to operate the network function as per a network requirement. Further, the UE is configured to receive a confirmation that the one or more resources are added to the container host. The UE is further configured to transmit a create request to create required resources as indicated in the create request at the inventory database. The UE is further configured to receive an acknowledgement pertaining to completion of the creation of the required resources.
[0018] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0020] FIG. 1 is an exemplary block diagram of an environment for managing resources of a network function in a network, according to one or more embodiments of the present invention;
[0021] FIG. 2 is an exemplary block diagram of a system for managing the resources of the network function in the network, according to one or more embodiments of the present invention;
[0022] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to the one or more embodiments of the present invention;
[0023] FIG. 4 is a signal flow diagram for managing the resources of the network function in the network, according to one or more embodiments of the present invention;
[0024] FIG. 5 is a schematic representation of a method of managing the resources of the network function in the network, according to one or more embodiments of the present invention; and
[0025] FIG. 6 illustrates an architecture framework (e.g., MANO architecture framework), in which the present invention can be implemented in accordance with one or more embodiments of the present invention.
[0026] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] The present invention provides a system and method for adding resources to a network function such as, but not limited to, a Container Network Function (CNF) or a Virtual Network Function (VNF). More particularly, the system and method provide a solution for adding one or more resources to an existing network function in a running network system without any need to shut down or restart the network system.
[0031] FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing resources of a network function 222 (as shown in FIG. 2) in a network 106, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, the network 106 and a system 108 communicably coupled to each other for managing the resources of the network function 222 in the network 106.
[0032] In an embodiment, the network function 222 is a functional block within a network architecture that performs a specific task or set of tasks to manage, control, and process network traffic and services. The network function 222 can be implemented in physical hardware or as a virtualized instance. The network function 222 is at least one of a Container Network Function (CNF) and a Virtual Network Function (VNF). The CNF is a network function 222 that is deployed and run within containers, a lightweight form of virtualization. A container is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, such as the code, runtime, system tools, libraries, and settings. The CNF is typically composed of multiple Containerized Network Function Components (CNFCs). A CNFC refers to a network function 222 that has been containerized for deployment in a cloud-native environment. The cloud-native environment refers to an infrastructure and set of practices designed to fully leverage cloud computing models for building, deploying, and operating applications. The VNF is a software implementation of the network function 222 that can run on a virtualized infrastructure (such as virtual machines or containers) instead of being tied to specific hardware. In an embodiment, the resources of the network function 222 refer to the computational and storage elements required to execute or operate the network function. The resources include, but are not limited to, Central Processing Unit (CPU), Random Access Memory (RAM), disk space, and network bandwidth. The resources are allocated or added based on the network function's needs, which may vary depending on the network conditions, such as traffic load or service requirements.
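For illustration only, and without limiting the disclosure, the resources described above may be pictured as a simple profile attached to a network function. The following Python sketch is an assumption of this description; the names, fields, and values are not defined by the specification:

from dataclasses import dataclass

@dataclass
class ResourceProfile:
    # Computational and storage elements allocated to a network function (CNF/VNF).
    cpu_cores: int       # number of CPU cores
    ram_mb: int          # Random Access Memory, in megabytes
    disk_gb: int         # disk space, in gigabytes
    bandwidth_mbps: int  # network bandwidth, in megabits per second

# Example: a modest profile for a CNFC handling average traffic.
small_cnfc = ResourceProfile(cpu_cores=2, ram_mb=4096, disk_gb=20, bandwidth_mbps=100)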
[0033] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0034] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as a smartphone, a virtual reality (VR) device, an augmented reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0035] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity associated with the server 104 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0036] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0037] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 106 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0038] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to manage the resources of the network function 222 in the network 106. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0039] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0040] FIG. 2 is an exemplary block diagram of the system 108 for managing the resources of the network function 222 in the network 106, according to one or more embodiments of the present invention.
[0041] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. In an embodiment, the network function 222, an inventory database 218 and a container host 220 are communicably coupled to the system 108.
[0042] For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0043] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0044] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0045] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 208 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0046] In order for the system 108 to manage the resources of the network function 222 in the network 106, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a receiving unit 210, an adding unit 212, an updating unit 214, and a transmitting unit 216 communicably coupled to each other for managing the resources of the network function 222 in the network 106.
[0047] In one embodiment, each of the one or more modules can be used in combination or interchangeably for managing the resources of the network function 222 in the network 106.
[0048] The receiving unit 210, the adding unit 212, the updating unit 214, and the transmitting unit 216 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0049] In an embodiment, the receiving unit 210 is configured to receive one or more resource allocation requests from the UE 102. In another embodiment, the one or more resource allocation requests are received from the UI 206. The one or more resource allocation requests indicate one or more resources required to operate the network function 222 as per a network requirement. The one or more resources refer to the specific computational and storage elements necessary to support and operate the network function 222, as dictated by the network's needs or requirements. The one or more resources are at least one of a Central Processing Unit (CPU), Random Access Memory (RAM), disk space, and a combination thereof. The network requirement refers to the needs or conditions that the network function 222 must fulfill in order to operate effectively and meet the desired performance standards. The performance standards include, but are not limited to, bandwidth, latency, and throughput that the network function 222 must meet. The network requirements are based on various factors, such as network traffic, service level agreements, and operational demands. In an embodiment, the network requirement is based at least on network traffic. The network traffic refers to the varying flow of data across the network 106, which influences the amount and type of resources that need to be allocated to the network function 222 to meet the network's performance requirements. The network traffic influences the allocation of resources within the network function 222, as the network traffic directly impacts the load and performance requirements of the network 106.
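As a non-limiting sketch of how the receiving unit 210 might validate an incoming resource allocation request, consider the following Python fragment; the field names, the allowed resource types, and the traffic indicator are assumptions made for illustration only:

ALLOWED_RESOURCES = {"cpu_cores", "ram_gb", "disk_gb"}

def receive_allocation_request(request: dict) -> dict:
    # Validate a resource allocation request received from the UE 102 (or the UI 206).
    resources = request.get("resources", {})
    if not resources:
        raise ValueError("the request must indicate at least one resource")
    unknown = set(resources) - ALLOWED_RESOURCES
    if unknown:
        raise ValueError(f"unsupported resource types: {sorted(unknown)}")
    # The network requirement (here, a coarse traffic indicator) accompanies the request.
    return {"resources": resources, "network_requirement": request.get("network_traffic", "normal")}

# Example: a request for two more CPU cores and 2 GB of RAM issued under high traffic.
print(receive_allocation_request({"resources": {"cpu_cores": 2, "ram_gb": 2}, "network_traffic": "high"}))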
[0050] In one embodiment, the UE 102 transmits a create request to the inventory database 218. In an embodiment, the UI 206 transmits the create request to the inventory database 218. The inventory database 218 is a specialized database that keeps a real-time record of all resources within the network 106, particularly those allocated to network functions 222. The inventory database 218 stores detailed information about network functions 222, such as routers, switches, servers, firewalls, CNFs, VNFs, and other hardware elements. The inventory database 218 is at least a Physical and Virtual Inventory Manager (PVIM). The create request aids in creating the required resources, as indicated in the create request, at the inventory database 218. In an embodiment, the create request is at least a Hypertext Transfer Protocol (HTTP) request. The create request is intended to add the one or more new resources to the inventory database 218 to ensure that the network function 222 has the necessary resources to operate effectively according to the network requirements. The create request includes detailed information about the resources that need to be created. The detailed information includes, but is not limited to, the type of resources and the specific amount of each type of resource. Upon creating the required resources as indicated in the create request at the inventory database 218, the inventory database 218 transmits an acknowledgement to the UE 102. In an embodiment, the inventory database 218 transmits the acknowledgement to the UI 206. The acknowledgement includes confirmation that the requested one or more resources are successfully created at the inventory database 218.
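The create request described above can be pictured, purely as an illustration, as an HTTP call to the inventory database; the endpoint URL, the payload fields, and the success criterion below are hypothetical and are not part of the specification:

import requests

def send_create_request(resources: dict) -> bool:
    # Ask the inventory database (e.g., a PVIM) to record the required resources.
    response = requests.post(
        "http://pvim.example.local/api/v1/resources",   # hypothetical inventory-database endpoint
        json={"action": "create", "resources": resources},
        timeout=10,
    )
    # The inventory database acknowledges once the requested resources are recorded.
    return response.ok

# Example (not executed here): register 3 CPU cores, 6 GB of RAM, and 30 GB of disk space.
# acknowledged = send_create_request({"cpu_cores": 3, "ram_gb": 6, "disk_gb": 30})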
[0051] In an embodiment, the one or more resource allocation requests are received dynamically during run time of the network function 222 to change a configuration of the network function 222. In particular, when the one or more resource allocation requests are received during the run time of the network function 222, the system 108 dynamically updates the network function 222 configuration allowing the network function 222 to optimize or scale its resources based on current network demands, such as fluctuating traffic or changing service requirements. For instance, a telecom network operator is running a Virtual Network Function (VNF) for a Voice over LTE (VoLTE) service. Initially, the VoLTE VNF is allocated 2 CPUs and 4GB of RAM based on average traffic demand. During a major sports event, the network 106 experiences a sudden surge in traffic. Multiple users have started making calls and the existing resources are no longer sufficient to handle the increased load. The VoLTE VNF detects the surge in real-time and sends the resource allocation request to the system 108, for additional resources (e.g., more CPU and RAM) to handle the increased traffic. The system 108 dynamically processes the resource allocation request during the run time of the VNF, without stopping or restarting the service. Subsequently, the system 108 allocates 2 additional CPUs and 2GB of extra RAM to the VoLTE VNF, allowing it to scale up and continue processing calls smoothly. Thus, the VoLTE VNF dynamically adjusts its resource allocation during the high-demand event, maintaining service quality without any interruptions.
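A minimal sketch of the run-time scaling decision illustrated by the VoLTE example above is shown below; the calls-per-CPU threshold and the one-gigabyte-of-RAM-per-CPU assumption are illustrative only and are not prescribed by the specification:

def build_allocation_request(current_cpus: int, active_calls: int, calls_per_cpu: int = 500) -> dict:
    # Return the extra resources to request when traffic exceeds the current capacity.
    needed_cpus = -(-active_calls // calls_per_cpu)   # ceiling division
    extra_cpus = max(0, needed_cpus - current_cpus)
    if extra_cpus == 0:
        return {}                                     # current allocation is sufficient
    return {"cpu_cores": extra_cpus, "ram_gb": extra_cpus}   # assume roughly 1 GB of RAM per extra CPU

# During the surge, 2 CPUs are in place and about 2000 concurrent calls arrive:
print(build_allocation_request(current_cpus=2, active_calls=2000))
# -> {'cpu_cores': 2, 'ram_gb': 2}, matching the 2 additional CPUs and 2 GB of RAM described above.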
[0052] In an embodiment, upon receiving the one or more resource allocation requests from the UE 102, the adding unit 212 is configured to add the one or more resources at the container host 220. The container host 220 refers to the computing environment or infrastructure where containerized applications and network functions 222 are deployed and executed. In an embodiment, the container host 220 is at least a Docker host. The Docker host refers to the computing environment or physical/virtual machine where Docker is installed and running. The Docker host is responsible for managing containers and providing the necessary resources (such as CPU, memory, storage, and networking) for containerized network functions to operate. The Docker host includes, but is not limited to, a Docker engine, containers, container images, and resource management.
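Where the container host 220 is a Docker host, one possible (non-limiting) way to grow a running container's resources is through the Docker SDK for Python, as sketched below; the container name and sizes are illustrative, and a deployed system may instead perform this step through its container orchestrator:

import docker

def add_resources_at_container_host(container_name: str, cpu_cores: int, ram_bytes: int) -> None:
    client = docker.from_env()                   # connect to the local Docker host
    container = client.containers.get(container_name)
    container.update(
        cpu_quota=cpu_cores * 100_000,           # quota is relative to the default 100 ms cpu_period
        mem_limit=ram_bytes,                     # new memory ceiling for the running container
        memswap_limit=ram_bytes,                 # keep the swap limit consistent with the new memory limit
    )

# Example (not executed here): grow a CNFC to 4 CPU cores and 6 GB of RAM without restarting it.
# add_resources_at_container_host("volte-cnfc", cpu_cores=4, ram_bytes=6 * 1024**3)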
[0053] In an embodiment, the one or more resources of the network function 222 are added at one of a designing stage and a run time stage of the network function 222. The designing stage refers to the phase where the network functions and their requirements are planned and configured by a telecom operator before they are deployed or activated in the network 106. For example, the telecom operator designs a new virtual firewall to be deployed in the network 106. During the designing stage, the telecom operator determines that the firewall would need 4 CPU cores, 8 GB of RAM, and 50 GB of disk space. The telecom operator configures these requirements in planning tools and prepares the necessary resources before deployment. The run time stage refers to the phase where the network functions 222 are actively operating and handling real-time network traffic or user requests. For example, the virtual firewall deployed by the telecom operator begins handling live traffic. During the run time stage, if there is a surge in traffic, additional CPU cores and RAM might be allocated dynamically to ensure that the firewall continues to function effectively without performance degradation. Upon adding the one or more resources, the container host 220 transmits a response indicating the successful addition of the one or more resources at the container host 220. Subsequently, upon addition of the one or more resources at the container host 220, the updating unit 214 updates the inventory database 218. The update is related to the successful addition of the one or more resources at the container host 220.
[0054] Simultaneously, on addition of the one or more resources at the container host 220, the transmitting unit 216 is configured to transmit a confirmation to the UE 102. In an embodiment, the confirmation is transmitted to the UI 206. The confirmation indicates that the one or more resources are added to the container host 220.
[0055] In an embodiment, the inventory database 218, a container orchestrator 402 and the container host 220 communicate through a communication channel. The container orchestrator 402 is a system or tool that automates the deployment, management, scaling, and networking of containerized applications. In an exemplary embodiment, the container orchestrator 402 is at least a Docker Swarm Adapter (DSA). The DSA is a conceptual or specific implementation component designed to enable the integration of Docker Swarm with network functions within the network 106. The DSA is useful in scenarios where network functions (such as VNFs or CNFs) are deployed as containerized services. Docker Swarm is a container orchestration tool that allows a cluster of Docker engines to be managed. The communication channel is an interface between the inventory database 218 and the container orchestrator 402. The interface is at least an Inventory database-Container orchestrator (IM_DA) interface. The IM_DA interface aids communication between the inventory database 218, the container orchestrator 402 and the container host 220 in updating and modifying the one or more resources during the run time of the network functions 222.
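Purely as an illustration of traffic over the IM_DA interface, the container orchestrator 402 may push an update record to the inventory database 218 once resources have been added at the container host 220; the endpoint, path, and payload below are hypothetical:

import requests

def notify_inventory_of_addition(network_function_id: str, added: dict) -> None:
    # Called by the container orchestrator after resources are added at the container host.
    response = requests.put(
        f"http://pvim.example.local/api/v1/network-functions/{network_function_id}/resources",
        json={"event": "resources_added", "resources": added},
        timeout=10,
    )
    response.raise_for_status()                  # the inventory database confirms the update

# Example (not executed here): notify_inventory_of_addition("volte-cnfc", {"cpu_cores": 2, "ram_gb": 2})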
[0056] In an exemplary embodiment, a network optimization tool running in a container is experiencing high network traffic. The UE 102 identifies that the network optimization tool needs additional resources to handle the increased load and sends a resource allocation request. The resource allocation request specifies that the network optimization tool needs 3 additional CPU cores, 6 GB of RAM, and 30 GB of disk space to cope with the higher network traffic. To manage the new resources, the UE 102 sends the create request to the inventory database 218. The create request includes detailed information of the resources, such as 3 CPU cores, 6 GB of RAM, and 30 GB of disk space. The create request ensures that these resources are registered and available in the inventory database 218 for future use. The inventory database 218 processes the create request and updates its records with the new resources. The inventory database 218 transmits an acknowledgment back to the UE 102, confirming that the requested resources (3 CPU cores, 6 GB RAM, 30 GB disk space) have been successfully created and recorded. Upon receiving the resource allocation request from the UE 102, the adding unit 212 identifies the container host 220 where the network optimization tool is deployed and allocates the requested resources (3 CPU cores, 6 GB RAM, 30 GB disk space) to the container host 220. The container host 220 confirms the successful allocation of the resources (3 CPU cores, 6 GB RAM, 30 GB disk space). Thereafter, the inventory database 218 is updated with the allocated resources (3 CPU cores, 6 GB RAM, 30 GB disk space) at the container host 220. Subsequently, upon successful allocation of the resources (3 CPU cores, 6 GB RAM, 30 GB disk space) at the container host 220, the UE 102 receives a confirmation that the additional resources (3 CPU cores, 6 GB RAM, 30 GB disk space) are successfully allocated and the network optimization tool can handle the increased load efficiently.
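The workflow of the preceding paragraph can be summarised by the following end-to-end sketch, in which each step is reduced to a stub so that only the control flow is visible; the function names are assumptions of this illustration, and the real system would involve the inventory database 218, the container orchestrator 402 and the container host 220 as described with reference to FIG. 4:

def create_in_inventory(resources: dict) -> bool:
    print("inventory database: recorded", resources)       # create request + acknowledgement
    return True

def add_at_container_host(resources: dict) -> bool:
    print("container host: allocated", resources)          # adding unit allocates the resources
    return True

def update_inventory(resources: dict) -> None:
    print("inventory database: updated with allocation", resources)

def handle_resource_allocation_request(resources: dict) -> str:
    if not create_in_inventory(resources):
        return "inventory creation failed"
    if not add_at_container_host(resources):
        return "allocation at the container host failed"
    update_inventory(resources)
    return "confirmation: resources added to the container host"

print(handle_resource_allocation_request({"cpu_cores": 3, "ram_gb": 6, "disk_gb": 30}))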
[0057] Therefore, by managing the resources of the network function 222 the system 108 enables dynamic allocation of resources (such as CPU, RAM, disk space) to the network function based on real-time network requirements. The system 108 reduces the risk of resource mismanagement and ensures efficient tracking of available resources. Further, the system 108 ensures that network functions 222 operate efficiently, leading to optimized network performance and reduced latency.
[0058] FIG. 3 describes a preferred embodiment of the system 108 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 102a and the system 108 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0059] As mentioned earlier in FIG. 1, each of the first UE 102a, the second UE 102b, and the third UE 102c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 102a without deviating from or limiting the scope of the present disclosure. The first UE 102a includes one or more primary processors 302 communicably coupled to the one or more processors 202 of the system 108.
[0060] The one or more primary processors 302 are coupled with a memory 304 storing instructions which are executed by the one or more primary processors 302. Execution of the stored instructions by the one or more primary processors 302 enables the first UE 102a to transmit the resource allocation request to the one or more processors 202. The resource allocation request indicates one or more resources required to operate the network function 222 as per the network requirement. The first UE 102a is further configured to receive the confirmation that the one or more resources are added to the container host 220. The first UE 102a is further configured to transmit the create request to create the required resources as indicated in the create request at the inventory database 218. Thereafter, the first UE 102a is further configured to receive the acknowledgement pertaining to completion of the creation of the required resources.
[0061] As mentioned earlier in FIG. 2, the one or more processors 202 of the system 108 is configured to manage the resources of the network function in the network 106. As per the illustrated embodiment, the system 108 includes the one or more processors 202, the memory 204, the user interface 206, and the database 208. In an embodiment, the inventory database 218 and the container host 220 are communicably coupled to the system 108. The operations and functions of the one or more processors 202, the memory 204, the user interface 206, the database 208, the network function 222, the inventory database 218 and the container host 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0062] Further, the processor 202 includes the receiving unit 210, the adding unit 212, the updating unit 214, and the transmitting unit 216. The operations and functions of the receiving unit 210, the adding unit 212, the updating unit 214, and the transmitting unit 216 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description as provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0063] FIG. 4 is a signal flow diagram for managing the resources of the network function 222 in the network 106, according to one or more embodiments of the present invention.
[0064] At step 402, the inventory database 218 receives the create request from the UI 206. The create request is to create the one or more resources as indicated in the create request at the inventory database 218. The create request is intended to add the one or more resources as indicated in the create request to the inventory database 218 to ensure that the network function 222 has the necessary resources to operate effectively according to the network requirements.
[0065] At step 404, upon creating the required one or more resources, the inventory database 218 transmits a response indicating the successful creation of the required one or more resources to the UI 206.
[0066] At step 406, the container orchestrator 402 receives the resource allocation request from the UI 206. The resource allocation request indicates one or more resources required to operate the network function 222 as per the network requirement. The one or more resources are at least one of CPU, RAM, disk space, and a combination thereof, and the network requirement is based at least on the network traffic.
[0067] At step 408, upon receiving the request from the UI 206, the container orchestrator 402 adds the one or more resources at the container host 220.
[0068] At step 410, upon adding the one or more resources, the container host 220 transmits a response indicating the successful addition of the one or more resources.
[0069] At step 412, upon receiving the response pertaining to successful addition of the one or more resources from the container host 220, the container orchestrator 402 updates the inventory database 218. The update is related to the network function 222 on addition of the one or more resources at the container host 220.
[0070] At step 414, upon receiving the update related to the addition of the one or more resources of the network function 222 at the container host 220 from the container orchestrator 402, the inventory database 218 transmits a response indicating the successful update of the addition of the one or more resources of the network function 222 at the container host 220.
[0071] At step 416, subsequently, the container orchestrator 402 transmits the confirmation to the UI 206 that the one or more resources are added to the container host 220.
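For readability, the message sequence of FIG. 4 can be restated as an ordered list of exchanges; the tuple form below is only a summary of the steps above, not an interface definition:

SIGNAL_FLOW_FIG_4 = [
    ("UI 206",                 "inventory database 218", "create request (step 402)"),
    ("inventory database 218", "UI 206",                 "creation response (step 404)"),
    ("UI 206",                 "container orchestrator", "resource allocation request (step 406)"),
    ("container orchestrator", "container host 220",     "add resources (step 408)"),
    ("container host 220",     "container orchestrator", "addition response (step 410)"),
    ("container orchestrator", "inventory database 218", "inventory update (step 412)"),
    ("inventory database 218", "container orchestrator", "update response (step 414)"),
    ("container orchestrator", "UI 206",                 "confirmation (step 416)"),
]

for sender, receiver, message in SIGNAL_FLOW_FIG_4:
    print(f"{sender} -> {receiver}: {message}")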
[0072] FIG. 5 is a flow diagram of a method 500 for managing the resources of the network function 222 in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0073] At step 502, the method 500 includes the step of receiving the resource allocation request from the UE 102 by the receiving unit 210. The resource allocation request indicates one or more resources required to operate the network function 222 as per the network requirement. The one or more resources are at least one of CPU, RAM, disk space, and a combination thereof, and the network requirement is based at least on the network traffic.
[0074] In an embodiment, the inventory database 218 receives the create request from the UE 102. The create request is to create the required one or more resources as indicated in the create request at the inventory database 218. Upon creating the required resources as indicated in the create request at the inventory database 218, the inventory database 218 transmits the acknowledgment to the UE 102 pertaining to completion of the creation of the required resources.
[0075] At step 504, the method 500 includes the step of adding the one or more resources at the container host 220 based on the resource allocation request by the adding unit 212. Upon addition of the one or more resources at the container host 220, the transmitting unit 216 transmits the confirmation to the UE 102 that the one or more resources are added to the container host 220.
[0076] At step 506, the method 500 includes the step of updating the inventory database 218 related to the network function 222 on addition of the one or more resources at the container host 220 by the updating unit 214.
[0077] FIG. 6 illustrates an architecture framework 600 (e.g., MANO architecture framework), in which the present invention can be implemented in accordance with one or more embodiments of the present invention. The architecture framework 600 includes the user interface 206, a Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) design function module 602, a platform foundation service module 604, a platform core service module 606, and a platform resource adapter and utilities module 608.
[0078] The NFV and SDN design function module 602 is crucial for modernizing network infrastructure by enabling virtualized, scalable, and programmable network functions and management systems, particularly within the framework of CNFs. The platform foundation service module 604 refers to the underlying services and infrastructure components that support and enable the deployment, operation, and management of containerized network functions. The platform foundation service module 604 provides the essential capabilities and resources required for the CNF environment to function effectively.
[0079] The platform core service module 606 refers to the fundamental services and components that are essential for the core functionality and operation of containerized network functions. These services are critical for the effective deployment, execution, and management of CNFs, providing the necessary support and infrastructure for their operation. The platform resource adapter and utilities module 608 refers to a set of components and tools designed to manage and adapt various resources and services necessary for the operation of CNFs. The platform resource adapter and utilities module 608 plays a crucial role in integrating CNFs with underlying infrastructure and services, providing the necessary support for efficient operation, resource utilization, and interoperability.
[0080] The NFV and SDN design function module 602 includes a Virtual Network Function (VNF) lifecycle manager 602a, a VNF catalog 602b, a network service catalog 602c, a network slicing and service chaining manager 602d, a physical and virtual resource manager 602e, and a CNF lifecycle manager 602f.
[0081] The VNF lifecycle manager 602a is responsible for managing the entire lifecycle of VNFs. The VNF lifecycle manager 602a ensures that VNFs or CNFs are deployed, configured, monitored, scaled, and eventually decommissioned effectively. The VNF catalog 602b (referred to as a CNF catalog) is a repository or registry that stores information about various containerized network functions and their configurations. The VNF catalog 602b serves as a central reference for managing and deploying CNFs, providing details about their capabilities, requirements, and how they can be used within the network environment. The network service catalog 602c is a comprehensive repository that organizes and manages the information related to network services composed of multiple CNFs or other network functions. The network service catalog 602c serves as a central resource for defining, deploying, and managing these services within a containerized network environment.
[0082] The network slicing and service chaining manager 602d is a crucial component responsible for orchestrating and managing network slicing and service chaining functionalities. The network slicing and service chaining functionalities are essential for efficiently utilizing network resources and delivering tailored network services in a dynamic and scalable manner. The physical and virtual resource manager 602e is a critical component responsible for overseeing and managing both physical and virtual resources required to support the deployment, operation, and scaling of CNFs. The physical and virtual resource manager 602e ensures that the necessary resources are allocated efficiently and effectively to meet the performance, availability, and scalability requirements of containerized network functions.
[0083] Further, the CNF lifecycle manager 602f is a component responsible for overseeing the entire lifecycle of containerized network functions. This includes the management of CNFs from their initial deployment through ongoing operation and maintenance, up to their eventual decommissioning. The CNF lifecycle manager 602f ensures that the CNFs are efficiently deployed, monitored, scaled, updated, and removed, facilitating the smooth operation of network services in a containerized environment.
[0084] The platform foundation service module 604 includes a microservice elastic load balancer 604a, an identity and access manager 604b, a command line interface 604c, a central logging manager 604d and an event routing manager 604e.
[0085] The microservice elastic load balancer 604a is a specific type of load balancer designed to dynamically distribute network traffic across a set of microservices running in a containerized environment. The primary purpose of the microservice elastic load balancer 604a is to ensure efficient resource utilization, maintain high availability, and improve the performance of network services by evenly distributing incoming traffic among multiple instances of microservices. The identity and access manager 604b is a critical component responsible for managing and securing access to containerized network functions and their resources. The identity and access manager 604b ensures that only authorized users and systems can access specific resources, and it enforces policies related to identity verification, authentication, authorization, and auditing within the CNF ecosystem.
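As a purely illustrative sketch of the traffic-distribution behaviour described for the microservice elastic load balancer 604a (the actual component is not specified at this level of detail), a round-robin spread of requests over microservice instances can be expressed as follows; the instance names are hypothetical:

import itertools

class RoundRobinBalancer:
    # Distribute incoming requests evenly across running microservice instances.
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request_id: str) -> str:
        instance = next(self._cycle)
        return f"request {request_id} -> {instance}"

balancer = RoundRobinBalancer(["cnfc-instance-1", "cnfc-instance-2", "cnfc-instance-3"])
for i in range(5):
    print(balancer.route(str(i)))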
[0086] The central logging manager 604d is a component responsible for aggregating, managing, and analyzing log data from various containerized network functions and associated infrastructure components. The central logging manager 604d ensures that logs are collected from disparate sources, consolidated into a single repository, and made accessible for monitoring, troubleshooting, and auditing purposes. The event routing manager 604e is a component responsible for handling the distribution and routing of events and notifications generated by various parts of the CNF environment. The event routing manager 604e includes events related to system status, performance metrics, errors, and other operational or application-level events. The event routing manager 604e ensures that these events are efficiently routed to the appropriate consumers, such as monitoring systems, alerting systems, or logging infrastructure, for further processing and action.
[0087] The platform core service module 606 includes an NFV infrastructure monitoring manager 606a, an assurance manager 606b, a performance manager 606c, a policy execution engine 606d, a capacity monitoring manager 606e, a release management repository 606f, a configuration manager and GCT 606g, a NFV platform decision analytics unit 606h, a platform NoSQL DB 606i, a platform scheduler and Cron Jobs module 606j, a VNF backup & upgrade manager 606k, a micro service auditor 606l, and a platform operation, administration and maintenance manager 606m.
[0088] The NFV infrastructure monitoring manager 606a monitors the underlying infrastructure of NFV environments, including computing, storage, and network resources. The NFV infrastructure monitoring manager 606a provides real-time visibility into resource health, performance, and utilization. Further, the NFV infrastructure monitoring manager 606a detects and alerts infrastructure issues. Further, the NFV infrastructure monitoring manager 606a integrates with monitoring tools to ensure reliable operation of CNFs.
[0089] The assurance manager 606b manages the quality and reliability of network services by ensuring compliance with service level agreements (SLAs) and operational standards. The performance manager 606c optimizes the performance of CNFs by tracking and analyzing key performance indicators (KPIs). The policy execution engine 606d enforces and applies policies within the CNF environment to manage operations and access. Further, the policy execution engine 606d executes policies related to security, resource allocation, and service quality. Further, the policy execution engine 606d executes policies, translates policy rules into actionable configurations and enforces compliance across CNFs.
[0090] The capacity monitoring manager 606e monitors and manages the capacity of resources within the CNF environment to ensure optimal usage and avoid resource shortages. The release management repository 606f stores and manages software releases, configurations, and versions of CNFs. Further, the release management repository 606f keeps track of different versions of CNFs.
[0091] The configuration manager and Generic Configuration Tool (GCT) 606g manages the configuration of CNFs and related infrastructure components. The NFV platform decision analytics unit 606h analyzes data from an NFV platform to support decision-making and strategic planning.
[0092] The platform NoSQL database (DB) 606i is used for storing and managing large volumes of unstructured or semi-structured data within the CNF environment. The platform scheduler and Cron Jobs module 606j manage scheduled tasks and periodic operations within the CNF environment. The VNF backup & upgrade manager 606k oversees the backup and upgrade processes for VNFs within the CNF environment.
[0093] The micro service auditor 606l monitors and audits microservices to ensure compliance with operational and security standards. The platform operation, administration and maintenance manager 606m manages the overall operation, administration, and maintenance of the CNF platform.
[0094] The platform resource adapter and utilities module 608 includes a platform external API adaptor and gateway 608a, a generic decoder and indexer 608b, a swarm adaptor 608c, an OpenStack API adaptor 608d, and an NFV gateway 608e.
[0095] The platform external API adaptor and gateway 608a facilitates communication between the CNF platform and external systems or services by providing an interface for API interactions. The generic decoder and indexer 608b decodes and indexes various types of data and logs within the CNF environment. The swarm adaptor 608c facilitates communication between a swarm cluster and the CNF environment, including container deployment, scaling, and management.
[0096] The OpenStack API adaptor 608d provides an interface for the CNF platform to interact with OpenStack APIs, enabling operations such as provisioning, scaling, and managing virtual resources. The NFV gateway 608e manages and facilitates communication between NFV (Network Functions Virtualization) components and external networks or services.
[0097] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive a resource allocation request from the UE 102. The resource allocation request indicates one or more resources required to operate the network function 222 as per the network requirement. The processor 202 is further configured to add one or more resources at the container host 220 based on the resource allocation request. The processor 202 is further configured to update the inventory database 218 related to the network function 222 on addition of the one or more resources at the container host 220.
[0098] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0099] The present disclosure incorporates the technical advancement of allocating or adding multiple resources according to requirements. The resources are categorized and stored based on parameters such as tiny, small, medium, large, and extra-large. The allocated resources can be easily accessed in the future. The end user can update the resources according to their requirements. Further, the present invention enables dynamic allocation of resources (such as CPU, RAM, and disk space) to the network function based on real-time network requirements. Further, the present invention reduces the risk of resource mismanagement and ensures efficient tracking of available resources. Further, the present invention ensures that network functions operate efficiently, leading to optimized network performance and reduced latency. Furthermore, the present invention eliminates downtime, efficiently utilizing time and resources.
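By way of illustration only, the flavor categories mentioned above could be mapped to concrete resource amounts as in the following sketch; the specific sizes are assumptions of this example and are not values defined by the invention:

FLAVOR_CATALOG = {
    # name: (CPU cores, RAM in GB, disk in GB) -- illustrative sizes only
    "tiny":        (1,  1,  10),
    "small":       (2,  4,  20),
    "medium":      (4,  8,  50),
    "large":       (8, 16, 100),
    "extra-large": (16, 32, 200),
}

def flavor_resources(name: str) -> dict:
    cpu, ram_gb, disk_gb = FLAVOR_CATALOG[name]
    return {"cpu_cores": cpu, "ram_gb": ram_gb, "disk_gb": disk_gb}

print(flavor_resources("medium"))   # {'cpu_cores': 4, 'ram_gb': 8, 'disk_gb': 50}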
[00100] The present invention offers multiple advantages over the prior art, and the above-listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[00101] Environment- 100
[00102] User Equipment (UE)- 102
[00103] Server- 104
[00104] Network- 106
[00105] System -108
[00106] Processor- 202
[00107] Memory- 204
[00108] User Interface- 206
[00109] Database- 208
[00110] Receiving Unit- 210
[00111] Adding Unit- 212
[00112] Updating unit- 214
[00113] Transmitting Unit- 216
[00114] Inventory database- 218
[00115] Container host- 220
[00116] Network Function-222
[00117] One or more primary processors- 302
[00118] Memory- 304
[00119] Container orchestrator- 402
[00120] NFV and SDN design function module -602
[00121] Virtual Network Function (VNF) lifecycle manager -602a
[00122] VNF catalog -602b
[00123] Network service catalog -602c
[00124] Network slicing and service chaining manager -602d
[00125] Physical and virtual resource manager -602e
[00126] CNF lifecycle manager -602f
[00127] Platform foundation service module -604
[00128] Microservice elastic load balancer -604a
[00129] Identity and access manager -604b
[00130] Command line interface -604c
[00131] Central logging manager -604d
[00132] Event routing manager -604e
[00133] Platform core service module -606
[00134] NFV infrastructure monitoring manager -606a
[00135] Assurance manager -606b
[00136] Performance manager -606c
[00137] Policy execution engine -606d,
[00138] Capacity monitoring manager -606e
[00139] Release management repository -606f
[00140] Configuration manager and GCT -606g
[00141] NFV platform decision analytics unit -606h
[00142] Platform NoSQL DB -606i
[00143] Platform scheduler and Cron Jobs module -606j
[00144] VNF backup & upgrade manager -606k
[00145] Micro service auditor -606l
[00146] Platform operation, administration and maintenance manager -606m
[00147] Platform resource adapter and utilities module -608
[00148] Platform external API adaptor and gateway -608a
[00149] Generic decoder and indexer -608b
[00150] Swarm adaptor -608c
[00151] OpenStack API adaptor -608d
[00152] NFV gateway -608e
CLAIMS
We Claim:
1. A method (500) of managing resources of a network function in a network (106), the method (500) comprising the steps of:
receiving, by one or more processors (202), a resource allocation request from a User Equipment (UE) (102), wherein the resource allocation request indicates one or more resources required to operate a network function (222) as per a network requirement;
adding, by the one or more processors (202), the one or more resources at a container host (220) based on the resource allocation request; and
updating, by the one or more processors (202), an inventory database (218) related to the network function (222) on addition of the one or more resources at the container host (220).
2. The method (500) as claimed in claim 1, wherein the one or more resource allocation requests are received dynamically during run time of the network function (222) to change a configuration of the network function (222).
3. The method (500) as claimed in claim 1, wherein on addition of the one or more resources at the container host (220), the method (500) comprises the step of:
transmitting, by the one or more processors (202), a confirmation to the UE (102) that the one or more resources are added to the container host (220).
4. The method (500) as claimed in claim 1, wherein the inventory database (218) receives a create request from the UE (102) to create required resources as indicated in the create request at the inventory database (218).
5. The method (500) as claimed in claim 1, wherein the one or more resources of the network function (222) are added at one of, a designing stage and a run time stage of the network function (222).
6. The method (500) as claimed in claim 1, wherein the one or more resources is at least one of, a Central Processing Unit (CPU), Random Access Memory (RAM), disk space, and a combination thereof, and wherein the network requirement is based at least on network traffic.
7. The method (500) as claimed in claim 4, wherein the inventory database (218) transmits an acknowledgment to the UE (102) pertaining to completing the creation of required resources, upon creating the required resources as indicated in the create request at the inventory database (218).
8. The method (500) as claimed in claim 1, wherein the one or more processors (202) communicate with the inventory database (218) and the container host (220) via a communication channel.
9. The method (500) as claimed in claim 8, wherein the communication channel is an interface between a container orchestrator (402) and the inventory database (218).
10. The method (500) as claimed in claim 9, wherein the interface is at least one of, an IM_DA interface.
11. A system (108) for managing resources of a network function in a network (106), the system (108) comprising:
a receiving unit (210), configured to, receive, a resource allocation request from a User Equipment (UE) (102), wherein the resource allocation request indicates one or more resources required to operate the network function (222) as per a network requirement;
an adding unit (212), configured to, add, the one or more resources at a container host (220) based on the resource allocation request; and
an updating unit (214), configured to, update, an inventory database (218) related to the network function (222) on addition of the one or more resources at the container host (220).
12. The system (108) as claimed in claim 11, wherein the one or more resource allocation requests are received dynamically during run time of the network function (222) to change a configuration of the network function (222).
13. The system (108) as claimed in claim 11, wherein on addition of the one or more resources at the container host (220), the system (108) comprises:
a transmitting unit (216), configured to, transmit, a confirmation to the UE (102) that the one or more resources are added to the container host (220).
14. The system (108) as claimed in claim 11, wherein the inventory database (218) receives a create request from the UE (102) to create required resources as indicated in the create request at the inventory database (218).
15. The system (108) as claimed in claim 11, wherein the one or more resources of the network function (222) are managed at one of, a designing stage and a run time stage of the network function (222).
16. The system (108) as claimed in claim 11, wherein the one or more resources is at least one of, a Central Processing Unit (CPU), Random Access Memory (RAM), disk space, and a combination thereof, and wherein the network requirement is based at least on network traffic.
17. The system (108) as claimed in claim 14, wherein the inventory database (218) transmits an acknowledgment to the UE (102) pertaining to completing the creation of required resources, upon creating the required resources as indicated in the create request at the inventory database (218).
18. The system (108) as claimed in claim 11, wherein the system communicates with the inventory database (218) and the container host (220) via a communication channel.
19. The system (108) as claimed in claim 18, wherein the communication channel is an interface between a container orchestrator (402) and the inventory database (218).
20. The system (108) as claimed in claim 19, wherein the interface is at least one of an IM_DA interface.
21. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (302) cause the UE (102) to:
transmit, a resource allocation request to the one or more processors (202), wherein the resource allocation request indicates one or more resources required to operate the network function (222) as per a network requirement,
receive, a confirmation that the one or more resources are added to the container host (220),
transmit, an add request to add required resources as indicated in the add request at the inventory database (218),
receive, an acknowledgement pertaining to completing the addition of required resources,
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.
| # | Name | Date |
|---|---|---|
| 1 | 202321061736-STATEMENT OF UNDERTAKING (FORM 3) [13-09-2023(online)].pdf | 2023-09-13 |
| 2 | 202321061736-PROVISIONAL SPECIFICATION [13-09-2023(online)].pdf | 2023-09-13 |
| 3 | 202321061736-POWER OF AUTHORITY [13-09-2023(online)].pdf | 2023-09-13 |
| 4 | 202321061736-FORM 1 [13-09-2023(online)].pdf | 2023-09-13 |
| 5 | 202321061736-FIGURE OF ABSTRACT [13-09-2023(online)].pdf | 2023-09-13 |
| 6 | 202321061736-DRAWINGS [13-09-2023(online)].pdf | 2023-09-13 |
| 7 | 202321061736-DECLARATION OF INVENTORSHIP (FORM 5) [13-09-2023(online)].pdf | 2023-09-13 |
| 8 | 202321061736-FORM-26 [27-11-2023(online)].pdf | 2023-11-27 |
| 9 | 202321061736-Proof of Right [12-02-2024(online)].pdf | 2024-02-12 |
| 10 | 202321061736-DRAWING [11-09-2024(online)].pdf | 2024-09-11 |
| 11 | 202321061736-COMPLETE SPECIFICATION [11-09-2024(online)].pdf | 2024-09-11 |
| 12 | Abstract 1.jpg | 2024-10-08 |
| 13 | 202321061736-FORM-9 [10-01-2025(online)].pdf | 2025-01-10 |
| 14 | 202321061736-FORM 18A [14-01-2025(online)].pdf | 2025-01-14 |
| 15 | 202321061736-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321061736-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 17 | 202321061736-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 18 | 202321061736-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 19 | 202321061736-FORM 3 [29-01-2025(online)].pdf | 2025-01-29 |
| 20 | 202321061736-FER.pdf | 2025-03-06 |
| 21 | 202321061736-FER_SER_REPLY [22-04-2025(online)].pdf | 2025-04-22 |
| 22 | 202321061736-US(14)-HearingNotice-(HearingDate-14-10-2025).pdf | 2025-09-19 |
| 23 | 202321061736-Correspondence to notify the Controller [22-09-2025(online)].pdf | 2025-09-22 |
| 24 | 202321061736-Written submissions and relevant documents [23-10-2025(online)].pdf | 2025-10-23 |
| 25 | 202321061736-PatentCertificate17-11-2025.pdf | 2025-11-17 |
| 26 | 202321061736-IntimationOfGrant17-11-2025.pdf | 2025-11-17 |
| 1 | 202321061736_SearchStrategyNew_E_SearchHistory-1736E_05-03-2025.pdf | |