
Method And System For Deploying An Application In An Environment

Abstract: The present disclosure relates to a system (120) and a method (700) for deploying an application in an environment. The method (700) includes the step of creating, by utilizing a compiled logic, one or more binary folder/image for deploying the application in the environment based on a request received from a user. The method (700) further includes the step of adding the created one or more binary folder/image in a container to create a Containerized Network Function (CNF). The method (700) includes the step of deploying the created CNF pertaining to the application in the environment. Ref. Fig. 2


Patent Information

Application #:
Filing Date: 15 July 2023
Publication Number: 03/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
3. Rizwan Ahmad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
4. Kapil Gill
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
5. Shashank Bhushan
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR DEPLOYING AN APPLICATION IN AN ENVIRONMENT
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION

[0001] The present invention relates to a system and a method for application deployment, and more specifically to a system and a method for deploying an application in an environment.
BACKGROUND OF THE INVENTION
[0002] A Fulfilment Management System (FMS) has the capability of orchestration management, provisioning management, and inventory management within the same architecture. An FMS requires various software deployment capabilities and architectures.

[0003] Software deployment relates to activities that occur to make a software system available for use and able to run in a specific environment. It brings key advantages to enterprises. Tasks like installing, uninstalling and updating software applications on each computer are time consuming. Software deployment services aim to reduce the time and to make the process error free. Software can be easily controlled and managed through deployment as it enables transition of the capability to the ultimate end-user, as well as transition of support and maintenance responsibilities to the post-deployment support organization or organizations.

[0004] However, for traditionally available deployment technologies to support all advanced types of architectures, such as cloud-based, hybrid, and docker-type deployments, various changes at the code level and/or configuration level are required to deploy the system/software in a different environment, which consumes considerable time and effort.

[0005] Therefore, current deployment technologies do not support all advanced types of architectures. In order to enable deployment in any type of environment, various changes at the code and configuration levels are required, making deployment difficult and the process time consuming.

SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present invention provide a method and a system for deploying an application in an environment.
[0007] In one aspect of the present invention, the method for deploying the application in the environment is disclosed. The method includes the step of creating, by utilizing a compiled logic, one or more binary folder/image which includes data related to a plurality of required resources for deploying the application in the environment based on a request received from a user. Further, the method includes the step of adding the created one or more binary folder/image in a container to create a Containerized Network Function (CNF). Further, the method includes the step of deploying the created CNF pertaining to the application in the environment.
[0008] In one embodiment, the environment includes at least one of, a hybrid server, a bare metal server, a public cloud, a private cloud, a cloud native.
[0009] In one embodiment the plurality of required resources includes at least one of, one or more configuration files, one or more libraries, a docker file, one or more script files, and a runnable jar.
[0010] In one embodiment, the CNF pertains to a cloud native network function.
[0011] In one embodiment, the step of deploying the created CNF pertaining to the application in an environment is performed utilizing a Management and Orchestration (MANO) platform via a containerized interface. The MANO platform includes at least one of the Kubernetes.
[0012] In one embodiment, the application includes, at least one of a Fulfilment Management System (FMS) and/or a combination of at least one of, an inventory system, a provision system and an orchestration system.
[0013] In another aspect of the present invention, the system for deploying the application in the environment is disclosed. The system includes a generation unit, configured to create, by utilizing a compiled logic, one or more binary folder/image. The one or more binary folder/image includes data related to the plurality of required resources for deploying the application in the environment based on a request received from the user. The system further includes a computation unit configured to add the created one or more binary folder/image in a container to create the Containerized Network Function (CNF). Further, the system includes a deployment unit, configured to deploy the created CNF pertaining to the application in the environment. In yet another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by the processor. The processor is configured to create, by utilizing a compiled logic, one or more binary folder/image. The binary folder/image includes data related to the plurality of required resources for deploying the application in an environment based on a request received from the user. The processor is further configured to add the created one or more binary folder/image in a container to create the Containerized Network Function (CNF). Further, the processor is configured to deploy the created CNF pertaining to the application in the environment.
[0014] In yet another aspect of the invention, a User Equipment (UE) includes one or more primary processors. The one or more primary processors are communicatively coupled to one or more processors, and a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the UE to transmit the request to the one or more processors for deploying an application in an environment.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of a communication system for deploying an application in an environment, according to one or more embodiments of the present disclosure;
[0018] FIG. 2 is an exemplary block diagram of the system for deploying the application in the environment, according to one or more embodiments of the present disclosure;
[0019] FIG. 3 is a schematic representation of a workflow of system of FIG. 2, according to one or more embodiments of the present disclosure;
[0020] FIG. 4 is a workflow diagram illustrating the system for deploying the application in the environment, according to one or more embodiments of the present disclosure;
[0021] FIG. 5 is an exemplary block diagram of an architecture implemented in the system of FIG.2, according to one or more embodiments of the present disclosure;

[0022] FIG. 6 is a signal flow diagram for deploying the application in the environment, according to one or more embodiments of the present disclosure; and
[0023] FIG. 7 is a flow diagram illustrating the method for deploying the application in the environment, according to one or more embodiments of the present disclosure.
[0024] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0025] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0026] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0027] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0028] The present disclosure addresses the challenges faced in established technologies where the deployment technology does not support all advanced types of architectures. In order to enable deployment in any type of environment various changes to code and configuration levels of applications are required, making it difficult to deploy and the method time consuming. To overcome the above-mentioned challenges, the present invention introduces a novel technique which reduces time required for deployment of application/software on various deployment architectures. The present disclosure facilitates an error free method to deploy application/software in any type of environment.
[0029] FIG. 1 illustrates an exemplary block diagram of a communication system 100 for deploying an application in an environment, according to one or more embodiments of the present disclosure. The communication system 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 to transmit the request to the one or more processors for deploying an application in an environment.
[0030] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the UE 110 from the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0031] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0032] The network 105 may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0033] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0034] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0035] The communication system 100 includes the server 115 accessible via the network 105. The server 115 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0036] The communication system 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is adapted to be embedded within the server 115 or is embedded as an individual entity.
[0037] Operational and construction features of the system 120 will be explained in detail with respect to the following figures. FIG. 2 illustrates an exemplary block diagram of the system 120 for deploying the application in the environment, according to one or more embodiments of the present disclosure. In an embodiment, the application includes, but is not limited to, a Fulfillment Management System (FMS). The environment includes, but is not limited to, a hybrid server, a bare metal server, a public cloud, a private cloud, and cloud-native.
[0038] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a user interface 215, and a database 220.
[0039] For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0040] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium which may be fetched and executed to display the enriched data to the user via the user interface in order to perform analysis. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0041] In an embodiment, the user interface unit 215 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. In one embodiment, the user interface unit 215 provides a communication pathway for one or more components of the system 120.
[0042] The database 220 is configured to store data such as the binary folder/image. Further, the database 220 provides structured storage, support for complex queries, and enables efficient data retrieval and analysis. The database 220 is, but is not limited to, one of a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source, etc.
[0043] In order for the system 120 to deploy the application in the environment, the processor 205 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a generating unit 225, a computation unit 230, and a deployment unit 235 communicably coupled to each other for deploying the application in the communication system 100.
[0044] The generating unit 225, the computation unit 230, and the deployment unit 235, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0045] The generating unit 225 is configured to create one or more binary folder/image by utilizing a compiled logic. The binary folder/image includes data related to a plurality of required resources for deploying the application in the environment based on a request received from a user. The binary folder/image refers to a packaged form of the application and its dependencies for the deployment in an environment. The environment refers to a computing environment that facilitates applications to run within containers or virtual machines. The containers refer to lightweight, standalone, executable packages. The containers may include, but are not limited to, code, runtime, system tools, and system libraries. The plurality of required resources includes at least one of, but is not limited to, one or more configuration files, one or more libraries, a docker file, one or more script files, and a runnable jar.
[0046] In one embodiment, the one or more configuration files refer to structured documents containing information pertaining to specific settings and parameters essential for proper functioning of the FMS application (the information about the FMS is mentioned in FIG. 4) in the network 105. In an embodiment, the specific settings and parameters include, but are not limited to, networking configuration, deployment configuration, scaling and auto-scaling, resource management, monitoring and logging, security and compliance, fault tolerance and high availability, application-specific configurations, and versioning and dependency management.
[0047] In one embodiment, the one or more libraries refer to precompiled functions; the one or more libraries encapsulate specific functionalities essential for operation of the FMS application. The one or more libraries can include, but are not limited to, Software-Defined Networking (SDN) libraries, Network Function Virtualization (NFV) libraries, specific protocol stacks, and a docker file. The docker file is a text document used to define the environment and dependencies, such as but not limited to, base image selection, installation of network tools, configuration of network settings, libraries for protocols, and network function dependencies, to containerize the FMS application.
[0048] Further, the docker file contains information pertaining to building a docker image. The docker file contains instructions for configuring the environment, installing dependencies, and setting up the runtime environment necessary for the FMS. In one embodiment, the one or more script files automate tasks such as, but not limited to, deployment, configuration, and maintenance of the FMS.
[0049] The script files include, but are not limited to, bash scripts for automating network function deployment, python scripts for orchestrating service instances, and network configuration across multiple nodes. In one embodiment, the runnable jars package the FMS and the dependencies of the FMS into a single executable file format. The runnable jar can include, but is not limited to, network management applications and service orchestrators, ensuring the FMS can be easily deployed and scaled within any deployment architecture such as, but not limited to, virtual machines and physical servers.
[0050] In one embodiment, the compiled logic refers to the process of transforming the source code of the FMS application into machine-readable binary code. In one embodiment, the generating unit 225 accepts the source code of the FMS application as input. Further, the generating unit 225 utilizes its compiler to translate the accepted source code into machine-readable binary code. Upon compilation, the resulting binary code, along with the required resources, is packed into the one or more binary folder/image. The one or more binary folder/image includes, but is not limited to, configuration files, libraries, a docker file, script files, and runnable jars. The source code refers to the original human-readable representation of a computer program written in a programming language. The programming language may include C, C++, Java, Python, and so on. The source code comprises statements, expressions, and declarations composed in a programming language's syntax. The machine-readable binary code refers to a sequence of binary digits (0s and 1s). The binary digits (0s and 1s) include instructions and data in a format directly understandable by a computer's Central Processing Unit (CPU).
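By way of illustration only, the following Python sketch shows one possible realization of the compiled-logic step described above, assuming a Java-based FMS whose sources reside in a src directory and whose deployment resources (configuration files, libraries, docker file, script files) reside in a resources directory; all paths, directory names, and the entry-point class are hypothetical and are not taken from the specification.

import glob
import shutil
import subprocess
from pathlib import Path

def create_binary_folder(src_dir: str = "src",
                         resources_dir: str = "resources",
                         out_dir: str = "binary_folder") -> Path:
    """Compile the (assumed Java) FMS sources and assemble the binary folder."""
    out = Path(out_dir)
    classes = out / "classes"
    classes.mkdir(parents=True, exist_ok=True)

    # Translate the human-readable source code into machine-readable bytecode.
    sources = glob.glob(f"{src_dir}/**/*.java", recursive=True)
    subprocess.run(["javac", "-d", str(classes), *sources], check=True)

    # Package the compiled code into a runnable jar (entry point is hypothetical).
    subprocess.run(["jar", "cfe", str(out / "fms.jar"), "com.example.fms.Main",
                    "-C", str(classes), "."], check=True)

    # Copy the remaining required resources (config files, libraries, docker
    # file, scripts) alongside the jar to complete the binary folder/image.
    shutil.copytree(resources_dir, out / "resources", dirs_exist_ok=True)
    return out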
[0051] Upon creating the one or more binary folder/image, the one or more binary folder/image is transmitted to the computation unit 230. The computation unit 230 is configured to add the created one or more binary folder/image in a container to create a Containerized Network Function (CNF). The CNF refers to a network function implemented using containerization technology. The containerization technology refers to encapsulating network functions and applications into lightweight, portable containers. The network function refers to the various software-based operations performed in the network 105 to facilitate communication services. The operations may include, but are not limited to, radio resource management, packet routing and forwarding, mobility management, quality of service management, and authentication and security.
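As a non-limiting sketch of this containerization step, the snippet below uses the Docker SDK for Python to build a container image from the binary folder produced earlier, assuming that folder contains a docker file at its root; the image tag and registry are illustrative assumptions.

import docker  # Docker SDK for Python (pip install docker)

def build_cnf_image(binary_folder: str = "binary_folder",
                    tag: str = "registry.example.com/fms-cnf:1.0") -> str:
    """Package the binary folder/image into a container image for the CNF."""
    client = docker.from_env()
    # The docker file inside the binary folder defines the base image,
    # dependencies, and runtime configuration described above.
    image, _build_logs = client.images.build(path=binary_folder, tag=tag)
    # In practice the image would then be pushed to a registry reachable by
    # the orchestrator before deployment.
    return tag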
[0052] In an embodiment, the CNF pertains to cloud native network function. The cloud native network function refers to the network function specifically designed and optimized for deployment in cloud-native environments. The CNF includes, but is not limited to, microservices architecture, containerization, and orchestration. The CNF provides rapid deployment, scaling, and management of the network functions in the distributed cloud environments.
[0053] The containerized binary folder/image in the CNF is transmitted to the deployment unit 235. The deployment unit 235 is configured to deploy the created CNF pertaining to the application in the environment. The environment includes at least one of a hybrid server, a bare metal server, a public cloud, a private cloud, and cloud-native. The deployment unit 235 deploys the created CNF pertaining to the application in the environment utilizing a Management and Orchestration (MANO) platform via a containerized interface. The MANO platform refers to a framework that facilitates the automated management and orchestration of the network functions and resources in the network 105. The MANO platform facilitates the provisioning, configuration, monitoring, and optimization of network functions and resources in the network 105. The MANO platform includes at least one of the Kubernetes.
[0054] The Kubernetes refers to an open-source container orchestration platform. The Kubernetes automates the deployment, scaling, and management of the containerized applications and the network functions in the network 105. The Kubernetes functions as the central MANO layer for CNFs, enabling operators to efficiently deploy, scale, and manage services of the network 105 across distributed infrastructure environments. The containerized interface refers to the interface or mechanism which is utilized by the Kubernetes of the MANO platform to interact with the CNF during the deployment process.
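Assuming Kubernetes acts as the MANO platform, the following hedged sketch uses the official Kubernetes Python client (pip install kubernetes) to deploy the CNF image as a Deployment; the namespace, replica count, and names are illustrative assumptions rather than values from the specification.

from kubernetes import client, config

def deploy_cnf(image: str = "registry.example.com/fms-cnf:1.0",
               name: str = "fms-cnf", namespace: str = "default") -> None:
    """Deploy the CNF image through the Kubernetes API (the containerized interface)."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    labels = {"app": name}
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]),
            ),
        ),
    )
    # The orchestrator (MANO layer) schedules and manages the CNF across the
    # target environment once the Deployment object is created.
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace,
                                                    body=deployment)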
[0055] FIG. 3 is a schematic representation of the workflow of the system 120 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0056] As mentioned earlier in FIG. 1, the UE 110 may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120.
[0057] The one or more primary processors 305 are coupled with a memory unit 310 storing instructions which are executed by the one or more primary processors 305. The one or more primary processors 305 enable the first UE 110a to deploy an application in an environment.
[0058] As per the illustrated embodiment, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 220. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0059] Further, the processor 205 includes the generating unit 225, the computation unit 230, and the deployment unit 235. The operations and functions of the generating unit 225, the computation unit 230, and the deployment unit 235 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0060] FIG. 4 is a workflow diagram of the system for deploying an application in an environment, according to one or more embodiments of the present invention. The exemplary architecture as illustrated in FIG. 4 includes a Containerized Network Function (CNF) 405 pertaining to an application, for example, a Fulfillment Management System (FMS), a Management and Orchestration (MANO) unit 410, a bare metal 415, a private cloud 420, a public cloud 425, and the UE 110. In an embodiment, the FMS includes a binary folder/image which includes code compiled to generate a binary folder containing the required resources: configuration files, libraries, docker file, script files, runnable jar, etc.
[0061] In an embodiment, the application to be deployed is the FMS. The FMS is a cloud-native application orchestrated with an orchestrator like Kubernetes. The Kubernetes refers to an orchestration system for automating software deployment, scaling, and management.
[0062] In an embodiment, the CNF 405 contains the one or more binary folders/images pertaining to the FMS which is required to be deployed in the environment such as but not limited to the bare metals 415, the private cloud 420 and the public cloud 425.
[0063] In an embodiment, the deployment of the CNF 405 pertaining to the application such as the FMS in the bare metal 415 is performed to facilitate the high performance of the bare metal 415. The bare metal 415 contains hardware resources which include, but are not limited to, CPU, RAM, storage, and network ports. Further, the FMS deployment in the bare metals 415 eliminates the overhead associated with a hypervisor. In an embodiment, the plurality of hardware resources of the bare metals 415 are primarily allocated to containerized workloads, ensuring every CPU cycle is allocated to business applications. Further, the network packets and storage operations of the bare metals 415 are handled directly, bypassing hypervisor intervention. The FMS can be easily deployed in the bare metals 415 such as, but not limited to, data centers and the like.
[0064] In an embodiment, the deployment of the CNF 405 pertaining to the application such as the FMS on cloud servers, which may be private and/or public cloud servers, is performed if the requirement for high scalability and flexibility is to be satisfied. The private cloud may include, but is not limited to, a telecommunications operator's private cloud, an enterprise private cloud, a government/private sector hybrid cloud, a research and development lab, and a critical infrastructure provider. The public cloud may include, but is not limited to, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM Cloud, and Oracle Cloud Infrastructure.
[0065] In an embodiment, the deployment of the CNF 405 pertaining to the application such as the FMS in the private cloud 420 is performed with the implementation of management and orchestration capabilities. The Management and Orchestration (MANO) 410 is selected as the orchestration manager to support deployment in the private cloud 420. The deployment of the FMS in the private cloud 420 facilitates achieving scalability and high flexibility.
[0066] In an embodiment, the deployment of the FMS in the public cloud 425 is performed with any advanced type of architecture. The architecture includes, but is not limited to, cloud based, hybrid and docker type. The architecture may be any one of the containers or virtual machines or a combination thereof.
[0067] FIG. 5 is an exemplary architecture of the FMS 505 application implemented in the system 120 of FIG. 2, according to one or more embodiments of the present invention. The exemplary embodiment as illustrated in FIG. 5 includes the user interface 215, a dynamic routing manager 505, a distributed database 510 having a distributed data lake 515, a cache data store 520, a dynamic activator 525, a workflow manager 530, a message broker 535, a graph database 540, an operation and management module 545, and a load balancer 550.
[0068] In an embodiment, the user interface 215 serves as the front-end component, which facilitates users, such as but not limited to a network operator, a subscriber, or a network administrator, to interact with the FMS 505. Further, the user interface 215 facilitates users such as the network operator to analyse and debug the FMS 505.
[0069] The dynamic routing manager 505 is a system designed to manage and optimize the routing of data packets in real-time based on current network conditions, policies, and demands. The dynamic routing manager 505 ensures that data takes the most efficient and effective path through the network 105, adapting to changes such as traffic load, link failures, and varying bandwidth requirements.
[0070] The distributed data lake 515 is an architectural approach to storing and managing vast amounts of structured and unstructured data across multiple locations or systems. The distributed data lake 515 leverages distributed computing and storage technologies to provide scalable, flexible, and efficient data storage solutions, enabling organizations to manage and analyze large datasets seamlessly.
[0071] The workflow manager 530 is a system designed to automate and manage various operational processes within the FMS. The primary function of the workflow manager 530 is to streamline, coordinate, and monitor complex tasks, ensuring efficient and consistent execution of workflows. To deploy the FMS 505 in any architecture, such as but not limited to the bare metals 415, the private cloud 420, and the public cloud 425, the workflow manager 530 is configured to read the configuration stored in the cache data store 520 and proceed with further processing.
[0072] The workflow manager 530 is further configured to instruct the dynamic activator 525 to execute the state for the workflow based on the configuration. The dynamic activator 525 is a system or component that dynamically enables and manages network functions, services, or resources in response to changing conditions, demands, and configurations within the network 105. The dynamic activator 525 is configured to retrieve information from the cache data store 520 to execute the state.
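Purely as an illustrative sketch of this read-configuration-then-activate interaction, the snippet below assumes Redis as the cache data store 520 and a simple execute_state() stub standing in for the dynamic activator 525; the key scheme and configuration shape are hypothetical.

import json
import redis  # pip install redis

def run_workflow(deployment_id: str) -> None:
    """Workflow manager: read the cached configuration and drive the activator."""
    cache = redis.Redis(host="localhost", port=6379, db=0)
    raw = cache.get(f"fms:deployment:{deployment_id}")  # hypothetical key scheme
    if raw is None:
        raise KeyError(f"no cached configuration for {deployment_id}")
    configuration = json.loads(raw)

    # Instruct the (stubbed) dynamic activator to execute each configured state.
    for state in configuration.get("states", []):
        execute_state(state, configuration)

def execute_state(state: str, configuration: dict) -> None:
    """Stand-in for the dynamic activator's interface; hypothetical."""
    print(f"activating state '{state}' for target {configuration.get('target')}")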
[0073] The message broker 535 is an intermediary software component that facilitates communication between different systems or applications by translating messages from the messaging protocol of the transmitter to the messaging protocol of the receiver. The message broker 535 acts as a middleware that helps decouple services and allows them to communicate asynchronously, ensuring reliable and scalable message delivery.
[0074] The graph database 540 is a database designed to store, manage, and query data structured as graphs, which consist of nodes (entities), edges (relationships), and properties (attributes) that describe the characteristics of the nodes and edges.
[0075] The operation and management module 545 refers to a set of tools, processes, and functionalities designed to oversee, control, and maintain the efficient operation of the FMS 505. The operation and management module 545 performs health check mechanisms, such as but not limited to alarms, counters, and availability checks, for the microservices in the FMS 505 application. The operation and management module 545 continuously checks the registration status of all the microservices within the FMS 505. Further, the operation and management module 545 tracks the microservices which are presently registered and actively participating in the FMS 505. Further, the operation and management module 545 sends re-registration requests to all unregistered microservices in the FMS 505. Further, the operation and management module 545 encompasses the tasks necessary to ensure that the network 105 performs optimally, remains secure, and provides the required services reliably.
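The following is a hedged sketch of the health-check and re-registration behaviour described above; the microservice names, ports, and the /health and /register endpoints are assumptions introduced for illustration only.

import time
import requests  # pip install requests

# Hypothetical FMS microservices and their base URLs.
MICROSERVICES = {
    "workflow-manager": "http://workflow-manager:8080",
    "dynamic-activator": "http://dynamic-activator:8080",
    "message-broker": "http://message-broker:8080",
}

def check_and_reregister(registered: set) -> None:
    """Track registered microservices and ask unregistered ones to re-register."""
    for name, base_url in MICROSERVICES.items():
        try:
            healthy = requests.get(f"{base_url}/health", timeout=2).ok
        except requests.RequestException:
            healthy = False
        if healthy:
            registered.add(name)
            continue
        registered.discard(name)
        try:
            # Send a re-registration request to the unregistered microservice.
            requests.post(f"{base_url}/register", timeout=2)
        except requests.RequestException:
            pass  # retried on the next health-check cycle

if __name__ == "__main__":
    registered_services = set()
    while True:
        check_and_reregister(registered_services)
        time.sleep(30)  # health-check interval (assumed)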
[0076] The load balancer 550 is a device or software application that distributes network 105 or application traffic across multiple servers to ensure no single server becomes overwhelmed. By balancing the load, it helps optimize resource use, maximize throughput, minimize response time, and prevent overload on any single server. The load balancer 550 is essential in ensuring the high availability and reliability of applications.
[0077] In one embodiment, all the microservices of the FMS 505, such as but not limited to, the dynamic routing manager 505, the distributed database 510 having the distributed data lake 515, the cache data store 520, the dynamic activator 525, the workflow manager 530, the message broker 535, the graph database 540, the operation and management module 545, and the load balancer 550, can be deployed as the CNF 405 in the environment such as, but not limited to, the bare metals 415, the private cloud 420, and the public cloud 425.
[0078] FIG. 6 is a signal flow diagram for deploying the application in the environment, according to one or more embodiments of the present invention. For the purpose of description, the signal flow diagram is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0079] At step 605, the UE 110 receives the request from the user for deploying the application in an environment.
[0080] At step 610, upon receiving the request from the end user, the UE 110 transmits the request to the one or more processors of the system 120 for deploying the application in the environment.
[0081] At step 615, upon receiving the request from the UE 110, the generating unit 225 is configured to create, by utilizing a compiled logic, one or more binary folder/image. The one or more binary folder/image includes data related to the plurality of required resources for deploying the application in the environment, based on a request received from a user. Further, the created one or more binary folder/image is transmitted to the computation unit 230.
[0082] At step 620, upon receiving the binary folder/image, the computation unit 230 adds the created one or more binary folder/image in a container to create a CNF. The information about the CNF is mentioned earlier in FIG. 2. The CNF is further transmitted to the deployment unit 235 for deployment.
[0083] At step 625, upon receiving the CNF, the deployment unit 235 deploys the created CNF pertaining to the application in the environment. The deployment of the created CNF pertaining to the application in the environment is performed utilizing the MANO 410 platform via a containerized interface. The environment includes at least one of, a hybrid server, a bare metal server, a public cloud, a private cloud, and cloud-native.
[0084] FIG. 7 is a flow diagram illustrating the method 700 for deploying an application in an environment.
[0085] At step 705, the method 700 includes creating one or more binary folder/image by utilizing a compiled logic based on the user request received from the user by the generating unit 225. The plurality of required resources includes at least one of, one or more configuration files, one or more libraries, a docker file, one or more script files, and a runnable jar.
[0086] At step 710, the method 700 includes the step of adding the created one or more binary folder/image in a container to create the CNF. The CNF pertains to cloud native network function. The CNF includes, but is not limited to, microservices architecture, containerization, and orchestration. The CNF provides rapid deployment, scaling, and management of the network functions in the distributed cloud environments.
[0087] At step 715, the method 700 includes the step of deploying the created CNF pertaining to the application in the environment. The deployment of the created CNF pertaining to the application in the environment is performed utilizing the MANO 410 via the containerized interface. The environment includes at least one of, a hybrid server, a bare metal server, a public cloud, a private cloud, and cloud-native. The Kubernetes functions as the central MANO 410 layer for CNFs, enabling operators to efficiently deploy, scale, and manage the network services across distributed infrastructure environments.
[0088] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to deploy the application in the environment. The processor 205 is configured to create, by utilizing a compiled logic, one or more binary folder/image. Further, the processor 205 is configured to add the created one or more binary folder/image in the container to create the CNF. Further, the processor 205 is configured to deploy the created CNF pertaining to the application in the environment.
[0089] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-7) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0090] The present disclosure incorporates a technical advancement that facilitates deploying an application in the environment through improved deployment capabilities and architecture. Further, the present invention provides for deployment on edge, cloud, and hybrid architectures. The deployment can be on containers or on virtual machines and can be orchestrated through any orchestration platform. Further, the system and method significantly reduce the time required for deployment of an application/software on various deployment architectures.
[0091] The present invention provides various advantages, including optimal resource utilization and reduced execution time. The system provides an efficient solution for improved deployment of software/applications across various deployment capabilities and architectures. Tasks like installing, uninstalling, and updating software applications on each computer are time consuming. The present invention aims to reduce the time and to make the process error free. Further, software can be easily controlled and managed through the deployment as it enables transition of the capability to the end-user. Further, the FMS 505 can be easily deployed on cloud servers, as adding new resources to individual virtual machines or adding a whole new server can be performed in a matter of minutes. The FMS 505 orchestrated with Kubernetes facilitates more flexibility to choose whether the deployment should use physical servers or virtual machines. Further, the FMS 505 is designed to fit on any deployment architecture on the basis of business requirements and use cases without any code-level changes. Further, the invention provides flexibility by facilitating FMS deployment using containers, virtual machines, or a combination thereof, such as containers in virtual machines.
[0092] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0093] Communication system – 100
[0094] Network – 105
[0095] User Equipment – 110
[0096] Server – 115
[0097] System – 120
[0098] Processor – 205
[0099] Memory – 210
[00100] User Interface – 215
[00101] Database – 220
[00102] Generating unit – 225
[00103] Computation unit – 230
[00104] Deployment unit – 235
[00105] Primary processor – 305
[00106] Memory – 310
[00107] Fulfillment Management System (FMS) – 505
[00108] MANO – 410
[00109] Bare metals – 415
[00110] Private cloud – 420
[00111] Public cloud – 425
[00112] Dynamic Routing Manager – 505
[00113] Distributed Database – 510
[00114] Distributed Data Lake – 515
[00115] Message Broker – 535
[00116] Graph Database – 540
[00117] Operation and Management Module – 545
[00118] Load Balancer – 550
[00119] Containerized Network Function (CNF) – 405

CLAIMS
We Claim:
1. A method (700) for deploying an application in an environment, the method (700) comprising the steps of:
creating (705), by one or more processors (205), utilizing a compiled logic, one or more binary folder/image which includes data related to a plurality of required resources for deploying the application in the environment, based on a request received from a user;
adding (710), by the one or more processors (205), the created one or more binary folder/image in at least one of, a container, a virtual machine or a combination thereof to create a Containerized Network Function (CNF); and
deploying (715), by the one or more processors (205), the created CNF pertaining to the application in the environment.

2. The method (700) as claimed in claim 1, wherein the environment includes at least one of, a hybrid server, a bare metal server, a public cloud, a private cloud, cloud-native.

3. The method (700) as claimed in claim 1, wherein the plurality of required resources includes at least one of, one or more configuration files, one or more libraries, a docker file, one or more script files, and a runnable jar.

4. The method (700) as claimed in claim 1, wherein the CNF pertains to cloud native network function.

5. The method (700) as claimed in claim 1, wherein the step of deploying, the created CNF pertaining to the application in an environment is performed utilizing a management and orchestration platform via a containerized interface.

6. The method (700) as claimed in claim 5, wherein the management and orchestration platform include at least one of, a Kubernetes.

7. The method (700) as claimed in claim 1, wherein the application includes, at least one of a Fulfilment Management System (FMS) and/or a combination of at least one of, an inventory system, a provision system and an orchestration system.

8. A system (120) for deploying an application in an environment, the system comprising:
a generation unit (225), configured to, create, utilizing a compiled logic, one or more binary folder/image which includes data related to a plurality of required resources for deploying the application in the environment based on a request received from a user;
a computation unit (230), configured to, add, the created one or more binary folder/image in at least one of, a container, a virtual machine or a combination thereof to create a Containerized Network Function (CNF); and
a deployment unit (235), configured to, deploy, the created CNF pertaining to the application in the environment.

9. The system (120) as claimed in claim 8, wherein the environment includes at least one of, a hybrid server, a bare metal server, a public cloud, a private cloud, cloud-native.

10. The system (120) as claimed in claim 8, wherein the plurality of required resources includes at least one of, one or more configuration files, one or more libraries, a docker file, one or more script files, and a runnable jar.

11. The system (120) as claimed in claim 8, wherein the CNF pertains to cloud native network function.

12. The system (120) as claimed in claim 8, wherein the deployment unit deploys, the created CNF pertaining to the application in the environment utilizing a management and orchestration platform via a containerized interface.

13. The system (120) as claimed in claim 12, wherein the management and orchestration platform include at least one of, a Kubernetes.

14. The system (120) as claimed in claim 8, wherein the application includes, at least one of a Fulfilment Management System (FMS) and/or a combination of at least one of, an inventory system, a provision system and an orchestration system.

15. A User Equipment (110) (UE), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory, wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE to:
transmit, the request to the one or more processors (205) for deploying an application in an environment; and
wherein the one or more processors (205) are configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321047845-STATEMENT OF UNDERTAKING (FORM 3) [15-07-2023(online)].pdf 2023-07-15
2 202321047845-PROVISIONAL SPECIFICATION [15-07-2023(online)].pdf 2023-07-15
3 202321047845-FORM 1 [15-07-2023(online)].pdf 2023-07-15
4 202321047845-FIGURE OF ABSTRACT [15-07-2023(online)].pdf 2023-07-15
5 202321047845-DRAWINGS [15-07-2023(online)].pdf 2023-07-15
6 202321047845-DECLARATION OF INVENTORSHIP (FORM 5) [15-07-2023(online)].pdf 2023-07-15
7 202321047845-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321047845-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321047845-DRAWING [13-07-2024(online)].pdf 2024-07-13
10 202321047845-COMPLETE SPECIFICATION [13-07-2024(online)].pdf 2024-07-13
11 Abstract-1.jpg 2024-08-28
12 202321047845-Power of Attorney [21-10-2024(online)].pdf 2024-10-21
13 202321047845-Form 1 (Submitted on date of filing) [21-10-2024(online)].pdf 2024-10-21
14 202321047845-Covering Letter [21-10-2024(online)].pdf 2024-10-21
15 202321047845-CERTIFIED COPIES TRANSMISSION TO IB [21-10-2024(online)].pdf 2024-10-21
16 202321047845-FORM 3 [02-12-2024(online)].pdf 2024-12-02
17 202321047845-FORM 18 [20-03-2025(online)].pdf 2025-03-20