
Method And System For Automatic Installation Of Docker On Target Nodes

Abstract: The present disclosure relates to a method and a system for automatic installation of a docker on one or more target nodes. The method comprises: (a) receiving, by a transceiver unit [302] at a central controller [301], via a user interface [310], login credentials associated with the target nodes, and software package managers comprising a configured set of installation commands; (b) initiating, by a processing unit [304], an install docker command to initiate the installation of the docker; (c) verifying, by a verification unit [306], an operating system version on each target node; (d) downloading, by the processing unit [304], on each target node, a software package manager associated with the corresponding operating system version; and (e) executing, by the processing unit [304], the configured set of installation commands on each of the target nodes, to install the docker on each of the target nodes, in parallel. [Figure 3]


Patent Information

Application #:
Filing Date: 13 July 2023
Publication Number: 03/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Sandeep Bisht
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
3. Sri Krishna Saichand Seelamanthula
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Pritesh Raman
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Ankur Sharma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Saurabh Pandey
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR AUTOMATIC INSTALLATION OF DOCKER
ON TARGET NODES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR AUTOMATIC INSTALLATION OF DOCKER
ON TARGET NODES
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to wireless communications systems. More particularly, embodiments of the present disclosure relate to automatic installation of a docker on target nodes.
BACKGROUND OF THE DISCLOSURE
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users. Further, reducing call drops and latency is of paramount importance in the telecommunications industry. Call drops can be frustrating for users, and they can also result in lost revenue for service providers. Latency, on the other hand, refers to the time it takes for data to travel from one device to another and can cause delays and disruptions in communication. The introduction of 5G technology promises to address these issues by delivering ultra-low latency and high-speed data transmission. With 5G, call drops are expected to be minimized, and users are expected to experience seamless, uninterrupted communication. Additionally, 5G technology may enable the development of new applications and services that require high-speed, low-latency communication, such as remote surgeries, autonomous vehicles, and virtual reality. The reduction of call drops and latency is crucial in ensuring that users have access to reliable and efficient communication services, and the 5G technology is a significant step towards achieving this goal.
[0004] In a communication system, a number of network function (NF) modules/ network nodes are provided. For example, in 5G network, there are various network functions such as, an Access and Mobility Management Function (AMF), a Session Management Function (SMF), a Policy Control Function (PCF), a Unified Data Manager Function (UDM), a Network Slice Selection Function (NSSF), and/or a Network Repository Function (NRF), etc., one or more of which interact with each other to implement multiple operations of the 5G communication system. These network functions are either configured over a distributed virtual network or a distributed cloud-based network. In case of distributed virtual network, the network setup process may comprise installation of docker as part of software components of network nodes. Here, a ‘docker’ or ‘docker file’ may refer to a document that contains commands that may be used by software components on a network node. A docker may be part of a platform that simplifies the process of building, and running applications as self-sufficient containers that allow developers to package up an application with all its dependencies (for e.g., libraries, configurations, etc.) into a single unit.
[0005] Accordingly, installation of the docker on the plurality of network nodes is a vital step in the network setup process. Conventionally, a method of installing the docker on the one or more target nodes includes: manually logging in, by an application centric interface, into one of a plurality of target nodes; manually running, by a user, an installation script on the one of the plurality of target nodes for installing the docker on the one of the plurality of target nodes; and repeating the steps of manually logging in and manually running the installation script, for installation of the docker on each of the plurality of target nodes. However, repeatedly performing the steps of manually logging in and manually running the installation script, for installation of the docker on each of the plurality of target nodes, consumes considerable effort. In particular, such conventional methods for installation of the docker on each of the plurality of target nodes are labour-intensive, and may require the high-valued time of skilled people. This is not a productive technique.
[0006] Thus, there exists an imperative need in the art to provide a solution for automatic installation of a docker on one or more target nodes, which the present disclosure aims to address.
SUMMARY OF THE DISCLOSURE
[0007] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for an automatic installation of a docker on one or more target nodes. The method comprises receiving, by a transceiver unit at a central controller, via a user interface, one or more login credentials associated with the one or more target nodes, and one or more software package managers comprising a configured set of installation commands for an installation of the docker. Further, the method comprises initiating, by a processing unit at the central controller, via the user interface, an install docker command to initiate the installation of the docker. Further, the method comprises verifying, by a verification unit at the central controller, an operating system version on each of the one or more target nodes. Further, the method comprises downloading, by the processing unit at the central controller, on each of the target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. Further, the method comprises executing, by the processing unit at the central controller, the configured set of installation commands from the downloaded software package manager, on each of the target node from the one or more target nodes, to install the docker on each of the target node from the one or more target nodes, in parallel.
[0009] In an exemplary aspect of the present disclosure, the configured set of installation commands for the installation of the docker is associated with a version of the docker.
[0010] In an exemplary aspect of the present disclosure, prior to initiating the install docker command, the method comprises validating, by a validation unit at the central controller, a reachability of the one or more target nodes; and validating, by the validation unit at the central controller, the one or more login credentials associated with the one or more target nodes.
[0011] In an exemplary aspect of the present disclosure, the validating the reachability of the one or more target nodes comprises identifying, a network path available to each of the one or more target nodes.
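The reachability validation described above can be illustrated with a minimal sketch. This is not taken from the specification: the function name and the default SSH port are assumptions, and a real controller could use any probe that confirms a network path to the node.

```python
import socket

def is_reachable(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if a TCP network path to host:port can be established.

    A successful connection is taken as evidence that a network path to the
    target node is available; any socket error counts as unreachable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice, the central controller would run such a probe against every configured target node before attempting to log in with the supplied credentials.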
[0012] In an exemplary aspect of the present disclosure, the operating system version on each of the one or more target nodes is verified after logging in to each of the one or more target nodes using the one or more login credentials associated with the one or more target nodes.
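To make the operating-system verification step concrete, the sketch below shows how the contents of a node's /etc/os-release file could be mapped to the software package manager to download. This is an illustration only: the parser, the helper names, and the ID-to-package-manager table are assumptions, not taken from the specification.

```python
def parse_os_release(text: str) -> dict:
    """Parse the KEY=value lines of an /etc/os-release file into a dict."""
    info = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip().strip('"')
    return info

# Hypothetical mapping from OS identifiers to package managers; a real
# deployment would also take the VERSION_ID into account.
PACKAGE_MANAGERS = {
    "ubuntu": "apt",
    "debian": "apt",
    "centos": "yum",
    "rhel": "yum",
    "fedora": "dnf",
}

def select_package_manager(os_release_text: str) -> str:
    """Choose the package manager matching the node's operating system."""
    os_id = parse_os_release(os_release_text).get("ID", "").lower()
    if os_id not in PACKAGE_MANAGERS:
        raise ValueError(f"unsupported operating system: {os_id!r}")
    return PACKAGE_MANAGERS[os_id]
```

A controller following the disclosed method would run such a selection per node, so that only the files relevant to that node's operating system version are downloaded.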
[0013] In an exemplary aspect of the present disclosure, the method further comprises checking, by the verification unit, one of a success installation status and a failure installation status from each of the target node from the one or more target nodes.
[0014] In an exemplary aspect of the present disclosure, the method further comprises displaying, by the processing unit, at the user interface, one of the success installation status and the failure installation status from each of the target node from the one or more target nodes.

[0015] In an exemplary aspect of the present disclosure, the configured set of installation commands associated with the installation of the docker is configured based on inputs from a user.
[0016] In an exemplary aspect of the present disclosure, the method further comprises executing, by the processing unit at the central controller, for each target node from the one or more target nodes, one or more pre-check scripts.
[0017] In an exemplary aspect of the present disclosure, the method further comprises displaying one of a success and a failure indication associated with the execution of the one or more pre-check scripts, for each target node from the one or more target nodes.
[0018] Another aspect of the present disclosure may relate to a system for an automatic installation of a docker on one or more target nodes. The system comprises a transceiver unit at a central controller. The transceiver unit is configured to receive, at a user interface, one or more login credentials associated with the one or more target nodes and one or more software package managers comprising a configured set of installation commands for an installation of the docker. Further, the system comprises a processing unit at the central controller. The processing unit is connected at least to the transceiver unit. The processing unit is configured to initiate an install docker command, at the user interface, to initiate the installation of the docker. Further, the system comprises a verification unit at the central controller. The verification unit is connected at least to the processing unit. The verification unit is configured to verify an operating system version on each of the one or more target nodes. Further, the processing unit is configured to download, on each of the target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. The processing unit is further configured to execute, the configured set of installation commands from the downloaded software package manager, on each of the target node from the one or more target nodes, to install the docker on each of the target node from the one or more target nodes, in parallel.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for an automatic installation of a docker on one or more target nodes, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit at the central controller to receive, at a user interface, one or more login credentials associated with the one or more target nodes and one or more software package managers comprising a configured set of installation commands for an installation of the docker. Further, the instructions include executable code which, when executed, causes a processing unit at the central controller to initiate an install docker command, at the user interface, to initiate the installation of the docker. Further, the instructions include executable code which, when executed, causes a verification unit at the central controller to verify an operating system version on each of the one or more target nodes. Further, the instructions include executable code which, when executed, causes the processing unit at the central controller further to download, on each of the target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. Further, the instructions include executable code which, when executed, causes the processing unit at the central controller further to execute the configured set of installation commands from the downloaded software package manager, on each of the target node from the one or more target nodes, to install the docker on each of the target node from the one or more target nodes, in parallel.

[0020] Yet another aspect of the present disclosure may relate to a user equipment (UE). The UE comprises a transmitter unit configured to transmit a request to a system for an automatic installation of a docker on one or more target nodes. Further, the UE comprises a receiver unit, configured to receive from the system a response to the request, wherein the response comprises an indication of the automatic installation of the docker on the one or more target nodes. The response generated by the system is based on receiving, by a transceiver unit at a central controller, via a user interface, one or more login credentials associated with the one or more target nodes, and one or more software package managers comprising a configured set of installation commands for an installation of the docker. The response generated by the system is further based on initiating, by a processing unit at the central controller, via the user interface, an install docker command to initiate the installation of the docker. The response generated by the system is further based on verifying, by a verification unit at the central controller, an operating system version on each of the one or more target nodes. The response generated by the system is further based on downloading, by the processing unit at the central controller, on each of the target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. The response generated by the system is further based on executing, by the processing unit at the central controller, the configured set of installation commands from the downloaded software package manager, on each of the target node from the one or more target nodes, to install the docker on each of the target node from the one or more target nodes, in parallel.

OBJECTS OF THE INVENTION

[0021] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.

[0022] It is an object of the present disclosure to provide a method and a system that can enable automatic docker installation on multiple target nodes.

[0023] It is an object of the present disclosure to provide a solution that can perform docker installation in a parallel manner, which speeds up the deployment process.

[0024] It is an object of the present disclosure to provide a solution that can automatically download required packages based on the OS version of the target node(s).
DESCRIPTION OF THE DRAWINGS

[0025] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0026] FIG. 1 illustrates an exemplary block diagram representation of a 5th generation core (5GC) network architecture.

[0027] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with an exemplary implementation of the present disclosure.

[0028] FIG. 3 illustrates an exemplary block diagram of a system for an automatic installation of a docker on one or more target nodes, in accordance with exemplary implementations of the present disclosure.

[0029] FIG. 4 illustrates a method flow diagram for an automatic installation of a docker on one or more target nodes, in accordance with exemplary implementations of the present disclosure.

[0030] FIG. 5 illustrates an exemplary schematic architecture diagram of an exemplary system in connection with one or more target nodes, for an automatic installation of a docker on the one or more target nodes, in accordance with exemplary implementations of the present disclosure.

[0031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION

[0032] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.

[0033] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.

[0034] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0035] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0036] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.

[0037] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0038] As used herein, a ‘docker’ or ‘docker file’ refers to a document that contains commands that may be used by software components on a network node. A docker file may be part of a platform that simplifies the process of building and running applications as self-sufficient containers that allow developers to package up an application with all its dependencies (e.g., libraries, configurations, etc.) into a single unit. This ensures consistency across different environments, from development to production.

[0039] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0040] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.

[0041] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0042] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0043] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0044] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for an automatic installation of a docker on one or more target nodes. The solution comprises receiving a command for installation of the docker at a central controller that hosts a docker service and a host service. The docker service is a service that handles all commands related to docker installation and pre-check scripts. The host service is a service that hosts all files related to docker installation. The central controller then sends commands to all the target nodes (on which the docker installation is desired) for docker installation. The OS version may be automatically detected by the central controller, and only the relevant files may be downloaded at each of the target nodes for installation. After the installation process is completed, the target node sends the installation status to the central controller.
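The controller flow just described (detect the OS, download only the relevant files, run the install commands, collect a status per node, all in parallel) can be sketched as follows. This is a minimal illustration under stated assumptions, not the specification's implementation: the run_on_node callable stands in for whatever remote-execution mechanism (e.g., SSH) the central controller uses, and the command strings and record keys are invented for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def install_docker_on_node(node: dict, run_on_node) -> tuple:
    """Run the installation steps on one target node and report its status."""
    try:
        # Detect the OS so that only the relevant files are downloaded.
        os_id = run_on_node(node, "detect_os")
        # Fetch the package files matching the detected OS version.
        run_on_node(node, f"download_packages:{os_id}")
        # Execute the configured set of installation commands.
        run_on_node(node, "install_docker")
        return node["host"], "success"
    except Exception:
        return node["host"], "failure"

def install_docker(nodes: list, run_on_node) -> dict:
    """Install docker on all target nodes in parallel and collect statuses."""
    with ThreadPoolExecutor(max_workers=max(len(nodes), 1)) as pool:
        results = pool.map(lambda n: install_docker_on_node(n, run_on_node), nodes)
    return dict(results)
```

Each node reports a success or failure status back to the controller, mirroring the status reporting described above; the thread pool is one simple way to realise the parallel execution across nodes.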
[0045] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0046] FIG. 1 illustrates an exemplary block diagram representation of a 5th generation core (5GC) network architecture, in accordance with an exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an Access and Mobility Management Function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an Application Function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0047] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0048] Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0049] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles Internet Protocol (IP) address allocation and Quality of Service (QoS) enforcement.
[0050] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0051] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0052] Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.

[0053] Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0054] Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.

[0055] Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0056] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.

[0057] Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0058] Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0059] User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0060] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include, but are not limited to, Internet services and private data network related services.
[0061] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with an exemplary implementation of the present disclosure. In an implementation, the computing device [200] may implement a method for an automatic installation of a docker on one or more target nodes utilising a system [300]. In another implementation, the computing device [200] itself implements the method for an automatic installation of a docker on one or more target nodes using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0062] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0063] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0064] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0065] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0066] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the Internet Service Provider (ISP) [226], the local network [222], the host [224], and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0067] Referring to FIG. 3, an exemplary block diagram of a system [300] for an automatic installation of a docker on one or more target nodes is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least a central controller [301]. The central controller [301] further comprises at least one transceiver unit [302], at least one processing unit [304], at least one verification unit [306], at least one validation unit [308], and at least one user interface (UI) [310]. Also, all the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. Also, only a few units are shown in Fig. 3; however, the system [300] may comprise multiple such units, or any such number of said units as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may reside in a server or a network entity.
[0068] The system [300] is configured for an automatic installation of a docker on one or more target nodes, with the help of the interconnection between the components/units of the system [300].
[0069] Initially, for the automatic installation of the docker on the one or more target nodes, the transceiver unit [302] may receive, at the user interface [310], one or more login credentials associated with the one or more target nodes and one or more software package managers comprising a configured set of installation commands for an installation of the docker. In an implementation, a user may need to login at the one or more target network nodes, wherein the one or more target nodes are one or more network nodes at which the user wants to install the new docker file(s), such as, but not limited to, an AMF node, SMF node, UDM node, UPF node, NSSF node, etc. Also, a person skilled in the art would appreciate that the installation of docker may not be limited to the nodes of a 5G communication network, but the disclosure encompasses all other communication networks for implementation of the features as disclosed herein, i.e., for installation of docker on components of any communication network. For this purpose, each target node may have a separate login credential which the user may need to provide. In an implementation, some or all of the target nodes may be logged in by the user using common login credentials for the target nodes. Also, along with the login credentials input, the transceiver unit may also receive the software package managers. These software package managers may comprise a configured set of installation commands for installation of dockers at these one or more target nodes. This configured set of installation commands may be referred to as a set of commands that may comprise installation configurations for corresponding network nodes. In an implementation, the configured set of installation commands for the installation of the docker is associated with a version of the docker. Also, in an implementation, the one or more configured sets of installation commands associated with the installation of the docker are configured by the processing unit [304] at the central controller [301], based on inputs, at the user interface [310], from a user.
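By way of a non-limiting illustration only, the inputs described above, i.e., per-node login credentials and a configured set of installation commands per software package manager, may be modelled as a small data structure. The node fields, package-manager names, and command strings below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class TargetNode:
    # Illustrative login credential fields for one target node.
    name: str      # e.g. "AMF-1"
    host: str
    username: str
    password: str

# Illustrative configured sets of installation commands, keyed by
# software package manager; the docker version may be pinned as part
# of the configured command.
INSTALL_COMMANDS = {
    "apt": ["apt-get update", "apt-get install -y docker-ce"],
    "yum": ["yum install -y docker-ce"],
}

def commands_for(package_manager):
    # Return the configured install commands for the given manager.
    return INSTALL_COMMANDS[package_manager]
```

Separate credentials may be supplied per node, or, as noted above, common credentials may be reused across some or all nodes.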
[0070] The processing unit [304] may be connected at least to the transceiver unit [302]. The processing unit [304] may initiate an install docker command, at the user interface [310], to initiate the installation of the docker. Here, after receiving the configured set of installation commands for the installation of the docker, the processing unit [304] initiates the command that may start the installation process. Further, the validation unit [308] is connected at least to the transceiver unit [302]. In an implementation, prior to initiating the install docker command, the validation unit [308] may validate reachability of the one or more target nodes. If the target nodes are reachable, it means that the system [300] is able to communicate with the one or more target nodes. In other words, in an implementation, the validating of the reachability of the one or more target nodes by the validation unit [308] comprises identifying a network path available to each of the one or more target nodes. In an implementation, the reachability of the one or more target nodes may be validated by the validation unit [308] upon receiving a user input via the UI [310]. After checking and validating the reachability, the validation unit [308] may also validate the one or more login credentials associated with the one or more target nodes. If the target nodes are reachable, it is checked whether the login credentials provided for the one or more target nodes are correct or not. If the login credentials for a target node are not correct, it means that the user or the system is not authorized to perform actions on that target node. Also, if the login credentials for a target node are found to be correct when validated, then the actions on that target node, for example, installation of docker, can be performed. Thus, after the validation is performed by the validation unit [308] for a target node among the one or more target nodes, and the login credentials for the target node are found to be correct, the processing unit [304] may initiate an install docker command. That is, in an implementation, the user may be given an option to initiate the install docker command at the UI [310] using which, the install docker command may be initiated. In another implementation, the install docker command may be automatically initiated by the processing unit [304] after the login credentials for a target node are found to be correct.
[0071] The verification unit [306] is connected at least to the transceiver unit [302]. The verification unit [306] may verify an operating system version on each of the one or more target nodes. Notably, each target node may comprise a set of hardware components and software components which may adhere to an operating system (OS). As all operating systems are updated as per requirements/needs/demands, they are marked with an OS version which indicates various information about the OS platform with which the software components are associated. The software components may be compatible with one or more versions of the OS, and may not be compatible with some other versions of the OS. Therefore, the verification unit [306] may verify the OS version on each of the one or more target nodes, so that the correct software package managers can be downloaded for each target node among the one or more target nodes. In an implementation, the operating system version on each of the one or more target nodes is verified, by the verification unit [306], after logging in to each of the one or more target nodes using the one or more login credentials associated with the one or more target nodes.
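On Linux-based nodes, one way to verify the OS version, assumed here purely for illustration, is to fetch the node's /etc/os-release file over the logged-in session and parse it:

```python
def parse_os_release(text):
    # Parse the key=value contents of /etc/os-release into a dict;
    # surrounding quotes on values are stripped.
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info
```

The resulting ID and VERSION_ID fields identify the OS platform for which a software package manager should be downloaded.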
[0072] After the OS version on a target node is verified, the processing unit [304] may download, on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. That is, when the OS version on a target node is verified by the verification unit [306], the processing unit [304] may download the software package manager for that target node based on the OS version of that target node. In an implementation, the processing unit [304] downloads the software package manager for the target node as and when the OS version for that target node is verified by the verification unit [306]. In another implementation, the processing unit [304] downloads the software package managers for each of the one or more target nodes when the OS version for each of the one or more target nodes is verified by the verification unit [306]. Also, this process of downloading the software package managers for each of the one or more target nodes may be performed in parallel by the processing unit [304]. This means that the processing unit [304] may not wait for completion of downloading the software package manager for one target node before downloading the software package manager for another target node. Also, in an implementation, the one or more software package managers may be pre-stored in a storage unit that is integrated with or connected to the system [300].
[0073] Further, the processing unit [304] may execute the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel. Here, the processing unit [304] may install the docker on each of the one or more target nodes by running/executing the installation commands that are a part of the software package manager. This process may be performed by the processing unit in parallel for the one or more target nodes. Pertinently, none of the steps for performing the docker installation on the one or more target nodes, for example, downloading of software package managers, execution of the configured set of installation commands, displaying of the installation status, may be synchronized. Thus, the downloading of the software package managers and the execution of the configured set of installation commands may be performed in parallel in an asynchronous manner. For example, in one case, the software package manager for a target node, say TN_1, may be downloaded and executed. Before the completion of the execution of the set of installation commands from the downloaded software package manager for this TN_1, the software package manager for another target node, say TN_2, may be downloaded and executed, meaning thereby that these events, in an implementation, may be mutually independent of each other (that is, the steps may be performed in parallel for each of the target nodes, and the performance of the steps for one target node may not affect the performance of the steps for another target node). A person skilled in the art would appreciate that the above example is provided for understanding purposes only and does not limit or restrict the present disclosure in any possible manner. Accordingly, the asynchronous nature of actions may not be limited to the downloading of software package managers and the execution of the set of installation commands only, but also extends to other functions performed by other components of the system [300].
[0074] In an implementation, the verification unit [306] may further check one of a success installation status and a failure installation status from each target node from the one or more target nodes. After the processing unit [304] executes the set of installation commands from the downloaded software package manager for the one or more target nodes, the installation may be successful on some or all of the one or more target nodes, and may be unsuccessful on some or all of the one or more target nodes. Thus, for the purpose of gathering knowledge about the successful installation and/or unsuccessful installation on each of the one or more target nodes, the verification unit [306] may check an installation status from each target node from the one or more target nodes. In this checking, each of the target nodes may send the installation status, and the verification unit [306] may receive an installation status from each of the one or more target nodes, indicating whether the docker installation was successful or not, on each of the one or more target nodes. In an implementation, the processing unit [304] may further display, at the user interface [310], one of the success installation status and the failure installation status from each target node from the one or more target nodes. Thus, after receiving the installation status from each of the target nodes, the processing unit [304] may display the installation status of each target node against the identity of the respective target node, on the UI [310]. Also, the processing unit [304], in an implementation, first waits for the completion of the installation process on each of the one or more target nodes and the receipt of the installation status from each of the one or more target nodes before displaying the installation status against the respective identity of each of the one or more target nodes at the UI [310]. In another implementation, the processing unit [304] does not wait for the completion of the installation process on each of the one or more target nodes and the receipt of the installation status from each of the one or more target nodes, and displays the installation status for a target node against the respective identity of the target node as and when the installation status is received from that target node.
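The second display behaviour, showing each node's status as and when it is received rather than waiting for all nodes, may be sketched as follows; `install` and `display` are hypothetical injected callables, not part of the disclosure:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def report_statuses(nodes, install, display=print):
    # Display each node's installation status against its identity as
    # soon as that status is known, without waiting for other nodes.
    statuses = {}
    with ThreadPoolExecutor(max_workers=max(1, len(nodes))) as pool:
        future_to_node = {pool.submit(install, n): n for n in nodes}
        for future in as_completed(future_to_node):
            node = future_to_node[future]
            statuses[node] = "SUCCESS" if future.result() else "FAILURE"
            display(node + ": docker installation " + statuses[node])
    return statuses
```

Waiting for all nodes before displaying (the first behaviour) would simply defer the `display` calls until the loop completes.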
[0075] In an implementation, the processing unit [304] at the central controller [301] may execute, for each target node from the one or more target nodes, one or more pre-check scripts. Here, the pre-check scripts may perform operations (that the target nodes may perform after the docker installation) and validate system configurations necessary for app deployment at a later stage on the one or more target nodes. Also, in an implementation, the processing unit [304] may display one of a success and a failure indication associated with the execution of the one or more pre-check scripts, for each target node from the one or more target nodes.
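A pre-check runner of this kind may be sketched as below; the individual checks (name/callable pairs validating system configurations on the node) are illustrative assumptions:

```python
def run_prechecks(node, checks):
    # Run each pre-check against the node and record a success or
    # failure indication per check; a check that raises counts as a
    # failure for that check.
    results = {}
    for name, check in checks:
        try:
            results[name] = "SUCCESS" if check(node) else "FAILURE"
        except Exception:
            results[name] = "FAILURE"
    return results
```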
[0076] Referring to FIG. 4, an exemplary flow diagram of a method [400] for an automatic installation of a docker on one or more target nodes, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Also, as shown in Figure 4, the method [400] starts at step [402].
[0077] Initially, for the automatic installation of the docker on the one or more target nodes, at step 404, the method of the present disclosure comprises receiving, by a transceiver unit [302] at a central controller [301], via a user interface [310], one or more login credentials associated with the one or more target nodes, and one or more software package managers comprising a configured set of installation commands for an installation of the docker. In an implementation, a user may need to login at the one or more target network nodes, wherein the one or more target nodes are one or more network nodes at which the user wants to install the new docker file(s), such as, but not limited to, an AMF node, SMF node, UDM node, UPF node, NSSF node, etc. Also, a person skilled in the art would appreciate that the installation of docker may not be limited to the nodes of a 5G communication network, but the disclosure encompasses all other communication networks for implementation of the features as disclosed herein, i.e., for installation of docker on components of any communication network. For this purpose, each target node may have a separate login credential which the user may need to provide. In an implementation, some or all of the target nodes may be logged in by the user using common login credentials for the target nodes. Also, along with the login credentials input, the transceiver unit may also receive the software package managers. These software package managers may comprise a configured set of installation commands for installation of dockers at these one or more target nodes. This configured set of installation commands may be referred to as a set of commands that may comprise installation configurations for corresponding network nodes. In an implementation, the configured set of installation commands for the installation of the docker is associated with a version of the docker. Also, in an implementation, the one or more configured sets of installation commands associated with the installation of the docker are configured by the processing unit [304] at the central controller [301], based on inputs, at the user interface [310], from a user.
[0078] At step 406, the method of the present disclosure comprises initiating, by a processing unit [304] at the central controller [301], via the user interface [310], an install docker command to initiate the installation of the docker. Here, after receiving the configured set of installation commands for the installation of the docker, the processing unit [304] initiates the command that may start the installation process. Further, the validation unit [308] is connected at least to the transceiver unit [302]. In an implementation, prior to initiating the install docker command, the validation unit [308] may validate reachability of the one or more target nodes. If the target nodes are reachable, it means that the system [300] can communicate with the one or more target nodes. In other words, in an implementation, the validating of the reachability of the one or more target nodes by the validation unit [308] comprises identifying a network path available to each of the one or more target nodes. In an implementation, the reachability of the one or more target nodes may be validated by the validation unit [308] upon receiving a user input via the UI [310]. After checking and validating the reachability, the validation unit [308] may also validate the one or more login credentials associated with the one or more target nodes. If the target nodes are reachable, it is checked whether the login credentials provided for the one or more target nodes are correct or not. If the login credentials for a target node are not correct, it means that the user or the system is not authorized to perform actions on that target node. Also, if the login credentials for a target node are found to be correct when validated, then the actions on that target node, for example, installation of docker, can be performed. Thus, after the validation is performed by the validation unit [308] for a target node among the one or more target nodes, and the login credentials for the target node are found to be correct, the processing unit [304] may initiate an install docker command. That is, in an implementation, the user may be given an option to initiate the install docker command at the UI [310] using which, the install docker command may be initiated. In another implementation, the install docker command may be automatically initiated by the processing unit [304] after the login credentials for a target node are found to be correct.
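Credential validation by attempting an actual login may be sketched as follows; `connect` is a hypothetical injected login function (e.g., one built on an SSH client) that raises on bad credentials, and is not defined by the disclosure:

```python
def validate_credentials(host, username, password, connect):
    # Credentials are valid only if a login actually succeeds; any
    # authentication error means the user or system is not authorized
    # to perform actions on that target node.
    try:
        session = connect(host, username, password)
        session.close()
        return True
    except Exception:
        return False
```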
[0079] At step 408, the method of the present disclosure comprises verifying, by a verification unit [306] at the central controller [301], an operating system version on each of the one or more target nodes. Notably, each target node may comprise a set of hardware components and software components which may adhere to an operating system (OS). As all operating systems are updated as per requirements/needs/demands, they are marked with an OS version which indicates various information about the OS platform with which the software components are associated. The software components may be compatible with one or more versions of the OS, and may not be compatible with some other versions of the OS. Therefore, the verification unit [306] may verify the OS version on each of the one or more target nodes, so that the correct software package managers can be downloaded for each target node among the one or more target nodes. In an implementation, the operating system version on each of the one or more target nodes is verified, by the verification unit [306], after logging in to each of the one or more target nodes using the one or more login credentials associated with the one or more target nodes.
[0080] After the OS version on a target node is verified, at step 410, the method of the present disclosure comprises downloading, by the processing unit [304] at the central controller [301], on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. That is, when the OS version on a target node is verified by the verification unit [306], the processing unit [304] may download the software package manager for that target node based on the OS version of that target node. In an implementation, the processing unit [304] downloads the software package manager for the target node as and when the OS version for that target node is verified by the verification unit [306]. In another implementation, the processing unit [304] downloads the software package managers for each of the one or more target nodes when the OS version for each of the one or more target nodes is verified by the verification unit [306]. Also, this process of downloading the software package managers for each of the one or more target nodes may be performed in parallel by the processing unit [304]. This means that the processing unit [304] may not wait for completion of downloading the software package manager for one target node before downloading the software package manager for another target node. Also, in an implementation, the one or more software package managers may be pre-stored in a storage unit that is integrated with or connected to the system [300].
[0081] Further, at step 412, the method of the present disclosure comprises executing, by the processing unit [304] at the central controller [301], the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel. Here, the processing unit [304] may install the docker on each of the one or more target nodes by running/executing the installation commands that are a part of the software package manager. This process may be performed by the processing unit in parallel for the one or more target nodes. Pertinently, none of the steps for performing the docker installation on the one or more target nodes, for example, downloading of software package managers, execution of the configured set of installation commands, displaying of the installation status, may be synchronized. Thus, the downloading of the software package managers and the execution of the configured set of installation commands may be performed in parallel in an asynchronous manner. For example, in one case, the software package manager for a target node, say TN_1, may be downloaded and executed. Before the completion of the execution of the set of installation commands from the downloaded software package manager for this TN_1, the software package manager for another target node, say TN_2, may be downloaded and executed, meaning thereby that these events, in an implementation, may be mutually independent of each other (that is, the steps may be performed in parallel for each of the target nodes, and the performance of the steps for one target node may not affect the performance of the steps for another target node). A person skilled in the art would appreciate that the above example is provided for understanding purposes only and does not limit or restrict the present disclosure in any possible manner. Accordingly, the asynchronous nature of actions may not be limited to the downloading of software package managers and the execution of the set of installation commands only, but also extends to other functions performed by other components of the system [300].
[0082] In an implementation, the verification unit [306] may further check one of a success installation status and a failure installation status from each of the target node from the one or more target nodes. After, the processing unit [304] executes the set of installation
commands from the downloaded software package manager for the one or more target nodes, the installation may be successful on some or all of the one or more target nodes, and may be unsuccessful on others. Thus, for the purpose of gathering knowledge about the successful and/or unsuccessful installation on each of the one or more target nodes, the verification unit [306] may check an installation status from each of the one or more target nodes. In this checking, each of the target nodes may send its installation status, and the verification unit [306] may receive an installation status from each of the one or more target nodes, indicating whether the docker installation was successful on that target node. In an implementation, the processing unit [304] may further display, at the user interface [310], one of the success installation status and the failure installation status from each of the one or more target nodes. Thus, after receiving the installation status from each of the target nodes, the processing unit [304] may display the installation status of each target node against the identity of the respective target node, on the UI [310]. Also, the processing unit [304], in an implementation, first waits for the completion of the installation process on each of the one or more target nodes and the receipt of the installation status from each of the one or more target nodes before displaying the installation status against the respective identity of each of the one or more target nodes at the UI [310]. In another implementation, the processing unit [304] does not wait for the completion of the installation process on, and the receipt of the installation status from, each of the one or more target nodes, and instead displays the installation status for a target node against the respective identity of that target node as and when the installation status is received from that target node. Further, the method ends at step 414.
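The parallel installation with per-node status reporting described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation: the node addresses and the `install_docker_on_node` helper are hypothetical stand-ins for executing the configured installation commands on a remote node.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-in for running the configured installation
# commands on one target node (e.g. over SSH); returns True on
# success, False on failure. One node is simulated as failing.
def install_docker_on_node(node: str) -> bool:
    return node != "10.0.0.3"

nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
statuses = {}

with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
    futures = {pool.submit(install_docker_on_node, n): n for n in nodes}
    # Report each status as and when it arrives (the second display
    # implementation described above); to follow the first, collect
    # all results before printing anything.
    for fut in as_completed(futures):
        node = futures[fut]
        statuses[node] = "success" if fut.result() else "failure"
        print(f"{node}: {statuses[node]}")
```

The choice between `as_completed` and waiting on all futures mirrors the two display implementations described in the paragraph above.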
[0083] In an implementation, the method comprises executing, by the processing unit [304] at the central controller [301], for each target node from the one or more target nodes, one or more pre-check scripts. Here, the pre-check may comprise performing operations (that the target node may perform after the docker installation) and validating system configurations necessary for app deployment at a later stage on the one or more target nodes. Also, in an implementation, the method comprises displaying one of a success and a failure indication associated with the execution of the one or more pre-check scripts, for each target node from the one or more target nodes.
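A pre-check runner of the kind described could look like the following minimal sketch. The individual checks shown (docker CLI presence, free disk space) are assumptions for illustration, not checks taken from the disclosure.

```python
import shutil

# Illustrative pre-check runner: each check validates one system
# configuration needed for app deployment at a later stage. The
# specific checks are hypothetical.
def run_prechecks() -> dict:
    return {
        "docker_cli_present": shutil.which("docker") is not None,
        "free_disk_over_1gib": shutil.disk_usage("/").free > 1024**3,
    }

results = run_prechecks()
for name, ok in results.items():
    # Display a success or failure indication per check, as the
    # method describes for each target node.
    print(f"{name}: {'success' if ok else 'failure'}")
```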
[0084] Now, referring to FIG. 5, which illustrates an exemplary schematic architecture diagram of an exemplary system [500] in connection with one or more target nodes, for an automatic installation of a docker on the one or more target nodes, in accordance with exemplary implementations of the present disclosure. As shown in FIG. 5, the exemplary system [500] comprises at least an automated cloud installer user interface (ACI UI) [502], and an automated cloud installer control unit (ACI CU) [504] comprising a docker service unit [506] and a host service unit [508], and the exemplary system [500] is connected to the one or more target nodes [510]. All components/units of the exemplary system [500] may be assumed to be connected to each other unless otherwise indicated. Also, in FIG. 5 only a few units are shown; however, the system [500] may comprise multiple such units, or any such number of said units, as required to implement the features of the present disclosure. Using the ACI UI [502], one may validate a reachability of the target nodes [510] and the login credentials associated with the target nodes [510]. Further, using the ACI UI [502], one may send instructions to run the docker installation process, and perform a pre-check on scripts that may be executed for the docker installation process. As shown in FIG. 5, the ACI UI [502] sends an install docker command to the ACI CU [504] for installation of the docker on the target nodes [510]. The docker service unit [506] may handle one or more commands related to docker installation and pre-check scripts. The host service unit [508] may host one or more files related to docker installation. The ACI CU [504], after receiving commands, via the docker service unit [506], for installation of the docker from the ACI UI [502], sends commands for installation of the docker to all the target nodes [510] on which the docker installation is desired. Also, the OS version related to the target nodes [510] may be automatically detected by the ACI CU [504], and only the relevant files may be downloaded at each of the target nodes for installation from the host service unit [508]. Accordingly, the relevant software package managers and a set of other files comprising a set of relevant repository files may be downloaded on each of the one or more target nodes [510]. At each target node, a backup of docker configuration files and existing repository files may also be performed. Also, after the downloading of the relevant software package managers and the set of other files comprising the set of relevant repository files at the target nodes [510], the target node may execute commands for installation of the docker. After the installation process is completed, each of the target nodes [510] may execute commands to check the docker installation status. Finally, each of the one or more target nodes [510] may send the installation status to the ACI CU [504]. A person skilled in the art would appreciate that the above explanation with reference to FIG. 5 is provided with respect to an exemplary system for understanding purposes only, and therefore, does not limit or restrict the present disclosure in any possible manner.
[0085] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for an automatic installation of a docker on one or more target nodes. The instructions include executable code. The executable code, when executed by one or more units of a system [300], the system [300] comprising a central controller [301], causes a transceiver unit [302] at the central controller [301] to receive, at a user interface [310], one or more login credentials associated with the one or more target nodes and one or more software package managers comprising a configured set of installation commands for the installation of the docker. Further, the executable code, when executed, further causes a processing unit [304] at the central controller [301] to initiate an install docker command, at the user interface [310], to initiate the installation of the docker. Further, the executable code, when executed, further causes a verification unit [306] at the central controller [301] to verify an operating system version on each of the one or more target nodes. Further, the executable code, when executed, further causes the processing unit [304] at the central controller [301] to download, on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. Further, the executable code, when executed, further causes the processing unit [304] at the central controller [301] to execute the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel.
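The sequence the stored instructions perform per node (verify OS version, select the matching package manager, execute the installation commands) and its parallel fan-out can be sketched end to end. The node records and the `install_on` helper below are hypothetical; the per-distribution mapping is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative mapping; an assumption, not from the disclosure.
PACKAGE_MANAGERS = {"ubuntu": "apt", "centos": "yum"}

def install_on(node: dict) -> str:
    os_id = node["os_id"]              # verify the OS version
    manager = PACKAGE_MANAGERS[os_id]  # pick the matching package manager
    # The configured installation commands would run here via the
    # downloaded package manager (e.g. an "apt"- or "yum"-based
    # docker install); simulated by returning a status string.
    return f"{node['host']}: installed docker via {manager}"

nodes = [{"host": "n1", "os_id": "ubuntu"},
         {"host": "n2", "os_id": "centos"}]

# Fan out over all target nodes in parallel; map preserves order.
with ThreadPoolExecutor() as pool:
    report = list(pool.map(install_on, nodes))
print(report)
```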
[0086] The present disclosure further discloses a user equipment (UE). The UE comprises a transmitter unit configured to transmit a request to a system [300] for an automatic installation of a docker on one or more target nodes. Further, the UE comprises a receiver unit configured to receive, from the system [300], a response to the request, wherein the response comprises an indication of the automatic installation of the docker on the one or more target nodes. The response generated by the system [300] is based on receiving, by a transceiver unit [302] at a central controller [301], via a user interface [310], one or more login credentials associated with the one or more target nodes, and one or more software package managers comprising a configured set of installation commands for an installation of the docker. The response generated by the system [300] is further based on initiating, by a processing unit [304] at the central controller [301], via the user interface [310], an install docker command to initiate the installation of the docker. The response generated by the system [300] is further based on verifying, by a verification unit [306] at the central controller [301], an operating system version on each of the one or more target nodes. The response generated by the system [300] is further based on downloading, by the processing unit [304] at the central controller [301], on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version. The response generated by the system [300] is further based on executing, by the processing unit [304] at the central controller [301], the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel.
[0087] As is evident from the above, the present disclosure provides a technically advanced solution for an automatic installation of a docker on one or more target nodes. The present solution enables automatic docker installation on multiple target nodes. This docker installation is performed in a parallel manner, which speeds up the deployment process. Also, the features of the present solution enable automatic download of the required packages based on the OS version of the target node(s).
[0088] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
We Claim:
1. A method [400] for an automatic installation of a docker on one or more target nodes,
the method comprising:
- receiving, by a transceiver unit [302] at a central controller [301], via a user interface [310], one or more login credentials associated with the one or more target nodes, and one or more software package managers comprising a configured set of installation commands for an installation of the docker;
- initiating, by a processing unit [304] at the central controller [301], via the user interface [310], an install docker command to initiate the installation of the docker;
- verifying, by a verification unit [306] at the central controller [301], an operating system version on each of the one or more target nodes;
- downloading, by the processing unit [304] at the central controller [301], on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version; and
- executing, by the processing unit [304] at the central controller [301], the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel.

2. The method [400] as claimed in claim 1, wherein the configured set of installation commands for the installation of the docker is associated with a version of the docker.
3. The method [400] as claimed in claim 1, wherein prior to initiating the install docker command, the method comprises:
- validating, by a validation unit [308] at the central controller [301], a reachability
of the one or more target nodes; and
- validating, by the validation unit [308] at the central controller [301], the one or
more login credentials associated with the one or more target nodes.

4. The method [400] as claimed in claim 3, wherein the validating the reachability of the one or more target nodes comprises identifying a network path available to each of the one or more target nodes.
5. The method [400] as claimed in claim 1, wherein the operating system version on each of the one or more target nodes is verified after logging in to each of the one or more target nodes using the one or more login credentials associated with the one or more target nodes.
6. The method [400] as claimed in claim 1, further comprising checking, by the verification unit [306], one of a success installation status and a failure installation status from each target node from the one or more target nodes.
7. The method [400] as claimed in claim 6, further comprising displaying, by the processing unit [304], at the user interface [310], one of the success installation status and the failure installation status from each target node from the one or more target nodes.
8. The method [400] as claimed in claim 1, wherein the method further comprises:
executing, by the processing unit [304] at the central controller [301], for each target node from the one or more target nodes, one or more pre-check scripts.
9. The method [400] as claimed in claim 8, further comprising displaying one of a success and a failure indication associated with the execution of the one or more pre-check scripts, for each target node from the one or more target nodes.
10. The method [400] as claimed in claim 1, wherein the configured set of installation commands associated with the installation of the docker is configured based on inputs from a user.
11. A system [300] for an automatic installation of a docker on one or more target nodes, the system comprising:

- a transceiver unit [302] at a central controller [301] configured to receive, at a user interface [310], one or more login credentials associated with the one or more target nodes and one or more software package managers comprising a configured set of installation commands for an installation of the docker;
- a processing unit [304] at the central controller [301], connected at least to the transceiver unit [302], the processing unit [304] configured to initiate an install docker command, at the user interface [310], to initiate the installation of the docker; and
- a verification unit [306] at the central controller [301], connected at least to the processing unit [304], the verification unit [306] configured to verify an operating system version on each of the one or more target nodes;
wherein the processing unit [304] at the central controller [301] is further configured to:
o download, on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version; and
o execute the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel.
12. The system [300] as claimed in claim 11, wherein the configured set of installation commands for the installation of the docker is associated with a version of the docker.
13. The system [300] as claimed in claim 11, wherein the system comprises a validation unit [308] at the central controller [301], connected at least to the transceiver unit [302], wherein prior to initiating the install docker command, the validation unit [308] is configured to:
o validate a reachability of the one or more target nodes; and
o validate the one or more login credentials associated with the one or more target nodes.

14. The system [300] as claimed in claim 13, wherein the validating the reachability of the one or more target nodes comprises identifying a network path available to each of the one or more target nodes.
15. The system [300] as claimed in claim 11, wherein the verification unit [306] is configured to verify the operating system version on each of the one or more target nodes after logging in to each of the one or more target nodes using the one or more login credentials associated with the one or more target nodes.
16. The system [300] as claimed in claim 11, wherein the verification unit [306] is further configured to check one of a success installation status and a failure installation status from each target node from the one or more target nodes.
17. The system [300] as claimed in claim 16, wherein the processing unit [304] is further configured to display, at the user interface [310], one of the success installation status and the failure installation status from each target node from the one or more target nodes.
18. The system [300] as claimed in claim 11, wherein the processing unit [304] at the central controller [301] is further configured to:
execute, for each target node from the one or more target nodes, one or more pre-check scripts.
19. The system [300] as claimed in claim 18, wherein the processing unit [304] at the central controller [301] is further configured to display one of a success and a failure indication associated with the execution of the one or more pre-check scripts, for each target node from the one or more target nodes.
20. The system [300] as claimed in claim 11, wherein the configured set of installation commands associated with the installation of the docker is configured based on inputs from a user.

21. A user equipment (UE) comprising:
- a transmitter unit, configured to transmit a request to a system [300] for an automatic installation of a docker on one or more target nodes; and
- a receiver unit, configured to receive from the system [300] a response to the request, wherein the response comprises an indication of the automatic installation of the docker on the one or more target nodes, and wherein the response is generated by the system [300] based on:
o receiving, by a transceiver unit [302] at a central controller [301], via a user interface [310], one or more login credentials associated with the one or more target nodes, and one or more software package managers comprising a configured set of installation commands for an installation of the docker;
o initiating, by a processing unit [304] at the central controller [301], via the user interface [310], an install docker command to initiate the installation of the docker;
o verifying, by a verification unit [306] at the central controller [301], an operating system version on each of the one or more target nodes;
o downloading, by the processing unit [304] at the central controller [301], on each target node from the one or more target nodes, a software package manager from the one or more software package managers, associated with a corresponding operating system version; and
o executing, by the processing unit [304] at the central controller [301], the configured set of installation commands from the downloaded software package manager, on each target node from the one or more target nodes, to install the docker on each target node from the one or more target nodes, in parallel.

Documents

Application Documents

# Name Date
1 202321047305-STATEMENT OF UNDERTAKING (FORM 3) [13-07-2023(online)].pdf 2023-07-13
2 202321047305-PROVISIONAL SPECIFICATION [13-07-2023(online)].pdf 2023-07-13
3 202321047305-FORM 1 [13-07-2023(online)].pdf 2023-07-13
4 202321047305-FIGURE OF ABSTRACT [13-07-2023(online)].pdf 2023-07-13
5 202321047305-DRAWINGS [13-07-2023(online)].pdf 2023-07-13
6 202321047305-FORM-26 [14-09-2023(online)].pdf 2023-09-14
7 202321047305-Proof of Right [25-10-2023(online)].pdf 2023-10-25
8 202321047305-ORIGINAL UR 6(1A) FORM 1 & 26)-011223.pdf 2023-12-08
9 202321047305-ENDORSEMENT BY INVENTORS [08-07-2024(online)].pdf 2024-07-08
10 202321047305-DRAWING [08-07-2024(online)].pdf 2024-07-08
11 202321047305-CORRESPONDENCE-OTHERS [08-07-2024(online)].pdf 2024-07-08
12 202321047305-COMPLETE SPECIFICATION [08-07-2024(online)].pdf 2024-07-08
13 202321047305-FORM 3 [02-08-2024(online)].pdf 2024-08-02
14 Abstract-1.jpg 2024-08-09
15 202321047305-Request Letter-Correspondence [14-08-2024(online)].pdf 2024-08-14
16 202321047305-Power of Attorney [14-08-2024(online)].pdf 2024-08-14
17 202321047305-Form 1 (Submitted on date of filing) [14-08-2024(online)].pdf 2024-08-14
18 202321047305-Covering Letter [14-08-2024(online)].pdf 2024-08-14
19 202321047305-CERTIFIED COPIES TRANSMISSION TO IB [14-08-2024(online)].pdf 2024-08-14