
Method And System For Managing A Host For Container Network Function Components

Abstract: The present disclosure relates to a method and system for managing a host for container network function components (CNFCs). The method comprises receiving, at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of one or more CNFCs. The method further comprises transmitting an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. Further, the method comprises re-instantiating the one or more CNFCs to the new host. Further, the method comprises transmitting a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the method comprises transmitting, via the CNFLM node, to a physical and virtual resource manager (PVIM) node, a set of details related to the new host. FIG. 5


Patent Information

Application #
Filing Date: 27 September 2023
Publication Number: 14/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)

“METHOD AND SYSTEM FOR MANAGING A HOST FOR CONTAINER NETWORK FUNCTION COMPONENTS”

We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR MANAGING A HOST FOR CONTAINER NETWORK FUNCTION COMPONENTS

FIELD OF THE DISCLOSURE

[0001] Embodiments of the present disclosure generally relate to the field of wireless communication. More particularly, the present disclosure relates to a method and a system for managing a host for container network function components.
BACKGROUND

[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the reader's understanding of the present disclosure, and not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of second generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. Third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. Fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] The 5G core networks are based on a service-based architecture (SBA) that is centred around network function (NF) services. In the said service-based architecture (SBA), a set of interconnected Network Functions (NFs) deliver the control plane functionality and common data repositories of the 5G network, where each NF is authorized to access services of other NFs. Particularly, each NF can register itself and its supported services with a Network Repository Function (NRF), which is used by other NFs for the discovery of NF instances and their services. Further, the network functions may include, but are not limited to, a containerized network function (CNF) and a virtual network function (VNF).
[0005] The CNFs are a set of small, independent, and loosely coupled services, such as microservices. These microservices work independently, which may increase speed and flexibility while reducing deployment risk. In 5G communication, a cloud-native 5G network offers the fully digitized architecture necessary for deploying new cloud services and taking full advantage of cloud-native 5G features such as edge computing, as well as network slicing and other services. The VNFs, in contrast, may run in virtual machines (VMs) on common virtualization infrastructure. The VNFs may be created on top of a network function virtualization infrastructure (NFVI), which may allocate resources like compute, storage, and networking efficiently among the VNFs.
[0006] In a communication network, such as a 5G communication network, CNF and containerized network function component (CNFC) instances run on multiple hosts or servers to provide services in the network. There may be multiple CNF or CNFC instances running or working on a single host. When any host or server becomes faulty, it may be non-operational. Therefore, the CNFs or CNFCs instantiated on that faulty or non-operational host or server may also stop running or working. Further, all CNFCs need to be restarted manually after the commissioning of new hosts. Activation of new hosts may also cause inventory data mismatches, as new host commissioning leads to changes of host IPs, IDs, etc. Traditionally, for the CNFs or CNFCs to become operational or active again, the operations team has to intervene manually by logging in to the server and bringing the CNF or CNFC services back up to resolve the problem. This process is not efficient and is time-consuming.

[0007] Hence, in view of these and other existing limitations, there arises an imperative need to provide an efficient solution to overcome the above-mentioned and other limitations, which the present disclosure aims to disclose.
SUMMARY

[0008] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0009] An aspect of the present disclosure may relate to a method for managing a host for container network function components (CNFCs). The method comprises receiving, by a processing unit via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. The method further comprises transmitting, by the processing unit via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. Further, the method comprises re-instantiating, by the processing unit via the DSA node, the one or more CNFCs to the new host. Further, the method comprises transmitting, by the processing unit via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the method comprises transmitting, by the processing unit via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
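The host-replacement flow recited above may be illustrated with a minimal, non-limiting sketch. The class names, method names, and data shapes below are illustrative assumptions for explanation only and are not the claimed implementation.

```python
# Non-limiting sketch of the CNFLM -> DSA -> PVIM host-replacement flow.
# All node interfaces here are hypothetical illustrations.

class DsaNode:
    """Docker service adapter: re-instantiates CNFCs on a target host."""
    def re_instantiate(self, cnfcs, new_host):
        # A real DSA would start containers on the new host; here we
        # simply record where each CNFC was placed.
        placements = {cnfc: new_host["name"] for cnfc in cnfcs}
        return {"status": "SUCCESS", "placements": placements}

class PvimNode:
    """Physical and virtual resource manager: keeps the inventory."""
    def __init__(self):
        self.inventory = {}
    def update_host_details(self, details):
        self.inventory[details["name"]] = details

class CnflmNode:
    """CNF lifecycle manager: orchestrates the replacement request."""
    def __init__(self, dsa, pvim):
        self.dsa, self.pvim = dsa, pvim
    def replace_host(self, cnfcs, new_host):
        # Instruct the DSA node to re-instantiate the CNFCs.
        response = self.dsa.re_instantiate(cnfcs, new_host)
        # On success, push the new host details to the PVIM node
        # so the inventory stays in sync.
        if response["status"] == "SUCCESS":
            self.pvim.update_host_details(new_host)
        return response

cnflm = CnflmNode(DsaNode(), PvimNode())
result = cnflm.replace_host(["amf-cnfc-1", "smf-cnfc-2"],
                            {"name": "host-b", "ip": "10.0.0.7"})
print(result["status"])  # SUCCESS
```

In this sketch the success response from the DSA node is the trigger for the inventory update, mirroring the ordering of the method steps above.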
[0010] In an exemplary aspect of the present disclosure, the method further comprises transmitting, by the processing unit via the CNFLM node, and to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host.
[0011] In an exemplary aspect of the present disclosure, the method further comprises displaying, by the processing unit at the UI, a plurality of new hosts. Further, the method comprises receiving, by the processing unit via the UI, based on an input from a user, a selection of the new host from the plurality of new hosts, for re-instantiating the one or more CNFCs to the new host, wherein the instruction to the DSA node to re-instantiate the one or more CNFCs is based on the selection of the new host.
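The host-selection step described in this aspect can be sketched as follows; the function and field names are illustrative assumptions and not limiting.

```python
# Illustrative sketch of the UI-driven host selection: the UI displays
# candidate hosts and the user's pick drives the re-instantiation
# instruction. All names here are hypothetical.

def select_new_host(candidate_hosts, user_choice_index):
    """Return the host the user selected from the displayed list."""
    if not 0 <= user_choice_index < len(candidate_hosts):
        raise ValueError("selection out of range")
    return candidate_hosts[user_choice_index]

hosts = [{"name": "host-b", "ip": "10.0.0.7"},
         {"name": "host-c", "ip": "10.0.0.8"}]
chosen = select_new_host(hosts, 1)
# The instruction to the DSA node is based on the selection.
instruction = {"action": "re-instantiate", "target_host": chosen["name"]}
```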
[0012] In an exemplary aspect of the present disclosure, the PVIM node is communicably coupled to a database, and the method comprises updating, by the processing unit via the PVIM node, the database with the set of details related to the new host.

[0013] In an exemplary aspect of the present disclosure, the set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.

[0014] In an exemplary aspect of the present disclosure, the method further comprises displaying, by the processing unit via the CNFLM node, and at the UI, the set of details related to the new host.
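The set of details for a new host (a host name and an IP address) and the PVIM-side database update can be sketched minimally as below; the record layout is an assumption for illustration.

```python
# Minimal sketch of the new-host details record and the inventory
# update performed at the PVIM node. Field names are hypothetical.
import ipaddress

def make_host_details(name, ip):
    # Validate the IP so a malformed address never reaches the inventory.
    return {"name": name, "ip": str(ipaddress.ip_address(ip))}

def update_inventory(db, details):
    """Upsert the new host record so the inventory stays in sync."""
    db[details["name"]] = details
    return db

db = {}
update_inventory(db, make_host_details("host-b", "10.0.0.7"))
```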
[0015] Another aspect of the present disclosure may relate to a system for managing a host for one or more container network function components (CNFCs). The system comprises a processing unit configured to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. The processing unit is further configured to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. Further, the processing unit is configured to re-instantiate, via the DSA node, the one or more CNFCs to the new host. Further, the processing unit is configured to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the processing unit is configured to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
[0016] Yet another aspect of the present disclosure relates to a non-transitory computer readable storage medium storing one or more instructions for managing a host for one or more container network function components (CNFCs). The instructions include executable code which, when executed by one or more units of a system, causes a processing unit of the system to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. Further, the executable code, when executed, causes the processing unit to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. The executable code, when further executed, causes the processing unit to re-instantiate, via the DSA node, the one or more CNFCs to the new host. Further, the executable code, when executed, causes the processing unit to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the executable code, when executed, causes the processing unit to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
OBJECTS OF THE DISCLOSURE

[0017] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0018] It is an object of the present disclosure to provide a system and method for managing a host for one or more container network function components (CNFCs).

[0019] It is another object of the present disclosure to provide a solution where no manual intervention at the backend is needed to re-instantiate CNFCs.

[0020] It is another object of the present disclosure to provide a solution to keep the inventory in sync by updating the inventory.

[0021] It is yet another object of the present disclosure to provide an optimal solution for the user to re-instantiate the same CNFCs while also keeping the inventory in sync.
BRIEF DESCRIPTION OF DRAWINGS
[0022] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure; rather, possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0023] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform, in accordance with an exemplary implementation of the present disclosure.

[0024] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with an exemplary implementation of the present disclosure.

[0025] FIG. 3 illustrates an exemplary block diagram of a system for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure.

[0026] FIG. 4 illustrates an exemplary signalling flow diagram for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure.

[0027] FIG. 5 illustrates an exemplary flow diagram of a method for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure.

[0028] FIG. 6 illustrates an exemplary diagram of a system architecture for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure.

[0029] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION

[0030] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or in any combination with other features. An individual feature may not address any of the problems discussed above or might address only some of them.
[0031] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0032] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0033] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but it could have additional steps not included in a figure.
[0034] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0035] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0036] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, or “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device, or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from unit(s) which are required to implement the features of the present disclosure.
[0037] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0038] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0039] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuit, etc.
[0040] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/components within the system and/or connected with the system.
[0041] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for managing a host for one or more container network function components (CNFCs). More particularly, the present disclosure provides a solution where no manual intervention at the backend is needed to re-instantiate CNFC instances. Further, the present disclosure provides a solution to keep the inventory in sync by updating the inventory. Furthermore, the present disclosure provides an optimal solution for the user to re-instantiate the same CNFCs while also keeping the inventory in sync.
[0042] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0043] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform [100], in accordance with an exemplary implementation of the present disclosure. The MANO architecture [100] is developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/service(s), etc. The MANO architecture [100] deploys the network node(s) in the form of Virtual Network Functions (VNFs) and Cloud-native/Container Network Functions (CNFs). The system may comprise one or more components of the MANO architecture [100]. The MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendors' CNFs and VNFs to the platform.
[0044] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer, a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108], and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner obvious to a person skilled in the art for implementing the features of the present disclosure.
[0045] The NFV and SDN (NFVSDN) design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050], and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network a microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] is responsible for determining which sequence is to be followed for executing a process, for example, the sequence for execution of processes P1 and P2 in an AMF network function of the communication network (such as a 5G network). The VNF catalogue [1044] stores the metadata of all the VNFs (and, in some cases, CNFs). The network services catalogue [1046] stores the information about the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/network functions (NFs)) that must be applied to a specific networked data packet. The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] is used for CNF lifecycle management.
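The placement decision attributed to the VNF lifecycle manager (compute) above can be illustrated with a simple, non-limiting sketch; the capacity model and selection rule below are assumptions for explanation, not the actual algorithm of the disclosure.

```python
# Hypothetical illustration of a placement decision: pick a server
# for a new microservice instance. The free-CPU model is an assumption.

def choose_server(servers, required_cpu):
    """Pick the server with the most free CPU that fits the request."""
    eligible = [s for s in servers if s["free_cpu"] >= required_cpu]
    if not eligible:
        raise RuntimeError("no server can host the microservice")
    # Greedy choice: the server with the largest remaining capacity.
    return max(eligible, key=lambda s: s["free_cpu"])["name"]

servers = [{"name": "srv-1", "free_cpu": 4},
           {"name": "srv-2", "free_cpu": 16}]
```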
[0046] The platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070]. The microservices elastic load balancer [1062] is used for maintaining the load balancing of the requests for the services. The identity & access manager [1064] is used for logging purposes. The command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during run time. The central logging manager [1068] is responsible for keeping the logs of every service. These logs are generated by the MANO platform [100] and are used for debugging purposes. The event routing manager [1070] is responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
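The event-routing behaviour described for the event routing manager can be sketched as a simple route table; the paths and service names below are purely illustrative assumptions.

```python
# Non-limiting sketch of an event routing manager mapping incoming
# API hits to the corresponding service. The route table is hypothetical.

ROUTES = {
    "/cnf/instantiate": "cnf-lifecycle-manager",
    "/vnf/instantiate": "vnf-lifecycle-manager",
    "/inventory/update": "physical-virtual-resource-manager",
}

def route_event(path):
    """Return the service that should handle the given API path."""
    try:
        return ROUTES[path]
    except KeyError:
        raise ValueError(f"no service registered for {path}")
```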
[0047] The platform core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF backup & upgrade manager [1102], a micro service auditor [1104], and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager [1084] is responsible for supervising the alarms the vendor is generating. The performance manager [1086] is responsible for managing the performance counters. The policy execution engine (PEEGN) [1088] is responsible for managing all the policies. The capacity monitoring manager (CMM) [1090] is responsible for sending the request to the PEEGN [1088]. The release management (mgmt.) repository (RMR) [1092] is responsible for managing the releases and the images of all the vendor network nodes. The configuration manager & GCT [1094] manages the configuration and GCT of all the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It is further noted that the policy execution engine (PEEGN) [1088], the configuration manager & GCT [1094], and the NPDA [1096] work together. The platform NoSQL DB [1098] is a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs [1100] schedule tasks such as, but not limited to, triggering an event, traversing the network graph, etc. The VNF backup & upgrade manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces that backup on demand in case of server failure. The micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances not instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100], thereby assuring that services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] is used for newer instances that are spawning.
[0048] The platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122], a generic decoder and indexer (XML, CSV, JSON) [1124], a docker service adaptor [1126], an API adapter [1128], and an NFV gateway [1130]. The platform external API adaptor and gateway [1122] is responsible for handling the external services (external to the MANO platform [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in the XML, CSV, or JSON format. The docker service adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100] for communication. The API adapter [1128] is used to connect with virtual machines (VMs). The NFV gateway [1130] is responsible for providing the path to each service going to/incoming from the MANO architecture [100].
[0049] Referring to FIG. 2, an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented, in accordance with an exemplary implementation of the present disclosure, is shown. In an implementation, the computing device [200] may implement a method for automating management of network traffic at one or more network functions in a network by utilising a system [300]. In another implementation, the computing device [200] itself implements the method for automating management of network traffic at one or more network functions in a network using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0050] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
10 processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a randomaccess memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
15 main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a specialpurpose machine that is customized to perform the operations specified in the
20 instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0051] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED) display, organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0052] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [200], causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0053] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

[0054] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], a host [224], the local network [222] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210] or other non-volatile storage for later execution.
[0055] Referring to FIG. 3, an exemplary block diagram of a system [300] for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure, is illustrated. In one example, the system [300] may be in communication with other network entities/components known to a person skilled in the art. Such network entities/components have not been depicted in FIG. 3 and have not been explained here for the sake of brevity.

[0056] Referring to FIG. 4, an exemplary signalling flow diagram [400] for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure, is illustrated.

[0057] It may be noted that FIG. 3 and FIG. 4 have been explained simultaneously and may be read in conjunction with each other.
[0058] As depicted in FIG. 3, the system [300] comprises at least one processing unit [302] and at least one display unit [304]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may reside in a server or the network entity, or the system [300] may be in communication with the network entity to implement the features as disclosed in the present disclosure.

[0059] The system [300] is configured for managing a host for one or more container network function components (CNFCs) with the help of the interconnection between the components/units of the system [300]. The container network function (CNF) may be a network function that may be implemented within a containerized environment using technologies such as, but not limited to, docker, and the like. The said container network functions that may be implemented within the containerized environment may be cloud-native network functions. The cloud-native network functions may not rely on dedicated hardware or virtual machines for implementation; rather, they may be implemented within a container. Also, the containerization of the network function may make it possible to manage how and when the network function may run across a cluster in the environment. Furthermore, the container network function components (CNFCs) may include components such as, but not limited to, a container runtime, an operating system, a lifecycle management component, a storage component, etc.

[0060] Further, as would be understood, the host on which a CNF runs may be a physical or a virtual machine that may have all the necessary components such as, but not limited to, networking, storage and security. The host further ensures that CNFs may operate efficiently within a containerized environment or a cloud-native environment.
[0061] In operation, the processing unit [302] may receive, via a user interface (UI), at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. This has been depicted by Step [402] in FIG. 4.

[0062] As would be understood, the CNFLM node may manage the lifecycle of the container. The management of the lifecycle of the container is a crucial process, where the CNFLM node may oversee the creation, deployment and operation of the container until the container may be eventually decommissioned.

[0063] In an example, a user may receive all details related to a plurality of hosts at the UI. Once the user receives all the details related to the plurality of hosts, the user may select one or more faulty hosts from the plurality of hosts, at the UI, to be replaced with one or more new hosts. Further, the details related to the plurality of hosts may include, but are not limited to, a host name and a host internet protocol (IP) address. There are a variety of issues that could cause hosts or servers to go down or become faulty, including hardware failure, viruses, power outages, as well as natural or physical disasters like fires or floods. A host or server may also go down because of corrupted files or misconfigurations.

[0064] Continuing further, once the user selects the one or more faulty hosts from the plurality of hosts, the details related to the CNFCs on the one or more faulty hosts may be displayed on the UI. The details related to the CNFCs on the faulty hosts may include a CNF name, a CNF version, a CNF ID, a CNFC name, a CNFC ID, and a container ID.
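The host and CNFC details described above can be modelled as simple records. A minimal Python sketch follows; the field values and class names are assumptions for illustration and are not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class HostDetails:
    # Details shown on the UI for each host (host name and IP address).
    host_name: str
    host_ip: str

@dataclass
class CNFCDetails:
    # Details shown on the UI for CNFCs running on a faulty host.
    cnf_name: str
    cnf_version: str
    cnf_id: str
    cnfc_name: str
    cnfc_id: str
    container_id: str

# Hypothetical records as they might appear at the UI.
faulty = HostDetails(host_name="host-a", host_ip="10.0.0.1")
cnfc = CNFCDetails("amf", "1.2.0", "cnf-01", "amf-worker", "cnfc-07", "c0ffee")
print(faulty.host_ip)
print(cnfc.cnfc_name)
```

Keeping the two record types separate mirrors the two UI screens: one listing hosts, one listing the CNFCs on a selected faulty host.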
[0065] Continuing further, the display unit [304] may display, at the UI, a plurality of new hosts. The display unit [304] may further display, via the CNFLM node, and at the UI, the set of details relating to the new host. The details related to the new host may comprise one or more new host names and a new host IP address. The user may select a new host from the plurality of new hosts displayed at the UI of the display unit [304]. The user, to select the new host, may select the new host name from the one or more new host names displayed at the UI of the display unit [304]. Once the user selects the new host name from the set of new host names, the new host IP address corresponding to the selected new host name may be displayed, at the UI, on the display unit [304].
[0066] Continuing further, the processing unit [302] may transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to the new host. This has been depicted by Step [404] in FIG. 4.

[0067] As would be understood, the DSA is a component of the system [300] that may have been designed to interface between the docker services and the other components of the system [300]. Further, as would be understood, to instantiate may be a process to create an instance of the CNFCs and to make the CNFCs operational on the selected new host.
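The DSA's role as an interface between the CNFLM and the container runtime can be sketched as a thin adapter. The following plain-Python illustration is a hypothetical model, with all class and method names assumed; a fake runtime stands in for the actual docker services:

```python
class DockerServiceAdapter:
    """Hypothetical DSA node: relays CNFLM instructions to the container runtime."""

    def __init__(self, runtime):
        self.runtime = runtime  # object that actually creates containers

    def re_instantiate(self, cnfc_ids, new_host):
        # Create a fresh instance of each CNFC on the selected new host and
        # report success back to the CNFLM (steps [404]-[410] of FIG. 4).
        for cnfc_id in cnfc_ids:
            self.runtime.create(cnfc_id, new_host)
        return {"status": "SUCCESS", "host": new_host}


class FakeRuntime:
    """Stand-in for docker services; records where each CNFC was placed."""

    def __init__(self):
        self.placed = []

    def create(self, cnfc_id, host):
        self.placed.append((cnfc_id, host))


runtime = FakeRuntime()
dsa = DockerServiceAdapter(runtime)
response = dsa.re_instantiate(["cnfc-07", "cnfc-08"], "host-b")
print(response["status"])  # SUCCESS
```

The success dictionary models the response the DSA returns to the CNFLM once the CNFCs are operational on the new host.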
[0068] Continuing further, the processing unit [302] may re-instantiate, via the DSA node, the one or more CNFCs to the new host. This has been depicted by Step [406] in FIG. 4.

[0069] Furthermore, the processing unit [302] may transmit from the new host or the server, via the DSA node (shown as step [408]), a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. This has been depicted by Step [410] of FIG. 4. The success response may indicate that the one or more CNFCs operationalised on the faulty host are now successfully operationalised on the selected new host.
[0070] Thereafter, the processing unit [302] may transmit, via the CNFLM node, to a physical and virtual resource manager (PVIM) node, a set of details related to the new host. This has been depicted by Step [412] of FIG. 4.

[0071] As would be understood, the PVIM service maintains the virtual inventory, such as virtual machines, and limited physical inventory, such as servers. It maintains the relation between physical and virtual resources (w.r.t. overlay). Also, it describes physical and virtual resources with respect to different attributes using updates from external micro-services, such as the CNFLM microservice or node.

[0072] Continuing further, in an implementation, the PVIM node is communicably coupled to a database. The processing unit [302] may update, via the PVIM node, the database with the set of details related to the new host. The database, communicably coupled with the PVIM, may store all the details related to the selected new host for the said one or more CNFCs. The set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.
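The PVIM's database update can be sketched with Python's built-in sqlite3 module. The table schema, column names, and values below are assumptions for illustration only; the disclosure does not specify the database layout:

```python
import sqlite3

# In-memory inventory database standing in for the PVIM-coupled store.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE host_inventory (cnfc_id TEXT PRIMARY KEY, host_name TEXT, host_ip TEXT)"
)
db.execute("INSERT INTO host_inventory VALUES ('cnfc-07', 'host-a', '10.0.0.1')")

def update_inventory(conn, cnfc_id, new_host_name, new_host_ip):
    # Record the new host name and IP address for a re-instantiated CNFC,
    # i.e. the set of details transmitted to the PVIM node.
    conn.execute(
        "UPDATE host_inventory SET host_name = ?, host_ip = ? WHERE cnfc_id = ?",
        (new_host_name, new_host_ip, cnfc_id),
    )
    conn.commit()

update_inventory(db, "cnfc-07", "host-b", "10.0.0.2")
row = db.execute(
    "SELECT host_name, host_ip FROM host_inventory WHERE cnfc_id = 'cnfc-07'"
).fetchone()
print(row)  # ('host-b', '10.0.0.2')
```

Updating the row in place keeps the inventory in sync with the re-instantiation, which is the property the disclosure emphasises.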
[0073] Further, the processing unit [302] may transmit, via the CNFLM node, to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host. This has been depicted by step [414] of FIG. 4. Once the success response is received at the UI, the updated details related to the new host may be displayed, at the UI of the display unit [304], to the user.
[0074] Referring to FIG. 5, an exemplary flow diagram of a method [500] for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure, is illustrated. In an implementation, the method [500] is performed by the system [300]. Also, as shown in FIG. 5, the method [500] initiates at step [502].

[0075] At step [504], the method [500] comprises receiving, by a processing unit [302] via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs.

[0076] As would be understood, the CNFLM node may manage the lifecycle of the container. The management of the lifecycle of the container is a crucial process, where the CNFLM node may oversee the creation, deployment and operation of the container until the container may be eventually decommissioned.
[0077] In an example, a user may receive all details related to a plurality of hosts at the UI. Once the user receives all the details related to the plurality of hosts, the user may select one or more faulty hosts from the plurality of hosts, at the UI, to be replaced with one or more new hosts. There are a variety of issues that could cause hosts or servers to go down or become faulty, including hardware failure, viruses, power outages, as well as natural or physical disasters like fires or floods. A host or server may also go down because of corrupted files or misconfigurations. Further, the details related to the plurality of hosts may include, but are not limited to, a host name and a host internet protocol (IP) address.

[0078] Continuing further, once the user selects the one or more faulty hosts from the plurality of hosts, the details related to the CNFCs on the one or more faulty hosts may be displayed on the UI. The details related to the CNFCs on the faulty hosts may include a CNF name, a CNF version, a CNF ID, a CNFC name, a CNFC ID, and a container ID.

[0079] Continuing further, the display unit [304] may display, at the UI, a plurality of new hosts. The display unit [304] may further display, via the CNFLM node, and at the UI, the set of details relating to the new host. The details related to the new host may comprise one or more new host names and a new host IP address. The user may select a new host from the plurality of new hosts displayed at the UI of the display unit [304]. The user, to select the new host, may select the new host name from the one or more new host names displayed at the UI of the display unit [304]. Once the user selects the new host name from the set of new host names, the new host IP address corresponding to the selected new host name may be displayed, at the UI, on the display unit [304].
[0080] Next, at step [506], the method [500] comprises transmitting, by the processing unit [302] via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host.

[0081] As would be understood, the DSA is a component of the system [300] that may have been designed to interface between the docker services and the other components of the system [300]. Further, as would be understood, to instantiate may be a process to create an instance of the CNFCs and to make the CNFCs operational on the selected new host.

[0082] Further, at step [508], the method [500] comprises re-instantiating, by the processing unit [302] via the DSA node, the one or more CNFCs to the new host. The processing unit [302] may receive, via the UI, based on the input from the user, the selection of the new host from the plurality of new hosts, for re-instantiating the one or more CNFCs to the new host. Further, the instruction to the DSA node to re-instantiate the one or more CNFCs is based on the selection of the new host. As would be understood, to re-instantiate the one or more CNFCs, the processing unit [302] may operationalise the said one or more CNFCs, that may be operationalised on the faulty host, on the selected new host.

[0083] Further, at step [510], the method [500] comprises transmitting, by the processing unit [302] via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. The success response may indicate that the one or more CNFCs operationalised on the faulty host are now successfully operationalised on the selected new host.
[0084] Furthermore, at step [512], the method [500] comprises transmitting, by the processing unit [302] via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host. As would be understood, the PVIM may be responsible for managing both the physical resources such as, but not limited to, servers, storage devices and other hardware resources, and the virtual resources such as, but not limited to, virtual machines, virtual networks, etc.

[0085] In an implementation, the PVIM node is communicably coupled to a database. The processing unit [302] may update, via the PVIM node, the database with the set of details related to the new host. The database, communicably coupled with the PVIM, may store all the details related to the selected new host for the said one or more CNFCs. The details related to the selected new host may comprise a new host name and a new host internet protocol (IP) address.

[0086] Moreover, the processing unit [302] may transmit, via the CNFLM node, and to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host. Once the success response may be received at the UI, the updated details related to the new host may be displayed, at the UI of the display unit [304], to the user.
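Steps [504] to [512] above can be sketched end to end as a plain-Python signalling flow. Every node here is a stand-in object, and the message shapes are assumptions for illustration rather than the disclosed interfaces:

```python
class CNFLM:
    """Stand-in CNFLM node; records the messages it handles."""
    def __init__(self):
        self.log = []

class PVIM:
    """Stand-in PVIM node; holds the host details it is sent."""
    def __init__(self):
        self.inventory = {}
    def update(self, details):
        self.inventory.update(details)

def dsa_re_instantiate(cnfc_ids, host_name):
    # Stand-in for the DSA node: pretend each CNFC now runs on the new host.
    return {"status": "SUCCESS", "placed": {c: host_name for c in cnfc_ids}}

def replace_host(cnflm, pvim, cnfc_ids, new_host):
    """Hypothetical walk-through of method [500]."""
    cnflm.log.append(("request", tuple(cnfc_ids), new_host["host_name"]))  # step [504]
    result = dsa_re_instantiate(cnfc_ids, new_host["host_name"])           # steps [506]-[508]
    cnflm.log.append(("dsa_response", result["status"]))                   # step [510]
    pvim.update(new_host)                                                  # step [512]
    return result["status"]

cnflm, pvim = CNFLM(), PVIM()
status = replace_host(cnflm, pvim, ["cnfc-07"],
                      {"host_name": "host-b", "host_ip": "10.0.0.2"})
print(status)  # SUCCESS
```

The returned status models the success response forwarded to the UI, and the PVIM's inventory ends up holding the new host name and IP address.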
[0087] Referring to FIG. 6, an exemplary diagram of a system architecture [600] for managing a host for one or more container network function components (CNFCs), in accordance with an exemplary implementation of the present disclosure, is illustrated. The system architecture [600] comprises a User Interface (UI) [602], a Container Network Function Lifecycle Manager (CNFLM) [604], a Docker Service Adapter (DSA) [606], and a Physical and Virtual Resource Manager (PVIM) [608]. As shown in FIG. 6, all units shown of the system architecture [600] should also be assumed to be connected to each other. Also, in FIG. 6 only a few units are shown; however, the system architecture [600] may comprise multiple such units, or the system architecture [600] may comprise any such number of said units, as required to implement the features of the present disclosure.

[0088] The UI [602] may display all details related to a plurality of hosts. Once the details related to the plurality of hosts are displayed, the user may select one or more faulty hosts from the plurality of hosts that may be replaced with one or more new hosts. Also, the details related to the one or more new hosts may be displayed on the UI [602]. Once the user selects the one or more new hosts to replace the one or more faulty hosts, the UI may send an instruction to the CNFLM [604] to replace the one or more faulty hosts with the selected one or more new hosts.
[0089] The CNFLM [604] captures the details of vendors, CNFs and CNFCs via Create, Read, and Update APIs exposed by the CNFLM [604] service. The captured details are stored in an elastic search database and can be further used by the DSA [606]. The CNFLM [604] is responsible for creating a CNF or individual CNFC instances. Also, it is responsible for healing and scaling out CNFs or individual CNFCs. The CNFLM [604] further transmits the instructions, to the DSA [606], to re-instantiate the CNFC instances to the selected one or more new hosts. The DSA [606] may be used for creating the containers on Docker sites as a swarm service.
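Creating a container on a Docker site as a swarm service pinned to a specific host is typically done with `docker service create` and a placement constraint. The sketch below only builds the command line (it does not invoke Docker), and the service name, image, and host name are assumptions for illustration:

```python
def swarm_service_command(service_name, image, host_name, replicas=1):
    # Build a `docker service create` invocation that pins the service to a
    # specific swarm node, mirroring the DSA placing a CNFC on the chosen
    # new host with at least one replication.
    return [
        "docker", "service", "create",
        "--name", service_name,
        "--replicas", str(replicas),
        "--constraint", f"node.hostname=={host_name}",
        image,
    ]

cmd = swarm_service_command("amf-worker", "registry.local/amf:1.2.0", "host-b")
print(" ".join(cmd))
```

In a real deployment the DSA would execute such a command (or the equivalent API call) against the swarm manager of the selected Docker site.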
[0090] Continuing further, the CNFLM [604] sends the instruction with CNFC details to the DSA [606]. Every CNFC may be deployed on a different Docker site as per the instructions, with at least one replication. When a container runs successfully, a Docker Agent Manager (DAM) sends a response to the DSA [606] per CNF, and then the DSA [606] sends a final response to the CNFLM [604]. Once the CNFLM [604] receives the response from the DSA [606], the CNFLM transmits the instructions, to the PVIM [608], to update the inventory based on the re-instantiation of the CNFC instances on the selected one or more new hosts.

[0091] The PVIM [608] maintains the virtual inventory and limited physical inventory. It maintains the relation between physical and virtual resources. Also, it describes physical and virtual resources with respect to different attributes using updates from external micro-services. Once the inventory is updated, the PVIM [608] sends a response to the CNFLM [604] about the successful updating of the details related to the selected one or more new hosts.

[0092] Finally, the CNFLM [604] transmits a success response to the UI [602]. The details related to the updated one or more new hosts may be displayed at the UI [602] for the user.
[0093] The present disclosure further discloses a non-transitory computer readable storage medium storing one or more instructions for managing a host for one or more container network function components (CNFCs). The instructions include executable code which, when executed by one or more units of a system [300], causes a processing unit [302] of the system [300] to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. Further, the executable code when executed causes the processing unit [302] to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. The executable code when further executed causes the processing unit [302] to re-instantiate, via the DSA node, the one or more CNFCs to the new host. Further, the executable code when executed causes the processing unit [302] to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the executable code when executed causes the processing unit [302] to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
[0094] As is evident from the above, the present disclosure provides a technically advanced solution for managing a host for one or more container network function components (CNFCs). More particularly, in the present solution, no manual intervention at the backend is needed to re-instantiate CNFC instances. Further, the present solution keeps the inventory in sync by updating the inventory. Furthermore, the present solution provides an easy one-click operation for the user to re-instantiate the same CNFCs and also keeps the inventory in sync.

[0095] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
[0096] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A method for managing a host for one or more container network function
components (CNFCs), the method comprising:
- receiving, by a processing unit [302] via a user interface (UI), and at a
container network function lifecycle manager (CNFLM) node, a
request for replacement of a host of the one or more CNFCs;
- transmitting, by the processing unit [302] via the CNFLM node, an
instruction to a docker service adapter (DSA) node to re-instantiate the
one or more CNFCs to a new host;
- re-instantiating, by the processing unit [302] via the DSA node, the one
or more CNFCs to the new host;
- transmitting, by the processing unit [302] via the DSA node, a success
response to the CNFLM node, in response to the re-instantiation of the
one or more CNFCs to the new host; and
- transmitting, by the processing unit [302] via the CNFLM node, and to
a physical and virtual resource manager (PVIM) node, a set of details
related to the new host.
2. The method as claimed in claim 1, wherein the method comprises
transmitting, by the processing unit [302] via the CNFLM node, and to the
UI, a success response indicative of replacement of the host and
re-instantiation of the one or more CNFCs to the new host.
3. The method as claimed in claim 1, wherein the method comprises:
- displaying, by a display unit [304] at the UI, a plurality of new hosts;
and
- receiving, by the processing unit [302] via the UI, based on an input
from a user, a selection of the new host from the plurality of new hosts,
for re-instantiating the one or more CNFCs to the new host,
wherein the instruction to the DSA node to re-instantiate the one or
more CNFCs is based on the selection of the new host.
4. The method as claimed in claim 1, wherein the PVIM node is communicably
coupled to a database, and wherein the method comprises updating, by the
processing unit via the PVIM node, the database with the set of details
related to the new host.
5. The method as claimed in claim 1, wherein the set of details related to the
new host comprises a new host name and a new host internet protocol (IP)
address.
6. The method as claimed in claim 1, the method further comprising:
- displaying, by the display unit [304] via the CNFLM node, and at the
UI, the set of details related to the new host.
7. A system for managing a host for one or more container network function
components (CNFCs), the system comprising:
- a processing unit [302] configured to:
o receive, via a user interface (UI), and at a container network
function lifecycle manager (CNFLM) node, a request for
replacement of a host of the one or more CNFCs;
o transmit, via the CNFLM node, an instruction to a docker service
adapter (DSA) node to re-instantiate the one or more CNFCs to a
new host;
o re-instantiate, via the DSA node, the one or more CNFCs to the
new host;
o transmit, via the DSA node, a success response to the CNFLM
node, in response to the re-instantiation of the one or more CNFCs
to the new host; and
o transmit, via the CNFLM node, and to a physical and virtual
resource manager (PVIM) node, a set of details related to the new
host.
8. The system as claimed in claim 7, wherein the processing unit [302] is
configured to transmit, via the CNFLM node, and to the UI, a success
response indicative of replacement of the host and re-instantiation of the one
or more CNFCs to the new host.
9. The system as claimed in claim 7, wherein the system further comprises:
- a display unit [304] connected to at least the processing unit [302], the
display unit [304] is configured to display, at the UI, a plurality of new
hosts; and
- the processing unit [302] configured to receive, via the UI, based on an
input from a user, a selection of the new host from the plurality of new
hosts, for re-instantiating the one or more CNFCs to the new host,
wherein the instruction to the DSA node to re-instantiate the one or more
CNFCs is based on the selection of the new host.
10. The system as claimed in claim 7, wherein the PVIM node is communicably
coupled to a database, and wherein the processing unit [302] is configured
to update, via the PVIM node, the database with the set of details related to
the new host.
11. The system as claimed in claim 7, wherein the set of details related to the
new host comprises a new host name and a new host internet protocol (IP)
address.
12. The system as claimed in claim 7, wherein the display unit [304] is further
configured to:
- display, via the CNFLM node, and at the UI, the set of details relating
to the new host.
