Abstract: The present disclosure relates to a method and a system for service continuity of a network node. The present disclosure encompasses monitoring, by a monitoring unit [302], a health status of one or more processes [408] running inside a container [402] that is spawned using an image. A transceiver unit [304] receives a health indication regarding the one or more processes [408] running inside the container [402]. Then a high availability module [306] spawns a new process or restarts a process based on a restart policy stored within a configuration module [310] that stores container data, configuration related to processes and state data of the one or more processes [408]. The present disclosure further encompasses restarting, by a restarting unit [308], one or more supporting services [406] in the image of the container [402]. [FIG. 3]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR SERVICE CONTINUITY OF A
NETWORK NODE”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR SERVICE CONTINUITY OF A
NETWORK NODE
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to network service
management systems. More particularly, embodiments of the present disclosure
relate to methods and systems for service continuity of a network node.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as an admission of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. The third-generation
(3G) technology marked the introduction of high-speed internet access, mobile
video calling, and location-based services. The fourth-generation (4G) technology
revolutionized wireless communication with faster data speeds, better network
coverage, and improved security. Currently, the fifth-generation (5G) technology is
being deployed, promising even faster data speeds, low latency, and the ability to
connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable
of delivering more services to its users.
[0004] The existing wireless communication systems may use containerized
microservice architecture for providing services to the users. In a standard container
network function (CNF), or a node in a containerized architecture, a microservice
or a process runs inside the container.
[0005] In the event of a microservice failure/crash, service continuity may be
provided by spawning a new container and/or restarting the container. As would
be understood, the spawning of the new container not only takes additional
resources, but also requires additional time to restore the service provided by the
microservice. In another option for restoring the service, the container may be
restarted; however, if the container is in bad health due to certain ongoing issues,
the process may be terminated within the container, and the container may keep
crashing again, forming a loop of crash and restart. Also, the supporting services
which support the process may be restarted, causing further issues. The required
container restart time may be considerably high and may result in significant
degradation in the quality of services. Thus, the solution of spawning a new process
may be considered better than restarting the container, since the restart may take
longer due to the allocation of hardware and software resources. In the case of
spawning a new process, no allocation of resources is required, and the spawning
may be done in a few seconds. For example, for images which are heavy sized, the
container restart may even take a significant period of time, which may be several
seconds.
[0006] Telecommunication applications may not be able to afford this large amount
of time for container restart, since during such a procedure there may be a downtime
of the services, which may not be acceptable due to the significant degradation in the
overall quality of service. Also, any supporting service/process running inside the
container also needs to be restarted on container restart, which creates an additional
burden of handling this specific case.
[0007] Thus, there exists an imperative need in the art to provide a method and a
system for providing service continuity during microservice failure, which the
present disclosure aims to address.
SUMMARY
[0008] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0009] An aspect of the present disclosure may relate to a method for service
continuity of a network node. The method comprises monitoring, by a monitoring
unit, a health status of one or more processes running inside a container, wherein
the container is spawned using an image of the container. The method further
comprises receiving, by a transceiver unit, a health indication regarding the one or
more processes running inside the container. The method further comprises
spawning, by a high availability module, a new process or restarting a process based
on a restart policy stored within a configuration module, wherein the configuration
module stores container data, configuration related to processes and state data of
the one or more processes running inside the container. The method also comprises
restarting, by a restarting unit, one or more supporting services in the image of the
container.
[0010] In an exemplary aspect of the present disclosure, the one or more supporting
services are associated with the one or more processes running inside the container.
[0011] In another exemplary aspect of the present disclosure, the container is
spawned using the image of the container.
[0012] In another exemplary aspect of the present disclosure, the receiving, by the
transceiver unit, the image of the container comprises receiving the image of the
container from a storage unit.
[0013] In another exemplary aspect of the present disclosure, the restarting, by the
restarting unit, the one or more supporting services is based on an entry point.
[0014] In another exemplary aspect of the present disclosure, the restart policy is
controlled by a configuration module.
[0015] In another exemplary aspect of the present disclosure, the restarting, by the
restarting unit, the one or more supporting services is based on an entry point
running inside the container in an infinite loop.
[0016] In another exemplary aspect of the present disclosure, the method further
comprises providing the service continuity in a containerized network function
(CNF) environment.
[0017] In another exemplary aspect of the present disclosure, the network node is
implemented as a microservice.
[0018] Another aspect of the present disclosure may relate to a system for service
continuity of a network node. The system comprises a monitoring unit, a transceiver
unit, a high availability module, a configuration module, and a restarting unit
connected with each other. The monitoring unit is configured to monitor a health
status of one or more processes running inside a container, wherein the container
is spawned using an image of the container. The transceiver unit is configured
to receive a health indication regarding the one or more processes running inside
the container. The high availability module is configured to spawn a new process
or restart a process based on the restart policy stored within a configuration
module, wherein the configuration module stores container data, configuration
related to processes and state data of the one or more processes running inside the
container. The restarting unit is configured to restart one or more supporting
services in the image of the container.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing one or more instructions for service
continuity of a network node, the one or more instructions include executable code
which, when executed by one or more units of a system, causes the one or more
units to perform certain functions. The one or more instructions when executed
causes a monitoring unit to monitor a health status of one or more processes running
inside a container. The container is spawned using an image of the container.
The one or more instructions when executed further causes a transceiver unit to
receive a health indication regarding the one or more processes running inside the
container. The one or more instructions when executed further causes a high
availability module to spawn a new process or restart a process based on the
restart policy stored within a configuration module. The configuration module
stores container data, configuration related to processes and state data of the one or
more processes running inside the container. The one or more instructions when
executed further causes a restarting unit to restart one or more supporting services
in the image of the container.
OBJECTS OF THE DISCLOSURE
[0020] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.
[0021] It is an object of the present disclosure to provide a system and a method for
service continuity of a network node.
[0022] It is an object of the present disclosure to provide a system and a method for
providing service continuity during microservice failure which avoids container
restart in an event of microservice malfunction.
[0023] It is another object of the present disclosure to provide a solution that avoids
the additional burden of managing supporting services inside the container.
[0024] It is yet another object of the present disclosure to provide a solution which
does not require any support from the container for providing service continuity.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0026] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture.
[0027] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0028] FIG. 3 illustrates an exemplary block diagram of a system for service
continuity of a network node, in accordance with exemplary implementations of the
present disclosure.
[0029] FIG. 4 illustrates a system architecture [400] used for service continuity of
the network node, in accordance with exemplary implementations of the present
disclosure.
[0030] FIG. 5 illustrates a method flow diagram for service continuity of the
network node, in accordance with exemplary implementations of the present
disclosure.
[0031] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0032] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0033] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0034] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0035] It should be noted that the terms "first", "second", "primary", "secondary",
"target" and the like, herein do not denote any order, ranking, quantity, or
importance, but rather are used to distinguish one element from another.
[0036] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0038] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0039] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0040] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0041] As used herein “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to as a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0042] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0043] As used herein the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0044] As discussed in the background section, the currently known solutions have
several shortcomings. The present disclosure aims to overcome the above-mentioned
and other existing problems in this field of technology by providing a
method and a system for service continuity of a network node.
[0045] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture, in accordance with exemplary
implementation of the present disclosure. As shown in FIG. 1, the 5GC network
architecture [100] includes a user equipment (UE) [102], a radio access network
(RAN) [104], an access and mobility management function (AMF) [106], a Session
Management Function (SMF) [108], a Service Communication Proxy (SCP) [110],
an Authentication Server Function (AUSF) [112], a Network Slice Specific
Authentication and Authorization Function (NSSAAF) [114], a Network Slice
Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a
Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122],
a Unified Data Management (UDM) [124], an application function (AF) [126], a
User Plane Function (UPF) [128], a data network (DN) [130], wherein all the
components are assumed to be connected to each other in a manner as obvious to
the person skilled in the art for implementing features of the present disclosure.
[0046] Radio Access Network (RAN) [104] is the part of a mobile
telecommunications system that connects user equipment (UE) [102] to the core
network (CN) and provides access to different types of networks (e.g., 5G network).
It consists of radio base stations and the radio access technologies that enable
wireless communication.
[0047] Access and Mobility Management Function (AMF) [106] is a 5G core
network function responsible for managing access and mobility aspects, such as UE
registration, connection, and reachability. It also handles mobility management
procedures like handovers and paging.
[0048] Session Management Function (SMF) [108] is a 5G core network function
responsible for managing session-related aspects, such as establishing, modifying,
and releasing sessions. It coordinates with the User Plane Function (UPF) for data
forwarding and handles IP address allocation and QoS enforcement.
[0049] Service Communication Proxy (SCP) [110] is a network function in the 5G
core network that facilitates communication between other network functions by
providing a secure and efficient messaging service. It acts as a mediator for
service-based interfaces.
[0050] Authentication Server Function (AUSF) [112] is a network function in the
5G core responsible for authenticating UEs during registration and providing
security services. It generates and verifies authentication vectors and tokens.
[0051] Network Slice Specific Authentication and Authorization Function
(NSSAAF) [114] is a network function that provides authentication and
authorization services specific to network slices. It ensures that UEs can access only
the slices for which they are authorized.
[0052] Network Slice Selection Function (NSSF) [116] is a network function
responsible for selecting the appropriate network slice for a UE based on factors
such as subscription, requested services, and network policies.
[0053] Network Exposure Function (NEF) [118] is a network function that exposes
capabilities and services of the 5G network to external applications, enabling
integration with third-party services and applications.
[0054] Network Repository Function (NRF) [120] is a network function that acts
as a central repository for information about available network functions and
services. It facilitates the discovery and dynamic registration of network functions.
[0055] Policy Control Function (PCF) [122] is a network function responsible for
policy control decisions, such as QoS, charging, and access control, based on
subscriber information and network policies.
[0056] Unified Data Management (UDM) [124] is a network function that
centralizes the management of subscriber data, including authentication,
authorization, and subscription information.
[0057] Application Function (AF) [126] is a network function that represents
external applications interfacing with the 5G core network to access network
capabilities and services.
[0058] User Plane Function (UPF) [128] is a network function responsible for
handling user data traffic, including packet routing, forwarding, and QoS
enforcement.
[0059] Data Network (DN) [130] refers to a network that provides data services to
user equipment (UE) in a telecommunications system. The data services may
include but are not limited to Internet services and private data network related services.
[0060] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
service continuity of the network node utilising the system [300]. In another
implementation, the computing device [200] itself implements the method for
service continuity of the network node using one or more units configured within
the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0061] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a processor [204]
coupled with bus [202] for processing information. The processor [204] may be, for
example, a general-purpose microprocessor. The computing device [200] may also
include a main memory [206], such as a random-access memory (RAM), or other
dynamic storage device, coupled to the bus [202] for storing information and
instructions to be executed by the processor [204]. The main memory [206] also
may be used for storing temporary variables or other intermediate information
during execution of the instructions to be executed by the processor [204]. Such
instructions, when stored in non-transitory storage media accessible to the processor
[204], render the computing device [200] into a special-purpose machine that is
customized to perform the operations specified in the instructions. The computing
device [200] further includes a read only memory (ROM) [208] or other static
storage device coupled to the bus [202] for storing static information and
instructions for the processor [204].
[0062] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0063] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0064] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a
two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0065] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0066] Referring to FIG. 3, an exemplary block diagram of a system [300] for
service continuity of a network node, is shown, in accordance with the exemplary
implementations of the present disclosure. The system [300] may comprise at least
one monitoring unit [302], at least one transceiver unit [304], at least one high
availability module [306], at least one restarting unit [308], at least one
configuration module [310], and/or at least one storage unit [312]. Also, all of the
components/ units of the system [300] are assumed to be connected to each other
unless otherwise indicated below. As shown in the figures all units shown within
the system [300] should also be assumed to be connected to each other. Also, in
FIG. 3 only a few units are shown, however, the system [300] may comprise
multiple such units or the system [300] may comprise any such numbers of said
units, as required to implement the features of the present disclosure. Further, in an
implementation, the system [300] may be present in a user device/ user equipment
[102] to implement the features of the present disclosure. The system [300] may be
a part of the user device [102] or may be independent of but in communication
with the user device [102] (may also be referred to herein as a UE). In another
implementation, the system [300] may reside in a server or a network entity. In yet
another implementation, the system [300] may reside partly in the server/ network
entity and partly in the user device.
[0067] Referring to FIG. 4, an exemplary block diagram of a system architecture
[400] used for service continuity of the network node, is shown, in accordance with
the exemplary implementations of the present disclosure. The system architecture
[400] may comprise at least one container [402], one or more entry points [404],
and/or the at least one high availability module [306]. Further, the container [402]
using the one or more entry points [404] and the high availability module [306] may
be used for one or more supporting services [406A] [406B] (collectively referred to
as one or more supporting services [406] herein), and one or more processes [408A]
[408B] (collectively referred to as one or more processes [408]). Also, all of the
components/ units of the system architecture [400] are assumed to be connected to
each other unless otherwise indicated below. As shown in the figures all units shown
within the system architecture [400] may also be assumed to be connected to each
other. Also, in FIG. 4 only a few units are shown, however, the system architecture
[400] may comprise multiple such units or the system architecture [400] may
comprise any such numbers of said units, as required to implement the features of
the present disclosure. Further, in an implementation, the system architecture [400]
may be present in a user device/ user equipment [102] to implement the features of
the present disclosure. The system architecture [400] may be a part of the user
device [102] or may be independent of but in communication with the user device
[102] (may also be referred to herein as a UE). In another implementation, the system
architecture [400] may reside in a server or a network entity. In yet another
implementation, the system architecture [400] may reside partly in the server/
network entity and partly in the user device.
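By way of a non-limiting illustration only, the layout of FIG. 4 may be modelled as in the following Python sketch; the class, its fields and the example values (image name, entry point script name, process and service names) are assumptions made purely for illustration and do not form part of the claimed subject matter.

```python
# Illustrative sketch only: a hypothetical Python model of the layout shown in FIG. 4.
from dataclasses import dataclass
from typing import List

@dataclass
class ContainerLayout:
    image: str                      # image used to spawn the container [402]
    entry_points: List[str]         # entry point scripts [404]
    processes: List[str]            # one or more processes [408A], [408B]
    supporting_services: List[str]  # one or more supporting services [406A], [406B]

container_402 = ContainerLayout(
    image="network-node:latest",                        # hypothetical image name
    entry_points=["entrypoint.sh"],                     # hypothetical entry point script
    processes=["nf_registration", "nf_mobility"],       # example processes of an NF
    supporting_services=["cron", "sshd", "ha_service"], # example supporting services
)
```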
[0068] The system [300] is configured for service continuity of the network node,
with the help of the interconnection between the components/units of the system
[300]. Further, the system [300] is configured for service continuity of the network
node, with the help of the interconnection between the system architecture [400]
and its components/units. Accordingly, FIG. 3 and FIG. 4 are taken together for
explanation/description of the present disclosure in the following description.
[0069] As would be understood, the network node may refer to redistribution points
or communication endpoints attached to a network that are capable of creating,
receiving, or transmitting information. For example, the network nodes may be one
or more network functions within the telecommunication network, and the network
nodes may, for example, be responsible for providing one or more services to a
consumer. The consumer in such cases may be different UEs or network functions
(NFs).
[0070] In an example, the network nodes may be running as a microservice in a
container, where the container can have multiple services/processes running
associated with different functionalities of the network functions. In such examples,
the network node may be the AMF [106] or the SMF [108], or the components
performing the functions of the AMF [106] or the SMF [108]. In various
implementations of the present disclosure, the network node may be implemented
as a microservice. It may be noted that the container as shown in FIG. 4, only
illustrates a container with entry point and the layout of the one or more processes
running in the container [402] for illustration purposes, and may also contain other
components/units as may be obvious to a person skilled in the art.
[0071] The entry points may refer to a set of executables for the container such as
entry point scripts that may be used during the execution of the one or more
processes and the one or more supporting services.
[0072] Further, service continuity may refer to a continuation of services provided
by the network nodes without any interruption or issues. Because various processes
for the functionalities of the network nodes are performed within the network
node, different functionalities may be running within a microservices container.
During the performance of a functionality of the NF, there may be various reasons
which may cause interruption of services, such as failure/crash of the microservice.
Due to such failure/crash, the container restart time may take several seconds for
images with heavy size, which may cause significant degradation in the quality of
service. Hence, the present solution ensures continuity of the services when it is
implemented, the procedure for which is described further below.
[0073] For ensuring service continuity of the network node, the monitoring unit
[302] monitors the health status of one or more processes [408] running inside a
container [402]. The container [402] may be spawned using an image of the
container [402]. The health status may refer to a status or report indicating the
presence or absence of any issues with the hardware or software components of the
container or the one or more processes that are running within the container. The
one or more processes may refer to the processes for performing the functions of the
network node. In an example, in case of the NF being the AMF [106], the one or
more processes may include registration, mobility, authentication, accounting, etc.
Further, as would be understood, in containerized network architecture (i.e., the
CNFs), the network functions are implemented to run within containers, which may
refer to packaged software and hardware that are necessary to run the network
function, for example packaged applications, functions, microservices, etc. Further,
the image may refer to an unchangeable, static file that includes executable code so
it can run an isolated process on physical or virtual infrastructure. In another
implementation of the present disclosure, the transceiver unit [304] may also
receive the image of the container [402] from a storage unit [312].
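As a non-limiting illustration of the monitoring described above, the following Python sketch assumes that a process is treated as healthy while its PID exists; the function names, the probe mechanism and the polling interval are assumptions for illustration only and do not limit the disclosure.

```python
# Illustrative sketch of a monitoring unit: report a health status per process.
import os
import time

def is_alive(pid: int) -> bool:
    """Return True if a process with the given PID appears to exist (POSIX)."""
    try:
        os.kill(pid, 0)             # signal 0 performs error checking only
    except ProcessLookupError:      # no such process
        return False
    except PermissionError:         # process exists but belongs to another user
        return True
    return True

def monitor(pids: dict, interval: float = 1.0):
    """Yield a health status dictionary for the monitored processes periodically."""
    while True:
        yield {name: ("healthy" if is_alive(pid) else "unhealthy")
               for name, pid in pids.items()}
        time.sleep(interval)
```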
[0074] After the health status is monitored, the transceiver unit [304] receives a
health indication regarding the one or more processes [408] running inside the
container [402]. The health indication may refer to an indication regarding the status
such as a healthy status or an unhealthy status of the one or more processes [408]
that may be running within the container.
[0075] The health indication and the health status may be used for determining
whether the one or more processes running inside the container are running healthily
or not. Further, this monitoring of the health status and receiving of the health
indication may enable taking a decision whether there is a need for a restart of the
one or more processes, or whether the one or more processes are required to be
started as a new process at a separate container, for example. This helps in
identifying whether there is an issue with the one or more processes or the one or
more containers running the process. Also, in an example, the health status and the
health indication may also be used for identification of the requirement of restarting
of the one or more supporting services.
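Purely by way of illustration, the decision described above may be sketched as follows in Python, assuming a simple policy in which an unhealthy process is first restarted in place and, after a hypothetical failure threshold, is started as a new process; the helper name and the threshold are assumptions.

```python
# Illustrative decision sketch consuming the health indication.
def decide_action(health: str, failure_count: int, max_in_place_restarts: int = 3) -> str:
    """Map a health indication to an action: none, restart in place, or spawn anew."""
    if health == "healthy":
        return "none"
    if failure_count < max_in_place_restarts:
        return "restart_process"     # restart the same process inside the container
    return "spawn_new_process"       # start it as a new process, e.g. elsewhere
```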
[0076] Continuing further, the high availability module [306] spawns a new process
or restarts a process based on the restart policy stored within a configuration
module [310]. The configuration module [310] stores container data, configuration
related to processes and state data of the one or more processes [408] running inside
the container [402]. The new process may refer to a separate process which may be
initiated for performing the operations/functions of the network nodes. As would be
understood, the restarting of the process based on the restart policy may refer to
switching OFF and then switching ON of the process by ending the process and
then running the same process again. The spawning of the new process or restarting
of the process may lead to solving the issues caused due to a crash or failure. When
the new process is spawned or the process is restarted, the configuration module
[310] stores the container data, the configuration related to the process, and the state
data of the one or more processes [408] that are running inside the container. In an
example, the container data may refer to information associated with the container
which is running the one or more processes [408]. The configuration related to the
process may refer to the one or more settings related to the performance of the one
or more processes [408]. The state data may refer to the functioning of the one or
more processes [408], such as live or dormant.
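As a non-limiting illustration, the kind of data held by the configuration module [310] may be sketched as below; the keys and values are assumptions chosen only to mirror the restart policy, container data, process configuration and state data mentioned above.

```python
# Illustrative sketch of data that the configuration module [310] may hold.
configuration_module_310 = {
    "restart_policy": {
        "mode": "spawn_new_process",   # or "restart_process"
        "max_restarts": 3,
        "backoff_seconds": 2,
    },
    "container_data": {"container_id": "c-402", "image": "network-node:latest"},
    "process_config": {"nf_registration": {"threads": 4}},
    "process_state": {"nf_registration": "live", "nf_mobility": "dormant"},
}
```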
[0077] In an implementation of the present disclosure, the container [402] may be
spawned using the image of the container [402]. In such implementations, the high
availability module [306] may spawn the new process in a new container or the old
container which may be determined based on the image of the container.
[0078] In an exemplary implementation of the present disclosure, the restart policy
may be controlled by a configuration module [310]. The configuration module
[310] may determine the restart policy and the form of execution of the restart
policy. The restart policy may refer to a set of rules or guidelines for performing the
restart of the containers [402] or the one or more processes within the container
[402]. In an example, the restart policy may also be provided for the one or more
supporting services [406].
[0079] After spawning of the new process and/or restarting the process, the
restarting unit [308] restarts one or more supporting services [406] in the image of
the container [402]. The one or more supporting services [406] may be restarted in
order to cooperatively function with the one or more processes; the restart helps in
coordination of the one or more supporting services [406] with the one or more
processes.
[0080] In an exemplary implementation of the present disclosure, the high
availability module [306] and the restarting unit [308] may simultaneously spawn
the new process and restart the one or more supporting services. This helps in saving
a lot of time which may otherwise be unnecessarily wasted in queueing the steps.
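By way of illustration only, the simultaneous spawning and restarting noted above may be sketched with Python threads as follows; the callables are assumed placeholders for the actions of the high availability module [306] and the restarting unit [308].

```python
# Illustrative sketch: perform the two recovery actions concurrently, not queued.
from threading import Thread

def recover(spawn_new_process, restart_supporting_services):
    t1 = Thread(target=spawn_new_process)            # action of the high availability module [306]
    t2 = Thread(target=restart_supporting_services)  # action of the restarting unit [308]
    t1.start(); t2.start()
    t1.join(); t2.join()
```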
[0081] In one of the implementations of the present disclosure, the one or more
supporting services [406] may be associated with the one or more processes [408]
running inside the container [402]. In an example, the one or more supporting
services [406] may refer to the one or more services that may be used along with
the one or more processes such as for substituting the one or more processes or
preferably for providing additional services along with the one or more processes.
For example, the one or more supporting services [406] may be a cron service for
scheduling jobs, and other supporting services such as an SSH service and a high
availability service.
[0082] In an example, for the restarting of a process or the spawning of the new
process, the high availability module [306] or the high availability service may
utilize the health information for determining the issues, such as determining that an
instance of a process running within the container has failed; the high availability
service or the high availability module [306] may then restart the same process in
another container in the cluster which may be having the same process in a standby
status. In another example, a new process may be directly initiated based on a
failure of the container, and all of the services/processes of the container have to be
removed and then rejuvenated at another container. In such scenarios, the restarting
unit [308] may terminate the existing one or more supporting services and may
initiate the same in the image of another container.
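A non-limiting Python sketch of the failover to a standby container described above is given below; the cluster registry structure and its field names are assumptions for illustration.

```python
# Illustrative failover sketch: restart a failed process where its standby instance lives.
def failover(process_name: str, cluster: dict) -> str:
    """Return the container id that takes over the failed process."""
    for container_id, info in cluster.items():
        if info.get("standby") == process_name:
            info.setdefault("active", []).append(process_name)  # promote the standby
            return container_id
    raise RuntimeError(f"no standby container found for {process_name}")
```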
[0083] In further implementations of the present disclosure, the restarting unit [308]
is configured to restart the one or more supporting services [406] based on the entry
point [404]. In certain exemplary implementations of the present disclosure, the
restarting unit [308] may restart the one or more supporting services [406] based on
the entry point [404] running inside the container [402] in an infinite loop.
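Purely as an illustrative sketch of an entry point [404] supervising the supporting services in an infinite loop, the following Python may be considered; the service commands listed are hypothetical and the polling interval is an assumption.

```python
# Illustrative entry point sketch: supervise supporting services in an infinite loop.
import subprocess
import time

SUPPORTING_SERVICES = {"cron": ["crond", "-n"], "ssh": ["sshd", "-D"]}  # hypothetical commands

def entrypoint_loop(poll_seconds: float = 5.0):
    procs = {}
    while True:                                          # infinite loop inside the container
        for name, cmd in SUPPORTING_SERVICES.items():
            proc = procs.get(name)
            if proc is None or proc.poll() is not None:  # not yet started, or it exited
                procs[name] = subprocess.Popen(cmd)      # (re)start the supporting service
        time.sleep(poll_seconds)
```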
[0084] In another exemplary implementation, the restarting unit [308] may provide
the service continuity in a containerized Network Function (CNF) environment. As
would be understood, the CNF environment may refer to an environment where the
multiple NFs are performed based on the CNFs. In such cases, the restarting unit
[308] using the other components of the system [300] and the system architecture
[400] ensures the service continuity in the CNF environment.
[0085] In an implementation of the present disclosure, in case the restarting of
the one or more supporting services is required, the one or more supporting
services may be terminated. After the termination of the processes for the one or
more supporting services, the one or more supporting services may be restarted with
incorporation of additional desired changes. Such changes may be performed by a
sysctl script or a mysysctl script that takes care to bring the one or more supporting
services up with new desired changes. Further, in the present disclosure, the “sysctl
script” or the “mysysctl script” may refer to the commands used in operating
systems that may enable read operation and modify operation on the attributes of
the system kernel such as its version number, maximum limits, and security settings
and may act as a tool for examining and changing kernel parameters at runtime.
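By way of a non-limiting illustration of terminating a supporting service and bringing it up again with the new desired changes, the following Python sketch applies hypothetical kernel parameters through the sysctl command before restarting the service; the parameter names and the process-handle interface are assumptions.

```python
# Illustrative sketch: stop a supporting service, apply new desired changes, start it again.
import subprocess

def restart_with_changes(service_cmd, old_proc, kernel_params: dict):
    if old_proc is not None:
        old_proc.terminate()                    # terminate the existing supporting service
        old_proc.wait()
    for key, value in kernel_params.items():    # e.g. {"net.core.somaxconn": "1024"}
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=False)
    return subprocess.Popen(service_cmd)        # bring the service up with the new changes
```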
[0086] Referring to FIG. 5, an exemplary method [500] flow diagram for service
continuity of the network node, in accordance with exemplary implementations of
the present disclosure is shown. In an implementation, the method [500] is
performed by the system [300]. Further, in an implementation, the system [300]
may be present in a server device to implement the features of the present
disclosure. Also, as shown in FIG. 5, the method [500] starts at step [502].
[0087] As would be understood, the network node may refer to redistribution points
or communication endpoints attached to a network that are capable of creating,
receiving, or transmitting information. For example, the network nodes may be one
or more network functions within the telecommunication network, and the network
nodes may, for example, be responsible for providing one or more services to a
consumer. The consumer in such cases may be different UEs or network functions
(NFs).
[0088] In an example, the network nodes may be running as a microservice in a
container, where the container can have multiple services/processes running
associated with different functionalities of the network functions. In such examples,
the network node may be the AMF [106] or the SMF [108], or the components
performing the functions of the AMF [106] or the SMF [108]. In various
implementations of the present disclosure, the network node may be implemented
as a microservice.
[0089] The entry points may refer to a set of executables for the container such as
entry point scripts that may be used during the execution of the one or more
processes and the one or more supporting services.
[0090] Further, service continuity may refer to a continuation of services provided
by the network nodes without any interruption or issues. Because various processes
for the functionalities of the network nodes are performed within the network
node, different functionalities may be running within a microservices container.
During the performance of a functionality of the NF, there may be various reasons
which may cause interruption of services, such as failure/crash of the microservice.
Due to such failure/crash, the container restart time may take several seconds for
images with heavy size, which may cause significant degradation in the quality of
service. Hence, the present solution ensures continuity of the services when it is
implemented, the procedure for which is described further below.
[0091] At step [504], the method [500] involves monitoring, by a monitoring unit
[302], the health status of one or more processes [408] running inside a container
[402]. The container [402] is spawned using an image of the container [402].
The health status may refer to a status or report indicating the presence or absence
of any issues with the hardware or software components of the container or the one
or more processes that are running within the container. The one or more processes
may refer to the process for performing the functions of the network node. In an
example, the one or more processes in case of NF being the AMF [106] may be
10 such as a registration, mobility, authentication, accounting , etc. Further, as would
be understood, in containerized network architecture (i.e., the CNFs), the network
functions are implemented to run within the containers, that may refer to a packaged
software and hardware that are necessary to run the network function for example
packaged applications, functions, microservices, etc. Further, the image may refer
to an unchangeable, static file that includes executable code so it can run an isolated
process on physical or virtual infrastructure.
[0092] In another implementation of the present disclosure, the method [500] also
involves receiving, by the transceiver unit [304], the image of the container [402]
from a storage unit [312].
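As a short, non-limiting sketch of this optional step, the image may be fetched from the storage unit [312] as below; the storage-unit read interface shown is an assumption for illustration only.

```python
# Illustrative sketch: receive the container image from a storage unit.
def receive_image(storage_unit, image_name: str) -> bytes:
    """Read the unchangeable, static image file from the storage unit [312]."""
    return storage_unit.read(image_name)   # hypothetical storage-unit interface
```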
[0093] Then at step [506], the method [500] involves receiving, by a transceiver
unit [304], a health indication regarding the one or more processes [408] running
inside the container [402]. The health indication may refer to an indication
regarding the status such as a healthy status or an unhealthy status of the one or
more processes [408] that may be running within the container.
[0094] The health indication and the health status may be used for determining
whether the one or more processes running inside the container are running healthily
or not. Further, this monitoring of the health status and receiving of the health
indication may enable taking a decision whether there is need for a restart of the
one or more processes, or whether the one or more processes are required to be started
as a new process at a separate container, for example. This helps in identifying whether there
is an issue with the one or more processes or the one or more containers running
the process. Also, in an example, the health status and the health indication may
also be used for identification of the requirement of restarting of the one or more
supporting services.
[0095] Then at step [508], the method [500] involves spawning, by a high
availability module [306], a new process or restarting a process based on a restart
10 policy stored within a configuration module [310]. The configuration module [310]
stores container data, configuration related to processes and state data of the one or
more processes [408] running inside the container [402]. The new process may refer
to a separate process which may be initiated for performing the operations/functions
of the network nodes. As would be understood, the restarting of the process based
on the restart policy may refer to switching OFF and then switching ON of the
process by ending the process and then running the same process again. The
spawning of the new process or restarting of the process may lead to solving the
issues caused due to crash or failure. When the new process is spawned and the
process is restarted, the configuration module [310] stores the container data, the
configuration related to the process, and the state data of the one or more processes
[408] that are running inside the container. In an example, the container data may
refer to information associated with the container which is running the one or more
processes [408]. The configuration related to the process may refer to the one or
more settings related to the performance of the one or more processes [408]. The
state data may refer to the functioning of the one or more processes [408], such as
live, or dormant.
[0096] In an implementation of the present disclosure, the container [402] may be
spawned using the image of the container [402]. In such implementations, the high
availability module [306] may spawn the new process in a new container or the old
container which may be determined based on the image of the container.
[0097] In an exemplary implementation of the present disclosure, the restart policy
may be controlled by a configuration module [310]. The configuration module
[310] may determine the restart policy and the form of execution of the restart
policy. The restart policy may refer to a set of rules or guidelines for performing the
restart of the containers [402] or the one or more processes within the container
[402]. In an example, the restart policy may also be provided for the one or more
supporting services [406].
[0098] Then at step [510], the method [500] involves restarting, by a restarting unit
[308], one or more supporting services [406] in the image of the container [402].
The one or more supporting services [406] may be restarted in order to
cooperatively function with the one or more processes; the restart helps in
coordination of the one or more supporting services [406] with the one or more
processes.
[0099] In an exemplary implementation of the present disclosure, the high
availability module [306] and the restarting unit [308] may simultaneously spawn
the new process and restart the one or more supporting services. This helps in saving
a lot of time which may otherwise be unnecessarily wasted in queueing the steps.
[0100] In one of the implementations of the present disclosure, the one or more
supporting services [406] may be associated with the one or more processes [408]
running inside the container [402]. In an example, the one or more supporting
services [406] may refer to the one or more services that may be used along with
the one or more processes such as for substituting the one or more processes or
preferably for providing additional services along with the one or more processes.
For example, the one or more supporting services may be a cron service for
scheduling jobs, and other supporting services such as an SSH service and a high
availability service.
[0101] In an example, for the restarting of a process or the spawning of the new
process, the high availability module [306] or the high availability service may
utilize the health information for determining the issues, such as determining that an
instance of a process running within the container has failed; the high availability
service or the high availability module [306] may then restart the same process in
another container in the cluster which may be having the same process in a standby
status. In another example, a new process may be directly initiated based on a
failure of the container, and all of the services/processes of the container have to be
removed and then rejuvenated at another container. In such scenarios, the restarting
unit [308] may terminate the existing one or more supporting services and may
initiate the same in the image of another container.
[0102] In an exemplary implementation of the present disclosure, in the method
[500] the step of restarting, by the restarting unit [308], the one or more supporting
services [406] may be based on an entry point [404]. In another exemplary
implementation of the present disclosure, in the method [500] the step of restarting,
by the restarting unit [308], the one or more supporting services [406] may be based
on an entry point [404] running inside the container [402] in an infinite loop.
[0103] In another exemplary implementation, the method [500] also involves
providing the service continuity in a containerized Network Function (CNF)
environment. As would be understood, the CNF environment may refer to an
environment where the multiple NFs are performed based on the CNFs. In such
cases, the restarting unit [308] using the other components of the system [300] and
the system architecture [400] ensures the service continuity in the CNF
environment.
[0104] In an implementation of the present disclosure, in case the restarting of
the one or more supporting services is required, the one or more supporting
services may be terminated. After the termination of the processes for the one or
more supporting services, the one or more supporting services may be restarted with
incorporation of additional desired changes. Such changes may be performed by a
sysctl script or a mysysctl script that takes care to bring the one or more supporting
services up with new desired changes. Further, in the present disclosure, the “sysctl
script” or the “mysysctl script” may refer to the commands used in operating
systems that may enable read operation and modify operation on the attributes of
the system kernel such as its version number, maximum limits, and security settings
and may act as a tool for examining and changing kernel parameters at runtime.
[0105] Thereafter, at step [512], the method [500] is terminated.
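By way of a non-limiting illustration, the overall flow of steps [504] to [510] may be sketched in Python as follows; the unit objects and their method names are assumptions used only to mirror the described sequence and do not limit the disclosure.

```python
# Illustrative end-to-end sketch of steps [504]-[510] of the method [500].
def method_500(monitoring_unit, transceiver_unit, ha_module, restarting_unit,
               configuration_module, container):
    # Step [504]: monitor the health status of the processes inside the container.
    monitoring_unit.monitor(container.processes)
    # Step [506]: receive a health indication regarding those processes.
    indication = transceiver_unit.receive_health_indication(container)
    # Step [508]: spawn a new process or restart a process per the restart policy.
    policy = configuration_module["restart_policy"]
    if indication == "unhealthy":
        ha_module.spawn_or_restart(container, policy)
    # Step [510]: restart the supporting services in the image of the container.
    restarting_unit.restart_supporting_services(container.image)
```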
[0106] The present disclosure further discloses a non-transitory computer readable storage medium storing one or more instructions for service continuity of the network node, the one or more instructions including executable code which, when executed by one or more units of a system [300], causes the one or more units to perform certain functions. The one or more instructions, when executed, cause a monitoring unit [302] to monitor the health status of one or more processes [408] running inside a container [402], the container [402] being spawned using an image of the container [402]. The one or more instructions, when executed, further cause a transceiver unit [304] to receive a health indication regarding the one or more processes [408] running inside the container [402]. The one or more instructions, when executed, further cause a high availability module [306] to spawn a new process or restart a process based on the restart policy stored within a configuration module [310], wherein the configuration module [310] stores container data, configuration related to processes, and state data of the one or more processes [408] running inside the container [402]. The one or more instructions, when executed, further cause a restarting unit [308] to restart one or more supporting services [406] in the image of the container [402].
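By way of a non-limiting illustration, the sketch below shows one hypothetical way the stored instructions could drive the monitoring unit [302], the transceiver unit [304], and the high availability module [306]; all class names, container/process objects, and call signatures are assumptions made only for this example.

```python
# Illustrative sketch of how the stored instructions could drive the units of
# the system [300]. The class names, the container/process objects and their
# methods are assumptions made for this example only.

class MonitoringUnit:
    """Hypothetical stand-in for the monitoring unit [302]."""

    def __init__(self, container, transceiver):
        self.container = container      # the container [402] under observation
        self.transceiver = transceiver  # stand-in for the transceiver unit [304]

    def check_once(self):
        # Monitor the health status of the one or more processes [408].
        for proc in self.container.processes():
            status = proc.health_status()            # e.g. "healthy" / "failed"
            if status != "healthy":
                self.transceiver.send_health_indication(proc.name, status)


class TransceiverUnit:
    """Hypothetical stand-in for the transceiver unit [304]."""

    def __init__(self, ha_module):
        self.ha_module = ha_module  # stand-in for the high availability module [306]

    def send_health_indication(self, process_name, status):
        # The high availability module [306] then consults the restart policy in
        # the configuration module [310] and spawns or restarts accordingly.
        self.ha_module.handle(process_name, status)
```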
[0107] As is evident from the above, the present disclosure provides a technically advanced solution for service continuity of a network node. The present solution avoids container restart / new container spawning in the event of a microservice malfunction. Further, the solution avoids the additional burden of managing supporting services inside the container. Further, the present solution does not require any support from containers for providing service continuity.
[0108] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
[0109] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We Claim:
1. A method for service continuity of a network node, the method comprising:
- monitoring, by a monitoring unit [302], a health status of one or more
processes [408] running inside a container [402], wherein the container [402] is spawned using an image of the container [402];
- receiving, by a transceiver unit [304], a health indication regarding the
one or more processes [408] running inside the container [402];
- spawning, by a high availability module [306], a new process or
restarting a process based on a restart policy stored within a
configuration module [310], wherein the configuration module [310]
stores container data, configuration related to processes and state data
of the one or more processes [408] running inside the container [402];
and
- restarting, by a restarting unit [308], one or more supporting services
[406] in the image of the container [402].
2. The method as claimed in claim 1, wherein the one or more supporting
services [406] are associated with the one or more processes [408] running
inside the container [402].
3. The method as claimed in claim 1, wherein the container is spawned using
the image of the container [402].
4. The method as claimed in claim 1, wherein the receiving, by the transceiver unit [304], of the image of the container [402] comprises receiving the image
of the container [402] from a storage unit [312].
5. The method as claimed in claim 1, wherein the restarting, by the restarting
unit [308], the one or more supporting services [406] is based on an entry
point [404].
6. The method as claimed in claim 3, wherein the restart policy is controlled
by a configuration module [310].
7. The method as claimed in claim 1, wherein the restarting, by the restarting unit [308], the one or more supporting services [406] is based on an entry point [404] running inside the container [402] in an infinite loop.
8. The method as claimed in claim 1, wherein the method further comprises providing the service continuity in a containerized network function (CNF) environment.
9. The method as claimed in claim 1, wherein the network node is
implemented as a microservice.
10. A system [300] for service continuity of a network node, the system [300]
comprising:
- a monitoring unit [302] configured to monitor a health status of one or
more processes [408] running inside a container [402], wherein the container [402] is spawned using an image of the container [402];
- a transceiver unit [304] connected at least to the monitoring unit [302],
the transceiver unit [304] configured to receive a health indication
regarding the one or more processes [408] running inside the container
[402];
- a high availability module [306] connected at least to the transceiver unit [304], the high availability module [306] configured to spawn a new process or restart a process based on a restart policy stored within a
configuration module [310], wherein the configuration module [310]
stores container data, configuration related to processes and state data
of the one or more processes [408] running inside the container [402];
and
- a restarting unit [308] connected at least to the high availability module
[306], the restarting unit [308] configured to restart one or more
supporting services [406] in the image of the container [402].
11. The system [300] as claimed in claim 10, wherein the one or more
supporting services [406] are associated with the one or more processes
[408] running inside the container [402].
12. The system [300] as claimed in claim 10, wherein the container [402] is
spawned using the image of the container [402].
13. The system [300] as claimed in claim 10, wherein the transceiver unit [304]
is configured to receive the image of the container [402] from a storage unit
[312].
14. The system [300] as claimed in claim 10, wherein the restarting unit [308]
is configured to restart the one or more supporting services [406] based
on an entry point [404].
15. The system [300] as claimed in claim 13, wherein the restart policy is
controlled by a configuration module [310].
16. The system [300] as claimed in claim 10, wherein the restarting unit [308]
is configured to restart the one or more supporting services [406] based on an entry point [404] running inside the container [402] in an infinite loop.
17. The system [300] as claimed in claim 10, wherein the restarting unit [308]
is configured to provide the service continuity in a containerized Network
Function (CNF) environment.
18. The system [300] as claimed in claim 10, wherein the network node is
implemented as a microservice.