Abstract: The present disclosure relates to a method and a system for updating parameters for one or more network nodes. The method includes receiving, by a transceiver unit [302] at an NMS [320], a set of requests comprising one or more update parameters for the one or more network nodes. The method further includes validating, by a validation unit [304] at the NMS [320], each request from the set of requests. Further, the method includes adding, by the validation unit [304] at the NMS [320], the validated set of requests in a queue maintained in an IO cache [504]. The method further includes running, by a scheduler unit [306], at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR UPDATING PARAMETERS FOR ONE OR MORE NETWORK NODES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR UPDATING PARAMETERS FOR ONE OR
MORE NETWORK NODES
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to a method and a system for updating parameters for one or more network nodes.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] For improving the performance of nodes, various parameters need to be changed for the nodes. In the process, one parameter may have to be changed on one node, while another parameter may have to be changed on some other node. For making the changes, the network management system (NMS) serves as an intermediary: it receives requests through the northbound interface (NBI), sends them to the appropriate nodes inside the network, and sends back the responses received from the nodes. All the changes are made in separate work orders. Also, for making any change on a node, the user might need to access a user interface separately and make changes for the particular node. This process may consume a lot of time and effort for the user. Further, raising separate work orders for each change in the node(s) may also lead to high consumption of network resources.
[0005] Thus, there exists an imperative need in the art to provide a method and a system for updating parameters for nodes that consumes less time and effort of the user, and consumes a smaller amount of network resources, which the present disclosure aims to address.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method for updating parameters for one or more network nodes. The method includes receiving, by a transceiver unit at a network management system (NMS), from an interface, a set of requests comprising one or more update parameters for the one or more network
nodes. The method further includes validating, by a validation unit at the NMS, each request from the set of requests. Furthermore, the method includes adding, by the validation unit at the NMS, the validated set of requests in a queue maintained in an input-output (IO) cache. The method further encompasses running, by a scheduler unit at the NMS, a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters. In an exemplary aspect of the present disclosure, the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified NF instances of the one or more network nodes. The all-update requests are configured to update each NF instance of the one or more network nodes.
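Purely for illustration, the distinction between multi-update requests (which carry an explicit list of NF instances) and all-update requests (which target every NF instance of a node) might be modelled as in the following sketch. All class, field, and function names here are hypothetical assumptions and are not part of the claimed system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UpdateRequest:
    """Hypothetical request model: nf_instances=None signals an
    all-update request; an explicit list signals a multi-update request."""
    work_order_id: str
    node_id: str
    parameters: dict
    nf_instances: Optional[List[str]] = None  # None => all-update request

    @property
    def is_all_update(self) -> bool:
        return self.nf_instances is None

def target_instances(request: UpdateRequest, all_instances: List[str]) -> List[str]:
    """Resolve which NF instances of a node a request applies to."""
    if request.is_all_update:
        return list(all_instances)  # all-update: every NF instance of the node
    # multi-update: only the specified instances that actually exist
    return [i for i in request.nf_instances if i in all_instances]
```

A multi-update request thus resolves to its named subset of instances, while an all-update request resolves to whatever instances the node currently has.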
[0008] In an exemplary aspect of the present disclosure, the set of requests is received from the interface in response to a polling by the transceiver unit at the NMS.
[0009] In an exemplary aspect of the present disclosure, each request from the set of requests is associated with a work order identity.
[0010] In an exemplary aspect of the present disclosure, validating each request from the set of requests comprises validating a schema of configuration data associated with each request from the set of requests.
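The schema validation step could, for example, proceed along the lines sketched below. The required keys and expected types shown are hypothetical placeholders, not the schema actually used by the NMS.

```python
# Illustrative schema for a request's configuration data: each required
# key is mapped to the type its value must have. These keys are assumed
# examples only.
SCHEMA = {
    "workOrderId": str,
    "nodeId": str,
    "parameters": dict,
}

def validate_request(config: dict) -> bool:
    """Return True when every required key is present with the right type."""
    return all(
        key in config and isinstance(config[key], expected_type)
        for key, expected_type in SCHEMA.items()
    )

def filter_valid(requests: list) -> list:
    """Keep only requests whose configuration data passes schema validation;
    these are the requests that would be added to the queue."""
    return [r for r in requests if validate_request(r)]
```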
[0011] In an exemplary aspect of the present disclosure, each request from the validated set of requests added in the queue is grouped based on the work order identity.
[0012] In an exemplary aspect of the present disclosure, updating the one or more network nodes with the one or more update parameters, by the scheduler unit, comprises checking, by the scheduler unit, one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of
requests, in the IO cache. The method further comprises sending, by the scheduler unit, the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. Furthermore, the method comprises sending, by the scheduler unit, a response to the interface.
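One way to picture this scheduler step, run here as a plain function rather than a periodic job, is the minimal sketch below. The dict standing in for the IO cache, the `send_batch` callback, and all names are illustrative assumptions, not the claimed implementation.

```python
def enqueue(io_cache: dict, request: dict) -> None:
    """Group validated requests in the queued IO cache by work order identity."""
    io_cache.setdefault(request["work_order_id"], []).append(request)

def run_scheduler_job(io_cache: dict, send_batch) -> dict:
    """Illustrative scheduler pass: for each work order identity present
    in the IO cache, send its queued requests as one batch, collect the
    response, and remove the work order from the cache afterwards."""
    responses = {}
    for work_order_id in list(io_cache.keys()):  # presence check per work order
        batch = io_cache.pop(work_order_id)      # remove after dispatch
        responses[work_order_id] = send_batch(work_order_id, batch)
    return responses
```

Batching all requests that share a work order identity into a single dispatch is what avoids raising a separate work order per parameter change.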
[0013] In an exemplary aspect of the present disclosure, the method further comprises removing, by an analysis unit, the first work order identity associated with the first subset of requests from the queue maintained in the IO cache after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
[0014] In an exemplary aspect of the present disclosure, the method further comprises sending, by the scheduler unit, an update response for each of the one or more network nodes, to a database, wherein the database stores status associated with each of the one or more network nodes. Furthermore, the method includes updating, by a processing unit, at the NMS, the status associated with each of the one or more network nodes in the database, with the update response for each of the one or more network nodes.
[0015] In an exemplary aspect of the present disclosure, the method further includes receiving, by the transceiver unit, an abort request for a second work order identity associated with a second subset of requests from the set of requests. Furthermore, the method includes checking, by the processing unit, one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache. The method further encompasses removing, by the processing unit, the second work order identity associated with the second subset of requests from the IO cache, in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache. The method further includes sending, by the transceiver unit, an aborted response to the interface.
[0016] In an exemplary aspect of the present disclosure, the method further includes sending, by the transceiver unit, at the NMS, to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache.
[0017] In an exemplary aspect of the present disclosure, the abort request for the second work order identity associated with the second subset of requests, is received in response to a polling, by the transceiver unit, at the NMS.
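The abort handling of paragraphs [0015] and [0016] can be sketched as follows; the dict-based cache and the response strings are assumptions made for illustration only.

```python
def handle_abort(io_cache: dict, work_order_id: str) -> str:
    """Illustrative abort handler: if the work order identity is still
    present in the IO cache, its pending subset of requests is removed
    and an aborted response is returned; otherwise a failure response is
    returned, since the requests have already been dispatched or are
    unknown."""
    if work_order_id in io_cache:    # presence check in the IO cache
        del io_cache[work_order_id]  # remove the pending subset of requests
        return "ABORTED"
    return "FAILURE"
```

Because the scheduler removes a work order identity from the cache once its batch is dispatched, an abort can only succeed while the update is still queued, which is what limits potential system disruptions.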
[0018] Another aspect of the present disclosure may relate to a network management system for updating parameters for one or more network nodes. The network management system includes a transceiver unit configured to receive a set of requests comprising one or more update parameters for the one or more network nodes. The network management system further includes a validation unit connected to at least the transceiver unit. The validation unit is configured to validate each request from the set of requests. The validation unit is further configured to add the validated set of requests in a queue maintained in an input-output (IO) cache. The network management system further includes a scheduler unit connected to at least the analysis unit, wherein the scheduler unit is configured to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for updating parameters for one or more network nodes, the instructions include executable code which, when executed by one or more units of a system, causes: a transceiver unit of the system to receive a set of requests comprising one or more update parameters for the one or more network nodes. The instructions include executable code which, when executed, causes a validation unit of the system to validate each request from the set of requests and the validation unit to add the validated set of requests in a queue maintained in an input-output (IO) cache. The instructions include executable code which, when executed, causes a scheduler unit to run a scheduler job at a configured
interval for updating the one or more network nodes with the one or more update parameters.
OBJECTS OF THE INVENTION
[0020] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0021] It is an object of the present disclosure to provide a system and a method for updating parameters for nodes that consume less time and effort of the user.
[0022] It is another object of the present disclosure to provide a solution for updating parameters for nodes that consumes a smaller amount of network resources.
[0023] It is another object of the present disclosure to provide a solution for updating parameters for nodes that supports an abort request functionality, allowing users to cancel ongoing parameter update requests, which minimizes potential system disruptions.
[0024] It is another object of the invention to address the limitations of the existing NMS workflow by introducing a bidirectional data flow.
[0025] It is another object of the invention to enhance the NBI interface by allowing the NBI interface an opportunity to update node parameters through the NMS.
[0026] It is another object of the invention to allow for quick updates to parameter requests through the NBI interface.
[0027] It is another object of the invention to support an abort request functionality, thereby allowing users to cancel ongoing parameter update requests, which minimizes potential system disruptions.
DESCRIPTION OF THE DRAWINGS
[0028] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0029] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0030] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0031] FIG. 3 illustrates an exemplary block diagram of a system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0032] FIG. 4 illustrates a method flow diagram for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0033] FIG. 5 illustrates an exemplary implementation of the system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0034] FIG. 6 illustrates an exemplary representation of the process for updating parameters for one or more network nodes, in accordance with exemplary embodiments of the present disclosure.
[0035] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0036] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0037] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0038] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0039] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0040] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0041] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0042] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0043] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0044] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0045] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0046] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0047] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for updating parameters for one or more network nodes.
[0048] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture [100], in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0049] The Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0050] The Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0051] The Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0052] The Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0053] The Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0054] The Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0055] The Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0056] The Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0057] The Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0058] The Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0059] The Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0060] The Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0061] The User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0062] The Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include but are not limited to Internet services and private data network related services.
[0063] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for updating parameters for one or more network nodes utilising the system. In another implementation, the computing device [200] itself implements the method for updating parameters for one or more network nodes using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0064] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0065] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0066] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0067] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0068] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the host [224], the local network [222] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0069] The present disclosure is implemented by a system [300] (as shown in FIG. 3). In an implementation, the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4).
[0070] Referring to FIG. 3, an exemplary block diagram of a system [300] for updating parameters for one or more network nodes is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], at least one validation unit [304], at least one scheduler unit [306], at least one analysis unit [308], at least one processing unit [310], and at least one database [312]. Also, all the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device, or may be independent of but in communication with the user device (which may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0071] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed herein should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0072] The system [300] is configured for updating parameters for one or more network nodes, with the help of the interconnection between the components/units of the system [300].
[0073] The system [300] includes a network management system (NMS) [320]. The NMS [320] includes the transceiver unit [302]. The transceiver unit [302] is configured to receive a set of requests comprising one or more update parameters for the one or more network nodes. The one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host, port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like. The one or more update parameters may further include a value associated with the one or more update parameters. The value may be of any type, such as a Boolean type, a string type, an integer type, a float type, and the like. For instance, the set of requests sent to the NMS [320] via the transceiver unit [302] may include a SAP identifier, a node identifier, a parameter name, the value of the one or more update parameters, and the like. The set of requests comprises at least multi-update requests and all-update requests. The multi-update requests may be configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes. The NF instances of the one or more network nodes refer to instances of the one or more nodes. The NF instances are configured to perform a specific operation in the one or more network nodes. The all-update requests may be configured to update each NF instance of the one or more network nodes in a circle. The circle refers to a pre-defined geographical area, a pre-defined location, a tracking area code (TAC), a cell identity, and the like. The set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320]. Each request from the set of requests is associated with a work order identity. The work order identity is a unique identifier which may be allotted to a request for a work order or task in the telecommunication network. The work order identity may help in managing and tracking the request for the work order or task.
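For illustration, the request fields described above (work order identity, SAP identifier, node identifier, parameter name, and typed value) may be sketched as a simple data structure. The field names and types below are assumptions made for the sketch, not the actual schema used by the NMS [320].

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical sketch of one update request as described above; the field
# names (work_order_id, sap_id, node_id, ...) are illustrative assumptions.
@dataclass
class UpdateRequest:
    work_order_id: str                   # unique identity for tracking the work order
    sap_id: str                          # SAP identifier
    node_id: str                         # target network node / NF instance
    parameter: str                       # e.g. "qos", "log_level", "timer"
    value: Union[bool, str, int, float]  # Boolean, string, integer, or float value
    request_type: str = "multi"          # "multi" (listed NF instances) or
                                         # "all" (every NF instance in a circle)

req = UpdateRequest("WO-1001", "SAP-7", "NF-42", "log_level", "DEBUG")
```

A multi-update request would carry `request_type="multi"` together with the list of target NF instances, while an all-update request would carry `request_type="all"` and a circle identifier.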
[0074] In an implementation of the present disclosure, the NMS [320] may support at least two types of requests for updating the at least one or more network nodes: the multi-update request and the all-update request. The multi-update request may update at least one network node parameter from the list of specified network nodes. The all-update request may update every parameter from the list of network nodes. The transceiver unit [302] is further configured to perform the polling at the NMS [320]. The polling refers to a communication where the transceiver unit [302] may repeatedly send requests to the NMS [320] at fixed intervals to check for updates.
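The fixed-interval polling described above may be sketched as follows; `fetch_requests` and `handle` are assumed hooks standing in for the interface and the receiving side, which the disclosure does not specify at code level.

```python
import time

# Minimal polling sketch (illustrative only): repeatedly check for pending
# update requests at a fixed interval and hand any found to a handler.
def poll(fetch_requests, handle, interval_seconds=5, max_cycles=3):
    """Check for new requests every interval_seconds, for max_cycles cycles."""
    for _ in range(max_cycles):
        pending = fetch_requests()
        if pending:                 # only act when something was queued
            handle(pending)
        time.sleep(interval_seconds)
```

In a deployment the loop would run indefinitely; `max_cycles` is included only so the sketch terminates.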
[0075] The NMS [320] further includes the validation unit [304] connected to at least the transceiver unit [302]. The validation unit [304] is configured to validate each request from the set of requests. The validation unit [304] is further configured to add the validated set of requests to a queue maintained in an input-output (IO) cache [504]. The validation unit [304] is further configured to validate a format associated with each request from the set of requests. Each request from the validated set of requests added to the queue is grouped based on the work order identity.
[0076] In an implementation of the present disclosure, the validation unit [304] checks whether the set of requests is valid or not. If the set of requests is not valid, the transceiver unit [302] sends a failure response to the user. If the set of requests is valid, the set of requests is inserted in the IO cache [504]. The set of requests is maintained in a queue in the IO cache [504] with the work order identity. The IO cache [504] refers to a customized cache that stores data temporarily and enhances the performance of the NMS [320]. The IO cache [504] may reduce latency by storing the set of requests temporarily.
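The validate-then-queue behaviour of the validation unit [304] and the IO cache [504] may be sketched as below, assuming each request is a dictionary and treating the format check as a simple required-fields test; both assumptions are illustrative, not the actual validation rules.

```python
# Illustrative required fields; the real format validation is not specified.
REQUIRED_FIELDS = {"work_order_id", "node_id", "parameter", "value"}

def validate(request: dict) -> bool:
    """Validate the format of a single request (required-fields check only)."""
    return REQUIRED_FIELDS.issubset(request)

def enqueue(io_cache: dict, requests: list) -> list:
    """Add valid requests to the IO-cache queue, grouped by work order
    identity. Returns the invalid requests so failure responses can be sent."""
    failed = []
    for req in requests:
        if validate(req):
            io_cache.setdefault(req["work_order_id"], []).append(req)
        else:
            failed.append(req)
    return failed
```

Grouping by work order identity at insertion time is what later allows the scheduler to pick up an entire work order as one batch.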
[0077] The NMS [320] further includes the scheduler unit [306], connected to at least the analysis unit [308]. The scheduler unit [306] is configured to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters. The updating of the one or more network nodes with the one or more update parameters, by the scheduler unit [306], comprises the scheduler unit [306] checking one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504]. The updating further includes the scheduler unit [306] sending the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. The updating further includes the scheduler unit [306] sending a response to the interface.
[0078] In an implementation of the present disclosure, the configured interval may be determined by a user or the NMS [320]. In an embodiment of the present disclosure, the configured interval may be changed in every session. The scheduler unit [306] of the NMS [320] may run the scheduler job at the configured intervals. For instance, the scheduler unit [306] may run the scheduler job after every 5 minutes, as defined by the user. The scheduler unit [306] may check for the first work order identities present in the IO cache [504]. If the scheduler unit [306] does not find any queued work order identity, the scheduler unit [306] may assume that no set of requests was initiated, and the scheduler unit [306] may not initiate any action. If the queued work order identity is found by the scheduler unit [306], the scheduler unit [306] may send the one or more parameter update requests to the node [506] in batches. The batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the NMS [320] can handle sending 100 updates at a time, it may group the updates into batches of 100.
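The scheduler job described above may be sketched as follows; `send_batch` is an assumed hook toward the nodes, and the batch size of 100 mirrors the example in the paragraph. The IO cache is modelled as a dictionary keyed by work order identity, an assumption carried through these sketches.

```python
# Scheduler-job sketch: if no work order identity is queued, do nothing;
# otherwise send each work order's requests in fixed-size batches.
def run_scheduler_job(io_cache: dict, send_batch, batch_size=100):
    if not io_cache:
        return 0  # no queued work order identity: assume no requests initiated
    batches_sent = 0
    for work_order_id in list(io_cache):
        requests = io_cache[work_order_id]
        for i in range(0, len(requests), batch_size):
            send_batch(work_order_id, requests[i:i + batch_size])
            batches_sent += 1
    return batches_sent
```

Note that the sketch does not remove the work order identity from the cache; per the disclosure, that removal is performed by the analysis unit [308] after the requests are sent.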
[0079] The analysis unit [308] is further configured to remove the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
[0080] In an implementation of the present disclosure, the analysis unit [308] may receive an acknowledgment from the one or more network nodes to confirm that the first subset of requests is updated. The analysis unit [308] may access the queue maintained in the IO cache [504] and locate the work order identity that must be removed. Further, the analysis unit [308] may search the queue maintained in the IO cache [504] to find the work order identity. The analysis unit [308] may remove the work order identity from the queue.
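The removal step performed by the analysis unit [308] upon acknowledgment may be sketched as below, again modelling the IO-cache queue as a dictionary keyed by work order identity (an assumption of the sketch).

```python
# On acknowledgment from the nodes, locate the work order identity in the
# IO-cache queue and remove it; return whether anything was removed.
def on_acknowledgment(io_cache: dict, work_order_id: str) -> bool:
    if work_order_id in io_cache:
        del io_cache[work_order_id]
        return True
    return False
```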
[0081] The system [300] further includes the scheduler unit [306] to send an update response for each of the one or more network nodes to the database [312]. The database [312] stores a status associated with each of the one or more network nodes. The system [300] further includes the processing unit [310], at the NMS [320], to update the status associated with each of the one or more network nodes in the database [312] with the update response for each of the one or more network nodes. For instance, suppose there is an update in the QoS parameters of the one or more network nodes. The scheduler unit [306] may send the update to the database [312] to store the updated QoS parameters of the one or more network nodes. The database [312] may update the QoS parameters of the one or more network nodes accordingly.
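The status-update step may be sketched as below, modelling the database [312] as an in-memory mapping from node identity to its last update response; a real deployment would of course use persistent storage.

```python
# Record the update response for each network node as that node's status.
# 'database' stands in for the database [312]; this is an illustrative model.
def update_statuses(database: dict, update_responses: dict) -> None:
    for node_id, response in update_responses.items():
        database[node_id] = response
```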
[0082] The system [300] is configured to receive, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests. The abort request may be sent to the transceiver unit [302] if the set of requests comprising the one or more update parameters may disrupt the services of the one or more network nodes. For instance, the value in the one or more update parameters may be very high, and the one or more network nodes may not be able to handle the value of the update parameter. The system is further configured to check, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. The processing unit [310] is further configured to remove the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]. The transceiver unit [302] is further configured to send an aborted response to the interface. The abort request for the second work order identity associated with the second subset of requests is received in response to a polling, by the transceiver unit [302], at the NMS [320].
[0083] The system [300] further includes the transceiver unit [302], at the NMS
[320], to send to the interface, a failure response, in an event of the absence of the
second work order identity associated with the second subset of requests in the IO
cache [504].
[0084] In an implementation of the present disclosure, the NMS [320] may check
for the second work order identity in the IO cache [504], which may be received in
the abort request. The processing unit [310] may further insert the set of requests in
the IO cache [504]. If the second work order identity is found in the IO Cache [504],
20 the second work order identity are removed from the IO queue and send the ‘aborted
response’ by the transceiver unit [302]. In case the work order identity is missing
in the IO Cache [504], the IO cache [504] assumes that the work order identity has
already been executed and sends a failure response by the transceiver unit [302] to
the interface.
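The abort-handling behaviour described above may be sketched as follows; the returned strings are illustrative stand-ins for the 'aborted response' and the failure response mentioned in the disclosure.

```python
# Abort handling: a still-queued work order identity is removed and aborted;
# an absent one is assumed already executed, so a failure response is returned.
def handle_abort(io_cache: dict, work_order_id: str) -> str:
    if work_order_id in io_cache:
        del io_cache[work_order_id]
        return "aborted"
    return "failure"  # already executed (or never queued)
```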
[0085] Referring to FIG. 4, an exemplary method flow diagram [400] for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0086] At step [404], the method [400] comprises receiving, by a transceiver unit [302] at a network management system (NMS) [320], from an interface, a set of requests comprising one or more update parameters for the one or more network nodes. The one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host, port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like. The one or more update parameters may further include a value associated with the one or more update parameters. The value may be of a Boolean type, a string type, an integer type, a float type, and the like. For instance, the set of requests sent to the NMS [320] via the transceiver unit [302] may include a SAP identifier, a node identifier, a parameter name, and the value of the one or more update parameters. The set of requests comprises at least multi-update requests and all-update requests. The multi-update requests may be configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes. The NF instances of the one or more network nodes refer to instances of the one or more nodes. The NF instances are configured to perform a specific operation in the one or more network nodes. The all-update requests may be configured to update each NF instance of the one or more network nodes in a circle. The circle refers to a pre-defined geographical area, a pre-defined location, a tracking area code (TAC), a cell identity, and the like. The set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320]. Each request from the set of requests is associated with a work order identity. The work order identity is a unique identifier which may be allotted to a request for a work order or task in the telecommunication network. The work order identity may help in managing and tracking the request for the work order or task.
[0087] In an implementation of the present disclosure, the NMS [320] may support at least two types of requests for updating the at least one or more network nodes: the multi-update request and the all-update request. The multi-update request may update at least one network node parameter from the list of specified network nodes. The all-update request may update every parameter from the list of network nodes. The polling refers to a communication where the transceiver unit [302] may repeatedly send requests to the NMS [320] at fixed intervals to check for updates.
[0088] Next, at step [406], the method [400] comprises validating, by a validation unit [304] at the NMS [320], each request from the set of requests. The validation unit [304] is further configured to validate a format associated with each request from the set of requests. Each request from the validated set of requests added to the queue is grouped based on the work order identity.
[0089] Next, at step [408], the method [400] encompasses adding, by the validation
unit [304] at the NMS [320], the validated set of requests in a queue maintained in
an input-output (IO) cache [504].
[0090] In an implementation of the present disclosure, the set of requests is checked by the validation unit [304]. The validation unit [304] checks whether the requests are valid or not. If the set of requests is not valid, a failure response is sent by the transceiver unit [302]. If the set of requests is valid, the set of requests is inserted in the IO cache [504]. The set of requests is maintained in a queue in the IO cache [504] with the work order identity. The IO cache [504] refers to a customized cache that stores data temporarily and enhances the performance of the NMS [320]. The IO cache [504] may reduce latency by storing the set of requests temporarily.
[0091] Next, at step [410], the method [400] encompasses running, by a scheduler unit [306] at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters. The updating of the one or more network nodes with the one or more update parameters, by the scheduler unit [306], includes checking, by the scheduler unit [306], one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504]. The updating further includes sending, by the scheduler unit [306], the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. Furthermore, the updating includes sending, by the scheduler unit [306], a response to the interface. The method further includes removing, by an analysis unit [308], the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
[0092] In an implementation of the present disclosure, the configured interval may be determined by a user or the NMS [320]. In an embodiment of the present disclosure, the configured interval may be changed in every session. The scheduler job may be run at the configured intervals by the scheduler unit [306] of the NMS [320]. For instance, the scheduler unit [306] may run the scheduler job after every 5 minutes, as defined by the user. The presence of the first work order identities in the IO cache [504] may be checked by the scheduler unit [306]. If no queued work order identity is found by the scheduler unit [306], the scheduler unit [306] may assume that no set of requests was initiated, and the scheduler unit [306] may not initiate any action. If the queued work order identity is found by the scheduler unit [306], the scheduler unit [306] may send the one or more parameter update requests to the node [506] in batches. The batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the NMS [320] can handle sending 100 updates at a time, it may group the updates into batches of 100.
[0093] The method [400] further comprises sending, by the scheduler unit [306], an update response for each of the one or more network nodes to a database [312]. The database [312] stores a status associated with each of the one or more network nodes. Furthermore, the method [400] includes updating, by a processing unit [310] at the NMS [320], the status associated with each of the one or more network nodes in the database [312] with the update response for each of the one or more network nodes. For instance, suppose there is an update in the QoS parameters of the one or more network nodes. The scheduler unit [306] may send the update to the database [312] to store the updated QoS parameters of the one or more network nodes. The database [312] may update the QoS parameters of the one or more network nodes accordingly.
[0094] In an implementation of the present disclosure, an acknowledgment from the one or more network nodes, to confirm that the first subset of requests is updated, may be received by the analysis unit [308]. The analysis unit [308] may access the queue maintained in the IO cache [504] and locate the work order identity that must be removed. Further, the analysis unit [308] may search the queue maintained in the IO cache [504] to find the work order identity. The analysis unit [308] may remove the work order identity from the queue.
[0095] The method [400] further comprises receiving, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests. The abort request may be sent to the transceiver unit [302] if the set of requests comprising the one or more update parameters may disrupt the services of the one or more network nodes. For instance, the value in the one or more update parameters may be very high, and the one or more network nodes may not be able to handle the value of the update parameter. Further, the method [400] encompasses checking, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. Furthermore, the method encompasses removing, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]. The method [400] further includes sending, by the transceiver unit [302], the aborted response to the interface. The abort request for the second work order identity associated with the second subset of requests is received in response to a polling, by the transceiver unit [302], at the NMS [320].
[0096] In an implementation of the present disclosure, the abort request may be received for a second work order identity associated with a running request. The NMS [320] may check for the second work order identity in the IO cache [504], which may be received in the abort request. The processing unit [310] may further insert the set of requests in the IO cache [504]. If the second work order identity is found in the IO cache [504], the second work order identity is removed from the IO queue and the 'aborted response' is sent by the transceiver unit [302].
[0097] The method [400] further comprises sending, by the transceiver unit [302] at the NMS [320], to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
[0098] In an implementation of the present disclosure, in case the work order identity is missing in the IO cache [504], the IO cache [504] assumes that the work order identity has already been executed, and a failure response is sent by the transceiver unit [302].
[0099] The system [300] and the method [400] will be explained in detail by an exemplary implementation of the system as shown in FIG. 5. Referring to FIG. 5, it illustrates an exemplary implementation of the system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0100] The system [500] comprises at least one Configuration Management System (CMS) [502], at least one IO Cache [504], at least one northbound interface (NBI) [508], at least one Node [506], and the database [312].
[0101] The system [500] is configured for updating parameters for the one or more nodes.
[0102] At step 1, the CMS [502] may send the set of requests for updating parameters for the one or more network nodes at the node [506]. The one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host, port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like.
[0103] At step 2, the node [506] may further send the update response for each of the one or more network nodes to the database [312]. The database [312] may store the status associated with each of the one or more network nodes.
[0104] At step 3, the database [312] may further send the update response to a Northbound Interface (NBI) [508]. The NBI [508] is an output-oriented interface which may be configured to send outputs to the user.
[0105] At step 4, the NBI [508] may poll for the set of requests for updating parameters to the IO Cache [504]. Polling refers to a communication where the CMS [502] may repeatedly send requests to the IO Cache [504]. It is to be noted that the NMS [320] supports two types of requests for updating parameters:
a) Multi type – updates requested parameters on the list of specified network function (NF) instances.
b) All type – updates requested parameters for all the NF instances in the circle. The circle refers to a pre-defined geographical area, a pre-defined location, a pre-defined tracking area code (TAC), a pre-defined cell identity, and the like.
[0106] At step 5, the IO cache [504] may validate each request from the set of requests. Each request from the validated set of requests added to the queue is grouped based on the work order identity.
[0107] At step 6, the CMS [502] may check for the presence or the absence of the first work order identity associated with the first subset of requests from the set of requests. The IO Cache [504] may further insert the validated set of requests in the queue maintained in the IO cache [504]. The updating of parameters in the one or more network nodes further includes the CMS [502] sending the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. The CMS [502] may further send a response to the NBI [508] regarding the updating of the one or more nodes.
[0108] If the CMS [502] does not find any queued work order identity, the CMS [502] may assume that no set of requests was initiated, and the CMS [502] may not initiate any action. If the queued work order identity is found by the CMS [502], the CMS [502] may send the one or more parameter update requests to the node [506] in batches. The batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the node [506] can handle receiving 100 updates at a time, the updates may be grouped into batches of 100.
[0109] At step 7, the NBI [508] may send the abort request for the second work order identity associated with the second subset of requests from the set of requests to the CMS [502]. The abort request may be sent if the set of requests comprising the one or more update parameters may disrupt the services of the one or more network nodes. The CMS [502] may check one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. The CMS [502] may remove the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504].
[0110] At step 8, the CMS [502] may send the aborted response to the NBI [508] in response to the poll for the abort request by the NBI [508].
[0111] The exemplary implementation of the system [500] will be explained in detail by a method flow for the exemplary implementation as shown in FIG. 6. Referring to FIG. 6, it illustrates an exemplary representation of the process of updating parameters for nodes, in accordance with exemplary embodiments of the present disclosure. As shown in FIG. 6, the method begins at step [602].
[0112] At step [604], a set of poll request(s) is received. The set of poll requests comprises parameters for network nodes. Each poll request from the set of poll requests is associated with a work order identity.
[0113] At step [606], the validation unit [304] checks whether the set of poll requests is valid or not.
[0114] If the set of poll requests is not valid, the method proceeds to step [608]. At step [608], the transceiver unit [302] sends a failure response to the NBI [508].
[0115] If the request is a valid request, the method proceeds to step [610]. At step [610], the set of poll requests is inserted in the IO cache [504]. A queue is maintained in the IO Cache [504] with a unique work order identity based on the request received.
[0116] The NMS [320] supports two types of requests for updating configuration parameters:
1. Multi request – updates requested parameters on the list of specified NF instances.
2. All type request – updates requested parameters for all the NF instances.
[0117] At step [612], the scheduler unit [306] of the NMS [320] runs the scheduler job at the configured intervals, where the configured intervals may be determined by the user or the NMS [320].
[0118] At step [614], the scheduler unit [306] checks for queued work order identities present in the IO Cache [504].
[0119] If the scheduler unit [306] does not find any queued work order identity, the method may proceed to step [616]. At step [616], the scheduler unit [306] may assume that no set of poll requests was initiated by the NBI [508], and the scheduler unit [306] may not initiate any action.
[0120] If the queued work order identity is found by the scheduler unit [306], the method may proceed to step [618].
[0121] At step [618], the method starts sending parameter update requests to the node [506] in batches.
[0122] At step [620], once the response is received from the node [506], the scheduler unit [306] sends the response to the NBI [508]. In case the request type is an ALL type, the NMS [320] maintains a count of all NF instances and the number of responses received from the node for that work order identity.
[0123] At step [622], the scheduler unit [306] sends the update response for each of the one or more network nodes to the database [312]. The database [312] stores the status associated with each of the one or more network nodes. The processing unit [310] may update, at the NMS [320], the status associated with each of the one or more network nodes in the database [312], with the update response for each of the one or more network nodes.
[0124] The NMS [320] also supports aborting a currently running task, which starts at step [626]. At step [626], the CMS [502] polls for an abort request for a work order identity from the NBI [508].
[0125] Further, at step [628], the NMS [320] checks for the work order identity in the IO cache [504], which was received in the abort request.
[0126] The method may then proceed to step [610] for inserting the set of poll requests in the IO cache [504]. If the work order identity is found in the IO Cache [504], the work order identity is removed from the IO queue and the 'aborted response' is sent to the NBI [508]. In case the work order identity is missing in the IO Cache [504], the IO cache [504] assumes that the work order identity has already been executed and sends a failure response to the NBI [508].
[0127] At step [624], the analysis unit [308] removes the work order identity associated with the set of requests from the queue maintained in the IO cache [504] after sending the set of requests associated with the work order identity for updating the one or more network nodes.
[0128] At step [630], the method comes to an end.
[0129] The present disclosure further discloses a non-transitory computer-readable storage medium storing instructions for updating parameters for one or more network nodes. The instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit [302] of the system [300] to receive a set of requests comprising one or more update parameters for the one or more network nodes. The instructions further include executable code which, when executed, causes a validation unit [304] of the system [300] to validate each request from the set of requests, causes the validation unit [304] to add the validated set of requests to a queue maintained in an input-output (IO) cache [504], and causes a scheduler unit [306] of the system [300] to run a scheduler job at a configured interval for updating the one or more network nodes with the one or more update parameters.
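The receive-validate-enqueue portion summarized above may be sketched as follows. The schema check here (presence of a work order identity, a target NF instance, and at least one update parameter) is an illustrative assumption; the disclosure does not fix a particular schema, and the names `validate` and `enqueue_requests` are hypothetical.

```python
from collections import defaultdict


def validate(request):
    """Hypothetical schema check: each request is assumed to carry a work
    order identity, a target NF instance, and update parameters."""
    return all(key in request for key in ("work_order_id", "nf_instance", "params"))


def enqueue_requests(io_cache, requests):
    """Validate each request and add the validated requests to the queue
    maintained in the IO cache, grouped by work order identity."""
    accepted = []
    for request in requests:
        if validate(request):
            # Requests sharing a work order identity are grouped together.
            io_cache[request["work_order_id"]].append(request)
            accepted.append(request)
    return accepted
```

A scheduler job running at a configured interval (as in the `SchedulerJob` sketch above, under the same assumptions) would then drain this queue work-order by work-order.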
[0130] As is evident from the above, the present disclosure provides a technically advanced solution for updating parameters for nodes. The solution of the present invention provides a system and a method for updating parameters for nodes that consumes less time and effort of the user. Further, implementing the features of the present invention enables one to save network resources. Also, the solution for updating parameters for nodes, as disclosed, supports an abort request functionality, allowing ongoing parameter update requests to be cancelled, which minimizes potential system disruptions.
[0131] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
We Claim:
1. A method for updating parameters for one or more network nodes, the
method comprising:
- receiving, by a transceiver unit [302] at a network management system (NMS) [320], from an interface, a set of requests comprising one or more update parameters for the one or more network nodes;
- validating, by a validation unit [304] at the NMS [320], each request from the set of requests;
- adding, by the validation unit [304] at the NMS [320], the validated set of requests in a queue maintained in an input-output (IO) cache [504]; and
- running, by a scheduler unit [306], at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
2. The method as claimed in claim 1, wherein, the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes, wherein the all-update requests are configured to update each of the NF instances of the one or more network nodes.
3. The method as claimed in claim 1, wherein, the set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320].
4. The method as claimed in claim 1, wherein, each request from the set of requests is associated with a work order identity.
5. The method as claimed in claim 1, wherein, the validating each request from the set of requests comprises validating a schema of configuration data associated with each request from the set of requests.
6. The method as claimed in claim 4, wherein, each request from the validated set of requests added in the queue is grouped based on the work order identity.
7. The method as claimed in claim 1, wherein, updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306], further comprises:
- checking, by the scheduler unit [306], one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504];
- sending, by the scheduler unit [306], the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests; and
- sending, by the scheduler unit [306], a response to the interface.
8. The method as claimed in claim 7, the method further comprising:
- removing, by an analysis unit [308], the first work order identity
associated with the first subset of requests from the queue maintained in
the IO cache [504] after sending the first subset of requests associated
with the first work order identity for updating the one or more network
nodes.
9. The method as claimed in claim 1, further comprising:
sending, by the scheduler unit [306], an update response for each of the one or more network nodes, to a database [312], wherein the database [312] stores status associated with each of the one or more network nodes; and
updating, by a processing unit [310], at the NMS [320], the status associated with each of the one or more network nodes in the database, with the update response for each of the one or more network nodes.
10. The method as claimed in claim 9, the method further comprising:
- receiving, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests;
- checking, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504];
- removing, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]; and
- sending, by the transceiver unit [302], an aborted response to the interface.
11. The method as claimed in claim 10, further comprising:
- sending, by the transceiver unit [302], at the NMS [320], to the interface,
a failure response, in an event of the absence of the second work order
identity associated with the second subset of requests in the IO cache
[504].
12. The method as claimed in claim 10, wherein the abort request for the second work order identity associated with the second subset of requests, is received in response to a polling, by the transceiver unit [302], at the NMS [320].
13. A system for updating parameters for one or more network nodes, the system comprising a network management system (NMS) [320], the NMS [320] further comprising:
- a transceiver unit [302], configured to receive from an interface, a set of requests comprising one or more update parameters for the one or more network nodes;
- a validation unit [304] connected to at least the transceiver unit [302], the validation unit [304] configured to:
o validate each request from the set of requests, and
o add the validated set of requests in a queue maintained in an input-output (IO) cache [504]; and
- a scheduler unit [306], connected to at least an analysis unit [308], the
scheduler unit [306] configured to run a scheduler job at a configured
interval, for updating the one or more network nodes with the one or
more update parameters.
14. The system as claimed in claim 13, wherein, the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified NF instances of the one or more network nodes, wherein the all-update requests are configured to update each of the NF instances of the one or more network nodes.
15. The system as claimed in claim 13, wherein, the set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320].
16. The system as claimed in claim 13, wherein, each request of the set of requests is associated with a work order identity.
17. The system as claimed in claim 13, wherein, the validation unit [304] is configured to validate a format associated with each request from the set of requests.
18. The system as claimed in claim 16, wherein, each request from the validated set of requests added in the queue, is grouped based on the work order identity.
19. The system as claimed in claim 13, wherein, the scheduler unit [306], is further configured to:
- check one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504];
- send the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests; and
- send a response to the interface.
20. The system as claimed in claim 19, wherein the analysis unit [308] is further
configured to:
- remove, the first work order identity associated with the first subset of
requests from the queue maintained in the IO cache [504] after sending
the first subset of requests associated with the first work order identity
for updating the one or more network nodes.
21. The system as claimed in claim 19, further comprising:
- the scheduler unit [306], configured to send, an update response for each of the one or more network nodes, to a database [312], wherein the database [312] stores status associated with each of the one or more network nodes; and
- a processing unit [310], configured to update, at the NMS [320], the status associated with each of the one or more network nodes in the database [312], with the update response for each of the one or more network nodes.
22. The system as claimed in claim 21, wherein the system is further configured
to:
- receive, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests;
- check, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504];
- remove, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]; and
- send, by the transceiver unit [302], an aborted response to the interface.
23. The system as claimed in claim 22, further comprising:
- the transceiver unit [302], configured to send, at the NMS [320], to the
interface, a failure response, in an event of the absence of the second
work order identity associated with the second subset of requests in the
IO cache [504].
24. The system as claimed in claim 22, wherein the abort request for the second
work order identity associated with the second subset of requests, is
received in response to a polling, by the transceiver unit [302], at the NMS
[320].