Abstract: METHOD AND SYSTEM FOR SYNCHRONIZING INSTANCES OF NETWORK NODES IN A COMMUNICATION NETWORK The present disclosure relates to a system (108) and a method (600) for synchronizing instances of network nodes in a communication network (106). The system (108) includes a receiving module (210) to receive a request for synchronization of instances of the network nodes from a User Equipment (UE) (102) via a user interface (206). The system (108) includes a fetching unit (212) to fetch a plurality of workflow details from a cache data store (412). The system (108) includes an identification module (214) to identify a plurality of first network endpoints and a plurality of first instances, and a plurality of second network endpoints and a plurality of second instances. The system (108) includes a dynamic activator (218) to synchronize the plurality of first instances and the plurality of second instances on the first network node and the second network node, by triggering execution of a workflow. Ref. Fig. 2
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR SYNCHRONIZING INSTANCES OF NETWORK NODES IN A COMMUNICATION NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of data communication in networks, and more particularly to a method and a system for synchronizing instances of network nodes in a communication network.
BACKGROUND OF THE INVENTION
[0002] Network nodes in mobile communication networks usually operate multiple instances. Generally, such instances are not in sync. While some network nodes are capable of synchronizing their instances themselves, others cannot and require external support for synchronization. There is therefore a need for a system and method that can synchronize multiple instances of a network node externally.
[0003] Hence, a system and a method for synchronizing multiple instances or interfaces of a network node that does not have its instances in sync are disclosed herein.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and a system for synchronizing instances of network nodes in a communication network.
[0005] In one aspect of the present invention, the system for synchronizing instances of network nodes in the communication network is disclosed. The system includes a receiving module configured to receive a request for synchronization of instances of the network nodes from a user equipment via a user interface. The system further includes a fetching unit configured to fetch a plurality of workflow details from a cache data store based on the received request. The system further includes an identification module configured to identify a plurality of first network endpoints and a plurality of first instances to be provisioned on a first network node, and a plurality of second network endpoints and a plurality of second instances to be provisioned on a second network node. The system further includes a dynamic activator configured to synchronize the plurality of first instances with the plurality of second instances on the first network node and the second network node, respectively, by triggering execution of a workflow based on the plurality of workflow details.
[0006] In an embodiment, when a request is received at the receiving module from the user equipment to add or remove a new instance from a network node, the system comprises an updating unit configured to update the plurality of workflow details with details associated with the new instance, and an execution unit configured to execute the workflow as per the updated plurality of workflow details and the new instance.
[0007] In an embodiment, the details associated with the new instance are provided by the user equipment via the user interface and wherein the details are provided by a user for configuring synchronization of the instances on the network nodes.
[0008] In an embodiment, a transceiver, utilizing the identified plurality of first network endpoints and the plurality of second network endpoints, transmits a command for synchronization to each of the plurality of first instances and the plurality of second instances in order to synchronize the plurality of first instances and the plurality of second instances.
[0009] In another aspect of the present invention, the method for synchronizing instances of network nodes in the communication network is disclosed. The method includes the step of receiving a request for synchronization of instances of the network nodes from a user equipment via a user interface. The method further includes the step of fetching a plurality of workflow details from a cache data store based on the received request. The method further includes the step of identifying a plurality of first network endpoints and a plurality of first instances to be provisioned on a first network node, and a plurality of second network endpoints and a plurality of second instances to be provisioned on a second network node. The method further includes the step of synchronizing the plurality of first instances and the plurality of second instances on the first network node and the second network node, respectively, by triggering execution of a workflow based on the plurality of workflow details.
[0010] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive a request for synchronization of instances of the network nodes, from a user equipment via a user interface. The processor is further configured to fetch a plurality of workflow details from a cache data store based on the received request. The processor is further configured to identify a plurality of first network endpoints and a plurality of first instances to be provisioned on a first network node and a plurality of second network endpoints and a plurality of second instances to be provisioned on a second network node. The processor is further configured to synchronize the plurality of first instances and the plurality of second instances on the first network node and the second network node, respectively, by triggering execution of a workflow based on the plurality of workflow details.
[0011] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors being coupled with a memory. The one or more primary processors cause the UE to transmit a request to the one or more processors to synchronize instances of network nodes.
[0012] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0014] FIG. 1 is an exemplary block diagram of an environment for synchronizing instances of network nodes in a communication network, according to one or more embodiments of the present invention;
[0015] FIG. 2 is an exemplary block diagram of a system for synchronizing instances of network nodes in the communication network, according to one or more embodiments of the present invention;
[0016] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to the one or more embodiments of the present invention;
[0017] FIG. 4 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0018] FIG. 5 is a signal flow diagram for synchronizing instances of network nodes in the communication network according to one or more embodiments of the present invention; and
[0019] FIG. 6 is a schematic representation of a method for synchronizing instances of network nodes in the communication network, according to one or more embodiments of the present invention.
[0020] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0021] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0022] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0023] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0024] FIG. 1 illustrates an exemplary block diagram of an environment 100 for synchronizing instances of network nodes in a communication network, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for synchronizing instances of network nodes in the communication network 106. The UE 102 aids a user to interact with the system 108 for transmitting the request.
[0025] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but is not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, which should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0026] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0027] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 104 may be associated with an entity including, but not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
[0028] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0029] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 106 may further include a Voice over Internet Protocol (VoIP) network.
[0030] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to synchronize instances of network nodes in the communication network 106. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0031] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0032] FIG. 2 is an exemplary block diagram of the system 108 for synchronizing instances of network nodes in the communication network 106, according to one or more embodiments of the present invention.
[0033] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0034] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0035] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0036] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 208 types are non-limiting and not necessarily mutually exclusive, e.g., a database can be both commercial and cloud-based, or both distributed and open-source.
[0037] In order for the system 108 to synchronize instances of network nodes in the communication network 106, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a receiving module 210, a fetching unit 212, an identification module 214, a transceiver 216, a dynamic activator 218, an updating unit 220, and an execution unit 222 communicably coupled to each other for synchronizing instances of network nodes in the communication network 106.
[0038] In one embodiment, the receiving module 210, the fetching unit 212, the identification module 214, the transceiver 216, the dynamic activator 218, the updating unit 220, and the execution unit 222 can be used in combination or interchangeably for synchronizing instances of network nodes in the communication network 106.
[0039] The receiving module 210, the fetching unit 212, the identification module 214, the transceiver 216, the dynamic activator 218, the updating unit 220, and the execution unit 222, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the functionalities of the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0040] In one embodiment, the receiving module 210 is configured to receive a request for synchronization of instances of the network nodes from the UE 102 via the user interface 206. Synchronizing instances of network nodes in a communication network involves ensuring that multiple instances of the network nodes are aligned and operating in harmony. Synchronizing the instances can include, but is not limited to, matching configurations, state information, and other critical parameters across different instances to ensure consistent and reliable network performance. An instance of a network node refers to an individual or specific implementation of a network component within the network 106 that handles various aspects of data routing, management, and service delivery. The network nodes include at least one of a router, a switch, a hub, a gateway, a server, a client, a firewall, a modem, a bridge, and an access point.
[0041] Upon receiving the request from the UE 102, the fetching unit 212 is configured to fetch a plurality of workflow details from a cache data store based on the received request. A workflow refers to the sequence of tasks or activities that are carried out to achieve a specific goal or complete a particular process. The workflow details refer to the specific attributes, parameters, and steps that constitute the workflow. The workflow details include, but are not limited to, a workflow identifier (ID), a workflow name, an objective, tasks, conditions, resources, error handling, an approval process, notifications, completion criteria, and documentation. The cache data store refers to a high-speed storage layer that temporarily holds frequently accessed data to enable quick retrieval. In particular, the cache data store temporarily stores the workflow details and other frequently accessed data to speed up the synchronization process of the network nodes. The cache data store is typically located closer to the processing units (e.g., servers or nodes) to minimize latency and improve access times.
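The cache-backed fetch described above can be sketched as follows. This is a minimal illustrative sketch only, assuming an in-memory dictionary with per-entry expiry; the `WorkflowCache` class and its field names are hypothetical and not part of the disclosure.

```python
import time

class WorkflowCache:
    """Illustrative in-memory cache data store for workflow details."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # workflow_id -> (expiry_time, details)

    def put(self, workflow_id, details):
        # Store the details with an expiry timestamp.
        self._store[workflow_id] = (time.time() + self.ttl, details)

    def fetch(self, workflow_id):
        # Return cached details if present and not expired, else None.
        entry = self._store.get(workflow_id)
        if entry is None:
            return None
        expiry, details = entry
        if time.time() > expiry:
            del self._store[workflow_id]
            return None
        return details

cache = WorkflowCache()
cache.put("wf-sync-001",
          {"name": "instance-sync",
           "tasks": ["identify", "transmit", "activate"]})
details = cache.fetch("wf-sync-001")
```

In a deployed system, the in-memory dictionary would typically be replaced by a dedicated caching layer colocated with the processing units, as the paragraph above notes.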
[0042] Upon fetching the plurality of workflow details, the identification module 214 is configured to identify the plurality of first network endpoints and the plurality of first instances to be provisioned on a first network node, and the plurality of second network endpoints and the plurality of second instances to be provisioned on a second network node. The network endpoints refer to specific addresses or interfaces on a network node that serve as points of connection for sending and receiving data. The network endpoints are at least one of Internet Protocol (IP) addresses, Media Access Control (MAC) addresses, ports, and virtual network interfaces. The instances are separate and distinct deployments of applications, services, or processes that run on the network node. The instances are at least one of application instances, service instances, process instances, containerized services, and virtual machines.
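The identification step can be illustrated with a short sketch: given fetched workflow details, the endpoints and instances listed in them are grouped by target node. The dictionary structure and field names (`endpoints`, `instances`, `node`) are assumptions made for illustration only.

```python
# Hypothetical shape of fetched workflow details listing targets per node.
workflow_details = {
    "endpoints": [
        {"node": "node-1", "address": "10.0.0.1:8080"},
        {"node": "node-2", "address": "10.0.0.2:8080"},
    ],
    "instances": [
        {"node": "node-1", "id": "inst-a"},
        {"node": "node-2", "id": "inst-b"},
    ],
}

def identify(details, node):
    # Select the endpoints and instances to be provisioned on one node.
    endpoints = [e["address"] for e in details["endpoints"] if e["node"] == node]
    instances = [i["id"] for i in details["instances"] if i["node"] == node]
    return endpoints, instances

first_endpoints, first_instances = identify(workflow_details, "node-1")
second_endpoints, second_instances = identify(workflow_details, "node-2")
```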
[0043] Further, the transceiver 216 utilizes the identified plurality of first network endpoints and the plurality of second network endpoints to transmit a command for synchronization to each of the plurality of first instances and the plurality of second instances, in order to synchronize the plurality of first instances and the plurality of second instances.
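The transceiver step can be sketched as below, assuming a `send_command` callable that delivers a "sync" command to one instance via its endpoint. The transport itself is stubbed out here; all names are illustrative, not the actual interface of the transceiver 216.

```python
def transmit_sync(endpoints, instances, send_command):
    """Send a sync command to each instance via its paired endpoint."""
    acks = []
    for endpoint, instance in zip(endpoints, instances):
        acks.append(send_command(endpoint, instance, "sync"))
    return acks

# A fake transport that records what would have been sent over the network.
sent = []
def fake_send(endpoint, instance, command):
    sent.append((endpoint, instance, command))
    return True

acks = transmit_sync(["10.0.0.1:8080"], ["inst-a"], fake_send)
```

In practice the transport could be any request/response mechanism reachable at the identified endpoints; the sketch only shows the fan-out of the command to every identified instance.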
[0044] Subsequently, the dynamic activator 218 is configured to synchronize the plurality of first instances and the plurality of second instances on the first network node and the second network node, respectively, by triggering execution of the workflow based on the plurality of workflow details. The synchronization between the plurality of first instances and the plurality of second instances is performed by transferring data, configurations, and operational states between the first and second instances.
[0045] In an embodiment, when the request is received at the receiving module 210 from the UE 102 to add or remove a new instance from the network node, the updating unit 220 is configured to update the plurality of workflow details with details associated with the new instance. Upon updating the plurality of workflow details, the execution unit 222 is configured to execute the workflow as per the updated plurality of workflow details and the new instance. The details associated with the new instance are provided by the UE 102 via the user interface 206. The details are provided by the user for configuring synchronization of the instances on the network nodes. Therefore, the system 108 is configured to synchronize multiple instances of the network node by modifying the configuration at the user interface 206.
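The add/remove flow above, in which the updating unit 220 revises the workflow details before the execution unit 222 re-runs the workflow, can be sketched as follows. The data layout and function names are hypothetical illustrations under the assumption that instances are tracked by an `id` field.

```python
def update_workflow_details(details, new_instance, action="add"):
    """Return workflow details with the new instance added or removed."""
    instances = list(details.get("instances", []))
    if action == "add":
        instances.append(new_instance)
    elif action == "remove":
        instances = [i for i in instances if i["id"] != new_instance["id"]]
    return {**details, "instances": instances}

def execute_workflow(details):
    # Stand-in for triggering workflow execution: report the instance
    # ids the workflow would now cover.
    return [i["id"] for i in details["instances"]]

details = {"instances": [{"id": "inst-a"}]}
details = update_workflow_details(details, {"id": "inst-c"}, action="add")
covered = execute_workflow(details)
```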
[0046] FIG. 3 describes a preferred embodiment of the system 108 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 102a and the system 108 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0047] As mentioned earlier in FIG. 1, each of the first UE 102a, the second UE 102b, and the third UE 102c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 102a, without deviating from or limiting the scope of the present disclosure. The first UE 102a includes one or more primary processors 302 communicably coupled to the one or more processors 202 of the system 108.
[0048] The one or more primary processors 302 are coupled with a memory 304 storing instructions which are executed by the one or more primary processors 302. Execution of the stored instructions by the one or more primary processors 302 enables the first UE 102a to transmit the request to the one or more processors 202 to synchronize instances of network nodes.
[0049] As mentioned earlier in FIG. 2, the one or more processors 202 of the system 108 are configured for synchronizing instances of network nodes in the communication network 106. As per the illustrated embodiment, the system 108 includes the one or more processors 202, the memory 204, the user interface 206, and the database 208. The operations and functions of the one or more processors 202, the memory 204, the user interface 206, and the database 208 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0050] Further, the processor 202 includes the receiving module 210, the fetching unit 212, the identification module 214, the transceiver 216, the dynamic activator 218, the updating unit 220 and the execution unit 222. The operations and functions of the receiving module 210, the fetching unit 212, the identification module 214, the transceiver 216, the dynamic activator 218, the updating unit 220 and the execution unit 222 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description as provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0051] FIG. 4 is an exemplary block diagram of an architecture 400 implemented in the system 108 for synchronizing instances of network nodes in the communication network 106, according to one or more embodiments of the present invention.
[0052] The architecture 400 may include, but may not be limited to, an operation and management unit 402, a workflow manager 404, the dynamic activator 218, a message broker 406, a graph database 408, a distributed data lake database 410, a cache data store 412, a load balancer 414, the user interface 206, and a dynamic routing management unit 416.
[0053] In an embodiment, the user interface 206 receives requests from the UE 102 and transmits the requests for synchronization of instances of the network nodes to the workflow manager 404 via the operation and management unit 402. Upon receiving the request from the user interface 206, the plurality of workflow details is fetched from the cache data store 412 based on the received request. The cache data store 412 helps in storing recently accessed or frequently accessed data pertaining to the details associated with the workflows.
[0054] Upon fetching the plurality of workflow details, the workflow manager 404, identifies the plurality of first network endpoints and the plurality of first instances to be provisioned on the first network node and the plurality of second network endpoints and the plurality of second instances to be provisioned on the second network node. Further, utilizing the identified plurality of first network endpoints and the plurality of second network endpoints, the command for synchronization is transmitted to each of the plurality of first instances and the plurality of second instances in order to synchronize the plurality of first instances and the plurality of second instances.
[0055] Further, the workflow manager 404 includes the message broker 406 and the graph database 408. The message broker 406 acts as a queuing engine to facilitate communication between components and the graph database 408 stores data in a graph structure for quick retrieval and management.
[0056] Upon identifying the plurality of first network endpoints and the plurality of second network endpoints, the workflow manager 404 executes the workflow with the help of the dynamic activator 218.
[0057] The dynamic activator 218 synchronizes the plurality of first instances and the plurality of second instances on the first network node and the second network node respectively, by triggering the execution of the workflow based on the plurality of workflow details.
[0058] In particular, the workflow manager 404 identifies the plurality of first network endpoints and the plurality of first instances to be provisioned on the first network node, and the plurality of second network endpoints and the plurality of second instances to be provisioned on the second network node. Further, the command for synchronization (for example, a sync command) is transmitted to each of the plurality of first instances and the plurality of second instances in order to synchronize the plurality of first instances and the plurality of second instances. Upon receiving the command for synchronization, the workflow is executed by the workflow manager 404, with the help of the dynamic activator 218, for synchronizing the plurality of first instances and the plurality of second instances on the first network node and the second network node, respectively.
[0059] In an embodiment, when the request is received from the UE 102 via the user interface 206 to add or remove the new instance from the network node, the workflow manager 404 updates the plurality of workflow details with the details associated with the new instance. Further, the workflow manager 404 instructs the dynamic activator 218 to add or remove the instance on the network node based on the details associated with the new instance. Later, the dynamic activator 218 executes the workflow as per the updated plurality of workflow details and the new instance.
[0060] In an embodiment, the load balancer 414 is communicably coupled with the dynamic activator 218 and the user interface 206. The load balancer 414 dynamically balances the incoming requests for synchronizing instances of network nodes in the network 106.
[0061] FIG. 5 is a signal flow diagram for synchronizing instances of network nodes in the communication network 106, according to one or more embodiments of the present invention.
[0062] In an embodiment, the system 108 is at least one Fulfilment Management System (FMS). The FMS refers to the processes and technologies used to ensure that services, configurations, and instances are properly deployed and maintained across a communication network. For example, the FMS includes, but is not limited to, Network Function Virtualization Orchestrators (NFVO), Software-Defined Networking (SDN) controllers, cloud management platforms, service orchestration platforms, telecom network management systems, and distributed database management systems.
[0063] At step 502, the FMS receives the request for synchronization of instances of the network nodes from the UE 102 via the user interface 206.
[0064] At step 504, upon receiving the request, the FMS fetches the plurality of workflow details from the cache data store 412 based on the received request.
[0065] At step 506, upon fetching the details of the plurality of workflows, the plurality of first network endpoints and the plurality of first instances are identified to be provisioned on the first network node. Further, the plurality of second network endpoints and the plurality of second instances are identified to be provisioned on the second network node. Furthermore, by utilizing the identified plurality of first network endpoints and the plurality of second network endpoints, the command for synchronization is transmitted to each of the plurality of first instances and the plurality of second instances in order to synchronize the plurality of first instances and the plurality of second instances.
[0066] At step 508, the request is received from the UE 102 via the user interface 206 to add or remove the new instance from the network node.
[0067] At step 510, upon receiving the request to add or remove the new instance, the FMS updates the plurality of workflow details with details associated with the new instance.
[0068] At step 512, subsequently, the FMS triggers execution of the workflow based on the plurality of workflow details. The workflow is executed based on the plurality of workflow details by synchronizing the plurality of first instances and the plurality of second instances on the first network node and the second network node respectively. Further, in case of adding or removing the new instance, the workflow is executed as per the updated plurality of workflow details and the new instance.
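The signal flow of steps 502 through 512 can be sketched end to end as one handler. All names and the request/details shapes are hypothetical; `cache_data_store`, `identify`, and `dynamic_activator` stand in for the cache data store 412, the identification module 214, and the dynamic activator 218 respectively.

```python
def handle_sync_request(request, cache_data_store, identify, dynamic_activator):
    """Sketch of the FMS signal flow (illustrative names and schemas).

    cache_data_store: mapping of workflow id -> workflow details.
    identify: callable returning (first_instances, second_instances).
    dynamic_activator: callable executing the workflow over both sets.
    """
    # Steps 502/504: on receiving the request, fetch workflow details.
    details = cache_data_store[request["workflow_id"]]
    # Step 506: identify instances to be provisioned on both network nodes.
    first_instances, second_instances = identify(details)
    # Step 512: trigger workflow execution to synchronize both sets.
    return dynamic_activator(first_instances, second_instances, details)


# Usage with stubbed collaborators.
cache = {"wf-1": {"nodes": ["node-A", "node-B"]}}
identify = lambda details: (["a1", "a2"], ["b1"])
activator = lambda first, second, details: {"synchronized": first + second}

result = handle_sync_request({"workflow_id": "wf-1"}, cache, identify, activator)
```

Passing the collaborators in as callables mirrors the modular structure of the system 108, where fetching, identification, and activation are separate components coordinated by one flow.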
[0069] FIG. 6 is a flow diagram of a method 600 for synchronizing instances of network nodes in the communication network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0070] At step 602, the method 600 includes the step of receiving the request for synchronization of instances of the network nodes from the UE 102 via the user interface 206 by the receiving module 210.
[0071] At step 604, the method 600 includes the step of fetching the plurality of workflow details from the cache data store 412 based on the received request by the fetching unit 212.
[0072] At step 606, the method 600 includes the step of identifying the plurality of first network endpoints and the plurality of first instances to be provisioned on the first network node and the plurality of second network endpoints and the plurality of second instances to be provisioned on the second network node by the identification module 214. Further, the transceiver 216, utilizing the identified plurality of first network endpoints and the plurality of second network endpoints, transmits the command for synchronization to each of the plurality of first instances and the plurality of second instances.
[0073] At step 608, the method 600 includes the step of synchronizing the plurality of first instances and the plurality of second instances on the first network node and the second network node respectively, by triggering the execution of the workflow based on the plurality of workflow details by the dynamic activator 218. In an embodiment, when the request is received from the UE 102 via the user interface 206 to add or remove a new instance from the network node, the plurality of workflow details is updated with details associated with the new instance by the updating unit 220. Upon updating the details, the workflow is executed by the execution unit 222 as per the updated plurality of workflow details and the new instance.
[0074] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the request for synchronization of instances of the network nodes from the UE 102 via the user interface 206. The processor 202 is further configured to fetch the plurality of workflow details from the cache data store 412 based on the received request. The processor 202 is further configured to identify the plurality of first network endpoints and the plurality of first instances to be provisioned on the first network node and the plurality of second network endpoints and the plurality of second instances to be provisioned on the second network node. The processor 202 is further configured to trigger execution of the workflow based on the plurality of workflow details.
[0075] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0076] The present disclosure incorporates technical advancement of synchronizing the multiple instances of the network node by modifying the configuration at the user interface. Further, zero development effort is required as no code level changes are performed. Further, the present disclosure helps in syncing all the nodes in less time and effort. In particular, less time and effort are required for integrating the network nodes.
[0077] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0078] Environment- 100
[0079] User Equipment (UE)- 102
[0080] Server- 104
[0081] Network- 106
[0082] System -108
[0083] Processor- 202
[0084] Memory- 204
[0085] User Interface- 206
[0086] Database- 208
[0087] Receiving module- 210
[0088] Fetching unit- 212
[0089] Identification module- 214
[0090] Transceiver- 216
[0091] Dynamic activator- 218
[0092] Updating Unit- 220
[0093] Execution Unit- 222
[0094] Primary processor- 302
[0095] Memory- 304
[0096] Operation and Management Unit- 402
[0097] Workflow Manager- 404
[0098] Message Broker- 406
[0099] Graph Database- 408
[00100] Distributed Data Lake- 410
[00101] Cache Data store- 412
[00102] Load Balancer- 414
[00103] Dynamic Routing Manager- 416
CLAIMS
We Claim:
1. A method (600) for synchronizing instances of network nodes in a communication network (106), the method (600) comprising the steps of:
receiving, by one or more processors (202), a request for synchronization of instances of the network nodes, from a User Equipment (UE) (102) via a user interface (206);
fetching, by the one or more processors (202), a plurality of workflow details from a cache data store (412) based on the received request;
identifying, by the one or more processors (202), a plurality of first network endpoints and a plurality of first instances to be provisioned on a first network node and a plurality of second network endpoints and a plurality of second instances to be provisioned on a second network node; and
synchronizing, by the one or more processors (202), the plurality of first instances and the plurality of second instances on the first network node and the second network node, respectively, by triggering, execution of a workflow based on the plurality of workflow details.
2. The method (600) as claimed in claim 1, further comprising the steps of:
receiving, by the one or more processors (202), a request from the user equipment to add or remove a new instance from a network node;
updating, by the one or more processors (202), the plurality of workflow details with details associated with the new instance; and
executing, by the one or more processors (202), the workflow as per the updated plurality of workflow details and the new instance.
3. The method (600) as claimed in claim 2, wherein the details associated with the new instance are provided by the User Equipment (UE) (102) via the user interface (206); and wherein the details are provided by a user for configuring synchronization of the instances on the network nodes.
4. The method (600) as claimed in claim 1, wherein the one or more processors (202), utilizing the identified plurality of first network endpoints and the plurality of second network endpoints, transmits a command for synchronization to each of the plurality of first instances and the plurality of second instances in order to synchronize the plurality of first instances and the plurality of second instances.
5. A system (108) for synchronizing instances of network nodes in a communication network (106), the system (108) comprises:
a receiving module (210) configured to receive a request for synchronization of instances of the network nodes, from a User Equipment (UE) (102) via a user interface (206);
a fetching unit (212) configured to fetch a plurality of workflow details from a cache data store (412) based on the received request;
an identification module (214) configured to identify a plurality of first network endpoints and a plurality of first instances to be provisioned on a first network node and a plurality of second network endpoints and a plurality of second instances to be provisioned on a second network node; and
a dynamic activator (218) configured to synchronize the plurality of first instances and the plurality of second instances on the first network node and the second network node, respectively, by triggering execution of a workflow based on the plurality of workflow details.
6. The system (108) as claimed in claim 5, wherein when a request is received at the receiving module from the User Equipment (UE) (102) to add or remove a new instance from a network node, the system (108) comprises:
an updating unit (220) configured to update the plurality of workflow details with details associated with the new instance; and
an execution unit (222) configured to execute the workflow as per the updated plurality of workflow details and the new instance.
7. The system (108) as claimed in claim 6, wherein the details associated with the new instance are provided by the User Equipment (UE) (102) via the user interface (206); and wherein the details are provided by a user for configuring synchronization of the instances on the network nodes.
8. The system (108) as claimed in claim 5, wherein a transceiver (216) utilizing the identified plurality of first network endpoints and the plurality of second network endpoints, transmits a command for synchronization to each of the plurality of first instances and the plurality of second instances in order to synchronize the plurality of first instances and the plurality of second instances.
9. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (302) causes the UE (102) to:
transmit a request to the one or more processors (202) to synchronize instances of network nodes;
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.
| # | Name | Date |
|---|---|---|
| 1 | 202321047353-STATEMENT OF UNDERTAKING (FORM 3) [13-07-2023(online)].pdf | 2023-07-13 |
| 2 | 202321047353-PROVISIONAL SPECIFICATION [13-07-2023(online)].pdf | 2023-07-13 |
| 3 | 202321047353-FORM 1 [13-07-2023(online)].pdf | 2023-07-13 |
| 4 | 202321047353-FIGURE OF ABSTRACT [13-07-2023(online)].pdf | 2023-07-13 |
| 5 | 202321047353-DRAWINGS [13-07-2023(online)].pdf | 2023-07-13 |
| 6 | 202321047353-DECLARATION OF INVENTORSHIP (FORM 5) [13-07-2023(online)].pdf | 2023-07-13 |
| 7 | 202321047353-FORM-26 [20-09-2023(online)].pdf | 2023-09-20 |
| 8 | 202321047353-Proof of Right [08-01-2024(online)].pdf | 2024-01-08 |
| 9 | 202321047353-DRAWING [13-07-2024(online)].pdf | 2024-07-13 |
| 10 | 202321047353-COMPLETE SPECIFICATION [13-07-2024(online)].pdf | 2024-07-13 |
| 11 | Abstract-1.jpg | 2024-08-29 |
| 12 | 202321047353-Power of Attorney [05-11-2024(online)].pdf | 2024-11-05 |
| 13 | 202321047353-Form 1 (Submitted on date of filing) [05-11-2024(online)].pdf | 2024-11-05 |
| 14 | 202321047353-Covering Letter [05-11-2024(online)].pdf | 2024-11-05 |
| 15 | 202321047353-CERTIFIED COPIES TRANSMISSION TO IB [05-11-2024(online)].pdf | 2024-11-05 |
| 16 | 202321047353-FORM 3 [28-11-2024(online)].pdf | 2024-11-28 |
| 17 | 202321047353-FORM 18 [20-03-2025(online)].pdf | 2025-03-20 |