ABSTRACT
METHOD AND SYSTEM FOR MANAGING NETWORK TRAFFIC
The present disclosure relates to a method (500) and a system (108) for managing network traffic. The system (108) includes a retrieval unit (210) to retrieve a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS). The plurality of metrics is indicative of at least one of a maintenance status and an activity status. The system (108) includes a parsing unit (212) to parse the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances. The system (108) includes a pausing unit (214) to pause at least one instance of the plurality of instances based on identification of at least one of a scheduled maintenance activity from the maintenance status and an inactive status from the activity status for the at least one instance. Ref. FIG. 2
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR MANAGING NETWORK TRAFFIC
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to a communication network, and more particularly, relates to a method and a system for managing network traffic.
BACKGROUND OF THE INVENTION
[0002] Load balancing plays a crucial role in managing the distribution of network traffic across multiple instances and network nodes in the context of Elastic Load Balancers (ELBs) and the Northbound interface. This process becomes more complex due to the presence of instances acting as intermediaries between the ELB and the actual network nodes. Efficient load distribution, session persistence, and dynamic scalability are key challenges that need to be addressed to ensure optimal performance, reliability, and scalability of the network infrastructure.
[0003] Achieving optimal load distribution between the ELB and the instances is a significant challenge. The ELB must evenly distribute incoming traffic across the instances, considering factors such as capacity, performance, and availability. At the same time, the instances need to efficiently forward the traffic to the network nodes they are responsible for, ensuring an optimal distribution of requests. Coordinating load balancing between the ELB, instances, and network nodes is essential to prevent overloading of certain components while others remain underutilized. Load balancing algorithms such as round robin, least connections, or IP hash can be employed to achieve a balanced distribution of traffic.
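As a non-limiting illustration only (the present disclosure does not prescribe any particular implementation or programming language), the round-robin and least-connections strategies mentioned above may be sketched in Python as follows; all class and method names are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Cycles through instances in a fixed order (hypothetical sketch)."""
    def __init__(self, instances):
        self._cycle = cycle(instances)

    def pick(self):
        return next(self._cycle)

class LeastConnectionsBalancer:
    """Picks the instance currently serving the fewest connections."""
    def __init__(self, instances):
        self.connections = {i: 0 for i in instances}

    def pick(self):
        instance = min(self.connections, key=self.connections.get)
        self.connections[instance] += 1
        return instance

    def release(self, instance):
        self.connections[instance] -= 1

if __name__ == "__main__":
    rr = RoundRobinBalancer(["fms-1", "fms-2", "fms-3"])
    print([rr.pick() for _ in range(5)])   # fms-1, fms-2, fms-3, fms-1, fms-2
    lc = LeastConnectionsBalancer(["fms-1", "fms-2"])
    print(lc.pick(), lc.pick())            # spreads connections across the two instances
```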
[0004] Maintaining session persistence or affinity across the multi-tier load balancing architecture is another challenge. Session persistence ensures that subsequent requests from the same client are directed to the same instance, enabling seamless session continuity. The instances need to implement mechanisms such as source IP-based affinity or cookie-based affinity to correctly associate clients with the appropriate network nodes. This coordination between the ELB, instances, and network nodes ensures that sessions are preserved throughout the load balancing process, providing a seamless user experience.
[0005] Dynamic scalability and failover management present additional challenges. The network infrastructure needs to be capable of dynamically scaling the number of instances or network nodes based on demand. Scaling events, such as pausing or removing instances, should not disrupt the load balancing mechanism or cause performance degradation. In existing methods, the Northbound interface is disrupted when a request fails due to the unavailability of an instance. Sometimes, when the Northbound interface is disrupted, the system requires downtime for completion of maintenance activities.
[0006] Hence, there exists a need for a method and a system for intelligently balancing instances and network nodes for efficient and effective distribution of traffic, and seamless session continuity.
SUMMARY OF THE INVENTION
[0007] One or more embodiments of the present disclosure provide a method and a system for managing network traffic.
[0008] In one aspect of the present invention, the system for managing the network traffic is disclosed. The system includes a retrieval unit configured to retrieve a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS). The plurality of metrics is indicative of at least one of a maintenance status and an activity status. The system includes a parsing unit configured to parse the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances. The system includes a pausing unit configured to pause at least one instance of the plurality of instances based on identification of at least one of a scheduled maintenance activity from the maintenance status and an inactive status from the activity status for the at least one instance.
[0009] In an embodiment, the activity status corresponds to one of an active status and the inactive status.
[0010] In an embodiment, the maintenance status includes information corresponding to the scheduled maintenance at a predefined time.
[0011] In an embodiment, the pausing unit is configured to resume, the at least one paused instance of the plurality of instances on one of completion of the scheduled maintenance and on change in the inactive status to the active status.
[0012] In an embodiment, the system includes a storage unit configured to store, one or more requests to be served by the at least one paused instance therein upon pausing the at least one instance of the plurality of instances. The system further includes a retrieval unit configured to retrieve, the one or more requests to be served by the at least one paused instance from the storage unit upon resuming of the at least one paused instance of the plurality of instances, wherein the retrieved one or more requests are served by the at least one resumed instance.
[0013] In another aspect of the present invention, the method of managing the network traffic is disclosed. The method includes the step of retrieving a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS). The plurality of metrics is indicative of at least one of a maintenance status and an activity status. Further, the method includes the step of parsing the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances. Further, the method includes the step of pausing at least one instance of the plurality of instances based on identification of at least one of a scheduled maintenance activity from the maintenance status and an inactive status from the activity status for the at least one instance.
[0014] In another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, perform the following operations is disclosed. The processor is configured to retrieve a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS). The plurality of metrics is indicative of at least one of a maintenance status and an activity status. The processor is configured to parse the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances. Further, the processor is configured to pause at least one instance of the plurality of instances based on identification of one of a scheduled maintenance activity from the maintenance status and an inactive status from the activity status for the at least one instance.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of an environment for managing the network traffic, according to various embodiments of the present disclosure;
[0018] FIG. 2 is an exemplary block diagram of a system for managing the network traffic, according to various embodiments of the present disclosure;
[0019] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to various embodiments of the present disclosure;
[0020] FIG. 4 is a flow chart diagram for managing the network traffic, according to various embodiments of the present disclosure; and
[0021] FIG. 5 is a schematic representation of a method of managing the network traffic, according to various embodiments of the present disclosure.
[0022] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0024] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0025] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0026] The present invention provides a system and a method for managing the network traffic to ensure efficient and effective distribution of traffic, seamless session continuity, and the ability to adapt to changing network conditions. The present invention involves load balancing using ELBs for the Northbound Interface traffic, more particularly managing the distribution of network traffic across multiple FMS instances and network nodes mapped to various FMS instances. If any activity is going on in an FMS instance, that instance may be paused automatically when a failure or maintenance is detected by an AI/ML module. During this pause period, the system continues to accept requests from the Northbound Interface and keeps them in a queue for the corresponding instance. When that instance is resumed, the system resumes sending requests to the Southbound Interface nodes and completes the process in the sequence stored in the queue. More specifically, the resumed instance processes each request and then sends it to the Southbound Interface nodes, completing the process in the sequence stored in the queue. Disruption at the Northbound Interface is thereby prevented, and the Northbound Interface continues to provide requests to the system for distribution without any failure.
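The following Python sketch is a non-limiting illustration of the pause-and-queue behaviour described above, under the assumption that each instance holds its pending requests in a simple in-memory FIFO queue; all names are hypothetical and no particular implementation is mandated by the present disclosure.

```python
from collections import deque

class Instance:
    """Hypothetical FMS instance wrapper with a pause flag and a FIFO request queue."""
    def __init__(self, name):
        self.name = name
        self.paused = False
        self.pending = deque()          # requests held while the instance is paused

    def handle(self, request):
        if self.paused:
            self.pending.append(request)        # keep accepting requests; hold them in order
        else:
            self._send_southbound(request)

    def resume(self):
        self.paused = False
        while self.pending:                     # complete held requests in the stored sequence
            self._send_southbound(self.pending.popleft())

    def _send_southbound(self, request):
        print(f"{self.name} -> southbound node: {request}")

if __name__ == "__main__":
    fms = Instance("FMS-2")
    fms.paused = True                 # e.g. maintenance detected for this instance
    fms.handle("activate-service-A")  # queued; the Northbound Interface is not disrupted
    fms.handle("activate-service-B")
    fms.resume()                      # drains the queue in the original order
```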
[0027] FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing network traffic, according to various embodiments of the present disclosure. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, and a system 108 communicably coupled to each other for managing the network traffic.
[0028] The network traffic refers to the data moving across the network 106. The data moving across the network 106 includes, but is not limited to emails, web pages, video streams, file transfers, or any other kind of digital communication.
[0029] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but is not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0030] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0031] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, an entity associated with the server 104 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0032] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0033] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, or some combination thereof. The network 106 may also include a Voice over Internet Protocol (VoIP) network.
[0034] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to manage the network traffic. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0035] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0036] FIG. 2 is an exemplary block diagram of the system 108 for managing the network traffic, according to one or more embodiments of the present invention.
[0037] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0038] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0039] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0040] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 208 types are non-limiting and may not be mutually exclusive; for example, a database can be both commercial and cloud-based, or both relational and open-source.
[0041] In order for the system 108 to manage the network traffic, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a retrieval unit 210, a parsing unit 212, a pausing unit 214, and a storage unit 216, communicably coupled to each other for managing the network traffic.
[0042] In one embodiment, each of the retrieval unit 210, the parsing unit 212, the pausing unit 214, and the storage unit 216 can be used in combination or interchangeably for managing the network traffic.
[0043] The retrieval unit 210, the parsing unit 212, the pausing unit 214, and the storage unit 216 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0044] In an embodiment, the retrieval unit 210 is configured to retrieve a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS). The plurality of metrics is indicative of at least one of a maintenance status and an activity status. The Fulfilment Management System (FMS) is a system designed to manage and oversee the processes involved in fulfilling various types of service requests, particularly in the network 106.
[0045] The plurality of metrics refers to a diverse set of quantitative measures or data points that are collected and monitored to assess various aspects of each FMS instance's performance, maintenance status, and activity status. The instance refers to a specific, individual execution or deployment of the FMS that operates independently or semi-independently from other deployments. The FMS instance refers to a distinct operational unit or deployment of the FMS within the network 106. Each FMS instance operates independently to manage specific tasks related to network service fulfilment. The specific tasks related to network service fulfilment can include service activation, configuration, and maintenance processes. The maintenance status indicates whether an instance of the plurality of instances is currently undergoing maintenance activities or is scheduled for maintenance. In an embodiment, the maintenance status includes information corresponding to the scheduled maintenance at a predefined time. The maintenance status is at least one of scheduled, in-progress, and completed. The predefined time refers to a specific time or time period that has been scheduled and documented in advance for conducting maintenance activities on the Fulfilment Management System (FMS) instances. The activity status indicates whether an instance is actively processing tasks and service requests or is idle. The activity status is one of active and inactive. Retrieving the plurality of metrics corresponding to the plurality of instances of the FMS involves gathering data continuously or at regular intervals to maintain an up-to-date overview of the conditions of each instance.
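As a non-limiting illustration, a metric record for an FMS instance may be modelled as sketched below in Python; the field names, the enumerated values, and the retrieval cadence are assumptions made only for the purpose of the sketch.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class MaintenanceStatus(Enum):
    NONE = "none"
    SCHEDULED = "scheduled"
    IN_PROGRESS = "in-progress"
    COMPLETED = "completed"

class ActivityStatus(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"

@dataclass
class InstanceMetrics:
    instance_id: str
    maintenance_status: MaintenanceStatus
    maintenance_window: Optional[Tuple[str, str]]  # (start, end) of the predefined time, if any
    activity_status: ActivityStatus
    idle_minutes: int                              # time since the instance last processed a task

if __name__ == "__main__":
    # One retrieval cycle (e.g. polled at a regular interval) for two hypothetical instances.
    snapshot = [
        InstanceMetrics("fms-1", MaintenanceStatus.SCHEDULED, ("14:00", "16:00"),
                        ActivityStatus.ACTIVE, idle_minutes=0),
        InstanceMetrics("fms-2", MaintenanceStatus.NONE, None,
                        ActivityStatus.INACTIVE, idle_minutes=35),
    ]
    for record in snapshot:
        print(record)
```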
[0046] Upon retrieving the plurality of metrics, the parsing unit 212 is configured to parse the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances. The parsing unit 212 analyses the retrieved plurality of metrics to identify the maintenance status of each instance of the plurality of instances by identifying any indicators or flags that denote scheduled maintenance. Further, the parsing unit 212 analyses the retrieved plurality of metrics to identify the activity status of each instance of the plurality of instances by checking activity logs, uptime indicators, or other status flags that denote whether the instance is currently active or inactive.
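A minimal, non-limiting Python sketch of such parsing is given below, assuming the metrics arrive as simple key-value records and that a 30-minute idle period denotes inactivity (consistent with the inactivity example given later in this description); all names are hypothetical.

```python
def parse_metrics(metrics, idle_threshold_minutes=30):
    """Derives the maintenance status and activity status for every instance.

    `metrics` is assumed to be an iterable of dicts, e.g.
    {"instance_id": "fms-1", "scheduled_maintenance": True, "idle_minutes": 5}.
    """
    statuses = {}
    for record in metrics:
        maintenance = "scheduled" if record.get("scheduled_maintenance") else "none"
        activity = ("inactive" if record.get("idle_minutes", 0) >= idle_threshold_minutes
                    else "active")
        statuses[record["instance_id"]] = {"maintenance": maintenance, "activity": activity}
    return statuses

if __name__ == "__main__":
    raw = [{"instance_id": "fms-1", "scheduled_maintenance": True, "idle_minutes": 5},
           {"instance_id": "fms-2", "scheduled_maintenance": False, "idle_minutes": 45}]
    print(parse_metrics(raw))
    # {'fms-1': {'maintenance': 'scheduled', 'activity': 'active'},
    #  'fms-2': {'maintenance': 'none', 'activity': 'inactive'}}
```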
[0047] Based on identification of at least one of the maintenance status as scheduled maintenance and the activity status as inactive for the at least one instance of the plurality of instances, the pausing unit 214 is configured to pause the at least one instance of the plurality of instances.
[0048] In particular, upon identifying the maintenance status and activity status by the parsing unit 212, the pausing unit 214 pauses the at least one instance of the plurality of instances. The pausing unit 214 pauses the at least one instance of the plurality of instances if the at least one instance of the plurality of instances is scheduled for maintenance or an instance is inactive. For example, for instance A, the metrics show a scheduled maintenance window from 2:00 PM to 4:00 PM. The parsing unit 212 identifies the scheduled maintenance window and informs the pausing unit 214 to pause instance A at 2:00 PM. In another example, for instance B, the metrics show no activity for the past 30 minutes and a status of inactive. The parsing unit 212 identifies the inactive status and informs the pausing unit 214 to pause instance B.
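The pausing decision illustrated by instances A and B above may, as a non-limiting sketch, be expressed as follows; the threshold value and the function name are hypothetical.

```python
from datetime import datetime, time

IDLE_THRESHOLD_MINUTES = 30   # assumption: the inactivity threshold used in the example above

def should_pause(now: datetime, window, idle_minutes: int) -> bool:
    """Returns True when the instance should be paused.

    `window` is an optional (start, end) pair of datetime.time objects for the
    scheduled maintenance, e.g. (time(14, 0), time(16, 0)) for 2:00 PM to 4:00 PM.
    """
    in_maintenance = window is not None and window[0] <= now.time() < window[1]
    inactive = idle_minutes >= IDLE_THRESHOLD_MINUTES
    return in_maintenance or inactive

if __name__ == "__main__":
    # Instance A: inside its 2:00 PM-4:00 PM maintenance window -> pause
    print(should_pause(datetime(2024, 1, 1, 14, 5), (time(14, 0), time(16, 0)), idle_minutes=0))
    # Instance B: no maintenance window, idle for 35 minutes -> pause
    print(should_pause(datetime(2024, 1, 1, 10, 0), None, idle_minutes=35))
```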
[0049] Upon pausing the at least one instance of the plurality of instances, the storage unit 216 is configured to store one or more incoming requests to be served by the at least one paused instance. The one or more requests include various tasks, operations, or service demands that are intended to be processed by the paused FMS instance. The one or more requests include, but are not limited to, user queries, data processing tasks, service provisioning requests, and network management tasks.
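A non-limiting Python sketch of such a per-instance request queue (one possible realization of the storage unit 216, assuming an in-memory FIFO structure) is given below; all names are hypothetical.

```python
from collections import defaultdict, deque

class StorageUnit:
    """Holds requests addressed to paused instances, one FIFO queue per instance."""
    def __init__(self):
        self._queues = defaultdict(deque)

    def store(self, instance_id, request):
        self._queues[instance_id].append(request)

    def drain(self, instance_id):
        """Yields the stored requests in arrival order and empties the queue."""
        queue = self._queues[instance_id]
        while queue:
            yield queue.popleft()

if __name__ == "__main__":
    storage = StorageUnit()
    storage.store("fms-2", "user-query-17")
    storage.store("fms-2", "provision-vlan-42")
    print(list(storage.drain("fms-2")))   # requests come back in the order they were stored
```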
[0050] Further, the pausing unit 214 is configured to resume the at least one paused instance of the plurality of instances. The at least one paused instance of the plurality of instances is resumed based on one of completion of the scheduled maintenance and on change in the inactive status to the active status. In an embodiment, upon pausing the at least one instance of the plurality of instances, the retrieval unit 210 is configured to receive the plurality of metrics at regular intervals to determine the maintenance status and the activity status.
[0051] Upon resuming the at least one paused instance of the plurality of instances, the retrieval unit 210 is configured to retrieve the one or more requests to be served by the at least one paused instance from the storage unit 216. The retrieved one or more requests are served by the at least one resumed instance.
[0052] For example, the instance A is scheduled for maintenance from 2:00 PM to 4:00 PM; at 2:00 PM, instance A is paused by the pausing unit 214. If the maintenance activities are completed by 3:30 PM, the pausing unit 214 resumes instance A upon completion of the maintenance at 3:30 PM. Thereafter, the instance A resumes its normal operations and starts processing any pending one or more requests stored during the maintenance window. In another example, instance B is paused at 1:00 PM due to inactivity. If at 1:30 PM, new tasks are assigned to the instance B, then the status of the instance B is changed from inactive to active. The pausing unit 214 detects this change in status and resumes instance B. Thereafter, the instance B starts processing the new tasks and any stored requests that were pending during the inactivity period. Therefore, the system 108 prevents failure in instances and reduces operational activities.
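The two resume conditions illustrated above (completion of the scheduled maintenance, or a change from the inactive status to the active status) may, as a non-limiting sketch, be checked as follows; the string values and the function name are hypothetical.

```python
def should_resume(maintenance_status, previous_activity, current_activity):
    """Resume when the scheduled maintenance is completed, or when the status
    flips from inactive back to active (the two conditions described above)."""
    maintenance_done = maintenance_status == "completed"
    became_active = previous_activity == "inactive" and current_activity == "active"
    return maintenance_done or became_active

if __name__ == "__main__":
    print(should_resume("completed", "active", "active"))        # True  (maintenance finished at 3:30 PM)
    print(should_resume("none", "inactive", "active"))           # True  (new tasks arrived at 1:30 PM)
    print(should_resume("in-progress", "inactive", "inactive"))  # False (keep the instance paused)
```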
[0053] FIG. 3 is an exemplary block diagram of an architecture 300 of the system 108 for managing the network traffic, according to one or more embodiments of the present invention.
[0054] The architecture 300 includes an operation and management unit 302, a workflow manager 304, a message broker 306, a graph database 308, a dynamic activator 310, a distributed data lake 312, a cache data store 314, a load balancer 316, a dynamic routing manager 318, a command line interface 320, and the user interface 206.
[0055] In an embodiment, the operation and management unit 302 is a centralized framework that manages the administration, control and supervision of the network traffic.
[0056] In an embodiment, the workflow manager 304 acts as a decision-making engine and is responsible for orchestrating and managing the execution of various network management tasks and processes in the FMS. The workflow manager 304 includes the plurality of instances. The workflow manager 304 retrieves the plurality of metrics corresponding to the plurality of instances from the distributed data lake 312 and the cache data store 314. The distributed data lake 312 is a centralized repository for storing large volumes of data including historical and real-time metrics related to the performance and status of the plurality of instances. The cache data store 314 provides fast access to frequently used data. The cache data store 314 stores the real-time metrics that the workflow manager 304 uses to quickly determine the status of the plurality of instances. The plurality of metrics is indicative of at least one of the maintenance status and the activity status. The maintenance status includes information corresponding to the scheduled maintenance at the predefined time. The maintenance status includes at least one of scheduled, in-progress and completed. The activity status corresponds to one of an active status and the inactive status.
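As a non-limiting illustration of the cache-first read pattern described above (the cache data store 314 for fast access, the distributed data lake 312 as the authoritative fallback), a Python sketch is given below; modelling both stores as dictionaries is an assumption made only for the sketch.

```python
def get_instance_metrics(instance_id, cache_store: dict, data_lake: dict):
    """Cache-first read: try the fast cache data store, fall back to the distributed data lake.

    Both stores are modelled here as plain dicts keyed by instance id (an assumption;
    the disclosure does not fix a storage technology).
    """
    record = cache_store.get(instance_id)
    if record is None:
        record = data_lake.get(instance_id)
        if record is not None:
            cache_store[instance_id] = record   # warm the cache for the next lookup
    return record

if __name__ == "__main__":
    lake = {"fms-1": {"maintenance_status": "scheduled", "activity_status": "active"}}
    cache = {}
    print(get_instance_metrics("fms-1", cache, lake))   # miss -> read from the data lake
    print(get_instance_metrics("fms-1", cache, lake))   # hit  -> served from the cache
```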
[0057] Upon retrieving the plurality of metrics corresponding to the plurality of instances, the workflow manager 304 parses the plurality of metrics. The workflow manager 304 parses the plurality of metrics to identify the at least one of the maintenance status and the activity status of each of the plurality of instances.
[0058] Based on identification of the at least one of the maintenance status and the activity status of each of the plurality of instances, the workflow manager 304 determines whether the at least one instance of the plurality of instances needs to be paused or whether a paused instance needs to be resumed.
[0059] In an embodiment, the dynamic activator 310 executes the decision made by the workflow manager 304. The dynamic activator 310 pauses the at least one instance of the plurality of instances based on identification of at least one of the scheduled maintenance activity from the maintenance status and the inactive status from the activity status for the at least one instance.
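A non-limiting Python sketch of this division of responsibility, in which the workflow manager 304 makes the pause/resume decision and the dynamic activator 310 executes it, is given below; the class and method names are hypothetical.

```python
class DynamicActivator:
    """Executes pause/resume decisions (hypothetical executor)."""
    def __init__(self):
        self.paused = set()

    def pause(self, instance_id):
        self.paused.add(instance_id)
        print(f"paused {instance_id}")

    def resume(self, instance_id):
        self.paused.discard(instance_id)
        print(f"resumed {instance_id}")

class WorkflowManager:
    """Decision engine: inspects parsed statuses and instructs the activator."""
    def __init__(self, activator):
        self.activator = activator

    def evaluate(self, instance_id, maintenance_status, activity_status):
        if maintenance_status in ("scheduled", "in-progress") or activity_status == "inactive":
            self.activator.pause(instance_id)
        elif instance_id in self.activator.paused:
            self.activator.resume(instance_id)

if __name__ == "__main__":
    manager = WorkflowManager(DynamicActivator())
    manager.evaluate("fms-2", "scheduled", "active")    # -> paused
    manager.evaluate("fms-2", "completed", "active")    # -> resumed
```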
[0060] In an embodiment, the one or more requests from the user interface 206 or the command line interface 320 are served by the plurality of instances. If the at least one instance of the plurality of instances is paused based on identification of at least one of the scheduled maintenance activity from the maintenance status and the inactive status from the activity status for the at least one instance, the one or more requests to be served by the at least one paused instance are stored in the distributed data lake 312 and the cache data store 314.
[0061] Further, the dynamic activator 310 resumes the at least one paused instance of the plurality of instances on one of completion of the scheduled maintenance and on change in the inactive status to the active status. Upon resuming the at least one paused instance of the plurality of instances, the one or more requests to be served by the at least one paused instance are retrieved from the distributed data lake 312 and the cache data store 314. The retrieved one or more requests are served by the at least one resumed instance.
[0062] In an embodiment, the message broker 306 manages communication between different components, ensuring efficient message queuing and processing and the graph database 308 stores and manages the relationships and statuses of various network instances and components. In an embodiment, the load balancer 316 distributes the incoming network traffic across multiple instances. In an embodiment, the dynamic routing manager 318 manages the routing of network traffic.
[0063] FIG. 4 is a flow chart diagram 400 for managing the network traffic, according to various embodiments of the present disclosure.
[0064] The flow chart diagram 400 includes a Northbound Interface (NBI) 402, an Elastic Load Balancer (ELB) 404, a plurality of FMS instances such as an FMS instance 1 406a and an FMS instance 2 406b, and a plurality of network nodes such as a network node 1 408a and a network node 2 408b.
[0065] The Northbound Interface (NBI) 402 transmits the one or more requests to the Elastic Load Balancer (ELB) 404. The NBI 402 acts as an interface for higher-level network management systems to interact with the ELB 404 and the FMS instances. The ELB 404 is a critical component in network traffic management, responsible for distributing incoming network traffic across multiple FMS instances to ensure optimal resource utilization, high availability, and reliability of network services.
[0066] The ELB 404 distributes the received one or more requests to the plurality of FMS instances such as the FMS instance 1 406a, the FMS instance 2 406b, and so on. In an embodiment, the plurality of FMS instances is connected with an Artificial Intelligence/Machine Learning (AI/ML) model. The AI/ML model maintains all the metrics of the plurality of FMS instances. The AI/ML model also monitors and analyses the metrics of the plurality of FMS instances.
[0067] In an embodiment, the AI/ML model collects and analyses the metrics from the plurality of FMS instances. Subsequently, the AI/ML model parses the collected metrics to determine the maintenance status and the activity status of each FMS instance of the plurality of FMS instances. Upon determining the maintenance status and the activity status of each FMS instance of the plurality of FMS instances, if the AI/ML model identifies that at least one FMS instance of the plurality of FMS instances is going into maintenance or downtime, then that FMS instance is paused. For example, if the FMS instance 2 406b goes into maintenance, then the FMS instance 2 406b is paused. In an embodiment, the at least one FMS instance of the plurality of FMS instances is paused automatically by the AI/ML model upon detecting the problem in order to prevent failures. Thereafter, the NBI 402 transmits the requests to at least one FMS instance of the plurality of FMS instances which is not paused, i.e., the FMS instance 1 406a.
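A non-limiting Python sketch of the routing behaviour described above, in which requests are directed only to FMS instances that are not paused, is given below; the monitoring logic of the AI/ML model is not specified by the present disclosure, so a simple set of paused instance identifiers stands in for it, and all names are hypothetical.

```python
import itertools

class ElasticLoadBalancerStub:
    """Routes each request to the next FMS instance that is not marked as paused.

    This stands in for the behaviour of the ELB 404 described above; the AI/ML
    monitoring that sets the paused flags is represented by a plain set.
    """
    def __init__(self, instances):
        self.instances = instances
        self.paused = set()
        self._rr = itertools.cycle(instances)

    def route(self, request):
        for _ in range(len(self.instances)):
            candidate = next(self._rr)
            if candidate not in self.paused:
                return candidate, request
        raise RuntimeError("no active FMS instance available")

if __name__ == "__main__":
    elb = ElasticLoadBalancerStub(["FMS-1", "FMS-2"])
    elb.paused.add("FMS-2")                    # e.g. maintenance detected for FMS instance 2
    print(elb.route("northbound-request-1"))   # ('FMS-1', 'northbound-request-1')
    print(elb.route("northbound-request-2"))   # FMS-2 is skipped; FMS-1 serves again
```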
[0068] In an embodiment, the AI/ML model resumes the paused FMS instance upon completion of the maintenance. Further, upon resuming the paused FMS instance, the requests that were to be served by the paused FMS instance are completed by the resumed FMS instance. For example, upon resuming the FMS instance 2 406b, the FMS instance 2 406b processes the requests which were pending while the FMS instance 2 406b was paused.
[0069] FIG. 5 is a flow diagram of a method 500 for managing the network traffic, according to various embodiments of the present disclosure. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0070] At step 502, the method 500 includes the step of retrieving the plurality of metrics corresponding to the plurality of instances of the FMS by the retrieval unit 210. The plurality of metrics is indicative of at least one of the maintenance status and the activity status. The activity status corresponds to one of the active status and the inactive status. The maintenance status includes information corresponding to the scheduled maintenance at the predefined time.
[0071] At step 504, the method 500 includes the step of parsing the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances by the parsing unit 212.
[0072] At step 506, the method 500 includes the step of pausing at least one instance of the plurality of instances based on identification of at least one of the scheduled maintenance activity from the maintenance status and the inactive status from the activity status for the at least one instance by the pausing unit 214. Upon pausing the at least one instance of the plurality of instances, the one or more requests to be served by the at least one paused instance are stored in the storage unit 216. Thereafter, the at least one paused instance of the plurality of instances is resumed based on one of completion of the scheduled maintenance and on change in the inactive status to the active status. Upon resuming the at least one paused instance of the plurality of instances, the one or more requests to be served by the at least one paused instance are retrieved from the storage unit 216. The retrieved one or more requests are served by the at least one resumed instance.
[0073] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to retrieve a plurality of metrics corresponding to a plurality of instances of the FMS. The plurality of metrics is indicative of at least one of the maintenance status and the activity status. The processor 202 is further configured to parse, the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances. The processor 202 is further configured to pause at least one instance of the plurality of instances based on identification of one of the scheduled maintenance activity from the maintenance status and the inactive status from the activity status for the at least one instance.
[0074] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0075] The present disclosure provides the technical advancement of achieving optimal load distribution, maintaining session persistence, and enabling dynamic scalability and failover. In particular, the present disclosure prevents failure in instances and reduces operational activities.
[0076] The present invention offers multiple advantages over the prior art, and the above-listed advantages are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0077] Environment - 100
[0078] User Equipment (UE)– 102
[0079] Server - 104
[0080] Network – 106
[0081] System – 108
[0082] Processor – 202
[0083] Memory – 204
[0084] User Interface – 206
[0085] Database – 208
[0086] Retrieval unit– 210
[0087] Parsing unit – 212
[0088] Pausing unit – 214
[0089] Storage unit – 216
[0090] Operation and Management Unit- 302
[0091] Workflow Manager- 304
[0092] Message Broker- 306
[0093] Graph Database- 308
[0094] Dynamic Activator- 310
[0095] Distributed Data Lake- 312
[0096] Cache data store- 314
[0097] Load balancer- 316
[0098] Dynamic routing manager- 318
[0099] Command line interface- 320
CLAIMS
We Claim:
1. A method (500) of managing network traffic, the method (500) comprising the steps of:
retrieving, by one or more processors (202), a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS), the plurality of metrics is indicative of at least one of a maintenance status and an activity status;
parsing, by the one or more processors (202), the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances; and
pausing, by the one or more processors (202), at least one instance of the plurality of instances based on identification of at least one of a scheduled maintenance activity from the maintenance status and an inactive status from the activity status for the at least one instance.
2. The method (500) as claimed in claim 1, wherein the activity status corresponds to one of an active status and the inactive status.
3. The method (500) as claimed in claim 1, wherein the maintenance status includes information corresponding to the scheduled maintenance at a predefined time.
4. The method (500) as claimed in claim 1, comprises the step of resuming, by the one or more processors (202), the at least one paused instance of the plurality of instances on one of completion of the scheduled maintenance and on change in the inactive status to the active status.
5. The method (500) as claimed in claim 1, wherein upon pausing the at least one instance of the plurality of instances the method comprises the step of storing, by the one or more processors (202), one or more requests to be served by the at least one paused instance in a storage unit (216).
6. The method (500) as claimed in claim 4, wherein upon resuming of the at least one paused instance of the plurality of instances comprises the step of retrieving, by the one or more processors (202), the one or more requests to be served by the at least one paused instance from the storage unit (216), wherein the retrieved one or more requests are served by the at least one resumed instance.
7. A system (108) for managing network traffic, the system (108) comprises:
a retrieval unit (210) configured to retrieve, a plurality of metrics corresponding to a plurality of instances of a Fulfilment Management System (FMS), the plurality of metrics indicative of at least one of a maintenance status and an activity status;
a parsing unit (212) configured to parse, the plurality of metrics to identify at least one of the maintenance status and the activity status of each of the plurality of instances; and
a pausing unit (214) configured to pause, at least one instance of the plurality of instances based on identification of at least one of a scheduled maintenance activity from the maintenance status and an inactive status from the activity status for the at least one instance.
8. The system (108) as claimed in claim 7, wherein the activity status corresponds to one of an active status and the inactive status.
9. The system (108) as claimed in claim 7, wherein the maintenance status includes information corresponding to the scheduled maintenance at a predefined time.
10. The system (108) as claimed in claim 7, wherein the pausing unit is configured to resume, the at least one paused instance of the plurality of instances on one of completion of the scheduled maintenance and on change in the inactive status to the active status.
11. The system (108) as claimed in claim 7, comprising
a storage unit (216) configured to store, one or more requests to be served by the at least one paused instance therein upon pausing the at least one instance of the plurality of instances; and
the retrieval unit (210) configured to retrieve, the one or more requests to be served by the at least one paused instance from the storage unit (216) upon resuming of the at least one paused instance of the plurality of instances, wherein the retrieved one or more requests are served by the at least one resumed instance.