Abstract: Disclosed is a method (500) for monitoring data files ingestion in a multi-vendor network (105). The method includes monitoring an ingestion of a plurality of data files from a first edge node into a Distributed File System (DFS) (206). The method further includes comparing identification information associated with ingested data files with identification information associated with the plurality of data files upon a determination that a count of the ingested data files is less than a specific count. Further, the method includes identifying non-ingested data files based on the comparison and switching from the first edge node to at least one second edge node. Further, the method includes transmitting, to the at least one second edge node, a request to retrieve the non-ingested data files from an Element Management System (EMS) (101) and monitoring the ingestion of the non-ingested data files from the at least one second edge node. FIG. 5
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
SYSTEM AND METHOD FOR MONITORING DATA FILES INGESTION IN A MULTI-VENDOR NETWORK
Jio Platforms Limited, an Indian company, having registered address at Office -101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[0001] The embodiments of the present disclosure generally relate to a field of wireless communication networks. More particularly, the present disclosure relates to a system and a method for monitoring data files ingestion in a multi-vendor network.
BACKGROUND OF THE INVENTION
[0002] The subject matter disclosed in the background section should not be assumed or construed to be prior art merely because of its mention in the background section. Similarly, any problem statement mentioned in the background section or its association with the subject matter of the background section should not be assumed or construed to have been previously recognized in the prior art.
[0003] In wireless communication networks, where millions of nodes are deployed across diverse environments, ensuring the integrity and efficiency of data collection is of utmost importance for monitoring network health and performance. Element Management Systems (EMS) function as control centers for collecting the data from all the nodes in a multi-vendor environment. This data collection process from all the nodes is a complex task which includes collecting the data related to network performance including fault detection, configuration changes, accounting records, performance metrics, and security events from the nodes dispersed throughout the network.
[0004] To facilitate the data collection process, the EMS employs two primary methodologies including periodically pushing the data to an Operations Support System (OSS) tool through a North Bound Interface (NBI) and on-demand pulling of the data by the OSS tool from the EMS via the NBI for performance management purposes. However, in scenarios where edge nodes utilized in the NBI encounter issues such as overload, process hang-ups, or connectivity disruptions, the data collection process suffers. These issues impede a timely and accurate ingestion of the data, thereby hindering data integrity and the network performance.
[0005] Heretofore, in conventional data collection methods, redundant edge nodes are used to ensure data collection continuity if a primary edge node fails. These conventional data collection methods require manual configuration and are not effective in addressing real-time issues. Further, failover systems in these methods result in delays during the transition from one edge node to another edge node, which impacts the data integrity and timeliness of the data. Further, existing monitoring tools detect anomalies such as node overloads or connectivity problems and trigger alerts, but they are not capable of quickly adapting to evolving network conditions or identifying subtle performance degradation.
[0006] Therefore, to overcome the aforementioned challenges and limitations, there lies a need for an improved system and method for monitoring data files ingestion in a multi-vendor network to ensure uninterrupted data collection.
SUMMARY
[0007] The summary is provided to introduce aspects and embodiments related to techniques for generating ingestion alerts corresponding to one or more missing data instances. Particularly, this section is provided to introduce a selection of concepts in a simplified format that is further described in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the disclosed subject matter nor is it intended for use in determining or limiting the scope of the disclosed subject matter.
[0008] According to an aspect of the present disclosure, disclosed herein is a method for monitoring data files ingestion in a multi-vendor network. The method includes monitoring, by a monitoring module of a server, an ingestion of a plurality of data files from a first edge node of one or more edge nodes into a Distributed File System (DFS). The method further includes determining, by a determination module of the server based on the monitoring, whether a count of ingested data files from the plurality of data files in the DFS is less than a specific count of data files over a predefined time interval. Further, the method includes comparing, by a data processing module of the server, identification information associated with the ingested data files with identification information associated with the plurality of data files based on a result of the determination that the count of the ingested data files in the DFS is less than the specific count of the data files. Further, the method includes identifying, by the data processing module, a set of non-ingested data files from the plurality of data files based on the comparison. Furthermore, the method includes switching, by the data processing module, upon the identification of the set of the non-ingested data files, from the first edge node to at least one second edge node among the one or more edge nodes. Furthermore, the method includes transmitting, by a transmitting module of the server to the at least one second edge node, a request to retrieve the set of non-ingested data files from an Element Management System (EMS) and ingest the set of the non-ingested data files into the DFS. Thereafter, the method includes monitoring, by the monitoring module, based on an acceptance of the transmitted request by the at least one second edge node, the ingestion of the set of non-ingested data files from the at least one second edge node into the DFS.
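By way of illustration only, and not as part of the claimed subject matter, the overall flow of the method described above may be sketched as follows. All function and variable names are hypothetical, and the file records are assumed to carry a "file_name" key as their identification information:

```python
# Illustrative sketch of the monitoring-and-failover flow (hypothetical names;
# not part of the claimed method).
def monitor_ingestion(expected_files, ingested_files, edge_nodes, active_node,
                      specific_count):
    """Return the next active edge node and the set of files to re-request."""
    # Monitor the ingestion and determine whether the count of ingested data
    # files is less than the specific count over the predefined time interval.
    if len(ingested_files) >= specific_count:
        return active_node, []  # ingestion is healthy; no action needed

    # Compare identification information (here, file names) of the ingested
    # data files with that of the plurality of data files, and identify the
    # set of non-ingested data files.
    ingested_ids = {f["file_name"] for f in ingested_files}
    non_ingested = [f for f in expected_files
                    if f["file_name"] not in ingested_ids]

    # Switch from the first edge node to at least one second edge node.
    candidates = [n for n in edge_nodes if n != active_node]
    new_node = candidates[0] if candidates else active_node

    # A transmitting module would then send a request to new_node to retrieve
    # non_ingested from the EMS and ingest the files into the DFS.
    return new_node, non_ingested
```

The sketch condenses the transmit-and-monitor steps into a return value; in the disclosed system those steps are performed by the transmitting module and the monitoring module, respectively.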
[0009] In one or more implementations, the plurality of data files comprises one or more of performance management data files, fault data files, configuration data files, accounting data files, and security data files.
[0010] In one or more implementations, the one or more edge nodes periodically retrieve the plurality of data files from the EMS.
[0011] In one or more implementations, the identification information associated with the ingested data files and the plurality of data files comprises at least one of a file name, a unique file identifier, or a timestamp associated with each of the data files.
[0012] In one or more implementations, the identification information associated with the ingested data files and the plurality of data files is retrieved from a database.
[0013] In one or more implementations, identifying, by the data processing module, the set of non-ingested data files includes identifying a mismatch between the ingested data files and the plurality of data files based on a result of the comparison between the identification information associated with the ingested data files and the identification information associated with the plurality of data files, and identifying the set of non-ingested data files based on the identified mismatch.
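For illustration only, the mismatch identification of this implementation may be sketched as a set difference over the identification information. The function name is hypothetical; file names are used here, though unique file identifiers or timestamps could serve equivalently:

```python
def identify_non_ingested(expected_ids, ingested_ids):
    """Identify the set of non-ingested data files as the mismatch between
    the identification information of the plurality of data files
    (expected_ids) and that of the ingested data files (ingested_ids)."""
    # Identifiers expected in the DFS but absent from the ingested set.
    return sorted(set(expected_ids) - set(ingested_ids))
```

For example, if three performance management files are expected but only one was ingested, the remaining two are identified as non-ingested.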
[0014] In one or more implementations, for switching from the first edge node to the at least one second edge node, the method includes selecting, by the data processing module, the at least one second edge node among the plurality of edge nodes in a round robin manner. Further, the method includes switching, by the data processing module, from the first edge node to the selected at least one second edge node.
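The round-robin selection of this implementation may be sketched as follows, for illustration only and with hypothetical names. The failed (first) edge node is skipped while the remaining nodes are cycled through in order:

```python
import itertools

def round_robin_selector(edge_nodes):
    """Build a selector that cycles through the edge nodes in a
    round-robin manner, skipping an excluded (failed) node."""
    pool = itertools.cycle(edge_nodes)

    def next_node(exclude=None):
        # Advance the rotation until a node other than the excluded one
        # is found; that node becomes the second edge node.
        while True:
            node = next(pool)
            if node != exclude:
                return node
    return next_node
```

Successive calls with the same excluded node rotate evenly over the remaining edge nodes, distributing retry load rather than always falling back to the same node.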
[0015] According to another aspect of the present disclosure, disclosed herein is a system for monitoring data file ingestion in a multi-vendor network. The system comprises a monitoring module, a determination module, a data processing module, and a transmitting module. The monitoring module is configured to monitor an ingestion of a plurality of data files from a first edge node of one or more edge nodes into a Distributed File System (DFS). The determination module is configured to determine, based on the monitoring, whether a count of ingested data files from the plurality of data files in the DFS is less than a specific count of data files over a predefined time interval. The data processing module is configured to compare identification information associated with the ingested data files with identification information associated with the plurality of data files based on a result of the determination that the count of the ingested data files in the DFS is less than the specific count of the data files. The data processing module is further configured to identify a set of non-ingested data files from the plurality of data files based on the comparison. Further, the data processing module is configured to switch, upon the identification of the set of the non-ingested data files, from the first edge node to at least one second edge node among the one or more edge nodes. The transmitting module is configured to transmit to the at least one second edge node, a request to retrieve the set of non-ingested data files from an Element Management System (EMS) and ingest the set of the non-ingested data files into the DFS. The monitoring module is further configured to monitor, based on an acceptance of the transmitted request by the at least one second edge node, the ingestion of the set of non-ingested data files from the at least one second edge node into the DFS.
BRIEF DESCRIPTION OF DRAWINGS
[0016] Various embodiments disclosed herein will become better understood from the following detailed description when read with the accompanying drawings. The accompanying drawings constitute a part of the present disclosure and illustrate certain non-limiting embodiments of inventive concepts. Further, components and elements shown in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. For consistency and ease of understanding, similar components and elements are annotated by reference numerals in the exemplary drawings.
[0017] FIG. 1 illustrates a diagram depicting a communication environment in a multi-vendor network, in accordance with an embodiment of the present disclosure.
[0018] FIG. 2 illustrates a block diagram depicting a communication system for monitoring data files ingestion in the multi-vendor network, in accordance with an embodiment of the present disclosure.
[0019] FIG. 3 illustrates a block diagram of an application server, in accordance with an embodiment of the present disclosure.
[0020] FIG. 4 illustrates a line diagram depicting a flow of monitoring the data files ingestion in the multi-vendor network, in accordance with an embodiment of the present disclosure.
[0021] FIG. 5 illustrates a flowchart of a method for monitoring the data files ingestion in the multi-vendor network, in accordance with an embodiment of the present disclosure.
[0022] FIG. 6 illustrates a schematic block diagram of a computing system for monitoring the data files ingestion in the multi-vendor network, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Inventive concepts of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of one or more embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Further, the one or more embodiments disclosed herein are provided to describe the inventive concept thoroughly and completely, and to fully convey the scope of each of the present inventive concepts to those skilled in the art. Furthermore, it should be noted that the embodiments disclosed herein are not mutually exclusive concepts. Accordingly, one or more components from one embodiment may be tacitly assumed to be present or used in any other embodiment.
[0024] The following description presents various embodiments of the present disclosure. The embodiments disclosed herein are presented as teaching examples and are not to be construed as limiting the scope of the present disclosure. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified, omitted, or expanded upon without departing from the scope of the present disclosure.
[0025] The following description contains specific information pertaining to embodiments in the present disclosure. The detailed description uses the phrases “in some embodiments” or “some implementations” which may each refer to one or more or all of the same or different embodiments or implementations. The term “some” as used herein is defined as “one, or more than one, or all.” Accordingly, the terms “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” In view of the same, the terms, for example, “in an embodiment” or “in an implementation” refer to one embodiment or one implementation and the term, for example, “in one or more embodiments” refers to “at least one embodiment, or more than one embodiment, or all embodiments.” Further, the term, for example, “in one or more implementations” refers to “at least one implementation, or more than one implementation, or all implementations.”
[0026] The term “comprising,” when utilized, means “including, but not necessarily limited to;” it specifically indicates open-ended inclusion of the one or more listed features or elements, unless otherwise stated with limiting language. Furthermore, to the extent that the terms “includes,” “has,” “have,” “contains,” and other similar words are used in the detailed description, such terms are intended to be inclusive in a manner similar to the term “comprising.”
[0027] In the following description, for the purposes of explanation, various specific details are set forth to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features.
[0028] The description provided herein discloses exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the present disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing any of the exemplary embodiments. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it may be understood by one of ordinary skill in the art that the embodiments disclosed herein may be practiced without these specific details.
[0029] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein in the description, the singular forms "a", "an", and "the" include plural forms unless the context of the invention indicates otherwise.
[0030] The terminology and structure employed herein are for describing, teaching, and illuminating some embodiments and their specific features and elements and do not limit, restrict, or reduce the scope of the present disclosure. Accordingly, unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[0031] An object of the present disclosure is to provide a system and a method for automatically detecting missing data files and initiating a data file pull process through edge nodes to ensure data availability in a Distributed File System (DFS).
[0032] Another object of the present disclosure is to provide a system and a method that regularly monitors the number of data files in the DFS and compares the number of the data files with an expected number of the data files after a set time interval to identify unavailable or missing data files.
[0033] Yet another object of the present disclosure is to provide a system and a method that provides a backup mechanism where, if a main edge node fails to retrieve the data file, another available edge node automatically takes over and submits a data file pull request to a source.
[0034] Several key terms used in the description play pivotal roles in facilitating the system functionality. In order to facilitate an understanding of the description, the key terms are defined below.
[0035] The term “node” in the entire disclosure may refer to a network entity that plays a role in data transmission, processing or data management within a communication network.
[0036] The term “edge node” in the entire disclosure may refer to a computing device located at a periphery of a distributed network architecture that performs localized data processing and data file management operations, including data file pulling, monitoring, and submission to the DFS. The edge nodes may include, but are not limited to, nodes, gateways, or micro data centers.
[0037] The term “Element Management Systems (EMS)” in the entire disclosure may refer to a centralized platform that enables real-time status monitoring and management of network connected elements or edge devices and is responsible for managing, storing, and providing access to the data files that are pulled by the edge nodes for further processing and distribution.
[0038] The term “DFS” in the entire disclosure may refer to a storage architecture where the data files are stored across multiple nodes or servers in a distributed manner, allowing for redundancy, scalability, and efficient data file retrieval and management.
[0039] The term “multi-vendor network” in the entire disclosure may refer to a network architecture comprising a plurality of network equipment and devices associated with different vendors to fulfil diverse operational and technological requirements. For instance, a telecommunication service provider may deploy a plurality of routers corresponding to a first vendor, switches corresponding to a second vendor, and access points corresponding to a third vendor within the network architecture. The vendor in the multi-vendor network may correspond to service providers who are responsible for fulfilling the operational and the technological requirements and may host services including, but not limited to, invoicing, streamlining a company’s wireless services, and providing network continuity.
[0040] The term “ingestion” in the entire disclosure may refer to a process of acquiring, retrieving, and transferring the data files from a source system (the EMS) to a designated location (the DFS) within a network infrastructure.
[0041] The term “ingested data files” in the entire disclosure may refer to the data files that are successfully pulled from the EMS by the edge node and subsequently pushed into the DFS by the edge node.
[0042] The term “non-ingested data files” in the entire disclosure may refer to the data files that are expected to be ingested into the DFS, but the data files are either not retrieved from the EMS by the edge node or were lost due to system failures, network disruptions, or processing errors.
[0043] The present disclosure relates to the system and the method for monitoring data files ingestion in the multi-vendor network. More specifically, the present disclosure addresses challenges in performance management where the edge nodes, responsible for fetching the data files from the EMS and transmitting the data files to the DFS, may experience failures due to overload, process hanging, or connectivity issues. To maintain data integrity, the system and the method introduce an automated monitoring and failover mechanism that may detect data file discrepancies and dynamically switch to an alternative edge node for the data file retrieval, ensuring uninterrupted data collection and ingestion.
[0044] Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings. FIG. 1 through FIG. 6, discussed below, and the one or more embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed in any way to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.
[0045] FIG. 1 illustrates a diagram depicting a communication environment 100 in a multi-vendor network 105, in accordance with an embodiment of the present disclosure. The communication environment 100 shown in FIG. 1 is for illustration only. The communication environment 100 includes a plurality of vendor-associated EMSs 101’ through 104’ (hereinafter also referred to as EMS 101), a plurality of nodes 101” through 104”, the multi-vendor network 105, and a North Bound Interface (NBI) 110.
[0046] As shown in FIG. 1, the multi-vendor network 105 may communicate with the EMS 101 via a vendor specific or a standard interface 115. The EMS 101 may communicate with a Network Management System (NMS) (not shown in Figure) via the NBI 110. The NMS corresponds to a higher-level management component within the communication network, which may be configured to perform network management functions in coordination with EMS servers. Such network management functions may include, but are not limited to, aggregating data from the EMSs and providing information related to the network's health and performance to an end user.
[0047] The EMS 101 may be configured to monitor, manage, and control individual Network Elements (NEs) within the communication network. The EMS 101 may be further configured to configure the NEs, perform fault management, and optimize performance of the communication network to ensure smooth network operations.
[0048] The NBI 110 may be configured to facilitate data exchange between the EMS 101 and the NMS. The NBI 110 may correspond to one of an Application Programming Interface (API) or a protocol that allows a lower-level network component such as the EMS to communicate with the higher-level or a more central component such as the NMS. Although FIG. 1 illustrates one example of the communication environment 100, various changes may be made to FIG. 1. For example, the communication environment 100 may include any number of EMSs and any number of NBIs in any suitable arrangement.
[0049] FIG. 2 illustrates a block diagram depicting a communication system 200 for monitoring the data files ingestion in the multi-vendor network 105, in accordance with an embodiment of the present disclosure.
[0050] The communication system 200 comprises the EMS 101, an edge node 204, a DFS 206, and an application server 208.
[0051] The EMS 101 acts as a centralized system that manages and monitors the NEs across the multi-vendor environment. The EMS 101 collects the data files related to the data such as performance management data, fault data, configuration data, accounting data, and security data from multiple network nodes and provides the data files to connected edge nodes for further processing and storage.
[0052] The edge node 204 (hereinafter may also be referred to as the “edge nodes 204” or “one or more edge nodes 204”) serves as an intermediary between the EMS 101 and the DFS 206, pulling or receiving the data files from the EMS 101 and processing them as needed before transferring the files to the DFS 206. In case of connectivity issues or overload, the edge node 204 may switch roles with other edge nodes 204 as monitored by the application server 208.
[0053] The DFS 206 provides scalable and reliable storage for the data files collected from the edge node 204 and stores the data files in a distributed manner, ensuring high availability and fault tolerance. The DFS 206 further supports the ingestion of large volumes of the data files and ensures redundancy across multiple nodes for the data integrity.
[0054] The application server 208 (may also be referred to as “server 208”) hosts various modules responsible for monitoring and managing the integrity of the data ingestion across the communication system 200 and executes processes that oversee data file counts, detect the data file discrepancies, and trigger corrective actions such as the switching of the edge node 204. The application server 208 continuously monitors the number of data files ingested into the DFS 206 against the expected number of data files and executes monitoring tasks and manages decision-making processes for switching the edge node 204 when the data file discrepancies are detected.
[0055] The communication system 200 optimizes data flow, ensuring that the data files are accurately collected, processed, and stored in the DFS 206, with minimal disruption or data loss. Although FIG. 2 illustrates one example of the communication system 200 for an automatic edge node 204 switching in the multi-vendor network 105, various changes may be made to FIG. 2. For example, the communication system 200 may include any number of the EMSs, any number of the servers, and any number and type of the edge nodes 204 in any suitable arrangement.
[0056] FIG. 3 illustrates a block diagram of the application server 208, in accordance with an embodiment of the present disclosure. The embodiment of the application server 208 shown in FIG. 3 is for illustration only. Other embodiments of the application server 208 may be used without departing from the scope of this disclosure.
[0057] The application server 208 (may also be referred to as a “system 208”) includes one or more processors 302 (hereinafter also referred to as “processor 302”), a memory 304, processing modules 306, a communication interface 308, and an Input-Output (I/O) interface 310 coupled to each other via a first communication bus 312.
[0058] The processor 302 may include various processing circuitry and communicates with the memory 304 and the communication interface 308. The processor 302 is configured to execute instructions stored in the memory 304 and to perform various processes. The processor 302 may include an intelligent hardware device including a general-purpose processor, such as, for example, and without limitation, a Central Processing Unit (CPU), an Application Processor (AP), a dedicated processor, or the like, a graphics-only processing unit such as a Graphics Processing Unit (GPU), a microcontroller, a Field-Programmable Gate Array (FPGA), a programmable logic device, a discrete hardware component, or any combination thereof. In some cases, the processor 302 may be configured to operate a memory array using a memory controller. In some cases, a memory controller may be integrated into the processor 302. The processor 302 may be configured to execute computer-readable instructions stored in the memory 304 to cause the system 208 to perform various functions for monitoring the data files ingestion in the multi-vendor network 105.
[0059] The memory 304 may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of Electrically Programmable Memories (EPROM) or Electrically Erasable and Programmable (EEPROM) Memories. In addition, the memory 304 may, in some examples, be considered a non-transitory storage medium. The "non-transitory" storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the memory 304 is non-movable. In some examples, the memory 304 may be configured to store larger amounts of information. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
[0060] The communication interface 308 may facilitate communication of the system 208 with various devices connected to it. The communication interface 308 may also provide a communication pathway for one or more components of the system 208. Examples of such components include, but are not limited to, the processing module(s) 306.
[0061] In an embodiment, the processing module(s) 306 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the system 208. In non-limiting examples, described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing module(s) 306 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor 302 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing module(s) 306. In such examples, the system 208 may also comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 208 and the processing resource. In other examples, the processing module(s) 306 may be implemented using electronic circuitry.
[0062] Further, the communication interface 308 includes an electronic circuit specific to a standard that enables wired or wireless communication. The communication interface 308 is configured to communicate internally between internal hardware components. The communication interface 308 may be further configured to communicate with external devices via the communication network. The communication interface 308 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a Radio Frequency (RF) interface, a Universal Serial Bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like.
[0063] The I/O interface 310 may include suitable logic, circuitry, interfaces, and/or codes that may be configured to receive input(s) and present (or display) output(s) on the system 208. For example, the I/O interface 310 may have an input interface (not shown) and an output interface (not shown). The input interface may be configured to enable the user to provide input(s) to trigger (or configure) the system 208 for performing data processing operation(s). Examples of the I/O interface 310 may include, but are not limited to, a touch interface, a mouse, a keyboard, a motion recognition unit, a gesture recognition unit, a voice recognition unit, or the like. The output interface may be configured to display (or present) output(s) generated (or provided) by the system 208. In some aspects of the present disclosure, the output interface may provide the output(s) based on an instruction provided by the user of the system 208, by way of the input interface. Examples of the output interface may include, but are not limited to, a digital display, an analog display, a touch screen display, an appearance of a desktop, and/or illuminated characters. Aspects of the present disclosure are intended to include or otherwise cover any type of the input interface and the output interface in the I/O interface 310, including known, related art, and/or later developed technologies without deviating from the scope of the present disclosure.
[0064] In one or more embodiments, the processing modules 306 may include one or more units/modules selected from any of a monitoring module 306-1, a determination module 306-2, a data processing module 306-3, and a transmitting module 306-4, coupled to each other by way of a second communication bus 314.
[0065] The processing modules 306 are configured to perform functions such as monitoring the data file ingestion, detecting the data file discrepancies, initiating the corrective actions, and ensuring data consistency across the system 208.
[0066] Referring to FIG. 3, the monitoring module 306-1 may be configured to monitor the ingestion of the data files from a first edge node among the edge nodes 204 into the DFS 206. The determination module 306-2 may be configured to determine whether a count of ingested data files in the DFS 206 is less than a specific count of the data files over a predefined time interval. In an implementation, the specific count of data files may correspond to a threshold count of data files. In another implementation, the specific count of data files may correspond to an expected or a preconfigured count of data files.
[0067] The data processing module 306-3 may be configured to compare identification information associated with the ingested data files with identification information associated with the data files upon determining that the count of the ingested data files in the DFS 206 is less than the specific count of the data files. The data processing module 306-3 may be further configured to identify a set of non-ingested data files based on the comparison and switch from the first edge node to a second available edge node among the edge nodes 204 upon the identification of the set of the non-ingested data files.
[0068] The transmitting module 306-4 may be configured to transmit to the second available edge node, the data file pull request to retrieve the set of non-ingested data files from the EMS 101 and ingest the set of the non-ingested data files into the DFS 206. The monitoring module 306-1 may be further configured to monitor the ingestion of the set of non-ingested data files from the second available edge node into the DFS 206 based on an acceptance of the transmitted data file pull request by the second available edge node.
[0069] For instance, the data processing module 306-3 may retrieve the identification information associated with the ingested data files and the identification information associated with the data files from the database (not shown in FIGs) to identify the set of non-ingested data files. The data processing module 306-3 may be further configured to identify a mismatch between the ingested data files and the data files based on a result of the comparison between the identification information associated with the ingested data files and the identification information associated with the data files. The set of non-ingested data files is identified based on the identified mismatch.
[0070] Further, the data processing module 306-3 may be configured to select the second available edge node among multiple edge nodes in a round robin manner. Round robin is a method used to distribute tasks evenly across available resources. Using the round robin method, the second available edge node is selected in a cyclic order to balance the data file retrieval load across the edge nodes 204.
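The cyclic selection described above can be sketched as follows. This is a minimal illustration only; the names `EdgeNodeSelector`, `nodes`, and `next_node` are hypothetical and do not appear in the specification, which leaves the concrete selection mechanism open:

```python
from itertools import cycle

class EdgeNodeSelector:
    """Selects the next available edge node in a cyclic (round robin) order."""

    def __init__(self, nodes):
        # 'nodes' is the ordered list of edge node identifiers, e.g. ["A", "B", "C"]
        self._cycle = cycle(nodes)

    def next_node(self, exclude=None):
        # Return the next node in cyclic order, skipping the node that
        # failed to ingest (the first edge node being switched away from).
        node = next(self._cycle)
        if node == exclude:
            node = next(self._cycle)
        return node

selector = EdgeNodeSelector(["A", "B", "C"])
assert selector.next_node(exclude="A") == "B"   # skip the failed node A
assert selector.next_node() == "C"              # continue in cyclic order
```

Because `itertools.cycle` wraps around indefinitely, the selector naturally returns to earlier nodes once the list is exhausted, matching the cyclic order described in the paragraph above.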
[0071] Thus, the processing modules 306 handle the computational tasks required for monitoring operations, managing requests, and processing data, are responsible for executing the modules' algorithms and workflows, and are configured to execute programs and other processes stored in the memory 304.
[0072] Although FIG. 3 illustrates one example of the system 208, various changes may be made to FIG. 3. For example, the system 208 may include any number of components in addition to the components shown in FIG. 3. Further, various components in FIG. 3 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
[0073] FIG. 4 illustrates a line diagram 400 depicting a flow for monitoring the data files ingestion in the multi-vendor network 105, in accordance with an embodiment of the present disclosure.
[0074] At step 402, the edge node 204 sends the data file pull request to the EMS 101 to fetch the required data files.
[0075] At step 404, upon the EMS 101 receiving the request, the edge node 204 pulls the required data files from the EMS 101.
[0076] At step 406, once the data files are pulled from the EMS 101, the edge node 204 copies the data files into the DFS 206.
[0077] At step 408, the application server 208 starts monitoring the ingestion of the data files into the DFS 206. The application server 208 monitors the ingestion of the data files for the predefined time interval.
[0078] At step 410, the application server 208 requests, from the DFS 206, a list of the data files copied to the DFS 206 in the predefined time interval.
[0079] At step 412, the DFS 206 sends the list of the data files to the application server 208.
[0080] At step 414, the application server 208 compares the list of the data files in the DFS 206 against the expected data files to check whether all the required files have been ingested successfully.
[0081] At step 416, if the expected number of the data files does not match an actual data file count in the DFS 206 within the predefined time interval, the application server 208 determines that the data files are missing and submits the data file pull request to another available edge node to pull the missing data files to ensure data completeness. The other available edge node is selected in the round robin manner.
[0082] At step 418, the application server 208 starts monitoring the ingestion of the missing data files into the DFS 206 by the selected edge node.
[0083] At step 420, the selected edge node sends the data file pull request to the EMS 101 to pull the missing data files from the EMS 101 on accepting the request received from the application server 208.
[0084] At steps 422 and 424, the selected edge node pulls the missing data files from the EMS 101 and the missing data files are copied to the DFS 206.
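The end-to-end flow of steps 402–424 can be sketched as a short simulation. This is a hedged illustration under assumed names: `monitor_ingestion`, `fake_pull`, and the five-file data set are hypothetical stand-ins, not part of the disclosed system:

```python
def monitor_ingestion(expected_files, pull, edge_nodes):
    """Sketch of steps 402-424: an edge node pulls files from the EMS and
    copies them into the DFS; the application server then compares the DFS
    listing with the expected files and, if any are missing, submits a pull
    request for them to the next available edge node."""
    dfs = set()                               # stand-in for files in the DFS
    missing = set(expected_files)
    for node in edge_nodes:                   # round-robin order of nodes
        dfs |= pull(node, missing)            # steps 402-406 / 420-424
        missing = set(expected_files) - dfs   # steps 410-414: list vs. expected
        if not missing:                       # all required files ingested
            break
    return dfs, missing

# Hypothetical EMS behaviour: node "A" drops two files; node "B" succeeds.
files = {f"file_{i:03d}" for i in range(1, 6)}
def fake_pull(node, wanted):
    dropped = {"file_004", "file_005"} if node == "A" else set()
    return set(wanted) - dropped

ingested, missing = monitor_ingestion(files, fake_pull, ["A", "B"])
assert missing == set() and ingested == files
```

Note that the re-request at step 416 targets only the missing files, which is why `pull` receives the `missing` set rather than the full expected list.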
[0085] FIG. 5 illustrates a flowchart of method 500 for monitoring the data files ingestion in the multi-vendor network 105, in accordance with an embodiment of the present disclosure. The method 500 comprises a series of operation steps indicated by blocks 502 through 514. The method 500 starts at block 502.
[0086] At block 502, the monitoring module 306-1 may monitor the ingestion of the data files from the edge nodes 204 into the DFS 206. The data files may comprise performance management data files, fault data files, configuration data files, accounting data files, and security data files from the EMS 101. In an implementation, in an ingestion process, the edge nodes 204 periodically fetch the data files from the EMS 101 and transmit the fetched data files into the DFS 206.
[0087] At block 504, the determination module 306-2 may determine whether the count of ingested data files in the DFS 206 is less than the specific count of the data files over the predefined time interval. In an implementation, if the count of the data files is as expected, then the method 500 comprises confirming a successful ingestion of the data files in the DFS 206. In a non-limiting example, if the expected or the specific count of the data files in the DFS 206 is 100 but only 95 data files are ingested, the determination module 306-2 detects a shortfall of 5 data files.
[0088] At block 506, the data processing module 306-3 may compare the identification information corresponding to the ingested data files with the identification information associated with the data files upon determining that the count of the ingested data files in the DFS 206 is less than the specific count of the data files. The identification information associated with the ingested data files and the data files is retrieved from the database and comprises a file name, a unique file identifier, or a timestamp associated with each of the data files. In a non-limiting example, if the data files are named file_001, file_002, …, file_100, the data processing module 306-3 may find, based on the comparison, that only file_001 to file_095 are ingested in the DFS 206.
[0089] At block 508, the data processing module 306-3 may identify the set of non-ingested data files from multiple data files based on the comparison. In a non-limiting example, the data processing module 306-3 may identify file_096 to file_100 as the set of non-ingested data files.
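When the identification information is a file name, the comparison at blocks 506–508 amounts to a set difference over the identifiers. A minimal sketch using the numbers from the non-limiting example above (the variable names are illustrative, not from the specification):

```python
# Expected identifiers vs. identifiers actually found in the DFS
expected = [f"file_{i:03d}" for i in range(1, 101)]   # file_001 .. file_100
ingested = [f"file_{i:03d}" for i in range(1, 96)]    # file_001 .. file_095

# The mismatch is the set of expected identifiers with no ingested counterpart
non_ingested = sorted(set(expected) - set(ingested))

assert non_ingested == ["file_096", "file_097", "file_098", "file_099", "file_100"]
```

The same difference operation applies unchanged if the identifiers are unique file IDs or timestamps instead of names.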
[0090] At block 510, the data processing module 306-3 may switch from the first edge node to the second available edge node upon the identification of the set of the non-ingested data files. In a non-limiting example, if the first edge node 'A' fails to ingest some of the data files, say file_096 to file_100, the data processing module 306-3 may select edge node 'B' as the second available edge node using the round robin method.
[0091] At block 512, the transmitting module 306-4 may transmit the data file pull request to the second available edge node to retrieve the set of non-ingested data files from the EMS 101 and ingest the set of the non-ingested data files into the DFS 206. In a non-limiting example, the selected edge node B retrieves the file_096 to file_100 from the EMS 101 and ingests them into the DFS 206.
[0092] At block 514, the monitoring module 306-1 may monitor the ingestion of the set of non-ingested data files from the second available edge node into the DFS 206 based on the acceptance of the transmitted request by the second available edge node. In a non-limiting example, the selected edge node B may successfully ingest the file_096 to file_100 into the DFS 206, upon which the monitoring module 306-1 verifies a final count and completes the data file ingestion process.
[0093] In another non-limiting example, the selected edge node B may ingest only the file_096 to file_098 into the DFS 206, leaving file_099 and file_100 identified as the set of non-ingested data files. In this scenario, the data processing module 306-3 may switch from the second edge node to a third available edge node upon the identification of the set of the non-ingested data files. The data processing module 306-3 keeps switching from one edge node to another in the round robin manner until the data file ingestion process is completed.
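The repeated switching described above can be sketched as a failover loop. This is an assumption-laden illustration: the function `ingest_with_failover`, the `plan` dictionary, and the per-node outcomes are hypothetical, chosen only to mirror the node B / node C scenario in the paragraph:

```python
def ingest_with_failover(expected, edge_nodes, pull):
    """Keep switching edge nodes in a cyclic order until every expected
    file has been ingested into the DFS (or all nodes have been tried)."""
    dfs, remaining = set(), set(expected)
    for node in edge_nodes:
        if not remaining:
            break
        dfs |= pull(node, remaining)          # node ingests what it can
        remaining = set(expected) - dfs       # re-identify non-ingested files
    return dfs, remaining

# Hypothetical outcome mirroring the example: node B manages file_096-098,
# node C supplies the remaining file_099 and file_100.
plan = {"B": {"file_096", "file_097", "file_098"},
        "C": {"file_099", "file_100"}}
expected = {"file_096", "file_097", "file_098", "file_099", "file_100"}
dfs, remaining = ingest_with_failover(
    expected, ["B", "C"], lambda node, want: plan.get(node, set()) & want)
assert remaining == set() and dfs == expected
```

In a deployment the node order would come from the round-robin selector, so the loop would eventually revisit earlier nodes rather than terminate after one pass.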
[0094] FIG. 6 illustrates a schematic block diagram of a computing system for monitoring the data files ingestion in the multi-vendor network 105, in accordance with an embodiment of the present disclosure.
[0095] The computing system 600 may be any type of computer, including a server, a web server, a cloud server, etc. The one or more components of the computing system 600 may perform the functions similar to the components of the system 208 as disclosed herein with respect to FIG. 3.
[0096] The computing system 600 includes a network 610, a network interface 620, a processor 630, an Input/Output (I/O) interface 640 and a non-transitory computer readable storage medium 650 (hereinafter may also be referred to as the “storage medium 650” or the “storage media 650”).
[0097] The network interface 620 includes wireless network interfaces such as Bluetooth, Wi-Fi, Worldwide Interoperability for Microwave Access (WiMAX), General Packet Radio Service (GPRS), or Wideband Code Division Multiple Access (WCDMA), or wired network interfaces such as Ethernet, Universal Serial Bus (USB), or Institute of Electrical and Electronics Engineers 1394 (IEEE-1394).
[0098] The processor 630 may include various processing circuitry and communicate with the storage medium 650 and the I/O interface 640. The processor 630 is configured to execute instructions stored in the storage medium 650 and to perform various processes. The processor 630 may include an intelligent hardware device including a general-purpose processor, such as, for example, and without limitation, the CPU, an Application Processor (AP), a dedicated processor, or the like, a graphics-only processing unit such as a Graphics Processing Unit (GPU), a microcontroller, a Field-Programmable Gate Array (FPGA), a programmable logic device, a discrete hardware component, or any combination thereof. The processor 630 may be configured to execute computer-readable instructions 652 stored in the storage medium 650 to cause the system 208 to perform various functions.
[0099] The storage medium 650 stores a set of instructions 652 required by the processor 630 for controlling its overall operations. The storage media 650 may include an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, or the like. For example, the storage media 650 may include, but are not limited to, hard drives, floppy diskettes, optical disks, ROMs, RAMs, EPROMs, EEPROMs, flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. In one or more implementations, the storage media 650 includes a Compact Disk-Read Only Memory (CD-ROM), a Compact Disk-Read/Write (CD-R/W), and/or a Digital Video Disc (DVD).
[00100] In one or more implementations, the storage medium 650 stores computer program code configured to cause the computing system 600 to perform at least a portion of the processes and/or methods. Accordingly, in at least one embodiment, the computing system 600 performs the method for monitoring data files ingestion in the multi-vendor network 105.
[00101] Embodiments of the present disclosure have been described above with reference to flowchart illustrations of methods and systems according to the embodiments of the disclosure, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of the flowchart, and combinations of blocks (and/or steps) in the flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code. As will be appreciated, any such computer program instructions may be executed by one or more computer processors, including without limitation a general-purpose computer or special purpose computer, or other programmable processing apparatus to perform a group of operations comprising the operations or blocks described in connection with the disclosed methods.
[00102] Further, these computer program instructions, such as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices (for example, the memory 304 or the storage medium 650) that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions 652 stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s).
[00103] It will further be appreciated that the term "computer program instructions" as used herein refers to one or more instructions that can be executed by the processing modules 306 to perform one or more functions as described herein. The instructions 652 may also be stored remotely, such as on a server, or all or a portion of the instructions can be stored locally and remotely.
[00104] Now, referring to the technical abilities and advantageous effects of the present disclosure, the one or more embodiments provide various operational advantages described below. The system and the method disclosed herein describe a methodology for monitoring the data files ingestion in the multi-vendor network that reduces human dependency in identifying missing data files and retrieving the data files manually in a telecommunication network. Unlike conventional methods that suffer from delays due to network overload, process hang-ups, or connectivity disruptions, the disclosed method ensures timely and accurate data ingestion. A further potential advantage of the one or more embodiments disclosed herein may include facilitating a seamless transfer of the collected data from the edge nodes to the DFS for further processing and analysis, thereby improving data integrity and minimizing transmission delays. Another noteworthy advantage of the present disclosure may include, but is not limited to, improving the reliability of network operations by implementing a robust and adaptive method for edge node switching to maintain an uninterrupted data flow and enhanced network performance. The method further initiates the process of fetching the data files from other, less utilized edge nodes from the EMS, thereby ensuring no performance statistics are missed. In addition, the method enables auto load balancing in case of any malfunctioning of an edge node or any planned outage in the edge node servers, ensuring continuous data collection and efficient network monitoring.
[0105] Those skilled in the art will appreciate that the methodology described herein in the present disclosure may be carried out in other specific ways than those set forth herein in the above disclosed embodiments without departing from essential characteristics and features of the present invention. The above-described embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
[00107] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Any combination of the above features and functionalities may be used in accordance with one or more embodiments.
[00108] In the present disclosure, each of the embodiments has been described with reference to numerous specific details which may vary from embodiment to embodiment. The foregoing description of the specific embodiments disclosed herein may reveal the general nature of the embodiments herein that others may, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications are intended to be comprehended within the meaning of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.
LIST OF REFERENCE NUMERALS
[00109] The following list is provided for convenience and in support of the drawing figures and as part of the text of the specification, which describe innovations by reference to multiple items. Items not listed here may nonetheless be part of a given embodiment. For better legibility of the text, a given reference number is recited near some, but not all, recitations of the referenced item in the text. The same reference number may be used with reference to different examples or different instances of a given item. The list of reference numerals is:
100 - Communication environment
101' through 104'/101 - Element Management System (EMS)
101" through 104" - Nodes
105 - Multi-vendor network
110 - North Bound Interface (NBI)
115 - Standard interface
200 - Communication system
204 - Edge node or edge nodes or one or more edge nodes
206 - Distributed File System (DFS)
208 - Application server/Server/System
302 - Processor
304 - Memory
306 - Processing circuitry
306-1 - Monitoring module
306-2 - Determination module
306-3 - Data processing module
306-4 - Transmitting module
308 - Communication interface
310 - Input/Output Interface (I/O Interface)
312 - First communication bus
314 - Second communication bus
400 - Line diagram for monitoring the data files ingestion
402-424 - Steps of line diagram 400
500 - Method for monitoring the data files ingestion
502-514 - Steps of method 500
600 - Computing system
610 - Network
620 - Network Interface
630 - Processor
640 - I/O Interface
650 - Non-transitory computer readable storage medium
652 - Instructions
CLAIMS:
We Claim:
1. A method (500) for monitoring data files ingestion in a multi-vendor network (105), the method comprising:
monitoring, by a monitoring module (306-1) of a server (208), an ingestion of a plurality of data files from a first edge node of one or more edge nodes (204) into a Distributed File System (DFS) (206);
determining, by a determination module (306-2) of the server (208) based on the monitoring, whether a count of ingested data files among the plurality of data files in the DFS (206) is less than a specific count of data files over a predefined time interval;
comparing, by a data processing module (306-3) of the server (208), identification information associated with the ingested data files with identification information associated with the plurality of data files based on a result of the determination that the count of the ingested data files in the DFS (206) is less than the specific count of the data files;
identifying, by the data processing module (306-3), a set of non-ingested data files from the plurality of data files based on the comparison;
switching, by the data processing module (306-3) upon the identification of the set of the non-ingested data files, from the first edge node to at least one second edge node among the one or more edge nodes (204);
transmitting, by a transmitting module (306-4) of the server (208) to the at least one second edge node, a request to retrieve the set of non-ingested data files from an Element Management System (EMS) (101) and ingest the set of the non-ingested data files into the DFS (206); and
monitoring, by the monitoring module (306-1) based on an acceptance of the transmitted request by the at least one second edge node, the ingestion of the set of non-ingested data files from the at least one second edge node into the DFS (206).
2. The method (500) as claimed in claim 1, wherein the plurality of data files comprises one or more of performance management data files, fault data files, configuration data files, accounting data files, and security data files.
3. The method (500) as claimed in claim 1, wherein the one or more edge nodes (204) periodically retrieve the plurality of data files from the EMS (101).
4. The method (500) as claimed in claim 1, wherein the identification information associated with the ingested data files and the plurality of data files comprises at least one of a file name, a unique file identifier, or a timestamp associated with each of the data files.
5. The method (500) as claimed in claim 1, wherein the identification information associated with the ingested data files and the plurality of data files is retrieved from a database.
6. The method (500) as claimed in claim 1, wherein identifying, by the data processing module (306-3), the set of non-ingested data files comprises:
identifying a mismatch between the ingested data files and the plurality of data files based on a result of the comparison between the identification information associated with the ingested data files and the identification information associated with the plurality of data files; and
identifying the set of non-ingested data files based on the identified mismatch.
7. The method (500) as claimed in claim 1, wherein, for switching from the first edge node to the at least one second edge node, the method comprises:
selecting, by the data processing module (306-3), the at least one second edge node among the one or more edge nodes (204) in a round robin manner; and
switching, by the data processing module (306-3), from the first edge node to the selected at least one second edge node.
8. A system (208) for monitoring data file ingestion in a multi-vendor network (105), the system comprising:
a monitoring module (306-1) configured to monitor an ingestion of a plurality of data files from a first edge node of one or more edge nodes (204) into a Distributed File System (DFS) (206);
a determination module (306-2) configured to determine, based on the monitoring, whether a count of ingested data files among the plurality of data files in the DFS (206) is less than a specific count of data files over a predefined time interval;
a data processing module (306-3) configured to:
compare identification information associated with the ingested data files with identification information associated with the plurality of data files based on a result of the determination that the count of the ingested data files in the DFS (206) is less than the specific count of the data files;
identify a set of non-ingested data files from the plurality of data files based on the comparison; and
switch, upon the identification of the set of the non-ingested data files, from the first edge node to at least one second edge node among the one or more edge nodes (204); and
a transmitting module (306-4) configured to transmit, to the at least one second edge node, a request to retrieve the set of non-ingested data files from an Element Management System (EMS) (101) and ingest the set of the non-ingested data files into the DFS (206), wherein
the monitoring module (306-1) is further configured to monitor, based on an acceptance of the transmitted request by the at least one second edge node, the ingestion of the set of non-ingested data files from the at least one second edge node into the DFS (206).
9. The system (208) as claimed in claim 8, wherein the plurality of data files comprises one or more of performance management data files, fault data files, configuration data files, accounting data files, and security data files.
10. The system (208) as claimed in claim 8, wherein the one or more edge nodes (204) periodically retrieve the plurality of data files from the EMS (101).
11. The system (208) as claimed in claim 8, wherein the identification information associated with the ingested data files and the plurality of data files comprises at least one of a file name, a unique file identifier, or a timestamp associated with each of the data files.
12. The system (208) as claimed in claim 8, wherein the identification information associated with the ingested data files and the plurality of data files is retrieved from a database.
13. The system (208) as claimed in claim 8, wherein to identify the set of non-ingested data files, the data processing module (306-3) is configured to:
identify a mismatch between the ingested data files and the plurality of data files based on a result of the comparison between the identification information associated with the ingested data files and the identification information associated with the plurality of data files; and
identify the set of non-ingested data files based on the identified mismatch.
14. The system (208) as claimed in claim 8, wherein, to switch from the first edge node to the at least one second edge node, the data processing module (306-3) is configured to:
select the at least one second edge node among the one or more edge nodes (204) in a round robin manner; and
switch from the first edge node to the selected at least one second edge node.
| # | Name | Date |
|---|---|---|
| 1 | 202421026224-STATEMENT OF UNDERTAKING (FORM 3) [29-03-2024(online)].pdf | 2024-03-29 |
| 2 | 202421026224-PROVISIONAL SPECIFICATION [29-03-2024(online)].pdf | 2024-03-29 |
| 3 | 202421026224-POWER OF AUTHORITY [29-03-2024(online)].pdf | 2024-03-29 |
| 4 | 202421026224-FORM 1 [29-03-2024(online)].pdf | 2024-03-29 |
| 5 | 202421026224-DRAWINGS [29-03-2024(online)].pdf | 2024-03-29 |
| 6 | 202421026224-DECLARATION OF INVENTORSHIP (FORM 5) [29-03-2024(online)].pdf | 2024-03-29 |
| 7 | 202421026224-FORM-26 [17-04-2024(online)].pdf | 2024-04-17 |
| 8 | 202421026224-Proof of Right [02-08-2024(online)].pdf | 2024-08-02 |
| 9 | 202421026224-Request Letter-Correspondence [25-02-2025(online)].pdf | 2025-02-25 |
| 10 | 202421026224-Power of Attorney [25-02-2025(online)].pdf | 2025-02-25 |
| 11 | 202421026224-Form 1 (Submitted on date of filing) [25-02-2025(online)].pdf | 2025-02-25 |
| 12 | 202421026224-Covering Letter [25-02-2025(online)].pdf | 2025-02-25 |
| 13 | 202421026224-FORM 18 [28-02-2025(online)].pdf | 2025-02-28 |
| 14 | 202421026224-DRAWING [28-02-2025(online)].pdf | 2025-02-28 |
| 15 | 202421026224-CORRESPONDENCE-OTHERS [28-02-2025(online)].pdf | 2025-02-28 |
| 16 | 202421026224-COMPLETE SPECIFICATION [28-02-2025(online)].pdf | 2025-02-28 |
| 17 | 202421026224-ORIGINAL UR 6(1A) FORM 1-060325.pdf | 2025-03-10 |
| 18 | Abstract.jpg | 2025-04-21 |