Abstract: SYSTEM AND METHOD FOR ALARM RAISE / CLEAR SYNCHRONIZATION AND CLEAR RACE CONDITION HANDLING. The present disclosure relates to a system (100) for alarm raise / clear synchronization and clear race condition handling. The system includes a collector component (150) to receive a data set comprising a plurality of alarms generated by FCAPS data from a network element. A parsing unit (212) is provided within the collector component, where the parsing unit (212) parses the plurality of alarms from the data set and transforms the plurality of alarms into a standardized format. A categorizing unit (214), provided within the collector component (150), categorizes the plurality of alarms as either a raise alarm event or a clear alarm event based on attributes associated with each of the alarms in the data set. A fault manager master (110) receives the plurality of alarms from the collector component, where the fault manager master assigns unique alarm identifiers (ids) to each alarm. Ref. Fig. 1
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR ALARM RAISE / CLEAR SYNCHRONIZATION AND CLEAR RACE CONDITION HANDLING
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of network monitoring and event management, specifically addressing alarm generation, correlation, and resolution for critical events occurring within a network. The invention pertains to an integrated Alarm Management System (AMS) that efficiently handles the detection, processing, and resolution of various network alarms, providing a comprehensive and streamlined approach to manage network incidents.
BACKGROUND OF THE INVENTION
[0002] Network Management Systems (NMS) are essential for monitoring and maintaining the health and performance of computer networks. These systems generate alarms or alerts when specific events occur, indicating potential issues or anomalies within the network infrastructure.
[0003] Conventional alarm systems face challenges in effectively managing the continuous influx of alarms, particularly in scenarios where rapid fluctuations or flapping events generate a significant number of raise and clear alarms. Flapping refers to the situation where alarms repeatedly alternate between raise and clear states due to unstable or intermittent network conditions.
[0004] In existing systems, when a raise alarm occurs, it is stored in the system's storage and subsequently processed. However, the challenge arises when a corresponding clear alarm is received. The system needs to associate the clear alarm with its corresponding raise alarm in order to remove the alert from an active window and move it to the archive. This association process becomes complex when multiple raise and clear events occur in quick succession.
[0005] For example, consider a situation where the CPU percentage of a device exceeds a critical threshold, such as 95%. This triggers a raise alarm indicating the high CPU usage. Once the CPU percentage comes down below the threshold, a clear alarm is sent to indicate that the problem is resolved. The system needs to accurately associate the clear alarm with the previous raise alarm to ensure proper closure of the alert.
[0006] Another issue in conventional systems is the lack of efficient alarm correlation. When multiple devices or components are affected by the same underlying problem, such as an interface outage, each affected device generates separate alarms. This leads to the generation of multiple individual tickets for closely related incidents, causing redundancy and inefficiencies in the resolution process. It is desirable to correlate these alarms based on common conditions or criteria, grouping them together for more effective incident management.
[0007] For instance, if an interface goes down, multiple devices connected to that interface may raise similar alarms. Instead of creating separate tickets for each device, the system should correlate these alarms and group them together under a single incident, simplifying the resolution process.
[0008] Additionally, the existing alarm systems often provide limited enrichment and contextual information about the alarms. Enrichment involves augmenting the basic alarm details, such as the problem description, host information, and IP address, with additional inventory data to identify the precise location of the affected device within the network infrastructure. This enrichment is crucial for efficient incident handling and troubleshooting.
[0009] Moreover, mis-associations between clear and raise alarms can occur when raise and clear events are received simultaneously or in rapid succession. This mis-association impacts the tracking of alarm histories, making it challenging to analyze historical alarm patterns and generate accurate reports.
[0010] There is a need for a solution to address the above challenges.
SUMMARY OF THE INVENTION
[0011] One or more embodiments of the present disclosure provide a system and a method for an alarm raise / clear synchronization and clear race condition handling in a network.
[0012] In one aspect of the present invention, a method for an alarm raise / clear synchronization and clear race condition handling is disclosed. The method includes receiving, by one or more processors, a data set comprising a plurality of alarms generated by FCAPS (Fault, Configuration, Accounting, Performance, and Security) data from a network element at a collector component. Further, the method includes parsing, by the one or more processors, the plurality of alarms from the data set, and further transforming the plurality of alarms from the data set into a standardized format within the collector component. Further, the method includes categorizing, by the one or more processors, the plurality of alarms as either a raise alarm event, or a clear alarm event based on attributes associated with each of the alarms in the data set. Further, the method includes sending, by the one or more processors, at least one of the raise alarm event, or the clear alarm event to a fault manager master.
[0013] In another aspect of the present invention, a system for an alarm raise / clear synchronization and clear race condition handling is disclosed. The system includes a collector component configured to receive a data set comprising a plurality of alarms generated by FCAPS data from a network element. A parsing unit is provided within the collector component, where the parsing unit is configured to parse the plurality of alarms from the data set, and transform the plurality of alarms into a standardized format. A categorizing unit, provided within the collector component, is configured to categorize the plurality of alarms as either a raise alarm event, or a clear alarm event based on attributes associated with each of the alarms in the data set. A fault manager master is configured to receive the plurality of alarms from the collector component, where the fault manager master is configured to assign unique alarm identifiers (ids) to each alarm.
[0014] In another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to receive a data set comprising a plurality of alarms generated by FCAPS data from a network element at a collector component, parse the plurality of alarms from the data set, and further transform the plurality of alarms from the data set into a standardized format within the collector component, categorize the plurality of alarms as either a raise alarm event, or a clear alarm event based on attributes associated with each of the alarms in the data set, and send at least one of the raise alarm event, or the clear alarm event to a fault manager master.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG.1 illustrates a block diagram of a high-level architecture of an alarm synchronization and processing system, in accordance with one implementation of the present embodiment.
[0018] FIG. 2 illustrates a block diagram of a collector component included in the system provided for an alarm raise / clear synchronization and clear race condition handling, according to one or more embodiments of the present invention.
[0019] FIG. 3 is an example schematic representation of the system of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present system.
[0020] FIG. 4 shows a sequence flow diagram illustrating a method for alarm raise / clear synchronization and clear race condition handling by a collector component, according to various embodiments of the present disclosure.
[0021] FIG. 5 shows a sequence flow diagram illustrating a method for alarm raise / clear synchronization and clear race condition handling by a fault manager master, according to various embodiments of the present disclosure.
[0022] FIG. 6 shows a sequence flow diagram illustrating a method for alarm raise / clear synchronization and clear race condition handling by a clear fault manager, according to various embodiments of the present disclosure.
[0023] FIG. 7 shows a sequence flow diagram illustrating a method for alarm raise / clear synchronization and clear race condition handling by a clear retry fault manager, according to various embodiments of the present disclosure.
[0024] FIG. 8 shows a sequence flow diagram illustrating a method for alarm raise / clear synchronization and clear race condition handling by a fault manager auditor, according to various embodiments of the present disclosure.
[0025] FIG. 9 shows an example flow diagram of a method for alarm raise / clear synchronization and clear race condition handling, according to various embodiments of the present disclosure.
[0026] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] The present invention relates to a method and system for efficient alarm management in Network Management Systems (NMS). Conventional alarm systems face challenges in effectively managing the continuous influx of alarms, especially in scenarios involving rapid fluctuations or flapping events. The disclosed invention addresses these challenges by introducing a high Throughput-Per-Second (TPS) distributed I/O Cache and implementing a novel alarm association and retry mechanism.
[0031] The novel and inventive method involves receiving alarms from network elements and streaming them to a master Fault Manager (FM). The master FM stores the alarms in a distributed I/O Cache, associating raise and clear events separately against an alarm identifier (ID). The cache maintains a timestamp array for each alarm, enabling accurate tracking of alarm occurrences.
[0032] To optimize performance, only the IDs of the raise and clear alarms are streamed towards the raise fault manager and a clear fault manager, respectively, instead of transmitting the entire alarm data. The raise fault manager retrieves the alarms from the cache, updates metadata, enriches the data, and inserts it into a database. Similarly, the clear fault manager retrieves clear alarms, checks for corresponding raise alarms in the database, and processes the clearance accordingly.
[0033] In cases where a clear alarm is received without a corresponding raise alarm, a non-blocking retry mechanism is employed. The clear alarm goes into a retry mode, streaming itself or being stored in a messaging queue for a specified duration. Periodically, the clear retry fault manager checks for the associated raise alarm. If found, the clear alarm is processed; otherwise, it undergoes further retries. This mechanism ensures efficient association of raise and clear alarms, even in situations with delayed or intermittent alarm occurrences.
[0034] The system components work collaboratively to process alarms, maintain cache consistency, and provide a reliable and high-performance alarm management system. The disclosed invention significantly improves alarm management in the NMS by synchronizing out-of-order alarms, reducing database hits, enabling efficient alarm correlation, enhancing incident resolution, and providing enriched contextual information. The inventive step lies in the introduction of the distributed I/O Cache, the alarm association mechanism, and the non-blocking retry mechanism, which together optimize the performance and reliability of the NMS system.
[0035] Overall, this invention offers a comprehensive solution to the challenges faced by conventional alarm systems, leading to improved efficiency, accuracy, and effectiveness in managing alarms within network management environments.
[0036] FIG. 1 illustrates a block diagram of a high-level architecture of an alarm synchronization and processing system (100), in accordance with one implementation of the present embodiment. The system (100) mainly includes, but may not be limited to, a collector component (150), a Master Fault Manager (FM) (105), a Distributed I/O Cache (135), and a Database (DB) (140). In an embodiment, the collector component (150) is within the system (100). In another embodiment, the collector component (150) is outside the system (100). The master fault manager (105) further includes an FM master (110), an auditor FM, alternatively referred to as the fault manager auditor (125), a raise fault manager (130), alternatively referred to as FM raise or the raise manager auditor, a clear fault manager (115), alternatively referred to as FM clear, and a clear retry fault manager (120), alternatively referred to as FM retry.
[0037] In one implementation, the collector component (150) is responsible for gathering FCAPS (Fault, Configuration, Accounting, Performance, and Security) data from various network elements (e.g., eNB, base station, a new radio (NR) base station, gNB, a centralized unit, or the like). The collector component (150) may include a collector and at least one distribution agent. The collector component (150) receives FCAPS data over different protocols, such as SNMP (Simple Network Management Protocol) or syslog, from network devices and systems. The collector component (150) then converts the received data into a generic alarm format (based on the known techniques), making it compatible with the system's processing requirements. After conversion, the collector component (150) forwards the alarms to the FM master (110) for further processing.
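By way of illustration only, a simplified, non-limiting Python sketch of such a conversion is given below; the field names (for example, "host", "problem", "severity") and the normalize_alarm helper are hypothetical assumptions and do not limit the manner in which the collector component (150) may be implemented.

    from datetime import datetime, timezone

    def normalize_alarm(raw):
        """Convert a raw FCAPS fault record (e.g., received over SNMP or syslog)
        into a generic alarm format (field names are illustrative only)."""
        return {
            "host": raw.get("source", "unknown-host"),
            "problem": raw.get("trap_name") or raw.get("message", "unknown-problem"),
            "severity": raw.get("severity", "minor"),
            "timestamp": raw.get("timestamp",
                                 datetime.now(timezone.utc).isoformat()),
        }

    # Illustrative raw record as it might arrive from a network element.
    raw_trap = {"source": "10.0.0.5", "trap_name": "linkDown",
                "severity": "major", "timestamp": "2024-01-01T10:00:00Z"}
    print(normalize_alarm(raw_trap))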
[0038] According to the embodiment, the FM master (110) receives the alarms from the collector component (150) and plays a central role in the alarm synchronization and processing system. It stores the alarms in the Distributed I/O Cache (135) for efficient processing. The FM master (110) continuously consumes alarms from the alarm stream (901) (as shown in FIG. 9) and checks if the alarms already exist in the cache. By comparing the received alarms with the existing ones, the FM master (110) updates the occurrence count and timestamp array of the alarms accordingly. This process helps in tracking the number of occurrences of each alarm and maintaining a history of their timestamps. Additionally, the FM master (110) produces unique alarm identifiers (ids) for each alarm, which are then streamed towards the raise manager auditor (130) and the clear fault manager (115) for further handling and processing.
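A simplified, non-limiting Python sketch of this cache handling is given below; the alarm_key derivation, the in-memory dictionary standing in for the distributed I/O cache (135), and the function names are assumptions made for illustration only.

    import hashlib

    cache = {}  # stands in for the distributed I/O cache (135); in-memory here

    def alarm_key(alarm):
        # Deterministic key per alarm source and problem; a real deployment may
        # derive the unique alarm identifier differently.
        basis = "{}|{}".format(alarm["host"], alarm["problem"])
        return hashlib.sha1(basis.encode()).hexdigest()

    def fm_master_ingest(alarm):
        """Insert a new alarm, or update the occurrence count and timestamp array
        of an existing one, and return the unique alarm identifier (id)."""
        key = alarm_key(alarm)
        entry = cache.get(key)
        if entry is None:
            cache[key] = {"alarm": alarm, "occurrence_count": 1,
                          "timestamps": [alarm["timestamp"]]}
        else:
            entry["occurrence_count"] += 1
            entry["timestamps"].append(alarm["timestamp"])
        return key  # only the id is streamed onward, not the full alarm payload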
[0039] According to the embodiment, the raise manager auditor (130) consumes the alarm identifiers (ids) streamed from the FM master (110). The raise manager auditor (130) retrieves the corresponding alarms from the Distributed I/O Cache (135) using the identifiers. Once the alarms are retrieved, the raise manager auditor (130) processes them further. The raise manager auditor (130) updates metadata associated with the alarms, enriches the alarms with additional information (e.g., adding time, location, user name) or context, and inserts them into the database (140). The raise manager auditor (130) may perform various operations on the alarms, such as planned event processing, AI-based correlation to identify patterns or related events, and trouble ticketing to initiate incident management processes.
[0040] Correlation, in general, refers to grouping the alarms by the type of event. For example, consider an interface that goes down while five nodes are connected to that interface. Upon failure of the interface, all five nodes will raise the same type of alarm. Instead of raising five separate tickets, the system correlates those five alarms based on a common condition, event, or criterion, i.e., the failure of the interface. As a result of the correlation, the five alarms are grouped together. In another implementation, an alarm for a planned event is also detected. For example, if a planned event is happening on a particular node, such an alarm may not be recorded (for instance, while the node is rebooted as part of the planned event). The raise alarm, in one implementation, may also be processed against a trouble ticket. Thus, the raise alarm gets processed through three stages, i.e., planned events, correlation, and trouble tickets. Processing the raise alarm may therefore consume more time than processing the clear alarm. The clear alarm, by contrast, is not processed through these three stages. Therefore, a clear alarm may finish processing earlier than its corresponding raise alarm, even when both alarms occur at the same instant.
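The following simplified, non-limiting Python sketch illustrates such grouping of alarms by a common condition; the attribute names and the correlate helper are hypothetical and serve only to show the grouping principle.

    from collections import defaultdict

    def correlate(alarms):
        """Group alarms sharing a common condition (here, the failed interface)
        so that a single incident can be raised for the whole group."""
        groups = defaultdict(list)
        for alarm in alarms:
            groups[(alarm["problem"], alarm.get("interface"))].append(alarm)
        return groups

    # Five nodes raising the same type of alarm after one interface failure.
    alarms = [{"host": "node-%d" % i, "problem": "interfaceDown",
               "interface": "if-7"} for i in range(1, 6)]
    for condition, members in correlate(alarms).items():
        print(condition, "->", len(members), "alarms grouped under one incident")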
[0041] In one embodiment, the fault manager auditor (125) has a primary function of scanning the distributed I/O cache (135) and diligently searching for any stranded alarms that may have been overlooked or not adequately processed. This fail-safe mechanism ensures that no alarms are lost or left unattended within the system. When the fault manager auditor (125) identifies the stranded alarms, it initiates the necessary processing steps to handle them appropriately, preventing any potential gaps in fault and alarm management. By regularly monitoring the cache and addressing any outstanding alarms, the fault manager auditor (125) contributes to the overall efficiency and reliability of the system, ensuring that all alarms are accounted for and processed in a timely manner.
[0042] According to the embodiment, the clear fault manager (115) consumes the alarm identifiers (ids) streamed from the FM master (110). The clear fault manager (115) retrieves the corresponding clear alarms from the Distributed I/O Cache (135) based on the identifiers received. The clear fault manager (115) then checks the database (140) for associated raise alarms. If the raise alarms are found in the database (140), indicating that they were previously raised and not yet cleared, the clear fault manager (115) performs clearance operations. It deletes the raise alarms from the active section, adds clearance metadata to the alarms, and stores them in the archived section of the database (140). On the other hand, if the associated raise alarms are not found in the database (140), implying that they might have been cleared previously, the clear fault manager (115) streams the clear alarms to the clear retry fault manager (120) for further retry processing.
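A simplified, non-limiting Python sketch of this clear-side handling is given below; the raise_key linkage field, the active_db and archive_db structures standing in for the active and archived sections of the database (140), and the retry_stream list are assumptions introduced solely for illustration.

    def clear_fm_process(clear_id, cache, active_db, archive_db, retry_stream):
        """Handle one clear-alarm id streamed from the FM master (110)."""
        clear_alarm = cache[clear_id]["alarm"]
        raise_alarm = active_db.get(clear_alarm["raise_key"])
        if raise_alarm is not None:
            # Corresponding raise found: remove it from the active section and
            # archive it together with the clearance metadata.
            resolved = dict(raise_alarm, cleared_at=clear_alarm["timestamp"])
            del active_db[clear_alarm["raise_key"]]
            archive_db[clear_alarm["raise_key"]] = resolved
        else:
            # Raise not present yet: hand the clear over for non-blocking retries.
            retry_stream.append(dict(clear_alarm, retry_count=0))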
[0043] According to the embodiment, the clear retry fault manager (120) consumes the retry alarm data streamed from the clear fault manager (115). It checks the database (140) for corresponding raise alarms based on the received retry alarm data. If the raise alarms are found in the database (140), indicating that they were not cleared previously, the clear retry fault manager (120) processes the clearances similarly to the clear fault manager (115). However, if the raise alarms are not found, the clear retry fault manager (120) increments the retry count, at the retry count check (block 202), and reproduces the data into the Retry Stream for subsequent retries. This non-blocking retry mechanism ensures the system's performance by avoiding unnecessary blocking of application threads and enables the system to handle exceptional cases or delayed processing.
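A simplified, non-limiting Python sketch of this retry handling is given below; the MAX_RETRIES threshold, the raise_key field, and the data structures are illustrative assumptions, and the retry interval and threshold would, in practice, be configurable.

    import logging

    MAX_RETRIES = 5  # illustrative threshold; configurable in practice

    def clear_retry_fm_process(retry_item, active_db, archive_db, retry_stream):
        """One pass of the clear retry fault manager (120) over a retried clear."""
        raise_alarm = active_db.get(retry_item["raise_key"])
        if raise_alarm is not None:
            # Raise alarm has arrived meanwhile: process the clearance normally.
            resolved = dict(raise_alarm, cleared_at=retry_item["timestamp"])
            del active_db[retry_item["raise_key"]]
            archive_db[retry_item["raise_key"]] = resolved
        elif retry_item["retry_count"] < MAX_RETRIES:
            # Non-blocking retry: re-queue the clear instead of blocking a thread.
            retry_item["retry_count"] += 1
            retry_stream.append(retry_item)
        else:
            logging.error("Clear alarm exhausted its retries: %s", retry_item)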
[0044] In accordance with the present embodiment, the Distributed I/O Cache (135) of the system is a high-throughput, distributed architecture designed to provide a reliable and available in-memory database with disk persistence. It serves as a storage mechanism for the alarms received from the FM master (110). The cache stores the alarms based on their unique identifiers (ids) generated by the FM master (110). This storage approach allows for efficient retrieval and processing of alarm data. Additionally, the Distributed I/O Cache (135) efficiently updates the occurrence counts and timestamp arrays of existing alarms, eliminating the need for multiple database hits and optimizing performance.
[0045] According to the present embodiment, the database (140) serves as a non-structured (NoSQL) database that stores the FCAPS data, including alarms. It maintains both active and archived alarms, allowing for efficient retrieval and reporting. The raise manager auditor (130) and clear fault manager (115) interact with the database (140) to perform operations such as inserting new alarms, updating metadata, retrieving alarm information for processing, and storing cleared alarms in the archived section. The database (140) plays a crucial role in persistently storing and managing alarm data, ensuring its availability for analysis, reporting, and historical tracking.
[0046] Thus, the alarm processing system described incorporates various components working together seamlessly to improve efficiency and performance. The collector component (150) gathers the FCAPS data and converts it into a generic alarm format, while the FM master (110) stores alarms in the Distributed I/O Cache (135) and manages their updates and streaming. The raise manager auditor (130) processes alarms for metadata enrichment and storage in the DB (140), while the clear fault manager (115) handles clearance and archival of alarms. The clear retry fault manager (120) enables non-blocking retries for unmatched clear alarms, ensuring efficient processing. Overall, this system offers a synchronized and streamlined approach to alarm management, enhancing the reliability and effectiveness of network fault monitoring.
[0047] Referring to FIG. 2, FIG. 2 illustrates a block diagram of the collector component (150) provided for the alarm raise / clear synchronization and clear race condition handling, according to one or more embodiments of the present invention.
[0048] As per the illustrated embodiment, the collector component (150) includes one or more processors (202), a memory (204), an input/output interface unit (206), a display (208), and an input device (210). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the collector component (150) includes one processor (202). However, it is to be noted that the collector component (150) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0049] The information related to the alarm may be provided or stored in the memory (204). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0050] The information related to the alarm may further be rendered on the user interface (206). The user interface (206) may include functionality similar to at least a portion of the functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on a display (208), implemented using LCD display technology, OLED display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (100) or connected externally. Further, the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0051] The database (140) may be communicably connected to the processor (202) and the memory (204). The database (140) may be configured to store and retrieve data pertaining to features or services of the access rights, attributes, approved list, and authentication data provided by an administrator. Further, the collector component (150) may allow the system (100) to update/create/delete one or more items of information related to the alarm, which provides flexibility to roll out multiple variants of the alarm as per business needs. In another embodiment, the database (140) may be outside the system (100) and communicated with through a wired medium or a wireless medium.
[0052] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (100) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (100) and the processing resource. In other examples, the processor (202) may be implemented by electronic circuitry.
[0053] In order for the system (100) to manage the alarm information, the processor (202) includes a parsing unit (212) and a categorizing unit (214). The parsing unit (212) and the categorizing unit (214), in an embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (100) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (100) and the processing resource. In other examples, the processor (202) may be implemented by electronic circuitry.
[0054] In order for the system (100) to perform alarm raise / clear synchronization and clear race condition handling, the parsing unit (212) is configured to receive the data set including the plurality of alarms generated by the FCAPS data from the network element. Further, the parsing unit (212) parses the plurality of alarms from the data set, and transforms the plurality of alarms into the standardized format. The categorizing unit (214) categorizes the plurality of alarms as either the raise alarm event, or the clear alarm event based on the attributes associated with each of the alarms in the data set. The categorizing unit (214) sends the plurality of alarms to the fault manager master (110).
[0055] FIG. 3 is an example schematic representation of the system (100) of FIG. 1 in which the operations of the various entities are explained, according to various embodiments of the present system. FIG. 3 is an example illustration showing how the various entities of the system of FIG. 2 interact. Referring to FIG. 3, FIG. 3 describes the system (100) for the alarm raise / clear synchronization and clear race condition handling. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the system (100) for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0056] The collector component (150) receives the data set including the plurality of alarms generated by the FCAPS data from the network element. The parsing unit (212) is provided within the collector component (150), where the parsing unit (212) parses the plurality of alarms from the data set, and transforms the plurality of alarms into the standardized format. A categorizing unit (214) is provided within the collector component (150) configured to categorize the plurality of alarms as either the raise alarm event, or the clear alarm event based on attributes associated with each of the alarms in the data set. The fault manager master (110) receives the plurality of alarms from the collector component (150). The fault manager master (110) assigns unique alarm identifiers (ids) to each alarm.
[0057] In an embodiment, the fault manager master (110) checks existence of at least one of the raise alarm event, or the clear alarm event in the distributed I/O cache (135). Further, the fault manager master (110) inserts at least one of the raise alarm event, or the clear alarm event into the distributed I/O cache (135). Further, the fault manager master (110) updates at least one of the raise alarm event, or the clear alarm event by adding the new alarm timestamp to the timestamp array, and incrementing the occurrence count. Further, the fault manager master (110) assigns the unique alarm identifier (id) to each of the raise alarm event, or the clear alarm event. Further, the fault manager master (110) segregates and maps at least one of the raise alarm event, or the clear alarm event having the unique alarm identifiers with the raise fault manager (130), or the clear fault manager (115).
[0058] Further, the raise fault manager (130) runs periodically to consume the unique alarm identifiers and fetches the corresponding alarm data from the distributed I/O cache (135) and the database (140).
[0059] In an embodiment, further, the clear fault manager (115) runs periodically to consume the unique alarm identifiers and fetches the corresponding alarm data from the distributed I/O cache (135) and the database (140). Further, the clear fault manager (115) eliminates the raise alarm event from the active section and further adds the clearance metadata to the alarm to obtain the resolved alarm. Further, the clear fault manager (115) modifies the resolved alarm and further stores in the archived section of the database (140). Further, the clear fault manager (115) adds the retry count in case the raise alarm event is not received or processed.
[0060] In an embodiment, further, the clear retry fault manager (120) periodically runs to consume clearance data from the retry stream, wherein each data item consumed is checked in the database (140) for the corresponding raise alarm event. Further, the clear retry fault manager (120) eliminates the raise alarm event from the active section and further adds the clearance metadata to the raise alarm event to obtain another resolved alarm. Further, the clear retry fault manager (120) detects if the retry count has reached a threshold. Further, the clear retry fault manager (120) increments the retry count and reproduces the clear alarm data at the retry stream for another attempt. Further, the clear retry fault manager (120) logs the error if the retry count is exhausted. For example, consider a situation where the bandwidth usage percentage of a device exceeds a critical threshold, such as 97%. This triggers a raise alarm indicating the high bandwidth usage. If the bandwidth usage does not come down below the threshold, the alarm is raised again to indicate that the problem is not resolved. Once a clear alarm is eventually received, the system needs to accurately associate it with the previous raise alarm to ensure proper closure of the alert.
[0061] In an embodiment, further, the fault manager auditor (125) identifies stranded alarms present in the distributed I/O cache (135). The fault manager auditor (125) checks a clearance timestamp stamped on the raise alarm event, to prevent any issue in which the raise alarm event is cleared incorrectly, wherein the timestamp of the raise alarm event is compared with that of the clear alarm events.
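A simplified, non-limiting Python sketch of such an audit pass is given below; the processed_ids set and the cleared_at field are hypothetical bookkeeping introduced only for illustration of the fail-safe principle.

    def audit_cache(cache, processed_ids):
        """Scan the cache for stranded alarms that were never processed and flag
        raise entries whose clearance timestamp precedes their raise timestamp."""
        stranded, suspicious = [], []
        for key, entry in cache.items():
            if key not in processed_ids:
                stranded.append(key)
            cleared_at = entry.get("cleared_at")
            if cleared_at is not None and cleared_at < entry["timestamps"][0]:
                # A clear recorded before the raise points at a race condition.
                suspicious.append(key)
        return stranded, suspicious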
[0062] FIG. 4 shows a sequence flow diagram illustrating a method 400 for alarm raise / clear synchronization and clear race condition handling by the collector component (150), according to various embodiments of the present disclosure.
[0063] At 402, the method includes receiving the data set including the plurality of alarms generated by the FCAPS data from the network element at the collector component (150). In an embodiment, the method allows the collector component (150) to receive the data set comprising the plurality of alarms generated by the FCAPS data from the network element.
[0064] At 404, the method includes parsing the plurality of alarms from the data set. In an embodiment, the method allows the collector component (150) to parse the plurality of alarms from the data set.
[0065] At 406, the method includes transforming the plurality of alarms from the data set into the standardized format within the collector component. In an embodiment, the method allows the collector component (150) to transform the plurality of alarms from the data set into the standardized format within the collector component.
[0066] At 408, the method includes categorizing the plurality of alarms as either the raise alarm event, or the clear alarm event based on the attributes associated with each of the alarm in the data set. In an embodiment, the method allows the collector component (150) to categorize the plurality of alarms as either the raise alarm event, or the clear alarm event based on attributes associated with each of the alarm in the data set.
[0067] At 410, the method includes sending the at least one of the raise alarm event, or the clear alarm event to the fault manager master (110). In an embodiment, the method allows the collector component (150) to send the at least one of the raise alarm event, or the clear alarm event to the fault manager master (110).
[0068] FIG. 5 shows a sequence flow diagram illustrating a method 500 for alarm raise / clear synchronization and clear race condition handling by the fault manager master (110), according to various embodiments of the present disclosure.
[0069] At 502, the method includes inserting the at least one of the raise alarm event, or the clear alarm event into the distributed I/O cache (135). In an embodiment, the method allows the fault manager master (110) to insert the at least one of the raise alarm event, or the clear alarm event into the distributed I/O cache (135).
[0070] At 504, the method includes updating at least one of the raise alarm event, or the clear alarm event by adding the new alarm timestamp to the timestamp array, and incrementing the occurrence count. In an embodiment, the method allows the fault manager master (110) to update at least one of the raise alarm event, or the clear alarm event by adding the new alarm timestamp to the timestamp array, and incrementing the occurrence count.
[0071] At 506, the method includes assigning the unique alarm identifier (id) to each of the raise alarm event, or the clear alarm event. In an embodiment, the method allows the fault manager master (110) to assign the unique alarm identifier (id) to each of the raise alarm event, or the clear alarm event.
[0072] At 508, the method includes segregating and mapping at least one of the raise alarm event, or the clear alarm event having the unique alarm identifiers with the raise fault manager (130), or the clear fault manager (115). In an embodiment, the method allows the fault manager master (110) to segregate and map the raise alarm event, or the clear alarm event having the unique alarm identifiers with the raise fault manager (130), or the clear fault manager (115).
[0073] FIG. 6 shows a sequence flow diagram illustrating a method 600 for alarm raise / clear synchronization and clear race condition handling by clear fault manager (115), according to various embodiments of the present disclosure.
[0074] At 602, the method includes running periodically to consume the unique alarm identifiers and fetch the corresponding alarm data from the distributed I/O cache (135) and the database (140). In an embodiment, the method allows the clear fault manager (115) to run periodically to consume the unique alarm identifiers and fetch the corresponding alarm data from the distributed I/O cache (135) and the database (140).
[0075] At 604, the method includes eliminating the raise alarm event from the active section and further adding the clearance metadata to the alarm to obtain the resolved alarm. In an embodiment, the method allows the clear fault manager (115) to eliminate the raise alarm event from the active section and further add the clearance metadata to the alarm to obtain the resolved alarm.
[0076] At 606, the method includes modifying the resolved alarm and further storing it in the archived section of the database (140). In an embodiment, the method allows the clear fault manager (115) to modify the resolved alarm and further store it in the archived section of the database (140).
[0077] At 608, the method includes adding the retry count in case the raise alarm event is not received or processed. In an embodiment, the method allows the clear fault manager (115) to add the retry count in case the raise alarm event is not received or processed.
[0078] FIG. 7 shows a sequence flow diagram illustrating a method 700 for alarm raise / clear synchronization and clear race condition handling by the clear retry fault manager (120), according to various embodiments of the present disclosure.
[0079] At 702, the method includes periodically running to consume clearance data from the retry stream, wherein each data item consumed is checked in the database (140) for the corresponding raise alarm event. In an embodiment, the method allows the clear retry fault manager (120) to periodically run to consume clearance data from the retry stream, wherein each data item consumed is checked in the database (140) for the corresponding raise alarm event.
[0080] At 704, the method includes eliminating the raise alarm event from the active section and further adding the clearance metadata to the raise alarm event to obtain another resolved alarm. In an embodiment, the method allows the clear retry fault manager (120) to eliminate the raise alarm event from the active section and further add the clearance metadata to the raise alarm event to obtain another resolved alarm.
[0081] At 706, the method includes detecting if the retry count has reached the threshold. In an embodiment, the method allows the clear retry fault manager (120) to detect if the retry count has reached the threshold.
[0082] At 708, the method includes incrementing the retry count and reproducing the clear alarm data at the retry stream for another attempt. In an embodiment, the method allows the clear retry fault manager (120) to increment the retry count and reproduce the clear alarm data at the retry stream for another attempt.
[0083] At 710, the method includes logging the error if the retry count is exhausted. In an embodiment, the method allows the clear retry fault manager (120) to log the error if the retry count is exhausted.
[0084] FIG. 8 shows a sequence flow diagram illustrating a method 800 for alarm raise / clear synchronization and clear race condition handling by the fault manager auditor, according to various embodiments of the present disclosure.
[0085] At 802, the method includes identifying the stranded alarms present in the distributed I/O cache (135). In an embodiment, the method allows the fault manager auditor (125) to identify the stranded alarms present in the distributed I/O cache (135).
[0086] At 804, the method includes checking the clearance timestamp stamped on the raise alarm event to prevent any issue in which the raise alarm event is cleared incorrectly, where the timestamp of the raise alarm event is compared with that of the clear alarm events. In an embodiment, the method allows the fault manager auditor (125) to check the clearance timestamp stamped on the raise alarm event to prevent any issue in which the raise alarm event is cleared incorrectly.
[0087] FIG. 9 shows an example flow diagram of a method 900 for alarm raise / clear synchronization and clear race condition handling, according to various embodiments of the present disclosure.
[0088] In one implementation, the data is collected by the collector component (150). The collector component (150) is responsible for gathering FCAPS data from the Network Elements. It uses various protocols such as SNMP, REST, SOAP, Kafka, and others to receive Fault (Alarms) sent by the Network Elements.
[0089] Then, Alarm Parsing and Formatting is performed. The collected alarms are parsed and transformed into a standardized format at the collector component (150). This process involves extracting relevant information from the alarms and formatting them in a consistent manner. The alarms are then categorized as either 'raise Alarm' or 'clear Alarm' events based on their nature.
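By way of illustration only, a simplified, non-limiting Python sketch of such categorization is given below; the attribute names and classification rules are assumptions and may differ in practice.

    def categorize(alarm):
        """Classify a standardized alarm as a 'raise' or 'clear' event based on
        its attributes (the attribute names and rules are illustrative only)."""
        severity = str(alarm.get("severity", "")).lower()
        event_type = str(alarm.get("event_type", "")).lower()
        if severity in ("clear", "cleared") or event_type == "clear":
            return "clear"
        return "raise"

    print(categorize({"severity": "major"}))   # -> raise
    print(categorize({"severity": "clear"}))   # -> clear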
[0090] Further, the FM master (110) conducts processing of the alarms. The FM master (110) consumes the stream of alarms generated by the collector component (150). It reads the alarms based on their event types ('raise Alarm' or 'clear Alarm') and processes them accordingly. The FM Master's role is to handle the logic and operations related to the alarms, including storage and management.
[0091] Furthermore, the Cache Handling is performed. When an alarm is received, the FM master (110) checks if it already exists in the distributed I/O cache (135). If the alarm is not present, it is inserted into the cache. On the other hand, if the alarm is already in the cache, the FM master (110) updates it by adding the new alarm timestamp to the timestamp array and increments the occurrence count. This ensures that the latest information is retained and accessible for each alarm.
[0092] Further, streaming to Raise fault manager (130) / clear fault manager (115) is performed at block 201. After processing the alarms, the FM master (110) streams the unique identifiers (ids) of the alarms to the raise manager auditor (130) and the clear fault manager (115) based on certain conditions. These conditions determine when an alarm should be directed to the raise manager auditor (130) or the clear fault manager (115). For example, an alarm may be sent to the raise manager auditor (130) if it is the first occurrence, every nth occurrence, or if a certain configurable time has passed since the last occurrence.
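A simplified, non-limiting Python sketch of such a streaming decision is given below; the nth-occurrence value, the time interval, and the last_streamed_at field are illustrative assumptions, all of which would be configurable in a deployment.

    import time

    NTH_OCCURRENCE = 10           # stream every 10th occurrence (illustrative)
    MIN_INTERVAL_SECONDS = 300.0  # or when 5 minutes passed since last streaming

    def should_stream(entry, now=None):
        """Decide whether an alarm id is streamed to the raise / clear fault
        manager on this occurrence; all thresholds are configurable."""
        now = time.time() if now is None else now
        if entry["occurrence_count"] == 1:
            return True                                   # first occurrence
        if entry["occurrence_count"] % NTH_OCCURRENCE == 0:
            return True                                   # every nth occurrence
        return (now - entry.get("last_streamed_at", 0.0)) >= MIN_INTERVAL_SECONDS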
[0093] Then after, raise manager auditor (130) processing is performed. The raise manager auditor (130) runs periodically and consumes the alarm ids from the Raise Stream. For each id, it fetches the corresponding alarm data from the cache. The raise manager auditor (130) then checks the database (140) to determine if the alarm already exists. If the alarm is present in the database, the raise manager auditor (130) updates it with the latest alarm data, including the alarm timestamp, occurrence count, and other relevant parameters. Some parameters may also be updated based on information from the previous occurrence, such as trouble ticket number or correlation data. If the alarm is not found in the database, it indicates a new alarm and the raise manager auditor (130) inserts it as a new entry.
[0094] After the raise manager auditor (130) processing, the clear fault manager (115) processing is performed. Similar to the raise manager auditor (130), the clear fault manager (115) runs periodically and consumes the alarm ids from the clear stream. For each id, the clear fault manager (115) retrieves the corresponding clear alarm data from the cache. It then checks the database (140) to find the corresponding raise alarm. If the raise alarm is found and its raise timestamp is less than the clearance timestamp, the clear fault manager (115) removes the raise alarm from the active section and adds the clearance metadata to the alarm. The modified alarm is then stored in the archived section of the database. If the corresponding raise alarm is not found in the database, it means that the raise alarm was not received or processed yet. In such cases, the clear fault manager (115) adds a retry count to the clear alarm data and streams it to the Retry Stream for further processing.
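A minimal, non-limiting Python sketch of this timestamp ordering check is given below; it assumes ISO-8601 UTC timestamps of identical format, so that string comparison preserves chronological order.

    def may_clear(raise_alarm, clear_alarm):
        """Apply a clearance only when the raise happened before the clear;
        otherwise the clear belongs to an older raise and must not close this one."""
        return raise_alarm["timestamp"] < clear_alarm["timestamp"]

    print(may_clear({"timestamp": "2024-01-01T10:00:00Z"},
                    {"timestamp": "2024-01-01T10:05:00Z"}))  # True: safe to clear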
[0095] If the raise alarm is not found during the clear fault manager (115) processing, the clear retry fault manager (120) processing is performed. The clear retry fault manager (120) also operates periodically and consumes clearance data from the Retry Stream. For each data item consumed, it checks the database for the corresponding raise alarm. If the raise alarm is found, the clear retry fault manager (120) processes the clearance in the same manner as the clear fault manager (115), deleting the raise alarm from the active section, adding clearance metadata, and storing it in the archived section. If the raise alarm is not found, the clear retry fault manager (120) checks if the retry count has reached its threshold, at block 202. If the count has not been exhausted, the retry count is incremented, and the clear alarm data is reproduced to the Retry Stream for another attempt. If the retry count is exhausted, an error is logged for further investigation. The retry interval can be configured in different implementations, for example, 5 seconds, 15 seconds, and so on.
[0096] The retries described above are performed through a non-blocking retry mechanism that uses a stream as a holding point. This mechanism ensures that the application threads remain free, enabling higher performance. It also addresses the issue of receiving clear alarms before the corresponding raise alarms (as mentioned in case 1). During the retry process, the raise alarm is inserted into the database, allowing the retried clear alarm to find it. For example, the clear alarm may be held for an hour; the holding period is configurable.
[0097] At next step, the fault manager auditor (125) is implemented. The fault manager auditor (125) runs at longer intervals to identify any stranded alarms present in the cache. This fail-safe mechanism ensures that no alarms are lost and helps maintain the integrity of the alarm management system.
[0098] At last, reporting and visualization is performed. Active and archived alarms, including their associated data, are fetched from the database for reporting purposes. These alarms are then visualized through a user interface (UI/UX), providing a comprehensive view of the network's fault and alarm status.
[0099] The method ensures efficient association of raise and clear alarms, even in situations with delayed or intermittent alarm occurrences. Overall, the method offers a comprehensive solution to the challenges faced by conventional alarm systems, leading to improved efficiency, accuracy, and effectiveness in managing alarms within network management environments.
[00100] The system components work collaboratively to process alarms, maintain cache consistency, and provide a reliable and high-performance alarm management system. The disclosed invention significantly improves alarm management in the NMS by synchronizing out-of-order alarms, reducing database hits, enabling efficient alarm correlation, enhancing incident resolution, and providing enriched contextual information. The inventive step lies in the introduction of the distributed I/O Cache, the alarm association mechanism, and the non-blocking retry mechanism, which together optimize the performance and reliability of the NMS system.
[00101] For the purpose of description, the methods 400-900 are described with the embodiments as illustrated in FIG. 1 to FIG. 3 and should nowhere be construed as limiting the scope of the present disclosure.
[00102] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS.1-9) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00103] The present invention offers multiple advantages over the prior art, and those described above are a few examples emphasizing some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[00104] System - 100
[00105] Fault manager - 105
[00106] Fault manager master - 110
[00107] Clear fault manager - 115
[00108] Clear retry fault manager - 120
[00109] Fault manager auditor - 125
[00110] Raise fault manager - 130
[00111] Distributed I/O cache - 135
[00112] Database - 140
[00113] Collector component - 150
[00114] Processor - 202
[00115] Memory - 204
[00116] User interface - 206
[00117] Display - 208
[00118] Input device - 210
[00119] Parsing unit - 212
[00120] Categorizing unit - 214
CLAIMS
We Claim
1. A method for an alarm raise / clear synchronization and clear race condition handling, the method comprising the steps of:
receiving, by one or more processors (202), a data set comprising a plurality of alarms generated by FCAPS (Fault, Configuration, Accounting, Performance, and Security) data from a network element at a collector component (150);
parsing, by the one or more processors (202), the plurality of alarms from the data set, and further transforming the plurality of alarms from the data set into a standardized format within the collector component (150);
categorizing, by the one or more processors (202), the plurality of alarms as either a raise alarm event, or a clear alarm event based on attributes associated with each of the alarms in the data set; and
sending, by the one or more processors (202), at least one of the raise alarm event, or the clear alarm event to a fault manager master (110).
2. The method as claimed in claim 1, wherein the parsing comprises extracting, by the one or more processors (202), relevant information from the plurality of alarms, and formatting the plurality of alarms in a pre-defined set.
3. The method as claimed in claim 1, comprises checking, by the one or more processors (202), existence of at least one of the raise alarm event, or the clear alarm event in a distributed I/O cache (135).
4. The method as claimed in claim 3, comprises:
inserting, by the one or more processors (202), at least one of the raise alarm event, or the clear alarm event into the distributed I/O cache (135);
updating, by the one or more processors (202), at least one of the raise alarm event, or the clear alarm event by adding a new alarm timestamp to a timestamp array, and incrementing an occurrence count; and
assigning, by the one or more processors (202), a unique alarm identifier (id) to each of the raise alarm event, or the clear alarm event.
5. The method as claimed in claim 4, comprises:
segregating and mapping, by the one or more processors (202), at least one of the raise alarm event or the clear alarm event having the unique alarm identifiers with a raise fault manager (130) or a clear fault manager (115);
running, by the one or more processors (202), the raise fault manager (130) periodically to consume the unique alarm identifiers and fetch the corresponding alarm data from the distributed I/O cache (135) and a database (140);
running, by the one or more processors (202), the clear fault manager (115) periodically to consume the unique alarm identifiers and fetch the corresponding alarm data from the distributed I/O cache (135) and the database (140); and
eliminating, by the one or more processors (202), the raise alarm event from an active section by the clear fault manager (115) and further adding clearance metadata to the alarm to obtain a resolved alarm.
6. The method as claimed in claim 5, comprises:
modifying, by the one or more processors (202), the resolved alarm and further storing the resolved alarm in an archived section of the database (140);
adding, by the one or more processors (202), a retry count by the clear fault manager (115), in case the raise alarm event is not received or processed;
running, by the one or more processors (202), a clear retry fault manager (120) periodically to consume clearance data from a retry stream, wherein, for each data item consumed, the clear retry fault manager (120) checks the database (140) for the corresponding raise alarm event;
eliminating, by the one or more processors (202), the raise alarm event from an active section by the clear retry fault manager (120) and further adding clearance metadata to the raise alarm event to obtain another resolved alarm;
detecting, by the one or more processors (202), via the clear retry fault manager (120), whether the retry count has reached a threshold;
incrementing, by the one or more processors (202), the retry count and reproducing the clear alarm data at the retry stream for another attempt; and
logging, by the one or more processors (202), an error if the retry count is exhausted.
7. The method as claimed in claim 1, comprises identifying, by the one or more processors (202), stranded alarms present in the distributed I/O cache (135).
8. The method as claimed in claim 1, comprises checking, by the one or more processors (202), a clearance timestamp stamped on the raise alarm event, to prevent erroneous clearance of a raise alarm event that occurred after the clear alarm event, wherein the timestamp of the raise alarm event is compared with that of the clear alarm event.
9. A system (100) for an alarm raise / clear synchronization and clear race condition handling, the system (100) comprising:
a collector component (150) configured to receive a data set comprising a plurality of alarms generated by FCAPS (Fault, Configuration, Accounting, Performance, and Security), from a network element;
a parsing unit (212) provided within the collector component (150), wherein the parsing unit (212) is configured to parse the plurality of alarms from the data set, and transform the plurality of alarms into a standardized format;
a categorizing unit (214) provided within the collector component (150) configured to categorize the plurality of alarms as either a raise alarm event or a clear alarm event based on attributes associated with each of the alarms in the data set; and
a fault manager master (110) configured to receive the plurality of alarms from the collector component, wherein the fault manager master (110) is configured to assign unique alarm identifiers (ids) for each alarm.
10. The system (100) as claimed in claim 9, wherein the collector component (150) is further configured to extract relevant information from the plurality of alarms and format the plurality of alarms into a pre-defined set.
11. The system (100) as claimed in claim 9, wherein the fault manager master (110) is further configured to check existence of at least one of the raise alarm event or the clear alarm event in a distributed I/O cache (135).
12. The system (100) as claimed in claim 11, wherein the fault manager master (110) is further configured to:
insert at least one of the raise alarm event or the clear alarm event into the distributed I/O cache (135);
update at least one of the raise alarm event or the clear alarm event by adding a new alarm timestamp to a timestamp array and incrementing an occurrence count;
assign a unique alarm identifier (id) to each of at least one of the raise alarm event or the clear alarm event; and
segregate and map at least one of the raise alarm event or the clear alarm event having the unique alarm identifiers with a raise fault manager (130) or a clear fault manager (115).
13. The system (100) as claimed in claim 12, wherein the raise fault manager (130) is configured to run periodically to consume the unique alarm identifiers and fetch the corresponding alarm data from the distributed I/O cache (135) and a database (140).
14. The system (100) as claimed in claim 12, wherein the clear fault manager (115) is configured to:
run periodically to consume the unique alarm identifiers and fetch the corresponding alarm data from the distributed I/O cache (135) and the database (140);
eliminate the raise alarm event from an active section and further add clearance metadata to the alarm to obtain a resolved alarm;
modify the resolved alarm and further store the resolved alarm in an archived section of the database (140); and
add a retry count in case the raise alarm event is not received or processed.
15. The system (100) as claimed in claim 9, further comprises a clear retry fault manager (120) configured to:
run periodically to consume clearance data from a retry stream, wherein, for each data item consumed, the database (140) is checked for the corresponding raise alarm event;
eliminate the raise alarm event from an active section and further add clearance metadata to the raise alarm event to obtain another resolved alarm;
detect if the retry count has reached a threshold;
increment the retry count and reproduce the clear alarm data at the retry stream for another attempt; and
log an error if the retry count is exhausted.
16. The system (100) as claimed in claim 9, further comprises a fault manager auditor (125) configured to:
identify stranded alarms present in the distributed I/O cache (135); and
check a clearance timestamp stamped on the raise alarm event, to prevent erroneous clearance of a raise alarm event that occurred after the clear alarm event, wherein the timestamp of the raise alarm event is compared with that of the clear alarm event.
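As a further illustrative, non-limiting sketch under assumed data structures, the following snippet shows the kind of clearance-timestamp comparison recited in claims 8 and 16, where a clear alarm event is only allowed to resolve a raise alarm event raised at or before the clearance timestamp; the helper name is hypothetical.

```python
# Illustrative, non-limiting sketch of the clearance-timestamp check of claims 8 and 16.
# A clear alarm event may only resolve a raise alarm event whose timestamp is not later
# than the clearance timestamp, guarding against clearing a fresher raise in a race.
def should_clear(raise_event: dict, clear_event: dict) -> bool:
    return raise_event["timestamp"] <= clear_event["timestamp"]

# A clear produced at t=100 resolves a raise from t=90 ...
assert should_clear({"timestamp": 90}, {"timestamp": 100})
# ... but is ignored for a raise that occurred later, at t=110.
assert not should_clear({"timestamp": 110}, {"timestamp": 100})
```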