Abstract: METHOD AND SYSTEM OF PROVIDING A UNIFIED DATA NORMALIZER WITHIN A NETWORK PERFORMANCE MANAGEMENT SYSTEM. The present disclosure relates to a method [300] and a system [200] of providing a unified data normalizer within a network performance management system. The method comprises configuring, by a setup unit [202a] of a normalization system [202], source information of one or more sources [204] through an interface. The method comprises fetching, by a fetching unit [202b] of the normalization system [202], data from the one or more sources [204]; processing, by a processing unit [202c] of the normalization system [202], the data fetched from the one or more sources [204]; and storing, by a data storing unit [202d] of the normalization system [202], the processed data in a data lake [210]. [FIG. 2A]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM OF PROVIDING A UNIFIED DATA NORMALIZER WITHIN A NETWORK PERFORMANCE MANAGEMENT SYSTEM”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM OF PROVIDING A UNIFIED DATA NORMALIZER WITHIN A NETWORK PERFORMANCE MANAGEMENT SYSTEM
TECHNICAL FIELD
Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to a method and system of providing a unified data normalizer within a network performance management system.
BACKGROUND
The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPIs) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network and of individual or grouped network elements. By having an overall as well as a detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
In network performance management systems, network service integration and normalization of the integrations play a critical role. A network performance management system typically ingests data from various network elements and monitoring tools. Data from such varied sources needs to be normalized before it can be analysed by the network performance management system. Generally, in the existing normalization modules/approaches, code-level changes and integration efforts are required for every new integration. The existing solutions fail to provide an approach for unified data normalization and parsing that can streamline the process of integrating and extracting data from various sources with just a few configurations in the system, without making any code-level changes. The existing solutions have various limitations, such as complex configurations, manual effort, lack of interoperability, and inefficient data integration across diverse systems and formats. Moreover, the existing solutions do not provide system flexibility, and the systems fail to adjust as per the format of the input data.
Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
SUMMARY OF THE DISCLOSURE
This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
An aspect of the present disclosure may relate to a method for providing a unified data normalizer within a network performance management system. The method comprises configuring, by a setup unit of a normalization module, source information of one or more sources through an interface. The method comprises fetching, by a fetching unit of the normalization module, data from the one or more sources. The method comprises processing, by a processing unit of the normalization module, the data fetched from the one or more sources. The method comprises storing, by a data storing unit of the normalization module, the processed data in a data lake.
Another aspect of the present disclosure may relate to a system for providing a unified data normalizer within a network performance management system. The system comprises a normalization system. The normalization system comprises a setup unit configured to configure, at a normalization module, source information of one or more sources through an interface. The normalization system further comprises a fetching unit connected to the setup unit, the fetching unit being configured to fetch data from the one or more sources. The normalization system further comprises a processing unit connected to the fetching unit, the processing unit being configured to process the data fetched from the one or more sources based on the configured source information. The normalization system further comprises a data storing unit that is configured to store the processed data in a data lake.
Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for providing a unified data normalizer within a network performance management system, the instructions including executable code which, when executed by a processor, causes the processor to: configure source information of one or more sources; fetch data from the one or more sources; process the data fetched from the one or more sources; and store the processed data in a data lake.
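By way of illustration only, the four operations summarized above (configure, fetch, process, store) may be sketched as a minimal, hypothetical implementation; the class name, unit method names, and the key-lowercasing normalization rule below are illustrative assumptions and do not limit the disclosure:

```python
# Hypothetical sketch of the four-step normalization flow:
# configure -> fetch -> process -> store.

class NormalizationModule:
    def __init__(self):
        self.sources = {}    # configured source information
        self.data_lake = []  # stands in for the data lake [210]

    def configure(self, source_id, info):
        """Setup unit: register source information via an interface."""
        self.sources[source_id] = info

    def fetch(self, source_id):
        """Fetching unit: pull raw data from a configured source."""
        info = self.sources[source_id]
        # Illustrative stub: a real system would read from the source endpoint.
        return info.get("sample_data", [])

    def process(self, raw_records):
        """Processing unit: normalize raw records to a common form."""
        return [{k.lower(): v for k, v in r.items()} for r in raw_records]

    def store(self, records):
        """Data storing unit: persist processed data in the data lake."""
        self.data_lake.extend(records)

module = NormalizationModule()
module.configure("src1", {"sample_data": [{"Latency_MS": 12}]})
module.store(module.process(module.fetch("src1")))
print(module.data_lake)  # [{'latency_ms': 12}]
```

The sketch shows only the division of responsibilities among the four units; the actual record format and storage backend are configuration-dependent.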
OBJECTS OF THE DISCLOSURE
Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
It is an object of the present disclosure to provide a method and a system of providing a unified data normalizer within a network performance management system.
It is another object of the present disclosure to provide an approach for unified data normalization and parsing which streamlines the process of integrating and extracting data from various sources.
It is another object of the present disclosure to provide an integration approach for data parsing which involves designing a system that allows for seamless integration and parsing of data from various sources with just a few configurations in the system, without making any code-level changes.
It is yet another object of the present disclosure to provide a unified data normalizer for data parsing in normalization systems that simplifies configuration, reduces manual effort, promotes interoperability, and enables efficient data integration across diverse systems and formats, thereby increasing the system's flexibility and making it more versatile to adjust as per the format of the input data.
DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure; rather, possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
Fig. 1 illustrates an exemplary block diagram of a network performance management system, in accordance with the exemplary embodiments of the present disclosure.
Fig. 2A illustrates a first exemplary normalization system configuration, in accordance with the exemplary embodiments of the present disclosure.
Fig. 2B illustrates a second exemplary normalization system configuration, in accordance with the exemplary embodiments of the present disclosure.
Fig. 2C illustrates an exemplary block diagram of the normalization system, in accordance with the exemplary embodiments of the present disclosure.
Fig. 3 illustrates an exemplary method flow diagram indicating the process of providing a unified data normalizer, in accordance with the exemplary embodiments of the present disclosure.
Fig. 4 illustrates an exemplary block diagram of a computing device upon which an embodiment of the present disclosure may be implemented.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure.
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, or “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
All modules, units, and components used herein may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc.
As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system of providing a unified data normalizer within a network performance management system. To provide the unified data normalizer, in a network performance management engine, say integrated performance management (IPM), a normalization layer/system is provided with an approach for unified data normalization and parsing which streamlines the process of integrating and extracting data from various sources. This integration approach for data parsing involves designing a system that allows for seamless integration and parsing of data from various sources with just a few configurations in the system, without making any code-level changes. Moreover, the unified data normalizer for data parsing in the normalization layer simplifies configuration, reduces manual effort, and promotes interoperability, enabling efficient data integration across diverse systems and formats. It increases the system's flexibility and makes it more versatile to adjust as per the format of the input data.
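The configuration-only integration described above may be illustrated with a hypothetical sketch; the parser registry and the configuration fields (`name`, `format`) are illustrative assumptions conveying the idea that onboarding a new source requires only a new configuration entry, not code-level changes:

```python
import csv
import io
import json

# Hypothetical parser registry: each supported format already has a generic
# parser, so a new source is onboarded via configuration alone.
PARSERS = {
    "json": lambda text: json.loads(text),
    "csv":  lambda text: list(csv.DictReader(io.StringIO(text))),
}

def normalize(source_config, payload):
    """Select the parser purely from the configured source information."""
    return PARSERS[source_config["format"]](payload)

# Integrating this source required only a configuration entry:
new_source = {"name": "counters", "format": "csv"}
rows = normalize(new_source, "kpi,value\nlatency,12\n")
print(rows)  # [{'kpi': 'latency', 'value': '12'}]
```

The same `normalize` call would serve a JSON source by changing only the `format` field in the configuration, which is the essence of the unified approach.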
Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present disclosure. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: integrated performance management system [100a], normalization layer [100b], computation layer [100d], anomaly detection layer [100o], streaming engine [100l], load balancer [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system, mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n]. Exemplary connections between these subsystems are also as shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
The various components of the system [100] may include the following:
The integrated performance management system [100a] comprises a 5G performance engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100u].
5G Performance Management Engine [100v]: The 5G Performance Management engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Management engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
5G Key Performance Indicator (KPI) Engine [100u]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100u] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100u] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
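As a purely illustrative sketch of deriving KPIs from performance counters, the counter names and formulas below are hypothetical assumptions (the disclosure does not prescribe specific formulas):

```python
# Hypothetical sketch: computing KPIs such as packet loss rate and
# throughput from raw performance counters.
def compute_kpis(counters):
    sent = counters["packets_sent"]
    lost = counters["packets_lost"]
    return {
        "packet_loss_rate_pct": 100.0 * lost / sent if sent else 0.0,
        "throughput_mbps": counters["bytes_transferred"] * 8 / 1e6
                           / counters["interval_seconds"],
    }

kpis = compute_kpis({
    "packets_sent": 1000,
    "packets_lost": 5,
    "bytes_transferred": 25_000_000,
    "interval_seconds": 10,
})
print(kpis)  # {'packet_loss_rate_pct': 0.5, 'throughput_mbps': 20.0}
```

In a deployment, such derived KPIs would then be aggregated per element or group before storage in the Distributed Data Lake [100u].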
Ingestion layer: The Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
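The validate-then-route behaviour of the Ingestion layer can be sketched as follows; the field names, data types, and routing table are hypothetical illustrations, not the claimed routing policy:

```python
# Hypothetical sketch of the ingestion layer: validate each incoming
# record, then route it by data type to a downstream component.
ROUTES = {
    "alarm":   "normalization_layer",
    "counter": "normalization_layer",
    "cdr":     "streaming_engine",
}

def ingest(record):
    # Integrity check: required fields must be present.
    if "type" not in record or "payload" not in record:
        return ("rejected", record)
    destination = ROUTES.get(record["type"], "message_broker")
    return (destination, record)

print(ingest({"type": "alarm", "payload": {}})[0])  # normalization_layer
print(ingest({"payload": {}})[0])                   # rejected
```

Unknown data types fall through to a default destination here, which mirrors the idea that the destination is chosen by where the data is needed next.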
Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager for maintaining and improving the quality of services, and the Streaming Engine for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
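The read-path benefit of such a caching layer can be sketched minimally; the class, the dict-backed store, and the backend counter are illustrative assumptions:

```python
# Hypothetical sketch of the caching layer: a small in-memory cache that
# serves repeated reads without going back to the slower data source.
class CachingLayer:
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.backend_reads = 0

    def get(self, key):
        if key not in self.cache:
            self.backend_reads += 1           # slow path: hit the source
            self.cache[key] = self.backend[key]
        return self.cache[key]                # fast path on reuse

layer = CachingLayer(backend={"kpi:latency": 12})
layer.get("kpi:latency")
layer.get("kpi:latency")
print(layer.backend_reads)  # 1  (second read served from cache)
```

A production cache would additionally bound its size and expire stale entries; this sketch shows only the reuse principle described above.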
Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer, and also feeds it to the Message Broker. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services. And the Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and to mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
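The publish-subscribe pattern referred to above can be sketched in a few lines; the class and topic name are hypothetical, and the sketch omits the persistence and fault-tolerance the broker provides:

```python
from collections import defaultdict

# Hypothetical sketch of publish-subscribe via message-based topics:
# producers publish to a topic, and every subscriber receives the message.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("kpi.updates", received.append)
broker.publish("kpi.updates", {"latency_ms": 12})
print(received)  # [{'latency_ms': 12}]
```

Decoupling producers from consumers through topics is what lets the broker accommodate many permanent or ad-hoc consumers without either side knowing about the other.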
Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler should be adept at processing steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100u]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another micro-service. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
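Executing a task at predetermined intervals, as described above, can be sketched with the standard library's event scheduler; the interval, task body, and run count are illustrative assumptions:

```python
import sched
import time

# Hypothetical sketch of the Scheduling Layer [100g]: re-arm a task so it
# runs repeatedly at a fixed interval without manual intervention.
runs = []
scheduler = sched.scheduler(time.monotonic, time.sleep)

def task():
    runs.append("executed")            # e.g. an API call or a stored query
    if len(runs) < 3:                  # re-arm for the next interval
        scheduler.enter(0.01, 1, task)

scheduler.enter(0.01, 1, task)
scheduler.run()                        # blocks until the queue is empty
print(runs)  # ['executed', 'executed', 'executed']
```

A deployed scheduler would run continuously and dispatch many independent task definitions; the re-arming pattern above is the core of periodic execution.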
Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
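The execution model just described (tasks sequential within a chain, chains simultaneous) can be sketched as follows; the chain contents are illustrative and the thread pool merely stands in for the framework's distributed workers:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: each chain's tasks run sequentially, while the
# chains themselves are executed in parallel.
def run_chain(tasks, value):
    for task in tasks:                    # sequential within one chain
        value = task(value)
    return value

chains = [
    [lambda x: x + 1, lambda x: x * 2],   # (0 + 1) * 2 = 2
    [lambda x: x * 10],                   # 0 * 10 = 0
]

with ThreadPoolExecutor() as pool:        # chains execute simultaneously
    results = list(pool.map(lambda c: run_chain(c, 0), chains))
print(results)  # [2, 0]
```

`pool.map` preserves input order, so results line up with their chains even though the chains may finish in any order.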
Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. The DFS [100j] also supports diverse operations, facilitating flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
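The chunk partitioning and distribution described above can be sketched minimally; the chunk size, node names, and round-robin placement policy are hypothetical illustrations:

```python
# Hypothetical sketch: partition a data file into fixed-size chunks and
# spread the chunks across nodes, as the DFS [100j] description outlines.
def chunk_and_place(data, chunk_size, nodes):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Simple placement policy: round-robin across the available nodes.
    return {i: (nodes[i % len(nodes)], c) for i, c in enumerate(chunks)}

placement = chunk_and_place(b"abcdefghij", 4, ["node-a", "node-b"])
print(placement[0])  # ('node-a', b'abcd')
print(placement[2])  # ('node-a', b'ij')
```

A real DFS would also replicate each chunk on several nodes for the redundancy the description mentions; the sketch shows partitioning and distribution only.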
Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
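A minimal sketch of two of the routing strategies named above, round-robin scheduling and header-based dispatch. The backend names and the `X-Route-To` header key are hypothetical and not taken from the specification.

```python
# Illustrative sketch (assumptions, not the LB's [100k] actual implementation):
# header-based dispatch with a round-robin fallback.
from itertools import cycle

SERVERS = ["backend-1", "backend-2", "backend-3"]
_round_robin = cycle(SERVERS)  # rotates requests evenly across available servers

def dispatch(request: dict) -> str:
    """Pick a backend server for a request.

    Header-based dispatch: if the (hypothetical) X-Route-To HTTP header names
    a known server, route there; otherwise fall back to round-robin scheduling.
    """
    target = request.get("headers", {}).get("X-Route-To")
    if target in SERVERS:
        return target
    return next(_round_robin)

routed = dispatch({"headers": {"X-Route-To": "backend-2"}})  # header-based
first, second = dispatch({}), dispatch({})                   # round-robin rotation
```

Context-based dispatch would follow the same pattern but inspect request metadata beyond the headers, such as which microservice subscribed to an event.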
Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The streaming engine's [100l] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of
API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
The present invention focuses on the unified data normalizer within the network performance management system [100]. FIG. 2A illustrates a first exemplary normalization system [202] configuration, in accordance with the exemplary embodiments of the present disclosure. Further, FIG. 2B illustrates a second exemplary normalization system [202] configuration, in accordance with the exemplary embodiments of the present disclosure.
Furthermore, the normalization system [202] comprises at least one setup unit [202a], at least one fetching unit [202b], at least one processing unit [202c], and at least one data storing unit [202d], as shown in Fig. 2C. The setup unit [202a], the fetching unit [202b], the processing unit [202c] and the storing unit [202d] are configured to enable the various units/modules of the normalization system [202] and the network performance management system [100], for instance the units/modules as depicted in the Fig. 2A or Fig. 2B, to implement the features of the present disclosure.
Referring to FIG. 2A, the Normalization Layer [100b] features a unified data normalization approach for data parsing, which streamlines the process of integrating and extracting data from various sources [204]. Although a single data source is shown, it will be appreciated by those skilled in the art that the present disclosure is not limited thereto. For enabling said approach, one-time provisioning of source details (i.e., the details of the source system from which data is to be received) in the normalization system [202] is provided through exposed APIs of a data source details configuring system [206]. These source details include file type, file format, data delimiter, Network Element (NE) details, etc., along with fields and their types. Once these source details are entered and configured, the normalization system [202] is ready to process the newly configured source data. This newly configured source data may be consumed by one or more external systems [208] and is also stored in a data lake [210]. Also, if any new source from the one or more sources [204] is to be integrated with the normalization system [202], the source details are configured in the normalization system [202] through the exposed APIs, and then the normalization system [202] is set to process the data.
Referring to FIG. 2B, as shown, the normalization system [202] may comprise multiple normalization systems [202(1)], [202(2)] and [202(3)] (herein collectively referred to as [202]). Although only three normalizers are shown in the Fig. 2B, the present disclosure is not limited to this number, and the use of any number of normalization systems [202] is within the scope of the present disclosure. A source data configuring system [206] is used to provide the source details to the normalization system [202], wherein the source data may be processed and configured by one or more normalization systems [202]. Also, this source data is received from the one or more sources [204]. This newly configured source data may be consumed by one or more external systems [208] and is also stored in the data lake [210].
The present disclosure also encompasses that the normalization system [202] may comprise multiple normalization systems [202(1)], [202(2)], and [202(3)] (herein collectively referred to as [202]) for handling each data source type or each network element. The multiple normalization systems [202(1)], [202(2)], and [202(3)] may have various configurations for handling diverse types of data sources. Each normalization system [202(1)], [202(2)], and [202(3)] operates differently for each type of data source.
Referring to Fig. 2C, as shown, the normalization system [202] comprises at least one setup unit [202a], at least one fetching unit [202b], at least one processing unit [202c], and at least one storing unit [202d], all components connected to each other (said connections not shown in the block diagram for clarity). Particularly, the setup unit [202a] is configured to configure source information of one or more sources [204] through an interface, such as HTTPS, gRPC, RESTful APIs and
WebSockets. In an implementation, the interface includes one or more Application Programming Interfaces (APIs).
Further, the present disclosure encompasses that the source information of the one or more sources is configured through an interface via one or more operations such as Create, Read, Update and Delete (CRUD) operations.
The disclosure encompasses that the configuration of the source information is a one-time provisioning of the source information of the one or more sources [204]. Further, the source information comprises information associated with file type, file format, data delimiter, Network Element (NE) details, fields, and corresponding one or more types.
For example, the setup unit [202a] configures a source information including file type such as CSV file or XML file or ASN.1 file or JSON file, file format such as one or more column headers, data delimiter such as "," or ";", Network Element (NE) details, and fields such as latency, throughput, and error rates, from the one or more sources via an interface such as an API.
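The one-time provisioning described above might be expressed as a JSON payload sent to the configuring system [206] over an exposed API. This is a hypothetical sketch: the field names and values below are assumptions chosen only to mirror the listed source details, not the specification's actual schema.

```python
# Illustrative sketch of a source-provisioning payload of the kind the setup
# unit [202a] might accept; all keys and values here are assumed examples.
import json

source_config = {
    "source_id": "probe-feed-01",          # hypothetical identifier
    "file_type": "CSV",                     # CSV / XML / ASN.1 / JSON
    "file_format": {
        "column_headers": ["ne_id", "latency", "throughput", "error_rate"],
    },
    "data_delimiter": ",",                  # e.g. "," or ";"
    "ne_details": {"vendor": "example-vendor", "element_type": "router"},
    "fields": {                             # field names and their types
        "ne_id": "string",
        "latency": "float",
        "throughput": "float",
        "error_rate": "float",
    },
}

# Serialize as it might be POSTed to the configuration API.
payload = json.dumps(source_config)
```

Because the provisioning is one-time, such a payload would be submitted once per source; thereafter the normalization system [202] processes data from that source without further configuration.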
Further, the fetching unit [202b] is configured to fetch, at the normalization system [202], a data from the one or more sources [204]. The one or more sources may include, but are not limited to, one or more inventory data storage units, one or more probe data storing units, one or more infrastructure metric data storage units, and one or more call data storage units. Further, the data may include, but is not limited to, inventory data, probe data, infrastructure metric data, and call data records. In an implementation, the fetching unit [202b] is configured to extract a set of relevant data from the one or more sources [204]. Furthermore, one or more extraction rules may be defined and stored in the data lake [210] based on which such extraction of the set of relevant data is performed.
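Rule-based extraction of the relevant set can be sketched as follows. This is an assumption about how such rules might look, the specification does not define their shape; the predicate form and sample records are hypothetical.

```python
# Illustrative sketch (assumed rule shape): apply stored extraction rules to
# keep only the relevant subset of fetched records, as attributed to the
# fetching unit [202b].

# Extraction rules of the kind that might be stored in the data lake [210];
# here each rule is simply a predicate over a record.
EXTRACTION_RULES = [
    lambda record: record.get("source") == "probe",      # only probe data
    lambda record: record.get("latency") is not None,    # must carry a latency field
]

def extract_relevant(records: list[dict]) -> list[dict]:
    """Keep the records that satisfy every extraction rule."""
    return [r for r in records if all(rule(r) for rule in EXTRACTION_RULES)]

fetched = [
    {"source": "probe", "latency": 12.5},
    {"source": "inventory"},
    {"source": "probe"},  # probe record missing latency: filtered out
]
relevant = extract_relevant(fetched)
```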
Further, the processing unit [202c] is configured to process, at the normalization system [202], the data fetched from the one or more sources [204] based on the configured source information. In an implementation, the processing of data comprises at least one of deduplication, transformation, and enrichment of the fetched data. For example, deduplication ensures data consistency by removing one or more duplicate entries, while transformation standardizes the data for a uniform processing. Further, enrichment supplements the data with an additional context or one or more insights.
In an implementation, the data is processed via one or more data operations, which help refine and optimize the data for subsequent processing, ensuring accuracy, consistency, and relevance within the system.
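The three processing operations named above can be sketched as a small pipeline. The record layout, unit conversion, and health rule are assumptions for illustration, not the specification's actual operations.

```python
# Illustrative sketch (assumed rules): deduplication, transformation, and
# enrichment of fetched records, as attributed to the processing unit [202c].

def deduplicate(records: list[dict], key: str = "record_id") -> list[dict]:
    """Keep the first record seen for each unique identifier."""
    seen, unique = set(), []
    for record in records:
        if record[key] not in seen:
            seen.add(record[key])
            unique.append(record)
    return unique

def transform(record: dict) -> dict:
    """Standardize units: assume latency arrives in seconds, store milliseconds."""
    record = dict(record)
    record["latency_ms"] = record.pop("latency_s") * 1000.0
    return record

def enrich(record: dict) -> dict:
    """Supplement with derived context: a simple (hypothetical) health flag."""
    record["healthy"] = record["latency_ms"] < 100.0
    return record

raw = [
    {"record_id": 1, "latency_s": 0.05},
    {"record_id": 1, "latency_s": 0.05},  # duplicate entry, removed
    {"record_id": 2, "latency_s": 0.20},
]
processed = [enrich(transform(r)) for r in deduplicate(raw)]
```

The pipeline order mirrors the passage: duplicates are removed first so that transformation and enrichment run once per unique record.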
In an implementation of the present solution as disclosed herein, the normalization system [202] comprises a normalization layer [100b]. The normalization layer [100b] provides one or more essential functions for troubleshooting, operations, and overall management of the normalization process carried out by the system.
Additionally, the Normalization Layer [100b] serves to standardize, enrich, and store data. It takes in data that's been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data analysis, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager for maintaining and improving the quality of services, and the Streaming Engine for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
Further, the data storing unit [202d] is configured to store, at the normalization system [202], the processed data in the data lake [210]. In an implementation, the system further comprises a normalizing unit [202f] configured to normalize the fetched data to a common data model before storing in the data lake [210]. The normalizing unit [202f] simplifies one or more configurations, reduces manual effort, and promotes interoperability for enabling efficient data integration across one or more diverse systems and formats.
In an implementation of the present solution as disclosed herein, the system [200] further comprises an access management unit [202e] configured to provide access of the processed data to one or more external servers.
For example, upon configuring the source information, the processing unit [202c] proceeds to process data fetched from these sources, by performing tasks like deduplication, transforming the data into a standardized format, and enriching it with additional information. Subsequently, the data storing unit [202d] is configured to store the processed data into the data lake [210]. Furthermore, the normalizing unit [202f] simplifies configurations, reduces manual effort, and promotes interoperability by normalizing the fetched data into a common data model before storing it in the data lake [210], thereby enabling efficient integration of data across diverse systems and formats, such as merging one or more network performance metrics from one or more CSV files and real-time data from one or more network devices.
Additionally, the normalizing unit [202f] utilizes the source information to normalize the fetched data into the common data model. The source information (such as file type, format specification and field attributes) helps to ensure that the fetched data is in one or more standard formats according to the one or more external servers, so that the fetched data may be easily used for further analysis.
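Normalization into a common data model, merging CSV metrics with real-time device data as in the example above, can be sketched as follows. The common model's three fields and both source layouts are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed formats): map records from two differently
# formatted sources into one common data model, as attributed to the
# normalizing unit [202f].
import csv
import io
import json

# Hypothetical common data model: every record carries these three fields.
COMMON_FIELDS = ("ne_id", "metric", "value")

def normalize_csv(text: str, delimiter: str = ",") -> list[dict]:
    """CSV source: one row per network element, metrics spread across columns."""
    rows = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [
        {"ne_id": row["ne_id"], "metric": name, "value": float(row[name])}
        for row in rows
        for name in ("latency", "throughput")
    ]

def normalize_json(text: str) -> list[dict]:
    """Real-time device source: one JSON object per measurement."""
    msg = json.loads(text)
    return [{"ne_id": msg["element"], "metric": msg["kpi"], "value": msg["reading"]}]

# Merge both sources into the single common model.
records = normalize_csv("ne_id,latency,throughput\nNE-1,12.5,940\n")
records += normalize_json('{"element": "NE-2", "kpi": "latency", "reading": 9.1}')
```

Once both feeds share the common model, downstream consumers need no knowledge of the original file types or delimiters, which is the interoperability benefit the passage describes.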
Additionally, the setup unit [202a], the fetching unit [202b], the processing unit [202c], the data storing unit [202d], the access management unit [202e], and the normalizing unit [202f] are processors. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
Referring to FIG. 3, an exemplary method flow diagram [300] indicating the process of providing a unified data normalizer, in accordance with the exemplary embodiments of the present invention, is shown. In an implementation, the method [300] is performed by the normalization system [202]. As shown in Fig. 3, the method [300] starts at step [302].
At step [304], the method [300] as disclosed by the present disclosure comprises configuring, by the setup unit [202a] of the normalization system [202], the source information of one or more sources [204] through an interface. The one or more sources may include, but are not limited to, one or more inventory data storage units, one or more probe data storing units, one or more infrastructure metric data storage units, and one or more call data storage units. Further, the data may include, but is not limited to, inventory data, probe data, infrastructure metric data, and call data records. In an implementation, the interface includes one or more Application Programming Interfaces (APIs). In an implementation of the present disclosure as disclosed herein, the configure step comprises one-time provisioning of the source information of the one or more sources [204]. Further, the source information comprises information associated with file type, file format, data delimiter, Network Element (NE) details, fields, and corresponding one or more types.
For example, the setup unit [202a] configures a source information including file type such as CSV file, file format such as one or more column headers, data delimiter such as "," or ";", Network Element (NE) details, and fields such as latency, throughput, and error rates, from the one or more sources via an interface such as an API.
At step [306], the method [300] as disclosed by the present disclosure comprises fetching, by a fetching unit [202b] of the normalization system [202], a data from the one or more sources [204]. In an implementation, fetching the data includes extracting a set of relevant data from the one or more sources based on one or more extraction rules. These one or more extraction rules may be defined and stored in the data lake [210].
At step [308], the method [300] as disclosed by the present disclosure comprises processing, by a processing unit [202c] of the normalization system [202], the data fetched from the one or more sources [204] based on the configured source information. In an implementation, the processing of data comprises at least one of deduplication, transformation, and enrichment of the fetched data. For example, deduplication ensures data consistency by removing one or more duplicate entries, while transformation standardizes the data for a uniform processing. Further, enrichment supplements the data with an additional context or one or more insights.
For instance, the data comprises one or more unique identifiers which are used to determine one or more duplicate entries. The data may be processed according to requirements of one or more external servers. Further, the data may be enriched by deriving one or more new fields from one or more existing fields.
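Deriving a new field from existing fields, the enrichment case just mentioned, can be shown in a few lines. The error-rate formula and the record's field names are assumptions for illustration, not taken from the specification.

```python
# Illustrative sketch (assumed formula): enrich a record by deriving a new
# field, error_rate, from the existing error and attempt counters.

def derive_error_rate(record: dict) -> dict:
    """Return a copy of the record with a derived error_rate percentage."""
    enriched = dict(record)
    enriched["error_rate"] = 100.0 * record["errors"] / record["attempts"]
    return enriched

sample = derive_error_rate({"call_id": "C-42", "errors": 3, "attempts": 60})
```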
In an implementation, the data is processed via one or more data operations, which help refine and optimize the data for subsequent processing, ensuring accuracy, consistency, and relevance within the system.
In an implementation of the present solution, an access management unit [202e] of the system [200] provides an access of the processed data to one or more external servers.
In an implementation of the present solution, the processed data obtained from the processing unit [202c] is stored according to one or more standard formats of the one or more external servers. Further, the one or more standard formats may include pre-defined standard formats stored in the data storing unit [202d].
In an implementation of the present solution as disclosed herein, the normalization system [202] comprises a normalization layer [100b]. The normalization layer [100b] provides one or more essential functions for troubleshooting, operations, and overall management of the normalization process carried out by the system.
Further, the normalization layer [100b] provides the processed data (i.e., normalized data) to other sub-systems. The sub-systems include, but are not limited to, the Analysis Engine, the Correlation Engine, the Service Quality Manager, and the Streaming Engine.
At step [310], the method [300] as disclosed by the present disclosure comprises storing, by a storing unit of the normalization system [202], the processed data in a data lake [210].
The disclosure encompasses normalizing, by a normalizing unit [202f], the fetched data to a common data model before storing the same in the data lake [210]. The normalizing unit [202f] simplifies one or more configurations, reduces manual effort, and promotes interoperability for enabling efficient data integration across one or more diverse systems and formats. Additionally, the normalizing unit [202f] utilizes the source information to normalize the fetched data into the common data model. The source information (such as file type, format specification and field attributes) helps to ensure that the fetched data is in one or more standard formats according to the one or more external servers, so that the fetched data may be easily used for further analysis. The method [300] then terminates at step [312].
For example, upon configuring the source information, the processing unit [202c] proceeds to process data fetched from these sources, by performing tasks like deduplication, transforming the data into a standardized format, and enriching it with additional information. Subsequently, the data storing unit [202d] is configured to store the processed data into the data lake [210]. Furthermore, the normalizing unit [202f] simplifies configurations, reduces manual effort, and promotes interoperability by normalizing the fetched data into a common data model before storing it in the data lake [210], enabling efficient integration of data across diverse systems and formats, such as merging one or more network performance metrics from one or more CSV files and real-time data from one or more network devices.
For example, by adopting the method for providing a unified data normalizer within a network performance management system, organizations may significantly streamline their data management processes. Firstly, the present disclosure simplifies configuration procedures, eliminating the need for one or more complex
setup steps and reducing the potential for one or more errors. Secondly, the present disclosure minimizes manual effort by automating one or more tasks related to data processing and transformation, freeing up valuable resources for other critical activities. Moreover, the promotion of interoperability means that data from various sources and formats can seamlessly integrate with one another. Ultimately, these benefits or technical advantages collectively contribute to enhanced efficiency in data integration processes, empowering the organization to make more informed decisions and derive greater value from their data resources.
The present disclosure also encompasses a non-transitory computer readable storage medium storing instructions for providing a unified data normalizer within a network performance management system [100], the instructions including executable code which, when executed by a processor, causes the processor to: configure a source information of one or more sources [204] through an interface; fetch a data from the one or more sources [204]; process the data fetched from the one or more sources [204] based on the configured source information; and store the processed data in a data lake [210].
FIG. 4 illustrates an exemplary block diagram of a computing device [1000] upon which an embodiment of the present disclosure may be implemented. In an
implementation, the computing device [1000] implements the method for providing a unified data normalizer within a network performance management system [100] using the system [200]. In another implementation, the computing device [1000] itself implements the method for providing a unified data normalizer within a network performance management system [100] using one or more units configured within the computing device [1000], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
The computing device [1000] may include a bus [1002] or other communication mechanism for communicating information, and a hardware processor [1004]
coupled with bus [1002] for processing information. The hardware processor [1004] may be, for example, a general purpose microprocessor. The computer system [1000] may also include a main memory [1006], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [1002] for storing information and instructions to be executed by the processor [1004]. The main memory [1006] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [1004]. Such instructions, when stored in non-transitory storage media accessible to the processor [1004], render the computer system [1000] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system [1000] further includes a read only memory (ROM) [1008] or other static storage device coupled to the bus [1002] for storing static information and instructions for the processor [1004].
A storage device [1010], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [1002] for storing information and instructions.
The computer system [1000] may be coupled via the bus [1002] to a display [1012], such as a cathode ray tube (CRT), for displaying information to a computer user. An input device [1014], including alphanumeric and other keys, may be coupled to the bus [1002] for communicating information and command selections to the processor [1004]. Another type of user input device may be a cursor control [1016], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [1004], and for controlling cursor movement on the display [1012]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
The computer system [1000] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system [1000] causes or programs the computer system [1000] to be a special-purpose machine. According
to one embodiment, the techniques herein are performed by the computer system [1000] in response to the processor [1004] executing one or more sequences of one or more instructions contained in the main memory [1006]. Such instructions may be read into the main memory [1006] from another storage medium, such as the storage device [1010]. Execution of the sequences of instructions contained in the main memory [1006] causes the processor [1004] to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The computer system [1000] also may include a communication interface [1018] coupled to the bus [1002]. The communication interface [1018] provides a two-way data communication coupling to a network link [1020] that is connected to a local network [1022]. For example, the communication interface [1018] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [1018] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [1018] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
The computer system [1000] can send messages and receive data, including program code, through the network(s), the network link [1020] and the communication interface [1018]. In the Internet example, a server [1030] might transmit a requested code for an application program through the Internet [1028], the ISP [1026], the local network [1022] and the communication interface [1018]. The received code may be executed by the processor [1004] as it is received, and/or stored in the storage device [1010], or other non-volatile storage for later execution.
For example, a telecommunications organization implementing the method and system as encompassed by this disclosure in its network performance management system configures one or more APIs to collect data from various network equipment vendors, fetches real-time performance data, standardizes it, and stores it in a data lake [210]. The method and system for providing a unified data normalizer within a network performance management system enable the company to efficiently monitor and optimize network performance across diverse equipment, reducing downtime and ensuring a seamless experience for their customers.
As is evident from the above, the present disclosure provides a technically advanced solution of providing the unified data normalizer. The unified data normalization approach as disclosed in the present disclosure overcomes the limitations of the existing solutions, simplifies configuration, reduces manual effort, and enables efficient data integration across diverse systems and formats. Additionally, the method and system for providing a unified data normalizer increase the flexibility and versatility of the network performance management system to adjust as per the format of the input data. Further, the unified data normalization approach as disclosed in the present disclosure not only streamlines the configuration process but also significantly reduces the need for manual intervention. As a result, it facilitates seamless interoperability.
While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim:
1. A method for providing a unified data normalizer within a network
performance management system [100], comprising:
‐ configuring, by a setup unit [202a] of a normalization system [202], a source information of one or more sources [204] through an interface;
‐ fetching, by a fetching unit [202b] of the normalization system [202], a data from the one or more sources [204];
‐ processing, by a processing unit [202c] of the normalization system [202], the data fetched from the one or more sources [204] based on the configured source information; and
‐ storing, by a data storing unit [202d] of the normalization system [202], the processed data in a data lake [210].
2. The method as claimed in claim 1, wherein the source information comprises information associated with at least one of a file type, a file format, a data delimiter, a Network Element (NE) details, and corresponding one or more types.
3. The method as claimed in claim 1, further comprising providing, by an access management unit [202e] of the normalization system [202], an access of the processed data to one or more external servers.
4. The method as claimed in claim 1, wherein the configuring comprises a one-time provisioning of the source information of the one or more sources [204].
5. The method as claimed in claim 1, further comprises normalizing, by a normalizing unit [202f] of the normalization system [202], the fetched data to a common data model before storing in the data lake [210].
6. The method as claimed in claim 1, wherein the processing of the data comprises at least one of a deduplication, a transformation, and an enrichment of the fetched data.
7. A system [200] for providing a unified data normalizer within a network performance management system [100], comprising:
a normalization system [202] comprising: a setup unit [202a] configured to configure a source information of one or more sources [204] through an interface;
a fetching unit [202b] connected with the setup unit [202a], the fetching unit [202b] configured to fetch a data from the one or more sources [204];
a processing unit [202c] connected to the fetching unit [202b], the processing unit [202c] configured to process the data fetched from the one or more sources [204] based on the configured source information; and
a data storing unit [202d] connected to the processing unit [202c], the data storing unit [202d] configured to store the processed data in a data lake [210].
8. The system [200] as claimed in claim 7, wherein the source information comprises information associated with a file type, a file format, a data delimiter, a Network Element (NE) details, and corresponding one or more types.
9. The system [200] as claimed in claim 7, wherein the normalization system [202] further comprises an access management unit [202e] configured to provide an access of the processed data to one or more external servers.
10. The system [200] as claimed in claim 7, wherein the setup unit [202a] is configured to configure the source information by a one-time provisioning of the source information of the one or more sources [204].
11. The system [200] as claimed in claim 7, further comprising a normalizing unit [202f] configured to normalize the fetched data to a common data model before storing in the data lake [210].
12. The system [200] as claimed in claim 7, wherein to process the data, the processing unit [202c] is configured to perform at least one of a deduplication, a transformation, and an enrichment of the fetched data.
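For readers, the claimed pipeline (configure source information once, fetch, process via deduplication/transformation/enrichment, store in a data lake) can be sketched in a few lines of Python. This is an illustrative sketch only, not the disclosed implementation; every name below (`SourceConfig`, `Normalizer`, the field names) is a hypothetical stand-in for the units [202a]-[202f] of the claims.

```python
# Hypothetical sketch of the claimed normalization pipeline; names are illustrative.
from dataclasses import dataclass


@dataclass
class SourceConfig:
    """Source information provisioned one time (cf. claims 2, 4, 10)."""
    file_type: str    # e.g. "csv"
    file_format: str  # e.g. "performance-counters"
    delimiter: str    # data delimiter used when parsing fetched records
    ne_details: dict  # Network Element (NE) details


class Normalizer:
    def __init__(self):
        self.configs = {}    # setup-unit state [202a]
        self.data_lake = []  # stand-in for the data lake [210]

    def configure(self, source_id, config):
        # Setup unit [202a]: one-time provisioning of source information.
        self.configs[source_id] = config

    def fetch(self, source_id, raw_lines):
        # Fetching unit [202b]: split raw records using the configured delimiter.
        cfg = self.configs[source_id]
        return [line.split(cfg.delimiter) for line in raw_lines]

    def process(self, source_id, records):
        # Processing unit [202c]: deduplicate, then transform/enrich each
        # record into a common data model (cf. normalizing unit [202f]).
        cfg = self.configs[source_id]
        deduped = {tuple(r) for r in records}
        return [{"ne": cfg.ne_details.get("name"), "fields": list(r)}
                for r in deduped]

    def store(self, processed):
        # Data storing unit [202d]: persist processed records in the data lake.
        self.data_lake.extend(processed)
        return len(self.data_lake)
```

A usage pass under these assumptions would call `configure` once per source, then `fetch`, `process`, and `store` for each batch of incoming files.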
| # | Name | Date |
|---|---|---|
| 1 | 202321047643-STATEMENT OF UNDERTAKING (FORM 3) [14-07-2023(online)].pdf | 2023-07-14 |
| 2 | 202321047643-PROVISIONAL SPECIFICATION [14-07-2023(online)].pdf | 2023-07-14 |
| 3 | 202321047643-FORM 1 [14-07-2023(online)].pdf | 2023-07-14 |
| 4 | 202321047643-FIGURE OF ABSTRACT [14-07-2023(online)].pdf | 2023-07-14 |
| 5 | 202321047643-DRAWINGS [14-07-2023(online)].pdf | 2023-07-14 |
| 6 | 202321047643-FORM-26 [18-09-2023(online)].pdf | 2023-09-18 |
| 7 | 202321047643-Proof of Right [23-10-2023(online)].pdf | 2023-10-23 |
| 8 | 202321047643-ORIGINAL UR 6(1A) FORM 1 & 26)-301123.pdf | 2023-12-08 |
| 9 | 202321047643-ENDORSEMENT BY INVENTORS [20-05-2024(online)].pdf | 2024-05-20 |
| 10 | 202321047643-DRAWING [20-05-2024(online)].pdf | 2024-05-20 |
| 11 | 202321047643-CORRESPONDENCE-OTHERS [20-05-2024(online)].pdf | 2024-05-20 |
| 12 | 202321047643-COMPLETE SPECIFICATION [20-05-2024(online)].pdf | 2024-05-20 |
| 13 | Abstract.1.jpg | 2024-06-28 |
| 14 | 202321047643-FORM 3 [01-08-2024(online)].pdf | 2024-08-01 |
| 15 | 202321047643-Request Letter-Correspondence [09-08-2024(online)].pdf | 2024-08-09 |
| 16 | 202321047643-Power of Attorney [09-08-2024(online)].pdf | 2024-08-09 |
| 17 | 202321047643-Form 1 (Submitted on date of filing) [09-08-2024(online)].pdf | 2024-08-09 |
| 18 | 202321047643-Covering Letter [09-08-2024(online)].pdf | 2024-08-09 |
| 19 | 202321047643-CERTIFIED COPIES TRANSMISSION TO IB [09-08-2024(online)].pdf | 2024-08-09 |