
Method And System Of Monitoring A Network

Abstract: The present disclosure relates to a system (120) and a method (500) of monitoring a network (105). The method (500) includes the step of receiving data from one or more next Generation Node Bs (gNodeBs) (305) via a probing agent. The method (500) includes the step of extracting features from the received data. The method (500) includes the step of labelling the received data based on the features extracted from the received data. The method (500) further includes the step of categorizing the labelled data into a training group and a testing group. The method (500) includes the step of training a training model utilizing the labelled data associated with the training group. The method (500) includes the step of evaluating the performance of the training model utilizing the labelled data associated with the testing group. Ref. Fig. 5


Patent Information

Application #
Filing Date
07 October 2023
Publication Number
15/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad 380006, Gujarat, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM OF MONITORING A NETWORK
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of network management, and more particularly relates to a system and a method of monitoring the network.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service providers have been implementing upgrades to enhance service quality so as to keep pace with such high demand. With the advancement of technology, there is a demand for telecommunication services to incorporate up-to-date features so as to enhance user experience and implement advanced monitoring mechanisms. Regular data analyses are performed to observe issues beforehand, for which many data collection and assessment practices are implemented in a network.
[0003] Identification of an anomaly or the reason of any network breakdown in the modern telecommunications industry is a time-consuming and challenging operation due to the massive network infrastructure and the volume of data flowing across the network. To determine the cause of failure, the end user must manually analyze the vast amount of network data. Even with a live streaming data dashboard, the user must continuously monitor the data flow for any significant deviations. Examples of these anomalies include abnormal signal strength, call drop rates, and unusual traffic patterns.
[0004] Thus, in traditional systems, identifying the problem is a time- and resource-consuming activity, and issues are often discovered only after they occur. This delay in addressing network issues can result in customer dissatisfaction and service disruptions.
[0005] There is a need for a mechanism, more specifically a system and a method thereof, to detect anomalies through proactive monitoring of user network data to determine underlying issues like weak signal strength, network congestion, or resource constraints.

SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and system of monitoring a network.
[0007] In one aspect of the present invention, the method of monitoring the network is disclosed. The method includes the step of receiving data from one or more next Generation Node Bs (gNodeBs) via a probing agent. The method includes the step of extracting features from the received data. The method further includes the step of labelling the received data based on the features extracted from the received data. The method further includes the step of categorizing the labelled data into a training group and a testing group. The method further includes the step of training a training model utilizing the labelled data associated with the training group. The method further includes the step of evaluating the performance of the training model utilizing the labelled data associated with the testing group. Further, the method includes the step of detecting one or more anomalies in the network based on the evaluation of the performance of the training model.
[0008] In an embodiment, the method includes the step of updating the labelled data associated with the training group and the testing group based on receipt of the data in real time, wherein upon updating the labelled data associated with the training group and the testing group, the one or more processors utilizes the updated data for evaluating the performance of the training model and the detection of the one or more anomalies.
[0009] In an embodiment, the method includes the step of transmitting one or more alerts, on detection of the one or more anomalies, to a user interface of a user equipment.
[0010] In an embodiment, on receipt of the data from the gNodeBs, the method includes the step of converting the received data into a standard format.
[0011] In an embodiment, the features include at least call parameters, geographic coordinates, and network load metrics.
[0012] In an embodiment, the performance of the training model utilizing the labelled data associated with the testing group is evaluated based on one or more metrics, wherein the one or more metrics includes at least a root mean square error and a mean absolute error.
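The two evaluation metrics named above can be sketched as follows; this is a minimal illustration, and the observed and predicted call drop rates are hypothetical sample values, not data from the specification:

```python
import math

def rmse(actual, predicted):
    # Root mean square error: square root of the mean squared deviation.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    # Mean absolute error: mean of the absolute deviations.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical observed vs. model-predicted call drop rates on the testing group.
observed = [0.02, 0.03, 0.10, 0.04]
predicted = [0.02, 0.04, 0.07, 0.04]
print(rmse(observed, predicted))
print(mae(observed, predicted))
```

A lower value for either metric indicates that the training model's predictions track the testing group more closely.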
[0013] In an embodiment, the one or more anomalies include at least a fluctuation in signal strength and an increase in call drops.
[0014] In another aspect of the present invention, the system of monitoring the network is disclosed. The system includes a receiving unit configured to receive data from one or more next Generation Node Bs (gNodeBs) via a probing agent. The system further includes an extraction unit configured to extract features from the received data. The system further includes a labelling unit configured to label the received data based on the features extracted from the received data. The system further includes a categorizing unit configured to categorize the labelled data into a training group and a testing group. The system further includes a training unit configured to train a training model utilizing the labelled data associated with the training group. The system includes an evaluation unit configured to evaluate the performance of the training model utilizing the labelled data associated with the testing group. The system further includes a detection unit configured to detect one or more anomalies in the network based on the evaluation of the performance of the training model.
[0015] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive data from one or more next Generation Node Bs (gNodeBs) via a probing agent. The processor is configured to extract features from the received data. The processor is configured to label the received data based on the features extracted from the received data. The processor is configured to categorize the labelled data into a training group and a testing group. The processor is configured to train a training model utilizing the labelled data associated with the training group. The processor is configured to evaluate the performance of the training model utilizing the labelled data associated with the testing group. The processor is configured to detect one or more anomalies in the network based on the evaluation of the performance of the training model.
[0016] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0018] FIG. 1 is an exemplary block diagram of an environment of monitoring a network, according to one or more embodiments of the present invention;
[0019] FIG. 2 is an exemplary block diagram of a system of monitoring the network, according to one or more embodiments of the present invention;
[0020] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0021] FIG. 4 is a flowchart diagram of monitoring the network, according to one or more embodiments of the present invention; and
[0022] FIG. 5 is a schematic representation of a method of monitoring the network, according to one or more embodiments of the present invention.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.

DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] FIG. 1 illustrates an exemplary block diagram of an environment 100 for monitoring a network 105, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 110, a server 115, the network 105, and a system 120 communicably coupled to each other for monitoring the network 105.
[0028] In an embodiment, managing the network 105 refers to overseeing and optimizing various aspects of the network's performance, reliability, and resource allocation. Managing the network involves monitoring network conditions, identifying issues such as, but not limited to, congestion or anomalies, and taking corrective or preventive actions to maintain smooth operation and quality of service (QoS). The key objectives of network management include, but are not limited to, overseeing network performance, optimizing resource allocation, maintaining reliability, and monitoring network conditions. Examples of managing the network 105 include, but are not limited to, monitoring network conditions, identifying issues such as congestion or anomalies, taking corrective or preventive actions, and detecting security threats.
[0029] As per the illustrated embodiment and for the purpose of description and illustration, the UE 110 includes, but not limited to, a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0030] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0031] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, an entity operating the server 115 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0032] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0033] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, or process one or more messages, packets, signals, waves, or voltage or current levels, or some combination thereof. The network 105 may further include a Voice over Internet Protocol (VoIP) network.
[0034] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is configured to monitor the network 105. As per one or more embodiments, the system 120 is adapted to be embedded within the server 115 or deployed as an individual entity.
[0035] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0036] FIG. 2 is an exemplary block diagram of the system 120 for monitoring the network 105, according to one or more embodiments of the present invention.
[0037] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a user interface 215, and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0038] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0039] In an embodiment, the user interface 215 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 220.
[0040] The database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key-value database, a search database, a cache database, and so forth. The foregoing examples of database 220 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0041] In order for the system 120 to monitor the network 105, the processor 205 includes one or more modules/units. In one embodiment, the one or more modules/units include, but are not limited to, a receiving unit 225, a conversion unit 230, an extraction unit 235, a labelling unit 240, a categorizing unit 245, a training unit 250, an evaluation unit 255, a detection unit 260, a transmitting unit 265, and an updating unit 270 communicably coupled to each other for monitoring the network 105.
[0042] In one embodiment, the one or more modules may be used in combination or interchangeably for monitoring the network 105.
[0043] The receiving unit 225, the conversion unit 230, the extraction unit 235, the labelling unit 240, the categorizing unit 245, the training unit 250, the evaluation unit 255, the detection unit 260, the transmitting unit 265, and the updating unit 270, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the functionalities of the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0044] In an embodiment, the receiving unit 225 is configured to receive data from one or more next Generation Node Bs (gNodeBs) 305 (as shown in FIG. 3) via a probing agent. The probing agent refers to a software or hardware component used to monitor, collect, and analyze data from the one or more gNodeBs 305. The data received from the one or more gNodeBs 305 via the probing agent includes various metrics such as, but not limited to, call parameters, geographic coordinates, network load metrics, and latency and delay metrics. For example, the geographic coordinates pertain to the location data of the gNodeBs 305 and a connected user device. The network load metrics pertain to the load on the gNodeB 305, which includes the number of users and the amount of traffic being handled by the gNodeB 305.
[0045] In one embodiment, the gNodeBs 305 are the Radio Access Network (RAN) elements in the network 105 that facilitate wireless communication between user devices (such as smartphones, tablets, and IoT devices) and the core network. The gNodeBs 305 play a critical role in ensuring efficient data transfer, user mobility management, and overall network performance.

[0046] In an embodiment, the call parameters include information such as, but not limited to, call setup success rates, call duration, dropped calls, and handover events. The geographic coordinates refer to location data linked to the one or more gNodeBs 305 and connected user devices. In particular, when the user device connects to the one or more gNodeBs 305, the connected user device shares its location data which can include the Global Positioning System (GPS) coordinates. Herein, the one or more gNodeBs 305 provides the geographic coordinates to the receiving unit 225. The geographic coordinates aid in the analysis of traffic patterns and coverage areas. The network load metrics includes, but not limited to, data on network traffic, bandwidth usage, the number of active connections, and system throughput, providing a comprehensive view of the network capacity and performance. The latency and delay metrics on the other hand focus on network delays, including, but not limited to, packet transmission times and round-trip delays between the one or more gNodeBs 305 and connected devices, which are critical for understanding the responsiveness and efficiency of the network 105.
[0047] Upon receiving the data from the one or more gNodeBs 305 via the probing agent, the conversion unit 230 is configured to convert the received data into a standard format. Standardizing the data ensures consistency and compatibility, making the data easier to analyze, compare, and integrate with other data processes. Converting the received data into the standard format is crucial for maintaining data integrity and facilitating accurate analysis and monitoring. In one embodiment, the data received from the one or more gNodeBs 305 is in different formats such as at least one of, but not limited to, plain text, Hyper Text Markup Language (HTML), binary, Comma-Separated Values (CSV), eXtensible Markup Language (XML), JavaScript Object Notation (JSON), and images. In one embodiment, the standard format includes at least one of, but not limited to, the HTML, the XML, the CSV, and the JSON.
[0048] For example, let us assume that the data received from the one or more gNodeBs 305 is in the XML format and the standard format is the JSON format. In order to convert the received data from the XML format into the JSON format, the received data is parsed, which involves reading the received data and breaking the received data into at least one of, but not limited to, elements and attributes. The attributes are data objects containing one or more key-value pairs and arrays, and the elements are used as containers to store text, elements, and attributes. Herein, if the XML data is, for example, <name id="..." gender="...">Alan</name>, then the id and the gender are attributes and the name is an element. Further, the XML elements and attributes are mapped to JSON elements and attributes. Thereafter, using the prestored libraries in the conversion unit 230, the received data in the XML format is converted into the JSON format.
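The XML-to-JSON conversion described above can be sketched as follows, assuming Python's standard-library XML parser in place of the unnamed prestored libraries; the record contents (id and gender values) are hypothetical:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_text):
    # Parse the XML record, then map its attributes (key-value pairs)
    # and its element text into one flat dict before serializing to JSON.
    root = ET.fromstring(xml_text)
    record = dict(root.attrib)      # attributes become JSON key-value pairs
    record[root.tag] = root.text    # element text is keyed by its tag name
    return json.dumps(record)

# Hypothetical record mirroring the example in the paragraph above.
print(xml_to_json('<name id="1" gender="male">Alan</name>'))
```

Nested elements would need a recursive mapping, but the flat case shows the parse-map-serialize sequence the paragraph describes.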
[0049] Thereafter, the extraction unit 235 is configured to extract features from the received data. The features include at least call parameters, geographic coordinates, network load metrics, time of the day, and day of the week. In one embodiment, the extraction unit 235 extracts features from the received data based on predefined criteria, such as a requirement that the extracted features include at least one of, but not limited to, the call parameters, the geographic coordinates, and the network load metrics. Herein, the predefined criteria are defined by at least one of, but not limited to, the user and the model, based on previous model training experience.
[0050] In an embodiment, the call parameters are data points related to the details and performance of voice or video calls made over the network 105. Examples of the call parameters include, but are not limited to, call setup time, call drop rate, and call quality metrics. The geographic coordinates refer to the location data of the network 105 obtained from the UE 110. The location data is used to understand the geographical distribution of network activity and coverage. Examples of the geographic coordinates include, but are not limited to, base station location and user device location. The network load metrics provide information about the usage and capacity of network resources. Examples of the network load metrics include, but are not limited to, traffic volume, bandwidth utilization, and number of active connections.
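The criteria-driven extraction step can be sketched as follows; the feature names and the sample record are illustrative assumptions, not values from the specification:

```python
# Predefined criteria: the feature names the extraction unit must keep
# (illustrative names standing in for call parameters, geographic
# coordinates, and network load metrics).
REQUIRED_FEATURES = ["call_drop_rate", "base_station_location", "traffic_volume"]

def extract_features(record):
    # Keep only the fields named in the predefined criteria.
    return {key: record[key] for key in REQUIRED_FEATURES if key in record}

sample = {
    "call_drop_rate": 0.03,
    "base_station_location": (19.07, 72.87),
    "traffic_volume": 1250,
    "vendor_tag": "X",          # not in the criteria, so dropped
}
print(sorted(extract_features(sample)))
```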
[0051] Upon extracting the features from the received data, the labelling unit 240 is configured to label the received data based on the features extracted from the received data. The labelling unit 240 assigns labels to the data based on the extracted features, categorizing the data into predefined classes such as, but not limited to, normal or anomalous, and performance levels like high quality, moderate quality, or poor quality, depending on whether the data fits expected patterns or shows deviations. For example, let us assume that the extracted features from the received data include at least one of, but not limited to, the call parameters and the network load metrics such as a latency, a packet loss, and a jitter. If the call parameters and the network load metrics fit the expected patterns, then the features extracted from the received data are considered as high quality. If the call parameters and the network load metrics deviate from the expected patterns, then the features extracted from the received data are considered as poor quality. Let us consider that the expected patterns include a complete call conversation and no call drops; so, if a call drop happens, then the received data will be labelled as poor quality by the labelling unit 240.
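The labelling step can be sketched as a simple expected-pattern check; the threshold values and feature names below are illustrative assumptions, since the specification does not fix numeric limits:

```python
def label_record(features, max_drop_rate=0.05, max_latency_ms=100):
    # Label the record "high quality" when it fits the expected pattern,
    # "poor quality" when call drops or latency deviate from it.
    fits_pattern = (features["call_drop_rate"] <= max_drop_rate
                    and features["latency_ms"] <= max_latency_ms)
    return "high quality" if fits_pattern else "poor quality"

print(label_record({"call_drop_rate": 0.02, "latency_ms": 40}))   # within the pattern
print(label_record({"call_drop_rate": 0.09, "latency_ms": 40}))   # call drops deviate
```

A production labeller would also cover the moderate-quality and anomalous classes mentioned above; the two-way split keeps the sketch short.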
[0052] Once the data has been labelled based on the extracted features, the categorizing unit 240 is configured to categorize the labelled data into a training group and a testing group. In one embodiment, based on past or historical training of the model, the model provides an input to the categorizing unit 240 indicating that, for training, the labelled data must be split randomly into the training group and the testing group. For example, the ratio for splitting the training group and the testing group includes at least one of, but not limited to, 70:30, 80:20, or 90:10. As a further example, consider a dataset represented by the numbers 1 to 100. The dataset is randomly split into the training group and the testing group in the ratio of 80:20, such that 80 of the numbers form the training group and the remaining 20 form the testing group. The training group is a subset of the labelled data used to train machine learning models or algorithms, and helps the machine learning models learn patterns and relationships within the labelled data. By analyzing the labelled data, the machine learning models adjust their parameters to recognize normal and anomalous behavior based on the features and labels provided. The purpose of the categorization is to create separate datasets for different stages of the modeling process: the training group is used to teach the model about patterns and relationships in the data, while the testing group is used to assess how well the model performs on previously unseen data. For example, if the labelled data includes details on call quality, network load, and geographic coordinates, the training group will consist of a significant portion of the labelled data, with indicators specifying whether each case is normal or anomalous. The machine learning models then use the labelled data to learn how to detect patterns related to issues such as, but not limited to, high call drop rates or network congestion.
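The random 80:20 split over the 1-to-100 example dataset can be sketched as follows. The helper name `split_dataset` and the fixed seed are illustrative assumptions made so the example is reproducible; the disclosure does not prescribe a particular split implementation.

```python
import random

def split_dataset(data, train_ratio=0.8, seed=0):
    """Randomly split labelled data into a training group and a testing group."""
    shuffled = list(data)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for the example
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

dataset = list(range(1, 101))  # the 1..100 example dataset from the text
train_group, test_group = split_dataset(dataset, train_ratio=0.8)
```

The same helper covers the other mentioned ratios by passing `train_ratio=0.7` or `0.9`.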
[0053] The testing group is a separate subset of the labelled data used to evaluate the performance of the trained training model or algorithm. By providing fresh instances of call data, the testing group evaluates the trained training model's or algorithm's ability to generalize to new, unseen data, assessing its accuracy, reliability, and effectiveness in detecting anomalies. For example, if the training model was trained to detect anomalies in call setup times, the testing group may contain new instances of call data to see how well the training model identifies new anomalies or maintains its performance.
[0054] Upon categorizing the labelled data into the training group and the testing group, the training unit 245 is configured to train the training model utilizing the labelled data associated with the training group. In other words, the labelled data is fed to the model by the training unit 245 for training, teaching the model about patterns and relationships in the data, while the testing group is reserved for assessing how well the model performs on previously unseen data.
[0055] Subsequent to training the training model utilizing the labelled data associated with the training group, the evaluation unit 250 is configured to evaluate the performance of the training model utilizing the labelled data associated with the testing group. By evaluating the model using data from the testing group, the evaluation unit 250 helps ensure that the model is not simply memorizing the training data but can also generalize well to new data. The evaluation unit 250 is crucial for reliable anomaly detection and performance assessment in the network monitoring system.
[0056] The performance of the training model utilizing the labelled data associated with the testing group is evaluated based on one or more metrics. The one or more metrics comprise at least a root mean square error and a mean absolute error. The root mean square error measures the square root of the average of the squared differences between predicted values and actual values, providing an aggregate measure of the model's prediction accuracy that penalizes larger errors more severely. Examples of predictions evaluated with the root mean square error include, but are not limited to, signal strength prediction, network load prediction, and latency prediction. The mean absolute error measures the average of the absolute differences between predicted values and actual values, providing a straightforward measure of prediction accuracy that treats all errors equally without amplifying larger errors.
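The two evaluation metrics are standard formulas and can be computed directly. The sample latency values below are invented purely to exercise the functions; they are not data from the disclosure.

```python
import math

def rmse(actual, predicted):
    """Root mean square error: penalizes large errors more severely."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error: treats all errors equally."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative observed vs. predicted latency values (ms), assumed for the example.
actual_latency = [100, 110, 120, 130]
predicted_latency = [102, 108, 140, 128]

rmse_value = rmse(actual_latency, predicted_latency)  # dominated by the single 20 ms error
mae_value = mae(actual_latency, predicted_latency)    # average absolute error
```

Note how the one large error (20 ms) pushes the RMSE well above the MAE, which is exactly the "penalizes larger errors more severely" property described above.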
[0057] Further, the detection unit 255 is configured to detect the one or more anomalies in the network 105 by analyzing the performance evaluation of the training model. The one or more anomalies are detected by comparing the model's predictions with expected outcomes and identifying deviations that indicate unusual network behaviors, such as signal fluctuations or increased packet loss. The detection unit 255 uses metrics like the root mean square error or the mean absolute error to assess the model's accuracy and identifies anomalies based on significant deviations from normal network patterns. The one or more anomalies refer to unusual or unexpected behaviors in the network 105 that deviate from normal operation, and include at least one of, but not limited to, fluctuation in signal strength, increase in network traffic, and increase in call drops. For example, assume that the network traffic handling capacity of the network 105 is 5000 requests in a day, which acts as the threshold or the expected outcome. If the model predicts that the network traffic load has increased to 20000 requests, the increased network traffic is an unusual pattern in the network 105 and is inferred as an anomaly. An anomaly also occurs when the signal strength in the network 105 varies unexpectedly or significantly. In a stable network 105, signal strength is expected to remain consistent within certain thresholds; however, variations may result from interference, physical obstructions, or hardware malfunctions. Such variations may degrade the quality of service, leading to issues such as, but not limited to, poor voice or video call quality, dropped connections, or slower data speeds. A call drop refers to the sudden termination of an ongoing voice or video call due to network failure. An increase in call drops indicates a recurring or widespread issue affecting network stability, which could stem from factors like network congestion, faulty equipment, poor signal coverage, or issues with network routing. Frequent call drops degrade user experience and signal larger performance issues within the network infrastructure.
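The two worked examples above (a traffic prediction exceeding the 5000-requests-per-day capacity, and signal strength swinging outside an expected band) can be sketched as simple threshold checks. The function names and the 10 dB swing threshold are illustrative assumptions, not from the disclosure.

```python
def detect_traffic_anomaly(predicted_requests, capacity=5000):
    """Flag an anomaly when predicted daily traffic exceeds the expected capacity.

    capacity=5000 requests/day mirrors the worked example in the text.
    """
    return predicted_requests > capacity

def detect_signal_fluctuation(signal_samples_dbm, max_swing_db=10):
    """Flag an anomaly when signal strength varies beyond an assumed swing threshold."""
    return max(signal_samples_dbm) - min(signal_samples_dbm) > max_swing_db

# The model predicts 20000 requests against a 5000-request capacity: anomalous.
traffic_is_anomalous = detect_traffic_anomaly(20000)

# Signal readings (dBm): a 3 dB swing is stable, a 25 dB swing is anomalous.
stable = detect_signal_fluctuation([-70, -72, -69])
fluctuating = detect_signal_fluctuation([-60, -85, -62])
```

A production detector would compare model predictions against learned baselines rather than fixed constants, but the deviation-from-expected-outcome logic is the same.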
[0058] The updating unit 265 is configured to update the labelled data associated with the training group and the testing group based on receipt of the data in real time. For example, the system 120 continuously or periodically receives data from the one or more gNodeBs 305, based on which the received data is labelled. In other words, the updating unit 265 updates the labelled data with new data received from the one or more gNodeBs 305. Upon updating the labelled data associated with the training group and the testing group, the training unit 245, the evaluation unit 250, and the detection unit 255 utilize the updated data for evaluating the performance of the training model and detecting the one or more anomalies.
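One common way to realize this real-time update is a bounded buffer of the most recent labelled records that the downstream units re-split on demand. The class below is a minimal sketch under that assumption; the disclosure does not specify a buffer or its capacity.

```python
from collections import deque

class LabelledDataStore:
    """Hold the most recent labelled records for retraining (illustrative sketch)."""

    def __init__(self, max_size=1000):
        # Oldest records are discarded automatically once capacity is reached.
        self.records = deque(maxlen=max_size)

    def update(self, new_records):
        """Append freshly labelled records as they arrive from the gNodeBs."""
        self.records.extend(new_records)

    def split(self, train_ratio=0.8):
        """Re-derive the training and testing groups from the current buffer."""
        data = list(self.records)
        cut = int(len(data) * train_ratio)
        return data[:cut], data[cut:]

store = LabelledDataStore(max_size=3)
store.update(["rec1", "rec2", "rec3", "rec4"])  # "rec1" falls out of the buffer
train_part, test_part = store.split()
```

Re-splitting after each update keeps evaluation and anomaly detection aligned with current network conditions.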
[0059] Upon detecting the one or more anomalies in the network 105 based on the evaluation of the performance of the training model, the transmitting unit 260 is configured to transmit one or more alerts on detection of the one or more anomalies to one of the user interface 215 and the UE 110. The alerts appear as notifications or messages on the user interface 215, inform users of the detected network 105 issues, and may include recommendations or instructions for addressing the problem. For example, when one or more anomalies are detected in the network 105, various alerts can be generated to inform users about potential issues in the network 105. In one embodiment, the transmitting unit 260 transmits at least one of, but not limited to, notifications and automated emails to the users including the details of the detected one or more anomalies. By utilizing the one or more alerts, the user can ensure timely and effective responses to anomalies, minimizing potential issues and improving overall system reliability.
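The content of such an alert can be composed as a short structured message. The format below is an illustrative assumption; the disclosure only states that alerts carry details of the anomaly and may include recommendations.

```python
def build_alert(anomaly_type, observed, expected):
    """Compose a notification message for a detected anomaly (hypothetical format)."""
    return (
        f"ALERT: {anomaly_type} detected - observed {observed}, "
        f"expected at most {expected}. Please investigate."
    )

# Reuses the worked traffic example: 20000 observed requests vs. 5000 capacity.
message = build_alert("network traffic spike", 20000, 5000)
```

The same message string could feed either the on-screen notification path or the automated-email path mentioned above.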
[0060] FIG. 3 is an exemplary block diagram of an architecture 300 implemented in the system of FIG. 2, according to one or more embodiments of the present invention.
[0061] The architecture 300 includes the gNodeBs 305, the probing unit 310, data consumers 315, a processing hub interface 320, a processing hub 325, a data pre-processing unit 330, a model training unit 335, an anomaly detection module 340, an alerting and response unit 345, and the database 220. The term “one or more gNodeBs 305” is referred to as “gNodeB 305” hereinafter, without limiting the scope of the disclosure.
[0062] The gNodeB 305 base stations provide connectivity within the network 105 by connecting the UE 110 to the network 105, generating data related to network conditions, user activity, and performance metrics, and transmitting this network data to the probing unit 310 for extraction and analysis. The gNodeB 305 provides connectivity and generates network data, which is captured by the probing unit 310.
[0063] The probing unit 310 serves as an intermediary for data extraction from the gNodeB 305 base stations by capturing network traffic data from multiple gNodeBs 305, extracting relevant features such as, but not limited to, traffic volume, signal strength, and user behavior metrics, and forwarding the processed data to the processing hub 325 for further analysis. The probing unit 310 extracts features from the raw network data and sends the extracted features to the processing hub 325.
[0064] Upon receiving the extracted features from the probing unit 310, the processing hub 325 processes and prepares the data, trains machine learning models, and uses these models to detect and indicate the one or more anomalies in the network 105. The processing hub 325 is the central component responsible for handling and analyzing data, and includes several key functions such as the data pre-processing 330, the model training 335, the anomaly detection module 340, and the alerting and response unit 345.
[0065] The data pre-processing 330 cleans raw data by removing noise and irrelevant information, organizes the data into a format suitable for analysis, and prepares the data through normalization, handling of missing values, and feature selection. The model training 335 uses the pre-processed data to train the machine learning model by feeding the data into algorithms that learn patterns and relationships. Further, the model training 335 validates the model by testing it on separate datasets and refining it based on performance.
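The pre-processing described above (dropping incomplete records, then normalizing numeric fields) can be sketched as follows. The field name `traffic_volume` and the min-max normalization choice are illustrative assumptions; the disclosure does not fix a particular normalization scheme.

```python
def preprocess(records):
    """Clean raw records and min-max normalize the traffic_volume field.

    Records with a missing traffic_volume are dropped (a simple way of
    'handling missing values'); surviving values are scaled to [0, 1].
    """
    clean = [r for r in records if r.get("traffic_volume") is not None]
    values = [r["traffic_volume"] for r in clean]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero when all values are equal
    return [
        {**r, "traffic_volume": (r["traffic_volume"] - lo) / span}
        for r in clean
    ]

raw = [
    {"traffic_volume": 100},
    {"traffic_volume": None},   # incomplete record, removed
    {"traffic_volume": 300},
]
prepared = preprocess(raw)
```

Imputation (filling missing values instead of dropping the record) would be an equally valid reading of "handling missing values".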
[0066] The anomaly detection module 340 employs the trained model to analyze new data in real time, identifying deviations from normal behavior and flagging unusual patterns or one or more anomalies, which may indicate potential issues or irregularities in the network 105. Once the model has been trained by the model training 335 on the pre-processed data categorized into the training set, the anomaly detection module 340 tests the trained model, identifies the one or more anomalies by comparing results with the expected outcomes of the model training 335, and flags any unusual or suspicious network activities.
[0067] The processing hub interface 320 serves as the interface between the data collection of the gNodeB 305 and the probing unit 310 on one side, and the processing hub 325 on the other, channeling network data to the pre-processing and machine learning modules for analysis. The processing hub interface 320 receives raw data from the gNodeBs 305 and the probing unit 310 and forwards it to the processing hub 325. The processing hub interface 320 ensures that data flows smoothly from collection points to the analysis component and manages data transmission protocols to ensure that data is correctly formatted and efficiently transferred.
[0068] The data consumers 315 utilize the processed data for various operational and analytical purposes, including, but not limited to, network management systems, performance monitoring tools, and reporting dashboards, by requesting data from the processing hub 325 or database 220 to generate reports based on detected one or more anomalies. The data consumers 315 may request updated information about detected one or more anomalies, which helps to understand network issues and trends. By analyzing the data, the data consumers 315 can produce reports that provide insights into network performance, identify areas needing attention, and support decision-making processes aimed at optimizing network operations and addressing potential problems effectively.
[0069] The database 220 plays a crucial role by storing historical data, processed information, and trained models. The database 220 keeps raw, pre-processed, and analyzed data for future reference and long-term storage. Additionally, the database 220 supports retraining by providing the data necessary for updating machine learning models, allowing for continuous improvement and adaptation to evolving network conditions. The database 220 also facilitates querying, enabling the retrieval of data for reporting, further analysis, or troubleshooting purposes.
[0070] FIG. 4 is an exemplary flowchart diagram of monitoring the network 105, according to one or more embodiments of the present invention.
[0071] At step 405, the process begins with collecting data, where the probing unit 310 collects raw data from various gNodeBs 305. The gNodeB 305 base stations generate network data that forms the foundation of the training dataset for the machine learning model. The probing unit 310 is responsible for processing the raw data, which includes, but is not limited to, cleaning the data to remove noise and irrelevant information, handling missing values, and converting the data into a suitable format for analysis. Additionally, the probing unit 310 extracts relevant features such as, but not limited to, traffic volume, signal strength, and user behavior metrics, which are crucial for effective model training. The processed and feature-extracted data is used to train the machine learning model, enabling the model to learn patterns and relationships in the network data for subsequent detection of one or more anomalies.
[0072] At step 410, after collecting and processing the data, the relevant features such as, but not limited to, call parameters, geographic coordinates, and network load metrics are extracted from the collected data to serve as inputs for the anomaly detection model. The data is labeled with descriptive tags; for example, network sessions may be labeled as good, acceptable, or poor based on factors like latency, packet loss, and jitter, while communication sessions may be marked as completed or dropped. After labeling, the labeled data is divided into training and testing sets: the training set is used to train the machine learning model, and the testing set is reserved for evaluating the model's performance.
[0073] At step 415, the performance of the trained model is evaluated using a testing dataset, employing metrics such as the root mean square error and the mean absolute error to assess its accuracy and effectiveness. The model's capability to detect anomalies, such as, but not limited to, signal strength fluctuations and increased call drops, is critical for identifying network issues that may affect performance and user experience. The anomaly detection process supports proactive network management and optimization by providing timely alerts about potential problems. To ensure the model remains effective as network conditions evolve, the system continuously collects new data, periodically retrains the model with updated datasets, and refines detection capabilities. This iterative process helps adapt to changing network requirements and maintains high performance in detecting and addressing emerging anomalies.
[0074] FIG. 5 is a schematic representation of a method 500 of monitoring the network 105, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0075] At step 505, the method 500 includes the step of receiving data from the gNodeBs 305 via the probing agent. On receipt of the data from the gNodeBs 305, the method includes the step of converting the received data into a standard format. Standardizing the data ensures consistency and compatibility, making it easier to analyze, compare, and integrate with other data processes. Converting the received data into the standard format is crucial for maintaining data integrity and facilitating accurate analysis and monitoring.
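The conversion to a standard format can be sketched as mapping heterogeneous probe records onto one fixed schema. Both the source key names (`lat`, `latency`, `loss`, `gnodeb_id`) and the target schema are hypothetical; real gNodeB probes may emit entirely different field names.

```python
def to_standard_format(raw):
    """Convert a heterogeneous gNodeB record into an assumed standard schema.

    Accepts either 'latency' or the shorter 'lat' key, and coerces string
    values to numeric types so downstream analysis sees uniform records.
    """
    return {
        "latency_ms": float(raw.get("latency", raw.get("lat", 0))),
        "packet_loss": float(raw.get("loss", 0)),
        "source": str(raw.get("gnodeb_id", "unknown")),
    }

standardized = to_standard_format({"lat": "42", "loss": "0.01", "gnodeb_id": 7})
```

Whatever the real schema, the key property is that every record leaving this step has the same keys and types, which is what makes the later feature extraction and comparison steps reliable.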
[0076] At step 510, the method 500 includes the step of extracting features from the received data. The features include at least call parameters, geographic coordinates, and network load metrics. The call parameters are data points related to the details and performance of voice or video calls made over the network 105. Examples of the call parameters include, but are not limited to, call setup time, call drop rate, and call quality metrics. The geographic coordinates refer to the location from which the network data originates at the user devices, and are used to understand the geographical distribution of network activity and coverage. Examples of the geographic coordinates include, but are not limited to, base station location and user device location. The network load metrics provide information about the usage and capacity of network resources. Examples of the network load metrics include, but are not limited to, traffic volume, bandwidth utilization, and number of active connections.
[0077] At step 515, the method 500 includes the step of labelling the received data based on the features extracted from the received data. Labels are assigned to the data based on the extracted features, categorizing the data into predefined classes such as, but not limited to, normal or anomalous, and performance levels like high quality, moderate quality, or poor quality, depending on whether the data fits expected patterns or shows deviations.
[0078] At step 520, the method 500 includes the step of categorizing the labelled data into the training group and the testing group. The training group is a subset of the labelled data used to train machine learning models or algorithms, and helps the machine learning models learn patterns and relationships within the labelled data. The testing group is a separate subset of the labelled data used to evaluate the performance of the trained machine learning model or algorithm.
[0079] At step 525, the method 500 includes the step of training the training model utilizing the labelled data associated with the training group. The purpose of the preceding categorization is to create separate datasets for different stages of the modeling process: the training group is used to teach the model about patterns and relationships in the data, while the testing group is used to assess how well the model performs on previously unseen data.
[0080] The one or more anomalies refer to unusual or unexpected behaviors in the network 105 that deviate from normal operation, and include at least fluctuation in signal strength and increase in call drops. An anomaly occurs, for example, when the signal strength in the network 105 varies unexpectedly or significantly.
[0081] At step 530, the method 500 includes the step of evaluating the performance of the training model utilizing the labelled data associated with the testing group. The labelled data associated with the training group and the testing group is updated based on receipt of the data in real time, and upon such updating, the updated data is utilized for evaluating the performance of the training model and for detecting the one or more anomalies.
[0082] At step 535, the method 500 includes the step of detecting the one or more anomalies in the network 105 based on the evaluation of the performance of the training model. Further, one or more alerts are transmitted on detection of the one or more anomalies to the user interface 215 of the UE 110.
The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive data from the gNodeBs 305 via a probing agent. The processor 205 is configured to extract features from the received data. The processor 205 is configured to label the received data based on the features extracted from the received data. The processor 205 is configured to categorize the labelled data into the training group and the testing group. The processor 205 is configured to train the training model utilizing the labelled data associated with the training group. The processor 205 is configured to evaluate performance of the training model utilizing the labelled data associated with the testing group. The processor 205 is configured to detect the one or more anomalies in the network 105 based on the evaluation of the performance of the training model.
[0083] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0084] The present disclosure provides a technical advancement that offers a proactive anomaly detection system for network monitoring, using machine learning to improve performance evaluation. The invention processes real-time data from network nodes, updates models continuously, and detects anomalies like signal fluctuations and call drops. Key features, such as call parameters and network load metrics, are automatically extracted and labeled, enabling precise anomaly identification. The invention supports real-time alerts and data conversion for seamless integration, while metrics like the root mean square error and the mean absolute error ensure reliable detection.
[0085] The present invention offers multiple advantages that enables accurate early detection of network issues by identifying unusual patterns in network data, leading to enhanced quality of service (QoS) through prompt resolution of signal strength and call drop anomalies, while also delivering cost savings by reducing downtime, minimizing equipment failures, improving network performance, and optimizing resource utilization.
[0086] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.


REFERENCE NUMERALS

[0087] Environment- 100
[0088] User Equipment (UE)- 110
[0089] Server- 115
[0090] Network- 105
[0091] System -120
[0092] Processor- 205
[0093] Memory- 210
[0094] User interface- 215
[0095] Database – 220
[0096] Receiving unit - 225
[0097] Conversion unit - 230
[0098] Labelling unit - 235
[0099] Categorizing unit - 240
[00100] Training unit - 245
[00101] Evaluation unit - 250
[00102] Detection unit - 255
[00103] Transmitting unit - 260
[00104] Updating unit - 265
[00105] Generation Node B (gNodeB) - 305
[00106] probing unit - 310
[00107] Data consumers - 315
[00108] Processing hub interface - 320
[00109] Processing hub - 325
[00110] Data pre-processing - 330
[00111] Model training - 335
[00112] Anomaly detection module - 340
[00113] Alerting and response - 345
CLAIMS
We Claim:
1. A method (500) of monitoring a network (105), the method (500) comprising the steps of:
receiving, by one or more processors (205), data from one or more next Generation Node B (gNodeBs) (305) via a probing agent;
extracting, by the one or more processors (205), features from the received data;
labelling, by the one or more processors (205), the received data based on the features extracted from the received data;
categorizing, by the one or more processors (205), the labelled data into a training group and a testing group;
training, by the one or more processors (205), a training model utilizing the labelled data associated with the training group;
evaluating, by the one or more processors (205), performance of the training model utilizing the labelled data associated with the testing group; and
detecting, by the one or more processors (205), the one or more anomalies in the network based on the evaluation of the performance of the training model.

2. The method (500) as claimed in claim 1, comprising the step of:
updating, by the one or more processors (205), the labelled data associated with the training group and the testing group based on receipt of the data in real time, wherein upon updating the labelled data associated with the training group and the testing group, the one or more processors (205) utilizes the updated data for evaluating the performance of the training model and the detection of the one or more anomalies.

3. The method (500) as claimed in claim 1, comprising the step of transmitting, by the one or more processors (205), one or more alerts on detection of the one or more anomalies to a user interface (215) of a user equipment (110).

4. The method (500) as claimed in claim 1, wherein on receipt of the data from the gNodeBs (305), the method (500) comprises the step of converting, by the one or more processors (205), the received data into a standard format.

5. The method (500) as claimed in claim 1, wherein the features include at least call parameters, geographic coordinates, and network load metrics.

6. The method (500) as claimed in claim 1, wherein the performance of the training model utilizing the labelled data associated with the testing group is evaluated based on one or more metrics, wherein the one or more metrics comprises at least a root mean square error and a mean absolute error.

7. The method (500) as claimed in claim 1, wherein the one or more anomalies is at least, fluctuation in signal strength and increase in call drops.

8. A system (120) for monitoring a network (105), the system (120) comprising:
a receiving unit (225) configured to receive data from one or more next Generation Node B (gNodeBs) (305) via a probing agent;
an extraction unit (230) configured to extract features from the received data;
a labelling unit (235) configured to label the received data based on the features extracted from the received data;
a categorizing unit (240) configured to categorize the labelled data into a training group and a testing group;
a training unit (245) configured to train a training model utilizing the labelled data associated with the training group;
an evaluation unit (250) configured to evaluate performance of the training model utilizing the labelled data associated with the testing group; and
a detection unit (255) configured to detect the one or more anomalies in the network based on the evaluation of the performance of the training model.

9. The system (120) as claimed in claim 8, comprising:
an updating unit (265) configured to update the labelled data associated with the training group and the testing group based on receipt of the data in real time, wherein upon updating the labelled data associated with the training group and the testing group, the training unit (245), the evaluation unit (250), and the detection unit (255) utilize the updated data for evaluating the performance of the training model and the detection of the one or more anomalies.

10. The system (120) as claimed in claim 8, comprising a transmitting unit (260) configured to transmit one or more alerts on detection of the one or more anomalies to a user interface (215) of a user equipment (110).

11. The system (120) as claimed in claim 8, comprising a conversion unit (230) configured to convert, the received data into a standard format.

12. The system (120) as claimed in claim 8, wherein the features include at least call parameters, geographic coordinates, and network load metrics.

13. The system (120) as claimed in claim 8, wherein the performance of the training model utilizing the labelled data associated with the testing group is evaluated based on one or more metrics, wherein the one or more metrics comprises at least a root mean square error and a mean absolute error.

14. The system (120) as claimed in claim 8, wherein the one or more anomalies is at least, fluctuation in signal strength and increase in call drops.

Documents

Application Documents

# Name Date
1 202321067392-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf 2023-10-07
2 202321067392-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf 2023-10-07
3 202321067392-POWER OF AUTHORITY [07-10-2023(online)].pdf 2023-10-07
4 202321067392-FORM 1 [07-10-2023(online)].pdf 2023-10-07
5 202321067392-FIGURE OF ABSTRACT [07-10-2023(online)].pdf 2023-10-07
6 202321067392-DRAWINGS [07-10-2023(online)].pdf 2023-10-07
7 202321067392-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf 2023-10-07
8 202321067392-FORM-26 [27-11-2023(online)].pdf 2023-11-27
9 202321067392-Proof of Right [12-02-2024(online)].pdf 2024-02-12
10 202321067392-DRAWING [06-10-2024(online)].pdf 2024-10-06
11 202321067392-COMPLETE SPECIFICATION [06-10-2024(online)].pdf 2024-10-06
12 Abstract.jpg 2024-12-07
13 202321067392-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202321067392-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202321067392-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202321067392-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202321067392-FORM 3 [31-01-2025(online)].pdf 2025-01-31