
Geometrical Digital Twin (gDT) System and a Method for Generating Flight Route for Unmanned Aerial Vehicles (UAVs)

Abstract: The present disclosure relates to a geometrical digital twin (gDT) system and method for generating a flight route for Unmanned Aerial Vehicles (UAVs) within a geometric Digital Twin (gDT) created by a Neural Radiance Field (NeRF) for monitoring the structural health of a structure/building. The system includes a processor linked to a memory, executing instructions to capture images of a Region of Interest (ROI) from various viewpoints. The system uses Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques to determine poses of the captured images, subsequently generating a gDT using a neural network. The gDT is calibrated to align digital coordinates with real-world coordinates. The system creates a light field representation of the gDT for visualization from the calibrated images from different viewpoints, allowing the marking of checkpoints that define a UAV flight path for monitoring structural health, thereby enhancing the efficiency and accuracy of structural inspections.


Patent Information

Application #:
Filing Date: 04 November 2024
Publication Number: 06/2025
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Email:
Parent Application:

Applicants

IITI DRISHTI CPS Foundation
IIT Indore, Mhow, Indore, Indore - 453552, Madhya Pradesh, India.

Inventors

1. MAHAJAN, Aishwary
RL 410, ED-2, IIT Bhilai, Kutelabhata, Bhilai, Durg District, Chhattisgarh - 491002, India.
2. GEETHA, Ganesh Kolappan
410-B, ED-2, IIT Bhilai, Kutelabhata, Bhilai, Durg District, Chhattisgarh - 491002, India.

Specification

Description:

TECHNICAL FIELD
[0001] The present disclosure relates generally to the field of structural health monitoring devices. In particular, the present disclosure relates to a simple, compact, and efficient geometrical digital twin (gDT) system and a method for generating a flight path of an Unmanned Aerial Vehicle (UAV) to monitor structural health.

BACKGROUND
[0002] Structural Health Monitoring (SHM) involves the use of sensors and data analysis techniques to monitor the condition of structures. SHM aims to detect damage and assess the structural integrity of buildings, bridges, and other infrastructure. Traditional SHM methods utilize sensors such as piezoelectric wafer active sensors (PWAS) to collect data and analyze maintenance reports. Emerging methodologies in SHM include the use of computer vision, which employs defect detection algorithms on 2D images and 3D data like point clouds.
[0003] In the field of SHM, the primary goal is to ensure the safety and reliability of structures by detecting and addressing potential issues before they lead to significant damage or failure. This involves continuous monitoring and analysis of structural data to identify any signs of wear, damage, or other anomalies. The use of advanced technologies such as LiDAR and photogrammetry has enhanced the ability to generate detailed geometrical digital twins (gDT) of structures, which can be used for more accurate and comprehensive monitoring.
[0004] One of the main challenges in achieving effective SHM is the high cost and complexity of traditional monitoring systems. High-resolution LiDAR systems, for example, are expensive and require specialized equipment and expertise. Additionally, the data generated by these systems can be difficult to process and analyze, particularly for large-scale structures. Photogrammetry, while more cost-effective, also has limitations in terms of data quality and the need for multiple viewpoints to create accurate models.
[0005] It is known from prior art that traditional SHM methods rely heavily on physical sensors and manual inspections, which can be time-consuming and prone to human error. These methods often require significant resources and expertise to implement and maintain. Furthermore, the data collected by these methods may not always provide a complete picture of the structural condition, leading to potential gaps in monitoring and assessment.
[0006] Document US20160284221A1 describes a process for generating routes for unmanned aerial vehicles (UAVs) using computer or mobile devices. This process involves determining optimized geospatial routes between origin and destination locations, including handling obstacles and no-fly zones. However, this document does not address the specific needs of SHM or the generation of detailed geometrical digital twins for monitoring purposes.
[0007] Document US6711293B1 describes a method and apparatus for identifying scale-invariant features in an image to locate objects. This method involves performing mathematical matrix operations on captured images to achieve a good match between test and standard images. However, this document does not focus on the use of computer vision for SHM or the generation of 3D models for structural monitoring.
[0008] Document CN113276130A describes a method for generating an optimized path for paint-spraying UAVs based on point cloud data. This method involves slicing the point cloud to get a planar view of the mission path and determining UAV dynamics. However, this document does not address the broader applications of SHM or the use of neural radiance fields for generating detailed digital twins.
[0009] There is, therefore, a well-established need in the art to overcome the shortcomings of conventional structural health monitoring devices by providing a simple, compact, and efficient geometrical digital twin (gDT) system and a method for generating a flight path of an Unmanned Aerial Vehicle (UAV) to monitor structural health.

OBJECTS OF THE PRESENT DISCLOSURE
[0010] A general object of the present disclosure is to overcome the problems associated with existing structural health monitoring devices by providing a simple, compact, efficient, and cost-effective geometrical digital twin (gDT) system and a method for generating a flight path of an Unmanned Aerial Vehicle (UAV) to monitor structural health.
[0011] An object of the present disclosure is to provide a flight path for the UAV within the ROI for Structural Health Monitoring (SHM) without establishing any physical contact with the structure.
[0012] Another object of the present disclosure is to adaptively plan the UAV's flight path based on real-time data inputs or environmental changes.
[0013] Yet another object of the present disclosure is to update a geometrical Digital Twin (gDT) to reflect dynamic changes in a Region of Interest (ROI) for enhanced monitoring and inspection accuracy.
[0014] Yet another object of the present disclosure is to facilitate structural health monitoring by eliminating the need for repeated on-site physical presence and providing high-quality images along the camera ray at any point.

SUMMARY
[0015] Aspects of the present disclosure pertain to the field of structural health monitoring devices. In particular, the present disclosure relates to a simple, compact, and efficient geometrical digital twin (gDT) system and a method for generating a flight route for Unmanned Aerial Vehicles (UAVs).
[0016] According to an aspect, the proposed geometrical digital twin (gDT) system for generating a flight route for Unmanned Aerial Vehicles (UAVs), where the flight route within the geometric Digital Twin (gDT) is created by a Neural Radiance Field (NeRF), includes a processor. The processor is coupled to a memory. The memory stores instructions that are executable by the processor. The processor is configured to capture one or more images of a Region of Interest (ROI) from one or more viewpoints of a structure/building using an image processing unit of the UAV and determine one or more poses of the captured images using a Structure from Motion (SfM) technique and a Multi-View Stereo (MVS) technique of a first learning module. The processor is configured to generate a geometrical Digital Twin (gDT) of the ROI from the determined poses using a second learning module and calibrate the generated gDT to transform digital coordinates to real-world coordinates based on one or more predefined calibrations.
[0017] The processor is configured to create a light field representation from the calibrated gDT for visualization of the ROI and mark one or more checkpoints within the calibrated gDT using the created light field representation such that the one or more checkpoints define a flight path for the UAV within the ROI for monitoring the structural health of a structure/building.
[0018] In an embodiment, the one or more checkpoints may include a plurality of parameters. The plurality of parameters may include a start point and an end point defined by 3D Cartesian coordinates, one or more intermediate 3D coordinates between the start point and the end point, an orientation of the UAV, and an approach speed between two checkpoints.
[0019] In an embodiment, the processor may be configured to actuate the image processing unit of the UAV throughout the flight path for capturing photos and starting/stopping video recording to enable inspection of the structure/building.
[0020] In an embodiment, the flight path may include metadata for each checkpoint. The metadata may include the intended actions for the image processing unit.
[0021] In an embodiment, the processor may be configured to convert the 3D Cartesian coordinates into GPS coordinates and input the GPS coordinates to the UAV.
[0022] In an embodiment, the system includes a feedback loop configured for adjusting the parameters of the neural network based on the accuracy of the generated flight path during mission execution.
[0023] According to an aspect, the proposed method for generating a flight route for Unmanned Aerial Vehicles (UAVs) includes capturing one or more images of the Region of Interest (ROI) from multiple viewpoints of a structure/building using an image processing unit (106) of the UAV and determining poses using a Structure from Motion (SfM) technique and a Multi-View Stereo (MVS) technique of a first learning module. The method includes generating a geometrical Digital Twin (gDT) of the ROI from the determined poses using a second learning module and calibrating the generated gDT to transform digital coordinates to real-world coordinates using a processor based on one or more predefined calibrations.
[0024] The method includes creating a light field representation from the calibrated gDT for visualization of the ROI using the processor and marking one or more checkpoints within the calibrated gDT by the processor using the created light field representation. The method includes converting the one or more checkpoints into one or more GPS coordinates using the processor and transmitting the one or more GPS coordinates to the UAV using the processor.
[0025] Various objects, features, aspects, and advantages of the subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF DRAWINGS
[0026] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The diagrams are for illustration only, which thus is not a limitation of the present disclosure.
[0027] FIG. 1 illustrates a block diagram showing different components of the system and a method for generating a flight path of an Unmanned Aerial Vehicle (UAV) to monitor structural health, in accordance with embodiments of the present disclosure.
[0028] FIG. 2 illustrates a flow diagram explaining the steps involved in a method for generating the flight path of the Unmanned Aerial Vehicle (UAV) to monitor structural health, in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION
[0029] For the purpose of promoting an understanding of the principles of the present disclosure, reference will now be made to the various embodiments and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the present disclosure is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the present disclosure as illustrated therein being contemplated as would normally occur to one skilled in the art to which the present disclosure relates.
[0030] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the present disclosure and are not intended to be restrictive thereof.
[0031] Whether or not a certain feature or element was limited to being used only once, it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element does not preclude there being none of that feature or element, unless otherwise specified by limiting language including, but not limited to, “there needs to be one or more…” or “one or more elements is required.”
[0032] Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements of the present disclosure. Some embodiments have been described for the purpose of explaining one or more of the potential ways in which the specific features and/or elements of the proposed disclosure fulfil the requirements of uniqueness, utility, and non-obviousness.
[0033] Use of the phrases and/or terms including, but not limited to, “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment”, “furthermore embodiment”, “additional embodiment” or other variants thereof do not necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or in the context of more than one embodiment, or in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any features and/or elements described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.
[0034] Any and all details set forth herein are used in the context of some embodiments and therefore should not necessarily be taken as limiting factors to the proposed disclosure. The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by “comprises... a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
[0035] Embodiments explained herein relate to a simple, compact, and efficient geometrical digital twin (gDT) system and a method for generating a flight path of an Unmanned Aerial Vehicle (UAV) to monitor structural health.
[0036] According to an aspect, the disclosed geometrical digital twin (gDT) system and method for generating a flight path of an Unmanned Aerial Vehicle (UAV) to monitor structural health integrates Neural Radiance Field (NeRF) technology with Unmanned Aerial Vehicles (UAVs) for adaptive path planning and structural health monitoring. Unlike conventional methods that rely on human inspectors or expensive LiDAR systems, the geometrical digital twin (gDT) is generated from raw images captured by UAVs: the Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques of a first learning module (hereinafter interchangeably referred to as the “pose estimation module”) determine the image poses, and a second learning module (hereinafter interchangeably referred to as the “neural network”) then creates a light field representation/radiance field, resulting in a highly accurate and high-resolution digital representation of the Region of Interest (ROI). This digital twin is then calibrated to real-world coordinates, enabling precise and adaptive flight path planning for UAVs. The system's ability to mark checkpoints and convert them into GPS coordinates ensures accurate navigation and data collection. Additionally, the feedback loop for adjusting neural network parameters based on mission execution accuracy further enhances the system's performance. This combination of advanced computer vision, neural networks, and UAV technology provides a cost-effective, robust, and scalable solution for structural health monitoring, setting it apart from existing methods and filling a significant gap in the prior art.
[0037] Referring to FIG. 1, the disclosed geometrical digital twin (gDT) system and method for generating a flight route for Unmanned Aerial Vehicles (UAVs) for structural health monitoring leverage a Neural Radiance Field (NeRF) to create a geometrical digital twin (gDT) of a Region of Interest (ROI) and subsequently generate an adaptive flight path for the UAV. The system 100 includes a processor 102 coupled to a memory 104 that stores instructions executable by the processor 102.
[0038] The processor 102 is configured to capture one or more images of a Region of Interest (ROI) from one or more viewpoints of a structure/building using an image processing unit 106 of the UAV and determine one or more poses of the captured images using a Structure from Motion (SfM) technique and a Multi-View Stereo (MVS) technique of a first learning module 108 (hereinafter interchangeably referred to as the “pose estimation module”). The UAV captures raw images from different angles to cover the entire ROI comprehensively. The above techniques can enable the processor 102 to calculate the precise positions and orientations of the captured images, which are essential for accurate 3D reconstruction.
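By way of a non-limiting illustration, the pose-estimation step of paragraph [0038] can be realised with an off-the-shelf SfM pipeline. The sketch below assumes the open-source COLMAP toolchain accessed through its pycolmap bindings, with placeholder directory names; this particular library is not mandated by the present disclosure.

# Illustrative pose-estimation sketch assuming the COLMAP SfM pipeline via
# pycolmap; directory names are placeholders.
from pathlib import Path
import pycolmap

image_dir = Path("roi_images")              # raw UAV images of the ROI
work_dir = Path("sfm_workspace")
work_dir.mkdir(exist_ok=True)
database = work_dir / "database.db"

pycolmap.extract_features(database, image_dir)   # detect local image features
pycolmap.match_exhaustive(database)              # match features across viewpoints
maps = pycolmap.incremental_mapping(database, image_dir, work_dir)

# Every registered image in the sparse reconstruction carries the recovered
# camera pose (rotation and translation); the attribute exposing it differs
# between pycolmap versions, so only the image names are listed here.
for image in maps[0].images.values():
    print("registered view:", image.name)
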
[0039] In addition, the processor 102 is configured to generate a geometrical Digital Twin (gDT) of the ROI from the determined poses using a second learning module 110 (hereinafter interchangeably referred to as the “neural network”) and calibrate the generated gDT to transform digital coordinates to real-world coordinates based on one or more predefined calibrations. The neural network processes the poses and corresponding images to generate a continuous volumetric representation known as the radiance field, which forms the basis of the gDT, and the calibration ensures that the digital twin accurately represents the physical dimensions and locations of the ROI.
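The disclosure leaves the internal architecture of the second learning module 110 open. Purely as an assumed illustration, a radiance field in the NeRF sense can be modelled as a small multilayer perceptron that maps a 3D position and a viewing direction to a colour and a volume density, as sketched below in PyTorch (positional encoding and the training loop are omitted for brevity).

# Assumed, minimal stand-in for the second learning module 110: a tiny MLP
# radiance field mapping (position, viewing direction) -> (RGB colour, density).
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)          # volume density (sigma)
        self.colour = nn.Sequential(                 # view-dependent RGB
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        h = self.trunk(xyz)
        sigma = torch.relu(self.density(h))
        rgb = self.colour(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma

# Query one 3D sample point along a camera ray from one viewing direction.
model = TinyRadianceField()
rgb, sigma = model(torch.rand(1, 3), torch.rand(1, 3))
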
[0040] Further, the processor 102 is configured to create a light field representation (radiance field) from the calibrated gDT for visualization of the ROI and mark one or more checkpoints within the calibrated gDT using the created light field representation such that the one or more checkpoints define a flight path for the UAV within the ROI for monitoring the structural health of a structure/building. The radiance field provides high-quality images along the camera ray at any point, offering better views of the ROI compared to conventional methods, and the checkpoints guide the UAV along an optimal path for data collection and inspection.
[0041] In an embodiment, the one or more checkpoints can include a plurality of parameters, including a start point and end point defined by 3D Cartesian coordinates, one or more intermediate 3D coordinates between the start point and the end point, orientation of the UAV, and approach speed between two checkpoints. The above parameters can ensure precise navigation and data collection by the UAV.
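To make the checkpoint parameters of this embodiment concrete, they can be bundled into a simple data structure such as the one sketched below; the field names are illustrative assumptions, not terms taken from the specification.

# Illustrative container for the checkpoint parameters enumerated above;
# field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]        # 3D Cartesian coordinate in the gDT frame

@dataclass
class Checkpoint:
    start: Vec3                                               # start point (x, y, z)
    end: Vec3                                                 # end point (x, y, z)
    intermediates: List[Vec3] = field(default_factory=list)   # intermediate 3D coordinates
    orientation_deg: Vec3 = (0.0, 0.0, 0.0)                   # UAV orientation (yaw, pitch, roll)
    approach_speed: float = 1.0                               # speed towards the next checkpoint, m/s
    camera_action: str = "photo"                              # intended image processing unit action
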
[0042] The flight path can include metadata for each checkpoint. The metadata can include the intended actions for the image processing unit 106. The metadata can provide detailed instructions for the UAV's operations at each checkpoint, ensuring accurate and efficient data collection. The processor 102 can be configured to convert the 3D Cartesian coordinates into GPS coordinates and input the GPS coordinates to the UAV, allowing the UAV to navigate using GPS and ensuring accurate positioning and movement along the flight path.
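The specification does not fix how the 3D Cartesian coordinates are converted into GPS coordinates. One plausible reading, sketched below, treats the calibrated gDT frame as a local East-North-Up (ENU) frame anchored at a surveyed reference point and uses the pymap3d library for the geodetic conversion; the anchor coordinates are placeholder values.

# Assumed conversion of calibrated gDT checkpoints (interpreted as a local
# East-North-Up frame) into GPS coordinates; the ENU origin is a placeholder.
import pymap3d

LAT0, LON0, ALT0 = 22.5200, 75.9200, 300.0    # hypothetical ENU origin of the ROI

def checkpoint_to_gps(x: float, y: float, z: float):
    """Map a gDT-frame (east, north, up) point to (latitude, longitude, altitude)."""
    return pymap3d.enu2geodetic(x, y, z, LAT0, LON0, ALT0)

print(checkpoint_to_gps(10.0, 25.0, 12.0))    # e.g. a checkpoint 10 m east, 25 m north, 12 m up
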
[0043] The system 100 can include a feedback loop configured for adjusting the parameters of the neural network based on the accuracy of the generated flight path during mission execution. The feedback loop can ensure continuous improvement and optimization of the flight path, enhancing the system's overall performance.
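The mechanics of the feedback loop are not detailed in the specification. One plausible, assumed reading is sketched below: the flown trajectory is compared against the planned checkpoints, and a deviation above a threshold flags the radiance-field network for fine-tuning on the newly captured imagery; the threshold value is hypothetical.

# Assumed sketch of the feedback criterion: flag the neural network for
# fine-tuning when the flown trajectory deviates too far from the plan.
import numpy as np

def mean_deviation(planned, flown) -> float:
    """Mean Euclidean distance between corresponding planned and flown checkpoints."""
    planned, flown = np.asarray(planned, float), np.asarray(flown, float)
    return float(np.linalg.norm(planned - flown, axis=1).mean())

def needs_retraining(planned, flown, threshold_m: float = 0.5) -> bool:
    # 0.5 m is a hypothetical tolerance; the specification does not fix one.
    return mean_deviation(planned, flown) > threshold_m

# Example: a three-checkpoint plan versus the trajectory actually flown.
plan  = [(0, 0, 10), (5, 0, 10), (5, 5, 10)]
flown = [(0.2, 0.1, 9.8), (5.3, 0.4, 10.2), (5.1, 5.6, 10.4)]
print(needs_retraining(plan, flown))
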
[0044] In an embodiment, the processor 102 can be configured to actuate the image processing unit 106 of the UAV throughout the flight path for capturing photos and starting/stopping video recording to enable inspection of the structure/building. The above functionality can allow continuous monitoring and data collection during the UAV's flight.
[0045] Referring to FIG. 2, the disclosed method 200 for generating a flight route for Unmanned Aerial Vehicles (UAVs) includes a step 202 of capturing one or more images of the Region of Interest (ROI) from multiple viewpoints of a structure/building using an image processing unit 106 of the UAV and another step 204 of determining poses using a Structure from Motion (SfM) technique and a Multi-View Stereo (MVS) technique of a first learning module 108. Steps 202 and 204 enable the system 100 to collect raw image data from various angles to cover the entire ROI and to calculate the precise positions and orientations of the captured images, which are essential for accurate 3D reconstruction.
[0046] In addition, the method 200 includes a step 206 of generating a geometrical Digital Twin (gDT) of the ROI from the determined poses using a second learning module 110 and another step 208 of calibrating the generated gDT using a processor 102 to transform digital coordinates to real-world coordinates based on one or more predefined calibrations. In step 206, the neural network processes the poses and corresponding images to generate a continuous volumetric representation known as the radiance field, which forms the basis of the gDT, and in step 208, the system 100 ensures that the digital twin accurately represents the physical dimensions and locations of the ROI.
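The predefined calibrations of step 208 are likewise left open. A common way to realise such a digital-to-real-world alignment, assumed here purely for illustration, is a similarity transform (scale, rotation, translation) estimated from a few surveyed control points using the Umeyama method, as sketched below; the control-point values are placeholders.

# Assumed calibration sketch: fit a similarity transform (scale, rotation,
# translation) from gDT coordinates to surveyed real-world coordinates using
# the Umeyama method, as one possible realisation of step 208.
import numpy as np

def umeyama(src: np.ndarray, dst: np.ndarray):
    """Return (s, R, t) such that dst ≈ s * R @ src + t for paired control points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0                     # guard against a reflection solution
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / (src_c ** 2).sum(axis=1).mean()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Example with three surveyed control points (placeholder values).
gdt_pts   = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
world_pts = np.array([[10, 20, 5], [12, 20, 5], [10, 22, 5]], dtype=float)
s, R, t = umeyama(gdt_pts, world_pts)
print(s * (R @ gdt_pts.T).T + t)             # reproduces world_pts
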
[0047] The method 200 includes a step 210 of creating a light field representation from the calibrated gDT for visualization of the ROI using the processor 102 and another step 212 of marking one or more checkpoints within the calibrated gDT with the processor 102 using the created light field representation. The radiance field provides high-quality images along the camera ray at any point, offering better views of the ROI compared to conventional methods.
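The images along the camera ray follow from volume rendering of the radiance field. The snippet below sketches the standard NeRF compositing rule for a single ray; random colours and densities stand in for queries to the network so that the example stays self-contained.

# Volume rendering of one camera ray: composite per-sample colours and
# densities into a pixel colour using the standard NeRF quadrature rule.
import torch

def render_ray(rgb: torch.Tensor, sigma: torch.Tensor, deltas: torch.Tensor) -> torch.Tensor:
    """rgb: (N, 3) sample colours, sigma: (N,) densities, deltas: (N,) sample spacings."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                                 # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans                                                  # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)                               # composited pixel colour

n = 64                                                                       # samples along the ray
pixel = render_ray(torch.rand(n, 3), torch.rand(n), torch.full((n,), 0.05))
print(pixel)
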
[0048] Further, the method 200 includes a step 214 of converting the one or more checkpoints into one or more GPS coordinates using the processor 102. The checkpoints guide the UAV along an optimal path for data collection and inspection, and the conversion enables the UAV to navigate using GPS, ensuring accurate positioning and movement along the flight path. Furthermore, the method 200 includes a step 216 of transmitting the one or more GPS coordinates to the UAV using the processor 102. The step 216 can enable the system 100 to send the GPS coordinates to the UAV, enabling it to follow the predefined flight path for structural health monitoring.
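Step 216 is agnostic to the link used to transmit the coordinates. As one assumed possibility, the waypoints could be uploaded over a MAVLink telemetry link using pymavlink, as sketched below; the connection string is a placeholder, and a complete uploader must additionally follow the MAVLink mission handshake (one MISSION_REQUEST per item and a closing MISSION_ACK), which is omitted here.

# Assumed sketch of transmitting the GPS checkpoints to the UAV as a MAVLink
# waypoint mission with pymavlink. The endpoint is a placeholder, and the full
# mission-upload handshake (MISSION_REQUEST / MISSION_ACK) is omitted.
from pymavlink import mavutil

waypoints = [(22.5201, 75.9203, 30.0), (22.5204, 75.9207, 30.0)]   # (lat, lon, alt)

uav = mavutil.mavlink_connection("udp:127.0.0.1:14550")            # placeholder endpoint
uav.wait_heartbeat()
uav.mav.mission_count_send(uav.target_system, uav.target_component, len(waypoints))
for seq, (lat, lon, alt) in enumerate(waypoints):
    uav.mav.mission_item_int_send(
        uav.target_system, uav.target_component, seq,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
        0, 1,                     # current, autocontinue
        0, 0, 0, 0,               # hold time, acceptance radius, pass radius, yaw
        int(lat * 1e7), int(lon * 1e7), alt)
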
[0049] Thus, the above disclosed system 100 and method 200 offer several advantages over existing methods, including cost-effectiveness, higher data-collection robustness, and the ability to produce high-resolution gDTs of even complex and high-rise ROIs. The system 100 eliminates the need for repeated on-site physical presence, speeds up structural health monitoring, and provides better views of the ROI compared to conventional vision-based methods. Additionally, the system 100's use of cameras is significantly more cost-effective than expensive LiDARs, and it offers greater accessibility for high-rise structures.
[0050] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE INVENTION
[0051] The present disclosure provides a simple, compact, efficient, and cost-effective system and a method for Unmanned Aerial Vehicles (UAVs) to monitor structural health.
[0052] The present disclosure provides a flight path for the UAV within the ROI for Structural Health Monitoring (SHM) without establishing any physical contact with the structure.
[0053] The present disclosure adaptively plans the UAV's flight path based on current/present data inputs or environmental changes.
[0054] The present disclosure updates a geometrical Digital Twin (gDT) to reflect dynamic changes in a Region of Interest (ROI) for enhanced monitoring and inspection accuracy.
[0055] The present disclosure facilitates structural health monitoring by eliminating the need for repeated on-site physical presence and providing high-quality images along the camera ray at any point.
Claims:

1. A geometrical digital twin (gDT) system and a method for generating a flight route for Unmanned Aerial Vehicles (UAVs), wherein the flight route within a geometric Digital Twin (gDT) is created by a Neural Radiance Field (NeRF), the system (100) comprising:
a processor (102) coupled to a memory (104) storing instructions that are executable by the processor (102), wherein the processor (102) is configured to:
capture one or more images of a Region of Interest (ROI) from one or more viewpoints of a structure/building using an image processing unit (106) of the UAV;
determine one or more poses of the captured images using a Structure from Motion (SfM) technique and a Multi-View Stereo (MVS) technique of a first learning module (108) (pose estimation module);
generate a geometrical Digital Twin (gDT) of the ROI from the determined poses using a second learning module (110) (neural network);
calibrate the generated gDT to transform digital coordinates to real-world coordinates based on one or more predefined calibrations;
create a light field representation/radiance field from the calibrated gDT for visualization of the ROI; and
mark one or more checkpoints within the calibrated gDT using the created light field representation such that the one or more checkpoints define a flight path for the UAV within the ROI for monitoring the structural health of a structure/building.
2. The system (100) as claimed in claim 1, wherein the one or more checkpoints comprise a plurality of parameters, wherein the plurality of parameters comprises a start point and an end point defined by 3D Cartesian coordinates, one or more intermediate 3D coordinates between the start point and the end point, an orientation of the UAV, and an approach speed between two checkpoints.
3. The system (100) as claimed in claim 1, wherein the processor is configured to actuate the image processing unit of the UAV throughout the flight path for capturing photos and starting/stopping video recording to enable inspection of the structure/building.
4. The system (100) as claimed in claim 3, wherein the flight path comprises metadata for each checkpoint, wherein the metadata comprises the intended actions for the image processing unit (106).
5. The system (100) as claimed in claim 1, wherein the processor (102) is configured to convert the 3D Cartesian coordinates into GPS coordinates and input the GPS coordinates to the UAV.
6. The system (100) as claimed in claim 1, wherein the system (100) comprises a feedback loop configured for adjusting the parameters of the neural network based on the accuracy of the generated flight path during mission execution.
7. A method (200) for generating a flight route for Unmanned Aerial Vehicles (UAVs), the method (200) comprising:
capturing (202), using an image processing unit (106) of the UAV, one or more images of the Region of Interest (ROI) from multiple viewpoints of a structure/building;
determining (204) poses, using a Structure from Motion (SfM) technique and a Multi-View Stereo (MVS) technique of a first learning module (108) (pose estimation module);
generating (206), using a second learning module (110) (neural network), a geometrical Digital Twin (gDT) of the ROI from the determined poses;
calibrating (208), using a processor (102), the generated gDT to transform digital coordinates to real-world coordinates based on one or more predefined calibrations;
creating (210), using the processor (102), a light field representation from the calibrated gDT for visualization of the ROI;
marking (212), using the processor (102), one or more checkpoints within the calibrated gDT using the created light field representation;
converting (214), using the processor (102), one or more checkpoints into one or more GPS coordinates; and
transmitting (216), using the processor (102), the one or more GPS coordinates to the UAV.

Documents

Application Documents

# Name Date
1 202421084181-STATEMENT OF UNDERTAKING (FORM 3) [04-11-2024(online)].pdf 2024-11-04
2 202421084181-FORM FOR SMALL ENTITY(FORM-28) [04-11-2024(online)].pdf 2024-11-04
3 202421084181-FORM FOR SMALL ENTITY [04-11-2024(online)].pdf 2024-11-04
4 202421084181-FORM 1 [04-11-2024(online)].pdf 2024-11-04
5 202421084181-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [04-11-2024(online)].pdf 2024-11-04
6 202421084181-EVIDENCE FOR REGISTRATION UNDER SSI [04-11-2024(online)].pdf 2024-11-04
7 202421084181-DRAWINGS [04-11-2024(online)].pdf 2024-11-04
8 202421084181-DECLARATION OF INVENTORSHIP (FORM 5) [04-11-2024(online)].pdf 2024-11-04
9 202421084181-COMPLETE SPECIFICATION [04-11-2024(online)].pdf 2024-11-04
10 Abstract1.jpg 2024-12-12
11 202421084181-FORM-9 [10-01-2025(online)].pdf 2025-01-10
12 202421084181-FORM FOR SMALL ENTITY [10-01-2025(online)].pdf 2025-01-10
13 202421084181-FORM 18 [10-01-2025(online)].pdf 2025-01-10
14 202421084181-EVIDENCE FOR REGISTRATION UNDER SSI [10-01-2025(online)].pdf 2025-01-10
15 202421084181-FORM-26 [11-01-2025(online)].pdf 2025-01-11
16 202421084181-Proof of Right [22-04-2025(online)].pdf 2025-04-22