
Device And Method For Drone Analytics Platform

Abstract: The present disclosure relates to a device (100) and method (200) for a drone analytics platform that enables real-time and batch processing of aerial data captured by unmanned aerial vehicles (UAVs). The device (100) comprises a data ingestion module (10) with a live stream API (11), a data validation module (20), a data segregation module (30), and a data interface layer (40) with processing engines (41, 42, 43). An AI engine (50) executes AI models (51) to generate analytical outputs (52), visualized via an analytics interface (60). A cloud interface (70) stores data and supports model evaluation using a comparison engine (71). A human validation interface (80) with feedback (81) enables continuous model improvement through an RLHF loop (82). An access control module (90) enforces secure, tiered access for users (91), admins (92), and super admins (93).


Patent Information

Application #: 202441046318
Filing Date: 15 June 2024
Publication Number: 25/2025
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:

Applicants

AKIN ANALYTICS SOLUTIONS PRIVATE LIMITED
AKIN ANALYTICS, THUB 2.O, INORBIT MALL RD, VITTAL RAO NAGAR, MADHAPUR, HYDERABAD, TELANGANA - 500081, INDIA

Inventors

1. PULAPARTHI JANAKI
F.NO. 505, PNR HIGHNEST APARTMENT, HYDERNAGAR, KUKATPALLY, TELANGANA – 500085, INDIA.

Specification

Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, known details are not described in order to avoid obscuring the description.

References to "one embodiment" or "an embodiment" in the present disclosure can be references to the same embodiment or any embodiment, and such references mean at least one of the embodiments.

Reference to "one embodiment", "an embodiment", "one aspect", "some aspects", or "an aspect" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.

Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.

Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims or can be learned by the practice of the principles set forth herein.
As mentioned above, there is a need for an integrated, automated, and scalable solution that overcomes the above-mentioned limitations of conventional aerial
data analytics systems.
Referring to Figure 1, according to an aspect of the present disclosure, a device (100) for a drone analytics platform is illustrated. The device (100) represents the core hardware and software architecture that orchestrates the collection, processing, analysis, validation, visualization, and secure access of drone-captured data. The device (100) comprises a data ingestion module (10) designed to accept real-time streams from one or more unmanned aerial vehicles (UAVs). These UAVs may include rotary-wing, fixed-wing, or hybrid drone systems capable of transmitting one or more data streams via a wireless or cellular network to a ground-based system. The data ingestion module (10) is interfaced with a live stream API (11), which serves as the communication protocol layer to manage and regulate incoming streams. This API (11) may support both synchronous and asynchronous transmissions, enabling real-time monitoring or post-mission data uploading based on use-case requirements.
In some aspects of the present disclosure, the API (11) may include support for HTTP/2, MQTT, or proprietary streaming frameworks for low-latency performance.
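As a non-limiting illustration, the sketch below shows one way such a live stream ingestion endpoint could be realized in Python using only the standard library. The TCP transport, the length-prefixed framing, and the queue hand-off to the validation module (20) are assumptions made for the example and are not prescribed by the disclosure.

```python
# Minimal sketch of a live stream ingestion endpoint (modules 10/11).
# Framing assumed: 4-byte big-endian length, JSON metadata line, raw payload.
import asyncio
import json

INGEST_QUEUE: asyncio.Queue = asyncio.Queue(maxsize=1024)

async def handle_uav_stream(reader, writer):
    """Accept length-prefixed frames from a UAV and enqueue them for validation."""
    try:
        while True:
            header = await reader.readexactly(4)      # 4-byte payload length
            length = int.from_bytes(header, "big")
            if length == 0:                           # end-of-mission marker
                break
            raw = await reader.readexactly(length)
            meta, _, payload = raw.partition(b"\n")   # JSON metadata, then payload
            frame = {"meta": json.loads(meta), "payload": payload}
            await INGEST_QUEUE.put(frame)             # hand off to validation module (20)
    except asyncio.IncompleteReadError:
        pass                                          # connection dropped mid-frame
    finally:
        writer.close()

async def main():
    server = await asyncio.start_server(handle_uav_stream, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```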
Once data is received, it is transferred to a data validation module (20), which may implement rule-based filters and AI-based data quality assurance submodules. This module (20) performs quality checks, including noise detection, signal clarity, completeness, and metadata integrity. For example, if image frames arrive corrupted or have missing geo-tags, the module (20) flags them as invalid and triggers alerts. The valid data is then passed forward, while invalid datasets are logged, reported, or discarded based on system policy. This module (20) may also include temporal consistency checks or redundancy elimination logic to improve data throughput.
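A minimal sketch of the rule-based portion of such a validation step is given below; the metadata field names (geo_tag, captured_at, signal_to_noise) and the thresholds are illustrative assumptions only, and an AI-based quality submodule could sit alongside these rules.

```python
# Illustrative rule-based checks for the data validation module (20).
from datetime import datetime, timedelta

REQUIRED_META = ("uav_id", "geo_tag", "captured_at")
MIN_SNR_DB = 10.0          # example signal-clarity threshold

def validate_frame(frame: dict) -> tuple[bool, list[str]]:
    """Return (is_valid, reasons); invalid frames trigger alerts upstream."""
    meta, reasons = frame.get("meta", {}), []
    for key in REQUIRED_META:                                    # metadata integrity
        if key not in meta:
            reasons.append(f"missing metadata: {key}")
    if not frame.get("payload"):                                 # completeness
        reasons.append("empty payload")
    if meta.get("signal_to_noise", MIN_SNR_DB) < MIN_SNR_DB:     # noise / clarity
        reasons.append("low signal-to-noise ratio")
    ts = meta.get("captured_at")
    if ts and datetime.fromisoformat(ts) < datetime.now() - timedelta(days=1):
        reasons.append("stale frame (temporal consistency)")     # temporal check
    return (not reasons, reasons)
```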
The validated datasets are subsequently routed to a data segregation module (30), which may be configured to classify inputs into three categories: image data, video data, and 3D data streams. The module (30) tags and routes the data dynamically based on embedded file signatures, metadata, or streaming header formats. The data segregation module (30) can also detect multi-modal payloads and split them accordingly; for instance, a synchronized RGB and LiDAR stream would be bifurcated into image and 3D data types. This module (30) may also apply preliminary compression or transformation protocols to prepare data for further processing.
The segregated data is processed within a data interface layer (40), which includes three independent engines such as an image processing engine (41), a video processing engine (42), and a 3D data processing engine (43). The image processing engine (41) handles still-frame imagery and performs operations such as image enhancement, geo-rectification, and object segmentation. The video processing engine (42) applies temporal frame stitching, motion tracking, and anomaly detection algorithms suitable for continuous video feeds. The 3D data processing engine (43) interprets volumetric or point cloud data, such as from LiDAR, stereo vision, or photogrammetry sources. Each of these engines (41, 42, 43) converts raw input into structured formats interpretable by machine learning models, such as feature vectors, normalized tensors, or graph-based representations.
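A simplified routing sketch for this layer is shown below; the engine internals are placeholders that only hint at the conversions described above, and numpy is assumed to be available.

```python
# Sketch of routing within the data interface layer (40): each engine turns a
# raw frame into a structured representation (e.g. a normalized tensor).
import numpy as np

def image_engine(frame):    # engine (41): still-frame imagery -> normalized array
    arr = np.frombuffer(frame["payload"], dtype=np.uint8)
    return arr.astype(np.float32) / 255.0

def video_engine(frame):    # engine (42): frame sequence -> temporal features
    return {"frames": frame["payload"], "fps": frame["meta"].get("fps", 30)}

def three_d_engine(frame):  # engine (43): point cloud bytes -> Nx3 array
    pts = np.frombuffer(frame["payload"], dtype=np.float32)
    return pts.reshape(-1, 3)

ENGINES = {"image": image_engine, "video": video_engine, "3d": three_d_engine}

def process(frame, modality):
    """Dispatch a segregated frame to the matching processing engine."""
    return ENGINES[modality](frame)
```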
Once pre-processed, the data is transferred to an AI engine (50), which executes one or more pre-trained AI models (51). These AI models (51) may include convolutional neural networks (CNNs) for object detection, transformers for spatial reasoning, or unsupervised learning algorithms for anomaly detection. The engine (50) supports batch inference to process large volumes of data in parallel, leveraging hardware accelerators such as GPUs, TPUs, or FPGA-based inference engines. The analytical outputs (52) generated by these models (51) include detections, classifications, annotations, and confidence metrics. These outputs (52) can also be tuned in alternate embodiments using task-specific model ensembles or federated learning models trained on distributed datasets.
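As a non-limiting illustration, batch inference over pre-processed tensors might be wrapped as follows, assuming a PyTorch classification model; the model, batch size, and output fields are placeholders rather than elements of the disclosure.

```python
# Minimal batch-inference wrapper for the AI engine (50).
import torch

def batch_infer(model, tensors, batch_size=32, device="cuda"):
    """Run pre-processed inputs through a pre-trained model (51) in batches
    and return analytical outputs (52) with confidence metrics."""
    model.eval().to(device)
    outputs = []
    with torch.no_grad():
        for i in range(0, len(tensors), batch_size):
            batch = torch.stack(tensors[i:i + batch_size]).to(device)
            logits = model(batch)
            conf, labels = torch.softmax(logits, dim=1).max(dim=1)
            outputs.extend(
                {"label": int(l), "confidence": float(c)}
                for l, c in zip(labels.cpu(), conf.cpu())
            )
    return outputs
```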
The analytical outputs (52) are routed to an analytics interface (60), which visualizes results through customizable dashboards. These dashboards can display heat maps of anomaly density, geotagged overlays on digital maps, and dynamic 3D reconstructions for terrain analysis or structural inspection. The interface (60) supports user-interactive filtering, zooming, and time-series playback, facilitating insight derivation in real time or on archived datasets. The analytics interface (60) may be implemented as a web-based dashboard, mobile application, or command center display, depending on operational requirements.
The entire data flow and outputs are centrally stored and coordinated through a cloud interface module (70), which provides secure and scalable storage for validated data, model outputs, and operational logs. This module (70) supports integration with commercial cloud services like AWS, Azure, or private cloud infrastructures using Kubernetes, OpenStack, or edge-compute platforms. The cloud interface (70) also hosts comparison engines (71), which evaluate AI model outputs (52) against user-validated ground truth to benchmark performance and detect drift. These comparison engines generate performance metrics that feed into the AI pipeline for monitoring and auditing.
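A minimal sketch of such a comparison step is given below; the baseline accuracy and drift threshold are assumed operational parameters, not values taken from the disclosure.

```python
# Sketch of a comparison engine (71): benchmark model outputs (52) against
# human-validated ground truth and flag drift.
def compare(predictions, ground_truth, drift_threshold=0.10, baseline_acc=0.90):
    """Return accuracy over paired label lists and a drift flag."""
    matched = sum(p == g for p, g in zip(predictions, ground_truth))
    accuracy = matched / max(len(ground_truth), 1)
    drift_detected = (baseline_acc - accuracy) > drift_threshold
    return {"accuracy": accuracy, "drift": drift_detected}
```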
To continuously improve the accuracy of the AI models (51), the device (100) includes a human validation interface (80), through which domain experts can review analytical outputs and submit corrections or confirmations as user feedback (81). This feedback (81) is collected in a structured format, such as bounding boxes, labels, or flags, and relayed to the reinforcement learning with human feedback (RLHF) loop (82). The RLHF loop (82) uses this feedback (81) to fine-tune the model weights through reward mechanisms, backpropagation, or model retraining processes.

In some aspects of the present disclosure, the RLHF loop (82) may be augmented with active learning or uncertainty sampling to prioritize the most informative feedback.
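As a non-limiting illustration, the structured feedback record (81) and an uncertainty-based review queue might be sketched as follows; the field names and the selection rule are assumptions for the example.

```python
# Sketch of structured user feedback (81) consumed by the RLHF loop (82),
# plus a simple uncertainty-sampling selector for the validation interface (80).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Feedback:
    output_id: str                    # which analytical output (52) is reviewed
    verdict: str                      # "confirm" or "correct"
    corrected_label: Optional[str] = None
    bbox: Optional[Tuple[float, float, float, float]] = None
    model_confidence: float = 1.0

def select_for_review(outputs, limit=50):
    """Prioritize the least confident outputs for human review first."""
    return sorted(outputs, key=lambda o: o["confidence"])[:limit]
```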
To ensure secure access and operational transparency, the device (100) is governed by an access control module (90). This module (90) enforces multi-factor authentication and implements tiered permissions such as a user module (91), an admin module (92), and a super admin module (93). The user module (91) allows basic interaction such as data uploads, report viewing, and annotation. The admin module (92) permits user management, dataset labeling, and configuration of analytics parameters. The super admin module (93) possesses elevated privileges to manage system-wide settings, security protocols, and API access. These modules (91, 92, and 93) may further be integrated with enterprise identity management systems such as LDAP, SSO, or OAuth-based federated login services.
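A minimal sketch of such a tiered permission check is given below; the permission strings and in-memory role table are illustrative stand-ins for an enterprise identity backend.

```python
# Sketch of tier-based permission checks for the access control module (90).
ROLE_PERMISSIONS = {
    "user":        {"upload_data", "view_reports", "annotate"},               # (91)
    "admin":       {"upload_data", "view_reports", "annotate",
                    "manage_users", "label_datasets", "configure_analytics"}, # (92)
    "super_admin": {"*"},                                                     # (93)
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the requested action."""
    perms = ROLE_PERMISSIONS.get(role, set())
    return "*" in perms or action in perms
```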
The device further includes an alert management module (AMM), configured to detect and notify stakeholders of anomalies identified by the AI engine (50). The AMM may trigger alerts or notifications via email, SMS, dashboard pop-ups, or webhook integrations to external systems. The alerts can be prioritized using severity scores computed from model confidence levels, spatial proximity to restricted areas, or deviation thresholds set by operational guidelines.
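As a non-limiting illustration, a severity score combining these factors could be computed as follows; the weights, distance falloff, and channel thresholds are assumptions for the example.

```python
# Sketch of severity scoring and channel routing in the alert management module (AMM).
def severity_score(confidence, distance_to_restricted_m, deviation_ratio,
                   w_conf=0.5, w_prox=0.3, w_dev=0.2):
    """Combine model confidence, proximity to restricted areas, and threshold
    deviation into a 0-1 severity score."""
    proximity = max(0.0, 1.0 - distance_to_restricted_m / 500.0)   # 500 m falloff
    return (w_conf * confidence
            + w_prox * proximity
            + w_dev * min(deviation_ratio, 1.0))

def route_alert(score):
    """Map a severity score to a notification channel."""
    if score > 0.8:
        return "sms"          # immediate escalation
    if score > 0.5:
        return "email"
    return "dashboard"
```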
Figure 2 illustrates an architectural representation of the device (100), where the flow of data and processing stages are interconnected from data ingestion (10) through to visualization (60), cloud interface module (70), human validation interface (80), and access control (90). Each block operates in a modular and interoperable fashion to support scalability, failover, and cross-industry adaptability. The architecture may also include auxiliary subsystems like logging, version control, and API gateways.
In operation, the drone analytics platform device (100) functions as an end-to-end system for receiving, processing, analyzing, and visualizing drone-captured data in real time or batch mode. Initially, one or more unmanned aerial vehicles (UAVs) transmit data, such as images, videos, or 3D scans, via wireless networks to the device's data ingestion module (10). The live stream API (11) ensures seamless communication by supporting both synchronous and asynchronous feeds, enabling real-time monitoring or post-flight data upload depending on operational requirements.
The incoming data is then passed to the data validation module (20), where it undergoes multiple levels of quality and compliance checks. These include rule-based verification (such as format and metadata completeness) and AI-based analysis (to detect noise, corruption, or redundancy). Valid data is forwarded to the next stage, while invalid data triggers alerts for operator review or automatic logging.
Once validated, the data is routed to the data segregation module (30), which classifies the content into distinct streams, namely image, video, or 3D data. This classification enables modality-specific processing within the data interface layer (40), comprising an image processing engine (41), a video processing engine (42), and a 3D data processing engine (43). Each engine transforms the raw input into structured feature representations optimized for AI analysis, such as tensors for images, temporal feature maps for videos, or point clouds for 3D scans.
The AI engine (50) receives the processed data and executes one or more pre-trained AI models (51) in a batch inference mode, generating analytical outputs (52) such as object detections, anomaly flags, classifications, or measurements. These outputs are then visualized through the analytics interface (60), which provides configurable dashboards, geotagged overlays, heatmaps, and 3D reconstructions, allowing end-users to derive actionable insights with ease.
Simultaneously, the alert management module (AMM) continuously monitors the outputs (52) to detect any deviations or anomalies based on predefined rules or AI-generated thresholds. Upon detection, the AMM issues real-time notifications via integrated communication channels. All analytical results and raw/processed data are stored in the cloud interface module (70), which also houses a comparison engine (71) for evaluating AI model performance against human-validated data.
To further refine AI accuracy, the human validation interface (80) allows users to provide direct feedback (81) on the device's predictions. This feedback (81) is processed within a reinforcement learning with human feedback (RLHF) loop (82), enabling continuous model improvement through fine-tuning and adaptive learning strategies.
Finally, device (100) access and operations are governed by the access control module (90), which enforces tier-based permissions via distinct user roles: user (91) for basic interaction and data viewing, admin (92) for managing operations and configurations, and super admin (93) for complete oversight and control of the device (100). This modular and cyclical workflow ensures secure, scalable, and intelligent drone-based analytics tailored to diverse real-world applications.
Figure 3 illustrates a method (200) for a drone analytics platform using the device (100). The method (200) includes the following steps.
At step (201), receiving, via a live stream API (11), at least one real-time input from one or more unmanned aerial vehicles (UAVs). The incoming data may include image files, video streams, LiDAR-based 3D point clouds, or other sensor-derived inputs transmitted over wireless communication channels in either synchronous or asynchronous modes.
At step (202), validating, via a data validation module (20), the quality and compliance of the received data. The module (20) applies rule-based and AI-based filters to detect incomplete, corrupt, or noisy datasets. Invalid data is flagged and triggers alerts, while valid data is passed on for further processing.
At step (203), classifying, via a data segregation module (30), the validated data into at least image data, video data, and 3D data streams. The segregation is dynamically performed based on file metadata, headers, or embedded identifiers, and prepares the data for modality-specific interpretation.
At step (204), processing, via processing engines (41, 42, 43) of a data interface layer (40), the classified data for structured interpretation suitable for AI evaluation. This includes feature extraction, format normalization, and transformation of raw data into machine-readable representations such as tensors or graph structures.
At step (205), analyzing, via an AI engine (50), the processed data using pre-trained AI models (51) in batch inference mode. The models execute tasks such as object detection, anomaly classification, semantic segmentation, or pattern recognition, generating corresponding analytical outputs (52) with associated confidence metrics.
At step (206), visualizing, via an analytics interface (60), the analytical outputs (52) in the form of interactive dashboards, charts, geospatial overlays, and 3D reconstructions. Users can filter, navigate, and interact with the data to gain actionable insights.
At step (207), triggering, via an alert management module (AMM), one or more real-time alerts or notifications based on the anomalies, deviations, or threshold violations detected in the analytical outputs (52). Alerts can be sent via email, dashboard, or system integration protocols.
At step (208), storing, via a cloud interface (70), at least one of the validated data,
processed results, and intermediate analytics. The cloud interface (70) also
supports access to a comparison engine (71) for benchmarking model
performance and archiving for future use.
At step (209), receiving, via a human validation interface (80), user feedback (81) for confirming or correcting AI predictions. The feedback (81) is collected through a structured review interface and is used to ground-truth the model predictions.
At step (210), retraining or fine-tuning, via a reinforcement learning with human
feedback (RLHF) loop (82), the AI models (51) using the validated feedback. This
improves model accuracy and adaptability over time by incorporating real-world
insights into model parameters.
In an exemplary embodiment, the device (100) may be deployed for agricultural field monitoring, where a fleet of drones equipped with multispectral and RGB cameras captures aerial data over a large crop field. The live stream API (11) in the data ingestion module (10) continuously receives this real-time feed and forwards it to the data validation module (20). The device filters out overexposed or blurred images due to wind or motion. Once validated, the data segregation module (30) classifies the input as image data and routes it to the image processing engine (41), which enhances contrast and identifies crop boundaries. The AI engine (50), running a crop health prediction model (51), processes the images and outputs an analytical report (52) showing early signs of nutrient deficiency and pest infestation. This is visualized through the analytics interface (60), where a color-coded heatmap overlay highlights areas of concern. Alerts are generated through the alert management module (AMM), notifying farm managers to take targeted corrective actions. User feedback (81) confirming the AI's accuracy is submitted via the human validation interface (80) and used in the RLHF loop (82) to improve future predictions.
In another exemplary embodiment, the device (100) is used for highway infrastructure inspection. UAVs perform routine flyovers of a highway corridor, collecting high-resolution videos and 3D data of bridges, culverts, and road surfaces. This data is streamed into the system using the live stream API (11), validated for consistency and GPS metadata by the data validation module (20), and segregated into video and 3D streams via the data segregation module (30). The video data is processed through the video processing engine (42) for frame-by-frame defect detection, while the 3D point cloud data is analyzed by the 3D processing engine (43) for structural deformities such as cracks, dips, or misalignments. The AI engine (50) applies defect classification models (51) and generates analytical outputs (52) identifying specific locations and severity ratings. These results are displayed in a dashboard on the analytics interface (60), with a 3D visualization of the affected areas. The cloud interface module (70) stores this data for audit trails and integrates with a comparison engine (71) to benchmark current defects against previous inspections. Engineers provide corrective or confirmatory feedback (81) through the human validation interface (80), enhancing future defect identification through the RLHF loop (82).
In another exemplary embodiment, the device (100) is implemented in a disaster response scenario, such as after a major flood or earthquake. Emergency response drones collect real-time imagery and thermal data from affected zones and stream this data to the data ingestion module (10). The system validates incoming feeds for resolution and GPS continuity, segregates the input via module (30), and routes them accordingly: thermal imagery to the image processing engine (41) and aerial video to the video processing engine (42). The AI engine (50) runs victim localization and damage assessment models (51), generating outputs (52) that identify stranded individuals and high-risk areas such as collapsed buildings or submerged roads. Alerts generated by the AMM trigger immediate notifications to field teams and decision-makers. The analytics interface (60) displays live maps with priority overlays for rescue teams. Real-time updates and manual annotations from responders are captured via the human validation interface (80), enhancing future emergency response capabilities through the RLHF loop (82). Access to such critical dashboards is tier-controlled by the access control module (90), ensuring that different agencies can coordinate without unauthorized data leakage.
In another exemplary embodiment, the device (100) is employed in a construction progress monitoring use case. UAVs fly pre-programmed routes daily or weekly across large construction sites, capturing time-stamped video and 3D photogrammetry data. The ingestion module (10) receives and streams the data through the live stream API (11). After validation by module (20) and segregation by module (30), 3D data is processed by the 3D engine (43) to generate up-to-date site models. Simultaneously, video data is analyzed by the video processing engine (42) for machinery and personnel activity detection. The AI engine (50) compares current construction progress to BIM (Building Information Modeling) plans using AI models (51), and outputs (52) include progress reports and detected deviations or delays. These insights are presented on the analytics interface (60), and alerts through the AMM inform project managers of risks such as overstaffing, unsafe zones, or idle machinery. Supervisors use the human validation interface (80) to fine-tune detection accuracy, and role-specific access to reports is managed via the access control module (90).
Each of the above embodiments demonstrates how the modular, intelligent, and secure architecture of the device (100) enables effective deployment across a variety of real-world domains, including agriculture, infrastructure, disaster relief, and construction. The flexibility of components such as the AI engine (50), cloud interface (70), and access control module (90) allows tailoring the system to meet diverse operational requirements while ensuring continuous performance improvement through human feedback.
The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described can be directed to various combinations and sub-combinations of the disclosed features and/or combinations and sub-combinations of the several further features disclosed above.

In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.

CLAIMS:

1. A device (100) for a drone analytics platform, comprising:
a data ingestion module (10) configured to receive at least one real-time input from one or more unmanned aerial vehicles (UAVs) through a live stream API (11);
a data validation module (20) configured to filter one or more incoming data for quality and compliance, wherein valid data is forwarded for segregation and invalid data triggers alerts;
a data segregation module (30) configured to receive one or more valid data from the data validation module (20) and classify the validated data into at least image data, video data, and 3D data streams;
a data interface layer (40) comprising an image processing engine (41), a video processing engine (42), and a 3D data processing engine (43), each configured to convert at least one input into one or more structured interpretations for AI evaluation;
an AI engine (50) configured to execute one or more pre-trained AI models (51) on the processed data in batch inference and generate corresponding analytical outputs (52);
an analytics interface (60) configured to visualize the output data (52) from the AI engine (50) in the form of interactive dashboards and reports;
a cloud interface module (70) operatively coupled to the AI engine (50), analytics interface (60), and data interface layer (40), said cloud interface (70) configured to store at least one of the validated data and processed results, and to facilitate model training and validation;
a human validation interface (80) configured to collect user feedback (81) and relay said feedback to a reinforcement learning with human feedback (RLHF) loop (82) for model performance improvement; and
an access control module (90) configured to enforce multi-factor authentication and tier-based permissions for data access and control, said access control module (90) comprising:
a user module (91) configured to input data and view reports;
an admin module (92) configured to manage data and users; and
a super admin module (93) configured to oversee and control entire device operations.
2. The device (100) as claimed in claim 1, wherein the live stream API (11) is configured to support synchronous and asynchronous drone data feeds from a plurality of UAV sources.
3. The device (100) as claimed in claim 1, wherein the data validation
module (20) comprises rule-based filters and AI-based data quality
assurance submodules.
4. The device (100) as claimed in claim 1, wherein the data segregation module (30) is configured to tag and route data types dynamically to corresponding processing engines (41, 42, 43) of the data interface layer (40).
5. The device (100) as claimed in claim 1, wherein the device (100) comprises an alert management module (AMM) configured to generate and display/trigger one or more real-time alerts/notifications to users based on customizable anomaly thresholds.
6. The device (100) as claimed in claim 1, wherein the analytics interface (60) is configured to support configurable dashboards for displaying metrics, heat maps, geotagged overlays, and 3D reconstructions.
7. The device (100) as claimed in claim 1, wherein the cloud interface (70) is
configured to host comparison engines (71) to evaluate AI model
performance against human-validated outputs.
8. The device (100) as claimed in claim 1, wherein the RLHF loop (82)
utilizes the feedback to fine-tune the AI models (51) and improve
prediction accuracy over time.
9. A method (200) for a drone analytics platform using a device (100), said method (200) comprising the steps of:
receiving, via a live stream API (11), at least one real-time input from one or more UAVs;
validating, via a data validation module (20), quality and compliance of the received data, wherein valid data is forwarded for segregation and invalid data triggers alerts;
classifying, via a data segregation module (30), the validated data into at least image data, video data, and 3D data streams;
processing, via processing engines (41, 42, 43), the classified data for structured interpretation for AI evaluation;
analyzing, via an AI engine (50), the processed data using pre-trained AI models (51) in batch inference and generating corresponding analytical outputs (52);
visualizing, via an analytics interface (60), the output data (52) in the form of interactive dashboards and reports;
triggering, via an alert management module (AMM), one or more real-time alerts/notifications based on deviations and anomalies detected in the analytical outputs (52);
storing, via a cloud interface (70), at least one of the validated data and processed results, and facilitating model training and validation;
receiving, via a human validation interface (80), user feedback (81) for ground-truthing predictions; and
retraining, via an RLHF loop (82), the AI models (51) using validated feedback to improve performance.
10. The method (200) as claimed in claim 9, wherein the AI engine (50)
executes batch inference on cloud infrastructure to scale processing
capacity.
11. The method (200) as claimed in claim 9, wherein the user feedback (81) is benchmarked against AI model outputs (52) using a comparison engine (71), and model weights are adjusted accordingly through the RLHF loop (82).
12. The method (200) as claimed in claim 9, wherein the real-time alerts/notifications are prioritized based on severity scores computed from AI predictions.
13. The method (200) as claimed in claim 9, wherein the method (200) comprises the step of restricting user actions and dashboard visibility according to role-based permissions defined in an access control module (90).

Documents

Application Documents

# Name Date
1 202441046318-STATEMENT OF UNDERTAKING (FORM 3) [15-06-2024(online)].pdf 2024-06-15
2 202441046318-PROVISIONAL SPECIFICATION [15-06-2024(online)].pdf 2024-06-15
3 202441046318-FORM FOR STARTUP [15-06-2024(online)].pdf 2024-06-15
4 202441046318-FORM FOR SMALL ENTITY(FORM-28) [15-06-2024(online)].pdf 2024-06-15
5 202441046318-FORM 1 [15-06-2024(online)].pdf 2024-06-15
6 202441046318-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-06-2024(online)].pdf 2024-06-15
7 202441046318-EVIDENCE FOR REGISTRATION UNDER SSI [15-06-2024(online)].pdf 2024-06-15
8 202441046318-DECLARATION OF INVENTORSHIP (FORM 5) [15-06-2024(online)].pdf 2024-06-15
9 202441046318-CORRESPONDENCE-OTHERS [14-06-2025(online)].pdf 2025-06-14
10 202441046318-COMPLETE SPECIFICATION [14-06-2025(online)].pdf 2025-06-14
11 202441046318-FORM-9 [15-06-2025(online)].pdf 2025-06-15
12 202441046318-MSME CERTIFICATE [16-06-2025(online)].pdf 2025-06-16
13 202441046318-FORM28 [16-06-2025(online)].pdf 2025-06-16
14 202441046318-FORM 18A [16-06-2025(online)].pdf 2025-06-16
15 202441046318-FER.pdf 2025-10-06

Search Strategy

1 202441046318_SearchStrategyNew_E_Search_318E_03-10-2025.pdf