
A System For Multimodal Health Data Integration And Predictive Analytics And A Method Thereof

Abstract: A system for multimodal health data integration and predictive analytics and a method thereof [0027] The present invention relates to a system for multimodal health data integration and predictive analytics, utilizing wearable devices (101) equipped with sensors to collect real-time data. The multimodal data are stored in a cloud-based EHR platform (102) and processed by an AI-driven analytics engine (103). The analytics engine (103) preprocesses data through normalization and temporal alignment, extracts features using computer vision and ML algorithms, and fuses them into a joint representation vector through a multi-branch neural network to generate predictive health insights. A user interface (104) displays the predictive insights, and real-time alerts are transmitted to healthcare providers through a telehealth dashboard. The present invention further relates to a method of collecting, storing, processing, and generating predictive insights from multimodal health data collected through wearable devices. (Figure 1)

Patent Information

Application #
202541072322
Filing Date
30 July 2025
Publication Number
33/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Parent Application

Applicants

Aiira Tex Private Limited
#10, 5th Main, Behind Basavanna Temple, Hosakerehalli, Banashankari 3rd Stage, Bangalore-560085, Karnataka, India

Inventors

1. Mr. Harsha Krishnamurthy
C/o Aiira Tex Private Limited, #10, 5th Main, Behind Basavanna Temple, Hosakerehalli, Banashankari 3rd Stage, Bangalore-560085, Karnataka, India

Specification

Description:
Technical field of the invention
[0002] The present invention relates to a system for multimodal health data integration from wearable devices and predictive analytics. More particularly, the invention relates to an artificial intelligence (AI)-driven predictive analytics system that integrates multimodal health data from wearable devices into a cloud-based Electronic Health Record (EHR) platform to manage, enhance, and predict user wellness. The invention further relates to a method for multimodal health data integration and predictive analytics using wearable devices.
Background of the invention
[0003] The advent of wearable technologies and EHR platforms has significantly advanced healthcare delivery by enabling continuous patient monitoring and data-driven decision-making. Wearable devices, such as smartwatches and fitness trackers, collect a wealth of health data, such as heart rate, activity levels, and sleep patterns, while EHR platforms provide centralized storage of and access to patient health information. Despite these advancements, existing systems face critical shortcomings in achieving interoperability, real-time data integration, and predictive analytics.
[0004] Further, the wearable devices often operate in isolation, lacking standardized protocols to integrate their data into EHR platforms effectively. This fragmentation limits the ability of healthcare providers to access a holistic view of patient health, hindering timely interventions and personalized care.
[0005] For instance, the Patent Application No. US2023290502A1 titled “Machine learning framework for detection of chronic health conditions” discloses a system and method for detecting chronic health conditions based on data collected by a wearable device such as an activity tracker or a smart watch. Deep learning algorithms are configured to process the monitored parameter data collected by the wearable device as well as additional embedding data obtained from health records corresponding to a user account registered to the wearable device. If any underlying health condition is detected, then the user can be notified directly, via the wearable device or an associated application or technology, or indirectly, via a primary care provider associated with the user.
[0006] The Patent Application No. US2019304582A1 titled “methods and system for real time, cognitive integration with clinical decision support systems featuring interoperable data exchange on cloud-based and blockchain networks” discloses a cloud-based, real-time architecture comprising an adaptive user interface providing responsive dashboards for real-time data presentation and user interaction, a hub controller for real-time data flow transformation, a data validation engine which incorporates cognitive natural language processing (NLP) to extract structured and unstructured patient record data from the EHR and employs methods that provide an optimally automated input to the CDS, cloud storage of aggregated CDS data and other clinical datasets, and plug-in support for additional cognitive capabilities such as predictive analytics and data mining.
[0007] The Patent Application No. IN202441087505 titled “cloud-based predictive healthcare system leveraging machine learning for early disease detection and patient monitoring” discloses a cloud-based predictive healthcare system comprising a data collection module to aggregate patient data from various sources, including EHR, wearable devices, and IoT sensors. The healthcare system uses a preprocessing module for data cleaning, feature extraction, and transformation and a machine learning predictive model system that uses real-time patient data to assess disease risk and predict onset. Further, the invention discloses a patient monitoring module that continuously monitors health metrics, computes risk assessment, and generates alerts and a cloud-based storage and processing system for secure data handling and cross-provider access.
[0008] The existing systems often fail to utilize advanced Artificial Intelligence (AI) and Machine Learning (ML) efficiently to filter and prioritize the multimodal data, resulting in data overload for clinicians and missed opportunities for early detection of health risks. Additionally, existing wearable technologies are constrained by their reliance on limited biomarkers, such as vital signs, without incorporating advanced multi-sensory data like eye movement, thermal imaging, or breath composition. These limitations prevent the early detection of complex conditions, such as neurological disorders, diabetes, or circulatory issues, which require correlated analysis of diverse data points. Moreover, the lack of real-time analytics and contextual recommendations in existing systems contributes to clinician burnout and inefficient care coordination.
[0009] Consequently, there is a need for an improved healthcare system that overcomes the limitations of existing solutions by efficiently integrating multimodal health data from wearable devices. The system enables proactive interventions, reduces data overload, improves real-time monitoring, and delivers personalized health insights by enhancing EHR platform interoperability and employing advanced AI-driven predictive analytics.
Summary of the invention
[0010] The present invention addresses the limitations of the prior art by introducing a system for multimodal health data integration and predictive analytics, aimed at enhancing healthcare delivery through real-time monitoring and personalized insights. The system comprises wearable devices equipped with cameras, thermal sensors, gas sensors, and biometric sensors to collect multimodal health data comprising visual data such as pupil dilation and facial asymmetry, thermal data such as skin temperature, olfactory data such as volatile organic compounds from breath, and biometric data such as heart rate. The collected multimodal health data are transmitted to a cloud-based EHR platform that securely stores and organizes the information for analysis. The system comprises an AI-driven analytics engine coupled to the cloud-based EHR platform to process the multimodal health data. The AI-driven analytics engine normalizes and temporally aligns the multimodal health data and extracts features from the health data using computer vision and ML algorithms. Further, the AI-driven analytics engine fuses the features into a joint representation vector through a multi-branch neural network and generates predictive health insights, such as health risk scores and anomaly detection flags. The system further comprises a user interface that displays the predictive health insights and transmits real-time alerts to healthcare providers through a telehealth dashboard.
[0011] The system further incorporates a range of sensors embedded within the wearable devices to detect multi-sensory data for recording a variety of health biomarkers and vital signs. The system processes visual data, such as pupil dilation, saccadic movement, and blink rate, through the AI-driven analytics engine and identifies neurological and systemic conditions of a patient, such as Parkinson’s disease and concussions. Further, thermal data processing enables the detection of conditions such as fever, infections, and circulatory disorders, while olfactory data processing identifies diseases such as diabetes and liver disease through volatile organic compounds in the breath of a patient. Additionally, the system assesses emotional states by analysing eye movement, skin conductance, and skin temperature, and detects Transient Ischemic Attacks (TIAs) and strokes using facial asymmetry and temperature imbalances. Furthermore, the telehealth dashboard employed in the system enhances remote diagnostics by processing facial and ocular images to identify infections, inflammation, or neurological impairments.
[0012] The present invention further discloses a method comprising the steps of collecting multimodal health data in real time using wearable devices, storing the data in a cloud-based EHR platform, and processing the multimodal data with an AI-driven analytics engine. The processing further involves preprocessing the multimodal health data, extracting features from the health data using computer vision and machine learning algorithms, fusing the features into a joint representation vector, and analysing the joint representation vector to generate predictive health insights, such as health risk scores and anomaly detection flags. The predictive health insights are displayed through the user interface, and real-time alerts are transmitted to healthcare providers through a telehealth dashboard, ensuring timely and personalized health management.
[0013] The present invention aims to provide a system for processing multisensory data through improved interoperability and usability of the EHR platform, an advanced AI-driven analytics engine, and real-time data integration. The advanced AI-driven analytics engine employed in the system efficiently filters and prioritizes multimodal health data collected from the wearable devices. By integrating multimodal data from wearable devices into the cloud-based EHR platform using standardized protocols, the system enables early detection of health risks, such as neurological disorders, infections, and cardiovascular events, thereby improving patient outcomes through proactive interventions. The telehealth dashboard supports virtual consultations, reducing the burden on healthcare facilities. The disclosed system has applications in various healthcare settings, such as remote patient monitoring, chronic disease management, and emergency response.
Brief Description of drawings
[0014] Figure 1 illustrates a block diagram of a system for multimodal health data integration and predictive analytics, in accordance with an embodiment of the invention.
[0015] Figure 2 illustrates a flowchart for a method for multimodal health data integration and predictive analytics, in accordance with an embodiment of the invention.
Detailed description of the invention
[0016] In order to more clearly and concisely describe and point out the subject matter of the claimed invention, the following definitions are provided for specific terms, which are used in the following written description.
[0017] The term “Joint Representation Vector” refers to a unified, fixed-length numerical vector that encapsulates features extracted from multimodal data sources or modalities such as text, images, or sensor data into a single, cohesive representation.
[0018] The term “Vector Mapping” refers to a process of transforming data into a numerical vector representation in a vector space, capturing essential features and patterns for classification and analysis, using feature extraction techniques.
[0019] The present invention relates to a system for multimodal health data integration and predictive analytics. The system collects multimodal health data from wearable devices, stores the collected health data in a structured format in a cloud-based EHR platform, processes the health data with an AI-driven analytics engine, and provides predictive healthcare insights. The system integrates diverse data streams, including visual, thermal, olfactory, and biometric data, collected in real time from wearable devices, and processes them through an AI-driven analytics engine. The AI-driven analytics engine employs advanced techniques, such as computer vision, machine learning, and feature fusion, to generate predictive health insights. The health insights are transmitted to a cloud-based EHR platform through a telehealth dashboard, enabling real-time monitoring, anomaly detection, and timely clinical interventions.
[0020] Figure 1 illustrates a block diagram of a system (100) for multimodal health data integration and predictive analytics, in accordance with an embodiment of the invention. The system (100) comprises wearable devices (101) that collect multimodal health data, such as visual, thermal, olfactory, and biometric data, from a patient's body in real time through the cameras (101a), thermal sensors (101b), gas sensors (101c), and biometric sensors (101d), respectively. The wearable devices include devices such as a wrist-worn device, a head-mounted device, or a body-worn sensor patch. The visual data include pupil dilation, saccadic movement, blink rate, and facial asymmetry. The thermal data include skin temperature and heat distribution. The olfactory data include volatile organic compounds from breath composition, and the biometric data include heart rate and skin conductance. The system further comprises a cloud-based EHR platform (102) that receives the multimodal health data collected from the wearable devices (101) and stores the health data in a structured format. The cloud-based EHR platform utilizes a microservices architecture, organized into healthcare services, infrastructure, and DevOps components, comprising Continuous Integration/Continuous Deployment (CI/CD) pipelines, microservice orchestration, log management, monitoring, and alerting. The data transmission from the wearable devices to the cloud-based EHR platform is managed through a central API Gateway Layer, utilizing standardized protocols such as Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR) for seamless and secure data exchange. The API Gateway routes requests and manages communication between the wearable devices and backend services, with API documentation provided through API Gateway Swagger. The platform supports multi-tenancy, enabling access for different user types such as patients, doctors, and hospitals, and integrates with external third-party services through multiple API integrations. Further, an AI-driven analytics engine (103) processes the health data in real time through preprocessing, feature extraction, and feature fusion, and generates predictive health insights, such as health risk scores and anomaly detection flags. The preprocessing involves normalizing and temporally aligning the multimodal health data, wherein the data formats are standardized across modalities and synchronized based on timestamps to align temporal sequences. The feature extraction employs computer vision and ML algorithms, and the feature fusion uses a multi-branch neural network to create a joint representation vector. The multi-branch neural network comprises a first branch configured to process the visual data using Convolutional Neural Networks (CNNs), a second branch configured to process biometric data using Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, and a third branch configured to process scalar olfactory data using Multi-Layer Perceptrons (MLPs). The hardware integration process for wearable devices follows a unified approach, ensuring compatibility through rigorous testing of electrical connections, data accuracy, and communication reliability, with calibration to adjust for sensor inaccuracies using reference points.
The system further comprises a user interface (104) that displays predictive health insights and transmits real-time alerts to healthcare providers through a telehealth dashboard, enabling timely intervention based on user-specific baselines established over time. The user-specific baseline is established by analysing historical multimodal health data collected from the wearable devices (101) over a predetermined period of time. The system (100) continuously updates the user-specific baseline using temporally aggregated data and trend analysis to improve personalized risk prediction over time.
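For illustration, the multi-branch fusion described above can be sketched in Python with PyTorch as follows; the input shapes, layer sizes, and the two output heads (a risk score and an anomaly logit) are assumptions made for the sketch rather than the claimed implementation.

# Illustrative sketch of a multi-branch fusion network: a CNN branch for
# visual crops, an LSTM branch for biometric time series, and an MLP
# branch for scalar olfactory readings, concatenated into a joint
# representation vector. Shapes and sizes are hypothetical.
import torch
import torch.nn as nn

class MultiBranchFusion(nn.Module):
    def __init__(self, olfactory_dim=8, embed_dim=64):
        super().__init__()
        # Branch 1: CNN over single-channel 64x64 eye/face crops.
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, embed_dim),
        )
        # Branch 2: LSTM over biometric time series (heart rate, skin conductance).
        self.biometric = nn.LSTM(input_size=2, hidden_size=embed_dim, batch_first=True)
        # Branch 3: MLP over scalar olfactory readings (VOC concentrations).
        self.olfactory = nn.Sequential(
            nn.Linear(olfactory_dim, 32), nn.ReLU(), nn.Linear(32, embed_dim),
        )
        # Output heads: a health risk score and an anomaly-flag logit.
        self.risk_head = nn.Linear(3 * embed_dim, 1)
        self.anomaly_head = nn.Linear(3 * embed_dim, 1)

    def forward(self, frames, vitals, vocs):
        v = self.visual(frames)              # (batch, embed_dim)
        _, (h, _) = self.biometric(vitals)   # h: (1, batch, embed_dim)
        b = h[-1]
        o = self.olfactory(vocs)             # (batch, embed_dim)
        joint = torch.cat([v, b, o], dim=1)  # joint representation vector
        return torch.sigmoid(self.risk_head(joint)), self.anomaly_head(joint)

A forward pass would take a batch of eye-region crops, a window of vital-sign samples, and a vector of VOC readings, and return a risk score in [0, 1] together with an anomaly logit.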
[0021] The system (100) processes multimodal health data to detect a range of neurological, systemic, and emotional conditions through an AI-driven analytics engine (103). The visual data, including pupil dilation, saccadic movement, blink rate, facial asymmetry, and ocular images, are processed using computer vision and ML algorithms to identify conditions such as Parkinson’s disease, concussions, diabetes, infections, inflammation, neurological impairments, TIAs, and stroke, with real-time image processing enabling remote analysis through a telehealth dashboard. The AI-driven analytics engine performs facial and ocular detection using computer vision algorithms selected from Haar cascade classifiers, Multi-Task Cascaded Convolutional Neural Networks (MTCNN), and RetinaFace models. The thermal data, comprising skin temperature and heat distribution patterns, are analysed using ML algorithms to detect fever, infections, or circulatory disorders. The olfactory data, consisting of Volatile Organic Compounds (VOCs) from breath composition, are processed to identify conditions such as diabetes, liver disease, and infections. The AI-driven analytics engine (103) employs time-series classification models selected from Long Short-Term Memory (LSTM) and Convolutional Neural Network–LSTM (CNN-LSTM) to detect temporal patterns in biometric data. Additionally, the multimodal health data, including eye movement, skin conductance, and skin temperature, are processed using a stress-detection algorithm to assess the emotional state of the user. The AI-driven analytics engine (103) integrates these analyses to generate comprehensive health insights that are stored in the cloud-based EHR platform (102) and displayed through the user interface for clinical review and real-time alerts.
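As a rough illustration of the facial and ocular detection step, the sketch below uses OpenCV's bundled Haar cascades (the specification also names MTCNN and RetinaFace as alternatives); the detection parameters are assumed values, not prescribed by the specification.

# Detect a face and the eyes within it using the Haar cascades shipped
# with OpenCV; returns bounding boxes in full-frame coordinates.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        results.append({
            "face": (x, y, w, h),
            "eyes": [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes],
        })
    return results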
[0022] Figure 2 illustrates a flowchart for a method (200) of multimodal health data integration and predictive analytics, in accordance with an embodiment of the invention. The method (200) comprises a step (201) of collecting multimodal health data, including visual, thermal, olfactory, and biometric data, from a patient's body in real time through wearable devices equipped with cameras, thermal sensors, gas sensors, and biometric sensors. At step (202), the collected multimodal health data are transmitted to and stored in a cloud-based EHR platform in a structured format. At step (203), the multimodal health data are processed by an AI-driven analytics engine, wherein the processing includes preprocessing the multimodal health data by normalizing the data across modalities to account for differences in units, ranges, and sampling frequencies, and temporally aligning the multimodal health data based on synchronized timestamps to ensure time-correlated analysis across health data streams (203a); extracting features from the preprocessed multimodal health data using computer vision and ML algorithms (203b); fusing the extracted features into a joint representation vector using a multi-branch neural network, wherein each branch is configured to process a distinct health data modality (203c); and analysing the joint representation vector to identify correlations and patterns among the various health data (203d). At step (204), predictive health insights, such as health risk scores and anomaly detection flags, are generated based on the identified correlations and patterns and transmitted to healthcare providers through the telehealth dashboard for real-time monitoring.
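A minimal sketch of the preprocessing in step (203a), assuming each modality arrives as a pandas DataFrame with a datetime 'timestamp' column; the column names and the one-second alignment tolerance are illustrative assumptions.

# Z-score normalization per modality and nearest-timestamp alignment of
# two data streams onto a common clock.
import pandas as pd

def normalize(df, cols):
    out = df.copy()
    for c in cols:
        out[c] = (out[c] - out[c].mean()) / out[c].std()
    return out

def align_streams(biometric, thermal, tolerance="1s"):
    biometric = biometric.sort_values("timestamp")
    thermal = thermal.sort_values("timestamp")
    # Pair each biometric sample with the nearest thermal sample in time.
    return pd.merge_asof(biometric, thermal, on="timestamp",
                         tolerance=pd.Timedelta(tolerance), direction="nearest")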
[0023] According to an embodiment of the invention, the AI-driven analytics engine processes image data captured by the cameras and thermal data captured by the thermal sensors embedded in the wearable device to detect facial, ocular, and skin-related health cues using computer vision and ML algorithms in real time. The cameras record High Frame Rate (HFR), Red Green Blue (RGB) video of the face and eyes of the patient, with data transmission to the cloud-based EHR platform facilitated by APIs such as HL7 FHIR, managed through the central API Gateway for secure and efficient data exchange. The infrared or depth cameras provide improved contour detection in low-light conditions. The thermal sensors capture temperature gradients across facial regions. The collected multimodal health data from the wearable devices are transmitted to the cloud-based EHR platform, which is built on a scalable microservices architecture with CI/CD pipelines for continuous updates and log management for system reliability. The multimodal health data are processed by the AI-driven analytics engine through preprocessing, feature extraction, and feature fusion. The preprocessing involves frame stabilization and denoising, with face and eye detection performed using Haar cascade and deep learning models, such as Multi-Task Cascaded Convolutional Neural Network (MTCNN) and RetinaFace. Further, landmarks for the eyes, eyebrows, eyelids, and pupils are extracted using Dlib or MediaPipe. The feature extraction comprises temporal analysis of pupil diameter using ellipse fitting over the iris, and blink detection is performed using the Eye Aspect Ratio (EAR) and frame differencing. The saccadic velocity and amplitude are calculated using gaze vector tracking. The extracted features are passed into a Long Short-Term Memory (LSTM) or Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model for temporal pattern recognition, and the outputs are used to detect anomalies indicative of fatigue, neurological conditions, and stress. The results are streamed into the cloud-based EHR platform, where anomaly scores trigger alerts or anomaly flag records for clinical review, with multi-tenancy support ensuring secure access for various stakeholders.
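As an illustration of the pupil-diameter estimation by ellipse fitting mentioned above, the sketch below thresholds a grayscale eye crop, keeps the largest dark blob, and fits an ellipse to it; the fixed threshold is an assumption and would in practice be adapted to lighting conditions.

# Estimate pupil diameter (in pixels) from a grayscale eye crop.
import cv2

def pupil_diameter(eye_gray):
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    # The pupil is the darkest region; a low threshold isolates it.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:            # fitEllipse needs at least five points
        return None
    (_, _), (axis1, axis2), _ = cv2.fitEllipse(pupil)
    return (axis1 + axis2) / 2.0  # mean of the fitted ellipse axes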
[0024] According to another embodiment of the invention, the AI-driven analytics engine processes eye movement metrics to quantify and interpret pupil dilation, saccadic movement, and blink rate as biomarkers of neurological and psychological states. For pupil dilation, iris segmentation is performed using thresholding, edge detection, or Convolutional Neural Network (CNN)-based segmentation, such as U-Net, followed by ellipse fitting to estimate pupil diameter. The pupil size changes are continuously monitored in response to Pupillary Light Reflex (PLR), light variations, and emotional stimuli, such as stress or anxiety. For blink rate detection, facial landmarks around the eyes are analysed in sequential frames to compute the Eye Aspect Ratio (EAR), defined as the vertical distance between eyelids divided by the horizontal width of the eye. A drop in EAR below a predefined threshold across a sequence of frames indicates a blink, and blink frequency is recorded over time. For saccadic movement tracking, gaze direction is estimated using vector mapping from the pupil center to the facial plane, tracking eye movement trajectories across sequential frames. The saccade amplitude (in degrees), velocity (in degrees per second), and latency from stimuli are computed, with abnormal metrics linked to conditions such as Parkinson’s disease, Attention Deficit Hyperactivity Disorder (ADHD), and fatigue. The processed metrics are integrated into the cloud-based EHR platform, where anomalies trigger alerts for clinical review.
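The EAR computation and blink counting described above can be sketched as follows, assuming six eye landmarks in the conventional p1..p6 ordering (p1 and p4 at the eye corners, p2/p3 on the upper eyelid, p5/p6 on the lower eyelid); the 0.21 threshold and three-frame minimum are illustrative assumptions.

# Eye Aspect Ratio: vertical eyelid distances divided by twice the
# horizontal eye width; a sustained drop below threshold counts as a blink.
import numpy as np

def eye_aspect_ratio(pts):
    # pts: array of shape (6, 2) holding landmarks p1..p6 around one eye.
    p1, p2, p3, p4, p5, p6 = pts
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.21, min_frames=3):
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:   # eye stayed closed long enough to count
                blinks += 1
            run = 0
    return blinks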
[0025] According to yet another embodiment of the invention, the AI-driven analytics engine employs a multimodal AI model to fuse data from various channels, such as visual, olfactory, and biometric channels, to identify correlations and detect patterns linked to health outcomes. The AI-driven analytics engine processes input health data modalities, including video streams, thermal maps, gas sensor outputs, and standard vital signs. The preprocessing pipelines normalize data across sampling frequencies, apply noise filtering and outlier detection using techniques such as Kalman filters, and perform temporal alignment and time-window segmentation. The feature fusion utilizes a multi-branch neural network, where each branch encodes a specific modality, such as Convolutional Neural Networks (CNNs) for image data, Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks for temporal vitals, and Multi-Layer Perceptrons (MLPs) for scalar data, such as volatile organic compounds from breath composition. The learned embeddings are concatenated to form a joint representation vector. The correlation and pattern recognition are achieved through attention mechanisms that highlight the most predictive modalities, coupled with a multi-task loss function that simultaneously predicts stress levels, disease probability scores, and anomaly detection flags. The resulting insights are integrated into the cloud-based EHR platform, which utilizes a scalable data layer with various databases and storage systems for secure data handling, and are accessed through a user interface, enabling real-time health monitoring and clinical alerts.
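As a minimal illustration of the Kalman-filter noise smoothing mentioned in this embodiment, a one-dimensional constant-state filter over a single vital-sign stream can be sketched as follows; the process and measurement variances are assumed values and would normally be tuned per sensor.

# One-dimensional Kalman filter: each new sample nudges the running
# estimate by a gain that balances model confidence against sensor noise.
def kalman_smooth(samples, q=1e-3, r=4.0):
    estimate, error = samples[0], 1.0
    smoothed = [estimate]
    for z in samples[1:]:
        error += q                      # predict: uncertainty grows slightly
        gain = error / (error + r)      # update: weight the new measurement
        estimate += gain * (z - estimate)
        error *= (1.0 - gain)
        smoothed.append(estimate)
    return smoothed

# Example: a noisy heart-rate trace (values are made up); the 140 spike
# is strongly damped in the smoothed output.
print(kalman_smooth([72, 75, 140, 74, 73, 71]))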
[0026] The disclosed system provides an electronic health record of a patient through multimodal health data integration from wearable devices and generates predictive insights using advanced AI and ML algorithms to enhance personalized health monitoring and disease detection. The technical implementation involves a frontend built using the Ionic Framework for the mobile application and Angular for the web interface, ensuring a responsive and user-friendly experience. The backend is powered by Node.js, enabling efficient processing and data management. A My Structured Query Language (MySQL) database is utilized for secure storage of multimodal health data within the cloud-based EHR platform. Authentication is implemented using JSON Web Token (JWT)-based authentication to ensure secure access to the system. Further, the system is hosted on cloud infrastructure such as Amazon Web Services (AWS) to provide reliability and scalability. The application of the disclosed invention spans clinical settings, remote telehealth monitoring, and personalized health management, enabling healthcare providers to access real-time predictive insights, detect anomalies, and deliver timely interventions, while empowering users to monitor their health proactively through wearable devices and an intuitive interface.
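The specification places authentication on a Node.js backend; purely as a language-neutral illustration, the same JWT issue-and-verify flow is sketched below in Python using PyJWT, with a placeholder secret and claims.

# Issue a signed token carrying the user id and role, and verify it on
# each request; expiry is enforced automatically via the 'exp' claim.
import time
import jwt  # PyJWT

SECRET = "replace-with-a-strong-secret"

def issue_token(user_id, role, ttl_seconds=3600):
    claims = {"sub": user_id, "role": role, "exp": int(time.time()) + ttl_seconds}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token):
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("patient-001", "patient")
print(verify_token(token)["role"])   # -> patient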

Reference numbers:
Reference Number    Component
100 System
101 Wearable device
101a Camera
101b Thermal sensor
101c Gas sensor
101d Biometric sensor
102 Cloud-based EHR platform
103 AI-driven analytics engine
104 User interface
Claims:
We claim:
1. A system for multimodal health data integration and predictive analytics, the system (100) comprising:
a. one or more wearable devices (101) configured to collect multimodal health data such as visual data, thermal data, olfactory data, and biometric data of a patient body in real-time utilizing a plurality of cameras (101a), thermal sensors (101b), gas sensors (101c), and biometric sensors (101d);
b. a cloud-based Electronic Health Record (EHR) platform (102) configured to receive and store the collected multimodal health data from the wearable devices (101);
c. an artificial intelligence (AI)-driven analytics engine (103) coupled to the cloud-based EHR platform (102), wherein the AI-driven analytics engine (103) is configured to:
i. preprocess the multimodal health data through standardizing health data formats across a plurality of modalities and synchronizing the health data based on timestamps to align one or more temporal sequences;
ii. extract one or more features from the multimodal health data employing a set of computer vision and Machine Learning (ML) algorithms;
iii. fuse the extracted features into a joint representation vector utilizing a multi-branch neural network; and
iv. generate a plurality of predictive health insights comprising a health risk score and an anomaly detection flag, based on the joint representation vector; and
d. a user interface (104) configured to display the predictive health insights;
wherein the system (100) is configured to provide one or more real-time predictive health insight and one or more risk assessment derived from a user-specific baseline established over time.
2. The system (100) as claimed in claim 1, wherein the one or more wearable devices (101) comprise at least one of a wrist-worn device, a head-mounted device, or a body-worn sensor patch.
3. The system (100) as claimed in claim 1, wherein the multi-branch neural network comprises:
a. a first branch configured to process the visual data using a plurality of Convolutional Neural Networks (CNNs);
b. a second branch configured to process biometric data using a plurality of Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks; and
c. a third branch configured to process olfactory data using a plurality of Multi-Layer Perceptrons (MLPs).
4. The system (100) as claimed in claim 1, wherein the AI-driven analytics engine (103) performs facial and ocular detection using one or more computer vision algorithms selected from Haar cascade classifiers, Multi-Task Cascaded Convolutional Neural Networks (MTCNN), and RetinaFace models.
5. The system (100) as claimed in claim 1, wherein the AI-driven analytics engine (103) employs time-series classification models selected from the LSTM and Convolutional Neural Network–LSTM (CNN-LSTM) to detect one or more temporal patterns in biometric data.
6. The system (100) as claimed in claim 1, wherein the system (100) continuously updates the user-specific baseline using temporally aggregated data and trend analysis to improve personalized risk prediction over time.
7. A method for multimodal health data integration and predictive analytics, the method (200) comprises the steps of:
a. collecting the multimodal health data, such as visual data, thermal data, olfactory data, and biometric data from a patient body, in real time using one or more wearable devices equipped with a plurality of cameras, thermal sensors, gas sensors, and biometric sensors (201);
b. storing the multimodal health data in a cloud-based EHR platform (202);
c. processing the multimodal health data using an AI-driven analytics engine (203), wherein the processing comprises:
i. preprocessing the multimodal health data by normalizing the data across modalities to account for differences in one or more units, ranges, and sampling frequencies and temporally aligning the data based on a plurality of synchronized timestamps (203a);
ii. extracting one or more features from the preprocessed multimodal health data using a set of computer vision and ML algorithms (203b);
iii. fusing the extracted features into a joint representation vector using a multi-branch neural network, wherein each branch is configured to process a distinct health data modality (203c); and
iv. analysing the joint representation vector to identify correlations and patterns (203d); and
d. generating a predictive health insight comprising at least a health risk score and an anomaly detection flag (204).

8. The method as claimed in claim 7, comprising the step of transmitting real-time alerts to healthcare providers through a telehealth dashboard, the alerts comprising one or more health risk score and anomaly detection flag, to enable remote analysis of the predictive health insight.

Documents

Application Documents

# Name Date
1 202541072322-STATEMENT OF UNDERTAKING (FORM 3) [30-07-2025(online)].pdf 2025-07-30
2 202541072322-REQUEST FOR EARLY PUBLICATION(FORM-9) [30-07-2025(online)].pdf 2025-07-30
3 202541072322-PROOF OF RIGHT [30-07-2025(online)].pdf 2025-07-30
4 202541072322-POWER OF AUTHORITY [30-07-2025(online)].pdf 2025-07-30
5 202541072322-FORM-9 [30-07-2025(online)].pdf 2025-07-30
6 202541072322-FORM FOR STARTUP [30-07-2025(online)].pdf 2025-07-30
7 202541072322-FORM FOR SMALL ENTITY(FORM-28) [30-07-2025(online)].pdf 2025-07-30
8 202541072322-FORM 1 [30-07-2025(online)].pdf 2025-07-30
9 202541072322-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-07-2025(online)].pdf 2025-07-30
10 202541072322-EVIDENCE FOR REGISTRATION UNDER SSI [30-07-2025(online)].pdf 2025-07-30
11 202541072322-DRAWINGS [30-07-2025(online)].pdf 2025-07-30
12 202541072322-DECLARATION OF INVENTORSHIP (FORM 5) [30-07-2025(online)].pdf 2025-07-30
13 202541072322-COMPLETE SPECIFICATION [30-07-2025(online)].pdf 2025-07-30
14 202541072322-STARTUP [01-08-2025(online)].pdf 2025-08-01
15 202541072322-FORM28 [01-08-2025(online)].pdf 2025-08-01
16 202541072322-FORM 18A [01-08-2025(online)].pdf 2025-08-01