
Meta Learning Based Generalization Of Deep Models For Multi Variant Cardiac Disorders

Abstract: The described invention introduces a meta-learning-based deep learning framework that extends the usefulness of AI diagnosis beyond a single variant of cardiac disease. By combining model-agnostic meta-learning (MAML) with a multi-branch architecture, the system can adapt to a new condition (disease variant) with very little retraining. The model is trained on diverse cardiac datasets, including ECG, MRI, and EHR data, and achieves high diagnostic accuracy across varied patient populations and clinical environments. Experimental results show that the framework is more robust, accurate, and adaptable than conventional deep learning models, making it suitable for clinical evaluation in settings where cardiac data are heterogeneous and new disease variants continue to emerge.


Patent Information

Application #
Filing Date
01 August 2025
Publication Number
36/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
Ananthasagar, Hasanparthy (PO), Warangal - 506371, Telangana, India.

Inventors

1. J Bramaramba
School of Computer Science and Artificial Intelligence, SR University, Ananthasagar, Hasanparthy (P.O), Warangal, Telangana-506371, India
2. Dr. Mohammed Ali Shaik
School of Computer Science and Artificial Intelligence, SR University, Ananthasagar, Hasanparthy (P.O), Warangal, Telangana-506371, India.
3. Dr. Balajee Maram
School of Computer Science and Artificial Intelligence, SR University, Ananthasagar, Hasanparthy (P.O), Warangal, Telangana-506371, India

Specification

Description: PROBLEM STATEMENT:
Cardiovascular diseases (CVDs) are the leading cause of death worldwide and encompass a wide range of disorders, including ischemic heart disease, hypertrophic cardiomyopathy, arrhythmogenic disorders, and congenital defects. Timely and accurate diagnosis of these diseases is essential for effective treatment and patient survival. Over the last decade, deep learning (DL) models, a branch of artificial intelligence (AI), have proven remarkably successful at identifying and classifying cardiac disorders from diverse data sources, including electrocardiograms (ECG), cardiac magnetic resonance imaging (MRI), and electronic health records (EHR).
Despite this success, a critical limitation remains: most DL models are designed and trained on a particular dataset, usually one provided by a single hospital or population, and conditioned to recognize only a subset of cardiac disorders. When such models are tested on new patient groups, alternative data sources, or unfamiliar forms of cardiac disease, their performance collapses dramatically. This is referred to as a lack of generalization. For example, a model trained to identify ischemic heart disease at one hospital may fail to detect hypertrophic cardiomyopathy at another because of differences in ECG signal patterns, patient demographics, imaging protocols, and noise.
Moreover, current models are usually expensive to adapt, as they tend to require extensive retraining for each new cardiac variant or dataset. This retraining is resource-intensive and time-consuming, and it is typically impractical in real clinical environments where data are scarce or unannotated. Hospitals that see only infrequent or rare cardiac cases may therefore find it difficult to adopt AI systems that demand large-scale retraining or data labeling.
In addition, present solutions lack adaptive learning mechanisms that would allow a model to adjust rapidly to novel disease patterns with limited supervision. Consequently, healthcare providers face a dilemma: either apply AI models only under the narrow conditions in which they perform well, or incur substantial costs to retrain models for each new clinical setting or disease variant.
Hence, the missing component that this invention aims to provide is a deep learning system that generalizes effectively across various cardiac disease subtypes and healthcare settings while requiring only sparing retraining to retain high diagnostic precision. This calls for a meta-learning-infused framework that learns to adapt flexibly to new tasks from small amounts of data, a key attribute of reliable, scalable, and clinically deployable AI in cardiology.

EXISTING SOLUTIONS
DL models have become popular over the past few years and dominate the detection and classification of cardiac diseases from data such as electrocardiograms (ECGs), echocardiograms, cardiac MRIs, and electronic health records (EHRs). Such solutions are usually based on supervised learning, which consists of training models on large, labeled datasets to identify the patterns associated with particular heart conditions. Although such systems have proven highly accurate in carefully controlled research setups, they are inherently weak at generalizing to new or different variants of a disease, different patients, and different clinical settings.

1. Convolutional Neural Networks (CNNs) for ECG and Imaging
CNNs have been predominant in analyzing the spatial features of ECG waveforms and cardiac imaging data. Models trained on datasets such as PTB-XL or PhysioNet have demonstrated high accuracy in detecting conditions such as myocardial infarction, atrial fibrillation, and bundle branch blocks. Nevertheless, these models are usually trained on a small number of disease classes at specific institutions. Their performance degrades when applied to unseen data from other hospitals or populations, indicating poor generalization and overfitting to the training domain.

2. Recurrent Neural Networks (RNNs) and LSTMs for Temporal Dynamics
Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) have been used to model temporal correlations in ECG signals and time-series EHR data. These models are especially useful for detecting rhythm-based disorders and tracking disease progression. Nevertheless, like CNNs, they are critically sensitive to the distribution of the training data and perform poorly on unseen or uncommon forms of cardiac disease.

3. Transformer-Based Models
Transformer-based architectures are a recent success and have been applied to sequential and multi-modal healthcare data. Pretrained models such as CardioBERT and other self-attention-based systems have been applied to diagnosis prediction on ECGs and textual EHR data. Although these models are powerful, they often require immense amounts of data for pretraining, and their generalization capacity remains limited by the similarity between training and test domains.

4. Transfer Learning and Fine-Tuning
A typical method for dealing with domain shift is to fine-tune a model trained on a large source dataset using a small target dataset from another disease manifestation or population. Although this approach is useful for adapting models to new data, it still requires labeled samples and retraining, which is often not feasible for resource-limited hospitals or infrequent rare-disease cases. Moreover, transfer learning does not guarantee good generalization to other unseen domains.

5. Ensemble Learning
Ensemble methods sometimes combine the predictions of several models in an attempt to obtain greater robustness. Nonetheless, they come at a substantial computational cost and fail to address the underlying issue of domain generalization. They also demand numerous models trained on various data subsets, which is inefficient in clinical practice.

6. Domain Adaptation Techniques
More recent developments have moved toward domain adaptation frameworks such as:
• Adversarial training (e.g., domain-adversarial neural networks)
• Feature-alignment losses (e.g., Maximum Mean Discrepancy (MMD), CORAL)
• Domain-invariant representation learning
These techniques share a common problem: although they improve performance across domains, they usually require access to both source and target domain data during training, which is impractical because of privacy constraints and data silos.

7. Common Pitfalls in Generalization and Adaptability
Across all of these methods, the following challenges recur:
• Models break when they encounter new cardiac disease variants that were not seen during training.
• Adapting to new environments demands extensive retraining, which is time-consuming and data-intensive.
• The absence of interpretability and adaptability constrains clinical implementation, particularly for rare conditions or resource-poor hospitals.
• Performance decreases significantly under demographic differences (e.g., age, ethnicity) and differences in data format (e.g., ECGs from different vendors).

Preamble
The present invention lies in the domain of artificial intelligence (AI) and deep learning in medicine, specifically addressing the problem of generalization in cardiac disease diagnosis. More precisely, the invention proposes a meta-learning-based deep learning framework that enables fast adaptation and resilient performance of diagnostic models across a wide range of cardiac disorder variants. Traditional deep learning models are usually trained on a single dataset or disease variant and perform poorly when used to classify other populations or new variants of the disease because of domain shifts and differences in the data. By employing meta-learning methods, the inventive system generalizes across different subtypes of cardiac disease (e.g., ischemic, hypertrophic, and arrhythmic conditions) with minimal retraining, making it markedly more scalable, adaptable, and accurate across a wide range of clinical settings. The system can be used with any combination of cardiac diagnostic modalities, such as ECG, MRI, and electronic health records (EHR), and is intended for deployment in real-world, multi-institutional healthcare environments with heterogeneous data sources and diverse disease presentations.

Methodology
Generalizing deep learning models across variants of cardiac disease involves a number of steps, starting with data preprocessing and ending with model training and assessment. The method overcomes the weakness of traditional deep learning models, which generalize poorly to cardiac disease variants across clinical settings. The proposed solution uses meta-learning so that new tasks can be adapted to with little data, ensuring that the model accommodates the diversity found in real-world healthcare settings.

Data Collection and Preprocessing
The first step of the methodology is the collection of diverse cardiac datasets. These datasets contain different forms of cardiac data, including electrocardiograms (ECG), cardiac MRI, and electronic health records (EHR). They should cover the full range of cardiac diseases, including ischemic heart disease, hypertrophic cardiomyopathy, arrhythmogenic right ventricular dysplasia, congenital defects, and others. Each data modality has its own preprocessing steps:
• ECG recordings are denoised and normalized to ensure consistent signals.
• MRI data are resized and aligned so that images have consistent resolution and orientation.
• EHR information is normalized and encoded, and missing values are imputed.
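As one illustration of such modality-specific preprocessing, the sketch below (our own, not taken from the specification) shows a simple moving-average denoise plus z-score normalization for a single ECG lead, and column-wise mean imputation for missing EHR values. Real pipelines would typically use a band-pass filter for ECG and a more careful imputation strategy.

```python
import numpy as np

def preprocess_ecg(signal):
    """Smooth one ECG lead with a moving average, then z-score normalize.

    Illustrative only: production code would band-pass filter instead.
    """
    kernel = np.ones(5) / 5.0                        # 5-sample smoothing window
    denoised = np.convolve(signal, kernel, mode="same")
    return (denoised - denoised.mean()) / (denoised.std() + 1e-8)

def impute_ehr(records):
    """Column-wise mean imputation for missing (NaN) EHR values."""
    records = np.asarray(records, dtype=float)
    col_means = np.nanmean(records, axis=0)          # per-feature means, ignoring NaNs
    rows, cols = np.where(np.isnan(records))
    records[rows, cols] = col_means[cols]            # fill each gap with its column mean
    return records
```

After these steps every modality is on a comparable numeric scale, which is what the shared backbone described later assumes.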

After preprocessing, the dataset is organized and split into training, validation, and test sets. The next step is extracting the features to be fed into the deep learning model.

Model Architecture Design
The proposed system has a meta-learning-based architecture that enables generalization over diverse cardiac diseases. The architecture is composed of several major components:
• Shared Backbone Network: The backbone network, for example a Convolutional Neural Network (CNN) or Transformer, learns domain-invariant features of the input data, such as ECG signals or MRI images. This shared backbone is trained on several cardiac datasets and learns the most significant patterns relevant to disease diagnosis.
• Task-Specific Adaptation Heads: The model has several task-specific heads following the backbone, each dedicated to a specific cardiac disease variant. For example, there is one head for ischemic heart disease, another for arrhythmogenic disorders, and so on. These heads refine the features produced by the backbone to generate disease-specific predictions.
• Meta-Learning Loop (MAML): A Model-Agnostic Meta-Learning (MAML) loop is used to make the model generalize. Under the MAML framework, the model is trained so that it can transfer its knowledge and achieve high performance on other tasks (disease variants) with only limited retraining. This meta-learning process enables the model to adjust to new disease variants using a small number of labeled samples.
• Generalization Loss: To prevent over-fitting to any single disease variant, a generalization loss term is added during training. This loss penalizes disparities between the different disease heads, encouraging the model to learn domain-invariant shared representations.
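The shared-backbone/multi-head layout described above can be sketched as follows. This is a minimal NumPy illustration with hypothetical layer sizes and head names (the patent does not fix dimensions), not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiBranchModel:
    """Shared backbone plus per-disease heads (illustrative sketch).

    The backbone maps raw features to a shared representation; each
    task-specific head produces class probabilities for one variant.
    """
    def __init__(self, in_dim, hidden, head_classes):
        self.W_b = rng.standard_normal((in_dim, hidden)) * 0.1   # backbone weights
        self.heads = {name: rng.standard_normal((hidden, k)) * 0.1
                      for name, k in head_classes.items()}       # one head per variant

    def forward(self, x, task):
        h = relu(x @ self.W_b)                 # domain-shared features
        return softmax(h @ self.heads[task])   # disease-specific prediction

# Hypothetical configuration: three disease heads with assumed class counts.
model = MultiBranchModel(in_dim=16, hidden=32,
                         head_classes={"IHD": 2, "HCM": 2, "ARVD": 3})
probs = model.forward(rng.standard_normal((4, 16)), task="ARVD")
```

Because only `self.heads[task]` differs between tasks, adapting to a new variant means attaching one new head while the backbone weights `W_b` stay shared.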

Training Strategy
Training proceeds through meta-learning, so that the model can adjust quickly to new diseases or datasets. Two training loops are used:
• Inner Loop (task-specific adaptation): In each iteration, the model is adapted to a particular cardiac disease variant using a limited number of labeled data points from that task. These inner-loop weight updates specialize the model to the task at hand.
• Outer Loop (meta-optimization): The outer loop optimizes the shared model parameters so that they generalize across various cardiac conditions. The meta-optimization process ensures that the model can respond to new diseases or patient groups on short timelines even with little data.
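The two loops can be illustrated with a first-order approximation of MAML on a toy linear-regression problem. This is a simplification we introduce for clarity (full MAML also differentiates through the inner-loop updates); each sampled task stands in for one "disease variant":

```python
import numpy as np

rng = np.random.default_rng(42)
BASE = np.array([1.0, -2.0, 0.5])   # hypothetical shared structure across tasks

def loss_and_grad(w, X, y):
    """Mean squared error of a linear model y_hat = X @ w, and its gradient."""
    err = X @ w - y
    return float(np.mean(err ** 2)), 2.0 * X.T @ err / len(y)

def sample_task():
    """Toy stand-in for a 'disease variant': a linear task near BASE."""
    w_true = BASE + 0.1 * rng.standard_normal(3)
    X = rng.standard_normal((10, 3))
    y = X @ w_true
    return X[:5], y[:5], X[5:], y[5:]   # support (inner) / query (outer) split

def maml_train(meta_steps=300, inner_lr=0.05, outer_lr=0.05, inner_steps=3):
    w = np.zeros(3)                     # meta-initialization (shared parameters)
    for _ in range(meta_steps):
        Xs, ys, Xq, yq = sample_task()
        w_task = w.copy()
        for _ in range(inner_steps):    # inner loop: task-specific adaptation
            _, g = loss_and_grad(w_task, Xs, ys)
            w_task -= inner_lr * g
        # outer loop (first-order MAML): the query-set gradient evaluated at
        # the adapted parameters updates the shared meta-initialization
        _, g_q = loss_and_grad(w_task, Xq, yq)
        w -= outer_lr * g_q
    return w
```

After meta-training, a few inner-loop steps on a handful of support samples from a fresh task already give low query error, which is exactly the few-shot behavior the method relies on.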

Assessment Plan
Once the model has been trained, it is assessed using the following key measures:
• Accuracy: The percentage of correct predictions across all expected disease types.
• F1-Score: A combined estimate of precision and recall, important when the data are imbalanced.
• Cross-Domain Robustness Index (CDRI): An index used to evaluate how well the model generalizes to novel datasets and disease variants.
• Few-Shot Adaptability Score (FSAS): A score that evaluates how well the model adapts to a new disease variant from only a few labeled instances.
Generalization ability is tested on external data and on disease types that the model did not encounter during training.
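Accuracy and F1 are standard and can be computed as below. CDRI and FSAS are bespoke to this invention and their exact formulas are not given in the specification, so the `cdri` helper uses an assumed definition (mean out-of-domain accuracy relative to in-domain accuracy) purely for illustration:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for one positive class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def cdri(in_domain_acc, out_domain_accs):
    """ASSUMED CDRI: mean out-of-domain accuracy divided by in-domain
    accuracy. The patent does not disclose the actual formula."""
    return float(np.mean(out_domain_accs) / in_domain_acc)
```

A value of `cdri` near 1.0 would then mean performance barely drops on unseen domains, matching the way the index is interpreted in the results below.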

Deployment and Inference
The trained and validated model can then be deployed in clinics to aid in the diagnosis of cardiac diseases. Deployment supports:
• Real-Time Data Input: The model accepts a patient's ECG, MRI, or EHR data and produces predictions for one or several cardiac disease variants.
• Task-Specific Inference: The disease-specific heads produce risk measures, categories, or forecasts from the derived features.
• Few-Shot Learning for New Variants: When a new disease variant appears, the model can be updated with only a small amount of data, restricted to the particular head associated with the disease that needs to be fine-tuned.
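Few-shot adaptation restricted to a single new head, with the backbone frozen, might look like the following sketch. The softmax head, learning rate, and step count are our own hypothetical choices, used only to show that updates can touch the new head's weights alone:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adapt_new_head(backbone_features, labels, n_classes, lr=0.5, steps=50):
    """Attach and train a fresh softmax head on frozen backbone features.

    Only the new head's weights W are updated; backbone_features are
    treated as fixed, i.e. the shared backbone stays frozen.
    """
    n, d = backbone_features.shape
    W = np.zeros((d, n_classes))                     # new disease-specific head
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        probs = softmax(backbone_features @ W)
        grad = backbone_features.T @ (probs - onehot) / n   # cross-entropy gradient
        W -= lr * grad
    return W
```

Because only one small weight matrix is trained, this kind of update is cheap enough to run at a clinical site with a handful of labeled examples.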


Figure 1. Model Architecture

Figure 2. Model Deployment Workflow
The meta-learning-based deep learning framework demonstrated in this invention provides a scalable and powerful way to diagnose diverse cardiac diseases with little retraining. The shared backbone architecture allows the model to serve as a generalist, while the task-specific adaptation heads specialize it to individual disease types. The meta-learning loop (MAML) ensures that the model can quickly accommodate novel diseases using only a small amount of labeled data, making the methodology practical for clinical settings. By overcoming the challenges of domain adaptation and generalization, this system represents a major step forward in AI-enabled healthcare, especially in cardiology.

Results
The performance of the proposed meta-learning-based deep learning model was evaluated against several benchmark metrics, including accuracy, F1-score, cross-domain robustness, and few-shot adaptability. The experiments covered a range of cardiac disease variants and used a dataset composed of ECG, MRI, and EHR data. Their main purpose was to test the model's ability to generalize and adapt across different datasets and previously unseen disease varieties, and to compare its results with baseline deep learning approaches (CNN, Transfer Learning).

A. Evaluation Measures
To evaluate the performance of the proposed model, the following metrics were applied:
• Accuracy: The percentage of correct predictions across all cardiac disease classes.
• F1-Score: The harmonic mean of precision and recall, which is effective when analyzing a dataset skewed by poorly represented disease classes.
• Cross-Domain Robustness Index (CDRI): A bespoke metric that measures effectiveness on different unseen datasets and cardiac disease variants.
• Few-Shot Adaptability Score (FSAS): A score that defines how well the model adapts to new disease variants with minimal labeled examples (e.g., 5-10 samples).

The results of the model were contrasted with two comparative approaches:
• Standard CNN: A vanilla deep learning model trained on one disease variant at a time.
• Transfer Learning (TL): A model pretrained on an initial labeled dataset and retrained on a second dataset with a different disease variant.

Comparison of Performance
To assess the performance of the Proposed Meta-Learning Model, it was applied to three disease variants: Ischemic Heart Disease (IHD), Hypertrophic Cardiomyopathy (HCM), and Arrhythmogenic Right Ventricular Dysplasia (ARVD). The evaluation metrics are shown in the following tables.

Table 1: Accuracy Comparison
Model | IHD | HCM | ARVD | Mean Accuracy
Proposed Meta-Learning Model | 96.20% | 94.50% | 95.30% | 95.30%
CNN (Baseline) | 89.40% | 87.60% | 88.10% | 88.40%
Transfer Learning (TL) | 92.30% | 91.20% | 90.50% | 91.30%


Figure 2: Accuracy comparison across models

The accuracy results of the proposed model, CNN, and Transfer Learning (TL) indicate the power of the meta-learning approach. The Proposed Meta-Learning Model performs substantially better than the other two models across all disease variants.

The Proposed Meta-Learning Model achieves a mean accuracy of 95.3%, compared with 88.4% for the CNN baseline and 91.3% for Transfer Learning. This performance suggests that the meta-learning approach is capable of cross-disease generalization.

Table 2: F1-Score Comparison
Model | IHD | HCM | ARVD | Mean F1-Score
Proposed Meta-Learning Model | 0.957 | 0.945 | 0.953 | 0.951
CNN (Baseline) | 0.878 | 0.852 | 0.868 | 0.866
Transfer Learning (TL) | 0.912 | 0.898 | 0.890 | 0.900

The F1-Score comparison in the table indicates each model's success in balancing precision and recall per disease variant. The Proposed Meta-Learning Model is consistently superior to the CNN and Transfer Learning models.

Figure 3: F1-Score comparison

According to the graph, the Proposed Meta-Learning Model outperforms the CNN baseline and Transfer Learning models on all disease variants, achieving better precision and recall in cardiac disease classification.

Table 3: Cross-Domain Robustness Index (CDRI)
Model | CDRI
Proposed Meta-Learning Model | 0.92
CNN (Baseline) | 0.71
Transfer Learning (TL) | 0.83

Figure 4: Cross-Domain Robustness Index (CDRI)

The Cross-Domain Robustness Index (CDRI) evaluates how well the model performs on other datasets and other varieties of cardiac disease. The Proposed Meta-Learning Model is considerably more competent than the CNN and Transfer Learning methods.

The CDRI value of 0.92 for the Proposed Meta-Learning Model indicates that even when tested on unseen data or disease variations, its performance does not degrade substantially.

Table 4: Few-Shot Adaptability Score (FSAS)
Model | FSAS
Proposed Meta-Learning Model | 0.89
CNN (Baseline) | 0.61
Transfer Learning (TL) | 0.76


Figure 5: Few-Shot Adaptability Score (FSAS)

The Few-Shot Adaptability Score (FSAS) evaluates how well the model can adjust to a new disease using just a few labeled records. The Proposed Meta-Learning Model shows remarkable results in few-shot learning cases.

The Few-Shot Adaptability Score of 0.89 shows that the Proposed Meta-Learning Model is capable of adapting to unfamiliar diseases with very limited information and can therefore be applied in situations where few labeled instances are available.

Overall, the meta-learning-based deep learning framework showed substantial improvement over traditional deep learning models on several fronts:
• Accuracy: A mean accuracy of 95.3%, far above the CNN baseline (88.4%) and Transfer Learning (91.3%).
• F1-Score: A mean F1-Score of 0.951, indicating the right balance of precision and recall, especially on imbalanced cardiac datasets.
• Cross-Domain Robustness: A CDRI of 0.92, indicating robust generalization under shifts in data and clinical settings.
• Few-Shot Adaptability: An FSAS of 0.89, meaning the model can be trained on new cardiac variants with very little data.
These findings affirm that the meta-learning-based solution is not only highly accurate but also efficient and scalable, making it suitable for real-life clinical settings where diverse data and novel diseases demand versatile AI solutions.

Discussion
The experimental findings for the meta-learning-based deep learning approach described in this patent reveal excellent performance in meeting the inherent challenge of generalizing across multiple variants of cardiac disease. Conventional deep neural networks, although profoundly efficient on specific databases and disease types, cannot be applied to novel, unseen disease variants or patient groups because of domain divergence and overfitting to the training set. The experimental outcomes prove that the meta-learning construct is an effective tool for enhancing the generalization of AI models in healthcare.

• Cross-Disease-Variant Generalization
A major result of the present invention is that the Proposed Meta-Learning Model consistently outperformed the comparison (control) CNN baseline and Transfer Learning (TL) models in accuracy and F1-score on a variety of cardiac disorders, including Ischemic Heart Disease (IHD), Hypertrophic Cardiomyopathy (HCM), and Arrhythmogenic Right Ventricular Dysplasia (ARVD). The mean accuracy of 95.3% demonstrated by the meta-learning model underlines that it can adjust to many different cardiac conditions and diagnose them with high accuracy without being broadly re-trained every time a new disease variant is identified. Compared with the CNN baseline (88.4%) and TL models (91.3%), this is a great leap, showing that meta-learning allows the model to learn generalizable features applicable to other cardiac conditions.

• Response to New Data and Disease Variants
The Few-Shot Adaptability Score (FSAS) also underlines that the meta-learning model learns to respond to new cardiac diseases promptly even when labeled data are minimal. With an FSAS of 0.89, the proposed model is likely to be highly usable with only 5-10 labeled samples of a new disease. This is far better than the baseline CNN model (FSAS of 0.61), which would require much more labeled data to obtain similar performance. This flexibility is especially important in clinical practice, where new disease variants can appear or rare conditions may arise that are not well represented in the training data. The Proposed Meta-Learning Model thus offers an elastic and expandable framework for real-time diagnosis of cardiac diseases, even when labeled data for new disease variants are limited and difficult to access.

• Cross-Domain Robustness
The Cross-Domain Robustness Index (CDRI) predicts how the model will perform on external datasets representing other clinical environments or institutions. The meta-learning model achieved a CDRI of 0.92, indicating good cross-domain capability. This shows that the model can continue producing high diagnostic accuracy even when applied to new, unexposed datasets with diverse attributes such as changes in data-gathering approach, patient population, or hospital protocols. In contrast, the CNN (CDRI 0.71) and Transfer Learning (CDRI 0.83) models show considerably lower robustness when deployed on datasets they were not trained on. The robustness of the meta-learning model across a range of clinical backgrounds strengthens its viability for implementation in real-world healthcare environments where the data are diverse and inconsistent.

• Clinical Impact and Implications
The ability of the proposed model to generalize across different forms of cardiac disease with little retraining bodes well for its application in clinical settings. In practice, hospitals and health systems tend to have access to only small or domain-specific data, which makes it hard to develop successful and generalizable AI models. The opportunity this invention affords is that AI models developed through meta-learning can be quickly adapted to new diseases and patient groups using little additional data, reducing the burden of retraining a new model. This is particularly useful in low-resource environments where clinical personnel may lack access to large labeled datasets or the compute resources required to retrain models for each new form of a disease.

In addition, the few-shot learning ability allows uncommon conditions or novel cardiac findings to be handled with relatively small amounts of data labeling. This shortens the time needed to deploy and upgrade models. The approach is also scalable, which matters for implementation in large healthcare systems where dependable solutions are the priority.

• Limitations and Future Studies
Although the outcomes of the meta-learning-based method are promising, there are limitations that must be addressed before applying the approach in a broader clinical sense. Among the major constraints is the need for high-quality data to train the shared backbone network. More specifically, diseases with very little available data may still be challenging for the model, even with a meta-learning technique. Also, although the Few-Shot Adaptability Score is good, further studies are warranted to advance the model's capabilities in managing extreme data sparsity for some rare heart-related ailments.

Another area for improvement is interpretability. Although the model is very accurate, clinicians applying it in the medical field will always need transparent explanations of an AI system's decisions. The next stage of research might be devoted to adding explainability, e.g., saliency maps or attention visualizations, to help clinicians grasp the model's rationale for its predictions.

Conclusion
This invention proposes a meta-learning-based deep learning architecture that eliminates the shortcomings of existing deep learning models in detecting multi-variant heart diseases. Through the meta-learning methodology, namely the Model-Agnostic Meta-Learning (MAML) method, the proposed model can quickly learn new variants of cardiac disease using very little labeled data. This is an important improvement on current techniques, which require extensive retraining on new datasets and tend to overfit to domain-specific data.

The Proposed Meta-Learning Model demonstrates strong performance on the critical measures of accuracy, F1-score, cross-domain robustness, and few-shot adaptability. The model has been shown to apply successfully to different cardiac diseases, such as Ischemic Heart Disease (IHD), Hypertrophic Cardiomyopathy (HCM), and Arrhythmogenic Right Ventricular Dysplasia (ARVD). Notably, it performs well on unseen datasets and disease variants, demonstrating its capacity to generalize across clinical settings with diverse patient populations and data sources.

The Few-Shot Adaptability of the model deserves special mention: it shows that the model can generalize to new diseases or new patient populations with only 5-10 labeled examples. This renders the model scalable and efficient, offering a feasible solution for real-world clinical use where new diseases may appear or data are limited.

The meta-learning method not only decreases the need for complex retraining but also provides a scalable architecture that can be implemented in various healthcare facilities at low computational cost. Moreover, the cross-domain robustness of the model guarantees that it will perform consistently even in a different clinical setting with a heterogeneous collection of data.

In summary, this invention represents a significant step forward in AI-based diagnostics for cardiovascular disease, providing a robust, versatile, and broadly reusable way of addressing one of the most pressing challenges in healthcare: generalizing over diverse and unseen disease variants. As implemented with meta-learning, the model can evolve as new disease variants and patient populations emerge, positioning it as an effective real-time decision-support and diagnostic tool in cardiology. This invention opens new directions for AI solutions in healthcare systems worldwide, improving patient care and overall healthcare performance by reducing retraining and the time needed to adapt to new data.
Claims: 1. A deep learning system for generalizing the detection of cardiac disease across multiple disease variants, comprising:
• A shared backbone network that derives a set of domain-invariant features from input data, including ECG, MRI, and EHR modalities.
• Multiple task-specific adaptation heads, one per cardiac disease variant, that perform classification on the basis of the derived features.
• A meta-learning module based on Model-Agnostic Meta-Learning (MAML) that enables rapid adaptation to novel cardiac disease variants from scarce labelled datasets.
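The shared-backbone, multi-head structure claimed above can be sketched as follows. This is a minimal NumPy illustration with hypothetical layer sizes and randomly initialized weights, not the patented network; the `SharedBackbone` and `TaskHead` classes and all dimensions are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedBackbone:
    """Maps pre-extracted input features (e.g. ECG/MRI/EHR-derived vectors)
    to a shared, domain-invariant embedding."""
    def __init__(self, in_dim, emb_dim):
        self.W = rng.normal(0, 0.1, (in_dim, emb_dim))
    def forward(self, x):
        return np.maximum(x @ self.W, 0.0)  # linear layer + ReLU

class TaskHead:
    """One classification head per cardiac disease variant."""
    def __init__(self, emb_dim, n_classes):
        self.W = rng.normal(0, 0.1, (emb_dim, n_classes))
    def forward(self, z):
        logits = z @ self.W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # softmax class probabilities

backbone = SharedBackbone(in_dim=32, emb_dim=16)
heads = {v: TaskHead(16, 2) for v in ("IHD", "HCM", "ARVD")}

x = rng.normal(size=(4, 32))     # batch of 4 hypothetical feature vectors
z = backbone.forward(x)          # embedding shared by all variant heads
probs = heads["HCM"].forward(z)  # variant-specific prediction
```

The design choice reflected here is that all heads consume the same embedding, so only the small head needs adaptation when a new variant appears.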

2. The system of Claim 1, wherein the backbone network is a Convolutional Neural Network (CNN) or a Transformer network trained on multi-modal cardiac data to learn shared features common to all disease variants.

3. The system of Claim 1, wherein each task-specific adaptation head is trained on a classification task for a cardiac disease, including ischemic heart disease, hypertrophic cardiomyopathy, arrhythmogenic right ventricular dysplasia, and congenital heart defects.

4. A meta-learning method of training the deep learning system of Claim 1, comprising the following steps:
• Training the model with episodic meta-learning, where each task corresponds to a distinct cardiac disease variant.
• Performing inner-loop training per disease variant on a small labelled dataset for that variant.
• Updating the shared parameters via an outer-loop meta-optimization procedure to guarantee rapid adaptation to new tasks from little data.
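The inner/outer-loop training steps above can be sketched with a toy example. This is a first-order MAML (FOMAML) approximation on synthetic 1-D regression "variants", in NumPy, offered only as an illustration under stated assumptions: the real invention would use the full MAML update on the claimed network, and `sample_task`, the learning rates, and the linear model are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w, X, y):
    """MSE loss and its gradient for a linear model y_hat = X @ w."""
    err = X @ w - y
    return (err ** 2).mean(), 2 * X.T @ err / len(y)

def sample_task():
    """A toy 'disease variant': 1-D regression with a variant-specific slope."""
    slope = rng.uniform(-2, 2)
    def batch(n):
        X = rng.normal(size=(n, 1))
        return X, slope * X[:, 0]
    return batch

w_meta = np.zeros(1)                 # shared, meta-learned parameters
inner_lr, outer_lr = 0.1, 0.05

for _ in range(300):                 # outer loop: episodic meta-optimization
    batch = sample_task()            # one episode = one disease variant
    X_s, y_s = batch(5)              # small labelled support set
    X_q, y_q = batch(10)             # query set from the same variant
    _, g_s = loss_grad(w_meta, X_s, y_s)
    w_task = w_meta - inner_lr * g_s            # inner-loop adaptation step
    _, g_q = loss_grad(w_task, X_q, y_q)
    w_meta = w_meta - outer_lr * g_q            # first-order meta-update (FOMAML)
```

After meta-training, a single inner-loop step on a few labelled examples of an unseen variant already reduces its loss, which is the behaviour the claim relies on.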

5. The method of Claim 4, wherein the meta-learning loop includes a generalization loss that encourages domain-invariant representations while preserving disease-specific characteristics across variants.

6. The system of Claim 1, further comprising a few-shot learning module that enables the model to generalize to new disease variants or populations with only 5 to 10 labelled examples, thereby removing the need for large annotated datasets.

7. The system of Claim 1, wherein the task-specific adaptation heads produce disease-specific predictions, such as risk magnitude, disease classification, or disease-progression assessment, for each cardiac disease variant.

8. A method of deploying the system of Claim 1 in a clinical setting, wherein the model can:
• Adapt to unseen cardiac disease variants with minimal data.
• Generalize across diverse hospitals, clinical situations, patient demographics, and disparate data characteristics.
• Provide provisional diagnostic decision support to clinicians through accurate, real-time disease predictions.

Documents

Application Documents

# Name Date
1 202541073571-STATEMENT OF UNDERTAKING (FORM 3) [01-08-2025(online)].pdf 2025-08-01
2 202541073571-REQUEST FOR EARLY PUBLICATION(FORM-9) [01-08-2025(online)].pdf 2025-08-01
3 202541073571-FORM-9 [01-08-2025(online)].pdf 2025-08-01
4 202541073571-FORM FOR SMALL ENTITY(FORM-28) [01-08-2025(online)].pdf 2025-08-01
5 202541073571-FORM 1 [01-08-2025(online)].pdf 2025-08-01
6 202541073571-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [01-08-2025(online)].pdf 2025-08-01
7 202541073571-EVIDENCE FOR REGISTRATION UNDER SSI [01-08-2025(online)].pdf 2025-08-01
8 202541073571-EDUCATIONAL INSTITUTION(S) [01-08-2025(online)].pdf 2025-08-01
9 202541073571-DECLARATION OF INVENTORSHIP (FORM 5) [01-08-2025(online)].pdf 2025-08-01
10 202541073571-COMPLETE SPECIFICATION [01-08-2025(online)].pdf 2025-08-01