
A Deep Learning Based System For The Early Prediction Of Dementia In Alzheimer’s Disease

Abstract: Disclosed herein is a deep learning-based system for the early prediction of dementia in Alzheimer’s disease (100) comprising a data acquisition module (102) configured to receive and integrate multi-modal medical data. The system also includes a hybrid deep learning engine (104) comprising a convolutional neural network (CNN) and a sequential learning network selected from a long short-term memory (LSTM) network or a Transformer network. The system also includes a multi-modal fusion module (106) configured to combine spatial and temporal features into a unified predictive representation. The system also includes an interpretability module (108) configured to generate clinically relevant outputs including risk scores, attention maps, and feature relevance indicators to enable explainable predictions and improve clinical decision-making. The system also includes a healthcare integration interface (110) configured to provide the predictive results to electronic medical record (EMR) systems, clinical dashboards, or healthcare practitioners in real time.


Patent Information

Filing Date
30 September 2025
Publication Number
44/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. MRS. SRUTHI THOTA
RESEARCH SCHOLAR, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. N. SHARMILA BANU
ASSISTANT DEAN (RESEARCH) & ASSISTANT PROFESSOR (CS&AI), SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:
FIELD OF DISCLOSURE
[0001] The present disclosure relates generally to the field of medical diagnostics and artificial intelligence. More specifically, it pertains to a deep learning-based system for the early prediction of dementia in Alzheimer’s disease.
BACKGROUND OF THE DISCLOSURE
[0002] Alzheimer’s disease (AD) represents one of the most pressing neurological disorders of the modern age, profoundly affecting individuals, families, and healthcare systems across the globe. Characterized as a progressive neurodegenerative disorder, AD leads to the deterioration of memory, cognition, and functional independence. Dementia, which encompasses the clinical syndrome resulting from Alzheimer’s and related disorders, remains the leading cause of dependency and disability among older adults worldwide. Since Alois Alzheimer’s seminal description of the disorder, substantial research efforts have been directed toward understanding its etiology, clinical manifestations, and potential therapeutic interventions. Despite these endeavors, AD continues to lack a definitive cure, and early detection remains one of the most critical factors in managing disease progression and improving quality of life.
[0003] The socio-economic consequences are staggering, as costs related to medical care, assisted living, and lost productivity exceed hundreds of billions annually. Beyond financial implications, the disease exerts an immeasurable toll on caregivers, often leading to psychological stress, depression, and burnout. These realities underscore the urgent need for methods that can enable clinicians to identify the disease at its earliest stages, when interventions are most effective at slowing cognitive decline.
[0004] Pathophysiologically, AD is characterized by hallmark features such as extracellular deposition of amyloid-beta plaques and intracellular accumulation of neurofibrillary tangles composed of hyperphosphorylated tau protein. These molecular changes lead to progressive synaptic dysfunction, neuronal death, and subsequent brain atrophy, particularly in regions critical for learning and memory, such as the hippocampus and cerebral cortex. Although these biomarkers can be identified through advanced imaging and cerebrospinal fluid (CSF) analysis, such diagnostic procedures are invasive, costly, or inaccessible in many regions. This has created a pressing demand for non-invasive, reliable, and accessible approaches that could facilitate early-stage detection before irreversible neuronal damage occurs.
[0005] Traditionally, clinicians have relied on neuropsychological assessments and clinical evaluations to diagnose Alzheimer’s-related dementia. Tools such as the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Clinical Dementia Rating (CDR) are widely employed to assess memory, orientation, attention, and executive function. While these instruments provide valuable insights into cognitive performance, they are often limited in sensitivity during the earliest preclinical or mild cognitive impairment (MCI) stages. Many individuals with early pathological changes remain undetected until symptoms become pronounced, thereby reducing the window of opportunity for therapeutic intervention. Furthermore, these cognitive tests can be influenced by factors such as education, cultural background, or coexisting medical conditions, leading to variability in diagnostic accuracy.
[0006] Medical imaging techniques, particularly structural and functional neuroimaging, have significantly advanced the understanding of Alzheimer’s disease progression. Magnetic resonance imaging (MRI) allows for the evaluation of brain atrophy, particularly in the hippocampal and cortical regions, while positron emission tomography (PET) provides insights into amyloid-beta deposition and glucose metabolism. Diffusion tensor imaging (DTI) and functional MRI (fMRI) offer additional perspectives into white matter integrity and connectivity changes associated with cognitive decline. However, despite their clinical utility, these imaging modalities are not without challenges. High costs, limited accessibility, radiation exposure in PET scans, and the need for specialized equipment restrict widespread adoption for routine screening. Consequently, their use is often confined to research settings or specialized diagnostic centers, leaving a gap in early, scalable, population-wide detection.
[0007] Alongside imaging, biomarker research has garnered substantial attention. The analysis of cerebrospinal fluid (CSF) biomarkers, including amyloid-beta 42, total tau, and phosphorylated tau, has demonstrated strong associations with disease pathology. Blood-based biomarkers are also emerging as potential non-invasive alternatives, offering the possibility of cost-effective and accessible diagnostic tools. Despite encouraging progress, variability in biomarker expression, assay standardization issues, and ethical considerations surrounding disclosure of biomarker results remain barriers to their universal clinical implementation.
[0008] Over the past two decades, computational models have been increasingly integrated into neuroscience and clinical research to address the limitations of traditional diagnostic tools. Advances in machine learning and artificial intelligence (AI) have enabled the processing of vast amounts of medical data, uncovering subtle patterns that may escape human observation. Early applications of machine learning in dementia research involved statistical models and conventional classifiers, such as support vector machines (SVMs), logistic regression, and decision trees, applied to imaging or cognitive datasets. These models demonstrated improved predictive capabilities compared to traditional methods, but often struggled with generalizability, scalability, and the capacity to integrate heterogeneous data sources.
[0009] Deep learning, as a subfield of artificial intelligence, has revolutionized the capacity of computational models to analyze complex data with high-dimensional features. By utilizing multi-layered neural networks inspired by the human brain, deep learning models can extract hierarchical representations from raw data, enabling the detection of subtle abnormalities in imaging scans, speech patterns, or clinical histories. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based architectures have been applied in Alzheimer’s research with promising results. For example, CNNs excel at analyzing MRI and PET images to identify structural changes, while RNNs capture temporal patterns in longitudinal cognitive testing data. These approaches offer the possibility of enhancing sensitivity to preclinical disease stages where traditional methods are insufficient.
[0010] Another key challenge in dementia research is the heterogeneity of disease progression. Patients may present with variable symptom onset, progression speed, and comorbidities. This variability complicates efforts to design universal diagnostic models. The integration of multimodal data—combining imaging, genetics, biomarkers, clinical notes, and behavioral information—has been suggested as a more holistic approach to prediction and classification. Deep learning methods, particularly those capable of handling heterogeneous inputs, provide a promising avenue for overcoming this challenge. Nonetheless, issues such as data privacy, limited sample sizes, and the need for large annotated datasets remain hurdles to implementation.
[0011] Ethical considerations also occupy a central role in dementia research and early prediction. Early diagnosis has potential benefits, such as enabling lifestyle modifications, enrollment in clinical trials, and planning for future care. However, premature or inaccurate predictions may lead to psychological distress, stigmatization, or unnecessary medical interventions. Ensuring transparency, interpretability, and fairness in AI-based models is therefore crucial in building trust among patients, clinicians, and regulatory bodies. Researchers continue to debate the appropriate balance between providing valuable predictive information and protecting individuals from unintended harm.
[0012] The progression of technological infrastructure has also contributed significantly to advancements in dementia prediction research. The expansion of electronic health records (EHRs), wearable sensors, and mobile health applications has generated unprecedented quantities of patient data. These data sources, if harnessed appropriately, could provide continuous, real-world insights into cognitive and functional decline outside of clinical settings. Speech recognition technologies, natural language processing (NLP), and passive monitoring of daily activities have all emerged as complementary approaches to traditional diagnostic tools. AI-based systems trained on such diverse datasets hold the promise of identifying subtle deviations in behavior and cognition that signal the earliest manifestations of dementia.
[0013] Despite the substantial progress made, there remain numerous limitations in the current landscape of Alzheimer’s disease prediction and management. High variability in study methodologies, limited reproducibility of results across different populations, and disparities in access to advanced diagnostic tools contribute to ongoing challenges. Additionally, while computational models show remarkable performance in controlled experimental settings, their translation into real-world clinical practice remains constrained by factors such as interpretability, integration with existing workflows, and regulatory approval processes.
[0014] The historical trajectory of dementia research highlights a gradual evolution from purely clinical assessments toward biomarker-driven and computational approaches. Initial emphasis was placed on observable symptoms and post-mortem pathology, followed by a growing reliance on imaging and biochemical analyses. In recent years, the focus has increasingly shifted toward predictive modeling and early detection, spurred by recognition that interventions are most beneficial before significant neuronal loss occurs. The convergence of neuroscience, biomedical engineering, and artificial intelligence has therefore emerged as a critical frontier in the battle against Alzheimer’s disease.
[0015] Ultimately, the background context of dementia and Alzheimer’s disease reveals a multifaceted landscape of challenges, opportunities, and unmet needs. The growing global burden, combined with limitations of existing diagnostic approaches, underscores the necessity for more effective and accessible early prediction tools. While advances in deep learning and computational neuroscience have shown promising directions, the integration of such technologies into routine practice remains at a developmental stage, requiring rigorous validation, ethical consideration, and clinical acceptance.
[0016] Thus, in light of the above-stated discussion, there exists a need for a deep learning-based system for the early prediction of dementia in Alzheimer’s disease.
SUMMARY OF THE DISCLOSURE
[0017] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0018] According to illustrative embodiments, the present disclosure focuses on a deep learning-based system for the early prediction of dementia in Alzheimer’s disease which overcomes the above-mentioned disadvantages and provides users with a useful or commercial choice.
[0019] An objective of the present disclosure is to integrate multimodal medical data, including clinical records, neuroimaging, and cognitive assessments, into the system for comprehensive prediction.
[0020] Another objective of the present disclosure is to design and develop a deep learning-based system that can accurately predict the early onset of dementia in patients with Alzheimer’s disease.
[0021] Another objective of the present disclosure is to reduce reliance on manual diagnostic interpretation by automating the early detection process through artificial intelligence.
[0022] Another objective of the present disclosure is to enhance the sensitivity and specificity of dementia prediction at early stages where clinical symptoms are subtle and often overlooked.
[0023] Another objective of the present disclosure is to minimize the time, cost, and variability associated with traditional diagnostic procedures using an efficient AI-driven approach.
[0024] Another objective of the present disclosure is to establish a scalable and adaptable framework that can be implemented in diverse healthcare settings for widespread applicability.
[0025] Another objective of the present disclosure is to assist clinicians in decision-making by providing reliable, data-driven insights into dementia risk assessment.
[0026] Another objective of the present disclosure is to improve patient outcomes through early intervention strategies enabled by timely and accurate predictions.
[0027] Another objective of the present disclosure is to ensure the system’s robustness by training and validating it on large, heterogeneous datasets representing diverse patient populations.
[0028] Yet another objective of the present disclosure is to contribute to advancing research in AI-driven healthcare by demonstrating the potential of deep learning models in neurological disorder prediction.
[0029] In light of the above, a deep learning-based system for the early prediction of dementia in Alzheimer’s disease comprises a data acquisition module configured to receive and integrate multi-modal medical data. The system also includes a hybrid deep learning engine comprising a convolutional neural network (CNN) and a sequential learning network selected from a long short-term memory (LSTM) network or a Transformer network. The system also includes a multi-modal fusion module configured to combine spatial and temporal features into a unified predictive representation. The system also includes an interpretability module configured to generate clinically relevant outputs including risk scores, attention maps, and feature relevance indicators to enable explainable predictions and improve clinical decision-making. The system also includes a healthcare integration interface configured to provide the predictive results to electronic medical record (EMR) systems, clinical dashboards, or healthcare practitioners in real time.
[0030] In one embodiment, the data acquisition module is further configured to preprocess medical data by performing normalization, noise reduction, and missing data imputation prior to model input.
[0031] In one embodiment, the convolutional neural network (CNN) of the hybrid deep learning engine is configured to extract spatial features from neuroimaging modalities including magnetic resonance imaging (MRI), positron emission tomography (PET), or computed tomography (CT).
[0032] In one embodiment, the sequential learning network of the hybrid deep learning engine is configured to capture temporal dependencies from longitudinal patient records, neurocognitive assessments, or speech and behavioral data.
[0033] In one embodiment, the hybrid deep learning engine is trained using a federated learning framework to preserve patient data privacy across multiple healthcare institutions.
[0034] In one embodiment, the multi-modal fusion module employs attention-based feature fusion techniques to assign adaptive weights to spatial and temporal features prior to predictive modeling.
[0035] In one embodiment, the interpretability module is configured to generate saliency maps, Grad-CAM visualizations, or Shapley value-based relevance scores to highlight clinically important biomarkers.
[0036] In one embodiment, the interpretability module is further configured to provide uncertainty quantification measures to indicate the confidence level of predictions.
[0037] In one embodiment, the healthcare integration interface is configured to deliver predictive results in standardized medical formats for compatibility with electronic medical record (EMR) systems.
[0038] In one embodiment, the healthcare integration interface further supports real-time alerts and notifications for clinicians to facilitate proactive interventions.
[0039] These and other advantages will be apparent from the present application of the embodiments described herein.
[0040] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0041] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0042] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0043] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0044] FIG. 1 illustrates a flowchart outlining the sequential steps involved in a deep learning-based system for the early prediction of dementia in Alzheimer’s disease, in accordance with an exemplary embodiment of the present disclosure;
[0045] FIG. 2 illustrates a block diagram of a deep learning-based system for the early prediction of dementia in Alzheimer’s disease, in accordance with an exemplary embodiment of the present disclosure.
[0046] Like reference numerals refer to like parts throughout the description of the several views of the drawings;
[0047] In the deep learning-based system for the early prediction of dementia in Alzheimer’s disease, like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0048] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0049] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0050] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0051] The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0052] The terms “having”, “comprising”, “including”, and variations thereof signify the presence of a component.
[0053] Referring now to FIG. 1 and FIG. 2, various exemplary embodiments of the present disclosure are described. FIG. 1 illustrates a flowchart outlining the sequential steps involved in a deep learning-based system for the early prediction of dementia in Alzheimer’s disease, in accordance with an exemplary embodiment of the present disclosure.
[0054] A deep learning-based system for the early prediction of dementia in Alzheimer’s disease 100 comprises a data acquisition module 102 configured to receive and integrate multi-modal medical data. The data acquisition module 102 is further configured to preprocess medical data by performing normalization, noise reduction, and missing data imputation prior to model input.
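By way of a non-limiting, illustrative example, the preprocessing performed by the data acquisition module 102 (normalization and missing data imputation) may be sketched as follows; the MMSE values shown are hypothetical sample data, not values from the disclosure.

```python
# Illustrative sketch of preprocessing in the data acquisition module:
# mean imputation of missing values (NaN) followed by z-score
# normalization. The feature values below are hypothetical.
import math

def impute_and_normalize(column):
    """Replace NaN with the column mean, then z-score normalize."""
    observed = [v for v in column if not math.isnan(v)]
    mean = sum(observed) / len(observed)
    filled = [mean if math.isnan(v) else v for v in column]
    var = sum((v - mean) ** 2 for v in filled) / len(filled)
    std = math.sqrt(var) or 1.0  # guard against zero variance
    return [(v - mean) / std for v in filled]

# Example: a cognitive-score column with one missing entry.
mmse = [28.0, 24.0, float("nan"), 20.0]
print(impute_and_normalize(mmse))
```

Noise reduction (e.g., filtering of imaging artifacts) would be modality-specific and is omitted from this sketch.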
[0055] The system also includes a hybrid deep learning engine 104 comprising a convolutional neural network (CNN) and a sequential learning network selected from a long short-term memory (LSTM) network or a Transformer network. The convolutional neural network (CNN) of the hybrid deep learning engine 104 is configured to extract spatial features from neuroimaging modalities including magnetic resonance imaging (MRI), positron emission tomography (PET), or computed tomography (CT). The sequential learning network of the hybrid deep learning engine 104 is configured to capture temporal dependencies from longitudinal patient records, neurocognitive assessments, or speech and behavioral data. The hybrid deep learning engine 104 is trained using a federated learning framework to preserve patient data privacy across multiple healthcare institutions.
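As a non-limiting illustration of the federated training referred to above, the federated averaging (FedAvg) step, in which only model weights, never raw patient data, leave each institution, may be sketched as follows. The weight vectors and site sizes are placeholders, not the engine's actual parameters.

```python
# Minimal sketch of federated averaging (FedAvg): each healthcare
# site trains locally, and a coordinator averages the resulting model
# weights, weighted by each site's dataset size. No raw patient data
# is exchanged. All numbers below are illustrative.

def federated_average(site_weights, site_sizes):
    """Average per-site weight vectors, weighted by dataset size."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(site_weights, site_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * size / total
    return merged

# Two hypothetical hospitals with 300 and 100 local records.
print(federated_average([[1.0, 2.0], [5.0, 6.0]], [300, 100]))
```

In a full federated round, this averaging alternates with local gradient updates at each site; only the aggregation step is shown here.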
[0056] The system also includes a multi-modal fusion module 106 configured to combine spatial and temporal features into a unified predictive representation. The multi-modal fusion module 106 employs attention-based feature fusion techniques to assign adaptive weights to spatial and temporal features prior to predictive modeling.
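By way of a non-limiting example, the attention-based feature fusion employed by the multi-modal fusion module 106 may be sketched as a softmax over per-modality relevance scores; the scores and feature values below are illustrative placeholders.

```python
# Sketch of attention-based fusion: softmax over learned relevance
# scores yields adaptive weights for the spatial (CNN) and temporal
# (LSTM/Transformer) feature vectors, which are then combined into a
# single representation. Score values here are placeholders.
import math

def attention_fuse(feature_vectors, scores):
    """Weight each modality's features by softmax(scores) and sum."""
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    weights = [e / total for e in exp]
    fused = [0.0] * len(feature_vectors[0])
    for w, vec in zip(weights, feature_vectors):
        for i, v in enumerate(vec):
            fused[i] += w * v
    return weights, fused

spatial = [0.2, 0.8]    # e.g., CNN features from MRI (illustrative)
temporal = [0.6, 0.4]   # e.g., LSTM features from cognitive scores
weights, fused = attention_fuse([spatial, temporal], [1.0, 1.0])
print(weights, fused)   # equal scores give equal weights of 0.5
```

In a trained system the relevance scores would themselves be produced by a learned layer, so the weights adapt per patient rather than being fixed.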
[0057] The system also includes an interpretability module 108 configured to generate clinically relevant outputs including risk scores, attention maps, and feature relevance indicators to enable explainable predictions and improve clinical decision-making. The interpretability module 108 is configured to generate saliency maps, Grad-CAM visualizations, or Shapley value-based relevance scores to highlight clinically important biomarkers. The interpretability module 108 is further configured to provide uncertainty quantification measures to indicate the confidence level of predictions.
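As a non-limiting illustration of the uncertainty quantification provided by the interpretability module 108, one common approach is to ensemble several predictions and report their spread and predictive entropy; the probabilities below are hypothetical.

```python
# Sketch of uncertainty quantification via ensembling: the spread of
# an ensemble's risk predictions (standard deviation) and the binary
# predictive entropy of the mean indicate how confident a prediction
# is. The probabilities below are illustrative placeholders.
import math

def ensemble_uncertainty(probs):
    """Return mean risk, std. deviation, and binary entropy (nats)."""
    p = sum(probs) / len(probs)
    std = math.sqrt(sum((q - p) ** 2 for q in probs) / len(probs))
    entropy = 0.0
    if 0 < p < 1:
        entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return p, std, entropy

# Five hypothetical ensemble members broadly agreeing on high risk.
mean, std, ent = ensemble_uncertainty([0.82, 0.78, 0.85, 0.80, 0.75])
print(round(mean, 3), round(std, 4), round(ent, 4))
```

Low spread and low entropy together indicate a confident prediction; either measure rising would flag the case for closer clinical review.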
[0058] The system also includes a healthcare integration interface 110 configured to provide the predictive results to electronic medical record (EMR) systems, clinical dashboards, or healthcare practitioners in real-time. The healthcare integration interface 110 is configured to deliver predictive results in standardized medical formats for compatibility with electronic medical record (EMR) systems. The healthcare integration interface 110 further supports real-time alerts and notifications for clinicians to facilitate proactive interventions.
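By way of a non-limiting example, a standardized result payload delivered by the healthcare integration interface 110 might take a shape loosely inspired by HL7 FHIR's RiskAssessment resource; the field names and values below are illustrative only and do not constitute a validated FHIR document.

```python
# Sketch of a standardized result payload for EMR delivery. The shape
# is loosely inspired by HL7 FHIR's RiskAssessment resource, but the
# fields here are simplified and illustrative, not validated FHIR.
import json

def build_result_payload(patient_id, risk_score, top_features):
    return {
        "resourceType": "RiskAssessment",   # FHIR-inspired, simplified
        "subject": {"reference": f"Patient/{patient_id}"},
        "prediction": [{
            "outcome": "Early-stage dementia (Alzheimer's disease)",
            "probabilityDecimal": round(risk_score, 3),
        }],
        "note": [{"text": "Top contributing features: "
                          + ", ".join(top_features)}],
    }

payload = build_result_payload("12345", 0.8234,
                               ["hippocampal volume", "MMSE trend"])
print(json.dumps(payload, indent=2))
```

Encoding results in such a structured format is what allows the same prediction to drive an EMR entry, a dashboard widget, or a real-time clinician alert.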
[0059] FIG. 1 illustrates a flowchart outlining the sequential steps involved in a deep learning-based system for the early prediction of dementia in Alzheimer’s disease.
[0060] At 102, the data acquisition module plays a critical role in gathering diverse types of medical data. This includes neuroimaging scans, cognitive test scores, demographic records, and other relevant clinical data. Since Alzheimer’s disease manifests across multiple dimensions, including structural brain changes, memory deficits, and behavioral variations, the module is configured to receive and integrate this multi-modal information, ensuring that the system has a comprehensive dataset to work with. The incoming data undergoes necessary preprocessing steps, such as normalization and artifact removal, before being transferred to the deep learning engine for analysis.
[0061] At 104, once the data is prepared, it enters the hybrid deep learning engine, which functions as the analytical core of the system. This engine combines different network architectures to capture both spatial and temporal patterns associated with dementia progression. A convolutional neural network (CNN) processes neuroimaging data to detect structural changes in brain regions linked to Alzheimer’s disease. In parallel, a sequential learning network, which may be a long short-term memory (LSTM) network or a Transformer-based architecture, analyzes temporal data such as patient history, longitudinal cognitive scores, and sequential clinical observations. This dual-structured engine allows the system to not only recognize static structural features but also track evolving cognitive changes, ensuring a holistic understanding of the disease progression.
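As a non-limiting illustration of the two halves of the engine described above, the following sketch implements a single 2D convolution (the spatial building block of a CNN) and one LSTM cell step (the temporal building block). All weights are random placeholders; a real engine stacks many such layers and trains them end to end.

```python
# Minimal building blocks of the hybrid engine: one valid-mode 2D
# convolution over a toy "MRI slice" and one LSTM cell step over a
# toy temporal feature vector. Weights are random placeholders.
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation of a single channel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b pack input/forget/cell/output gates."""
    z = W @ x + U @ h + b
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    g = np.tanh(z[2*n:3*n])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))        # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
scan = rng.standard_normal((8, 8))                 # toy image
feat_map = conv2d_valid(scan, rng.standard_normal((3, 3)))
x, h, c = rng.standard_normal(4), np.zeros(2), np.zeros(2)
W = rng.standard_normal((8, 4))
U = rng.standard_normal((8, 2))
b = np.zeros(8)
h, c = lstm_step(x, h, c, W, U, b)
print(feat_map.shape, h.shape)
```

An 8x8 input convolved with a 3x3 kernel yields a 6x6 feature map; stacking such maps and feeding sequence summaries through repeated LSTM steps gives the spatial and temporal representations that the fusion module later combines.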
[0062] At 106, following this, the extracted features from both CNN and sequential learning components converge in the multi-modal fusion module. This module is designed to combine spatial features derived from imaging with temporal patterns from sequential data, effectively generating a unified predictive representation. By integrating multiple data modalities, the fusion process enhances the robustness and accuracy of predictions. For example, subtle brain atrophy detected in neuroimaging may be correlated with gradual cognitive decline captured in test scores, enabling the system to strengthen its confidence in identifying early signs of dementia. The fusion ensures that the prediction is not biased toward a single type of data but rather reflects the complexity of real-world patient profiles.
[0063] At 108, the prediction generated by the fusion process is then passed into the interpretability module, which transforms the output into clinically actionable insights. This module ensures that the system’s decisions are transparent and explainable, addressing one of the major concerns in AI-driven healthcare. It provides risk scores indicating the probability of dementia onset, attention maps highlighting specific brain regions or data segments influencing the decision, and feature relevance indicators that show which clinical variables were most significant in the prediction. These explainable outputs empower clinicians to understand not only the what but also the why behind the model’s decision, thereby increasing trust and facilitating informed decision-making in clinical practice.
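The Grad-CAM and Shapley-based outputs named above require access to a trained network's gradients. As a non-limiting, model-agnostic stand-in, the following sketch shows occlusion sensitivity, a simpler way to build an attention-style map: mask each region of the input and record how much the risk score drops. The toy scoring function is a placeholder, not the system's actual model.

```python
# Illustrative occlusion-sensitivity map: zero out each patch-sized
# region of the input and measure the drop in the model's score.
# Large drops mark regions the model relies on. The scoring function
# below is a toy placeholder.

def occlusion_map(image, score_fn, patch=2):
    """Score drop when each patch-sized region is zeroed out."""
    base = score_fn(image)
    rows, cols = len(image), len(image[0])
    heat = [[0.0] * (cols // patch) for _ in range(rows // patch)]
    for bi in range(rows // patch):
        for bj in range(cols // patch):
            masked = [row[:] for row in image]
            for i in range(bi * patch, (bi + 1) * patch):
                for j in range(bj * patch, (bj + 1) * patch):
                    masked[i][j] = 0.0
            heat[bi][bj] = base - score_fn(masked)
    return heat

# Toy "model": the score is the mean intensity, so occluding the
# bright top-left patch produces the largest drop.
score = lambda img: sum(map(sum, img)) / (len(img) * len(img[0]))
image = [[0.0] * 4 for _ in range(4)]
image[0][0] = image[0][1] = image[1][0] = image[1][1] = 1.0
print(occlusion_map(image, score))  # -> [[0.25, 0.0], [0.0, 0.0]]
```

The resulting heat map plays the same explanatory role as the attention maps described above: it tells the clinician which input regions drove the prediction.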
[0064] At 110, the predictive insights generated by the interpretability module are delivered through the healthcare integration interface. This interface ensures seamless communication between the system and existing healthcare infrastructure. The results can be transmitted directly into electronic medical record (EMR) systems, displayed on clinical dashboards, or shared with healthcare practitioners in real-time. This real-time integration allows doctors to use the predictive results during patient consultations, enabling proactive interventions, treatment adjustments, or referrals to specialists. The seamless flow from data acquisition to integration ensures that the system operates not merely as a research tool but as a practical clinical decision-support system that can be adopted in real-world healthcare settings.
[0065] FIG. 2 illustrates a block diagram of a deep learning-based system for the early prediction of dementia in Alzheimer’s disease.
[0066] The process begins at the input stage, which serves as the central entry point for different types of patient data. From this stage, three main categories of data are collected and directed for further processing. The first category includes MRI/PET data, which contains neuroimaging information about brain structures and metabolic activity. This type of data is essential for identifying subtle structural and functional abnormalities that may indicate early signs of dementia. The second category consists of cognitive test results, which capture information about memory, attention, reasoning, and other cognitive functions that often decline during the early phases of Alzheimer’s disease. The third input is genetic markers, which represent genetic predispositions or mutations, such as APOE variants, that increase the likelihood of developing dementia.
[0067] Once these diverse inputs are collected, they are directed into the data processing unit. This unit performs the initial harmonization of multi-modal data, ensuring that imaging data, test results, and genetic information are standardized and formatted for effective analysis. By preprocessing and aligning the data, the system minimizes the need for heavy manual intervention while maintaining accuracy and consistency. The processed data is then transferred into the hybrid data model, which integrates convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. The CNN component specializes in analyzing spatial features from MRI and PET scans, extracting relevant imaging biomarkers that may reveal early-stage dementia-related changes. The LSTM component, on the other hand, is designed to capture sequential and temporal dependencies from longitudinal cognitive test scores and genetic variations, allowing the system to detect progressive changes over time.
[0068] The outputs generated by the hybrid model are diverse and clinically meaningful. One of the key outputs is the risk score, which quantifies the likelihood that a patient will develop dementia in the near future. This score provides clinicians with a probabilistic measure of disease progression, aiding in early intervention. Another critical output is the heat map, a visual interpretability tool that highlights the specific brain regions or features most responsible for the model’s prediction. This enhances trust in the system by allowing clinicians to understand why a particular prediction was made. Additionally, the system generates recommendations, which may include clinical advice, further diagnostic steps, or tailored therapeutic strategies to mitigate risk. Finally, all of these outputs, namely the risk score, heat map, and recommendations, are consolidated into the output stage, which serves as the final decision-support interface for healthcare providers.
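A minimal sketch of the two quantitative outputs, under stated assumptions: the risk score is modeled as a logistic readout of a fused feature vector, and the heat map is produced by occlusion sensitivity (masking each patch and recording the score drop), one simple interpretability technique among those the disclosure contemplates (saliency maps, Grad-CAM, Shapley values). The weights and image sizes are arbitrary toy values.

```python
import numpy as np


def risk_score(features, weights, bias=0.0):
    """Logistic risk score in [0, 1] from a fused feature vector
    (illustrative linear readout, not the disclosed model)."""
    z = float(np.dot(features, weights) + bias)
    return 1.0 / (1.0 + np.exp(-z))


def occlusion_heat_map(image, score_fn, patch=2):
    """Occlusion sensitivity: zero out each patch and record how much
    the score drops, yielding a coarse interpretability heat map."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat


rng = np.random.default_rng(1)
img = rng.standard_normal((4, 4))
w = rng.standard_normal(16)
score = lambda x: risk_score(x.ravel(), w)
heat = occlusion_heat_map(img, score)
print(heat.shape)
```

Patches whose removal causes the largest score drop mark the regions the toy model relies on most, which is the intuition behind the clinical heat-map output.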
[0069] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0070] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0071] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0072] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0073] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:
I/We Claim:
1. A deep learning-based system for the early prediction of dementia in Alzheimer’s disease (100) comprising:
a data acquisition module (102) configured to receive and integrate multi-modal medical data;
a hybrid deep learning engine (104) comprising a convolutional neural network (CNN), a sequential learning network selected from a long short-term memory (LSTM) network or a Transformer network;
a multi-modal fusion module (106) configured to combine spatial and temporal features into a unified predictive representation;
an interpretability module (108) configured to generate clinically relevant outputs including risk scores, attention maps, and feature relevance indicators to enable explainable predictions and improve clinical decision-making;
a healthcare integration interface (110) configured to provide the predictive results to electronic medical record (EMR) systems, clinical dashboards, or healthcare practitioners in real-time.
2. The system (100) as claimed in claim 1, wherein the data acquisition module (102) is further configured to preprocess medical data by performing normalization, noise reduction, and missing data imputation prior to model input.
3. The system (100) as claimed in claim 1, wherein the convolutional neural network (CNN) of the hybrid deep learning engine (104) is configured to extract spatial features from neuroimaging modalities including magnetic resonance imaging (MRI), positron emission tomography (PET), or computed tomography (CT).
4. The system (100) as claimed in claim 1, wherein the sequential learning network of the hybrid deep learning engine (104) is configured to capture temporal dependencies from longitudinal patient records, neurocognitive assessments, or speech and behavioral data.
5. The system (100) as claimed in claim 1, wherein the hybrid deep learning engine (104) is trained using a federated learning framework to preserve patient data privacy across multiple healthcare institutions.
6. The system (100) as claimed in claim 1, wherein the multi-modal fusion module (106) employs attention-based feature fusion techniques to assign adaptive weights to spatial and temporal features prior to predictive modeling.
7. The system (100) as claimed in claim 1, wherein the interpretability module (108) is configured to generate saliency maps, Grad-CAM visualizations, or Shapley value-based relevance scores to highlight clinically important biomarkers.
8. The system (100) as claimed in claim 1, wherein the interpretability module (108) is further configured to provide uncertainty quantification measures to indicate the confidence level of predictions.
9. The system (100) as claimed in claim 1, wherein the healthcare integration interface (110) is configured to deliver predictive results in standardized medical formats for compatibility with electronic medical record (EMR) systems.
10. The system (100) as claimed in claim 1, wherein the healthcare integration interface (110) further supports real-time alerts and notifications for clinicians to facilitate proactive interventions.

Documents

Application Documents

# Name Date
1 202541094061-STATEMENT OF UNDERTAKING (FORM 3) [30-09-2025(online)].pdf 2025-09-30
2 202541094061-REQUEST FOR EARLY PUBLICATION(FORM-9) [30-09-2025(online)].pdf 2025-09-30
3 202541094061-POWER OF AUTHORITY [30-09-2025(online)].pdf 2025-09-30
4 202541094061-FORM-9 [30-09-2025(online)].pdf 2025-09-30
5 202541094061-FORM FOR SMALL ENTITY(FORM-28) [30-09-2025(online)].pdf 2025-09-30
6 202541094061-FORM 1 [30-09-2025(online)].pdf 2025-09-30
7 202541094061-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-09-2025(online)].pdf 2025-09-30
8 202541094061-DRAWINGS [30-09-2025(online)].pdf 2025-09-30
9 202541094061-DECLARATION OF INVENTORSHIP (FORM 5) [30-09-2025(online)].pdf 2025-09-30
10 202541094061-COMPLETE SPECIFICATION [30-09-2025(online)].pdf 2025-09-30