Abstract: A MACHINE LEARNING-BASED SYSTEM FOR NON-INVASIVE NAFLD CLASSIFICATION USING MULTIMODAL CLINICAL, IMAGING, AND ELASTOGRAPHY DATA

The present invention relates to a non-invasive, scanner-independent, multimodal system for the classification of Non-Alcoholic Fatty Liver Disease (NAFLD). The system uniquely integrates clinical adiposity metrics—including waist-hip ratio, visceral fat index, and body fat percentage—with Controlled Attenuation Parameter (CAP) elastography values and ultrasound imaging features extracted using a pre-trained Convolutional Neural Network (CNN). A multi-layer perceptron (MLP) processes the clinical data, while imaging features are fused using a gated or attention-based feature fusion mechanism to enhance modality interaction. A hybrid classification model comprising MLP and XGBoost is employed for binary or multiclass NAFLD prediction. To address scanner variability and ensure robustness across imaging sources, a CycleGAN-based domain adaptation technique is incorporated. Model interpretability is achieved through SHAP (SHapley Additive exPlanations), enabling transparent and clinically meaningful insights into predictions. The invention offers improved diagnostic accuracy, generalizability, and applicability in clinical, telemedicine, and AI-assisted radiology workflows.
Description: FIELD OF THE INVENTION
The present invention relates to the field of medical diagnostics and artificial intelligence, specifically to a non-invasive, multimodal system for the classification of Non-Alcoholic Fatty Liver Disease (NAFLD). It involves the integration of clinical, imaging, and machine learning techniques for enhanced diagnostic accuracy and clinical applicability.
BACKGROUND OF THE INVENTION
Current NAFLD diagnostic methods rely on invasive liver biopsy or standalone non-invasive techniques such as ultrasound and elastography, which have limitations in accuracy, operator dependence, and scanner variability. Traditional BMI-based risk assessments fail to differentiate between adiposity and lean mass, reducing predictive reliability. This invention addresses these gaps by integrating clinical adiposity measures, ultrasound-based deep learning features, and CAP elastography data into a machine learning-driven system that provides a robust, non-invasive, and scanner-independent NAFLD classification model, improving early detection and risk stratification.
Current diagnostic techniques for NAFLD include ultrasound evaluations, elastography, and biomarker-based risk assessments, each with its own set of drawbacks. The commonly used B-mode ultrasound is dependent on the operator and lacks uniformity across different machines, reducing its reliability. The hepatorenal index (HI) provides a basic assessment of steatosis but is not universally applicable due to varying imaging conditions. Elastography methods, like the Controlled Attenuation Parameter (CAP) from FibroScan, offer a quantitative assessment of liver fat but are not typically combined with other predictive markers such as clinical and ultrasound-based features. Biomarker-based risk scores, including the Fatty Liver Index (FLI), NAFLD Fibrosis Score (NFS), and Hepatic Steatosis Index (HSI), rely on blood tests and BMI but struggle to accurately distinguish between muscle mass and fat, limiting their precision in fat quantification. Several AI-based methods have been developed to enhance ultrasound-based NAFLD diagnosis, but these primarily utilize convolutional neural networks (CNNs) trained on ultrasound images without integrating CAP elastography or clinical adiposity metrics. Although deep learning has been used for grading liver steatosis, current solutions do not address scanner dependency, structured feature fusion, or multimodal integration.
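For context on the biomarker-based scores discussed above, the Fatty Liver Index (FLI) can be computed directly from routine measurements using the published Bedogni et al. (2006) equation. This sketch is purely illustrative of the prior-art scores the invention improves upon; it is not part of the claimed system:

```python
import math

def fatty_liver_index(triglycerides_mg_dl, bmi, ggt_u_l, waist_cm):
    """Fatty Liver Index (Bedogni et al., 2006); returns a score in [0, 100].

    Inputs: triglycerides (mg/dL), BMI (kg/m^2), GGT (U/L), waist (cm).
    """
    z = (0.953 * math.log(triglycerides_mg_dl)
         + 0.139 * bmi
         + 0.718 * math.log(ggt_u_l)
         + 0.053 * waist_cm
         - 15.745)
    # Logistic transform maps the linear predictor onto a 0-100 scale.
    return 100.0 * math.exp(z) / (1.0 + math.exp(z))
```

Note that BMI and waist circumference enter the score directly, which is exactly the limitation the background identifies: the index cannot separate lean mass from adiposity.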
Conventional disease detection models typically depend on either clinical biomarkers or imaging techniques alone, which can restrict their sensitivity and precision. While serum biomarkers offer crucial biochemical insights, they might not fully represent the entire spectrum of disease progression, particularly in complex conditions. Conversely, imaging techniques provide detailed structural and functional perspectives of the body but may overlook subtle biochemical changes. Current AI models, especially those utilizing machine learning, have made notable progress by analyzing extensive datasets to improve diagnostic capabilities. Nonetheless, many of these models still focus on a single modality, either biomarker-based or imaging-based, and may not fully exploit the advantages of both. In contrast, fusion-based AI models that integrate serum biomarkers with imaging data deliver a more thorough analysis, enhancing overall diagnostic accuracy by addressing the limitations of each individual method. This integration can improve sensitivity, specificity, and overall performance in detecting diseases at both early and advanced stages.
OBJECTIVES OF THE INVENTION
Main objective of the present invention is to develop a non-invasive, scanner-independent, multimodal system for accurate classification of Non-Alcoholic Fatty Liver Disease (NAFLD).
Another objective of the present invention is to integrate clinical adiposity-related biomarkers, CAP elastography values, and ultrasound imaging features into a unified diagnostic framework.
Another objective of the present invention is to implement an attention-based feature fusion technique that effectively combines multimodal inputs for enhanced diagnostic precision.
Another objective of the present invention is to utilize a hybrid classification model combining MLP and XGBoost algorithms for improved predictive performance in both binary and multiclass classification settings.
Another objective of the present invention is to provide model interpretability using SHAP (SHapley Additive exPlanations), enabling transparency in clinical decision-making processes.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The proposed invention introduces a non-invasive, scanner-independent, multimodal system for NAFLD classification that uniquely integrates clinical adiposity-based metrics, deep learning-extracted ultrasound features, and CAP elastography values. By substituting BMI with waist-hip ratio, visceral fat index, and body fat percentage, and employing an advanced multimodal fusion technique using attention-based learning, the model achieves improved predictive accuracy and generalizability across various imaging conditions. Additionally, domain adaptation methods reduce scanner variability, making this approach more robust, reproducible, and clinically applicable than existing solutions. A pre-trained CNN is utilized to extract liver characteristics, while an attention-based fusion mechanism effectively combines various data sources. To maintain scanner independence, the system uses CycleGAN for domain adaptation. A hybrid classifier, combining MLP and XGBoost, assesses NAFLD risk with SHAP-based interpretability, making it apt for clinical decision-making, telemedicine, and AI-assisted radiology workflows.
Herein enclosed a non-invasive, scanner-independent, multimodal system for classification of non-alcoholic fatty liver disease (NAFLD), comprising:
a module for receiving clinical data comprising adiposity-related metrics including waist-hip ratio, visceral fat index, and body fat percentage;
a multi-layer perceptron (MLP) feature extractor configured to process said clinical data;
a module for receiving Controlled Attenuation Parameter (CAP) elastography input and extracting CAP features;
an ultrasound image input module configured to accept ultrasound liver images;
a convolutional neural network (CNN) module configured to extract imaging features from the ultrasound images;
a feature fusion layer employing gated fusion and/or attention-based fusion mechanisms to integrate features from said MLP, CAP, and CNN modules;
a hybrid classifier comprising MLP and XGBoost models configured to perform NAFLD classification in binary or multiclass modes;
a model interpretability module employing SHAP (SHapley Additive exPlanations) for explaining prediction outcomes;
a domain adaptation module using CycleGAN to reduce scanner-induced variability in ultrasound data, ensuring model robustness and scanner independence.
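The attention-based fusion layer enumerated above can be sketched in simplified form. In this illustrative sketch the scoring vectors stand in for learned attention parameters, and the modality features are assumed to have already been projected to a common dimension; a real embodiment would implement this in a deep learning framework:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(modalities, score_weights):
    """Fuse equal-length modality feature vectors with softmax attention.

    modalities: one feature vector per modality (e.g. MLP clinical features,
    CAP features, CNN imaging features), all projected to the same length.
    score_weights: one scoring vector per modality (placeholder for learned
    attention parameters); its dot product with the features gives the score.
    """
    scores = [sum(w * x for w, x in zip(sw, feats))
              for sw, feats in zip(score_weights, modalities)]
    alphas = softmax(scores)                    # adaptive per-modality weights
    dim = len(modalities[0])
    fused = [sum(a * feats[i] for a, feats in zip(alphas, modalities))
             for i in range(dim)]
    return fused, alphas
```

The returned `alphas` are the adaptive modality weights referenced later in the specification; a gated variant would instead pass each score through a sigmoid and multiply element-wise.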
A method for classifying NAFLD using the system as claimed in claim 1, comprising the steps of:
acquiring clinical and imaging inputs;
extracting modality-specific features;
fusing features via gated or attention-based fusion;
performing classification using a hybrid MLP and XGBoost classifier;
interpreting the prediction using SHAP-based methods; and
adapting imaging data across scanners using CycleGAN to ensure generalizability.
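The hybrid classification step above can be illustrated with a minimal soft-voting combiner over the two models' class probabilities. The fixed weight here is a placeholder for whatever weighted or ensemble strategy an embodiment learns or tunes; the MLP and XGBoost models themselves are assumed to be trained separately:

```python
def hybrid_predict(mlp_probs, xgb_probs, weight_mlp=0.5):
    """Weighted soft vote over per-class probabilities from the two models.

    mlp_probs / xgb_probs: probability lists of equal length (one entry per
    NAFLD class, so this covers both binary and multiclass modes).
    Returns (predicted_class_index, combined_probabilities).
    """
    assert len(mlp_probs) == len(xgb_probs)
    combined = [weight_mlp * m + (1.0 - weight_mlp) * x
                for m, x in zip(mlp_probs, xgb_probs)]
    # Argmax over the blended distribution gives the final class.
    return max(range(len(combined)), key=combined.__getitem__), combined
```

Setting `weight_mlp` to 0 or 1 degenerates to a single-model prediction, which makes the ensemble easy to ablate during validation.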
The CNN is pre-trained on liver ultrasound datasets to extract domain-relevant features.
The feature fusion layer utilizes attention-based mechanisms to assign adaptive weights to each modality during feature integration.
The hybrid classifier combines the outputs of MLP and XGBoost using a weighted or ensemble strategy for improved prediction accuracy.
The model interpretability module provides visual or quantitative explanations for model predictions to support clinical decision-making.
The model is deployable in clinical, telemedicine, and AI-assisted radiology workflows for non-invasive screening and risk stratification of NAFLD.
The traditional BMI is replaced by more specific adiposity markers including waist-hip ratio, visceral fat index, and body fat percentage for enhanced clinical relevance.
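The adiposity markers that replace BMI can be derived from routine anthropometry. The body-fat estimate below uses the Deurenberg et al. (1991) equation as one published example; it is not necessarily the formula a given embodiment would use, and a visceral fat index would typically come from bioimpedance hardware rather than a closed-form equation:

```python
def waist_hip_ratio(waist_cm, hip_cm):
    """Waist-hip ratio: central adiposity independent of overall body size."""
    return waist_cm / hip_cm

def body_fat_percent_deurenberg(bmi, age_years, is_male):
    """Body fat %% via the Deurenberg et al. (1991) equation.

    One of several published estimates; unlike raw BMI it adjusts for age
    and sex, partially separating adiposity from lean mass.
    """
    return 1.20 * bmi + 0.23 * age_years - 10.8 * (1 if is_male else 0) - 5.4
```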
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such detail as to clearly communicate the disclosure. However, the level of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of “first,” “second,” “third,” and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defined by “first” and “second” may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In some embodiments, the present invention relates to a non-invasive, scanner-independent, multimodal system for the classification of Non-Alcoholic Fatty Liver Disease (NAFLD). The system uniquely integrates clinical adiposity metrics—including waist-hip ratio, visceral fat index, and body fat percentage—with Controlled Attenuation Parameter (CAP) elastography values and ultrasound imaging features extracted using a pre-trained Convolutional Neural Network (CNN).
In some embodiments of the present invention, a multi-layer perceptron (MLP) processes the clinical data, while imaging features are fused using a gated or attention-based feature fusion mechanism to enhance modality interaction. A hybrid classification model comprising MLP and XGBoost is employed for binary or multiclass NAFLD prediction.
In some embodiments of the present invention, to address scanner variability and ensure robustness across imaging sources, a CycleGAN-based domain adaptation technique is incorporated. Model interpretability is achieved through SHAP (SHapley Additive exPlanations), enabling transparent and clinically meaningful insights into predictions.
In some embodiments, the invention offers improved diagnostic accuracy, generalizability, and applicability in clinical, telemedicine, and AI-assisted radiology workflows.
Herein enclosed a non-invasive, scanner-independent, multimodal system for classification of non-alcoholic fatty liver disease (NAFLD), comprising:
a module for receiving clinical data comprising adiposity-related metrics including waist-hip ratio, visceral fat index, and body fat percentage;
a multi-layer perceptron (MLP) feature extractor configured to process said clinical data;
a module for receiving Controlled Attenuation Parameter (CAP) elastography input and extracting CAP features;
an ultrasound image input module configured to accept ultrasound liver images;
a convolutional neural network (CNN) module configured to extract imaging features from the ultrasound images;
a feature fusion layer employing gated fusion and/or attention-based fusion mechanisms to integrate features from said MLP, CAP, and CNN modules;
a hybrid classifier comprising MLP and XGBoost models configured to perform NAFLD classification in binary or multiclass modes;
a model interpretability module employing SHAP (SHapley Additive exPlanations) for explaining prediction outcomes;
a domain adaptation module using CycleGAN to reduce scanner-induced variability in ultrasound data, ensuring model robustness and scanner independence.
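Training a CycleGAN is beyond a short sketch, but the intent of the domain adaptation module above, aligning intensity statistics of ultrasound images acquired on different scanners, can be illustrated with classical histogram matching as a lightweight stand-in. This is an assumption-laden simplification: it aligns only first-order statistics, whereas the claimed CycleGAN learns a full image-to-image mapping:

```python
def match_histogram(source, reference):
    """Map source intensities onto the reference distribution by rank.

    Each source value is replaced by the reference value at the same
    quantile, so the transformed image shares the reference scanner's
    intensity distribution. Images are flattened to 1-D lists here.
    """
    order = sorted(range(len(source)), key=source.__getitem__)
    ref_sorted = sorted(reference)
    n_ref = len(ref_sorted)
    out = [0.0] * len(source)
    for rank, idx in enumerate(order):
        q = rank / max(len(source) - 1, 1)        # quantile of this pixel
        out[idx] = ref_sorted[round(q * (n_ref - 1))]
    return out
```

Unlike this rank mapping, the CycleGAN's cycle-consistency loss also preserves anatomical content while translating scanner style, which is why it is the claimed mechanism.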
A method for classifying NAFLD using the system as claimed in claim 1, comprising the steps of:
acquiring clinical and imaging inputs;
extracting modality-specific features;
fusing features via gated or attention-based fusion;
performing classification using a hybrid MLP and XGBoost classifier;
interpreting the prediction using SHAP-based methods; and
adapting imaging data across scanners using CycleGAN to ensure generalizability.
The CNN is pre-trained on liver ultrasound datasets to extract domain-relevant features.
The feature fusion layer utilizes attention-based mechanisms to assign adaptive weights to each modality during feature integration.
The hybrid classifier combines the outputs of MLP and XGBoost using a weighted or ensemble strategy for improved prediction accuracy.
The model interpretability module provides visual or quantitative explanations for model predictions to support clinical decision-making.
The model is deployable in clinical, telemedicine, and AI-assisted radiology workflows for non-invasive screening and risk stratification of NAFLD.
The traditional BMI is replaced by more specific adiposity markers including waist-hip ratio, visceral fat index, and body fat percentage for enhanced clinical relevance.
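The SHAP explanations referenced above approximate Shapley values; for the handful of tabular features used here they can also be computed exactly by enumerating feature coalitions. This sketch uses a baseline-imputation convention for absent features (an assumption, and a brute-force method the SHAP library avoids for efficiency):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    predict: model function taking a feature list; absent features are
    filled with their baseline value. Exponential in the number of
    features, so feasible only for small tabular inputs.
    """
    n = len(x)
    phis = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                xs = list(baseline)
                for j in subset:
                    xs[j] = x[j]
                without = predict(xs)             # coalition without feature i
                xs[i] = x[i]
                with_i = predict(xs)              # coalition with feature i
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phis[i] += weight * (with_i - without)
    return phis
```

By the efficiency property, the attributions sum to the difference between the prediction and the baseline prediction, which is what makes per-feature contributions clinically interpretable.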
EXAMPLE 1
BEST METHOD
The proposed invention introduces a non-invasive, scanner-independent, multimodal system for NAFLD classification that uniquely integrates clinical adiposity-based metrics, deep learning-extracted ultrasound features, and CAP elastography values. By substituting BMI with waist-hip ratio, visceral fat index, and body fat percentage, and employing an advanced multimodal fusion technique using attention-based learning, the model achieves improved predictive accuracy and generalizability across various imaging conditions. Additionally, domain adaptation methods reduce scanner variability, making this approach more robust, reproducible, and clinically applicable than existing solutions. A pre-trained CNN is utilized to extract liver characteristics, while an attention-based fusion mechanism effectively combines various data sources. To maintain scanner independence, the system uses CycleGAN for domain adaptation. A hybrid classifier, combining MLP and XGBoost, assesses NAFLD risk with SHAP-based interpretability, making it apt for clinical decision-making, telemedicine, and AI-assisted radiology workflows.
NOVELTY:
Many current models concentrate on either serum biomarkers or imaging methods for disease detection. Nonetheless, an AI model that merges both traditional serum biomarkers and imaging data can greatly improve diagnostic precision. By integrating these approaches, the model can gather complementary insights, enhancing its ability to accurately identify diseases. This combined strategy offers a more comprehensive understanding of the condition, minimizing the likelihood of false positives or negatives, and ultimately delivering a more dependable diagnostic tool.
Claims: 1. A non-invasive, scanner-independent, multimodal system for classification of non-alcoholic fatty liver disease (NAFLD), comprising:
a module for receiving clinical data comprising adiposity-related metrics including waist-hip ratio, visceral fat index, and body fat percentage;
a multi-layer perceptron (MLP) feature extractor configured to process said clinical data;
a module for receiving Controlled Attenuation Parameter (CAP) elastography input and extracting CAP features;
an ultrasound image input module configured to accept ultrasound liver images;
a convolutional neural network (CNN) module configured to extract imaging features from the ultrasound images;
a feature fusion layer employing gated fusion and/or attention-based fusion mechanisms to integrate features from said MLP, CAP, and CNN modules;
a hybrid classifier comprising MLP and XGBoost models configured to perform NAFLD classification in binary or multiclass modes;
a model interpretability module employing SHAP (SHapley Additive exPlanations) for explaining prediction outcomes;
a domain adaptation module using CycleGAN to reduce scanner-induced variability in ultrasound data, ensuring model robustness and scanner independence.
2. A method for classifying NAFLD using the system as claimed in claim 1, comprising the steps of:
a. acquiring clinical and imaging inputs;
b. extracting modality-specific features;
c. fusing features via gated or attention-based fusion;
d. performing classification using a hybrid MLP and XGBoost classifier;
e. interpreting the prediction using SHAP-based methods; and
f. adapting imaging data across scanners using CycleGAN to ensure generalizability.
3. The method as claimed in claim 2, wherein the CNN is pre-trained on liver ultrasound datasets to extract domain-relevant features.
4. The method as claimed in claim 2, wherein the feature fusion layer utilizes attention-based mechanisms to assign adaptive weights to each modality during feature integration.
5. The method as claimed in claim 2, wherein the hybrid classifier combines the outputs of MLP and XGBoost using a weighted or ensemble strategy for improved prediction accuracy.
6. The method as claimed in claim 2, wherein the model interpretability module provides visual or quantitative explanations for model predictions to support clinical decision-making.
| # | Name | Date |
|---|---|---|
| 1 | 202541046934-STATEMENT OF UNDERTAKING (FORM 3) [15-05-2025(online)].pdf | 2025-05-15 |
| 2 | 202541046934-REQUEST FOR EARLY PUBLICATION(FORM-9) [15-05-2025(online)].pdf | 2025-05-15 |
| 3 | 202541046934-POWER OF AUTHORITY [15-05-2025(online)].pdf | 2025-05-15 |
| 4 | 202541046934-FORM-9 [15-05-2025(online)].pdf | 2025-05-15 |
| 5 | 202541046934-FORM FOR SMALL ENTITY(FORM-28) [15-05-2025(online)].pdf | 2025-05-15 |
| 6 | 202541046934-FORM 1 [15-05-2025(online)].pdf | 2025-05-15 |
| 7 | 202541046934-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-05-2025(online)].pdf | 2025-05-15 |
| 8 | 202541046934-EVIDENCE FOR REGISTRATION UNDER SSI [15-05-2025(online)].pdf | 2025-05-15 |
| 9 | 202541046934-EDUCATIONAL INSTITUTION(S) [15-05-2025(online)].pdf | 2025-05-15 |
| 10 | 202541046934-DRAWINGS [15-05-2025(online)].pdf | 2025-05-15 |
| 11 | 202541046934-DECLARATION OF INVENTORSHIP (FORM 5) [15-05-2025(online)].pdf | 2025-05-15 |
| 12 | 202541046934-COMPLETE SPECIFICATION [15-05-2025(online)].pdf | 2025-05-15 |