
A Machine Learning-Based System for Non-Invasive NAFLD Classification Using Multimodal Clinical, Imaging, and Elastography Data

Abstract: The invention relates to a machine learning-based system and method for non-invasive classification of non-alcoholic fatty liver disease (NAFLD). The system integrates multimodal clinical, imaging, and elastography data into a unified diagnostic framework. Clinical metrics include waist-hip ratio, visceral fat index, and body fat percentage, providing improved adiposity measures over conventional BMI. Ultrasound imaging features capture structural and textural liver characteristics, while elastography parameters provide quantitative fat deposition measures. A fusion module combines multimodal features using attention-based learning, and a classification module categorizes disease severity levels. A feedback loop refines predictions based on confirmed outcomes, while domain adaptation ensures reproducibility across different scanners. Interpretability outputs highlight key predictive features to support clinical decision-making. Data security is ensured through encryption and access control. The method enables accurate, non-invasive, and scalable NAFLD diagnosis, reducing reliance on biopsy and improving early detection, risk stratification, and patient management in clinical and telemedicine settings.


Patent Information

Application #
Filing Date
19 September 2025
Publication Number
42/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. AMULYA. YEDLA
RESEARCH SCHOLAR, SCHOOL OF CS & AI, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. N. SHARMILA BANU
ASSISTANT PROFESSOR, SCHOOL OF CS & AI, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description: FIELD OF THE INVENTION
The invention relates to healthcare technologies and artificial intelligence systems. More particularly, it concerns a non-invasive, machine learning-based diagnostic model for classifying non-alcoholic fatty liver disease (NAFLD) by integrating multimodal clinical parameters, ultrasound imaging features, and elastography values, thereby providing robust, accurate, and scanner-independent classification for improved clinical decision-making.
BACKGROUND OF THE INVENTION
Current NAFLD diagnostic methods rely on invasive liver biopsy or standalone non-invasive techniques such as ultrasound and elastography, which have limitations in accuracy, operator dependence, and scanner variability. Traditional BMI-based risk assessments fail to differentiate between adiposity and lean mass, reducing predictive reliability. This invention addresses these gaps by integrating clinical adiposity measures, ultrasound-based deep learning features, and CAP elastography data into a machine learning-driven system that provides a robust, non-invasive, and scanner-independent NAFLD classification model, improving early detection and risk stratification.
Current diagnostic techniques for NAFLD include ultrasound evaluations, elastography, and biomarker-based risk assessments, each with its own set of drawbacks. The commonly used B-mode ultrasound is dependent on the operator and lacks uniformity across different machines, reducing its reliability. The hepatorenal index (HI) provides a basic assessment of steatosis but is not universally applicable due to varying imaging conditions. Elastography methods, like the Controlled Attenuation Parameter (CAP) from FibroScan, offer a quantitative assessment of liver fat but are not typically combined with other predictive markers such as clinical and ultrasound-based features. Biomarker-based risk scores, including the Fatty Liver Index (FLI), NAFLD Fibrosis Score (NFS), and Hepatic Steatosis Index (HSI), rely on blood tests and BMI but struggle to accurately distinguish between muscle mass and fat, limiting their precision in fat quantification. Several AI-based methods have been developed to enhance ultrasound-based NAFLD diagnosis, but these primarily utilize convolutional neural networks (CNNs) trained on ultrasound images without integrating CAP elastography or clinical adiposity metrics. Although deep learning has been used for grading liver steatosis, current solutions do not address scanner dependency, structured feature fusion, or multimodal integration.
US20250032212A1: A radiological clip for locating marked tissue within a diagnostic ultrasound image may transmit an ultrasound identification (USID) signal within a bandwidth of ultrasound imaging pulses. A same ultrasound imaging array that receives the ultrasound imaging pulses may also receive the USID signal. An ultrasound imaging apparatus may generate an ultrasound image of the tissue that indicates a location and ID of the radiological clip based on the ultrasound imaging pulses and the USID signal received from the radiological clip within the imaging field. The radiological clip may generate an activation signal in response to receiving an ultrasound imaging pulse, generate an identification signal responsive to the activation signal, and transmit a USID signal based on the identification signal. The USID signal may include encoded signals corresponding to different bits within a multiple-bit identification tag to uniquely identify the radiological clip within a diagnostic image.
US20220414870: A method for performing classification of the severity of at least one liver disease from non-invasive radiographic images is disclosed. The method includes: providing radiographic images of slices of the abdomen of a patient; pre-processing the radiographic images by: segmenting liver and spleen, thus achieving a spleen binary mask and a liver binary mask per slice, and normalizing the images with each other, thus achieving normalized radiographic images per slice; for each slice, from the liver binary mask and the normalized radiographic images, extracting a liver parameter; from at least one spleen binary mask, extracting a spleen parameter; and classifying, in function of both parameters and by help of a trained Machine Learning model, the severity of liver disease between one among a group of liver disease at early stage and a group of liver disease at advanced stage.
NAFLD is traditionally diagnosed through invasive liver biopsy or standalone non-invasive techniques such as ultrasound, elastography, or biomarker-based risk scores. These methods suffer from limitations including operator dependency, scanner variability, and poor differentiation between adiposity and lean mass. Current AI models primarily analyze ultrasound images but fail to integrate clinical adiposity metrics and elastography data, reducing reliability and generalizability. The present invention solves this problem by introducing a multimodal system that fuses clinical adiposity indices, deep learning-extracted ultrasound features, and elastography measures into a unified classification model, offering accurate, non-invasive, and reproducible diagnosis.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The invention introduces a machine learning-based system comprising modules for clinical data acquisition, ultrasound feature extraction, elastography measurement, multimodal fusion, classification, interpretability, and security. Clinical metrics include waist-hip ratio, visceral fat index, and body fat percentage, replacing conventional BMI-based measures. Ultrasound imaging features are extracted using trained models that capture liver texture and structural characteristics. Elastography parameters, including controlled attenuation, provide quantitative measures of fat deposition.
The multimodal fusion module integrates these data sources through attention-based learning to capture complementary information. A classifier processes the fused features to categorize patients according to NAFLD severity. A feedback loop updates the learning model with outcomes, improving future predictions. Domain adaptation techniques reduce scanner dependency, ensuring cross-platform applicability.
The system is designed to deliver interpretability through feature attribution methods, enabling clinicians to understand the basis of predictions. It can be integrated into clinical workflows, telemedicine platforms, and AI-assisted radiology systems, improving diagnostic precision, patient monitoring, and risk stratification.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the accompanying drawings.
The proposed invention introduces a non-invasive, scanner-independent, multimodal system for NAFLD classification that uniquely integrates clinical adiposity-based metrics, deep learning-extracted ultrasound features, and CAP elastography values. By substituting BMI with waist-hip ratio, visceral fat index, and body fat percentage, and employing an advanced multimodal fusion technique using attention-based learning, the model achieves improved predictive accuracy and generalizability across various imaging conditions. Additionally, domain adaptation methods reduce scanner variability, making this approach more robust, reproducible, and clinically applicable than existing solutions. A pre-trained CNN is utilized to extract liver characteristics, while an attention-based fusion mechanism effectively combines various data sources. To maintain scanner independence, the system uses CycleGAN for domain adaptation. A hybrid classifier, combining MLP and XGBoost, assesses NAFLD risk with SHAP-based interpretability, making it apt for clinical decision-making, telemedicine, and AI-assisted radiology workflows.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is set forth herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such detail as to clearly communicate the disclosure. However, the level of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Many current models concentrate on either serum biomarkers or imaging methods for disease detection. Nonetheless, an AI model that merges both traditional serum biomarkers and imaging data can greatly improve diagnostic precision. By integrating these approaches, the model can gather complementary insights, enhancing its ability to accurately identify diseases. This combined strategy offers a more comprehensive understanding of the condition, minimizing the likelihood of false positives or negatives, and ultimately delivering a more dependable diagnostic tool.
The invention provides a multimodal diagnostic system designed for accurate, non-invasive classification of NAFLD. The system integrates clinical, imaging, and elastography data streams to overcome the limitations of single-modality approaches.
The clinical data acquisition module collects parameters such as waist-hip ratio, visceral fat index, and body fat percentage. These metrics provide a more accurate representation of adiposity compared to conventional BMI, thereby reducing errors in assessing fat content and metabolic risk.
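The adiposity metrics named above can be sketched as follows. The waist-hip ratio is the standard definition; for body fat percentage, the specification does not state a measurement method, so the Deurenberg formula is used below purely as an illustrative stand-in.

```python
# Illustrative adiposity metrics for the clinical data acquisition module.
# The body-fat estimate uses the Deurenberg formula as an assumed stand-in;
# the patent does not specify how body fat percentage is obtained.

def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

def body_fat_percent(bmi: float, age: int, is_male: bool) -> float:
    """Deurenberg estimate: 1.2*BMI + 0.23*age - 10.8*sex - 5.4 (sex: male=1)."""
    return 1.2 * bmi + 0.23 * age - 10.8 * (1 if is_male else 0) - 5.4

whr = waist_hip_ratio(94.0, 102.0)  # roughly 0.92 for this example
```

Unlike BMI, these metrics separate fat distribution from overall mass, which is the improvement the module targets.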
The imaging module processes ultrasound scans of the liver. It extracts features related to echogenicity, texture, and structural patterns of the organ. These features are obtained using pre-trained models optimized for medical image analysis, providing robust inputs for classification.
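The specification describes a pre-trained CNN for this extraction; the hand-crafted statistics below (mean echogenicity, texture variance, edge energy) are a simplified stand-in meant only to illustrate the kind of structural and textural descriptors the imaging module produces.

```python
import numpy as np

# Simplified stand-in for CNN-based ultrasound feature extraction:
# a few first-order texture statistics computed from a grayscale scan.

def ultrasound_features(img: np.ndarray) -> dict:
    gx, gy = np.gradient(img.astype(float))  # intensity gradients
    return {
        "mean_echogenicity": float(img.mean()),   # overall brightness
        "texture_variance": float(img.var()),     # speckle/texture spread
        "edge_energy": float(np.mean(gx**2 + gy**2)),  # structural detail
    }
```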
The elastography module incorporates quantitative measures such as controlled attenuation parameters, which represent fat deposition in the liver. These values offer additional diagnostic information, complementing ultrasound imaging and clinical data.
The fusion module combines clinical, imaging, and elastography inputs into a unified feature space. Attention-based mechanisms prioritize the most informative features across modalities. This multimodal approach ensures that diagnostic predictions are based on complementary and diverse data sources.
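A minimal sketch of such attention-based fusion is shown below: softmax weights over the three modalities scale their (equal-length) embeddings before summation. In the actual system a learned scoring network would produce the logits; here a fixed logit vector stands in.

```python
import numpy as np

# Minimal attention-based fusion sketch: softmax over per-modality logits
# yields weights used to combine modality embeddings. The logits would
# normally come from a trained attention network; fixed values are assumed.

def attention_fuse(modalities, logits):
    w = np.exp(logits - logits.max())
    w /= w.sum()                      # softmax attention weights
    return sum(wi * m for wi, m in zip(w, modalities))

clinical = np.array([0.9, 0.1])
imaging  = np.array([0.4, 0.6])
elasto   = np.array([0.2, 0.8])
fused = attention_fuse([clinical, imaging, elasto], np.array([1.0, 1.0, 1.0]))
```

With equal logits each modality receives weight 1/3; unequal logits let the model emphasize whichever modality is most informative for a given patient.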
The classification module applies machine learning algorithms to the fused feature set to determine NAFLD severity levels. It categorizes patients into normal, mild, moderate, or severe steatosis classes. By integrating multimodal features, the classifier achieves higher sensitivity and specificity compared to conventional models.
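The four-way severity mapping can be sketched as a bucketing of the classifier's output score. The real system uses a trained hybrid MLP/XGBoost model; the thresholds below are placeholders for illustration only, not values from the specification.

```python
# Illustrative mapping from a fused 0-1 risk score to the four steatosis
# classes. Thresholds are assumed placeholders, not clinically validated.

SEVERITY = ["normal", "mild", "moderate", "severe"]

def classify(risk_score: float) -> str:
    """Bucket a 0-1 risk score into a steatosis grade."""
    if risk_score < 0.25:
        return "normal"
    if risk_score < 0.5:
        return "mild"
    if risk_score < 0.75:
        return "moderate"
    return "severe"
```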
A feedback loop updates the model using outcomes from confirmed cases. This allows the system to continuously improve prediction accuracy and adapt to evolving patient populations.
Domain adaptation methods are employed to reduce scanner dependency. Imaging data acquired from different ultrasound devices may have varying quality and parameters. By applying adaptation techniques, the model maintains consistent performance across diverse imaging conditions, ensuring reproducibility.
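The specification uses CycleGAN for this adaptation; a far simpler stand-in, shown here only to illustrate the goal, is per-scanner z-score normalization, which maps features acquired on each device onto a common scale.

```python
import numpy as np

# Simplified stand-in for CycleGAN-style domain adaptation: z-score
# normalize each feature within each scanner's own data so that feature
# distributions are comparable across devices.

def normalize_per_scanner(feats: np.ndarray, scanner_ids: np.ndarray) -> np.ndarray:
    out = feats.astype(float).copy()
    for sid in np.unique(scanner_ids):
        mask = scanner_ids == sid
        mu = out[mask].mean(axis=0)
        sigma = out[mask].std(axis=0)
        out[mask] = (out[mask] - mu) / np.where(sigma == 0, 1.0, sigma)
    return out
```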
The system further provides interpretability through feature attribution methods such as importance scoring. Clinicians can view which features contributed most to the classification, improving trust and aiding clinical decision-making.
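One simple importance-scoring approach in the spirit of the SHAP-based interpretability described above is permutation importance: a feature matters if shuffling it degrades the model's accuracy. The sketch below assumes a generic callable model and is illustrative, not the patented method.

```python
import numpy as np

# Permutation importance sketch: the accuracy drop caused by shuffling a
# feature column measures that feature's contribution to predictions.

def permutation_importance(model, X, y, rng=None):
    rng = rng or np.random.default_rng(0)
    base = np.mean(model(X) == y)         # baseline accuracy
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])             # break the feature-label link
        scores.append(base - np.mean(model(Xp) == y))
    return np.array(scores)
```

Features with near-zero scores contribute little; large positive scores flag the drivers of a prediction, which clinicians can then inspect.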
A centralized dashboard presents predictions, risk levels, and interpretability outputs. It allows integration into hospital information systems, telemedicine workflows, and radiology platforms.
Data security and patient privacy are maintained through encryption, access control, and audit trails. The system is designed to comply with healthcare data protection standards, ensuring safe deployment.
The invention thereby transforms NAFLD diagnosis by offering an accurate, non-invasive, multimodal system that reduces reliance on invasive biopsy and enhances clinical confidence.
Best Method of Working
The best method of working involves deploying the system as a clinical decision support tool integrated with hospital infrastructure. Patient data including adiposity measures, ultrasound images, and elastography values are collected and processed by respective modules. The fusion and classification modules generate predictions in real time, which are displayed on a clinician dashboard with interpretability scores.
Feedback from confirmed diagnoses is fed back into the model, enhancing its predictive capabilities. The system can be deployed on cloud-based servers or within hospital networks, ensuring scalability. Integration with telemedicine platforms allows remote diagnosis and monitoring of patients. This method ensures reproducibility, scanner independence, and improved diagnostic reliability for large patient populations.

Claims: 1. A system for non-invasive classification of non-alcoholic fatty liver disease, comprising:
o a clinical data acquisition module configured to collect adiposity-based parameters including waist-hip ratio, visceral fat index, and body fat percentage;
o an imaging module configured to extract structural and textural features from liver ultrasound scans;
o an elastography module configured to provide quantitative fat deposition measures;
o a fusion module configured to integrate multimodal features using attention-based learning mechanisms;
o a classification module configured to categorize disease severity levels based on fused features;
o a feedback loop configured to update learning models using outcomes from confirmed diagnoses;
o a domain adaptation module configured to reduce variability across imaging scanners;
o an interpretability module configured to generate feature attribution outputs;
o a security framework configured to maintain encryption, authentication, and access control;
wherein all modules operate together to provide accurate, non-invasive, and reproducible NAFLD classification.
2. The system as claimed in claim 1, wherein the clinical data acquisition module substitutes body mass index with adiposity-based metrics for improved diagnostic precision.
3. The system as claimed in claim 1, wherein the imaging module extracts echogenicity, texture, and structural features of the liver.
4. The system as claimed in claim 1, wherein the elastography module applies quantitative controlled attenuation parameters for fat assessment.
5. The system as claimed in claim 1, wherein the fusion module employs attention-based weighting to prioritize informative features across modalities.
6. A method for non-invasive classification of non-alcoholic fatty liver disease, comprising the steps of:
o collecting clinical data including waist-hip ratio, visceral fat index, and body fat percentage;
o processing liver ultrasound scans to extract structural and textural imaging features;
o acquiring elastography-based fat deposition parameters;
o fusing multimodal features into a unified representation;
o classifying patients into disease severity categories based on fused features;
o updating classification models with confirmed diagnostic outcomes;
o applying domain adaptation techniques to minimize scanner dependency;
o generating interpretability outputs identifying key predictive features;
o securing all patient data with encryption and role-based access control.
7. The method as claimed in claim 6, wherein clinical adiposity measures replace body mass index to provide greater diagnostic reliability.
8. The method as claimed in claim 6, wherein the fusion step integrates features through attention-based multimodal learning.
9. The method as claimed in claim 6, wherein interpretability outputs provide feature importance rankings to aid clinical understanding.
10. The method as claimed in claim 6, wherein deployment is performed within hospital systems or telemedicine platforms for scalable diagnosis.

Documents

Application Documents

# Name Date
1 202541089578-STATEMENT OF UNDERTAKING (FORM 3) [19-09-2025(online)].pdf 2025-09-19
2 202541089578-REQUEST FOR EARLY PUBLICATION(FORM-9) [19-09-2025(online)].pdf 2025-09-19
3 202541089578-POWER OF AUTHORITY [19-09-2025(online)].pdf 2025-09-19
4 202541089578-FORM-9 [19-09-2025(online)].pdf 2025-09-19
5 202541089578-FORM FOR SMALL ENTITY(FORM-28) [19-09-2025(online)].pdf 2025-09-19
6 202541089578-FORM 1 [19-09-2025(online)].pdf 2025-09-19
7 202541089578-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [19-09-2025(online)].pdf 2025-09-19
8 202541089578-EVIDENCE FOR REGISTRATION UNDER SSI [19-09-2025(online)].pdf 2025-09-19
9 202541089578-EDUCATIONAL INSTITUTION(S) [19-09-2025(online)].pdf 2025-09-19
10 202541089578-DRAWINGS [19-09-2025(online)].pdf 2025-09-19
11 202541089578-DECLARATION OF INVENTORSHIP (FORM 5) [19-09-2025(online)].pdf 2025-09-19
12 202541089578-COMPLETE SPECIFICATION [19-09-2025(online)].pdf 2025-09-19