
Computational Intelligence Based On Disease And Symptoms Using Machine Learning Techniques

Abstract: The present invention provides a hybrid diagnostic system integrating machine learning and deep learning techniques for comprehensive disease classification. By combining K-Nearest Neighbors (KNN) for structured patient data analysis and Convolutional Neural Networks (CNN) with U-Net for medical image segmentation, the system achieves enhanced diagnostic accuracy. Advanced preprocessing techniques, including normalization and oversampling, improve model performance, while a decision fusion mechanism integrates structured and unstructured data for a final classification output. Cloud-based deployment ensures scalability, security, and continuous updates. The proposed system significantly improves diagnostic accuracy, making it a robust and efficient tool for automated medical analysis.


Patent Information

Application #
Filing Date
20 February 2025
Publication Number
10/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. DR. VIJAYA CHANDRA JADALA
SR UNIVERSITY, ANANTHASAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. KUSHI RAJ KANCHU
SR UNIVERSITY, ANANTHASAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
3. PAGADALA ANANYA
SR UNIVERSITY, ANANTHASAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
4. RANGA VIHASITH
SR UNIVERSITY, ANANTHASAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:
FIELD OF THE INVENTION
This invention pertains to the application of machine learning and deep learning techniques for medical diagnostics. By leveraging K-Nearest Neighbors (KNN) for structured data and Convolutional Neural Networks (CNN) for image analysis, the framework bridges structured and unstructured data processing, enhancing disease diagnosis accuracy in diverse medical scenarios.
BACKGROUND OF THE INVENTION
Healthcare has witnessed transformative advancements with the adoption of machine learning (ML) and deep learning (DL) technologies. Traditional diagnostic systems often struggle to analyze complex datasets, which include structured records of symptoms and unstructured medical images. While ML models like K-Nearest Neighbors (KNN) excel in structured data classification, they are limited in handling image-based tasks. Conversely, DL models, particularly Convolutional Neural Networks (CNNs), have demonstrated exceptional performance in medical imaging by extracting hierarchical features. However, standalone methods may lack the versatility required for comprehensive diagnostics.
Current approaches face challenges such as class imbalance, limited annotated datasets, and scalability. The integration of U-Net for segmentation and CNN for feature extraction has enabled detailed analysis of medical images, while KNN effectively processes structured datasets. This dual-model framework combines the strengths of ML and DL, addressing key limitations.
This invention bridges the gap between structured and unstructured data, providing a scalable and robust diagnostic tool. It demonstrates high accuracy in both data types, enabling medical practitioners to diagnose diseases effectively. With robust preprocessing methods and model optimization, the framework overcomes traditional barriers, paving the way for enhanced diagnostic accuracy and patient outcomes.
Existing diagnostic systems in healthcare rely heavily on either traditional machine learning (ML) or deep learning (DL), each with distinct strengths and limitations. ML techniques like K-Nearest Neighbors (KNN), Decision Trees, and Random Forests excel in analyzing structured datasets but fall short when processing unstructured data such as medical images. Conversely, DL models, particularly Convolutional Neural Networks (CNNs), excel in image-based tasks but may lack the flexibility required for structured data analysis.
Standalone approaches struggle with data diversity, class imbalance, and scalability. Traditional ML methods, while simple and interpretable, often fail to generalize well in high-dimensional datasets. DL methods, although powerful, demand large annotated datasets and substantial computational resources, which can be challenging in medical scenarios.
The proposed methodology integrates KNN and CNN within a single framework, addressing these limitations. KNN processes structured disease-symptom data, achieving high accuracy (92%), while CNN, combined with U-Net, handles medical image segmentation and classification, achieving a validation accuracy of 87%. U-Net’s segmentation capabilities ensure precise localization of diseased regions, and CNN’s feature extraction offers robust image analysis.
Data preprocessing, including label encoding, normalization, and augmentation, plays a vital role in enhancing model performance. Oversampling techniques mitigate class imbalance, while dropout layers in CNN reduce overfitting risks.
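The preprocessing steps above can be illustrated with a minimal, library-free sketch. This is an illustrative approximation only: the specification does not disclose the exact scaling or oversampling procedure, so min-max normalization and random duplication of minority-class samples are assumed here as representative techniques.

```python
import random

def min_max_normalize(values):
    """Illustrative normalization: scale numeric values into [0, 1].
    (The patent does not specify the scaling method; min-max is assumed.)"""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def random_oversample(samples, labels):
    """Illustrative class balancing: duplicate minority-class samples
    until every class matches the size of the largest class."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        padded = group + [random.choice(group) for _ in range(target - len(group))]
        out_samples.extend(padded)
        out_labels.extend([y] * target)
    return out_samples, out_labels
```

After oversampling, every class contributes the same number of training samples, which is the imbalance mitigation the specification describes.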
By bridging structured and unstructured data processing, the proposed framework provides a scalable, accurate, and versatile solution for disease diagnostics. Compared to existing methodologies, it demonstrates superior generalization, robustness, and applicability across diverse datasets. This innovation transforms medical diagnostics, offering a comprehensive tool adaptable to various healthcare needs.
Metric            KNN (Structured Data)   CNN (Image Data)
Accuracy          92%                     87%
Precision         90%                     N/A (Binary)
Recall            88%                     N/A (Binary)
F1-Score          89%                     N/A (Binary)
Overfitting Risk  Low                     Minimal
The primary objective of this invention is to develop an integrated diagnostic framework combining machine learning and deep learning techniques for accurate disease and symptom classification. It aims to enhance diagnostic accuracy by processing both structured and unstructured medical data, addressing the limitations of standalone models. The system ensures precise analysis through U-Net-based segmentation and CNN-driven feature extraction, complemented by KNN classification for structured data. Robust preprocessing methods, including normalization and augmentation, improve model reliability and generalization. This innovative approach bridges structured and image-based data processing, providing a scalable, efficient solution for comprehensive and automated healthcare diagnostics.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The present invention provides a hybrid diagnostic framework that integrates machine learning and deep learning techniques for enhanced disease classification and medical image analysis. By combining KNN for structured data processing and CNN with U-Net for image segmentation, the system overcomes the limitations of standalone diagnostic approaches.
The framework processes structured and unstructured medical data through distinct yet interconnected components. The KNN module analyzes structured patient data, achieving a high classification accuracy of 92%. CNN, complemented by U-Net segmentation, processes medical images, ensuring precise diseased region localization with an accuracy of 87%.
A key innovation of the system is its robust preprocessing pipeline. Data normalization and augmentation techniques enhance feature extraction, while oversampling addresses class imbalance. Dropout layers in CNN prevent overfitting, allowing for better generalization across varied datasets.
The system operates in a modular fashion, where structured and unstructured data inputs are preprocessed separately before being fed into respective ML and DL models. Results from both models are integrated using a decision fusion mechanism, providing a final, reliable disease classification output.
To ensure scalability and real-world applicability, the invention supports cloud-based deployment, enabling remote diagnostics and automated updates. This hybrid diagnostic system improves medical decision-making by offering a comprehensive, automated, and highly accurate disease classification tool suitable for healthcare professionals and research institutions.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
This invention introduces a dual-model diagnostic framework that integrates machine learning (KNN) and deep learning (CNN) techniques to address diverse medical data challenges. The system leverages KNN for structured data classification and CNN with U-Net for image segmentation and feature extraction. By combining these methodologies, the invention ensures high accuracy in both structured and unstructured data processing. Key innovations include robust preprocessing methods like normalization and augmentation, which improve model training and generalization. U-Net facilitates precise localization of diseased regions in images, enhancing diagnostic reliability. This integrated framework bridges traditional ML and DL strengths, overcoming limitations of standalone models. It provides a scalable, versatile solution for automated disease diagnosis, paving the way for improved healthcare outcomes.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: BLOCK DIAGRAM OF THE PROPOSED SYSTEM
The block diagram represents the proposed diagnostic framework, showcasing key components: data preprocessing, U-Net-based image segmentation, CNN-based feature extraction, and KNN classification for structured data. Preprocessing ensures uniformity and robustness, while U-Net segments diseased regions in medical images. CNN extracts hierarchical features, aiding disease identification. KNN classifies structured symptom data, integrating seamlessly with image-based diagnostics. This architecture highlights the dual-model framework's versatility, bridging structured and unstructured data for comprehensive disease diagnostics.
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first," "second," "third," and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The hybrid diagnostic system consists of multiple interconnected modules designed to handle structured and unstructured medical data. The primary components include a structured data classifier (KNN), a deep learning-based image processing unit (CNN with U-Net), and a decision fusion mechanism that integrates both outputs.
The structured data classifier utilizes KNN to analyze patient symptoms, medical history, and demographic details. The data preprocessing stage normalizes input values, encodes categorical variables, and applies feature selection techniques to enhance classification accuracy. KNN assigns new cases based on the majority class of the nearest neighbors, ensuring robust symptom-based diagnosis.
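The majority-vote classification described above can be sketched in a few lines of plain Python. This is a minimal illustration of the general KNN technique, not the system's actual implementation; Euclidean distance and k=3 are assumed, and the feature vectors and class labels are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote over the k nearest training
    vectors, using Euclidean distance as the similarity metric."""
    # Sort all training points by distance to the query.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    # Tally the labels of the k nearest neighbors and return the majority.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

A new case is thus assigned the class held by most of its nearest neighbors, which is the symptom-based assignment rule the specification describes.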
For unstructured data, the CNN-U-Net module processes medical images to identify and segment diseased regions. The CNN architecture extracts hierarchical features, while U-Net refines segmentation through its encoder-decoder structure. The segmented output highlights abnormalities, aiding radiologists in medical assessments.
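The encoder-decoder principle behind U-Net can be illustrated with a toy, library-free sketch: the encoder reduces spatial resolution while retaining salient values, and the decoder restores resolution. This is only a conceptual illustration of downsampling and upsampling on a 2-D grid; a real U-Net uses learned Conv2D/Conv2DTranspose layers and skip connections, none of which are reproduced here.

```python
def max_pool_2x(img):
    """Encoder-style step: 2x2 max pooling halves each spatial
    dimension while keeping the strongest response in each block."""
    return [
        [max(img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1])
         for c in range(0, len(img[0]), 2)]
        for r in range(0, len(img), 2)
    ]

def upsample_2x(img):
    """Decoder-style step: nearest-neighbour upsampling doubles each
    spatial dimension, restoring the original resolution."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                     # repeat each row
    return out
```

Chaining the two restores a full-resolution map in which strong responses mark candidate regions, loosely mirroring how segmentation localizes abnormalities.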
The integration of structured and unstructured data is achieved through a decision fusion mechanism. Outputs from KNN and CNN-U-Net undergo weighted aggregation, ensuring that both numerical symptom data and visual image analysis contribute to the final diagnostic decision.
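The weighted aggregation step can be sketched as follows. The weights, class labels, and score values are hypothetical placeholders: the specification states that outputs undergo weighted aggregation but does not disclose the weighting scheme, so equal-by-default weights over per-class scores are assumed here.

```python
def fuse_decisions(knn_scores, cnn_scores, w_knn=0.5, w_cnn=0.5):
    """Illustrative decision fusion: combine per-class scores from the
    structured-data classifier and the image model by weighted sum,
    then return the top-scoring class and the fused score table."""
    classes = set(knn_scores) | set(cnn_scores)
    fused = {
        c: w_knn * knn_scores.get(c, 0.0) + w_cnn * cnn_scores.get(c, 0.0)
        for c in classes
    }
    return max(fused, key=fused.get), fused
```

For example, with weights 0.6/0.4 a class scored 0.7 by KNN and 0.4 by the CNN fuses to 0.58, so both the numerical symptom analysis and the image analysis influence the final call.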
To enhance model performance, data preprocessing techniques such as normalization, augmentation, and class balancing are employed. Oversampling increases the representation of minority classes, while dropout layers in CNN prevent overfitting, ensuring better generalization.
The system is designed for scalability, supporting cloud-based storage and inference. Data security is ensured through encryption and compliance with healthcare regulations. Automated model updates allow continuous improvement based on new medical data and evolving diagnostic techniques.
The hybrid framework outperforms standalone ML and DL models by leveraging their respective strengths. KNN achieves an accuracy of 92% in structured data classification, while CNN with U-Net reaches 87% validation accuracy in medical image analysis. The combined system improves diagnostic reliability and adaptability, making it an essential tool for modern healthcare applications.
ALGORITHM OF PROPOSED SYSTEM:
· Segmentation Using U-Net:
  • Input medical images are passed through the U-Net model.
  • The encoder captures spatial features via Conv2D layers, and the decoder reconstructs detailed pixel-level regions using Conv2DTranspose layers.
· Feature Extraction:
  • The segmented output is processed through CNN layers to extract hierarchical features such as edges and textures.
· KNN Classification:
  • Structured data and extracted features are input to the KNN model.
  • The algorithm classifies diseases based on similarity metrics between feature vectors, ensuring accurate predictions for structured and image-based data.
Claims:
1. A computerized diagnostic system for medical disease classification and analysis, comprising:
a structured data processing module utilizing a K-Nearest Neighbors (KNN) algorithm configured to classify patient symptoms and structured medical records, wherein the KNN module receives structured input data, normalizes it, applies label encoding, and performs similarity-based classification to generate a disease prediction;
an image processing module incorporating a Convolutional Neural Network (CNN) integrated with a U-Net model, wherein the U-Net model segments medical images to isolate regions of interest, and the CNN extracts features from segmented images for disease classification;
a data preprocessing unit configured to perform label encoding, normalization, data augmentation, and oversampling to enhance model accuracy and mitigate class imbalance issues;
a model training and validation unit configured to optimize the CNN and KNN models through iterative training, wherein dropout layers are applied in the CNN model to reduce overfitting and validation accuracy is computed to assess model performance;
an inference engine configured to combine structured and unstructured data processing outputs to generate a comprehensive diagnostic report, wherein the KNN-based structured data classification and CNN-based image classification are integrated to produce a unified disease diagnosis;
an output module configured to present diagnostic results, segmented medical images, and classification confidence scores to healthcare professionals through a user interface, wherein the output is accessible via a cloud-based or on-premise healthcare platform.
2. The system as claimed in claim 1, wherein the KNN module classifies structured patient data, including symptoms, demographic details, and medical history, with high accuracy.
3. The system as claimed in claim 1, wherein the CNN module extracts hierarchical features from medical images, providing robust image-based disease detection.
4. The system as claimed in claim 1, wherein the U-Net model enhances medical image segmentation, ensuring precise localization of diseased regions.
5. The system as claimed in claim 1, wherein data preprocessing techniques including normalization, augmentation, and oversampling improve model accuracy and generalization.
6. The system as claimed in claim 1, wherein a decision fusion mechanism integrates structured and unstructured data analysis results to generate a final disease classification.
7. The system as claimed in claim 1, wherein dropout layers in CNN mitigate overfitting, ensuring model robustness in real-world medical applications.
8. The system as claimed in claim 1, wherein cloud-based deployment enables remote diagnostics, data security, and automated updates for enhanced scalability.

Documents

Application Documents

# Name Date
1 202541014662-STATEMENT OF UNDERTAKING (FORM 3) [20-02-2025(online)].pdf 2025-02-20
2 202541014662-REQUEST FOR EARLY PUBLICATION(FORM-9) [20-02-2025(online)].pdf 2025-02-20
3 202541014662-POWER OF AUTHORITY [20-02-2025(online)].pdf 2025-02-20
4 202541014662-FORM-9 [20-02-2025(online)].pdf 2025-02-20
5 202541014662-FORM FOR SMALL ENTITY(FORM-28) [20-02-2025(online)].pdf 2025-02-20
6 202541014662-FORM 1 [20-02-2025(online)].pdf 2025-02-20
7 202541014662-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [20-02-2025(online)].pdf 2025-02-20
8 202541014662-EVIDENCE FOR REGISTRATION UNDER SSI [20-02-2025(online)].pdf 2025-02-20
9 202541014662-EDUCATIONAL INSTITUTION(S) [20-02-2025(online)].pdf 2025-02-20
10 202541014662-DRAWINGS [20-02-2025(online)].pdf 2025-02-20
11 202541014662-DECLARATION OF INVENTORSHIP (FORM 5) [20-02-2025(online)].pdf 2025-02-20
12 202541014662-COMPLETE SPECIFICATION [20-02-2025(online)].pdf 2025-02-20