
Method And System For Enhanced Skin Cancer Classification Using Deep Learning With Comparative Analysis

Abstract: The present invention discloses a method and system for improving skin cancer classification accuracy using deep learning techniques. The system addresses the challenge of classifying skin lesions as benign or malignant by combining the strengths of multiple deep learning models and conducting a comparative analysis to select the most effective model. The system applies a preprocessing step to enhance image quality, employs the VGG-19 and Inception deep learning models for training and testing, and evaluates their performance through comparative analysis. The model with the highest accuracy is then used for final classification, improving the overall accuracy and reliability of skin cancer detection.


Patent Information

Application #
Filing Date
10 September 2024
Publication Number
38/2024
Publication Type
INA
Invention Field
BIO-CHEMISTRY
Status
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. CHAKRADHAR ADUPA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. YALAMANCHI SRI HARSHA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
3. GANDHAM SATHWIKA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
4. JUNNUTHULA SWATHI
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
5. GOPALAPURAPU KAMAL
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:

FIELD OF THE INVENTION
The present invention relates to the field of medical image analysis and machine learning, particularly addressing the challenges associated with accurate skin cancer classification using deep learning techniques.
BACKGROUND OF THE INVENTION
Skin cancer is a major health concern, and early detection is crucial for successful treatment. Deep learning has shown promise in automating the analysis of skin lesion images for classification. However, existing solutions often rely on single deep learning models and may not achieve optimal accuracy. There is a need for a system that combines the strengths of multiple models and selects the most effective one for improved classification performance.
None of the prior art, either alone or in combination, discloses what the present invention discloses.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The present invention provides a method and system for enhanced skin cancer classification comprising:
1. Data Collection: Gathering a dataset of skin lesion images, including both benign and malignant cases.
2. Preprocessing: Enhancing the quality of the images through techniques like resizing, normalization, and augmentation.
3. Model Training and Testing: Training and testing multiple deep learning models, specifically VGG-19 and Inception, on the preprocessed dataset.
4. Comparative Analysis: Evaluating the performance of the models based on accuracy and other relevant metrics.
5. Model Selection: Selecting the model with the highest accuracy for final classification.
6. Classification: Utilizing the selected model to classify new skin lesion images as benign or malignant.
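The six steps above can be sketched end to end as a minimal pipeline. This is an illustrative stand-in only, not the claimed implementation: the synthetic "images," the majority-class placeholder accuracy, and the mean-pixel classifier are all assumptions made purely so the flow of the method can be shown in runnable form.

```python
import random

def collect_data(n=100):
    """Step 1 (stand-in): synthesize labeled 'images' as flat pixel lists."""
    random.seed(0)
    return [([random.random() for _ in range(16)], random.choice([0, 1]))
            for _ in range(n)]

def preprocess(dataset):
    """Step 2 (stand-in): clamp pixel values into [0, 1]."""
    return [([min(max(p, 0.0), 1.0) for p in img], y) for img, y in dataset]

def train_and_test(dataset, model_names=("VGG-19", "Inception")):
    """Steps 3-4 (stand-in): 'train' each model and report a dummy accuracy.
    A real system would fit each deep network on the training split; here the
    score is the majority-class rate, just to give the selector something to rank."""
    split = int(0.8 * len(dataset))
    train, test = dataset[:split], dataset[split:]
    train_labels = [y for _, y in train]
    majority = max(set(train_labels), key=train_labels.count)
    acc = sum(1 for _, y in test if y == majority) / len(test)
    return {name: acc + 0.01 * i for i, name in enumerate(model_names)}

def select_model(scores):
    """Step 5: keep the model with the highest accuracy."""
    return max(scores, key=scores.get)

def classify(image, model_name):
    """Step 6 (stand-in): threshold the mean pixel value."""
    return "malignant" if sum(image) / len(image) > 0.5 else "benign"

data = preprocess(collect_data())
scores = train_and_test(data)
best = select_model(scores)
print(best, classify(data[0][0], best))
```

Each stand-in body corresponds to one numbered step; the detailed description below expands on what a real implementation of each step involves.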
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It should be appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such detail as to clearly communicate the disclosure. However, the amount of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two operations shown in succession in the figures may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Disclosed herein is a system for classifying skin lesions as benign or malignant, comprising: a data collection module to gather skin lesion images; a preprocessing module to standardize and enhance image quality; a deep learning model trained to classify the images; and a model selection module to evaluate and choose the optimal model based on accuracy and performance metrics.
In another embodiment, the preprocessing module resizes skin lesion images to a standard resolution for uniform model input.
In another embodiment, the preprocessing module applies normalization techniques to adjust pixel values to a consistent range, improving model convergence and performance.
In another embodiment, the system further comprises a data augmentation module that increases the dataset size by applying transformations such as rotation, flipping, scaling, and shifting to the original images.
In another embodiment, the deep learning model includes at least one of the following architectures: VGG-19 or Inception, trained to identify features associated with benign and malignant lesions.
In another embodiment, the model selection module compares the performance of multiple models using metrics including accuracy, precision, recall, and F1-score to select the optimal model.
In another embodiment, the system further comprises a testing module to evaluate the performance of the trained models on a separate testing dataset to determine their ability to generalize to new data.
In another embodiment, the deep learning model is trained for a specific number of epochs, allowing it to learn the distinguishing patterns between benign and malignant lesions.
In another embodiment, the system further comprises a classification module to process new skin lesion images through the selected model and predict whether the lesion is benign or malignant.
In another embodiment, the preprocessing module ensures that new skin lesion images are resized and normalized in the same manner as the training data to maintain consistency during classification.
The present invention provides a method and system for enhanced skin cancer classification comprising:
1. Data Collection and Preprocessing:
The proposed invention begins with the collection of a dataset of skin lesion images, which includes a balanced representation of both benign and malignant cases. This ensures that the dataset is comprehensive and unbiased, allowing the model to learn effectively from various types of skin lesions.
To standardize and improve the quality of the collected images, several preprocessing techniques are applied. These steps include:
• Resizing: All images are resized to a standard resolution to ensure uniformity across the dataset, facilitating more efficient model training and testing.
• Normalization: Pixel values of the images are normalized to ensure that the data is in a consistent range, which helps the model to converge faster and perform better.
• Data Augmentation: Techniques such as rotation, flipping, scaling, and shifting are applied to artificially increase the size and variability of the dataset. This process helps in reducing overfitting and improves the model’s generalization to new, unseen data.
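As an illustrative sketch (not the claimed implementation), the three preprocessing steps can be expressed with NumPy. The 224×224 target resolution, the nearest-neighbour resizer, and the particular flip/rotation augmentations are assumptions chosen for brevity:

```python
import numpy as np

def resize_nearest(img, size=(224, 224)):
    """Resize to a standard resolution via nearest-neighbour sampling.
    (A production system would use a library resizer, e.g. Pillow or OpenCV.)"""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows][:, cols]

def normalize(img):
    """Scale 8-bit pixel values into [0, 1] to help model convergence."""
    return img.astype(np.float32) / 255.0

def augment(img):
    """Yield simple augmented variants: flips and a 90-degree rotation."""
    yield img
    yield img[:, ::-1]   # horizontal flip
    yield img[::-1, :]   # vertical flip
    yield np.rot90(img)  # 90-degree rotation

# A random stand-in for a dermoscopy image at a non-standard resolution.
lesion = np.random.randint(0, 256, (300, 450, 3), dtype=np.uint8)
std = normalize(resize_nearest(lesion))
variants = list(augment(std))
print(std.shape, len(variants))  # (224, 224, 3) 4
```

Augmentation is applied only to the training split; the test images receive just resizing and normalization, so evaluation reflects real, unperturbed inputs.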
2. Model Training and Testing:
Once the dataset has been preprocessed, it is divided into training and testing sets. The training set is used to train multiple deep learning models, while the testing set is used to evaluate their performance.
For this invention, two state-of-the-art deep learning architectures, VGG-19 and Inception, are used for the classification of skin lesions.
• Training: The models are trained on the preprocessed dataset over a specified number of epochs (e.g., 15). During the training phase, the models learn to identify the features and patterns that distinguish benign skin lesions from malignant ones.
• Testing: After training, the models are evaluated using the testing dataset, which consists of images that were not used during training. This evaluation process is critical for assessing how well the models generalize to new data.
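The train-and-evaluate protocol can be sketched with Keras (a framework assumption; the specification does not name one). To keep the sketch fast and self-contained, weights are randomly initialized (`weights=None`), the input resolution is reduced to 75×75 (the minimum InceptionV3 accepts), the data is random noise, and only one epoch is run; real training would use pretrained weights, full-resolution lesion images, and more epochs (e.g. 15, as in the description):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG19, InceptionV3

def build_classifier(backbone_cls, input_shape=(75, 75, 3)):
    """Wrap a convolutional backbone with a binary (benign/malignant) head."""
    backbone = backbone_cls(weights=None, include_top=False,
                            input_shape=input_shape)
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy stand-in data: 8 random "images" with binary labels, split 6/2.
rng = np.random.default_rng(0)
x = rng.random((8, 75, 75, 3), dtype=np.float32)
y = rng.integers(0, 2, 8)
x_train, x_test, y_train, y_test = x[:6], x[6:], y[:6], y[6:]

results = {}
for name, cls in [("VGG-19", VGG19), ("Inception", InceptionV3)]:
    model = build_classifier(cls)
    model.fit(x_train, y_train, epochs=1, verbose=0)  # e.g. 15 epochs in practice
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    results[name] = acc
print(results)
```

Holding the head, optimizer, and data split identical for both backbones is what makes the later comparison fair: any accuracy difference is then attributable to the architecture itself.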
3. Comparative Analysis and Model Selection:
To determine which model performs the best, a comparative analysis is conducted based on several performance metrics, including:
• Accuracy: The ratio of correctly classified instances to the total instances.
• Precision: The proportion of true positive results among all positive predictions.
• Recall: The proportion of true positive results among all actual positives.
• F1-Score: The harmonic mean of precision and recall, providing a balanced measure of the model's performance.
Based on this comparative analysis, the model demonstrating the highest accuracy and balanced performance across other metrics is selected as the optimal model for skin lesion classification.
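As a worked example of the four metrics and the selection rule (the confusion-matrix counts below are invented for illustration, not measured results):

```python
def metrics_from_counts(tp, fp, fn, tn):
    """Compute the four comparison metrics from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical test-set confusion counts for the two candidate models.
candidates = {
    "VGG-19":    metrics_from_counts(tp=80, fp=10, fn=15, tn=95),
    "Inception": metrics_from_counts(tp=85, fp=20, fn=10, tn=85),
}
best = max(candidates, key=lambda m: candidates[m]["accuracy"])
print(best, candidates[best])
```

Note that the two hypothetical models trade off differently: Inception catches more malignant cases (higher recall) while VGG-19 raises fewer false alarms (higher precision and accuracy), which is why the description weighs accuracy alongside the balance of the other metrics.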
4. Classification:
Once the optimal model has been selected, it is deployed to classify new skin lesion images. The process is as follows:
• Preprocessing: New skin lesion images undergo the same preprocessing steps as the training data (resizing, normalization, etc.) to ensure consistency in the input format.
• Prediction: The selected model then classifies the new images as either benign or malignant, providing a prediction of the disease state. This prediction helps in early detection and diagnosis of skin cancer, thereby improving patient outcomes.
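Once preprocessing is reapplied, the prediction step reduces to thresholding the selected model's sigmoid output. The 0.5 cutoff below is an assumption; a clinical deployment would tune it, typically toward higher recall so that fewer malignant lesions are missed:

```python
def predict_label(malignancy_probability, threshold=0.5):
    """Map the model's sigmoid probability of malignancy to a class label.
    The threshold is an illustrative assumption, not a claimed value."""
    return "malignant" if malignancy_probability >= threshold else "benign"

print(predict_label(0.83))  # malignant
print(predict_label(0.12))  # benign
```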
This system offers a robust, automated solution for skin lesion classification, leveraging the power of deep learning to achieve accurate and reliable results. The combination of data augmentation, deep learning, and careful model selection ensures that the invention can effectively aid in the diagnosis of skin cancer.
Claims:

1. A system for classifying skin lesions as benign or malignant, comprising: a data collection module to gather skin lesion images; a preprocessing module to standardize and enhance image quality; a deep learning model trained to classify the images; and a model selection module to evaluate and choose the optimal model based on accuracy and performance metrics.
2. The system as claimed in claim 1, wherein the preprocessing module resizes skin lesion images to a standard resolution for uniform model input.
3. The system as claimed in claim 1, wherein the preprocessing module applies normalization techniques to adjust pixel values to a consistent range, improving model convergence and performance.
4. The system as claimed in claim 1, further comprising a data augmentation module that increases the dataset size by applying transformations such as rotation, flipping, scaling, and shifting to the original images.
5. The system as claimed in claim 1, wherein the deep learning model includes at least one of the following architectures: VGG-19 or Inception, trained to identify features associated with benign and malignant lesions.
6. The system as claimed in claim 1, wherein the model selection module compares the performance of multiple models using metrics including accuracy, precision, recall, and F1-score to select the optimal model.
7. The system as claimed in claim 1, further comprising a testing module to evaluate the performance of the trained models on a separate testing dataset to determine their ability to generalize to new data.
8. The system as claimed in claim 1, wherein the deep learning model is trained for a specific number of epochs, allowing it to learn the distinguishing patterns between benign and malignant lesions.
9. The system as claimed in claim 1, further comprising a classification module to process new skin lesion images through the selected model and predict whether the lesion is benign or malignant.
10. The system as claimed in claim 1, wherein the preprocessing module ensures that new skin lesion images are resized and normalized in the same manner as the training data to maintain consistency during classification.

Documents

Application Documents

# Name Date
1 202441068270-STATEMENT OF UNDERTAKING (FORM 3) [10-09-2024(online)].pdf 2024-09-10
2 202441068270-REQUEST FOR EARLY PUBLICATION(FORM-9) [10-09-2024(online)].pdf 2024-09-10
3 202441068270-POWER OF AUTHORITY [10-09-2024(online)].pdf 2024-09-10
4 202441068270-FORM-9 [10-09-2024(online)].pdf 2024-09-10
5 202441068270-FORM FOR SMALL ENTITY(FORM-28) [10-09-2024(online)].pdf 2024-09-10
6 202441068270-FORM 1 [10-09-2024(online)].pdf 2024-09-10
7 202441068270-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-09-2024(online)].pdf 2024-09-10
8 202441068270-EVIDENCE FOR REGISTRATION UNDER SSI [10-09-2024(online)].pdf 2024-09-10
9 202441068270-EDUCATIONAL INSTITUTION(S) [10-09-2024(online)].pdf 2024-09-10
10 202441068270-DRAWINGS [10-09-2024(online)].pdf 2024-09-10
11 202441068270-DECLARATION OF INVENTORSHIP (FORM 5) [10-09-2024(online)].pdf 2024-09-10
12 202441068270-COMPLETE SPECIFICATION [10-09-2024(online)].pdf 2024-09-10
13 202441068270-FORM 18 [17-02-2025(online)].pdf 2025-02-17