
System for Automated Colorectal Cancer Detection Using CT Scan Images

Abstract: A system for automated colorectal cancer detection using CT scan images comprising a data acquisition module for converting 3D CT images into 2D image slices, a preprocessing module for image resizing, normalization, augmentation and grayscale conversion, a dataset splitting module for labelling images as normal or cancerous and partitioning them into training, validation and testing sets, a U-Net model definition module for structuring a neural network for image segmentation and classification, a model training module for training on labeled data, a model evaluation module for evaluating performance using accuracy, precision and recall, a prediction module for diagnostic classification of new images, a deployment module for deploying the trained model for clinical use and generating diagnostic reports, a continuous learning module for collecting new labeled image data to improve diagnostic accuracy, a Grad-CAM visualization module for highlighting diagnostically relevant regions within images and an integration module for providing interoperability with health record systems and facilitating report generation and access by clinical personnel.


Patent Information

Filing Date
06 August 2025
Publication Number
36/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

SR University
Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.

Inventors

1. Huma Farha
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
2. Nousheen Parveen
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
3. Adeba Saniya
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
4. Mohammad Ibadur Rahman
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
5. Dr. L.M.I.Leo Joseph
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
6. Dr. Sandip Bhattacharya
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
7. Dr. Sudip Bhattacharya
AIIMS, Deoghar, Jharkhand, India.

Specification

Description:

FIELD OF THE INVENTION

[0001] The present invention relates to a system for automated colorectal cancer detection using CT scan images that is developed to enhance diagnostic accuracy and efficiency by converting CT scan 3D images into 2D slices, standardizing images and performing segmentation and classification to aid clinicians in making faster and more informed decisions.

BACKGROUND OF THE INVENTION

[0002] Colorectal cancer is one of the most common and life-threatening diseases affecting people worldwide. Early and accurate detection is beneficial in improving patient outcomes and reducing mortality rates. The detection of colorectal cancer using CT scan images has presented several challenges in clinical practice. Radiologists are required to manually analyze hundreds of CT slices, which is time-consuming and prone to human error due to fatigue or subjective interpretation. Early-stage abnormalities or subtle cancerous regions are easily missed without enhanced imaging support, standardized image handling and accurate segmentation and classification of cancerous tissue. Proper detection therefore requires identifying cancerous regions so as to support clinicians in making timely and informed decisions based on CT scan image analysis.

[0003] Traditional processes relied on manual interpretation by radiologists, which presents several difficulties. Radiologists visually examine a large number of 2D slices generated from 3D CT scans to identify abnormal regions such as tumors or polyps. This is time-consuming, labor-intensive and highly dependent on the experience and concentration of the medical professional. Variability in image quality and patient anatomy further complicates accurate diagnosis. Inconsistent interpretation between different radiologists also affects diagnostic outcomes. Additionally, manual review does not scale well with increasing imaging demands in clinical settings. These challenges highlight the limitations of traditional diagnostic approaches and the need for more efficient, consistent and accurate methods to assist in the early detection of colorectal cancer.

[0004] WO2005030266A3 relates to optical imaging of colorectal cancer (CRC) in patients. The contrast agents may be used in diagnosis of CRC, for follow up of progress in disease development, and for follow up of treatment of CRC. Further, the invention provides methods for optical imaging of CRC in patients.

[0005] US20060292078A1 relates to optical imaging of colorectal cancer (CRC) in patients. The contrast agents may be used in diagnosis of CRC, for follow up of progress in disease development, and for follow up of treatment of CRC. Further, the invention provides methods for optical imaging of CRC in patients.

[0006] Conventionally, many systems have been developed for detecting colorectal cancer using CT scan images; however, the devices mentioned in the prior art have limitations pertaining to converting 3D images into 2D image slices, standardizing images for better analysis, performing image segmentation and classification for precise detection and effective clinical decision-making, and integrating with hospital databases to allow direct access to diagnostic results.

[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a system capable of converting 3D medical images into 2D slices, applying standardization techniques for easy analysis, performing image segmentation and classification to accurately identify abnormalities for improved clinical decision-making, and enabling integration with hospital databases to provide direct access to diagnostic results.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a system that is capable of early and accurate detection of colorectal cancer in CT scan images by highlighting abnormal regions to help doctors make faster and more informed decisions during diagnosis.

[0010] Another object of the present invention is to develop a system that is capable of standardizing CT scan images by resizing, normalizing pixel values, adjusting brightness, and converting images to grayscale when required for analysis.

[0011] Another object of the present invention is to develop a system that is capable of defining a neural network tailored for image segmentation and classification for precise detection of normal and cancerous tissue, enabling effective clinical decision-making.

[0012] Another object of the present invention is to develop a system that is capable of utilizing the training subset to train the model on labeled data to learn meaningful patterns and features from images of cancerous and non-cancerous regions in CT scan diagnostics.

[0013] Yet another object of the present invention is to develop a system that is capable of integrating with hospital systems and databases, allowing medical staff to access diagnostic results directly.

[0014] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0015] The present invention relates to a system for automated colorectal cancer detection using CT scan images that is developed to train the model on labeled data to learn and recognize patterns distinguishing normal from cancerous tissue, enabling reliable diagnostics.

[0016] According to an embodiment of the present invention, a system for automated colorectal cancer detection using CT scan images comprises a data acquisition module configured to collect CT scan image data and convert three-dimensional image data into two-dimensional image slices, a preprocessing module operatively connected to the data acquisition module and configured to resize images to a fixed dimension, normalize pixel values, apply data augmentation techniques including rotation, flipping and brightness adjustment, and convert images to grayscale when required, a dataset splitting module operatively connected to the preprocessing module and configured to label the processed images into “Normal” and “Cancerous” categories and to split the labeled data into training, validation and testing subsets, a U-Net model definition module operatively connected to the dataset splitting module and configured to define a U-Net convolutional neural network architecture for image segmentation and classification, a model training module operatively connected to the U-Net model definition module and the training subset and configured to train the model on labeled data, and a continuous learning module operatively connected to the model training module and the deployment module and configured to collect new labeled image data from the deployed environment and to retrain and update the model using the collected data to improve diagnostic accuracy and adaptability over time, the continuous learning module being further configured to automatically initiate model retraining based on performance degradation or the periodic availability of newly labeled datasets from deployed clinical usage.

[0017] According to another embodiment of the present invention, the system further comprises a model evaluation module operatively connected to the trained model and the testing subset to evaluate the performance of the trained model using one or more evaluation metrics including accuracy, precision and recall, a prediction module operatively connected to the trained model to perform diagnostic classification on new CT scan images, a deployment module operatively connected to the prediction module and configured to deploy the trained model for clinical use and to generate real-time diagnostic reports, a Grad-CAM visualization module operatively connected to the prediction module to generate visual explanations of the model's output in the form of heat maps and to highlight regions within the CT scan images that most significantly contributed to the model's diagnostic decision, and an integration module operatively connected to the deployment module and to the hospital's database to enable seamless integration of the system with hospital networks, provide interoperability with electronic health record systems and facilitate real-time report generation and access by clinical personnel, the integration module including one or more application programming interfaces (APIs) to enable bidirectional communication with external diagnostic systems and hospital databases.

[0018] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates a flow chart depicting a system for automated colorectal cancer detection using CT scan images.

DETAILED DESCRIPTION OF THE INVENTION

[0020] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0021] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0022] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0023] The present invention relates to a system for automated colorectal cancer detection using CT scan images that is developed to enhance diagnostic accuracy and efficiency by converting CT scan 3D images into 2D slices, standardizing images, performing segmentation and classification using a model, and integrating with hospital databases to aid clinicians in making faster and more informed decisions.

[0024] Referring to Figure 1, which illustrates a flow chart depicting a system for automated colorectal cancer detection using CT scan images, the system disclosed herein comprises a data acquisition module for collecting raw CT scan image data and converting three-dimensional (3D) scan data into structured two-dimensional (2D) image slices for processing. CT scans generate 3D data as a series of cross-sectional images that represent various tissue densities within the body. The module utilizes imaging protocols to accurately capture high-resolution datasets and employs filtered back projection or iterative reconstruction to generate sequential 2D slices from the 3D images. The 2D slices retain the spatial and anatomical features necessary for accurate diagnosis while reducing the computational complexity involved in analyzing full 3D data. The module ensures consistent orientation and alignment of slices, standardizes image formats and applies artifact correction techniques to improve image clarity.
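The slice-extraction step described above can be sketched in simplified form. The function name and the nested-list representation are illustrative assumptions; a real implementation would read a DICOM series and handle orientation and spacing metadata.

```python
def volume_to_axial_slices(volume):
    """Split a 3D CT volume (depth x height x width, as nested lists)
    into a list of 2D axial slices. Illustrative sketch only: real
    pipelines load DICOM series and respect orientation metadata."""
    return [volume[z] for z in range(len(volume))]

# A toy 2x2x2 "volume": two axial slices of 2x2 pixels each.
toy_volume = [
    [[0, 1], [2, 3]],   # slice z=0
    [[4, 5], [6, 7]],   # slice z=1
]
slices = volume_to_axial_slices(toy_volume)
```

Each returned element is a complete 2D slice that preserves the in-plane anatomy of its cross-section.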

[0025] After generating 2D image slices, a preprocessing module connected to the data acquisition module resizes all image slices to a fixed dimension so that the dataset has a uniform input shape matching the expected input size, and then normalizes pixel values, scaling intensity values to a range that stabilizes training. To improve model generalization and robustness, the module applies data augmentation techniques such as random rotation, horizontal and vertical flipping and brightness adjustment, simulating variations encountered in clinical imaging. The augmentations help prevent overfitting and improve performance on unseen data. When required by clinical or model-specific needs, the module converts color or multi-channel images to grayscale by computing a weighted sum of the red, green and blue (RGB) channels. The module ensures that all input data is standardized, diversified and optimized.
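The three core preprocessing operations, intensity normalization, the weighted RGB-to-grayscale sum, and a flip augmentation, can be sketched as below. The function names and the min-max range are illustrative assumptions; the grayscale weights are the standard ITU-R BT.601 luma coefficients.

```python
def normalize(pixels, lo=0.0, hi=255.0):
    """Scale raw intensity values into [0, 1] (min-max style sketch)."""
    return [(p - lo) / (hi - lo) for p in pixels]

def rgb_to_gray(r, g, b):
    """Weighted RGB-to-grayscale sum using BT.601 luma weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def hflip(image):
    """Horizontal-flip augmentation on a 2D image (list of rows)."""
    return [row[::-1] for row in image]
```

In practice these would operate on whole arrays at once, but the arithmetic is the same per pixel.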

[0026] Once the 2D CT scan input data is standardized, a dataset splitting module connected to the preprocessing module organizes and structures labeled CT scan data for effective model training and evaluation by assigning the label “Normal” or “Cancerous” to each processed image slice based on annotations provided by radiologists or clinical datasets. These labels are essential for supervised learning to distinguish between healthy and cancerous tissue. Once labeled, the module divides the dataset into three key subsets: training, validation and testing.

[0027] The training set is used to teach the model patterns associated with each category, the validation set is used to fine-tune model parameters and prevent overfitting and the testing set is reserved for unbiased performance evaluation. The splitting is done using randomized or stratified sampling methods to maintain class balance and ensure that each subset accurately represents the overall distribution of the data. This ensures that the learning process is robust, reproducible and generalizable, ultimately contributing to a more accurate and clinically reliable diagnostic model.

[0028] After labeling the processed images, a U-Net model definition module connected to the dataset splitting module constructs the architecture of a U-Net convolutional neural network (CNN) for medical image segmentation and classification tasks. The U-Net features a symmetric encoder-decoder structure in which the encoder path captures context through a series of convolutional and pooling layers that reduce spatial dimensions while extracting features. The decoder path in turn performs upsampling and combines high-resolution features from the encoder via skip connections, which help retain spatial details crucial for precise segmentation. This enables pixel-wise classification, allowing the model to identify cancerous regions within CT images; the U-Net model definition module thus provides an efficient framework tailored for high-accuracy segmentation and classification in medical imaging applications.
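The symmetric encoder-decoder shape of a U-Net can be illustrated with simple shape bookkeeping: each encoder level halves the spatial size and doubles the channel count, and the decoder mirrors this back up. The input size, depth and base channel count below are illustrative assumptions, not values specified by the invention.

```python
def unet_shapes(size=256, depth=4, base=64):
    """Trace (spatial_size, channels) through a symmetric U-Net.
    Encoder: each level halves spatial size and doubles channels.
    Decoder: mirrors the encoder levels via upsampling, with skip
    connections joining levels of matching spatial size."""
    enc = [(size // (2 ** d), base * (2 ** d)) for d in range(depth + 1)]
    dec = list(reversed(enc[:-1]))  # decoder retraces the encoder
    return enc, dec

enc, dec = unet_shapes()
```

For a 256-pixel input this yields an encoder path 256→128→64→32→16 with 64→1024 channels at the bottleneck, and a decoder that ends back at 256 pixels, where the final pixel-wise classification is made.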

[0029] Once the neural network (CNN) is constructed for medical image segmentation and classification tasks, a model training module connected to both the U-Net model definition module and the training subset optimizes the neural network using labeled CT scan images. Once the U-Net architecture is defined, this module initializes the training process by feeding the model input images and the corresponding labels “Normal” or “Cancerous.” During training, the model processes each image through forward propagation to predict segmentation maps or classification labels, which are then compared against the true labels so that the model learns complex patterns distinguishing cancerous tissue from normal tissue. Once the model achieves satisfactory performance metrics, it is saved for evaluation and deployed for clinical inference and diagnostics.
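The predict/compare/update cycle described above can be shown in miniature with gradient descent on a one-parameter model. This is a deliberately tiny stand-in for the U-Net optimization, not the actual training code; the learning rate and toy data are arbitrary assumptions.

```python
def train_step(w, samples, lr=0.1):
    """One forward/compare/update cycle: predict w*x, compare against
    the label y, and descend the mean-squared-error gradient. A toy
    stand-in for the far larger U-Net optimization loop."""
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    return w - lr * grad

samples = [(1.0, 2.0), (2.0, 4.0)]  # toy labels follow y = 2x
w = 0.0
for _ in range(100):
    w = train_step(w, samples)
```

After repeated cycles the parameter converges to the value that best explains the labels, which is the same mechanism, scaled up by many orders of magnitude, that lets the U-Net learn to separate cancerous from normal tissue.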

[0030] After that, a model evaluation module connected to the trained U-Net model and the testing subset assesses the model's diagnostic performance. After training is complete, the model is applied to the test dataset comprising CT images to predict whether each image is “Normal” or “Cancerous.” The predicted outputs are compared with the true labels to compute key evaluation metrics such as accuracy (the overall correctness of predictions), precision (the proportion of true positives among all predicted positives, indicating how reliably the model identifies cancerous cases) and recall (the proportion of true positives among all actual positives, reflecting the model's sensitivity to detecting cancer). These metrics provide a comprehensive understanding of the model's strengths and weaknesses, which is especially critical in medical diagnostics where false negatives or false positives have significant consequences. The module ensures that only models meeting high clinical accuracy standards proceed to deployment, safeguarding the reliability and effectiveness of diagnostics in healthcare.
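The three metrics defined above follow directly from counts of true/false positives and negatives, and can be computed as below. The function name is an illustrative choice; the formulas are the standard definitions.

```python
def evaluate(predicted, actual, positive="Cancerous"):
    """Accuracy, precision and recall from predicted vs. true labels."""
    tp = sum(p == positive == a for p, a in zip(predicted, actual))       # true positives
    fp = sum(p == positive != a for p, a in zip(predicted, actual))       # false positives
    fn = sum(a == positive != p for p, a in zip(predicted, actual))       # false negatives
    correct = sum(p == a for p, a in zip(predicted, actual))
    return {
        "accuracy": correct / len(actual),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

pred = ["Cancerous", "Normal", "Cancerous", "Normal"]
true = ["Cancerous", "Normal", "Normal", "Cancerous"]
m = evaluate(pred, true)
```

Here one false positive and one false negative out of four cases give accuracy, precision and recall of 0.5 each; in a clinical setting recall (sensitivity to cancer) is typically the metric that must not degrade.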

[0031] A prediction module is connected to the trained U-Net model to perform diagnostic classification on new, incoming CT scan images. When a new patient scan is input, the module preprocesses the data in alignment with training standards and then forwards the image through the trained model. The model outputs a segmentation highlighting regions indicative of “Normal” or “Cancerous” tissue. The prediction module interprets these outputs to generate diagnostic information and enhances clinical usability by accompanying predictions with visual overlays on the original scan, allowing radiologists to quickly localize suspicious regions. The module is designed for speed and accuracy, enabling integration into clinical workflows, and ensures that the trained model assists in early detection, diagnosis and decision-making in healthcare environments.

[0032] A Grad-CAM visualization module connected to the prediction module enhances the interpretability of the model's diagnostic outputs by generating visual explanations in the form of heat maps. Using Gradient-weighted Class Activation Mapping (Grad-CAM), it identifies and highlights the specific regions within a CT scan image that most significantly influenced the model's classification decision, whether “Normal” or “Cancerous.” Grad-CAM works by calculating the gradients of the predicted class score with respect to the final convolutional layers of the trained U-Net model. These gradients are used to compute a weighted combination of the feature maps, producing a localization map that reveals the areas the model focused on during prediction. The resulting heat map is then overlaid on the original CT image, providing clinicians with a visual guide for validating and cross-referencing the model's predictions against clinical observations. The Grad-CAM visualization module thereby supports safer and more informed clinical use.
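The core Grad-CAM step, a ReLU-clipped weighted combination of feature maps, can be sketched as follows. The gradient-derived weights are passed in as given values here; computing them requires backpropagation through the actual network, which this sketch deliberately omits.

```python
def grad_cam(feature_maps, weights):
    """Combine feature maps with gradient-derived weights and apply
    ReLU, yielding a coarse localization heat map (the Grad-CAM core
    step). feature_maps: list of K maps, each H x W; weights: K scalars."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, alpha in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU keeps only regions that positively support the predicted class
    return [[max(0.0, v) for v in row] for row in cam]

maps = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [0.0, 0.0]]]
cam = grad_cam(maps, [1.0, -0.5])
```

Negatively weighted activations are zeroed out by the ReLU, so the final map highlights only the pixels that pushed the model toward its decision; this map is then upscaled and overlaid on the original CT slice.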

[0033] A deployment module connected to the prediction module integrates the trained model into clinical environments, enabling diagnostic support and automated report generation. Once the model is evaluated and validated, the deployment module facilitates integration with hospital systems such as Picture Archiving and Communication Systems (PACS) and Electronic Health Records (EHR). When new CT scan images are processed, the prediction module classifies the scans, and the deployment module automatically compiles the results, including diagnostic labels, confidence scores and any associated visual outputs such as Grad-CAM heat maps, into structured diagnostic reports. These reports are formatted in compliance with medical documentation standards and made accessible to radiologists, physicians and other clinical personnel through secure user interfaces or hospital databases.

[0034] A continuous learning module is connected to both the model training module and the deployment module. Once the model is deployed in a real-world environment, this module collects newly labeled CT scan images, including diagnostic outcomes verified by radiologists or pathology results from actual patient cases, and systematically organizes this data. The continuous learning module enhances the model's diagnostic accuracy, robustness and generalization capabilities, particularly under changes in imaging protocols, disease variations or patient demographics, and also implements performance monitoring that initiates automatic retraining when a decline in model accuracy or the availability of new data is detected. This ensures that the system remains clinically relevant and up-to-date. Ultimately, the continuous learning module transforms the system into a self-improving diagnostic tool, aligning with the dynamic nature of medical practice and promoting long-term reliability and adaptability.

[0035] The continuous learning module is further enhanced with the capability to automatically initiate model retraining based on specific triggers, such as performance degradation or the periodic availability of newly labeled datasets from ongoing clinical deployment. It continuously monitors the model's diagnostic performance through key metrics such as accuracy, precision and recall. When a statistically significant decline in performance is detected, potentially due to shifts in imaging protocols, patient demographics or emerging disease patterns, the module flags the need for retraining. Additionally, it is configured to initiate updates at regular intervals when a predefined volume of newly labeled CT scan data, verified by clinical experts, becomes available.
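The two retraining triggers, performance degradation and accumulation of newly verified labels, can be expressed as a simple decision function. The threshold values below are illustrative placeholders, not clinically validated settings.

```python
def should_retrain(baseline_acc, recent_acc, new_labels,
                   drop_threshold=0.05, batch_threshold=500):
    """Decide whether to trigger retraining: either diagnostic
    accuracy has degraded beyond a tolerated drop, or enough newly
    verified labels have accumulated since the last update.
    Thresholds are illustrative placeholders, not validated values."""
    degraded = baseline_acc - recent_acc > drop_threshold
    enough_data = new_labels >= batch_threshold
    return degraded or enough_data
```

In a deployed system such a check would run on a schedule, with the baseline accuracy fixed at deployment time and the recent accuracy computed over a rolling window of expert-verified cases.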

[0036] This retraining process is carried out by feeding the combined historical and new data back into the model training module, ensuring that the updated model reflects the most current and diverse clinical knowledge. The retrained model then replaces the previous version within the deployment pipeline with minimal disruption. This adaptive approach ensures the system remains accurate, reliable and responsive to changing clinical environments, transforming the model into a dynamic learning system capable of long-term self-improvement and sustained clinical relevance.

[0037] An integration module, operatively connected to the deployment module and the hospital's database, ensures secure incorporation of the diagnostic system into existing hospital infrastructure. This module is responsible for establishing interoperability with Electronic Health Record (EHR) systems, Picture Archiving and Communication Systems (PACS) and other clinical data repositories through standardized protocols such as HL7, FHIR or DICOM. It facilitates real-time bidirectional data exchange, allowing patient CT scan images to be retrieved and processed, and the diagnostic results, including labels, heat maps and structured reports, to be stored and accessed directly within the hospital's records.

[0038] The integration module also enables automated real-time report generation, making diagnostic insights immediately available to radiologists, oncologists and other clinical personnel via secure dashboards or interfaces. The integration module reduces manual data handling, minimizes delays in diagnosis and ensures that generated insights are readily accessible and used by healthcare professionals, thereby enhancing efficiency, accuracy and decision-making in patient care.

[0039] The integration module includes one or more Application Programming Interfaces (APIs) to facilitate bidirectional communication between the diagnostic system and external hospital databases or diagnostic platforms. These APIs serve as standardized, secure gateways that allow the system to receive input data such as CT scan images and patient metadata from hospital systems and, in return, send back diagnostic outputs including classification results, segmentation maps, Grad-CAM heat maps and auto-generated reports.

[0040] The APIs support real-time data exchange, error handling, version control and access management to ensure integration with clinical workflows while maintaining data security and compliance with healthcare regulations such as HIPAA. This connectivity not only allows healthcare professionals to access insights directly within their existing interfaces but also enables the system to fetch updated clinical labels and outcomes, supporting continuous learning. Overall, the inclusion of robust APIs empowers the integration module to function as a critical bridge between diagnostics and the complex, multi-system environment of modern hospitals.
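The shape of the diagnostic output such an API might return can be sketched as a JSON payload. All field names here are illustrative assumptions; a real deployment would follow FHIR or DICOM-SR conventions rather than this ad-hoc schema.

```python
import json

def build_diagnostic_payload(patient_id, label, confidence, heatmap_ref):
    """Assemble the kind of JSON body an integration API might return
    to a hospital system. Field names are illustrative, not a standard;
    real systems would use FHIR resources or DICOM Structured Reports."""
    report = {
        "patient_id": patient_id,
        "classification": label,          # "Normal" or "Cancerous"
        "confidence": round(confidence, 3),
        "gradcam_overlay": heatmap_ref,   # reference to the heat-map image
    }
    return json.dumps(report)

payload = build_diagnostic_payload("anon-001", "Cancerous", 0.9271,
                                   "overlay_001.png")
```

The hospital side would parse this payload, attach the referenced overlay to the imaging study, and surface the result in the clinician's dashboard.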

[0041] The present invention works best in the following manner, where the system disclosed herein comprises the data acquisition module, which collects CT scan image data and converts three-dimensional data into two-dimensional image slices suitable for processing. The preprocessing module, connected to the data acquisition module, standardizes the images by resizing them to fixed dimensions, normalizing pixel values, applying data augmentation techniques such as rotation, flipping and brightness adjustment, and converting images to grayscale when required. Processed images are then sent to the dataset splitting module, which labels them into “Normal” and “Cancerous” categories and partitions the dataset into training, validation and testing subsets. The U-Net model definition module defines a convolutional neural network tailored for image segmentation and classification, and the model training module utilizes the training subset to train the model on labeled data. Once trained, the model evaluation module assesses performance on the testing subset using metrics such as accuracy, precision and recall. The prediction module applies the trained model to classify new CT images in real time and the deployment module delivers the model for clinical use, generating diagnostic reports. The system includes the continuous learning module, which monitors model performance and collects new labeled data from clinical use to periodically retrain and update the model. The Grad-CAM visualization module provides visual heat maps to highlight influential regions in the scan, while the integration module, equipped with APIs, ensures bidirectional communication with hospital systems for seamless workflow integration.

[0042] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.

Claims:

1) A system for automated colorectal cancer detection using CT scan images, comprising:

i) a data acquisition module configured to collect CT scan image data and convert three-dimensional image data into two-dimensional image slices;

ii) a preprocessing module operatively connected to the data acquisition module and configured to resize images to a fixed dimension, normalize pixel values, apply data augmentation techniques including rotation, flipping, and brightness adjustment, and convert images to grayscale when required;

iii) a dataset splitting module operatively connected to the preprocessing module and configured to label the processed images into “Normal” and “Cancerous” categories and to split the labeled data into training, validation, and testing subsets;

iv) a U-Net model definition module operatively connected to the dataset splitting module and configured to define a U-Net convolutional neural network architecture for image segmentation and classification;

v) a model training module operatively connected to the U-Net model definition module and the training subset and configured to train the model on labeled data;

vi) a model evaluation module operatively connected to the trained model and the testing subset and configured to evaluate the performance of the trained model using one or more evaluation metrics including accuracy, precision, and recall; and

vii) a prediction module operatively connected to the trained model and configured to perform diagnostic classification on new CT scan images; and a deployment module operatively connected to the prediction module and configured to deploy the trained model for clinical use and to generate real-time diagnostic reports.

2) The system as claimed in claim 1, wherein a continuous learning module is operatively connected to the model training module and the deployment module and configured to collect new labeled image data from the deployed environment and to retrain and update the model using the collected data to improve diagnostic accuracy and adaptability over time.

3) The system as claimed in claim 1, wherein a Grad-CAM visualization module is operatively connected to the prediction module and configured to generate visual explanations of the model's output in the form of heat maps and to highlight regions within the CT scan images that most significantly contributed to the model's diagnostic decision.

4) The system as claimed in claim 1, wherein an integration module is operatively connected to the deployment module and to the hospital's database to enable seamless integration of the system with hospital networks, provide interoperability with electronic health record systems, and facilitate real-time report generation and access by clinical personnel.

5) The system as claimed in claim 1, wherein the continuous learning module is further configured to automatically initiate model retraining based on performance degradation or the periodic availability of newly labeled datasets from deployed clinical usage.

6) The system as claimed in claim 1, wherein the integration module includes one or more application programming interfaces (APIs) to enable bidirectional communication with external diagnostic systems and hospital databases.

Documents

Application Documents

# Name Date
1 202541074900-STATEMENT OF UNDERTAKING (FORM 3) [06-08-2025(online)].pdf 2025-08-06
2 202541074900-REQUEST FOR EARLY PUBLICATION(FORM-9) [06-08-2025(online)].pdf 2025-08-06
3 202541074900-PROOF OF RIGHT [06-08-2025(online)].pdf 2025-08-06
4 202541074900-POWER OF AUTHORITY [06-08-2025(online)].pdf 2025-08-06
5 202541074900-FORM-9 [06-08-2025(online)].pdf 2025-08-06
6 202541074900-FORM FOR SMALL ENTITY(FORM-28) [06-08-2025(online)].pdf 2025-08-06
7 202541074900-FORM 1 [06-08-2025(online)].pdf 2025-08-06
8 202541074900-FIGURE OF ABSTRACT [06-08-2025(online)].pdf 2025-08-06
9 202541074900-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-08-2025(online)].pdf 2025-08-06
10 202541074900-EVIDENCE FOR REGISTRATION UNDER SSI [06-08-2025(online)].pdf 2025-08-06
11 202541074900-EDUCATIONAL INSTITUTION(S) [06-08-2025(online)].pdf 2025-08-06
12 202541074900-DRAWINGS [06-08-2025(online)].pdf 2025-08-06
13 202541074900-DECLARATION OF INVENTORSHIP (FORM 5) [06-08-2025(online)].pdf 2025-08-06
14 202541074900-COMPLETE SPECIFICATION [06-08-2025(online)].pdf 2025-08-06