Abstract: The present disclosure provides a system (100) for the detection of diseases, comprising a computing device (102) configured to capture facial images of individuals and a server (104) in communication with said computing device (102). Said server (104) is configured to receive the captured facial images, extract data related to facial landmarks from said images, where said facial landmarks include the eyes, under-eye areas, cheeks, and lines around the nose, and house a storage unit containing a database of facial images associated with known medical conditions and a processing unit. Furthermore, said server (104) hosts a deep learning algorithm adapted to analyze the extracted landmark data by comparing it with the database to identify patterns and features indicative of specific diseases. A diagnostic module (106) is incorporated within said server (104), designed to utilize the analysis provided by the deep learning algorithm to predict the presence of diseases based on the identified facial patterns and features. This system enables non-invasive, efficient, and early detection of diseases, facilitating timely intervention and treatment. Fig. 1
Description:
SYSTEM AND METHOD FOR THE DETECTION OF DISEASES
Field of the Invention
The present disclosure generally relates to medical diagnostics through digital image processing. Further, the present disclosure particularly relates to a system for the detection of diseases by analyzing facial images.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
Since the claimed invention revolves around a system (100) for the detection of diseases using facial image analysis, the following background focuses on the general area of medical diagnostics through digital image processing and the application of artificial intelligence in healthcare.
The field of medical diagnostics has witnessed significant advancements over the years, transitioning from traditional symptom-based assessments to more sophisticated, technology-driven approaches. Among these, the utilization of digital image processing for disease detection has emerged as a promising area. Traditional diagnostic methods often require invasive procedures or the analysis of biological samples, which might not always be feasible or efficient for disease detection.
The application of digital image processing in medical diagnostics offers a non-invasive alternative that can provide immediate insights into potential health conditions based on visible symptoms. This approach is particularly beneficial for diseases that manifest physical symptoms observable through changes in facial features, such as certain dermatological conditions, neurological disorders, and systemic diseases that affect the skin and facial musculature.
Advancements in computer vision and deep learning have further enhanced the capabilities of digital image processing. Deep learning algorithms, especially those based on neural networks, have shown exceptional proficiency in recognizing patterns and features in images that are indicative of specific health conditions. These algorithms can analyze vast datasets of facial images, identifying subtle changes that may elude human observers.
However, the effective implementation of such technologies faces several challenges. The accuracy of disease detection significantly depends on the quality of the facial images captured and the subsequent extraction of relevant data. Factors such as lighting, image resolution, and the subject's orientation can impact the data quality. Furthermore, the creation and maintenance of a comprehensive database of facial images associated with known medical conditions are crucial for the deep learning algorithm to perform comparative analyses.
Another critical aspect is the need for continuous learning and improvement of the deep learning algorithm. As new data becomes available, the system must adapt and refine its diagnostic capabilities. This necessitates a robust infrastructure for data storage, processing, and analysis, which can handle the complexity and volume of data involved in such applications.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and/or techniques for the detection of diseases. The proposed system (100) offers a non-invasive, efficient, and potentially more accurate method for disease detection through the analysis of facial images, leveraging advanced deep learning algorithms. This system addresses the limitations of current diagnostic methods by providing an accessible, non-invasive, and rapid means of identifying diseases at an early stage, thereby facilitating timely intervention and treatment.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
The present disclosure outlines a sophisticated system designed for the detection of diseases, leveraging the capabilities of digital image processing and deep learning technologies. At the core of this system is a computing device, specifically configured to capture facial images of individuals. These images are essential for the initial data collection phase, which focuses on the visual cues present in the facial features that may indicate underlying medical conditions.
In an embodiment, the captured images are transmitted to a server that plays a pivotal role in the subsequent analysis. This server is equipped with advanced processing capabilities to receive and preprocess the facial images. Preprocessing includes several critical steps such as normalization, resizing, and contrast adjustment, aimed at enhancing the accuracy of landmark extraction. This meticulous preparation of the images is foundational for the reliable extraction of data related to facial landmarks, specifically targeting areas of potential diagnostic significance such as the eyes, under-eye areas, cheeks, and lines around the nose, as well as the forehead, chin, and skin colour.
In an embodiment, the server houses a storage unit that contains a comprehensive database of facial images associated with known medical conditions, alongside a processing unit. This database serves as a reference point for the deep learning algorithm hosted on the server, which is tasked with analyzing the extracted landmark data. By comparing the data with the information stored in the database, the algorithm identifies patterns and features indicative of specific diseases, showcasing the power of machine learning in medical diagnostics.
In an embodiment, to further enhance the diagnostic accuracy and adaptability of the system, the server integrates machine learning techniques alongside the primary deep learning algorithm. This integration allows for a more nuanced extraction and analysis of facial landmark data. Additionally, the server employs multiple neural network architectures within the deep learning framework, facilitating a comparative analysis of the extracted data. This approach not only increases the reliability of disease prediction but also enhances the robustness of the entire system against varying conditions and data quality.
In an embodiment, the diagnostic capabilities of the system are encapsulated within a diagnostic module incorporated into the server. This module utilizes the analysis provided by the deep learning algorithm to predict the presence of diseases. A user interface included in the diagnostic module displays the diagnostic results with some of the symptoms and recommended actions to healthcare providers. This feature promotes efficient interpretation of the data and facilitates subsequent medical intervention, bridging the gap between automated disease detection and practical healthcare application.
In an embodiment, the system’s accessibility and utility are further enhanced by a mobile application that interfaces with the computing device. This application allows for the remote capture of facial images and their subsequent submission to the server, broadening the scope of the system’s application and making it more user-friendly. Additionally, the server includes a process for the anonymization of facial images prior to their analysis, ensuring patient privacy and adherence to data protection standards.
In an embodiment, the server is configured to generate alerts for healthcare providers when a disease prediction meets or exceeds a predefined confidence threshold. This feature ensures that high-risk detections are promptly brought to the attention of medical professionals, allowing for timely clinical intervention. This proactive alert system underscores the system’s potential to significantly impact public health by facilitating the detection of diseases.
The method for the detection of diseases using the system comprises several steps, starting with the capture of facial images by the computing device. The images are then transmitted to the server, where facial landmark data is extracted and analyzed by the deep learning algorithm. This analysis, grounded in the comparison against a database of facial images associated with known medical conditions, leads to the prediction of diseases. The diagnostic predictions are displayed through the diagnostic module on the server, thereby facilitating non-invasive and early detection of diseases.
Brief Description of the Drawings
The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a system aimed at revolutionizing the detection of diseases through the analysis of facial images, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a method for the detection of diseases using the system, in accordance with the embodiments of the present disclosure.
FIG. 3 illustrates a method flow diagram for the detection of diseases, in accordance with the embodiments of the present disclosure.
FIG. 4 illustrates an exemplary graphical user interface (GUI), in accordance with the embodiments of the present disclosure.
Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a system (100) aimed at revolutionizing the detection of diseases through the analysis of facial images, in accordance with the embodiments of the present disclosure. This system is comprised of a computing device (102) and a server (104), each playing a crucial role in the detection process. The computing device (102) is specifically configured to capture high-quality facial images of individuals. These images serve as the primary data source for the system’s diagnostic processes.
Upon capturing the facial images, the computing device (102) transmits these images to the server (104). This server (104) is engineered to perform several key operations on the received images. Initially, it receives the facial images, marking the beginning of the diagnostic process. The server (104) is then responsible for extracting data related to facial landmarks from the images. The focus is on specific patterns within facial regions that are often indicators of various diseases—namely, the eyes, under-eye areas, cheeks, and lines around the nose. This selective extraction is vital for identifying potential signs of medical conditions accurately.
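By way of non-limiting illustration only, the selective extraction of landmark regions described above may be sketched as follows. The landmark coordinates, image size, and patch size below are hypothetical assumptions for illustration; the disclosure does not specify a particular landmark-detection model, and a practical implementation would obtain coordinates from such a model.

```python
import numpy as np

# Hypothetical landmark coordinates (row, col) for a 256x256 face image.
# A real implementation would derive these from a landmark-detection model.
LANDMARKS = {"left_eye": (96, 88), "right_eye": (96, 168), "nose_line": (160, 128)}

def extract_regions(image: np.ndarray, half: int = 16) -> dict:
    """Crop a square patch around each landmark region for downstream analysis."""
    patches = {}
    for name, (r, c) in LANDMARKS.items():
        patches[name] = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
    return patches
```

Each returned patch isolates one diagnostically relevant region (e.g. an under-eye area), so that subsequent analysis operates on focused data rather than the whole image.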
The server (104) houses a sophisticated storage unit, which contains a comprehensive database of facial images associated with known medical conditions. This database acts as a benchmark for identifying diseases in new images. Additionally, the server encompasses a processing unit dedicated to managing and analyzing the vast amounts of data involved in the detection process.
Central to the server’s (104) functionality is the hosting of a deep learning algorithm. This algorithm is specifically adapted to analyze the extracted landmark data by comparing it with the existing database to identify patterns and features indicative of specific diseases. The deep learning algorithm represents the system’s analytical core, utilizing advanced computational techniques to discern subtle indications of health issues from facial features.
A diagnostic module (106) is seamlessly incorporated within the server (104). This module is the culmination point of the system’s analysis, where the data processed by the deep learning algorithm is translated into actionable insights. It is designed to predict the presence of diseases based on the identified facial patterns and features, providing a crucial tool for disease detection. The diagnostic module (106) enables the system to offer preliminary assessments of individuals’ health, potentially identifying diseases at an early stage when they are more easily treatable.
The system (100) further enhances the accuracy of landmark extraction through preprocessing techniques applied to the received facial images. These techniques include normalization, resizing, and contrast adjustment, ensuring that the images are in an optimal state for analysis. Such preprocessing is instrumental in maximizing the effectiveness of the subsequent landmark data extraction and analysis.
To ensure the system remains at the forefront of diagnostic capability, the server’s (104) storage unit is configured to periodically update its database with new facial images from recent diagnostics. This continuous learning and improvement of the deep learning algorithm ensure that the system’s diagnostic accuracy improves over time, adapting to new patterns and features associated with diseases.
Employing multiple neural network architectures within the deep learning algorithm allows for a comparative analysis of the extracted landmark data. This multiplicity increases the reliability and robustness of the system’s disease prediction capabilities by leveraging diverse analytical perspectives.
The diagnostic module (106) includes a user interface designed for healthcare providers and patients. This interface displays diagnostic results and recommended actions, promoting efficient interpretation and facilitating timely medical intervention. The inclusion of a mobile application further enhances the system’s accessibility and utility, allowing a patient to remotely capture facial images and submit them to the server (104), where a healthcare provider can analyse them remotely.
Integrating machine learning techniques alongside deep learning for facial landmark data extraction and analysis improves the system’s adaptability and diagnostic accuracy. Moreover, the server (104) ensures patient privacy and data protection through the anonymization of facial images prior to analysis.
The system is configured to generate alerts for healthcare providers when a disease prediction meets or exceeds a predefined confidence threshold. This feature ensures prompt clinical attention to high-risk detections, underscoring the system’s potential to significantly impact public health through disease detection.
In an embodiment, the server (104) is further configured to preprocess the received facial images to enhance the accuracy of landmark extraction. Techniques such as normalization, resizing, and contrast adjustment are employed to ensure that the images are optimally prepared for subsequent analysis. Such preprocessing steps are critical for maintaining the integrity of the data extracted from the facial images. Normalization adjusts the scale of the image data to a standard range, resizing modifies the dimensions of the images to a uniform size, and contrast adjustment enhances the clarity of the images, ensuring that key facial landmarks can be accurately identified. The application of these preprocessing techniques by the server (104) significantly improves the reliability of landmark extraction, which is foundational for the effective analysis of facial images for disease prediction.
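By way of non-limiting illustration only, the three preprocessing steps named above (normalization, resizing, contrast adjustment) may be sketched as follows in pure NumPy. The target size, nearest-neighbour resizing, and contrast gain of 1.5 are illustrative assumptions, not limitations of the system (100).

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 128) -> np.ndarray:
    """Normalize, resize (nearest-neighbour), and contrast-adjust a greyscale image."""
    img = image.astype(np.float64)
    # Normalization: rescale pixel values to the standard range [0, 1].
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    # Resizing: map every output pixel to its nearest source pixel,
    # giving a uniform size x size input for landmark extraction.
    rows = np.arange(size) * img.shape[0] // size
    cols = np.arange(size) * img.shape[1] // size
    img = img[rows][:, cols]
    # Contrast adjustment: stretch values about the mean by a fixed gain.
    img = np.clip((img - img.mean()) * 1.5 + img.mean(), 0.0, 1.0)
    return img
```

In practice an image library (e.g. OpenCV's interpolated resizing or histogram equalization) would replace the hand-rolled steps, but the sequence of operations is the same.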
In another embodiment, the server's (104) storage unit is configured to periodically update the database with new facial images from recent diagnostics. This process enables continuous learning and improvement of the deep learning algorithm, ensuring that the system (100) remains adaptive and up-to-date with the latest diagnostic data. The inclusion of new facial images enriches the database, providing a broader basis for comparison during the analysis of landmark data. Such continuous updating is instrumental in enhancing the algorithm’s ability to accurately identify patterns and features indicative of specific diseases, thereby improving the overall diagnostic accuracy of the system (100).
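By way of non-limiting illustration only, the periodic database update described above may be sketched as follows. The in-memory dictionary and SHA-256 deduplication key are illustrative assumptions; an actual storage unit would use a persistent database.

```python
import hashlib

class DiagnosticDatabase:
    """Sketch of a storage unit accumulating facial-image records with condition labels."""

    def __init__(self):
        self._records = {}  # image hash -> (image bytes, condition label)

    def update(self, new_records):
        """Periodic update: add new (image_bytes, condition) pairs, skipping duplicates."""
        added = 0
        for image_bytes, condition in new_records:
            key = hashlib.sha256(image_bytes).hexdigest()
            if key not in self._records:
                self._records[key] = (image_bytes, condition)
                added += 1
        return added

    def __len__(self):
        return len(self._records)
```

Deduplicating by content hash keeps repeated submissions of the same image from skewing the comparison database as new diagnostics are folded in.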
In a further embodiment, the server (104) employs multiple neural network architectures within the deep learning algorithm for a comparative analysis of the extracted landmark data. This approach increases the reliability and robustness of disease prediction by leveraging the strengths of different neural network models. Each architecture may be uniquely suited to identifying certain patterns or features within the data, and their combined use allows for a more comprehensive analysis. Such comparative analysis significantly enhances the system’s (100) capability to accurately predict diseases based on facial landmarks, by effectively utilizing the diverse analytical power of multiple neural networks.
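By way of non-limiting illustration only, combining multiple neural network architectures may be sketched as simple probability averaging over models, as below. The two stand-in "architectures" and the averaging rule are illustrative assumptions; the disclosure does not prescribe a particular ensembling scheme.

```python
import numpy as np

def ensemble_predict(models, features):
    """Average per-class probabilities from several architectures; return (class, confidence)."""
    probs = np.mean([m(features) for m in models], axis=0)
    label = int(np.argmax(probs))
    return label, float(probs[label])

# Two stand-in "architectures": any callable mapping features -> class probabilities
# (e.g. a CNN and a different network topology trained on the same landmark data).
def net_a(x):
    return np.array([0.7, 0.2, 0.1])

def net_b(x):
    return np.array([0.5, 0.4, 0.1])
```

Averaging tempers the idiosyncratic errors of any single architecture, which is the reliability benefit the embodiment describes.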
In an embodiment, the diagnostic module (106) on the server (104) includes a user interface for displaying diagnostic results and recommended actions to healthcare providers. This interface promotes efficient interpretation of the data and facilitates subsequent medical intervention. By presenting the results and recommendations in a clear and accessible manner, the user interface ensures that healthcare providers can quickly understand and act upon the diagnostic insights provided by the system (100). This feature is essential for integrating the system (100) into clinical workflows, thereby enhancing the practical utility of the technology in a healthcare setting.
In an embodiment, the system (100) further comprises a mobile application that interfaces with the computing device (102), allowing for the remote capture of facial images and their subsequent submission to the server (104). This mobile application significantly enhances the system’s (100) accessibility and utility by enabling users to capture and submit facial images without the need for specialized equipment or direct interaction with healthcare providers. The ability to remotely capture and transmit images for analysis makes the system (100) more adaptable to various contexts, including telemedicine and home-based care, thereby broadening its applicability and reach.
In an embodiment, the server (104) integrates machine learning techniques alongside deep learning for the extraction and analysis of facial landmark data. This integration further improves the system’s (100) adaptability and diagnostic accuracy. Machine learning techniques complement the capabilities of deep learning by providing additional methods for data analysis, such as feature selection and optimization algorithms, which can enhance the precision of disease prediction. The combined use of machine learning and deep learning enriches the analytical framework of the system (100), offering a more nuanced and adaptable approach to the detection of diseases.
In an embodiment, the server (104) includes a process for the anonymization of facial images prior to their analysis, ensuring patient privacy and adherence to data protection standards. This process involves removing or obscuring personal identifiers from the images, thereby safeguarding the confidentiality of individuals' information. The commitment to privacy and data protection is paramount, especially in the context of healthcare, and the anonymization process ensures that the system (100) operates within ethical and legal frameworks concerning patient data.
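By way of non-limiting illustration only, the anonymization process may be sketched as stripping direct identifiers from a record and replacing the patient identifier with a salted one-way hash. The field names and the salted-SHA-256 scheme are illustrative assumptions, not a prescribed implementation.

```python
import hashlib

# Hypothetical direct identifiers to remove before analysis.
IDENTIFIERS = {"name", "address", "phone", "email"}

def anonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers; pseudonymize the patient ID with a salted one-way hash."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    clean["patient_id"] = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    return clean
```

The salted hash lets the server link repeat submissions from the same patient without ever storing the original identifier alongside the image.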
In an embodiment, the server (104) is configured to generate alerts for healthcare providers when a disease prediction meets or exceeds a predefined confidence threshold. This configuration ensures prompt clinical attention to high-risk detections, facilitating timely intervention. The generation of alerts based on a confidence threshold allows healthcare providers to prioritize cases that require immediate attention, thereby optimizing the allocation of resources and ensuring that individuals with a higher likelihood of disease receive prompt care. This feature underscores the system’s (100) potential to significantly impact public health through disease detection and intervention.
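By way of non-limiting illustration only, the confidence-threshold alerting may be sketched as follows. The threshold value of 0.85 and the record fields are illustrative assumptions; the disclosure leaves the threshold predefined but unspecified.

```python
from typing import Optional

def maybe_alert(prediction: dict, threshold: float = 0.85) -> Optional[str]:
    """Return an alert message when the confidence meets or exceeds the threshold."""
    if prediction["confidence"] >= threshold:
        return (f"ALERT: {prediction['disease']} predicted with confidence "
                f"{prediction['confidence']:.2f} for case {prediction['case_id']}")
    return None
```

Note the `>=` comparison: the embodiment requires an alert when the prediction "meets or exceeds" the threshold, so a confidence exactly at the threshold also triggers one.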
FIG. 2 illustrates a method (200) for the detection of diseases using the system (100), in accordance with the embodiments of the present disclosure. Capturing (202) involves using a computing device (102) to take facial images of individuals. This step is crucial as it initiates the diagnostic process by providing the raw data necessary for analysis. The quality and clarity of these images directly impact the effectiveness of the subsequent steps. Transmitting (204) encompasses sending the captured images from the computing device (102) to the server (104). This transfer is essential for the analysis of the images, relying on secure and efficient communication protocols to ensure the data's integrity and confidentiality are maintained. Extracting (206) refers to the process on the server (104) where data related to facial landmarks is derived from the images. This step focuses on identifying specific facial regions that are indicative of health conditions, serving as a foundation for the diagnosis. Analyzing (208) is conducted by a deep learning algorithm on the server (104), where the extracted landmark data is compared against a database of facial images associated with known medical conditions. This comparison seeks to identify patterns and features that are indicative of diseases. Predicting (210) is the step where the presence of diseases is determined based on the analysis provided by the deep learning algorithm. This crucial phase translates the complex data analysis into actionable diagnostic information, marking a significant step towards disease detection. Displaying (212) involves presenting the diagnostic predictions through the diagnostic module (106) on the server (104). This final step makes the results accessible, facilitating non-invasive and early detection of diseases by providing clear and actionable insights to healthcare providers.
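By way of non-limiting illustration only, the server-side chain of steps (206) through (212) may be sketched as a simple composition of stages. The stage callables below are hypothetical stand-ins for the extraction, analysis, prediction, and display components.

```python
def run_pipeline(image, extract, analyze, predict, display):
    """Chain the server-side method steps of FIG. 2 in order."""
    landmarks = extract(image)       # Extracting (206): landmark data from the image
    patterns = analyze(landmarks)    # Analyzing (208): compare against the database
    diagnosis = predict(patterns)    # Predicting (210): disease presence from patterns
    return display(diagnosis)        # Displaying (212): surface via diagnostic module (106)
```

Capturing (202) and transmitting (204) occur on the computing device (102) side and are therefore outside this server-side sketch.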
FIG. 3 illustrates a method flow diagram for the detection of diseases, in accordance with the embodiments of the present disclosure. An input image serves as the starting point, wherein the image undergoes a feature extraction process to identify relevant data points. Concurrently, a dataset is collected and annotated, which is then used to train a U-Net model—a type of convolutional neural network known for its efficacy in biomedical image segmentation. This trained U-Net model calculates the size of specific areas within the input image, contributing to the creation of a numeric dataset. Subsequently, another neural network, a Convolutional Neural Network (CNN), is trained with this numeric dataset for refined pattern recognition and feature analysis. Following the training phase, the system undergoes testing to evaluate the model's performance in disease detection tasks. An expert review is conducted to ensure the validity and accuracy of the model's diagnostic capabilities. The final stage of the process solidifies the trained model's parameters, culminating in a saved model ready for practical application in disease detection, effectively streamlining the process from image input to diagnostic outcome.
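By way of non-limiting illustration only, the step in FIG. 3 where the trained U-Net's segmentation output is converted into a numeric dataset may be sketched as summing each binary region mask into a pixel-area feature. The region names are illustrative assumptions; the U-Net and CNN training themselves are omitted.

```python
import numpy as np

def region_areas(masks: dict) -> dict:
    """Turn per-region binary segmentation masks (U-Net output) into numeric area features."""
    return {name: int(mask.sum()) for name, mask in masks.items()}
```

One such feature row per input image, accumulated across the annotated dataset, forms the numeric dataset on which the subsequent CNN is trained.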
FIG. 4 illustrates an exemplary graphical user interface (GUI), in accordance with the embodiments of the present disclosure. In the GUI, users are prompted to upload a facial image, which is then processed to overlay facial landmarks as visualized on the sample image within the interface. Additional input fields allow users to specify their gender and age, which are likely to be important factors in the disease prediction algorithm. Upon entering the required information, the 'Predict' button initiates the analysis, with the result—'Predicted Disease: 1'—displayed below, indicating the identification of a potential disease. A 'Reset' button is available to clear the inputs and start a new session.
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure, as described above, need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary; the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
Claims
I/We claim:
1. A system (100) for the detection of diseases, comprising:
a computing device (102) configured to capture facial images of individuals;
a server (104) in communication with said computing device (102), wherein said server (104) is configured to:
receive the captured facial images from the computing device (102);
extract data related to facial landmarks from said images, wherein said facial landmarks are selected from the eyes, under-eye areas, cheeks, and lines around the nose;
house a storage unit containing a database of facial images associated with known medical conditions and a processing unit;
host a deep learning algorithm that is adapted to analyze the extracted landmark data by comparing it with the database to identify patterns and features indicative of specific diseases; and
a diagnostic module (106) incorporated within the server (104), designed to utilize the analysis provided by the deep learning algorithm to predict the presence of diseases based on the identified facial patterns and features.
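Claim 1 describes comparing extracted landmark data against a database of images associated with known conditions. The sketch below is purely illustrative and is not the patented implementation: it stands in for the deep learning analysis with a simple nearest-neighbour lookup over hypothetical landmark feature vectors (the labels and values are placeholders, not clinical data).

```python
import math

# Hypothetical reference "database": landmark feature vectors labelled with
# known conditions. Values are illustrative placeholders, not clinical data.
REFERENCE_DB = {
    "condition_a": [0.82, 0.31, 0.55, 0.12],
    "condition_b": [0.40, 0.76, 0.22, 0.68],
    "healthy":     [0.50, 0.50, 0.50, 0.50],
}

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_condition(features):
    """Return the database label whose feature vector is closest to `features`."""
    return min(REFERENCE_DB, key=lambda label: euclidean(features, REFERENCE_DB[label]))

label = nearest_condition([0.48, 0.52, 0.49, 0.51])
```

In practice the claimed system would replace this lookup with learned pattern matching; the sketch only shows the shape of the comparison step.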
2. The system (100) of claim 1, wherein the server (104) is further configured to preprocess the received facial images for enhanced accuracy of landmark extraction, including but not limited to normalization, resizing, and contrast adjustment techniques.
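The preprocessing named in this claim (normalization and resizing) can be sketched in a few lines. This is a minimal illustration on a 2-D grayscale image represented as nested lists; a production system would use an image library, and the function names here are the author's own, not part of the claimed system.

```python
def normalize(img):
    """Scale pixel values of a 2-D grayscale image into [0, 1]."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid division by zero on flat images
    return [[(p - lo) / span for p in row] for row in img]

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list to (new_h, new_w)."""
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

img = [[0, 64], [128, 255]]          # tiny hypothetical 2x2 grayscale patch
out = resize_nearest(normalize(img), 4, 4)
```

Min-max normalization also doubles as a crude contrast stretch, the third technique the claim mentions.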
3. The system (100) of claim 1, wherein the server's (104) storage unit is configured to periodically update the database with new facial images from recent diagnostics, enabling continuous learning and improvement of the deep learning algorithm.
4. The system (100) of claim 1, wherein the server (104) employs multiple neural network architectures within the deep learning algorithm for a comparative analysis of the extracted landmark data, thereby increasing the reliability and robustness of disease prediction.
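One common way to combine multiple architectures, as this claim envisions, is score averaging across models. The sketch below assumes each model emits per-class probabilities; the combination rule (simple averaging) is one possibility among many and is not specified by the claim.

```python
def ensemble_predict(model_outputs):
    """Average per-class scores from several models; return (class index, confidence)."""
    n_classes = len(model_outputs[0])
    avg = [sum(out[c] for out in model_outputs) / len(model_outputs)
           for c in range(n_classes)]
    best = max(range(n_classes), key=avg.__getitem__)
    return best, avg[best]

# Three hypothetical models scoring two classes (0 = disease, 1 = no disease):
outputs = [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]]
label, conf = ensemble_predict(outputs)
```

Averaging over architectures tends to smooth out the idiosyncratic errors of any single network, which is the reliability argument the claim makes.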
5. The system (100) of claim 1, wherein the diagnostic module (106) on the server (104) includes a user interface for displaying diagnostic results and recommended actions to healthcare providers, promoting efficient interpretation and subsequent medical intervention.
6. The system (100) of claim 1, further comprising a mobile application that interfaces with the computing device (102), allowing for remote capture of facial images and their subsequent submission to the server (104), thus enhancing the system’s accessibility and utility.
7. The system (100) of claim 1, wherein the server (104) integrates machine learning techniques alongside deep learning for the extraction and analysis of facial landmark data, further improving the system’s adaptability and diagnostic accuracy.
8. The system (100) of claim 1, wherein the server (104) includes a process for the anonymization of facial images prior to their analysis, ensuring patient privacy and adherence to data protection standards.
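One building block of the anonymization this claim refers to is pseudonymizing patient identifiers before analysis. The sketch below uses a salted SHA-256 hash; the identifier and salt values are hypothetical, and full image anonymization (e.g. stripping metadata) would involve additional steps the claim does not enumerate.

```python
import hashlib

def anonymize_id(patient_id, salt):
    """Replace an identifier with a salted SHA-256 pseudonym before analysis."""
    return hashlib.sha256((salt + patient_id).encode("utf-8")).hexdigest()

token = anonymize_id("patient-0042", salt="site-secret")
```

The same (id, salt) pair always yields the same pseudonym, so records remain linkable for continuous learning without exposing the original identifier.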
9. The system (100) of claim 1, wherein the server (104) is configured to generate alerts for healthcare providers when a disease prediction meets or exceeds a predefined confidence threshold, ensuring prompt clinical attention to high-risk detections.
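The threshold-based alerting in this claim reduces to a simple filter over (disease, confidence) pairs. The cut-off value below is a hypothetical placeholder; the claim only requires that some predefined threshold exist.

```python
ALERT_THRESHOLD = 0.85  # hypothetical confidence cut-off

def triage(predictions):
    """Return the (disease, confidence) pairs that meet or exceed the threshold."""
    return [(disease, p) for disease, p in predictions if p >= ALERT_THRESHOLD]

alerts = triage([("condition_a", 0.91), ("condition_b", 0.40), ("condition_c", 0.85)])
```

Note the claim's "meets or exceeds" wording maps to `>=`, so a prediction exactly at the threshold still raises an alert.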
10. A method (200) for the detection of diseases using the system (100), comprising the steps of:
capturing (202) facial images of individuals with the computing device (102);
transmitting (204) the captured images to the server (104);
extracting (206) facial landmark data from the images on the server (104);
analyzing (208) the extracted landmark data with the deep learning algorithm on the server (104) by comparing it against the database of facial images associated with known medical conditions to identify patterns and features indicative of diseases;
predicting (210) the presence of diseases based on the analysis provided by the deep learning algorithm;
displaying (212) the diagnostic predictions through the diagnostic module (106) on the server (104), thereby facilitating non-invasive and efficient detection of diseases.
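The method steps above (extract, analyze, predict) form a linear pipeline, which can be sketched as a composition of interchangeable stages. The stub stages below are hypothetical stand-ins for the server-side components, not the claimed implementations.

```python
def detect(image, extract, analyze, predict):
    """Minimal pipeline mirroring steps 206-210: extract landmark data from an
    image, analyze it, and return a diagnostic prediction."""
    landmarks = extract(image)
    analysis = analyze(landmarks)
    return predict(analysis)

# Stub stages standing in for the server-side components:
result = detect(
    image="captured.png",
    extract=lambda img: [0.1, 0.2],                 # step 206 stand-in
    analyze=lambda lm: {"score": sum(lm)},          # step 208 stand-in
    predict=lambda a: "positive" if a["score"] > 0.25 else "negative",  # step 210
)
```

Passing the stages as parameters keeps each step independently replaceable, matching the modular structure of the claimed method.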
SYSTEM AND METHOD FOR THE DETECTION OF DISEASES
The present disclosure provides a system (100) for the detection of diseases, comprising a computing device (102) configured to capture facial images of individuals and a server (104) in communication with said computing device (102). Said server (104) is configured to receive the captured facial images, extract data related to facial landmarks from said images, where said facial landmarks include the eyes, under-eye areas, cheeks, and lines around the nose, and house a storage unit containing a database of facial images associated with known medical conditions and a processing unit. Furthermore, said server (104) hosts a deep learning algorithm adapted to analyze the extracted landmark data by comparing it with the database to identify patterns and features indicative of specific diseases. A diagnostic module (106) is incorporated within said server (104), designed to utilize the analysis provided by the deep learning algorithm to predict the presence of diseases based on the identified facial patterns and features. This system enables non-invasive and efficient detection of diseases, facilitating timely intervention and treatment.
Fig. 1
Drawings
FIG. 1
FIG. 2
FIG. 3 Overall flow of the process
FIG. 4 User interface of the system
| # | Name | Date |
|---|---|---|
| 1 | 202421033171-OTHERS [26-04-2024(online)].pdf | 2024-04-26 |
| 2 | 202421033171-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf | 2024-04-26 |
| 3 | 202421033171-FORM FOR SMALL ENTITY [26-04-2024(online)].pdf | 2024-04-26 |
| 4 | 202421033171-FORM 1 [26-04-2024(online)].pdf | 2024-04-26 |
| 5 | 202421033171-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf | 2024-04-26 |
| 6 | 202421033171-EVIDENCE FOR REGISTRATION UNDER SSI [26-04-2024(online)].pdf | 2024-04-26 |
| 7 | 202421033171-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf | 2024-04-26 |
| 8 | 202421033171-DRAWINGS [26-04-2024(online)].pdf | 2024-04-26 |
| 9 | 202421033171-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf | 2024-04-26 |
| 10 | 202421033171-COMPLETE SPECIFICATION [26-04-2024(online)].pdf | 2024-04-26 |
| 11 | 202421033171-FORM-9 [07-05-2024(online)].pdf | 2024-05-07 |
| 12 | 202421033171-FORM 18 [08-05-2024(online)].pdf | 2024-05-08 |
| 13 | 202421033171-FORM-26 [12-05-2024(online)].pdf | 2024-05-12 |
| 14 | 202421033171-FORM 3 [13-06-2024(online)].pdf | 2024-06-13 |
| 15 | 202421033171-RELEVANT DOCUMENTS [17-04-2025(online)].pdf | 2025-04-17 |
| 16 | 202421033171-POA [17-04-2025(online)].pdf | 2025-04-17 |
| 17 | 202421033171-FORM 13 [17-04-2025(online)].pdf | 2025-04-17 |