
System And Method For Evaluating Health Conditions Of Subject By Processing Image Using Inference Models

Abstract: The present disclosure provides a system (100) for evaluating one or more health conditions of at least one region of a subject or a target by processing, using a plurality of containerized inference models for image classification, an image of the at least one region of the subject or the target. The system (100) includes a portable edge device (104) that (i) determines the quality of an image of at least one region of the subject (102), (ii) downloads one or more containerized inference models from an application download service or an embedded database (200) based on a health condition to be evaluated and a type of image to be evaluated, and (iii) evaluates the one or more health conditions by concurrently executing the one or more containerized inference models for simultaneous evaluation of one or more features of one health condition, or simultaneous evaluation of one or more health conditions, in a single image. FIG. 1


Patent Information

Filing Date: 15 September 2021
Publication Number: 02/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Grant Date: 2023-01-10

Applicants

SHASHANK GARG
C-21 SMILEE GREENS, HUSKUR ROAD, BANGALORE – 560099.
ARVIND KUMAR SINGH
46, NEWLANDS ROAD, GLASGOW, G43 2JG, UNITED KINGDOM
ISHA GARG
C-21 SMILEE GREENS, HUSKUR ROAD, BANGALORE – 560099.
RAJARAMAN SUBRAMANIAN
#22, GIRI BUILDING, 8TH MAIN, 2ND CROSS, 3RD STAGE, 4TH BLOCK, BASAVESHWARANAGAR, BANGALORE – 560079.
ANANTHA P KINNAL
#14, 8TH MAIN, MC LAYOUT, VIJAYANAGAR, BANGALORE – 560040.

Inventors

1. SHASHANK GARG
C-21 SMILEE GREENS, HUSKUR ROAD, BANGALORE – 560099.
2. ARVIND KUMAR SINGH
46, NEWLANDS ROAD, GLASGOW, G43 2JG, UNITED KINGDOM
3. ISHA GARG
C-21 SMILEE GREENS, HUSKUR ROAD, BANGALORE – 560099.
4. RAJARAMAN SUBRAMANIAN
#22, GIRI BUILDING, 8TH MAIN, 2ND CROSS, 3RD STAGE, 4TH BLOCK, BASAVESHWARANAGAR, BANGALORE – 560079.
5. ANANTHA P KINNAL
#14, 8TH MAIN, MC LAYOUT, VIJAYANAGAR, BANGALORE – 560040.

Specification

SYSTEM AND METHOD FOR EVALUATING HEALTH CONDITIONS OF SUBJECT BY PROCESSING IMAGE USING INFERENCE MODELS
BACKGROUND
Technical Field
[0001] The present disclosure generally relates to medical devices and health care devices; more specifically, the present disclosure relates to a system and method for evaluating one or more health conditions of at least one region of a subject or a target by processing, using a plurality of containerized inference models for image classification, an image of the at least one region of the subject or the target.
Description of the Related Art
[0002] Diagnosis is the process of identifying underlying disease conditions so that clinical decisions about treatment and prognosis can be made. When a diagnosis is accurate and made promptly, a patient has the best opportunity for a positive health outcome. In addition, early diagnosis may help people take control of their disease condition, avoid the possibility of secondary diseases, and plan for future care and treatment. Studies also indicate that periodic screening may help to detect disease in its earliest stages and may reduce the incidence of secondary diseases. For example, the early detection of diabetic retinopathy among diabetics through periodic screening may help to reduce the incidence of blindness.
[0003] Given the potential complications of diabetic retinopathy leading to loss of vision and even blindness, mass screening programs are being applied for early diagnosis and treatment across the population of diabetics worldwide. However, screening, particularly in resource-poor countries, poses additional challenges due to the shortage of trained medical experts; a large number of developing countries are grossly under-served by trained experts. These challenges have driven interest in the use of artificial intelligence (AI)-based assessment systems for diagnostic assistance. Such a grading system in a primary care setting often requires diabetic patients to come to the clinical care setting for periodic screening for diabetic retinopathy. However, people in rural areas, as well as disabled or elderly patients, face additional challenges in availing themselves of such screening facilities in their primary care settings. Further, AI-based grading systems that are currently available are often expensive and can interpret retinal images taken only on expensive clinic-based fundus cameras, putting these grading systems out of reach for those working with limited resources in the primary care setting in most developing countries. If diabetic retinopathy is not detected early in an individual subject, it may result in an irreversible loss of vision. The net result of these challenges may well be an increase in the loss of vision amongst the larger population, a situation that may be preventable with early and periodic screening.
[0004] Some of the existing approaches provide portable devices for disease screening. Such portable devices are specifically optimized for the screening of a single type of disease and may not be adaptable and flexible enough to be used for other diseases where early diagnosis may be beneficial, such as in the detection of several types of cancer.
[0005] In some existing approaches, analysis of acquired data is performed remotely via transmission of data to a cloud-based server. Such approaches are only suitable for large, well-equipped hospitals and clinics; they are not ideal for deployment in low-resource community settings where there is a great need for large-scale, periodic screening programs for early detection and prevention of potentially fatal diseases like cancer. Furthermore, existing approaches that do not provide networked connectivity are certainly not suitable where early, real-time analysis may be required for life-saving reasons.
[0006] Some of the existing solutions use smartphone-based systems for monitoring and detecting diseases. The use of a smartphone raises the price point, and a smartphone lacks the added AI and edge-computing capability.
[0007] Therefore, there is a need to address the aforementioned technical drawbacks of existing technologies for evaluating one or more health conditions of at least one region of a subject or a target from the interpretation of images in low-cost, low-resource community settings.
SUMMARY
[0008] In view of the foregoing, an embodiment herein provides a system for evaluating one or more health conditions of at least one region of a subject or a target by processing, using a plurality of containerized inference models for image classification, an image of the at least one region of the subject or the target. The system includes a portable edge device that includes at least one communication link, an embedded service that enables one or more parameters of the portable edge device to be configured or reconfigured through the at least one communication link, a memory that stores a set of instructions, data, and model weights, and a processor that is configured to execute the set of instructions to perform one or more operations of the portable edge device. The processor is configured to (i) determine a quality of an image of at least one region of the subject or the target by processing the image to indicate if the quality of the image is unclassifiable, (ii) enable a user to access a plurality of containerized inference models for image classification from at least one of an application download service or an embedded database, based on one or more health conditions to be evaluated and a type of image, when the quality of the image is classifiable, (iii) execute the plurality of containerized inference models in at least one of (a) a direct execution mode in which the plurality of containerized inference models have been pre-built for direct execution on an edge computing platform of the portable edge device or (b) a hardware accelerator execution mode in which the plurality of containerized inference models have been pre-built for execution on a hardware accelerator on the portable edge device, and (iv) evaluate the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the plurality of containerized inference models for image classification, and classifying the image by concurrently executing each of the plurality of containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image. Each containerized inference model for image classification runs independently, enabling concurrent execution of inference models that are implemented in different machine learning languages. The health condition evaluating functionality of the portable edge device is reconfigured by accessing the plurality of containerized inference models for image classification according to a type of health condition to be evaluated at a given time. The system prompts the user to recapture the image if the image is unclassifiable.
[0009] In some embodiments, the portable edge device includes an imaging device that is configured to capture a plurality of images of a plurality of regions of the subject or the target. The imaging device may include one or more illumination sources, one or more imaging accessories, and one or more optical pathways. The imaging device may be reconfigured based on a region of the subject or the target to be imaged at a given time by modifying at least one of wavelength, intensity, or brightness of at least one illumination source or selecting at least one imaging accessory or modifying at least one optical pathway, using the embedded service.
[0010] Each inference model for image classification may be split into a plurality of modules and each of the plurality of modules may be containerized as microservices to execute the plurality of modules independently and concurrently for classification of the image to evaluate the one or more health conditions.
[0011] In some embodiments, the system includes a server that is configured to construct and containerize a plurality of inference models for image classification using docker composition tools.
[0012] The server may be configured to construct each of the inference model(s) for image classification by: (i) receiving one or more gradable, curated image datasets related to an inference model to be constructed or updated, from a secure system, (ii) creating a curator pool including one or more graders for interpreting the one or more gradable image datasets and providing interpreted results to a moderator for curating the interpreted results of each grader and thus generating the curated datasets required for the inference model creation, (iii) generating a reference standard based on the curated interpreted results, and (iv) constructing each of the inference model(s) for image classification based on the generated reference standard. The reference standard is updated periodically on receiving new curated and gradable image datasets from the secure system.
[0013] In one aspect, there is provided a portable edge device for evaluating one or more health conditions of at least one region of a subject or a target by processing, using a plurality of containerized inference models for image classification, an image of the at least one region of the subject or the target. The portable edge device includes at least one communication link, an embedded service that enables one or more parameters of the portable edge device to be configured or reconfigured through the at least one communication link, a memory that stores a set of instructions, data, and model weights, and a processor that is configured to execute the set of instructions to perform one or more operations of the portable edge device. The processor is configured to (i) determine a quality of an image of at least one region of the subject or the target by processing the image to indicate if the quality of the image is unclassifiable, (ii) enable a user to access a plurality of containerized inference models for image classification from at least one of an application download service or an embedded database, based on one or more health conditions to be evaluated and a type of image, when the quality of the image is classifiable, (iii) execute the plurality of containerized inference models in at least one of (a) a direct execution mode in which the plurality of containerized inference models have been pre-built for direct execution on an edge computing platform of the portable edge device or (b) a hardware accelerator execution mode in which the plurality of containerized inference models have been pre-built for execution on a hardware accelerator on the portable edge device, and (iv) evaluate the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the plurality of containerized inference models for image classification, and classifying the image by concurrently executing each of the plurality of containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image, wherein each containerized inference model for image classification runs independently, enabling concurrent execution of inference models that are implemented in different machine learning languages. The health condition evaluating functionality of the portable edge device is reconfigured in real time by accessing the plurality of containerized inference models for image classification according to a type of health condition to be evaluated at a given time. The portable edge device prompts the user to recapture the image if the image is unclassifiable.
[0014] In another aspect, there is provided a method for evaluating one or more health conditions of at least one region of a subject or a target in an image of the at least one region, using a portable edge device. The portable edge device processes the image using a plurality of containerized inference models for image classification. The method includes (i) determining a quality of an image of at least one region of the subject or the target by processing the image to indicate if the quality of the image is unclassifiable, (ii) accessing, by a user, a plurality of containerized inference models for image classification from at least one of an application download service or an embedded database, based on one or more health conditions to be evaluated and a type of image, when the quality of the image is classifiable, (iii) executing the plurality of containerized inference models in at least one of (a) a direct execution mode in which the plurality of containerized inference models have been pre-built for direct execution on an edge computing platform of the portable edge device or (b) a hardware accelerator execution mode in which the plurality of containerized inference models have been pre-built for execution on a hardware accelerator on the portable edge device, and (iv) evaluating the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the plurality of containerized inference models for image classification, and classifying the image by concurrently executing each of the plurality of containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image. Each containerized inference model for image classification runs independently, enabling concurrent execution of inference models that are implemented in different machine learning languages. The method includes reconfiguring the health condition evaluating functionality of the portable edge device in real time by accessing the plurality of containerized inference models for image classification according to a type of health condition to be evaluated at a given time. The method includes prompting the user to recapture the image if the image is unclassifiable.
[0015] In some embodiments, the portable edge device includes an imaging device that is configured to capture a plurality of images of a plurality of regions of the subject or the target. The imaging device may include one or more illumination sources, one or more imaging accessories, and one or more optical pathways.
[0016] The method may include reconfiguring the imaging device based on a region of the subject or the target to be imaged at a given time by modifying at least one of wavelength, intensity or brightness of at least one illumination source or selecting at least one imaging accessory or modifying at least one optical pathway, using the embedded service.
[0017] The method may include splitting each inference model for image classification into a plurality of modules and containerizing each of the plurality of modules as microservices to execute the plurality of modules independently and concurrently for classification of the image to evaluate the one or more health conditions.
[0018] The method may include constructing and containerizing a plurality of inference models for image classification using docker composition tools.
[0019] The method may include constructing each of the inference models for image classification by: (i) receiving one or more gradable, curated image datasets related to an inference model to be constructed or updated, from a secure system, (ii) creating a curator pool including one or more graders for interpreting the one or more gradable image datasets and providing interpreted results to a moderator for curating the interpreted results of each grader and thus generating the curated datasets required for the inference model creation, (iii) generating a reference standard based on the curated interpreted results, and (iv) constructing each of the inference model(s) for image classification based on the generated reference standard, wherein the reference standard is updated periodically on receiving new curated and gradable image datasets from the secure system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The embodiments herein will be better understood from the following detailed descriptions with reference to the drawings, in which:
[0021] FIG. 1 is a block diagram that illustrates a system for evaluating one or more health conditions of at least one region of a subject or a target according to an embodiment herein;
[0022] FIG. 2 is a block diagram that illustrates a portable edge device of FIG. 1 according to an embodiment herein;
[0023] FIG. 3 is a schematic illustration of a process of concurrent execution of one or more inference models for simultaneous evaluation of one or more health conditions of a subject or a target according to an embodiment herein;
[0024] FIG. 4 is a schematic illustration of hardware implementation options for a portable edge device for evaluation of one or more health conditions of a subject or a target according to an embodiment herein;
[0025] FIG. 5 is a schematic illustration of a hardware implementation of an illumination source according to an embodiment herein;
[0026] FIG. 6 is a schematic illustration of a process of data curation for improving the performance of one or more inference models according to an embodiment herein;
[0027] FIG. 7 is a block diagram that illustrates a process of building containerized inference models according to an embodiment herein;
[0028] FIG. 8A illustrates a front view of an exemplary portable edge device for image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein;
[0029] FIG. 8B illustrates a diagonal view of an exemplary portable edge device for image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein;
[0030] FIG. 9 illustrates an overview of a system architecture for image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein;
[0031] FIG. 10 illustrates an alternative overview of a system architecture for a process of image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein;
[0032] FIG. 11 illustrates a method for evaluating one or more health conditions of at least one region of a subject or a target in an image using an artificial intelligence-based portable edge device according to an embodiment herein;
[0033] FIG. 12 illustrates a software architecture of an operating environment of a computing unit, in accordance with the embodiments herein; and
[0034] FIG. 13 illustrates a hardware architecture of a computing unit, in accordance with the embodiments herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0035] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0036] As mentioned, there remains a need for an approach for evaluating one or more health conditions of at least one region of a subject or a target in low-cost and low-resource settings. The embodiments herein achieve this by providing a system and method that use a portable edge device for autonomously evaluating, in near real time, one or more health conditions of at least one region of a subject or a target by processing, using a plurality of containerized inference models for image classification, an image of the at least one region of the subject or the target. Referring now to the drawings, and more particularly to FIGS. 1 through 13, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0037] As used herein, several terms are defined below.
[0038] The term “health condition” used herein refers to a condition or a state of health of a subject which is within a range from normal to severe.
[0039] The term “optical pathway” used herein refers to a pathway that is created after illuminating an illumination source.
[0040] The term “communication link” used herein refers to a communication channel that connects the portable edge device with one or more external devices for data transmission.
[0041] The term “inference model” used herein refers to a trained model that is used to infer or predict against previously unseen data.
[0042] The term “containerized inference model” used herein refers to an inference model that is built into a container. The container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
[0043] The term “inference engine” used herein refers to a component that interprets and evaluates the facts in the knowledge base in order to arrive at a decision.
[0044] FIG. 1 is a block diagram that illustrates a system 100 for evaluating one or more health conditions of at least one region of a subject 102 or a target according to an embodiment herein. The system 100 includes a portable edge device 104 and a server 116. The portable edge device 104 includes an imaging device 106, a memory 108 that stores a set of instructions, data, and model weights, a processor 110 that executes the set of instructions to perform one or more actions of the portable edge device 104, an inference engine 112, one or more communication links, and an embedded service, all of which are communicatively connected with each other. The embedded service enables one or more parameters of the portable edge device 104 to be configured or reconfigured through the at least one communication link. The one or more parameters may be hardware parameters or software parameters. The portable edge device 104 is a multipurpose examination device. The portable edge device 104 receives an image of at least one region of the subject 102 or the target from the imaging device 106, or from an external imaging source through a network 114. The imaging device 106 includes one or more illumination sources, one or more optical pathways, and one or more imaging accessories for illuminating and imaging different regions or parts of the subject 102. The one or more illumination sources may be light emitting diode (LED) based illumination sources. The subject 102 may be a human, an animal, or a plant. The imaging device 106 is reconfigurable to capture an image of different parts or regions of the subject 102 by modifying the wavelength, intensity, and brightness of at least one illumination source, selecting the one or more imaging accessories, or modifying at least one optical pathway, remotely or locally through a microcontroller. The imaging accessories may include, but are not limited to, lenses. The lenses may be auto-focus lenses or manual focus lenses. The imaging device 106 is a software-reconfigurable imaging device. The imaging device 106 is reconfigured through the network 114 according to the region or part of the subject 102 to be imaged. For example, the imaging device 106 may be used to capture, without limitation, an image of an eye, an ear, a skin surface, a cervical surface, an oral cavity, a throat, any plant surface, or various other human and animal organs. In some embodiments, the imaging device 106 is an optional component of the portable edge device 104. The portable edge device 104 is not limited by the functionality of the built-in imaging device 106, since it can acquire images that have been externally captured and transmitted to the portable edge device 104 through the network 114. The portable edge device 104 with the built-in imaging device 106 is likely to be the most common use-case scenario in community-based diagnostic screening programs, since the portable edge device 104 may operate in a stand-alone environment with image acquisition and analysis integrated on an edge platform.
[0045] The portable edge device 104 uses the embedded web service running on it to enable the programmable interfaces of the portable edge device 104 to be accessed remotely, either through a companion mobile application or through a web application on the internet.
[0046] The network 114 may be a local area network (LAN), a wide area network (WAN), a public network, a private network, a proprietary network, a public telephone switched network (PSTN), the Internet, a wireless network, a virtual network, or any combination thereof.
[0047] The portable edge device 104 determines the quality of the image by processing the received image to decide whether the received image is classifiable or not. If the quality of the image is unclassifiable, the portable edge device 104 prompts a user or the subject 102 to recapture the image until the portable edge device 104 is able to classify the image. The portable edge device 104 may evaluate the quality of the received image to detect the classifiability of the image using a machine learning algorithm, and provides a Go decision or a No-Go decision. The Go decision indicates that the image is classifiable. The No-Go decision indicates that the image is unclassifiable. The portable edge device 104 detects classifiability of the image based on quality parameters that include, but are not limited to, resolution, artifacts, noise, or blur factors. This reduces the cost and effort required to obtain gradable images, since corrective action can be taken immediately. The portable edge device 104 may display the captured image through a user interface. The user interface may include a display screen, a keyboard, a mouse, or any other input device. The display screen may be a touch screen that includes one or more soft buttons that enable the user to perform one or more operations on the portable edge device 104.
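By way of a non-limiting illustration, the Go/No-Go quality gate described above could be sketched as follows in Python, using OpenCV sharpness and brightness measures; the threshold values are illustrative assumptions, not parameters taken from this disclosure.

```python
import cv2
import numpy as np

def quality_gate(image_path: str,
                 blur_threshold: float = 100.0,   # illustrative threshold
                 min_brightness: float = 40.0,    # illustrative threshold
                 max_brightness: float = 220.0) -> bool:
    """Return True (Go) if the image is classifiable, False (No-Go) otherwise."""
    image = cv2.imread(image_path)
    if image is None:
        return False  # an unreadable file is unclassifiable by definition
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a standard sharpness proxy: low variance
    # means few edges, i.e. a blurred or defocused capture.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    brightness = float(np.mean(gray))
    if sharpness < blur_threshold:
        return False               # No-Go: too blurred
    if not (min_brightness <= brightness <= max_brightness):
        return False               # No-Go: under- or over-exposed
    return True                    # Go: pass the image to the inference models
```

A No-Go result would trigger the recapture prompt described above, so that corrective action is taken while the subject is still present.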
[0048] The portable edge device 104 enables a user to access or download one or more containerized inference models from an application download service or an embedded database, based on the one or more health conditions to be evaluated or a type of image to be evaluated, when the quality of the image is classifiable. The portable edge device 104 has the flexibility to select an appropriate inference model that may be pre-loaded into the portable edge device 104 for classifying images in real time. The one or more inference models may be pre-built artificial intelligence (AI) models or machine learning (ML) models, using advanced techniques such as regression, classification, and neural networks to detect, classify, and grade a particular health condition. The one or more inference models may be written in any of the programming languages, such as Python, R, Java, or JavaScript, that are typically used for the development of machine learning models, and using tools, libraries, and packages that are suitable for the development of these models. The one or more inference models may also be non-containerized inference models. The portable edge device 104 may access or download the one or more containerized inference models, in real time, from a companion mobile application or from a cloud-based backend service over the internet. The one or more containerized inference models are pre-trained for evaluating a specific type of image to detect a specific type of health condition.
[0049] The application download service may include, but is not limited to, a cloud server, an application store server, a web server identified via uniform resource locator address, and a file transfer protocol service network server. The one or more inference models may be a native application, non-native application, or containerized application. In some embodiments, pre-built docker images of the one or more inference models are prepared at the server 116 and made available for download to the portable edge device 104.
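As a non-limiting sketch of the download step, a pre-built docker image could be pulled onto the portable edge device with the Python Docker SDK; the registry address and image tag shown here are hypothetical.

```python
import docker

def download_inference_model(image_ref: str) -> None:
    """Pull a pre-built containerized inference model from the download service."""
    client = docker.from_env()
    # image_ref names the pre-built docker image prepared at the server,
    # e.g. a model registry exposed by the application download service.
    client.images.pull(image_ref)

# Hypothetical registry and tag, for illustration only.
download_inference_model("registry.example.com/models/diabetic-retinopathy:latest")
```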
[0050] The portable edge device 104 may include the embedded database to store the one or more images, the one or more containerized inference models, or one or more non-containerized inference models, as well as other information such as demographic data of the subject 102, geographical location of the subject, a subject identification number, and date-time stamp data. Subject privacy is ensured through anonymization techniques applied to the identity of the subject 102.
[0051] The health condition evaluating functionality of the portable edge device 104 is reconfigured in real time by accessing the plurality of containerized inference models for image classification according to a type of health condition or a type of image to be evaluated at a given time. By way of this reconfiguration, the portable edge device 104 is capable of detecting different health conditions such as eye diseases, intracranial disease, cervical cancer, oral pathologies, skin pathologies, other human diseases, plant diseases, and veterinary diseases. For example, if the health condition to be detected is diabetic retinopathy, the portable edge device 104 downloads the recently updated or developed inference models related to diabetic retinopathy from the application download service. Similarly, if multiple health conditions are to be detected from a single image, the portable edge device 104 downloads one or more of the recently updated or developed inference models related to these multiple health conditions from the application download service. For example, in the case of eye diseases, the multiple health conditions may include diabetic retinopathy, macular edema, age-related macular degeneration, and glaucoma.
[0052] The portable edge device 104 executes, using the inference engine 112, the one or more containerized inference models in a native environment, in a docker environment, or on a hardware accelerator, based on the format of the one or more inference models. The inference engine 112 may be at least one of a native inference engine or a container inference engine: inference models that execute directly on the edge-computing platform of the portable edge device 104 are run by the native inference engine, while inference models that execute on top of a docker engine are run by the container inference engine. The portable edge device 104 executes the one or more inference models on the hardware accelerator if the one or more inference models are compute-intensive. The hardware accelerator may be implemented using a variety of hardware technologies such as a Graphical Processing Unit (GPU), an Application-Specific Integrated Circuit (ASIC), a System-on-Chip (SoC), a System-on-Module (SoM), a Field Programmable Gate Array (FPGA), a Tensor Processing Unit (TPU), or a Neural Processing Unit (NPU), as long as the appropriate mathematical computation libraries for the specific programming languages are available for the underlying target hardware.
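A minimal sketch of how such an inference engine might dispatch a model to the native environment, the docker environment, or a GPU-backed accelerator is given below; the model descriptor fields (format, binary, docker_image, accelerator) are assumptions introduced for illustration, not part of this disclosure.

```python
import subprocess
import docker

def run_model(model: dict, image_path: str) -> str:
    """Dispatch a model to native, containerized, or accelerated execution."""
    if model["format"] == "native":
        # Pre-built for direct execution on the edge computing platform.
        result = subprocess.run([model["binary"], image_path],
                                capture_output=True, text=True, check=True)
        return result.stdout
    if model["format"] == "container":
        client = docker.from_env()
        # Request GPU access only if the descriptor marks the model as
        # compute-intensive; otherwise run on the CPU inside the container.
        devices = ([docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])]
                   if model.get("accelerator") == "gpu" else None)
        # Mount the input image read-only and run the container to completion.
        return client.containers.run(
            model["docker_image"],
            command=["/data/input.png"],
            volumes={image_path: {"bind": "/data/input.png", "mode": "ro"}},
            device_requests=devices,
            remove=True).decode()
    raise ValueError(f"unknown model format: {model['format']}")
```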
[0053] The portable edge device 104 evaluates the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the containerized inference models for image classification, and classifying the image by concurrently executing each of the containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image. In some embodiments, at least one containerized inference model is executed for classification of the image to detect the health condition of the subject 102.
[0054] The system 100 is configured to classify, using the one or more containerized inference models, the detected one or more health conditions on a severity scale based on the severity of the detected one or more health conditions, and to generate a report based on the classification of the one or more health conditions to assist an expert in decision making and recommendations.
[0055] In one exemplary embodiment, the portable edge device 104 detects diabetic retinopathy by (i) capturing a retinal image of the subject 102 using the imaging device 106 or obtaining the retinal image from the external imaging source, (ii) downloading one or more containerized inference models related to diabetic retinopathy from the application download service, (iii) classifying the retinal image by concurrently executing the one or more containerized inference models to detect one or more features related to diabetic retinopathy, and (iv) detecting the presence or absence of diabetic retinopathy based on inferences from the one or more containerized inference models. In some embodiments, the portable edge device 104 simultaneously detects macular edema, age-related macular degeneration, and glaucoma along with diabetic retinopathy from the same retinal image by concurrently executing the one or more containerized inference models. To do so, the portable edge device 104 downloads and concurrently executes the one or more inference models related to the specified diseases in addition to diabetic retinopathy, enabling simultaneous detection of diabetic retinopathy, macular edema, age-related macular degeneration, and glaucoma from a single retinal image.
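The concurrent classification step in this example could be sketched as follows, assuming a dispatch helper such as the run_model() sketch above; the model names and docker image tags are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

MODELS = {   # hypothetical containerized inference models for eye disease
    "diabetic_retinopathy": {"format": "container",
                             "docker_image": "models/dr:latest"},
    "macular_edema":        {"format": "container",
                             "docker_image": "models/me:latest"},
    "amd":                  {"format": "container",
                             "docker_image": "models/amd:latest"},
    "glaucoma":             {"format": "container",
                             "docker_image": "models/glaucoma:latest"},
}

def classify_concurrently(image_path: str) -> dict:
    """Run every downloaded model against the same retinal image in parallel."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(run_model, spec, image_path)
                   for name, spec in MODELS.items()}
        # Each container runs independently, which is why models implemented
        # in different machine learning languages can still be evaluated
        # side by side on one image.
        return {name: fut.result() for name, fut in futures.items()}
```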
[0056] In some embodiments, the portable edge device 104 transmits the classified images of certain health conditions to the server 116 for further re-training of the one or more inference models with the latest classified images and for generating newly pre-built inference models or engines with improved performance.
[0057] In some embodiments, the server 116 is configured to construct and containerize a plurality of inference models for image classification using docker composition tools.
[0058] The portable edge device 104 may be used for the classification of (i) images acquired from radiological devices such as X-ray machines, or other digital image acquisition devices, (ii) images acquired from anatomic pathology, including microscopy images, (iii) images acquired from microbiology specimens, (iv) ultrasound images, (v) ophthalmoscopic images, or (vi) otoscopic images if the appropriate inference models have been downloaded to the portable edge device 104.
[0059] The re-configuration capabilities of the portable edge device 104 include at least one of (i) updating all operating firmware (host operating system, device drivers, embedded web service, etc.), (ii) selecting the one or more illumination sources for the imaging device 106, (iii) selecting the one or more imaging accessories for the imaging device 106, (iv) modifying or selecting one or more optical pathways, and (v) loading or changing the one or more containerized inference models or inference models to be used for detection and classification of the one or more health conditions.
[0060] The portable edge device 104 is configured to tag the image of the subject 102 with a geolocation of the subject 102, so that after a predetermined time period (e.g., 1-3 years) the same subject can be photographed again at the same location. Geotagging is also useful for epidemiological purposes, since the gathered data can be analyzed for incidence.
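A sketch of the anonymized, geotagged record that might accompany each capture is shown below; the field names are illustrative, and the one-way hash stands in for whatever anonymization technique a given deployment actually uses.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_capture_record(subject_id: str, lat: float, lon: float) -> str:
    """Tag a capture with geolocation and an anonymized subject identifier."""
    record = {
        # A one-way hash maps the same subject to the same token on a
        # revisit years later, without storing raw identity on the device.
        "subject_token": hashlib.sha256(subject_id.encode()).hexdigest(),
        "geolocation": {"lat": lat, "lon": lon},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```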
[0061] The system 100 for image acquisition and detection of health conditions operates in an autonomous manner in real time; that is, once the user has chosen a specific inference model or inference engine and downloaded it to the AI-based portable edge device 104, the classification process of the inference model does not require any further assistance from a back-end service. This autonomous operation may be useful when the system 100 is deployed in a remote location that may or may not have reliable connectivity.
[0062] The portable edge device 104 is designed for mobility, with attributes such as a small footprint, low power consumption, battery or mains operation, and light weight. The flexibility and programmability provided by the imaging device 106 and an inference model for image classification make the portable edge device 104 highly suitable for mass screening programs in low-resource community settings, for a variety of health conditions that may be detected and classified in real time, since a pre-trained AI/ML model may be downloaded to the portable edge device 104.
[0063] The system 100 may perform a process of data curation for improving the performance of the one or more inference models.
[0064] In some embodiments, the system 100 includes one or more portable edge devices that are geographically distributed.
[0065] In some embodiments, the system 100 includes a blockchain for securely sharing and storing all information.
[0066] The system 100, due to its attributes of portability and mobility as well as autonomous operation of the one or more containerized inference models in near real-time, may be used in low resource community settings where periodic screening is an essential component of mass screening programs for the prevention of diseases through early detection. Autonomous operation is particularly useful in such situations because sufficient expertise may not be available locally.
[0067] The system 100 may be used in a variety of application environments such as in standalone mode, single client in a cloud environment, single client in a telemedicine environment, multiple clients in a cloud environment, and multiple clients in a telemedicine environment.
[0068] In the standalone mode, the portable edge device 104 may be deployed without any assistance from the back-end service. A mobile application may be used to control the parameters of the portable edge device 104, to capture images, to download pre-built containerized inference models for classification of the captured images to detect the one or more health conditions, and to store inference results in local persistent storage. The portable edge device 104 may be deployed in standalone mode where reliable connectivity may not be easily available. The embedded web server may provide the mobile application with access to the parameters of the portable edge device 104 through representational state transfer (REST) based endpoints.
[0069] In the single client in a cloud environment mode, the portable edge device 104 operates as a single client within a cloud-based service, which may be deployed in places where reliable connectivity is available but fully autonomous operation is not always required. A REST-based web application may be used to remotely access the re-configuration parameters of the portable edge device 104 through the embedded web service of the portable edge device 104, which provides various services through REST-based endpoints.
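As a non-limiting sketch, the embedded web service could expose such re-configuration parameters through REST endpoints as follows; Flask is used here purely for illustration, and the endpoint paths and parameter names are assumptions rather than part of this disclosure.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical re-configuration parameters held by the embedded service.
device_config = {"illumination_wavelength_nm": 590, "active_model": "dr"}

@app.route("/api/v1/config", methods=["GET"])
def get_config():
    # The companion mobile or web application reads the current parameters.
    return jsonify(device_config)

@app.route("/api/v1/config", methods=["PUT"])
def update_config():
    # Remote re-configuration of, e.g., the illumination source or the
    # active inference model, as described for the embedded web service.
    device_config.update(request.get_json())
    return jsonify(device_config)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```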
[0070] In the single client in a tele-consultation environment mode, a domain consultant or human expert who is remotely accessing the portable edge device 104 over the internet may guide the user of the portable edge device 104 and interpret the classification results provided by the portable edge device 104.
[0071] In the multiple clients in the cloud-based environment mode, multiple portable edge devices operate as the multiple clients which may be geographically distributed, in the cloud-based environment.
[0072] In the multiple clients in a tele-consultation environment mode, the teleconsultation service is provided to the multiple clients.
[0073] In some embodiments, the system 100 includes a portable edge device 104 with a smartphone computing configuration. The portable edge device 104 includes the hardware of a smartphone and its operating system, which provide the core computational and communication facilities for the portable edge device 104. The portable edge device 104 further includes an illumination source as a pluggable module on the camera of the smartphone, an optional hardware accelerator as a pluggable universal serial bus (USB) module, and the one or more inference models as containerized software images that may be executed in the environment of the host operating system provided by the smartphone. The smartphone-configuration-based portable edge device may be used for image classification applications in low volumes.
[0074] In one exemplary embodiment, the portable edge device 104 is used in a mass screening program for triaging purposes in low-resource community settings. In such scenarios, a trained technician may use the portable edge device 104 to obtain images of the subjects, and the inference engine 112, which is built into the portable edge device 104 or downloadable over the communication link, may provide real-time recommendations on whether or not the subject should see a health expert. During screening, a Go decision implies that there is a probability that the health condition of the subject 102 is abnormal, for which the subject 102 may be referred for further investigation. A No-Go decision indicates that this subject 102 may be ruled out from screening at this stage. This functionality is advantageous in that subjects who might require special attention are filtered from those who do not. Triaging may help in the optimal use of the scarce resources and expertise available in low-resource settings.
[0075] In one exemplary embodiment, the portable edge device 104 is used in a mass screening program to evaluate and grade the one or more health conditions for assistive diagnosis. The screening may be done periodically, since the subjects may require periodic checkups. The grades for a health condition may include: (1) Level 1: Normal; (2) Level 2: Mildly abnormal (beginning stage of illness); (3) Level 3: Moderately abnormal; (4) Level 4: Highly abnormal; and (5) Level 5: Severely abnormal.
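A sketch of mapping a model's severity score onto this five-level scale is shown below; the score cut-offs are illustrative assumptions, not values from this disclosure.

```python
GRADES = [   # (upper bound on model score, grade label) - illustrative cut-offs
    (0.2, "Level 1: Normal"),
    (0.4, "Level 2: Mildly abnormal"),
    (0.6, "Level 3: Moderately abnormal"),
    (0.8, "Level 4: Highly abnormal"),
    (1.0, "Level 5: Severely abnormal"),
]

def grade(score: float) -> str:
    """Map a severity score in [0, 1] to one of the five grade levels."""
    for upper, label in GRADES:
        if score <= upper:
            return label
    raise ValueError("score must lie in [0, 1]")
```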
[0076] FIG. 2 is a block diagram that illustrates the portable edge device 104 of FIG. 1 according to an embodiment herein. The portable edge device 104 includes a database 202, an image acquisition module 204, an image preprocessing module 206, an inference model downloading module 208, an inference model execution module 210, a classification module 212, a report generation module 214, and a display module 216. The one or more modules are communicatively connected with the database 202. The database 202 is an embedded database. The portable edge device 104 may use the database 202 for persistent storage and retrieval of platform configuration data, demographic data of the subject 102, images of one or more subjects, one or more containerized inference models or engines, location information of the subject 102, and other data. All data elements may be stored in the database 202 as documents, for example in JSON format, for ease of access and manipulation on the portable edge device 104 as well as for export to the server 116 or backend service. The images of one or more subjects may be stored in a specialized GridFS format for storing large objects. The physical storage on the portable edge device 104 consists of flash read-only memory (ROM); an external secure digital (SD) card may be used for additional storage.
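A sketch of this storage arrangement, assuming a MongoDB-compatible embedded store with GridFS support on the device, is shown below; the database and collection names are hypothetical.

```python
from pymongo import MongoClient
import gridfs

client = MongoClient("mongodb://localhost:27017")
db = client["edge_device"]     # hypothetical embedded database name
fs = gridfs.GridFS(db)         # GridFS handles large binary objects

def store_capture(image_bytes: bytes, metadata: dict) -> None:
    # The raw image goes into GridFS; the JSON-style document referencing
    # it goes into an ordinary collection for easy query and export to the
    # server or backend service.
    image_id = fs.put(image_bytes, content_type="image/png")
    metadata["image_id"] = image_id
    db.captures.insert_one(metadata)
```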
[0077] The image acquisition module 204 receives an image of at least one region or part of the subject 102 from the imaging device 106 or an external imaging source. The image preprocessing module 206 determines the quality of the image by processing the received image to decide whether the received image is classifiable or not. If the quality of the image is unclassifiable, the portable edge device 104 prompts a user or the subject 102 to recapture the image until the portable edge device 104 is able to classify the image. The inference model downloading module 208 accesses or downloads one or more containerized inference models from an application download service or the embedded database 202 based on a health condition to be evaluated and the type of image to be evaluated. The type of image may be an image of an internal region or organ, or an image of an external region or organ, of the subject 102 or the target. The inference model execution module 210 executes the one or more containerized inference models in a native environment, in a docker environment, or on a hardware accelerator, based on the format or version of the one or more containerized inference models. The classification module 212 evaluates the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the one or more containerized inference models for image classification, and classifies the image by concurrently executing each of the containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image. The report generation module 214 generates a report based on the classification of the image for assistive diagnosis and recommendations. The display module 216 displays at least one of the captured image or the report.
[0078] FIG. 3, with reference to FIGS. 1 and 2, is a schematic illustration of a process of concurrent execution of one or more containerized inference models for simultaneous evaluation of one or more health conditions of the subject 102 or the target according to an embodiment herein. At step 302, one or more images are received for analysis to evaluate the one or more health conditions. At step 304, analysis of an image for one or more health conditions is initiated. At step 306, a process of parallel evaluation of one or more features of one health condition, or of one or more health conditions, is initiated. At steps 308A-N, the one or more images are preprocessed to determine their quality. If the quality of the one or more images is unclassifiable, the process returns to step 302 to obtain a classifiable image. At steps 310A-N, one or more containerized inference models are concurrently executed for simultaneous evaluation of one or more features of the one health condition, or simultaneous evaluation of one or more health conditions, in a single image. At step 312, results from the one or more containerized inference models are combined for further evaluation. At step 314, the iteration process is completed. At step 316, all results are combined after the completion of an iteration. At step 318, the combined results are given to a rules engine to evaluate the one or more health conditions.
[0079] FIG. 4, with reference to FIGS. 1 through 3, is a schematic illustration of hardware implementation options for a portable edge device 104 for evaluation of one or more health conditions of the subject 102 or the target according to an embodiment herein. A user 402 provides a configuration file 404 to the portable edge device 104 for the configuration of one or more containerized inference models. The one or more containerized inference models are received from a solution stack 410 through application-specific APIs 406. The portable edge device 104 includes different hardware 408 (as shown in FIG. 4) for executing the one or more containerized inference models.
[0080] FIG. 5, with reference to FIGS. 1 through 4, is a schematic illustration of a hardware implementation of an illumination source 500 according to an embodiment herein. The illumination source 500 includes a microcontroller 502, one or more digital-to-analog converters (DACs) 504A-N, one or more power amplifiers 506A-N, and one or more light emitting diodes (LEDs) 508A-N. The microcontroller 502 enables the modification of the wavelength, intensity, and brightness of the one or more LEDs 508A-N to optimize lighting conditions for capturing images of different parts of the subject 102. The one or more digital-to-analog converters 504A-N convert digital signals into analog signals. The one or more power amplifiers 506A-N amplify the analog signals to modify the wavelength, intensity, and brightness of the one or more LEDs 508A-N, optimizing the lighting conditions for the successful capture of an image of the subject 102.
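A sketch of the brightness-to-DAC-code computation that a microcontroller such as 502 might perform is given below; the 12-bit resolution and the write_dac stand-in for the actual bus write (e.g., SPI or I2C) are assumptions for illustration.

```python
DAC_BITS = 12                      # assumed DAC resolution
DAC_MAX = (1 << DAC_BITS) - 1      # 4095 for a 12-bit converter

def brightness_to_dac_code(brightness_pct: float) -> int:
    """Convert a 0-100% brightness request into a DAC input code."""
    if not 0.0 <= brightness_pct <= 100.0:
        raise ValueError("brightness must be 0-100%")
    return round(brightness_pct / 100.0 * DAC_MAX)

def set_led(channel: int, brightness_pct: float, write_dac=print) -> None:
    # write_dac stands in for the real bus write to a DAC 504A-N; the
    # amplified analog output then drives the corresponding LED 508A-N.
    code = brightness_to_dac_code(brightness_pct)
    write_dac(f"DAC channel {channel} <- code {code}")
```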
[0081] FIG. 6, with reference to FIGS. 1 through 5, is a schematic illustration of a process of data curation for improving the performance of one or more inference models according to an embodiment herein. At step 602, a user deposits datasets that include one or more images for classification and evaluation of the one or more health conditions. At step 604, the datasets are analyzed to evaluate a health condition using the one or more containerized inference models on an edge platform. At step 606, classified and anonymized data is transferred to a cloud server or a backend server for improving the one or more inference models. At step 608, the classified and anonymized data is downloaded from the cloud server or the backend server. At steps 610 and 612, the classified and anonymized data is collected and a curator pool is created. At step 614, the image curation process is mediated. At step 616, the moderator performs curation. At step 618, the curation and the moderator's recommendations for improving the one or more inference models are received. At step 620, the user receives an inference related to the health condition from the edge platform using the improved one or more containerized inference models.
[0082] In some embodiments, the method includes constructing the one or more inference models by: (i) receiving one or more gradable, curated image datasets related to an inference model to be constructed or updated, from a secure system, (ii) creating a curator pool including one or more graders for interpreting the one or more gradable image datasets and providing interpreted results to a moderator, who curates the interpreted results of each grader and thus generates the curated datasets required for inference model creation, (iii) generating a reference standard based on the curated interpreted results, and (iv) constructing each of the inference models for image classification based on the generated reference standard. The reference standard is updated periodically on receiving new curated and gradable image datasets from the secure system. The reference standard includes one or more instructions for modelling the inference model to arrive at a decision and recommendations associated with the decision. The one or more gradable image datasets are deposited in the secure system by one or more users. The secure system may be a blockchain system.
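One possible adjudication rule for deriving a reference-standard label from the graders' interpretations is majority voting, with ties escalated to the moderator, as sketched below; the tie-handling policy is an assumption for illustration only.

```python
from collections import Counter

def adjudicate(grades: list[str]) -> str:
    """Derive a reference-standard label from several graders' interpretations."""
    counts = Counter(grades)
    (top_label, top_count), *rest = counts.most_common()
    if rest and rest[0][1] == top_count:
        # No clear majority: escalate to the moderator for curation.
        return "ESCALATE_TO_MODERATOR"
    return top_label

print(adjudicate(["Level 2", "Level 2", "Level 3"]))  # -> Level 2
```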
[0083] FIG. 7, with reference to FIGS. 1 through 6, is a block diagram that illustrates a process of building containerized inference models according to an embodiment herein. A build container 702 constructs a built solution 706, such as containerized inference models, using one or more build tools 704A-N. The process of constructing the containerized inference models may be performed in a backend server or the server 116. The built solution 706 may be executed in a docker engine 708 for classification and detection of one or more health conditions. A containerized inference model includes an entire runtime environment: an application and all its dependencies, libraries, other binaries, and configuration files needed to run it, bundled into one package. Containerization allows for greater modularity. In some embodiments, the application is split into modules (such as the database, the application front end, and so on) using a microservices approach. Applications built using the microservices approach allow changes to be made to individual modules without having to rebuild the entire application. Each inference model for image classification may be split into a plurality of modules, and each of the plurality of modules may be containerized as microservices to execute the plurality of modules independently and concurrently for classification of the image to evaluate the one or more health conditions.
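A sketch of the server-side build step, using the Python Docker SDK to bundle a model and its dependencies into a container image, is shown below; the build-context path and image tag are hypothetical.

```python
import docker

def build_containerized_model(context_dir: str, tag: str) -> None:
    """Build and tag a containerized inference model on the server."""
    client = docker.from_env()
    # context_dir contains the model code, weights, dependencies, and a
    # Dockerfile bundling the entire runtime environment into one package,
    # ready for download by the portable edge device.
    image, _logs = client.images.build(path=context_dir, tag=tag)
    print(f"built {image.tags}")

# Hypothetical build context and tag, for illustration only.
build_containerized_model("./models/diabetic_retinopathy", "models/dr:latest")
```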
[0084] FIG. 8A, with reference to FIGS. 1 through 7, illustrates a front view of an exemplary portable edge device 800 for image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein. The front view of the portable edge device 800 includes a display 802, one or more buttons 804A-N on the display 802, a battery storage module 806, a handle 808, and a plug-in interface 810. The display 802 may be a touch screen. In some embodiments, the portable edge device 800 does not include a display. A companion mobile application or a web application may be used to control one or more parameters of the portable edge device 800, including, but not limited to, enabling the capture of an image of the subject, modifying the illumination source, the one or more imaging accessories, or the optical pathway, and enabling the download or change of the one or more inference models. The portable edge device 800 then autonomously performs image classification and detection of the one or more health conditions.
[0085] The one or more buttons 804A-N enable a user to perform one or more operations related to the portable edge device 800. The one or more operations include, but are not limited to, capturing an image and recapturing the image. The portable edge device 800 further includes an imaging device and a computing unit (not shown in FIG. 8A) for performing one or more operations of the portable edge device 800. The portable edge device 800 performs the embodiments described herein with reference to FIGS. 1 through 7. The portable edge device 800 may include one or more indicators such as, but not limited to, a green-light indicator and a red-light indicator. The portable edge device 800 may signal to the user or an operator with the green-light indicator when the quality of the image is classifiable. Similarly, the portable edge device 800 may signal with the red-light indicator when the quality of the image is unclassifiable.
[0086] FIG. 8B, with reference to FIGS. 1 through 8A, illustrates a diagonal view of the exemplary portable edge device 800 for image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein. The diagonal view of the portable edge device 800 includes the display 802, the one or more buttons 804A-N, the handle 808, a trigger 812 for capturing an image of a subject, an illumination source 814, a mirror 816 for reflecting light, a computing unit 818 to perform one or more operations of the portable edge device 800, an imaging device 820, an optical path 824, a forehead rest 826, and a slot 828 through which to view the subject for imaging. The portable edge device 800 performs the embodiments described herein with reference to FIGS. 1 through 7.
[0087] FIG. 9, with reference to FIGS. 1 through 8B, illustrates an overview of a system architecture 900 for a process of image-based evaluation of one or more health conditions of a subject or a target according to an embodiment herein. At step 902, an image of at least one part or region of a subject is captured. At step 904, the captured image is transferred for preprocessing. At step 906, one or more users receive the captured image from the imaging device and transfer it for preprocessing. At step 908, the one or more images are processed to determine their quality. At step 910, the one or more images are transferred to an embedded database or a backend server for storage if they are classifiable; if they are unclassifiable, steps 902 through 908 are repeated. At step 912, the one or more images are transferred for evaluation of the health conditions. At step 914, the one or more images are analyzed in a cloud environment using the one or more containerized inference models to evaluate the one or more health conditions. At step 916, a report with the analyzed results is transferred to an end device. At step 918, the one or more images are transferred to an inference engine at a remote location. The inference engine analyzes the one or more images using the one or more containerized inference models to evaluate the one or more health conditions. At step 920, a report with the analyzed results is transferred to a domain expert for assistive diagnosis. At step 922, the report with the analyzed results is transferred to a database. At step 924, the report with the analyzed results is transferred from the database to the end device. At step 926, the domain experts curate the report and transfer it to the inference engine at the remote location. At step 928, the curated report is converted into a binary format. At step 930, the binary format of the curated report is converted into a machine-readable format. At step 932, the machine-readable format of the curated report is stored in a backend server for training the one or more inference models. At step 934, containerized images of the one or more inference models are built and made available for download to the cloud environment, the remote location, or an edge device. At step 936, the containerized images of the one or more inference models are downloaded to the edge device for evaluating the one or more images. At step 940, a user reconfigures the edge device for classifying different health conditions.
[0088] FIG. 10, with reference to FIGS. 1 through 9, illustrates an alternative overview of a system architecture 1000 for image-based evaluation of one or more health conditions of the subject 102 or the target according to an embodiment herein. The system architecture 1000 includes an edge tier 1002, a platform tier 1006 that includes a service platform 1008 and an artificial intelligence (AI) layer 1010, and an enterprise tier 1012. The edge tier 1002 captures one or more images of at least one part or region of the subject 102, or other information, using an imaging device or any edge node, and transfers the one or more images to the platform tier 1006. The edge tier 1002 communicates with the platform tier 1006 through an access network. The platform tier 1006 receives, processes, and forwards control commands from the enterprise tier 1012 to the edge tier 1002. The platform tier 1006 consolidates, processes, and analyzes data flows from the edge tier 1002 and other tiers using the AI layer 1010. The platform tier 1006 provides management functions for devices and assets and also offers non-domain-specific services such as data query and analytics. The platform tier 1006 communicates with the enterprise tier 1012 through a service network and with the AI layer 1010 through a control network. The enterprise tier 1012 implements domain-specific applications and decision support systems and provides interfaces to end users, including operation specialists. The enterprise tier 1012 receives data flows from the edge tier 1002 and the platform tier 1006, and also issues control commands to the platform tier 1006 and the edge tier 1002.
[0089] FIG. 11, with reference to FIGS. 1 through 10, illustrates a method for evaluating one or more health conditions of at least one region of a subject or a target in an image using an artificial intelligence-based portable edge device 104 according to an embodiment herein. At step 1102, an image of at least one part or region of the subject 102 is acquired. The image of the subject may be acquired using an imaging device 106 of the portable edge device 104. In some embodiments, the image of the subject or target may be acquired from an external imaging source through a communication network. The subject may be a human, an animal, or a plant. The imaging device 106 of the portable edge device 104 includes one or more accessories for imaging different parts of the subject, as well as one or more illumination sources for illuminating the subject for imaging and one or more optical pathways. The one or more illumination sources may be reconfigured, remotely or locally, by modifying the wavelength, intensity, and brightness of at least one illumination source to optimize the one or more illumination sources for imaging different parts of the subject 102. The one or more illumination sources may be reconfigured based on the part or region of the subject 102 to be imaged, as sketched below.
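A minimal sketch of such per-region reconfiguration follows; the region names, wavelengths, and intensity values are illustrative assumptions only.

```python
# Illustrative per-region illumination presets; all names and values are assumed.
from dataclasses import dataclass

@dataclass
class IlluminationProfile:
    wavelength_nm: float  # emission wavelength of the illumination source
    intensity: float      # fraction of maximum output, 0.0-1.0
    brightness: float     # preview/display brightness, 0.0-1.0

PROFILES = {
    "retina": IlluminationProfile(wavelength_nm=590.0, intensity=0.4, brightness=0.7),
    "skin":   IlluminationProfile(wavelength_nm=460.0, intensity=0.8, brightness=0.9),
}

def reconfigure_illumination(region):
    """Select the illumination settings for the region about to be imaged."""
    return PROFILES[region]
```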
[0090] At step 1104, the quality of the image is determined by processing the image of the at least one region of the subject 102 or the target to indicate whether the quality of the image is unclassifiable. If the quality of the image is unclassifiable, the method repeats step 1102 to acquire the image again, until a classifiable image is obtained.
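The capture-and-check loop of steps 1102 and 1104 may be sketched as follows, where capture_image and quality_model are hypothetical stand-ins for the device's imaging and quality-assessment interfaces:

```python
# Minimal sketch of the acquire-and-check loop of steps 1102-1104.
# capture_image and quality_model are hypothetical device interfaces.
def acquire_classifiable_image(capture_image, quality_model, max_attempts=5):
    """Recapture until the quality model deems the image classifiable."""
    for attempt in range(1, max_attempts + 1):
        image = capture_image()                              # step 1102: acquire
        if quality_model.predict(image) == "classifiable":   # step 1104: check
            return image
        print(f"Attempt {attempt}: image unclassifiable, recapturing...")
    raise RuntimeError("no classifiable image obtained")
```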
[0091] At step 1106, one or more containerized inference models are accessed or downloaded from an application download service or the embedded database 200, based on the health condition to be evaluated or the type of the image to be evaluated. The application download service may include, but is not limited to, a cloud server, an application store server, a web server identified via a uniform resource locator (URL) address, and a file transfer protocol (FTP) service network server. The one or more inference models may be native applications, non-native applications, or containerized applications. In some embodiments, prebuilt docker images of the one or more inference models are prepared at a backend server and made available for download to the portable edge device.
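For illustration only, the selection of a prebuilt docker image from an application download service might be table-driven, as in the sketch below (assuming the Docker SDK for Python; the registry address and repository names are invented):

```python
# Sketch: choosing and pulling a prebuilt containerized model by (condition,
# image type). Registry address and repository names are invented.
import docker

MODEL_REGISTRY = {
    ("diabetic-retinopathy", "fundus"): "registry.example.com/models/dr-grading:1.0",
    ("glaucoma", "fundus"):             "registry.example.com/models/glaucoma:2.1",
}

def download_model(condition, image_type):
    client = docker.from_env()
    ref = MODEL_REGISTRY[(condition, image_type)]
    return client.images.pull(ref)  # fetch the containerized inference model
```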
[0092] At step 1108, the one or more inference models are executed in at least one of (i) a direct execution mode, in which the plurality of containerized inference models have been prebuilt for direct execution on an edge computing platform of the portable edge device, or (ii) a hardware accelerator execution mode, in which the plurality of containerized inference models have been prebuilt for execution on a hardware accelerator on the portable edge device. The method includes reconfiguring the health condition evaluating functionality of the portable edge device in real time by accessing the plurality of containerized inference models for image classification according to the type of health condition to be evaluated at a given time. The method includes prompting the user to recapture the image if the image is unclassifiable.
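A schematic dispatch between the two execution modes might look like the following sketch; both runner classes are hypothetical placeholders for containers prebuilt against the CPU and accelerator runtimes, respectively:

```python
# Hypothetical dispatch between direct and hardware-accelerator execution modes.
class DirectRunner:
    """Runs a container prebuilt for direct execution on the edge CPU platform."""
    def __init__(self, image):
        self.image = image
    def infer(self, data):
        return f"cpu:{self.image}({data})"

class HardwareAcceleratorRunner:
    """Runs a container prebuilt against the on-device accelerator runtime."""
    def __init__(self, image):
        self.image = image
    def infer(self, data):
        return f"accel:{self.image}({data})"

def run_model(image, data, accelerator_available):
    runner = HardwareAcceleratorRunner if accelerator_available else DirectRunner
    return runner(image).infer(data)
```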
[0093] At step 1110, the one or more health conditions are evaluated in the image of the at least one region, to assist an expert in decision making and recommendations, by providing the image to each of the containerized inference models for image classification and classifying the image by concurrently executing each of the containerized inference models to process the image for simultaneous detection of one or more features of one health condition, or simultaneous detection of the one or more health conditions, in the image. Each containerized inference model for image classification runs independently, which enables concurrent execution of the plurality of inference models even when they are implemented in different machine learning languages.
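One way to realize such concurrent, mutually independent execution is sketched below with a thread pool over the Docker SDK for Python; the container references, the read-only bind mount, and the assumption that each model image prints its classification result are all illustrative:

```python
# Sketch: concurrent execution of independent model containers on one image.
# Assumes each model image's default command reads /input/image.png and prints
# its classification; container references and paths are invented.
from concurrent.futures import ThreadPoolExecutor
import docker

def classify_concurrently(image_path, model_refs):
    client = docker.from_env()

    def run(ref):
        output = client.containers.run(
            ref,
            volumes={image_path: {"bind": "/input/image.png", "mode": "ro"}},
            remove=True,   # each container is isolated and discarded after use
        )
        return ref, output.decode()

    with ThreadPoolExecutor(max_workers=len(model_refs)) as pool:
        return dict(pool.map(run, model_refs))
```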
[0094] FIG. 12, with reference to FIGS. 1 through 11, illustrates a software architecture of an operating environment of a computing unit 1200, in accordance with the embodiments herein. The computing unit 1200 may be used for practicing the embodiments herein, with reference to FIGS. 1 through 11. The portable edge device 104 of FIG. 1 may use the computing unit 1200 for detecting one or more health conditions of a subject according to the embodiments herein. The operating environment includes an embedded operating system 1202, a communication link layer 1204, a communication network layer 1206, a communication transport layer 1208, an application layer 1210, an embedded web service 1212, a programming language (PL) environment 1214, and a docker engine 1216. The computing unit 1200 is communicatively connected to one or more user devices 1220A-N and a server 1222 through a network 1218. In some embodiments, the one or more user devices 1220A-N include, but are not limited to, computers, mobile devices, smartphones, personal digital assistants (PDAs), notebooks, tablets, smartwatches, internet of things (IoT) devices, connected vehicles, global positioning system (GPS) devices, any network-enabled device, and the like. The network 1218 may be a local area network (LAN), a wide area network (WAN), a public network, a private network, a proprietary network, a public switched telephone network (PSTN), the Internet, a wireless network, a virtual network, or any combination thereof.
[0095] The embedded operating system 1202 performs one or more basic tasks that include file management, memory management, process management, handling input and output, and controlling peripheral devices. The embedded operating system 1202 provides a hosting environment for computation of the one or more containerized inference models, storage of images, analysis of images through the one or more containerized inference models based on inference engines in real time, and communication of the results back to the server 1222. The embedded operating system 1202 may include Apple macOS, Microsoft Windows, Google's Android OS, a Linux operating system, or Apple iOS. The embedded operating system 1202 may be implemented on a variety of technologies, such as a System-on-Chip (SoC), an application-specific integrated circuit (ASIC), or a System-on-Module (SOM), as long as the appropriate mathematical computation libraries are implemented on the underlying target hardware.
[0096] Network stacks that include the communication link layer 1204, the communication network layer 1206, and the communication transport layer 1208 are used to communicate with the one or more user devices 1220A-N, the server 1222, and the like by sending and receiving data. The communication link layer 1204 may include IEEE 802.3, IEEE 802.11 a/ac/b/g/n, and GSM/LTE-4G. The communication network layer 1206 may include IP, IPv4/IPv6, and IPsec. The communication transport layer 1208 may include TCP or UDP.
[0097] The application layer 1210 initiates communication with the one or more user devices 1220A-N, the server 1222, and the like, using the communication link layer 1204, the communication network layer 1206, and the communication transport layer 1208 to receive and send data. The application layer 1210 enables a user to interact with the computing unit 1200 or any other remote device. The one or more inference models may be downloaded from the one or more user devices 1220A-N or the server 1222. The application layer 1210 may include HTTP/HTTPS, REST APIs, and web sockets.
[0098] The embedded web service 1212, or embedded web server, is used to control various hardware interfaces in the computing unit 1200, to download models from a list of pre-trained inference models, and to carry out other system maintenance and software configuration tasks. The embedded web service 1212 facilitates software reconfigurability of the computing unit 1200. The embedded web service 1212 controls reconfiguration of the functional components; downloading of inference models from a remote server or a smartphone application; management of local persistent storage for the inference models, images, and inference or classification results; execution of the inference models; and other administrative tasks.
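By way of example, a companion application might drive such an embedded web service over REST, as sketched below; the device address, endpoint paths, and response shape are assumptions rather than a documented API:

```python
# Hypothetical REST interaction with the embedded web service; the device
# address, endpoints, and response shape are assumptions, not a documented API.
import requests

DEVICE = "http://192.168.1.50:8080"

# List the pre-trained inference models available for download.
models = requests.get(f"{DEVICE}/api/models", timeout=5).json()

# Ask the embedded web service to download and install the first one.
resp = requests.post(f"{DEVICE}/api/models/download",
                     json={"name": models[0]["name"]}, timeout=30)
resp.raise_for_status()
```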
[0099] The programming language (PL) environment 1214 enables the user to deploy pre-built inference models that may be written in any language, such as Python, R, Java, or JavaScript, using the appropriate development tools. The PL environment 1214 enables the deployment of a native application and comprises the native libraries and packages on which the inference models have been pre-built. The docker engine 1216 facilitates the download of different types of inference models as container images and the execution of those inference models for detection of different diseases from a captured image or for analyzing different parts of an image. The docker engine 1216 orchestrates the concurrent execution of different container applications, ensuring that each container's execution environment is independent of the execution environments of the other containers. Sharing of data between collaborating containers, if required, is done either through shareable, locally available persistent storage or through the available communication protocols. Pre-built docker images of the inference engines may be prepared at the server 1222 and made available for download to the computing unit 1200. The application of the computing unit 1200 may be changed through one or more downloadable inference models.
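The shared-persistent-storage option for collaborating containers may be sketched with a named docker volume, as below; the image names are illustrative and each container's default command is assumed to read or write under /data:

```python
# Sketch: two collaborating containers share data through a named volume while
# keeping their execution environments isolated; image names are illustrative.
import docker

client = docker.from_env()
shared = client.volumes.create(name="inference-shared")

# A preprocessing container writes its output into the shared volume...
client.containers.run(
    "inference/preprocess:1.0",
    volumes={shared.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
# ...and a classification container reads the preprocessed image from it.
client.containers.run(
    "inference/dr-grading:1.0",
    volumes={shared.name: {"bind": "/data", "mode": "ro"}},
    remove=True,
)
```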
[0100] FIG. 13, with reference to FIGS. 1 through 12, illustrates a hardware architecture of a computing unit 1300, in accordance with the embodiments herein. This schematic drawing illustrates a representative hardware configuration of a server, portable edge device, computer system, or computing device for practicing the embodiments herein, with reference to FIGS. 1 through 12. The portable edge device 104 of FIG. 1 may use the computing unit 1300 for detecting one or more health conditions of a subject according to the embodiments herein. As shown, the computing unit 1300 includes a memory 1302, a central processing unit (CPU) 1304, and a hardware accelerator 1306 that are connected to an internal bus system 1308. The memory 1302, the CPU 1304, and the hardware accelerator 1306 are connected to a communication unit 1310 via the internal bus system 1308. The communication unit 1310 may include USB-C, Ethernet, Wi-Fi, Bluetooth, 4G-LTE, and GPS interfaces. The memory 1302 stores control logic (software) and data, and may take the form of random-access memory (RAM). A bridge 1312 connects the internal bus system 1308 to an expansion system bus 1314, which further connects the memory 1302, the CPU 1304, and the hardware accelerator 1306 to an optical pathway 1316 and a microcontroller 1318. The computing unit 1300 may include a flash memory for persistent storage, expansion storage (microSD), and a global positioning system (GPS) receiver for recording location information.
[0101] The computing unit 1300 may take the form of a desktop computer, a laptop computer, a server, a workstation, a game console, an embedded system, an edge computing platform, a smart device computing platform, or a cloud computing platform. Furthermore, the computing unit 1300 may take the form of various other devices including, but not limited to, a personal digital assistant (PDA) device, a mobile phone device, a smartphone, a smart television, etc. Additionally, although not shown, the computing unit 1300 may be coupled to a network (e.g., a telecommunications network, a local area network (LAN), a wireless network, a wide area network (WAN) such as the Internet, a peer-to-peer network, a cable network, or the like) for communication purposes through an I/O interface.
[0102] In some embodiments, the system and method generate metadata that identifies each image through a globally unique identifier (GUID) for each portable edge device and geo-location information (using the built-in GPS receiver), set up the communication channels (Wi-Fi, Bluetooth, GSM/LTE-4G), set up the one or more illumination sources for capturing images, and select, download, and execute the one or more pre-built inference models used for classification to detect the one or more diseases. Each image is stored with its metadata so that all the features of the image are always available for analysis and archival purposes.
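A minimal sketch of such a per-image metadata record follows; the field names and example values are assumptions drawn from the items listed above:

```python
# Illustrative per-image metadata record; field names and values are assumed.
import uuid
from datetime import datetime, timezone

def build_image_metadata(device_guid, latitude, longitude, channel, models):
    return {
        "image_id": str(uuid.uuid4()),
        "device_guid": device_guid,   # globally unique per portable edge device
        "geo_location": {"lat": latitude, "lon": longitude},  # built-in GPS
        "channel": channel,           # e.g. "wifi", "bluetooth", "lte"
        "models": models,             # pre-built inference models applied
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

meta = build_image_metadata("edge-device-0042", 12.9716, 77.5946,
                            "wifi", ["dr-grading:1.0"])
```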
[0103] The present disclosure, due to its attributes of portability and mobility as well as autonomous operation of the one or more inference models in near real time, may be used in low-resource community settings where periodic screening is an essential component of mass screening programs for the prevention of diseases through early detection.
[0104] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the appended claims.
CLAIMS:
1). A system (100) for evaluating one or more health conditions of at least one region of a subject (102) or a target by processing, using a plurality of containerized inference models for image classification, an image of at least one region of the subject (102) or the target, the system (100) comprising:
a portable edge device (104) that comprises
at least one communication link;
an embedded service that enables to configure or reconfigure one or more parameters of the portable edge device (104) through the at least one communication link;
a memory (108) that stores a set of instructions, data, and model weights; and
a processor (110) that is configured to execute the set of instructions to
determine a quality of an image of at least one region of the subject (102) or the target by processing the image to indicate if the quality of the image is unclassifiable, wherein the image is recaptured if the image is unclassifiable,
characterized in that, the processor (110) is configured to
access, by a user, a plurality of containerized inference models for image classification from at least one of an application download service or an embedded database (200), based on one or more health conditions to be evaluated and a type of image when the quality of the image is classifiable, wherein the health condition evaluating functionality of the portable edge device (104) is reconfigurable in real-time by accessing the plurality of containerized inference models for image classification according to a type of health condition to be evaluated at a given time;
execute the plurality of containerized inference models on at least one of (i) a direct execution mode in which the plurality of containerized inference models have been prebuilt for direct execution on an edge computing platform of the portable edge device (104) or (ii) a hardware accelerator execution mode in which the plurality of containerized inference models have been pre-built for execution on a hardware accelerator on the portable edge device (104); and
evaluate the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the plurality of containerized inference models for image classification, and classifying the image by concurrently executing each of the plurality of containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image, wherein each containerized inference model for image classification runs independently to concurrently execute the plurality of inference models that are implemented in different machine learning languages.


2). The system (100) as claimed in claim 1, wherein the portable edge device (104) comprises an imaging device (106) that is configured to capture a plurality of images of a plurality of regions of the subject (102) or the target, wherein the imaging device (106) comprises one or more illumination sources, one or more imaging accessories, and one or more optical pathways, wherein the imaging device (106) is reconfigured based on a region of the subject (102) or the target to be imaged at a given time by modifying at least one of wavelength, intensity or brightness of at least one illumination source or selecting at least one imaging accessory or modifying the one or more optical pathways, using the embedded service.
3). The system (100) as claimed in claim 1, wherein each inference model for image classification is split into a plurality of modules and each of the plurality of modules is containerized as microservices to execute the plurality of modules independently and concurrently for classification of the image to evaluate the one or more health conditions.


4). The system (100) as claimed in claim 1, wherein the system (100) comprises a server (116) that is configured to construct and containerize a plurality of inference models for image classification using docker composition tools.


5). The system (100) as claimed in claim 4, wherein the server (116) is configured to construct each of the inference model(s) for image classification by: (i) receiving one or more gradable, curated image datasets related to an inference model to be constructed or updated, from a secure system, (ii) creating a curator pool including one or more graders for interpreting the one or more gradable image datasets and providing interpreted results to a moderator for curating the interpreted results of each grader and thus generating the curated datasets required for the inference model creation, (iii) generating a reference standard based on the curated interpreted results, and (iv) constructing each of the inference model(s) for image classification based on the generated reference standard, wherein the reference standard is updated periodically on receiving new curated and gradable image datasets from the secure system.


6). A portable edge device (104) for evaluating one or more health conditions of at least one region of a subject (102) or a target by processing, using a plurality of containerized inference models for image classification, an image of the at least one region of the subject (102) or the target, the portable edge device (104) comprising:
at least one communication link;
an embedded service that enables to configure or reconfigure one or more parameters of the portable edge device (104) through the at least one communication link;
a memory (108) that stores a set of instructions, data, and model weights; and
a processor (110) that is configured to execute the set of instructions to
determine a quality of an image of at least one region of the subject (102) or the target by processing the image to indicate if the quality of the image is unclassifiable, wherein the image is recaptured if the image is unclassifiable,
characterized in that, the processor (110) is configured to
access, by a user, a plurality of containerized inference models for image classification from at least one of an application download service or an embedded database (200), based on one or more health conditions to be evaluated and a type of image when the quality of the image is classifiable, wherein the health condition evaluating functionality of the portable edge device (104) is reconfigurable in real-time by accessing the plurality of containerized inference models for image classification according to a type of health condition to be evaluated at a given time;
execute the plurality of containerized inference models on at least one of (i) a direct execution mode in which the plurality of containerized inference models have been prebuilt for direct execution on an edge computing platform of the portable edge device (104) or (ii) a hardware accelerator execution mode in which the plurality of containerized inference models have been pre-built for execution on a hardware accelerator on the portable edge device (104); and
evaluate the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the plurality of containerized inference models for image classification, and classifying the image by concurrently executing each of the plurality of containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image, wherein each containerized inference model for image classification runs independently to concurrently execute the plurality of inference models that are implemented in different machine learning languages.


7). A method for evaluating one or more health conditions of at least one region of a subject (102) or a target using a portable edge device (104) in an image of at least one region of the subject (102) or the target, wherein the portable edge device (104) processes, using a plurality of containerized inference models for image classification, the image, the method comprising:
determining a quality of an image of at least one region of the subject (102) or the target by processing the image to indicate if the quality of the image is unclassifiable, wherein the image is recaptured if the image is unclassifiable;
characterized in that, the method comprises
accessing, by a user, a plurality of containerized inference models for image classification from at least one of an application download service or an embedded database (200), based on one or more health conditions to be evaluated and a type of image when the quality of the image is classifiable, wherein the health condition evaluating functionality of the portable edge device (104) is reconfigurable in real-time by accessing the plurality of containerized inference models for image classification according to a type of health condition to be evaluated at a given time;
executing the plurality of containerized inference models on at least one of (i) a direct execution mode in which the plurality of containerized inference models have been prebuilt for direct execution on an edge computing platform of the portable edge device (104) or (ii) a hardware accelerator execution mode in which the plurality of containerized inference models have been pre-built for execution on a hardware accelerator on the portable edge device (104); and
evaluating the one or more health conditions in the image of the at least one region to assist an expert in decision making and recommendations by providing the image to each of the plurality of containerized inference models for image classification, and classifying the image by concurrently executing each of the plurality of containerized inference models to process the image for simultaneous detection of one or more features of one health condition or simultaneous detection of the one or more health conditions in the image, wherein each containerized inference model for image classification runs independently to concurrently execute the plurality of inference models that are implemented in different machine learning languages.


8). The method as claimed in claim 7, wherein the portable edge device (104) comprises an imaging device (106) that is configured to capture a plurality of images of a plurality of regions of the subject (102) or the target, wherein the imaging device (106) comprises one or more illumination sources, one or more imaging accessories, and one or more optical pathways.


9). The method as claimed in claim 8, wherein the method comprises reconfiguring the imaging device (106) based on a region of the subject (102) or the target to be imaged at a given time by modifying at least one of wavelength, intensity or brightness of at least one illumination source or selecting at least one imaging accessory or modifying at least one optical pathway using the embedded service.


10). The method as claimed in claim 7, wherein the method comprises splitting each inference model for image classification into a plurality of modules and containerizing each of the plurality of modules as microservices to execute the plurality of modules independently and concurrently for classification of the image to evaluate the one or more health conditions.


11). The method as claimed in claim 7, wherein the method comprises constructing and containerizing a plurality of inference models for image classification using docker composition tools.


12). The method as claimed in claim 11, wherein the method comprises constructing each of the inference models for image classification by: (i) receiving one or more gradable, curated image datasets related to an inference model to be constructed or updated, from a secure system, (ii) creating a curator pool including one or more graders for interpreting the one or more gradable image datasets and providing interpreted results to a moderator for curating the interpreted results of each grader and thus generating the curated datasets required for the inference model creation, (iii) generating a reference standard based on the curated interpreted results, and (iv) constructing each of the inference model(s) for image classification based on the generated reference standard, wherein the reference standard is updated periodically on receiving new curated and gradable image datasets from the secure system.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 202141041758-IntimationOfGrant10-01-2023.pdf 2023-01-10
2 202141041758-PatentCertificate10-01-2023.pdf 2023-01-10
3 202141041758-Written submissions and relevant documents [02-08-2022(online)].pdf 2022-08-02
4 202141041758-Correspondence to notify the Controller [15-07-2022(online)].pdf 2022-07-15
5 202141041758-US(14)-ExtendedHearingNotice-(HearingDate-18-07-2022).pdf 2022-07-15
6 202141041758-Correspondence to notify the Controller [22-06-2022(online)].pdf 2022-06-22
7 202141041758-Correspondence to notify the Controller [20-06-2022(online)].pdf 2022-06-20
8 202141041758-US(14)-HearingNotice-(HearingDate-23-06-2022).pdf 2022-06-07
9 202141041758-CLAIMS [03-06-2022(online)].pdf 2022-06-03
10 202141041758-CORRESPONDENCE [03-06-2022(online)].pdf 2022-06-03
11 202141041758-FER_SER_REPLY [03-06-2022(online)].pdf 2022-06-03
12 202141041758-OTHERS [03-06-2022(online)].pdf 2022-06-03
13 202141041758-FER.pdf 2022-03-24
14 202141041758-FORM 18A [10-03-2022(online)].pdf 2022-03-10
15 202141041758-FORM-9 [12-01-2022(online)].pdf 2022-01-12
16 202141041758-COMPLETE SPECIFICATION [05-01-2022(online)].pdf 2022-01-05
17 202141041758-CORRESPONDENCE-OTHERS [05-01-2022(online)].pdf 2022-01-05
18 202141041758-DRAWING [05-01-2022(online)].pdf 2022-01-05
19 202141041758-DRAWINGS [15-09-2021(online)].pdf 2021-09-15
20 202141041758-FORM 1 [15-09-2021(online)].pdf 2021-09-15
21 202141041758-FORM 1 [15-09-2021(online)]-1.pdf 2021-09-15
22 202141041758-POWER OF AUTHORITY [15-09-2021(online)].pdf 2021-09-15
23 202141041758-POWER OF AUTHORITY [15-09-2021(online)]-1.pdf 2021-09-15
24 202141041758-PROOF OF RIGHT [15-09-2021(online)].pdf 2021-09-15
25 202141041758-PROVISIONAL SPECIFICATION [15-09-2021(online)].pdf 2021-09-15
26 202141041758-PROVISIONAL SPECIFICATION [15-09-2021(online)]-1.pdf 2021-09-15
27 202141041758-STATEMENT OF UNDERTAKING (FORM 3) [15-09-2021(online)].pdf 2021-09-15
28 202141041758-STATEMENT OF UNDERTAKING (FORM 3) [15-09-2021(online)]-1.pdf 2021-09-15

Search Strategy

1 202141041758_searchE_24-03-2022.pdf

ERegister / Renewals

3rd: 21 Feb 2023 (From 15/09/2023 - To 15/09/2024)
4th: 21 Feb 2023 (From 15/09/2024 - To 15/09/2025)
5th: 21 Feb 2023 (From 15/09/2025 - To 15/09/2026)
6th: 21 Feb 2023 (From 15/09/2026 - To 15/09/2027)
7th: 21 Feb 2023 (From 15/09/2027 - To 15/09/2028)
8th: 21 Feb 2023 (From 15/09/2028 - To 15/09/2029)
9th: 21 Feb 2023 (From 15/09/2029 - To 15/09/2030)
10th: 21 Feb 2023 (From 15/09/2030 - To 15/09/2031)