
A System And Method For Automatic Classification Of Cells And Structures Of Microscopic Images Through Mobile Application

Abstract: A system and method is provided for automatically classifying and identifying the types of cells and structures using microscopic images captured with an application installed on a smart computing device. The captured image is pre-processed by normalizing the parameters of the captured image. The application is run to identify the patches of the cells and structures in the captured image to extract the features and attributes of the cells and structures in the captured image. The pre-trained machine learning models/algorithms are applied for classifying the cells and structures in the image based on the extracted features and attributes of the cells and structures. A report is generated on a server based on the cell classification. [FIG.1]


Patent Information

Application #
Filing Date
23 February 2016
Publication Number
45/2017
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
rprabhu@almtlegal.com
Parent Application

Applicants

SIGTUPLE TECHNOLOGIES PRIVATE LIMITED
2nd Floor, 9, 17A Main, 5th Block, Koramangala, Bengaluru - 560034, Karnataka, India

Inventors

1. ROHIT KUMAR PANDEY
Flat B3, Clivia, Royale Habitat Apartment, HSR Layout, Sector 2, Bengaluru - 560102, Karnataka, India
2. APURV ANAND
405, A3 Tower, Ganga Block, National Games Village, Bengaluru - 560047, Karnataka, India
3. BHARATH CHELUVARAJU
A-201, Gouthami Comforts Apartments, Basapura Main Road, Near Dream Paradise Layout, Electronic City, Bengaluru - 560100, Karnataka, India
4. TATHAGATO RAI DASTIDAR
Flat 217 Vineyard, 22 Heerachand Road Cox Town, Bangalore - 560005, Karnataka, India

Specification

DESC:CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This patent application is related to and claims the benefit of priority from the Indian Provisional Patent Application with Serial No. 201641006272 titled “A SYSTEM AND METHOD FOR AUTOMATIC CLASSIFICATION OF CELLS AND STRUCTURES OF MICROSCOPIC IMAGES THROUGH MOBILE APPLICATION”, filed on February 23, 2016, the contents of which are incorporated in their entirety by way of reference.

A) TECHNICAL FIELD
[0002] The embodiments herein are generally related to classification of different types of cells and structures. The embodiments herein are particularly related to a system and method for an image or video acquisition and classification of cells and structures via a mobile application. The embodiments herein are more particularly related to a system and method for extraction and analysis of samples under a microscope.

B) BACKGROUND OF THE INVENTION
[0003] In medical diagnosis, classification of cells plays a very important role. Traditionally, classification of different cells in a sample is performed by visual microscopic examination. The visual microscopic examination is performed for a quantitative and qualitative analysis of blood samples for diagnosing several diseases. The visual microscopic examination performed manually is tedious, time consuming and susceptible to human error.
[0004] Currently, several techniques for automatic analysis and classification of cells and structures are available. The automatic techniques allow identification and precise counting of the several types of cells in a sample according to the physical and chemical characteristics of the cells. However, there are greater chances of misidentification of cells with morphological abnormalities in the automatic techniques. Therefore, human intervention is required in many cases for better identification and segmentation of the cells. This step is crucial, as the accuracy of the subsequent steps, including the extraction and classification of cell characteristics, depends on the correct identification and segmentation of cells.
[0005] Moreover, most of the automatic machines for analysis and classification of cells are quite expensive. Therefore, such automatic machines are not affordable for most laboratories. Further, in most of the automated machines, several stages of processing are performed on a high performance server having high computational capacity.
[0006] Hence, there is a need for an efficient system and method for an automatic classification of several types of cells and structures using microscopic images captured with a smart computing device. There is also a need for a system and method for extraction and analysis of samples under a microscope. Further, there is a need to automate the different stages of the process efficiently, thereby making the system error free.
[0007] The above-mentioned shortcomings, disadvantages and problems are addressed herein, which will be understood by reading and studying the following specification.

C) OBJECT OF THE INVENTION
[0008] The primary object of the embodiments herein is to provide a method and system for an automatic classification of several types of cells and structures using microscopic images captured with an application installed on a smart computing device.
[0009] Another object of the embodiments herein is to provide a system and method for extraction and analysis of samples under a microscope.
[0010] Yet another object of the embodiments herein is to provide a method and system for an automated image or video acquisition, image preprocessing, image identification process and extraction of features and classification of cells and structures with a smart computing device.
[0011] Yet another object of the embodiments herein is to provide a system and method to capture image or video manually using an application installed on a smart computing device.
[0012] Yet another object of the embodiments herein is to provide a system and method to capture the image or video of a slide kept under a microscope automatically by controlling the movement of the stage with a robot using an application installed on a smart computing device.
[0013] Yet another object of the embodiments herein is to provide a system and method for executing image pre-processing operation including normalization of multiple parameters of the image using an application installed on a smart computing device.
[0014] Yet another object of the embodiments herein is to provide a system and method for extraction of multiple patches of interest containing different types of cells with an application installed on a smart computing device, thereby ensuring that all the true positives are identified.
[0015] Yet another object of the embodiments herein is to provide a system and method for the classification and subsequent identification of different types of cells and structures using a smart computing device or a server using machine-learning techniques.
[0016] Yet another object of the embodiments herein is to provide a system and method for generating reports based on the cell classification results at a high performance server.
[0017] These and other objects and advantages of the embodiments herein will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.

D) SUMMARY OF THE INVENTION

[0018] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
[0019] The various embodiments herein provide a system and method for an extraction and analysis of a plurality of types of cells and structures using microscopic images captured with an application installed on a smart computing device. The smart computing device includes, but is not limited to, smart phones and tablet devices. The system performs the steps of image acquisition, image processing, extraction and classification of cells and structures on the smart computing device and report generation on a server. The image of a specimen kept on a slide under the microscope is captured by the application installed in the smart computing device. The captured image is preprocessed by normalizing the plurality of parameters of the captured image. Further, the patches of the plurality of the cells and structures in the captured image are identified and the plurality of features and attributes of the cells and structures in the captured image are extracted. The cells are classified into a plurality of classes based on the features and attributes of the cells in the extracted patches of the image. The classification is performed by using a plurality of pre-trained machine learning models on either the smart computing device or the server. Further, the report having the diagnosis information is generated based on the results obtained from the classification module.
[0020] According to one embodiment herein, the smart computing device further comprises an application for capturing the plurality of images to digitize the sample observed through the microscope.
[0021] According to one embodiment herein, a system for extraction and analysis of cells and structures in a sample is disclosed. The system comprises a smart computing device and a server. The smart computing device is configured to extract features and attributes of the cells and the structures of interest in the sample observed through a microscope. The smart computing device is configured to extract the features and the attributes from a plurality of images of samples captured and processed using the smart computing device. The server is configured to analyze the features and the attributes of the cells and the structures of interest extracted for generating reports, wherein the analysis of features and the attributes of the cells and the structures of interest is performed by executing pre-trained machine learning models for classifying the cells and the structures into a plurality of pre-defined classes.
[0022] According to one embodiment herein, the smart computing device further comprises an image acquisition module, an image processing module, an optional extraction module and an optional classification module. The image acquisition module runs on a processor in the smart computing device and is configured to capture the plurality of images or videos of the sample observed through a microscope. The plurality of images are captured using an in-built camera in the smart computing device. The image-processing module runs on the processor in the smart computing device and is configured to process the plurality of captured images or videos by performing normalization and image quality assessment. The optional extraction module runs on the processor in the smart computing device and is configured to extract the features and the attributes of cells and structures of interest in the sample. The extraction is performed by executing an extraction logic based on the type of the cells and the structures of interest. The optional classification module is run on the processor in the smart computing device and is configured to classify the plurality of the cells and the structures into pre-defined classes.
[0023] According to one embodiment herein, the smart computing device is selected from a group consisting of smart phones and tablet devices.
[0024] According to one embodiment herein, the smart computing device further comprises an application installed for activating the image acquisition module, the image processing module, the optional extraction module and the optional classification module.
[0025] According to one embodiment herein, the image-processing module is configured to perform normalization and image quality assessment of the captured images by standardizing a plurality of parameters of the camera for ensuring same quality of consecutive images captured by the camera. The plurality of parameters includes but is not limited to auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and colour temperature settings. A plurality of image characteristics of the captured images is adjusted to be in a permissible range for ensuring a desired quality of the plurality of captured images. The plurality of image characteristics includes but is not limited to blur, sharpness and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, colour profile and tone of image. A plurality of Digital Image Processing (DIP) techniques is applied for normalizing the color scheme and contrast of the captured image. The plurality of DIP techniques includes but is not limited to histogram equalization, blur detection, and similarity detection techniques.
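Two of the DIP techniques named above, histogram equalization and blur detection, can be sketched over an 8-bit grayscale image array as follows. This is an illustrative example only and not part of the specification; the function names and the Laplacian-variance blur measure are assumptions:

```python
import numpy as np

def equalize_histogram(gray):
    """Spread the intensity histogram of an 8-bit grayscale image over the
    full 0-255 range, a standard normalization step for colour and contrast."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each occupied intensity level through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]

def blur_score(gray):
    """Variance of a 4-neighbour Laplacian response; a low score suggests a
    blurred field of view that should be re-captured."""
    g = gray.astype(np.float64)
    lap = (-4.0 * g
           + np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)
           + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1))
    return float(lap.var())
```

A capture application could, for instance, reject a frame whose `blur_score` falls below an empirically chosen threshold before equalizing and uploading it.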
[0026] According to one embodiment herein, the features and the attributes extracted by the smart computing device include but are not limited to a density of cells in the image, size and areas of the image under a plurality of cell types, and color attributes of the plurality of types of patches of interest.
[0027] According to one embodiment herein, the smart computing device is configured to classify the cells and the structures based on the type of the cells and structures of interest.
[0028] According to one embodiment herein, the smart computing device is configured to upload the extracted features and attributes of the cells and the structures, extracted patches of cells and structures and the classification of the cells and the structures to the server.
[0029] According to one embodiment herein, the server further comprises an Application Programming interface (API) Gateway for exposing APIs to receive the uploads from the smart computing device.
[0030] According to one embodiment herein, the server further comprises a classification module, an extraction module and a report generation module. The classification module is run on a hardware processor in a computer system and is configured to classify the extracted cells and structures into predefined classes using an artificial intelligence platform. The artificial intelligence platform is configured to analyze the images in real time and batch mode using a list of procedures. The report generation module is run on a hardware processor in a computer system, and is configured to generate the report based on the analysis during classification of extracted features and attributes using a custom logic.
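The server-side classification and report collation described above can be sketched as follows. The specification does not fix a model family or a label set, so a small pre-trained linear softmax classifier and illustrative blood-cell class labels stand in here; all names, weights and labels are assumptions:

```python
import numpy as np

CLASSES = ["neutrophil", "lymphocyte", "monocyte"]  # illustrative labels only

def classify_patch(features, weights, bias):
    """Score an extracted feature vector (e.g. density, area, colour
    attributes) against each pre-defined class via a linear softmax."""
    logits = features @ weights + bias
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return CLASSES[int(np.argmax(probs))], probs

def metrics_section(patch_labels):
    """Collate per-patch labels into the metrics section of the report."""
    total = len(patch_labels)
    return {c: patch_labels.count(c) / total for c in CLASSES}
```

In practice the weights and bias would be loaded from the pre-trained model rather than defined inline.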
[0031] According to one embodiment herein, the server is configured to publish the generated report on a webpage or a user interface of the smart computing device using APIs in the API gateway.
[0032] According to one embodiment herein, the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
[0033] According to an embodiment herein, both the optional classification module provided in the smart computing device and a classification module provided in the server are trained in a training phase with machine learning models. The training of the optional classification module and the classification module does not take place during the process explained in the course of the present invention; the optional classification module and the classification module are pre-trained using artificial intelligence models. During the classification of the cells, depending on the case, the entire classification is performed in the optional classification module of the smart computing device after the extraction of features and attributes from the image by the optional extraction module; otherwise, the entire classification is performed with the classification module in the server, when the extracted data is sent directly to the server. When the entire classification is performed in the optional classification module in the smart computing device, then the classified data is sent to the server for collating the same to generate the reports using the report generation module.
[0034] According to an embodiment herein, the server is configured to perform the complete classification using the classification module along with report generation for cases when the extracted data is directly sent to the server from the optional extraction module without any classification taking place in the optional classification module.
[0035] According to an embodiment herein, the classification of cells is partially performed in the optional classification module and the result of the partly completed classification is sent to the server for further classification by the classification module.
[0036] According to an embodiment herein, an extraction module is provided in the server. The extraction is performed after performing an image processing operation with the image processing module. When the extraction operation is performed in the smart computing device, then the classification is carried out either with the optional classification module in the smart computing device or with the classification module in the server or the classification is partially performed with the optional classification module and the remaining classification process is performed in the classification module, depending on the case. When the extraction is carried out directly in the server, then the complete classification is also done in the server using the classification module. The report is generated after the completion of the classification of the extracted data.
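The three routing cases described in this and the preceding paragraphs, namely classification fully on the device, fully on the server, or split between the two, can be sketched as follows. All function names and stub behaviours here are hypothetical:

```python
def device_classify(items):
    # Stub for the optional classification module on the smart computing device.
    return [("coarse", x) for x in items]

def server_classify(items):
    # Stub for the classification module on the server.
    return {"classified": len(items)}

def server_collate(labels):
    # Stub for the report generation module collating device-side results.
    return {"collated": len(labels)}

def run_classification(extracted, on_device, partial=False):
    """Route extracted data through one of the three claimed cases."""
    if on_device and not partial:
        return server_collate(device_classify(extracted))   # fully on device
    if on_device and partial:
        return server_classify(device_classify(extracted))  # split device/server
    return server_classify(extracted)                       # fully on server
```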
[0037] According to an embodiment herein, the extraction process is partially performed in the smart computing device and the remaining extraction operation is performed in the server.
[0038] According to one embodiment herein, a method for extraction and analysis of cells and structures in a sample is disclosed. The method comprises capturing a plurality of images of the sample observed through a microscope using an application installed in a smart computing device. The plurality of captured images is processed by performing normalization and image quality assessment using the application. The features and attributes of cells and structures of interest in the sample, and image patches containing the extracted cells and structures of interest, are extracted using the application in the smart computing device. The extraction process is performed by executing an extraction logic based on the type of the cells and the structures of interest. The extracted cells and structures are analyzed to identify and classify the cells and structures into pre-defined classes by running a hierarchy of artificial intelligence models in an artificial intelligence platform in a server. The statistical parameters are calculated for suggesting abnormal conditions in the sample based on the output of classification and the extracted features and attributes of the cells and structures. The report is generated by collating the statistical parameters and suggested abnormal conditions using a custom logic in the server.
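The method steps above, capture, preprocessing, extraction, classification and report generation, can be sketched end to end as follows. Every stage is a deliberately simplified stub with assumed names, standing in for the corresponding module rather than reproducing it:

```python
from collections import Counter

def preprocess(frame):
    # Normalization stub: clamp pixel intensities into the permissible range.
    return [min(max(px, 0), 255) for px in frame]

def extract_patches(frame):
    # Extraction-logic stub: treat bright pixels as patches of interest.
    return [px for px in frame if px > 128]

def classify(patch):
    # Pre-trained-model stub: a threshold stands in for the AI platform.
    return "structure" if patch > 200 else "cell"

def build_report(labels):
    # Collate statistics and suggest abnormal conditions via custom logic.
    counts = Counter(labels)
    report = {"metrics": dict(counts)}
    if counts.get("structure", 0) > counts.get("cell", 0):
        report["suggestions"] = ["abnormal structure density"]
    return report

def analyze_sample(frames):
    """Capture -> preprocess -> extract -> classify -> report."""
    patches = [p for f in frames for p in extract_patches(preprocess(f))]
    return build_report([classify(p) for p in patches])
```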
[0039] According to one embodiment herein, the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
[0040] According to one embodiment herein, the method further comprises reviewing the generated report on a webpage or a user interface of the smart computing device.
[0041] According to one embodiment herein, the method further comprises uploading the extracted features and attributes of cells and structures of interest in the sample and the image patches containing extracted cells and structures of interest from the smart computing device to the server.
[0042] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.

E) BRIEF DESCRIPTION OF THE DRAWINGS
[0044] The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
[0045] FIG. 1A illustrates a block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
[0046] FIG. 1B illustrates a hardware block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
[0047] FIG. 2 illustrates a flowchart explaining a method for an automatic classification of the types of cells and structures using an application installed on a smart computing device, in accordance with one embodiment herein.
[0048] FIG. 3 illustrates a flowchart explaining a method for extraction and analysis of samples under a microscope, in accordance with one embodiment herein.
[0049] Although the specific features of the embodiments herein are shown in some drawings and not in others, this is done for convenience only, as each feature may be combined with any or all of the other features in accordance with the embodiments herein.

F) DETAILED DESCRIPTION OF THE INVENTION
[0050] In the following detailed description, a reference is made to the accompanying drawings that form a part hereof, and in which the specific embodiments that may be practiced are shown by way of illustration. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments, and it is to be understood that other changes may be made without departing from the scope of the embodiments. The following detailed description is therefore not to be taken in a limiting sense.
[0051] The various embodiments herein provide a system and method for an extraction and analysis of a plurality of types of cells and structures using microscopic images captured with an application installed on a smart computing device. The smart computing device includes but is not limited to smart phones and tablet devices. The system performs the steps of image acquisition, image processing, extraction and classification of cells and structures on the smart computing device and report generation on a server. The image or video of a specimen kept on a slide under the microscope is captured by the application installed in the smart computing device. The captured image or video is preprocessed by normalizing the plurality of parameters of the captured image or video. Further, the patches of the plurality of the cells and structures in the captured image or video are identified and the plurality of features and attributes of the cells and structures in the captured image are extracted. The cells are classified into a plurality of classes based on the features and attributes of the cells in the extracted patches of the image. The classification is performed by using a plurality of pre-trained machine learning models on either the smart computing device or the server. Further, the report having the diagnosis information is generated based on the results obtained from the classification module.
[0052] According to one embodiment herein, a system for extraction and analysis of cells and structures in a sample is disclosed. The system comprises a smart computing device and a server. The smart computing device is configured to extract features and attributes of the cells and the structures of interest in the sample observed through a microscope. The smart computing device is configured to extract the features and the attributes from a plurality of images or videos of samples captured and processed using the smart computing device. The server is configured to analyze the features and the attributes of the cells and the structures of interest extracted for generating reports, wherein the analysis of features and the attributes of the cells and the structures of interest is performed by executing pre-trained machine learning models for classifying the cells and the structures into a plurality of pre-defined classes.
[0053] According to one embodiment herein, the smart computing device further comprises an application for capturing the plurality of images or videos to digitize the sample observed through the microscope.
[0054] According to one embodiment herein, the smart computing device further comprises an image acquisition module, an image processing module, an optional extraction module and an optional classification module. The image acquisition module runs on a processor in the smart computing device and is configured to capture the plurality of images or videos of the sample observed through a microscope. The plurality of images or videos are captured using an in-built camera in the smart computing device. The image-processing module runs on the processor in the smart computing device and is configured to process the plurality of captured images or videos by performing normalization and image quality assessment. The optional extraction module runs on the processor in the smart computing device and is configured to extract the features and the attributes of cells and structures of interest in the sample. The extraction is performed by executing an extraction logic based on the type of the cells and the structures of interest. The optional classification module is run on the processor in the smart computing device and is configured to classify the plurality of the cells and the structures into pre-defined classes.
[0055] According to one embodiment herein, the features and the attributes extracted by the smart computing device include but are not limited to a density of cells in the image, size and areas of the image under a plurality of cell types, and color attributes of the plurality of types of patches of interest.
[0056] According to one embodiment herein, the smart computing device is selected from a group consisting of smart phones and tablet devices.
[0057] According to one embodiment herein, the smart computing device further comprises an application installed for activating the image acquisition module, the image processing module, the optional extraction module and the optional classification module.
[0058] According to one embodiment herein, the smart computing device is configured to classify the cells and the structures based on the type of the cells and structures of interest.
[0059] According to one embodiment herein, the image-processing module is configured to perform normalization and image quality assessment of the captured images and videos by standardizing a plurality of parameters of the camera for ensuring same quality of consecutive images captured by the camera. The plurality of parameters include but are not limited to auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and colour temperature settings. A plurality of image characteristics of the captured images or videos is adjusted to be in a permissible range for ensuring a desired quality of the plurality of captured images. The plurality of image characteristics includes but is not limited to blur, sharpness and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, color profile and tone of image. A plurality of Digital Image Processing (DIP) techniques is applied for normalizing the color scheme and contrast of the captured image. The plurality of DIP techniques includes but is not limited to histogram equalization, blur detection, and similarity detection techniques.
[0060] According to one embodiment herein, the smart computing device is configured to upload the extracted features and attributes of the cells and the structures, extracted patches of cells and structures and the classification of the cells and the structures to the server.
[0061] According to one embodiment herein, the server further comprises an Application Programming interface (API) Gateway for exposing APIs to receive the uploads from the smart computing device.
[0062] According to one embodiment herein, the server further comprises an extraction module, a classification module, and a report generation module. The classification module is run on a hardware processor in a computer system and is configured to classify the extracted cells and structures into predefined classes using an artificial intelligence platform. The artificial intelligence platform is configured to analyze images in real time or batch mode using a list of procedures. The report generation module is run on a hardware processor in a computer system, and is configured to generate the report based on the analysis during classification of extracted features and attributes using a custom logic.
[0063] According to one embodiment herein, the server is configured to publish the generated report on a webpage or a user interface of the smart computing device using APIs in the API gateway.
[0064] According to one embodiment herein, the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
[0065] According to an embodiment herein, both the optional classification module provided in the smart computing device and a classification module provided in the server are trained in a training phase with machine learning models. The training of the optional classification module and the classification module does not take place during the process explained in the course of the present invention; the optional classification module and the classification module are pre-trained using artificial intelligence models. During the classification of the cells, depending on the case, the entire classification is performed in the optional classification module of the smart computing device after the extraction of features and attributes from the image by the optional extraction module; otherwise, the entire classification is performed with the classification module in the server, when the extracted data is sent directly to the server. When the entire classification is performed in the optional classification module in the smart computing device, then the classified data is sent to the server for collating all the data to generate the reports using the report generation module.
[0066] According to an embodiment herein, the server is configured to perform the complete classification using the classification module along with report generation for cases when the extracted data is directly sent to the server from the optional extraction module without any classification taking place in the optional classification module.
[0067] According to an embodiment herein, the classification of cells is partially performed in the optional classification module and the result of the partly completed classification is sent to the server for further classification by the classification module.
[0068] According to an embodiment herein, an extraction module is provided in the server. The extraction is performed after an image processing operation is carried out with the image processing module in the smart computing device or the server. When the extraction operation is performed in the smart computing device, then the classification is carried out either with the optional classification module in the smart computing device, or with the classification module in the server, or the classification is partially performed with the optional classification module and the remaining classification process is performed in the classification module, depending on the case. When the extraction is carried out directly in the server, then the complete classification is also done in the server using the classification module. The report is generated after the completion of the classification of the extracted data.
[0069] According to an embodiment herein, the extraction process is partially performed in the smart computing device and the remaining extraction operation is performed in the server.
[0070] According to one embodiment herein, a method for extraction and analysis of cells and structures in a sample is disclosed. The method comprises capturing a plurality of images or videos of the sample observed through a microscope using an application installed in a smart computing device. The plurality of captured images or videos is processed by performing normalization and image quality assessment using the application. The features and attributes of the cells and structures of interest in the sample, along with image patches containing the extracted cells and structures of interest, are extracted using the application in the smart computing device. The extraction process is performed by executing an extraction logic based on the type of the cells and the structures of interest. The extracted cells and structures are analyzed to identify and classify the cells and structures into pre-defined classes by running a hierarchy of artificial intelligence models in an artificial intelligence platform in a server. The statistical parameters are calculated for suggesting abnormal conditions in the sample based on the output of classification and the extracted features and attributes of the cells and structures. The report is generated by collating the statistical parameters and the suggested abnormal conditions using a custom logic in the server.
[0071] According to one embodiment herein, the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
[0072] According to one embodiment herein, the method further comprises reviewing the generated report on a webpage or a user interface of the smart computing device.
[0073] According to one embodiment herein, the method further comprises uploading the extracted features and attributes of cells and structures of interest in the sample and the image patches containing extracted cells and structures of interest from the smart computing device to the server.
[0074] According to an embodiment herein, a system is provided for an automatic classification of the plurality of the types of cells and structures using images or videos captured by the microscope with an application installed on a smart computing device. The system comprises an image acquisition module, an image pre-processing module, an optional extraction module, an optional classification module, an extraction module, a classification module and a report generation module. The image acquisition module is coupled to a camera in-built on the smart computing device. The application installed on the smart computing device is run and configured to initiate the analysis. On activating the application, an image capturing mode is selected. The application is run and configured to capture the image or video through the in-built camera. Once the image or video is captured, the image pre-processing module is configured to normalize the captured image or video by standardizing the camera parameters and performing various digital signal-processing techniques on the captured image. The image processing is performed on the smart computing device.
[0075] The optional extraction module installed in the smart computing device and the extraction module of the server are configured to initially identify the patches containing the several types of cells and structures in the captured image. Further, the optional extraction module and the extraction module are configured to extract the features and attributes of the cells and structures in the captured image. The optional classification module and the classification module are configured to employ a plurality of pre-trained machine learning models to classify the cells and structures based on the extracted features and attributes. Further, when the extracted features of the cells, the extracted patches and the results obtained during classification using the optional classification module are uploaded from the smart computing device to the server, the report generation module of the server generates analytical reports based on the data uploaded by the application installed in the smart computing device.
[0076] According to an embodiment herein, a method is provided for an automatic classification of the types of cells and structures using the enhanced images or videos captured using a microscope with the application installed on a smart computing device. The method involves activating the application installed in the smart computing device by a user. The application communicatively coupled to the smart computing device with a built-in camera is activated to select an image capture mode in the smart computing device. The image capture mode is activated to capture the image or video of a specimen kept on a slide under the microscope. Once the image or video is captured, the application is configured to initiate the preprocessing of the captured image. The preprocessing of the application includes standardizing a plurality of parameters of the camera and processing the image or video using digital processing techniques. Further, the patches of the plurality of the cells are identified from the image or video to extract the features and attributes of the cells. The extracted features and attributes are utilized for classifying the cells into a plurality of types using a plurality of pre-trained machine learning models. Further, a report is generated based on the extracted features, attributes and classification of the cells.
[0077] FIG. 1A illustrates a block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein. FIG. 1B illustrates a hardware block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
[0078] With respect to FIG. 1A-FIG. 1B, the system for extraction and analysis of samples under a microscope is disclosed. The system comprises a smart computing device 102 and a server 112. The smart computing device captures and digitizes the sample observed through the microscope. The examples of the smart computing device include but are not limited to smart phones and tablet devices. The smart computing device 102 comprises a camera 116, an application 118, a processor 120, a storage memory 122 and an operating system 124. The smart computing device 102 is capable of communicating using short range communication protocols such as Bluetooth. The application 118 is installed in the smart computing device 102. The application 118 enables the digitization of samples observed under a microscope. Further, the processor 120 is configured to execute the steps of extraction and classification. The smart computing device 102 comprises a storage device 122 configured to store instructions, algorithms and software models to be executed by the processor 120. The examples of the operating system 124 include but are not limited to Google Android, Apple iOS etc. The smart computing device 102 comprises an image acquisition module 104, an image processing module 106, an optional extraction module 108a and an optional classification module 110a.
[0079] The image acquisition module 104 is coupled to the inbuilt camera 116 provided on the smart computing device 102. The smart computing device 102 is attached to the eyepiece of the microscope. The smart computing device is attached to the microscope using a smart computing device holder. The smart computing device holder comprises a receptacle capable of holding the smart computing device 102. The smart computing device holder enables the user to align the camera 116 and the eyepiece of the microscope. The receptacle on the smart computing device holder aligns the center of the camera 116 and the center of the eyepiece automatically. The smart computing device holder further enables a user to position the camera 116 at a proper distance away from the eyepiece of the microscope. In order to achieve the proper distance, the receptacle on the smart computing device holder is moved forward and backward along a rail running through the smart computing device holder.
[0080] Initially, the user is enabled to activate the application 118 on the smart computing device 102. The application 118 is run on the smart computing device and configured to select or activate an image capture mode. The camera 116 is adjusted to focus on the image of a sample kept on a slide under the microscope. The captured image is displayed as a split screen image on the smart computing device 102. The split screen view comprises a full field view and an enlarged view. The user provides commands based on the full field view through the application 118 to a robot to adjust and move the slide to a particular position. The robot retrofitted to the microscope is configured to adjust a movement of the slide along the X, Y and Z axis, thereby moving the slide to a desired position. The robot receives the commands from the application 118 using a short range communication protocol, wherein said short range communication protocol can be Bluetooth. Further, the voice or gesture commands are provided based on the enlarged view through the application to capture the images or videos.
[0081] The captured image or video is further processed by the image processing module 106 in the smart computing device 102. The video is sampled into a set of images based on the “frames-per-second” captured by the camera. The image processing module 106 is controlled by the application 118 in the smart computing device 102. The image processing module 106 is configured to run on the processor 120. The image processing module 106 is configured to normalize the captured image in order to standardize the image quality and color scheme. The normalization is performed to ensure that the quality of the consecutive images is independent of the changes in the lighting conditions, slide color scheme, camera settings of the smart computing device camera etc. The image processing module 106 is configured for performing normalization and image quality assessment based on a plurality of characteristics of the image. The plurality of characteristics of the image includes blur/sharpness/focus of the image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of the image, and colour profile and tone of the image. The plurality of characteristics is adjusted to be within a permissible range to ensure that the captured image is of desired quality for analysis. The permissible range for the plurality of characteristics depends on the types of cells and structures identified. The application 118 of the smart computing device 102 is configured to download the permissible range from the server 112 periodically, thereby ensuring the quality of the captured images.
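The quality assessment described above can be illustrated with the following Python sketch using NumPy. The metric names, the Laplacian-variance blur measure, and the permissible ranges are illustrative assumptions, not values from this specification; in the system described, the permissible ranges would be downloaded periodically from the server.

```python
import numpy as np

# Hypothetical permissible ranges, assumed for illustration only.
PERMISSIBLE = {
    "brightness": (60.0, 200.0),        # mean grey level
    "contrast": (20.0, 255.0),          # standard deviation of grey levels
    "sharpness": (50.0, float("inf")),  # Laplacian variance; low => blurred
}

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian response; a simple blur/focus measure."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def assess_quality(gray):
    """Return True only when every measured characteristic is in its permissible range."""
    measured = {
        "brightness": float(gray.mean()),
        "contrast": float(gray.std()),
        "sharpness": laplacian_variance(gray),
    }
    return all(lo <= measured[name] <= hi
               for name, (lo, hi) in PERMISSIBLE.items())
```

Under these assumed thresholds, a uniformly grey frame fails the contrast and sharpness checks, while a well-lit, textured field of view passes.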
[0082] The preprocessing operation of the image is carried out in two steps. Firstly, a plurality of parameters of the camera 116 is standardized. The plurality of parameters include auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and color temperature settings. The predefined setting of the plurality of parameters helps to ensure same quality for the consecutive images captured by the camera 116. Therefore, the user is enabled to capture the images without any additional processing by the camera chipset of the smart computing device 102.
[0083] Secondly, the image processing module 106 is configured to apply a plurality of Digital Image Processing (DIP) techniques for normalizing the color scheme and contrast of the captured image. Factors such as the type and quality of the glass slides and the staining techniques applied result in changes to the color scheme and contrast of the captured image. The plurality of DIP techniques for normalization includes but is not limited to histogram equalization, blur detection, and similarity detection techniques. Histogram equalization techniques are employed to normalize the contrast of the image. Blur detection techniques are applied to identify whether the captured image is of the desired sharpness and focus. The captured image is discarded from further processing when the desired sharpness and focus are not met. The similarity detection technique is applied to identify similar images and discard duplicate images.
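Two of the DIP techniques named above can be sketched in NumPy: histogram equalization for contrast normalization, and a coarse average-hash comparison for similarity (duplicate) detection. The function names, block size, and Hamming-distance threshold are assumptions for illustration, not details from this specification.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization: spread the observed intensities over 0-255."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    mapping = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0
    return np.clip(mapping, 0, 255)[gray.astype(np.uint8)].astype(np.uint8)

def average_hash(gray, size=8):
    """Coarse perceptual hash: one bit per block, comparing each block mean to the global mean."""
    h, w = gray.shape
    gh, gw = h // size * size, w // size * size
    blocks = gray[:gh, :gw].reshape(size, gh // size, size, gw // size)
    small = blocks.mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def is_duplicate(img_a, img_b, max_hamming=5):
    """Flag two captures as near-duplicates when their hashes differ in few bits."""
    return bin(average_hash(img_a) ^ average_hash(img_b)).count("1") <= max_hamming
```

After equalization, the image uses the full dynamic range; near-duplicate fields of view hash to almost identical bit strings and can be discarded.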
[0084] Further, the optional extraction module 108a on the smart computing device 102 is configured to identify the patches containing a plurality of types of cells and structures to extract features and areas of interest from the image. The optional extraction module 108a is configured to run on the processor 120 of the smart computing device 102. The extraction module 108b is configured to run a logic depending on the type of cells and structures of interest from the image. For example, the logic used for extracting blood cells from a peripheral blood smear is different from the logic used to extract sperm cells from a semen slide. The optional extraction module 108a and the extraction module 108b are configured to perform a plurality of steps for extracting features and attributes of the cells and structures. The plurality of steps includes identification of patches, and extraction of patches.
[0085] The step of identification of the patches includes identifying the cells and structures of interest. The identification of the patches is performed by executing custom logic based on the type of cells and structures of interest to be extracted. The optional extraction module 108a and the extraction module 108b are configured to apply a plurality of image processing techniques for identifying the patches to identify the cells and structures of interest present in each normalized image. The identification of the patches is performed by including all the true positives even when some false positives are received. The false positives are discarded in the subsequent processing steps. Further, the features and attributes are extracted from the image for classifying the cells and generating reports. The features and attributes of the image include but are not limited to density of cells in the image, size and areas of the image under a plurality of cell types, color attributes of the plurality of types of patches of interest etc. The extraction of features and attributes reduces the size of the data under consideration.
[0086] The cells and structures are extracted from the image patches, wherein the size of the image patches is based on the type of the features and the objects of interest. The optional extraction module and the extraction module are configured to subtract the background, thereby generating an image with only the extracted cells and structures of interest visible. The system is also configured to extract cells and structures on the server 112 based on the size of the captured and normalized images transferred to the server 112 and the complexity of the extraction logic. The extraction logic is selected based on the types of cells and structures identified.
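The patch identification, extraction, and background subtraction steps above can be sketched as follows. This is a minimal illustration that assumes cells appear as dark connected blobs on a light background; the intensity threshold, padding, and the simple flood-fill connected-component search stand in for the custom per-cell-type logic the specification describes.

```python
import numpy as np

def extract_patches(gray, threshold=100.0, pad=2):
    """Return (bounding_box, patch) pairs for dark blobs on a light background.

    Assumed illustrative logic: threshold, flood-fill each connected
    component, crop a padded bounding box, and blank non-cell pixels
    (background subtraction) so only the structure of interest is visible.
    """
    mask = gray < threshold            # cells assumed darker than background
    visited = np.zeros_like(mask)
    patches = []
    for y, x in zip(*np.nonzero(mask)):
        if visited[y, x]:
            continue
        stack, ys, xs = [(y, x)], [], []
        visited[y, x] = True
        while stack:                   # flood fill one connected component
            cy, cx = stack.pop()
            ys.append(cy); xs.append(cx)
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad + 1, mask.shape[0])
        x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad + 1, mask.shape[1])
        patch = gray[y0:y1, x0:x1].copy()
        patch[~mask[y0:y1, x0:x1]] = 255.0   # background subtraction
        patches.append(((y0, x0, y1, x1), patch))
    return patches
```

Each returned patch contains only the extracted cell or structure, with the surrounding background blanked out, mirroring the background-subtraction step described above.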
[0087] The extracted features and attributes from the image are utilized by the optional classification module 110a in the smart computing device 102 or the classification module in the server for classifying the plurality of types of cells and structures. The optional classification module 110a is run on the smart computing device 102 and is activated depending upon the type of cells or structures to be identified. The cells are classified to identify and label each extracted cell and structure into any one of a plurality of predefined classes determined by the content of the slide under analysis. The optional classification module 110a and the classification module are operated in two phases. The first phase is a training phase where the optional classification module and the classification module are trained with machine learning models to identify the cells belonging to the plurality of classes from the annotated images of the cells. The machine learning models are provided to understand the typical attributes of each class to differentiate between the plurality of cell types. The training of the optional classification module and the classification module mentioned herein does not take place during the process as explained in the course of the present invention; the optional classification module and the classification module are pre-trained using artificial intelligence models.
[0088] The optional classification module and the classification module are operated in the second phase after completion of an identification of the cells and the attributes of each class of cells with a sufficient and required degree of accuracy. The second phase is an execution phase where the pre-trained machine learning models are employed on a new set of data to accurately identify different types of cells and structures from the plurality of patches extracted during the extraction process.
[0089] According to one embodiment herein, the machine learning model used is a deep learning model. The deep learning models are arranged in a decision tree based structure with a plurality of nodes. Each node of the decision tree among the plurality of nodes is treated and configured as a deep learning model. The nodes at the top of the decision tree are configured to segregate the data into broad classes. Further, the lower nodes of the decision tree are configured to classify the broad classes into specific classes corresponding to each type of cells and structures to be identified. The classification is performed in a hierarchical manner to facilitate a differential classification.
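The hierarchical arrangement above can be sketched as a tree of classifiers. In this illustration each "model" is a stand-in callable rather than a trained deep learning network, and the two-level blood-smear hierarchy (broad class, then specific cell type) with its feature names is a hypothetical example, not taken from this specification.

```python
class Node:
    """One node of the classification hierarchy; each node wraps a model."""
    def __init__(self, model, children=None):
        self.model = model            # maps features -> branch label or leaf class
        self.children = children or {}

    def classify(self, features):
        """Descend the tree: refine the label until a leaf class is reached."""
        label = self.model(features)
        child = self.children.get(label)
        return label if child is None else child.classify(features)

# Hypothetical hierarchy: the top node separates broad classes, the lower
# node refines white blood cells into specific types.
wbc_node = Node(lambda f: "neutrophil" if f["lobes"] >= 2 else "lymphocyte")
root = Node(lambda f: "wbc" if f["nucleus"] else "rbc", {"wbc": wbc_node})
```

Running `root.classify` on a feature dictionary walks from the broad class at the top of the tree down to a specific cell type, which is the differential classification behaviour described above.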
[0090] When the optional classification module is not invoked by the mobile application, no classification process is performed on the smart computing device 102. In this case, a classification module 110b installed on the server 112 is configured to execute the classification process in the server. Further, the extracted features and attributes of the images, the extracted patches of cells and structures and the results obtained during classification are uploaded from the smart computing device 102 to the server 112. The server 112 is a high performance server. The server 112 comprises the extraction module 108b, the classification module 110b and a report generation module 114.
[0091] The classification module 110b in the server 112 is configured to run on a hardware processor 132. The server 112 comprises an Application Programming Interface (API) Gateway 128 and an Artificial Intelligence platform 130. The API Gateway 128 is a distributed cloud cluster. The API Gateway 128 is configured to expose APIs and to upload the captured images, the extracted features and attributes to the server 112. Further, the API Gateway 128 is configured to provide access to the final report after performing classification. The API Gateway 128 is configured to provide APIs for integration with third party lab information systems and other computer systems.
[0092] The artificial intelligence platform 130 is another distributed cloud cluster. The artificial intelligence platform 130 is configured to perform analysis on images in real time and batch mode. The artificial intelligence platform 130 is configured with a list of procedures for performing analysis of the images. All the procedures among the list of procedures are interdependent. Therefore, the artificial intelligence platform 130 is configured to ensure that a procedure is run only after receiving the outputs of the other procedures it depends on. Each procedure involves the steps of running a statistical or machine learned model on the images, creating report constructs and collating the output of different procedures for creating a final report. The report constructs include but are not limited to calculating metrics in the report, creating interactive charts of a plurality of parameters etc.
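The dependency rule above, that a procedure runs only after the procedures it depends on have produced their outputs, can be sketched with Python's standard `graphlib`. The procedure names and bodies here are hypothetical placeholders for the statistical and machine learned models the platform would actually run.

```python
from graphlib import TopologicalSorter

def run_procedures(procedures, dependencies):
    """Run interdependent procedures in dependency order.

    procedures:   name -> callable taking the dict of outputs so far
    dependencies: name -> set of procedure names it depends on
    """
    outputs = {}
    # static_order() yields each procedure only after all its dependencies.
    for name in TopologicalSorter(dependencies).static_order():
        outputs[name] = procedures[name](outputs)
    return outputs

# Hypothetical three-procedure pipeline: extraction, classification, report.
procs = {
    "extract": lambda out: [1, 2, 3],                       # stand-in patch list
    "classify": lambda out: ["rbc"] * len(out["extract"]),  # stand-in labels
    "report": lambda out: f"{len(out['classify'])} cells",  # collated summary
}
deps = {"extract": set(), "classify": {"extract"}, "report": {"classify"}}
```

Calling `run_procedures(procs, deps)` guarantees that `classify` never runs before `extract`, and `report` never runs before `classify`, regardless of the order in which the procedures are registered.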
[0093] The report generation module 114 is configured to generate charts, graphs and a report based on the analysis performed during classification of the extracted features and attributes. The report generation module 114 is run on the hardware processor 132. The report generation module 114 is configured to generate reports based on the analysis performed by the artificial intelligence platform 130. The outputs of the artificial intelligence platform 130 are collated to render the report. The API gateway 128 is configured to provide the API to display the outputs of the artificial intelligence platform 130 to render the report on a webpage or on the user interface of the smart computing device 102. The report is communicated back to the smart computing device 102 through the application to be viewed by the clinician or technician. The report comprises details including a differential count of each cell or structure of interest from the image, histograms and other charts representing key attributes of all cells/structures of interest from the images, and pertinent parameters of cells and structures of interest derived using the available attributes from the image and the outputs of machine learning models, with some form of regression analysis.
[0094] The report generation module 114 is configured to generate reports by applying a custom logic based on the cells and structures of interest identified and quantified in the analyzed images. The generated report comprises a plurality of sections including but not limited to a metrics section, charts and graphs, a visual section and suggestions. The metrics section includes metrics computed during analysis. The metrics include at least one of direct properties of individual cells and volumetric quantities. An example of direct properties of individual cells includes the count or size of each type of cells and structures of interest. Further, an example of volumetric quantities includes concentration per unit volume. The metrics are calculated either directly based on the captured images or derived using statistical models on a combination of directly calculated metrics. The volumetric quantities are generally derived using statistical models on the count and concentration of cells in each image.
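The two kinds of metrics above, direct counts and derived volumetric quantities, can be illustrated with a short sketch. The conversion from fields of view to volume is a hypothetical calibration value, not a figure given in this specification.

```python
from collections import Counter

def compute_metrics(labels, fields_of_view, volume_per_field_ul=0.02):
    """Direct counts per cell type plus a derived concentration per microlitre.

    volume_per_field_ul is an assumed calibration constant (volume of sample
    visible in one field of view), used only to illustrate the derivation.
    """
    counts = Counter(labels)                      # direct property: count per type
    total_volume = fields_of_view * volume_per_field_ul
    concentration = {cell_type: count / total_volume
                     for cell_type, count in counts.items()}
    return {"counts": dict(counts), "concentration_per_ul": concentration}
```

Here the counts are computed directly from the classified labels, while the concentration is derived from the counts and the assumed imaged volume, mirroring the direct/derived split described above.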
[0095] According to an embodiment herein, both the optional classification module 110a provided in the smart computing device 102 and the classification module 110b provided in the server 112 are trained in a training phase with machine learning models. During the classification of the cells, depending on the case, either the entire classification is performed in the optional classification module 110a of the smart computing device after the extraction of features and attributes from the image by the optional extraction module 108a, or the entire classification is performed with the classification module 110b in the server, when the extracted data is sent directly to the server 112. When the entire classification is performed in the optional classification module 110a in the smart computing device, then the classified data is sent to the server for collating all the data and generating the reports using the report generation module 114.
[0096] According to an embodiment herein, the server 112 is configured to perform the complete classification using the classification module 110b along with report generation for cases when the extracted data is directly sent to the server from the optional extraction module 108a without any classification taking place in the optional classification module 110a.
[0097] According to an embodiment herein, the classification of cells is partially performed in the optional classification module 110a and the result of the partly completed classification is sent to the server for further classification by the classification module 110b.
[0098] According to an embodiment herein, an extraction module 108b is provided in the server 112. The extraction is performed after an image processing operation is carried out with the image processing module 106 in the smart computing device 102 or the server 112. When the extraction operation is performed in the smart computing device 102, then the classification is carried out either with the optional classification module 110a in the smart computing device, or with the classification module 110b in the server, or the classification is partially performed with the optional classification module 110a and the remaining classification process is performed in the classification module 110b, depending on the case. When the extraction is carried out directly in the server 112, then the complete classification is also done in the server 112 using the classification module 110b. The report is generated after the completion of the classification of the extracted data.
[0099] According to an embodiment herein, the extraction process is partially performed in the smart computing device 102 and the remaining extraction operation is performed in the server 112.
[00100] The plurality of charts and graphs includes a set of interactive charts and graphs based on the calculated attributes of cells and structures of interest. The set of interactive charts and graphs includes histograms, line graphs, bar graphs, scatter plots etc. The set of interactive charts and graphs provides an insight into the distribution of the cell properties and attributes across the captured images. The visual/monitor section is configured to display a small patch of the captured image containing the cells and structures of interest identified during analysis. The visual section enables the user to view the identified types of cells and structures of interest visually. The cells and structures of interest are identified and grouped into a plurality of types during classification. The visual section also enables the user of the report to correct an incorrectly assigned label on any cell image. The suggestions section provides suggestions based on a holistic analysis of the metrics, charts and classification performed on the cells and structures of interest identified during analysis. For example, when a malaria parasite is observed during analysis of a blood smear, then a suggestion is provided in the report for a suspected malarial infection.
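The suggestion behaviour above, including the malaria example, can be sketched as a small set of configurable rules evaluated against the computed metrics. The rule conditions, threshold values, and suggestion wording are illustrative assumptions.

```python
# Hypothetical configurable rules: each rule pairs a condition on the
# computed metrics with the suggestion text to emit when it holds.
RULES = [
    (lambda m: m.get("malaria_parasite_count", 0) > 0,
     "Suspected malarial infection: parasites observed in blood smear."),
    (lambda m: m.get("platelet_count", float("inf")) < 150_000,
     "Low platelet count: possible thrombocytopenia."),
]

def suggestions(metrics):
    """Return the suggestion text for every rule whose condition is met."""
    return [text for condition, text in RULES if condition(metrics)]
```

A report with parasites observed and a low platelet count would carry both suggestions; a normal sample produces none.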
[00101] FIG. 2 illustrates a flowchart explaining a method for an automatic classification of the types of cells using an application installed on a smart computing device, according to one embodiment herein. The method involves a user activating the application installed in the smart computing device (202). The application is configured to direct the user to select or activate an image capture mode to operate a camera inbuilt on the smart computing device to capture an image of the specimen on the slide. The user is enabled to capture a single image or a plurality of images or videos of a specimen kept on a slide under the microscope using manual or automated methods (204). The user is enabled to capture the images or videos manually with the application in the smart computing device. Further, the user is enabled to automate the image or video capture process by adjusting the movement of the slide under the microscope using a robot through the mobile application. Further, the user is enabled to capture the image or video using voice or gesture activated commands provided through the mobile application.
[00102] Once the image or video is captured, the application is configured to initiate the preprocessing operation of the captured image or video, wherein the video is sampled into a set of images based on the “frames-per-second” captured by the camera. The preprocessing operation of the image is performed by standardizing the quality of the image (206). The quality of image is standardized by standardizing a plurality of parameters of the camera and processing the image using digital processing techniques. The plurality of parameters include auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and color temperature settings. Further, the patches of the plurality of types of cells and structures are identified from the processed image to extract the features and attributes of the cells and structures (208). The features and attributes of the image includes but are not limited to density of cells in the image, size and areas of the image under the plurality of cell types, color attributes of the plurality of types of patches of interest etc. The features and attributes extracted are used for classifying the cells and structures into a plurality of types.
[00103] The classification of the cells and structures is performed by using a plurality of pre-trained machine learning models (210). The classification is executed on the smart computing device based on the cells and structures to be identified. When the classification is not executed by the smart computing device, the classification is executed from the server. A report is generated based on the extracted features, attributes and classification of the cells and structures in the server using the report generation module (212).
[00104] FIG. 3 illustrates a flowchart explaining a method for extraction and analysis of samples under a microscope, in accordance with one embodiment herein. The method includes capturing single or multiple images or videos of the sample kept under the microscope using an application installed in the smart computing device (302). A user is enabled to activate the application for directing the user to select or activate an image or video capture mode to operate an inbuilt camera on the smart computing device to capture an image of the sample on a slide. The application is configured to capture the image or video in a manual mode or an automated mode.
[00105] The captured images or videos, wherein the video is sampled into a set of images based on the frames-per-second captured by the camera, are processed for extracting cells and structures of interest and calculating specific attributes (304). The step of processing the captured image involves a plurality of steps. A first step includes assessing the quality of the captured images. The quality of the captured images is assessed by comparing each captured image against a list of parameters. The list of parameters includes blur, sharpness and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, color profile and tone of image. The captured images that fail the quality assessment are not further processed.
[00106] A second step includes normalizing the captured image. During the normalization process, each captured image is pre-processed to ensure that all the captured images have similar properties such as dynamic range, color, brightness etc. A third step includes identifying cells and structures of interest. The cells and structures of interest are identified using image processing techniques. In the third step, custom logic is executed based on the type of cells and structures of interest to be extracted. A fourth step involves extracting smaller image patches comprising all cells and structures of interest and multiple features and attributes of the cells and structures. Further, a background subtraction is performed so that only the extracted cell or structure of interest remains visible in the image patch.
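The dynamic-range part of the normalization step can be sketched as a linear rescaling; this is one simple choice among the normalization techniques the specification leaves open (histogram equalization is another it names later):

```python
import numpy as np

def normalize_dynamic_range(img, out_min=0.0, out_max=255.0):
    """Linearly rescale pixel intensities so every captured image
    shares the same dynamic range before feature extraction."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: nothing to stretch, map to out_min
        return np.full_like(img, out_min)
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min
```

After this step, downstream feature extraction sees images with a common intensity range regardless of the capture conditions.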
[00107] Further, the extracted cells and structures are analyzed to identify and classify the cells and structures into pre-defined subclasses based on the type of the sample (306). The extracted cells and structures are analyzed using a hierarchy of artificial intelligence models to identify and classify the cells and structures into multiple pre-defined subclasses. For example, the pre-defined subclasses for blood slides include but are not limited to red blood cells, white blood cells, platelets etc.
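The hierarchy of models described above can be sketched as a two-level dispatch: a coarse model picks a class, and a class-specific model refines it when one exists. The rule stubs below are hypothetical stand-ins for the pre-trained artificial intelligence models of the disclosure:

```python
# Hypothetical two-level hierarchy for blood-slide patches.
def classify_hierarchical(features, level_one, level_two_by_class):
    """Run the level-one model, then refine the result with the
    matching level-two model when one exists for that class."""
    coarse = level_one(features)
    refine = level_two_by_class.get(coarse)
    return refine(features) if refine else coarse
```

For blood slides, the first level might separate red blood cells, white blood cells and platelets, with a second level subtyping only the white blood cells.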
[00108] Statistical parameters are calculated to estimate abnormal conditions in the sample based on the output of the classification and the extracted features and attributes of the cells and structures (308). A plurality of statistical models is employed to calculate a list of statistical parameters for creating a report. Further, the abnormal conditions in the samples are identified for generating suggestions based on configurable rules.
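The combination of per-class statistics and configurable rules can be sketched as follows; the summary fields and the example threshold are illustrative assumptions, not values taken from the disclosure:

```python
from collections import Counter

def summarize(labels, rules):
    """Count cells per class and apply configurable rules to flag
    abnormal conditions. `rules` maps a flag name to a predicate
    over the per-class fractions."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1  # avoid division by zero
    fractions = {c: n / total for c, n in counts.items()}
    flags = [name for name, rule in rules.items() if rule(fractions)]
    return {"counts": dict(counts), "fractions": fractions, "flags": flags}
```

Because the rules are plain predicates, a laboratory could reconfigure the flagged conditions without retraining any model.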
[00109] The report is generated and published on a webpage or a user interface of the smart computing device (310). The report includes a plurality of sections including but not limited to a metrics section, charts and graphs, a visual section and suggestions. The user is enabled to review the report using the smart phone application. Further, the report is viewed remotely on a web browser or handheld device by sharing the report through email or uploading the report to the cloud.
[00110] The embodiments herein envisage a system and method for an automatic classification of the plurality of types of cells using an application installed on a smart computing device. The system is configured to automatically identify and classify the cells and complex structures of interest on a slide under a microscope, thereby making the analysis process fast and efficient. The system is implemented by installing the application on the smart computing device. Therefore, the system is cost effective and is capable of being used in any laboratory. Further, the system is configured to execute most of the steps of analysis on the smart computing device rather than the server. Therefore, the need for a server with a high processing capacity is eliminated and the processing load on the server is largely reduced.
[00111] The image or video acquisition process is performed automatically using a robot controlled by the application. The extraction process is performed on the smart computing device and care is taken not to miss any true positive. Further, the classification is performed using machine learning models including deep learning techniques. The deep learning models have certain technical advancements over other traditional machine learning techniques, thereby enabling near-human accuracy levels in the image identification processes.
[00112] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.
[00113] Although the embodiments herein are described with various specific embodiments, it will be obvious for a person skilled in the art to practice the invention with modifications. However, all such modifications are deemed to be within the scope of the claims.
[00114] It is also to be understood that the following claims are intended to cover all of the generic and specific features of the embodiments described herein and all the statements of the scope of the embodiments, which as a matter of language might be said to fall therebetween.

CLAIMS:

1. A system for extraction and analysis of cells and structures in a sample, the system comprising:
a smart computing device configured to extract features and attributes of the cells and the structures of interest in the sample observed through a microscope, and wherein the smart computing device is configured to extract the features and the attributes from a plurality of images or videos of samples captured and processed using an application; and
a server configured to analyze the features and the attributes of the cells and the structures of interest extracted for generating reports, and wherein the server is configured to analyze the features and the attributes of the cells and the structures of interest by executing pre-trained machine learning models for classifying the cells and the structures into a plurality of pre-defined classes.

2. The system according to claim 1, wherein the smart computing device is installed with an application for capturing the plurality of images or videos to digitize the sample observed through the microscope.

3. The system according to claim 1, wherein the smart computing device further comprises:
an image acquisition module run on a processor in the smart computing device and configured to capture the plurality of images or videos of the sample observed through a microscope, and wherein the plurality of images or videos are captured using an in-built camera in the smart computing device;
an image processing module run on the processor in the smart computing device and configured to process the plurality of captured images or videos by performing normalization and image quality assessment;
an optional extraction module run on the processor in the smart computing device and configured to extract the features and the attributes of cells and structures of interest in the sample using a logic based on the type of the cells and the structures of interest; and
an optional classification module run on the processor in the smart computing device and configured to classify the plurality of the cells and the structures into pre-defined classes.

4. The system according to claim 3, wherein the videos captured with the image acquisition module are sampled into a set of images based on the frames-per-second captured by the in-built camera of the smart computing device.

5. The system according to claim 1, wherein the features and the attributes extracted by the smart computing device are selected from a group comprising density of cells in the image, size and areas of the image under a plurality of cell types, and color attributes of the plurality of types of patches of interest.

6. The system according to claim 1, wherein the smart computing device is selected from a group consisting of smart phones and tablet devices.

7. The system according to claim 1, wherein the smart computing device is installed with an application for activating the image acquisition module, the image processing module, the optional extraction module and the optional classification module.

8. The system according to claim 1, wherein the smart computing device is configured to classify the cells and the structures based on the type of the cells and structures of interest.

9. The system according to claim 3, wherein the image processing module is configured to normalize the captured images and assess an image quality of the captured images by performing the steps of:
standardizing a plurality of parameters of the camera for ensuring that consecutive images captured by the camera are of same quality, and wherein the plurality of parameters includes auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and colour temperature settings;
adjusting a plurality of characteristics of the image to be within a permissible range for ensuring desired quality of the plurality of images captured, and wherein the plurality of characteristics includes blur, sharpness, and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, colour profile and tone of image; and
applying a plurality of Digital Image Processing (DIP) techniques for normalizing the color scheme and contrast of the captured image, and wherein the plurality of DIP techniques includes histogram equalization, blur detection, and similarity detection techniques.

10. The system according to claim 1, wherein the smart computing device is configured to upload the extracted features and attributes of the cells and the structures, extracted patches of cells and structures and the classification of the cells and the structures to the server.

11. The system according to claim 1, wherein the server further comprises an Application Programming Interface (API) Gateway configured for providing APIs to receive the uploads from the smart computing device.

12. The system according to claim 1, wherein the server further comprises:
a classification module run on a hardware processor in a computer system and configured to classify the extracted cells and structures into predefined classes using an artificial intelligence platform, and wherein the artificial intelligence platform is configured to analyze the images in real time and batch mode using a list of procedures; and
a report generation module run on a hardware processor in a computer system and configured to generate the report based on the analysis during classification of extracted features and attributes using a custom logic.

13. The system according to claim 1, wherein the server is configured to publish the generated report on a webpage or a user interface of the smart computing device using APIs in the API gateway.

14. The system according to claim 1, wherein the report generated by the server comprises a plurality of sections, and wherein the plurality of sections includes metrics section, charts and graphs, visual section and suggestions.

15. A computer implemented method comprising instructions stored on a non-transitory computer readable storage medium and executed on a smart computing device provided with a hardware processor and memory for extraction and analysis of cells and structures in a sample, the method comprising:
capturing a plurality of images or videos of the sample observed through a microscope using an application installed in a smart computing device;
processing the plurality of captured images or videos by performing normalization and image quality assessment using the application;
extracting features and attributes of cells and structures of interest in the sample and image patches containing extracted cells and structures of interest using the application in the smart computing device, and wherein the extraction is performed by executing an extraction logic based on the type of the cells and the structures of interest;
analyzing the extracted cells and structures to identify and classify the cells and structures into pre-defined classes by running a hierarchy of artificial intelligence models in an artificial intelligence platform in a server;
calculating statistical parameters and suggestions of abnormal conditions in the sample based on the output of classification and the extracted features and attributes of the cells and structures; and
generating the report by collating the statistical parameters and suggestions of abnormal conditions using a custom logic in the server.

16. The method according to claim 15, wherein the extraction of the features and attributes is performed in the server based on the size of the captured and normalized images transferred to the server and the complexity of the extraction logic.

17. The method according to claim 15, wherein the report generated by the server comprises a plurality of sections, and wherein the plurality of sections includes a metrics section, charts and graphs, a visual section and suggestions.

18. The method according to claim 15, further comprising enabling a user to review the generated report on a webpage or a user interface of the smart computing device.

19. The method according to claim 15, further comprising sampling the captured videos into a set of images based on the frames-per-second captured by the in-built camera of the smart computing device.

Documents

Application Documents

# Name Date
1 Power of Attorney [23-02-2016(online)].pdf 2016-02-23
2 Form 5 [23-02-2016(online)].pdf 2016-02-23
4 Drawing [23-02-2016(online)].pdf 2016-02-23
5 Description(Provisional) [23-02-2016(online)].pdf 2016-02-23
6 OTHERS [28-09-2016(online)].pdf 2016-09-28
7 Form-2(Online).pdf 2016-09-28
8 Form 18 [28-09-2016(online)].pdf 2016-09-28
9 Drawing [28-09-2016(online)].pdf 2016-09-28
10 Description(Complete) [28-09-2016(online)].pdf 2016-09-28
11 Form-18(Online).pdf 2016-10-03
12 CERTIFIED COPIES TRANSMISSION TO IB [12-10-2016(online)].pdf 2016-10-12
13 201641006272-FORM 3 [16-08-2017(online)].pdf 2017-08-16
14 201641006272-FER.pdf 2020-05-01

Search Strategy

1 SearchStrategy_201641006272E_29-04-2020.pdf