Abstract: METHOD AND SYSTEM FOR PERFORMING HIERARCHICAL CLASSIFICATION OF OBJECTS IN MICROSCOPIC IMAGE

ABSTRACT

Embodiments of the present disclosure disclose a method and a system for performing hierarchical classification of objects in a microscopic image comprising objects. Initially, a root classification is performed on the received microscopic image for identifying each of the objects to be associated with one of a plurality of object classes. The root classification is trained using first properties selected based on features of the object classes. Upon the root classification, sub-classes are selected based on a first probability score associated with each of the object classes. A sub-classification is initiated on the microscopic image for each of the sub-classes. Initiating the sub-classification includes performing the sub-classification to identify each object associated with the sub-classes to be associated with one of the sub-classes. The sub-classification is trained using second properties selected based on features of the corresponding sub-class. Further, new sub-classes are selected based on a second probability score associated with each of the sub-classes. The new sub-classes are updated as the sub-classes for initiating the sub-classification.

Figure 3
Claims:

We claim:
1. A method for performing hierarchical classification of objects in a microscopic image of a sample, comprising:
receiving, by a hierarchical classification system (101), a microscopic image (208) of a sample comprising a plurality of objects;
performing, by the hierarchical classification system (101), a root classification in a hierarchical classification on the microscopic image (208) to identify each of the plurality of objects to be associated with one of a plurality of object classes (209), wherein the root classification is trained using one or more first properties (210) selected based on one or more features of the plurality of object classes (209);
selecting, by the hierarchical classification system (101), one or more sub-classes (211) based on a first probability score (212) associated with each of the plurality of object classes (209); and
initiating, by the hierarchical classification system (101), a sub-classification in the hierarchical classification for each of the one or more sub-classes (211), on the microscopic image (208), wherein initiating the sub-classification comprises:
performing, by the hierarchical classification system (101), the sub-classification to identify each object associated with the one or more sub-classes (211) to be associated with one of the one or more sub-classes (211), wherein the sub-classification is trained using one or more second properties (213) selected based on one or more features of a corresponding sub-class from the one or more sub-classes (211); and
selecting, by the hierarchical classification system (101), one or more new sub-classes (214) based on a second probability score (215) associated with each of the one or more sub-classes (211), wherein the one or more new sub-classes (214) are updated as the one or more sub-classes (211) for initiating the sub-classification.
2. The method as claimed in claim 1 further comprising processing of the microscopic image (208) for performing at least one of the root classification based on the one or more first properties (210) and the sub-classification based on the one or more second properties (213).
3. The method as claimed in claim 1, wherein the one or more first properties (210) and the one or more second properties (213) comprise at least one of a channel parameter, a texture parameter, a focus region parameter, and a size parameter associated with the plurality of objects, and a hyper-parameter.
4. The method as claimed in claim 1, wherein the plurality of object classes (209) comprises classes associated with the plurality of objects and a reject class.
5. The method as claimed in claim 4, wherein an object from the plurality of objects is identified to be associated with the reject class when one or more features of the object are not associated with one or more predefined features.
6. The method as claimed in claim 4, wherein an object is identified to be associated with the reject class when the object is not associated with any other of the plurality of object classes (209).
7. The method as claimed in claim 1, wherein the plurality of objects is observed at a relative natural scale in the microscopic image (208).
8. The method as claimed in claim 1, wherein selecting the one or more sub-classes (211) and the one or more new sub-classes (214) comprises:
identifying one or more first classes associated with values of the first probability score (212) greater than a predefined threshold value and one or more second classes associated with values of the second probability score (215) greater than the predefined threshold value; and
selecting the one or more sub-classes (211) based on one or more features of the one or more first classes and the one or more new sub-classes (214) based on one or more features of the one or more second classes.
9. The method as claimed in claim 1, wherein selecting the one or more sub-classes (211) and the one or more new sub-classes (214) comprises:
identifying presence of one or more first confusion sets based on a confusion matrix associated with the root classification and the first probability score (212), and presence of one or more second confusion sets based on a confusion matrix associated with the sub-classification and the second probability score (215); and
selecting one or more classes associated with the one or more first confusion sets to be the one or more sub-classes (211) and one or more classes associated with the one or more second confusion sets to be the one or more new sub-classes (214).
10. The method as claimed in claim 1, wherein the first probability score (212) indicates probability of the plurality of objects to be associated with corresponding class from the plurality of object classes (209) and the second probability score (215) indicates probability of the plurality of objects to be associated with corresponding sub-class from the one or more sub-classes (211).
11. A hierarchical classification system for performing hierarchical classification of objects in a microscopic image (208) of a sample, comprising:
a processor (104); and
a memory (107) communicatively coupled to the processor (104), wherein the memory (107) stores processor-executable instructions, which, on execution, cause the processor (104) to:
receive a microscopic image (208) of a sample comprising a plurality of objects;
perform a root classification in a hierarchical classification on the microscopic image (208) to identify each of the plurality of objects to be associated with one of a plurality of object classes (209), wherein the root classification is trained using one or more first properties (210) selected based on one or more features of the plurality of object classes (209);
select one or more sub-classes (211) based on a first probability score (212) associated with each of the plurality of object classes (209); and
initiate a sub-classification in the hierarchical classification for each of the one or more sub-classes (211), on the microscopic image (208), wherein initiating the sub-classification comprises:
performing the sub-classification to identify each object associated with the one or more sub-classes (211) to be associated with one of the one or more sub-classes (211), wherein the sub-classification is trained using one or more second properties (213) selected based on one or more features of a corresponding sub-class from the one or more sub-classes (211); and
selecting one or more new sub-classes (214) based on a second probability score (215) associated with each of the one or more sub-classes (211), wherein the one or more new sub-classes (214) are updated as the one or more sub-classes (211) for initiating the sub-classification.
12. The hierarchical classification system as claimed in claim 11 further comprises the processor (104) configured to process the microscopic image (208) for performing at least one of the root classification based on the one or more first properties (210) and the sub-classification based on the one or more second properties (213).
13. The hierarchical classification system as claimed in claim 11, wherein the one or more first properties (210) and the one or more second properties (213) comprise at least one of a channel parameter, a texture parameter, a focus region parameter, and a size parameter associated with the plurality of objects, and a hyper-parameter.
14. The hierarchical classification system as claimed in claim 11, wherein the plurality of object classes (209) comprises classes associated with the plurality of objects and a reject class.
15. The hierarchical classification system as claimed in claim 14, wherein an object from the plurality of objects is identified to be associated with the reject class when one or more features of the object are not associated with one or more predefined features.
16. The hierarchical classification system as claimed in claim 14, wherein an object is identified to be associated with the reject class when the object is not associated with any other of the plurality of object classes (209).
17. The hierarchical classification system as claimed in claim 11, wherein the plurality of objects is observed at a relative natural scale in the microscopic image (208).
18. The hierarchical classification system as claimed in claim 11, wherein selecting the one or more sub-classes (211) and the one or more new sub-classes (214) comprises:
identifying one or more first classes associated with values of the first probability score (212) greater than a predefined threshold value and one or more second classes associated with values of the second probability score (215) greater than the predefined threshold value; and
selecting the one or more sub-classes (211) based on one or more features of the one or more first classes and the one or more new sub-classes (214) based on one or more features of the one or more second classes.
19. The hierarchical classification system as claimed in claim 11, wherein selecting the one or more sub-classes (211) and the one or more new sub-classes (214) comprises:
identifying presence of one or more first confusion sets based on a confusion matrix associated with the root classification and the first probability score (212), and presence of one or more second confusion sets based on a confusion matrix associated with the sub-classification and the second probability score (215); and
selecting one or more classes associated with the one or more first confusion sets to be the one or more sub-classes (211) and one or more classes associated with the one or more second confusion sets to be the one or more new sub-classes (214).
20. The hierarchical classification system as claimed in claim 11, wherein the first probability score (212) indicates probability of the plurality of objects to be associated with corresponding class from the plurality of object classes (209) and the second probability score (215) indicates probability of the plurality of objects to be associated with corresponding sub-class from the one or more sub-classes (211).
Description:

TECHNICAL FIELD
The present subject matter is related, in general, to the field of classifiers, and more particularly, but not exclusively, to a system and a method for hierarchical classification of objects in a microscopic image of a sample.
BACKGROUND
A classification framework or a classifier may be implemented on an image to classify one or more objects that are spread across the image. The classifier may be trained to identify the relevant class associated with each of the one or more objects based on features of the one or more objects. In some scenarios, there can be significant variability in features both among objects of different classes and among objects of the same class. Conversely, two classes may be associated with similar features. Further, there may be variability in the size of objects associated with different classes in the image. Such wide diversity with respect to features and sizes may increase the complexity of the classification. In some scenarios, a large number of objects associated with non-class entities, or a reject class, may be present in the image, and with said diversities, the classifier may not be able to provide an accurate output. The classifier may falsely predict objects associated with the reject class to be associated with some other class.
Also, the image considered for classification may not be as desired. The region of interest in the image may not completely cover the objects to be classified, or the image may not be in a desired format for performing the classification. In some cases, undesired entities may be present in the image and may lead to an undesired classification output, because the classifier is not trained to classify such undesired entities. Also, when an object of a particular class is blurry, broken, or not clear in the image, the classifier may falsely predict such an object to be associated with the reject class.
Existing classification systems may disclose multi-order classifiers for performing multiple classifications to classify one or more objects in an image. The classifier in each order may be trained based on the desired output. However, such classification systems may be restricted to a predefined number of orders in the classifier. Also, the classification may still output false class predictions when the image is not as desired for the classification. Further, such classification systems may not be configured to prevent propagation of errors from previous classifications of the multi-order classifiers.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY
In an embodiment, the present disclosure relates to a method for performing hierarchical classification of objects in a microscopic image of a sample. Initially, the microscopic image comprising a plurality of objects is received and a root classification in the hierarchical classification is performed on the microscopic image. By the root classification, each of the plurality of objects is identified to be associated with one of a plurality of object classes. The root classification is trained using one or more first properties selected based on one or more features of the plurality of object classes. Upon the root classification, one or more sub-classes are selected based on a first probability score associated with each of the plurality of object classes. A sub-classification in the hierarchical classification is initiated on the microscopic image for each of the one or more sub-classes. Initiating the sub-classification includes performing the sub-classification to identify each object associated with the one or more sub-classes to be associated with one of the one or more sub-classes. The sub-classification is trained using one or more second properties selected based on one or more features of a corresponding sub-class from the one or more sub-classes. Further, one or more new sub-classes are selected based on a second probability score associated with each of the one or more sub-classes. The one or more new sub-classes are updated as the one or more sub-classes for initiating the sub-classification.
In an embodiment, the present disclosure relates to a hierarchical classification system for performing hierarchical classification of objects in a microscopic image of a sample. The hierarchical classification system comprises a processor and a memory communicatively coupled to the processor. The memory stores processor-executable instructions which, on execution, cause the processor to perform the hierarchical classification. Initially, the microscopic image comprising a plurality of objects is received and a root classification in the hierarchical classification is performed on the microscopic image. By the root classification, each of the plurality of objects is identified to be associated with one of a plurality of object classes. The root classification is trained using one or more first properties selected based on one or more features of the plurality of object classes. Upon the root classification, one or more sub-classes are selected based on a first probability score associated with each of the plurality of object classes. A sub-classification in the hierarchical classification is initiated on the microscopic image for each of the one or more sub-classes. Initiating the sub-classification includes performing the sub-classification to identify each object associated with the one or more sub-classes to be associated with one of the one or more sub-classes. The sub-classification is trained using one or more second properties selected based on one or more features of a corresponding sub-class from the one or more sub-classes. Further, one or more new sub-classes are selected based on a second probability score associated with each of the one or more sub-classes. The one or more new sub-classes are updated as the one or more sub-classes for initiating the sub-classification.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
Figure 1 illustrates an exemplary environment for performing hierarchical classification of objects in a microscopic image of a sample, in accordance with some embodiments of the present disclosure;
Figure 2 shows a detailed block diagram of a hierarchical classification system for performing hierarchical classification of objects in a microscopic image of a sample, in accordance with some embodiments of the present disclosure;
Figure 3 illustrates a flowchart showing an exemplary method for performing hierarchical classification of objects in a microscopic image of a sample, in accordance with some embodiments of present disclosure;
Figures 4a and 4b illustrate flowcharts showing exemplary methods for selecting one or more sub-classes, in accordance with some embodiments of the present disclosure;
Figures 5a and 5b illustrate flowcharts showing exemplary methods for selecting one or more new sub-classes, in accordance with some embodiments of the present disclosure;
Figure 6a shows an exemplary microscopic image of a sample, in accordance with some embodiments of present disclosure;
Figures 6b and 6c show flowcharts illustrating exemplary methods for hierarchical classification of objects in a microscopic image of a sample, in accordance with some embodiments of the present disclosure; and
Figure 7 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether such computer or processor is explicitly shown.
DETAILED DESCRIPTION
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
The terms “includes”, “including”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that includes a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “includes… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
The present disclosure provides an efficient method for classifying objects in a microscopic image of a sample. The present disclosure implements a hierarchical classification methodology for the classification. The hierarchical classification comprises a root classification and one or more sub-classifications to output an accurate classification of the objects. Firstly, the root classification of the objects is performed to identify each of the objects to be associated with a class from a plurality of object classes. Based on the probability score associated with each of the object classes, one or more sub-classifications are performed to identify each of the objects to be associated with a class from one or more sub-classes. Further, based on the probability scores of the sub-classes, additional sub-classifications are performed. The present disclosure provides for dynamically performing the classification until no objects remain for sub-classification.
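By way of illustration only, the overall control flow described above may be summarized in the following minimal Python sketch. The helper callables `classify` and `select_sub_classes` are hypothetical placeholders for the trained classifiers and the selection logic described in the embodiments below; the sketch is not the claimed implementation.

```python
from typing import Callable, List, Sequence, Tuple

def hierarchical_classification(
    image,
    classify: Callable[[object, Sequence[str]], Tuple[List[str], dict]],
    select_sub_classes: Callable[[dict], List[List[str]]],
    root_classes: Sequence[str],
) -> List[str]:
    # Root classification over all object classes, including the reject class.
    labels, scores = classify(image, root_classes)
    # The first probability scores decide which groups of sub-classes
    # need a dedicated sub-classification.
    pending = list(select_sub_classes(scores))
    # Sub-classification is initiated until no new sub-classes are selected.
    while pending:
        sub_classes = pending.pop(0)
        # A full implementation would refine only the objects routed to
        # this sub-classifier; the sketch refines all labels for brevity.
        labels, scores = classify(image, sub_classes)
        # New sub-classes are updated as the sub-classes of a later round.
        pending.extend(select_sub_classes(scores))
    return labels
```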
Figure 1 illustrates an exemplary environment 100 comprising a hierarchical classification system 101, a communication network 102 and a microscopic system 103 for performing hierarchical classification of objects in a microscopic image. The hierarchical classification system 101 may be configured to perform the hierarchical classification by performing the steps disclosed in the present disclosure. The microscopic image on which the hierarchical classification is to be performed may be received from the microscopic system 103. The microscopic image may be an image of a sample comprising a plurality of objects, in which multiple regions are to be classified for presence of the plurality of objects.
In an embodiment, the sample may be associated with, but is not limited to, urine, blood, semen, tissue, smear, body fluid, biological fluid, cells, biopsy and so on, obtained from a subject. The subject may be a human, an animal, or a plant. In an embodiment, the plurality of objects in the sample may include, but are not limited to, at least one of Red Blood Cell (RBC), White Blood Cell (WBC), RBC clump, WBC clump, epithelial cell (also referred to as epithelial), cast, bacteria, yeast, parasite, mucus, sperm, crystal, artefact, malignant cell, rejects and so on. The microscopic system 103 may be any system which is configured to retrieve the microscopic image of the sample and provide the microscopic image to the hierarchical classification system 101. In an embodiment, the microscopic system 103 may comprise a microscope, a stage and an image capturing unit for retrieving the microscopic image. The stage may be configured to hold the sample. The microscope may be configured to focus on a region of interest in the sample. The image capturing unit may be configured to capture the microscopic image of the sample. The hierarchical classification system 101 may communicate with the microscopic system 103 via the communication network 102 as shown in the figure. In an embodiment, the communication network 102 may include, without limitation, a direct interconnection, Local Area Network (LAN), Wide Area Network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, and the like. In an embodiment, the hierarchical classification system 101 may be an integral part of the microscopic system 103.
Further, the hierarchical classification system 101 may include a processor 104, I/O interface 105, one or more modules 106 and a memory 107. In some embodiments, the memory 107 may be communicatively coupled to the processor 104. The memory 107 stores processor executable instructions, which, on execution, may cause the hierarchical classification system 101 to perform the hierarchical classification, as disclosed in the present disclosure. The hierarchical classification system 101 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a smartphone, a tablet, e-book readers, a server, a network server, and the like.
Initially, the hierarchical classification system 101 receives the microscopic image comprising the plurality of objects from the microscopic system 103. In an embodiment, the plurality of objects in the microscopic image are observed at a relative natural scale. The hierarchical classification system 101 performs the root classification in the hierarchical classification on the microscopic image. By the root classification, each of the plurality of objects may be identified to be associated with the plurality of object classes, with corresponding probabilities. In an embodiment, a region or a patch of the microscopic image may be retrieved by the hierarchical classification system 101 for the classification. The root classification may be performed on the region or the patch. The root classification is trained using one or more first properties selected based on one or more features of the plurality of object classes. The one or more first properties may include, but are not limited to, a channel parameter, a texture parameter, a focus region parameter and a size parameter associated with the plurality of objects, and a hyper-parameter. Said hyper-parameter may be associated with a classifier configured to perform the root classification.
In an embodiment, the plurality of object classes may include classes associated with the plurality of objects and a reject class. The classes associated with the plurality of objects may be based on the sample. For example, consider the sample to be a urine sample, the classes associated with the plurality of objects may include, but are not limited to, RBC class, WBC class, epithelial class, cast class, artefact class and so on. The RBC in the urine sample are associated with the RBC class, the WBC in the urine sample are associated with the WBC class and so on. In an embodiment, each of said classes may be associated with one or more predefined features relating to the object. For example, one or more features of the RBC may be related to the RBC class, one or more features of the WBC may be related to the WBC class and so on.
By the root classification, each object from the plurality of objects in the microscopic image of the urine sample may be identified to be associated with one of said classes or the reject class, with a certain probability score. In an embodiment, an object from the plurality of objects may be identified to be associated with the reject class when one or more features of the object are not associated with the one or more predefined features. In an embodiment, the object may be identified to be associated with the reject class when the object is not associated with any other of the plurality of object classes. In an embodiment, one or more features may be defined for the reject class to identify the object to be associated with the reject class. One or more techniques known to a person skilled in the art may be implemented for identifying the object to be associated with the reject class.
Further, the hierarchical classification system 101 may be configured to process the microscopic image based on the one or more first properties, for the root classification.
Upon the root classification, the hierarchical classification system 101 may select one or more sub-classes based on a first probability score associated with each of the plurality of object classes. In an embodiment, the first probability score may indicate probability of the plurality of objects to be associated with corresponding class from the plurality of object classes. In an embodiment, the first probability score may be computed for each of the plurality of object classes upon the root classification. One or more techniques, known to a person skilled in the art may be implemented for computing the first probability score.
In an embodiment, selecting the one or more sub-classes includes identifying one or more first classes associated with values of the first probability score greater than a predefined threshold value. Further, based on the one or more features of the one or more first classes, the one or more sub-classes may be selected. In an embodiment, selecting the one or more sub-classes includes identifying the presence of one or more first confusion sets based on a confusion matrix associated with the root classification and the first probability score. Further, one or more classes associated with the one or more first confusion sets may be selected to be the one or more sub-classes. In an embodiment, the one or more sub-classes may be one of the plurality of object classes or may be a new class associated with a corresponding object class.
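By way of example only, the threshold-based selection may be sketched as follows in Python; the class names and the threshold value of 0.2 are illustrative assumptions, not values prescribed by the disclosure.

```python
import numpy as np

def select_classes_above_threshold(scores: np.ndarray,
                                   class_names: list,
                                   threshold: float = 0.2) -> list:
    """Return the classes whose probability score exceeds the threshold."""
    return [name for name, score in zip(class_names, scores)
            if score > threshold]

# Example: the epithelial and reject scores exceed the threshold, so a
# sub-classification over those classes would be initiated.
scores = np.array([0.05, 0.40, 0.35, 0.20])  # RBC, epithelial, reject, cast
print(select_classes_above_threshold(
    scores, ["RBC", "epithelial", "reject", "cast"]))
# ['epithelial', 'reject']
```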
Upon selection of the one or more sub-classes, the hierarchical classification system 101 may initiate the sub-classification in the hierarchical classification on the microscopic image. The sub-classification may be initiated for each of the one or more sub-classes. Initiating the sub-classification includes performing the sub-classification to identify each object associated with the one or more sub-classes to be associated with one of the one or more sub-classes, with corresponding probabilities. The sub-classification is trained using one or more second properties selected based on one or more features of the corresponding sub-class from the one or more sub-classes. In an embodiment, the one or more second properties may include, but are not limited to, a channel parameter, a texture parameter, a focus region parameter and a size parameter associated with the plurality of objects, and hyper-parameters. Said hyper-parameters are associated with a classifier configured to perform the sub-classification.
Upon the sub-classification, the hierarchical classification system 101 may select one or more new sub-classes based on a second probability score associated with each of the one or more sub-classes. In an embodiment, the second probability score may be computed for each of the one or more sub-classes upon the sub-classification. In an embodiment, the second probability score may indicate probability of the plurality of objects to be associated with corresponding sub-class from the one or more sub-classes. One or more techniques, known to a person skilled in the art may be implemented for computing the second probability score.
In an embodiment, selecting the one or more new sub-classes includes identifying one or more second classes associated with values of the second probability score greater than the predefined threshold value. Further, based on the one or more features of the one or more second classes, the one or more new sub-classes may be selected. In an embodiment, selecting the one or more new sub-classes includes identifying the presence of one or more second confusion sets based on a confusion matrix associated with the sub-classification and the second probability score. Further, one or more classes associated with the one or more second confusion sets may be selected to be the one or more new sub-classes. In an embodiment, the one or more new sub-classes may be one of the one or more sub-classes or may be a new class associated with a corresponding sub-class.
Further, the hierarchical classification system 101 may update the one or more new sub-classes as the one or more sub-classes for initiating the sub-classification. As previously described, the steps of performing the sub-classification on the one or more sub-classes and selecting the one or more new sub-classes may be repeated until no new sub-classes are selected.
In an embodiment, the hierarchical classification system 101 may be configured to process the microscopic image based on the one or more second properties, for performing the sub-classification for each of the one or more sub-classes. In an embodiment, processing of the microscopic image may include retrieving a region or a patch from the microscopic image based on the one or more second properties.
Figure 2 shows a detailed block diagram of the hierarchical classification system 101 for performing hierarchical classification of the microscopic image in accordance with some embodiments of the present disclosure.
Data 207 in the memory 107 and the one or more modules 106 of the hierarchical classification system 101 may be described herein in detail.
In one implementation, the one or more modules 106 may include, but are not limited to, an image receive module 201, a root classifier module 202, a sub-class selection module 203, a sub-classifier module 204, an image processor module 205, and one or more other modules 206, associated with the hierarchical classification system 101.
In an embodiment, the data 207 in the memory 107 may comprise microscopic image data 208 (also referred to as microscopic image 208), object class data 209 (also referred to as plurality of object classes 209), first property data 210 (also referred to as one or more first properties 210), sub-class data 211 (also referred to as one or more sub-classes 211), first probability score data 212 (also referred to as first probability score 212), second property data 213 (also referred to as one or more second properties 213), new sub-class data 214 (also referred to as one or more new sub-classes 214), second probability score data 215 (also referred to as second probability score 215), predefined threshold value 216, confusion matrix data 217, confusion set data 218, and other data 219 associated with the hierarchical classification system 101.
In an embodiment, the data 207 in the memory 107 may be processed by the one or more modules 106 of the hierarchical classification system 101. As used herein, the term module refers to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Arrays (FPGA), Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality. The one or more modules 106 when configured with the functionality defined in the present disclosure may result in a novel hardware.
The image receive module 201 may receive the microscopic image 208 comprising the plurality of objects from the microscopic system 103. In an embodiment, the image receive module 201 may be configured to receive a region or a patch of the microscopic image 208. In an embodiment, the size of the region or the patch may be selected based on the one or more first properties 210. In an embodiment, the plurality of objects in the microscopic image 208 are observed at the relative natural scale. In an embodiment, the plurality of objects may be approximately at their relative natural scales. This may occur when most of the plurality of objects are present at the same distance from a view. The view may be associated with the image capturing unit which captures the microscopic image 208, or may be associated with a user who is viewing the microscopic image 208. For example, objects in an image observed under the microscope may be at their natural scales, since the plurality of objects in the image are approximately on a level plane and their relative sizes are sufficiently preserved.
Figure 6a shows an exemplary microscopic image 600a of the sample, in accordance with some embodiments of the present disclosure. The microscopic image 600a may comprise a plurality of objects, including rejects. The plurality of objects may tend to have large variations in size across different classes. For example, in a urine sample, epithelial cells and cast cells may have an area a hundred times larger than that of a bacterium in the urine sample. Also, the plurality of objects may vary based on one or more features including, but not limited to, size, colour, texture, colour channels, colour space and so on. If the boundaries of the epithelial cells and the cast cells are not visible in the microscopic image, the classifier may be confused and may not accurately identify the class associated with the epithelial cells and the cast cells. Similarly, considering the sample to be a blood sample, objects such as immature granulocytes and blast cells in the blood sample may be confused during classification, due to the structure of a small round nucleus surrounded by cytoplasm with granular structures. By the hierarchical classification proposed in the present disclosure, diversity in the features of the plurality of objects is considered for identifying the class of each of the plurality of objects. In an embodiment, the image receive module 201 may be configured to extract a patch or a region from the microscopic image 208 for performing the hierarchical classification. In an embodiment, the size of the patch may be selected based on one or more properties of the plurality of objects. For example, the size of the patch may be selected such that the boundaries of objects from the plurality of objects are within the patch or the region.
The root classifier module 202 may perform the root classification in the hierarchical classification on the region or the patch of the received microscopic image 208. In the hierarchical classification proposed in the present disclosure, the root classification is the first classification that performs inference on the microscopic image 208. In an embodiment, the root classifier module 202 may invoke one or more known classifier models for performing the root classification. In an embodiment, the classifier models may be selected from decision tree classifiers, rule-based classifiers, neural networks, Convolutional Neural Networks (CNN), support vector machines, naive Bayes classifiers, and so on. In an embodiment of the present disclosure, the root classification may be performed by invoking a CNN, a CNN variant or a conventional manual feature-based classifier. In an embodiment, the root classification may be trained to encompass all possible plurality of object classes 209, including the reject class. Further, the invoked classifier may be trained using the one or more first properties 210 for performing the root classification. In an embodiment, the one or more first properties 210 may include, but are not limited to, the channel parameter, the texture parameter, the focus region parameter, the size parameter and so on, associated with the plurality of objects. The one or more first properties 210 may also include a hyper-parameter. The hyper-parameter-based training of the root classification may include tuning of the root classifier module 202 using the hyper-parameter. The hyper-parameter may be selected based on the features of the plurality of object classes 209. In an embodiment, the hyper-parameter may be selected by performing cross-validation for the root classification.
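By way of illustration only, a CNN-based root classifier could resemble the following minimal PyTorch sketch; the layer sizes, the number of classes and the input patch size are illustrative assumptions, and in practice such hyper-parameters would be tuned by cross-validation as described above.

```python
import torch
import torch.nn as nn

class RootClassifier(nn.Module):
    """Minimal CNN sketch for a root classification over all object
    classes, including the reject class (illustrative only)."""

    def __init__(self, num_classes: int = 10, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # accepts any patch size, e.g. 128x128
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        # Softmax converts logits into per-class probability scores.
        return torch.softmax(self.head(z), dim=1)

# Usage: probability scores for a batch of one 3-channel, 128x128 patch.
scores = RootClassifier()(torch.randn(1, 3, 128, 128))
```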
Upon the root classification, the sub-class selection module 203 may select one or more sub-classes 211 based on the first probability score 212 associated with each of the plurality of object classes 209. In an embodiment, the probability score may represent the confidence of the root classifier module 202 in predicting objects in the region or the patch of the microscopic image 208 to be associated with a class from the plurality of object classes 209. In an embodiment, the first probability score 212 may be determined using unnormalized probabilities, log probabilities and so on. One or more techniques, known to a person skilled in the art, may be implemented for determining the first probability score 212.
In an embodiment, selecting the one or more sub-classes 211 includes identifying one or more first classes associated with values of the first probability score 212 greater than a predefined threshold value. In an embodiment, the one or more sub-classes 211 may be associated with the highest values of the first probability score 212, and the selected one or more sub-classes 211 may be referred to as the most probable classes. Further, based on the one or more features of the one or more first classes, the one or more sub-classes 211 may be selected. One or more techniques, known to a person skilled in the art, may be used to select and identify the one or more first classes using the values of the first probability score 212 and the predefined threshold value 216.
In an embodiment, selecting the one or more sub-classes 211 includes identifying the presence of one or more first confusion sets from the confusion set data 218, based on the confusion matrix associated with the root classification and the first probability score 212. The confusion matrix associated with the root classification may be stored as the confusion matrix data 217 in the hierarchical classification system 101. In an embodiment, the confusion matrix associated with the root classification may be obtained by performing confusion matrix analysis on the root classification. In an embodiment, the confusion matrix analysis may be performed based on a validation data set associated with the root classification. The confusion matrix associated with the root classification may include the one or more first confusion sets. Each of the one or more first confusion sets may include one or more classes which may be identified to be confused during the root classification. The confusion matrix-based analysis is used during training time. During inference, the presence of the one or more first confusion sets may be identified using the first probability score 212. During inference, in an embodiment, one or more classes in the confusion matrix associated with the root classification, with a probability score greater than the predefined threshold value, may be identified to be the one or more first confusion sets. During training, one or more methods, known to a person skilled in the art, may be implemented for identifying the presence of the one or more first confusion sets. For example, consider a microscopic image 208 of the urine sample for which the root classification is performed. The confusion matrix received upon the root classification may comprise a confusion set including the epithelial class, the cast class, and the reject class. Similarly, consider a microscopic image 208 of the blood sample for which the root classification is performed. The confusion matrix received upon the root classification may comprise a confusion set including the blast class and the atypical class. During training, the confusion sets may help in deciding which sub-classifiers, consisting of which sub-classes, are to be trained, while during inference the confusion sets help in deciding which sub-classifier is to be invoked next.
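By way of example only, confusion sets may be derived from a row-normalized confusion matrix computed on validation data, as in the following Python sketch; the class names, counts and the 15% cut-off are illustrative assumptions.

```python
import numpy as np

def find_confusion_sets(cm: np.ndarray, class_names: list,
                        min_rate: float = 0.15) -> list:
    """Group classes that are frequently confused on validation data."""
    rates = cm / cm.sum(axis=1, keepdims=True)  # row-normalize counts to rates
    sets = []
    for i in range(len(class_names)):
        for j in range(i + 1, len(class_names)):
            # Confusion in either direction places the pair in a set.
            if rates[i, j] >= min_rate or rates[j, i] >= min_rate:
                sets.append({class_names[i], class_names[j]})
    return sets

# Rows are true classes, columns are predicted classes (validation counts).
cm = np.array([[80, 15,  5],
               [12, 78, 10],
               [ 3,  4, 93]])
print(find_confusion_sets(cm, ["epithelial", "cast", "RBC"]))
# [{'epithelial', 'cast'}]
```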
Upon identifying the presence of the one or more first confusion sets, the one or more sub-classes 211 may be identified based on the one or more classes associated with the one or more first confusion sets. From the above example for the urine sample, the one or more features associated with said classes, i.e., the epithelial class, the cast class, and the reject class, may relate to the size parameter, as the shape formed by the boundaries of epithelial cells and cast cells is a key feature in distinguishing between the epithelial cells, the cast cells, and the rejects. Hence, the one or more sub-classes 211 may be the epithelial class, the cast class and the reject class. The sub-classification may be trained using the size parameter as the one or more second properties 213. By the trained sub-classification, the objects associated with said sub-classes 211 may be identified to be associated with one of the epithelial class, the cast class, and the reject class. Similarly, for the blood sample, the one or more features associated with said classes, i.e., the blast class and the atypical class, may be the shape of the nucleoli in the nucleus of the blast cell and the atypical cell. The focus region parameter may be a key parameter in distinguishing between the blast cell and the atypical cell. Hence, the one or more sub-classes 211 may be the blast class and the atypical class. The sub-classification may be trained using the focus region parameter as the one or more second properties 213. By said sub-classification, objects associated with said sub-classes 211 may be identified to be associated with one of the blast class and the atypical class.
In an embodiment, the sub-classifier module 204 may invoke one or more known classifier models for performing the sub-classification. In an embodiment, the classifier models may be selected from one of decision tree classifiers, rule-based classifiers, neural networks, CNN, support vector machines, naive Bayes classifiers, and so on. In an embodiment of the present disclosure, the sub-classification may be performed by invoking a CNN, a CNN variant or a conventional manual feature-based classifier. The one or more second properties 213 for training the sub-classification may include, but are not limited to, the channel parameter, the texture parameter, the focus region parameter, and the size parameter associated with the plurality of objects. The one or more second properties 213 may also include a hyper-parameter. In an embodiment, the sub-classification may be trained to encompass all possible one or more sub-classes 211. In an embodiment, the one or more sub-classes 211 may or may not include the reject class. The hyper-parameter-based training may include tuning of the sub-classifier module 204 using the hyper-parameter. The hyper-parameter may be selected based on the features of the one or more sub-classes 211. In an embodiment, the hyper-parameter may be selected by performing cross-validation for the sub-classification.
Upon the sub-classification, the sub-class selection module 203 may select the one or more new sub-classes 214 based on the second probability score 215 associated with each of the one or more sub-classes 211. In an embodiment, the second probability score 215 may be computed for each of the one or more sub-classes 211 upon the sub-classification.
In an embodiment, selecting the one or more new sub-classes 214 includes identifying one or more second classes associated with values of the second probability score 215 greater than the predefined threshold value. Further, based on the one or more features of the one or more second classes, the one or more new sub-classes 214 may be selected. Consider the sub-classification illustrated in the previous example for the blood sample. Upon the sub-classification, the blast class and the immature granulocyte class may be associated with values of the second probability score 215 greater than the predefined threshold value. The one or more new sub-classes 214 may be selected to be the blast class and the immature granulocyte class. Further, the one or more new sub-classes 214 may be updated as the one or more sub-classes 211 for initiating the sub-classification. A differentiating factor between the blast cells and the immature granulocytes is that the immature granulocytes are typically blue in colour and may be highlighted in the saturation plane of the HSV colour space. Hence, the sub-classification may be trained using the channel parameter as the one or more second properties 213, to identify the objects in the microscopic image 208 to be associated with one of the blast class and the immature granulocyte class.
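By way of illustration only, such channel-parameter based preprocessing may be sketched as follows using OpenCV's colour-space conversion; the use of OpenCV and a BGR input patch are assumptions of this sketch, not requirements of the disclosure.

```python
import cv2
import numpy as np

def saturation_plane(patch_bgr: np.ndarray) -> np.ndarray:
    """Extract the saturation plane of the HSV colour space from a patch.

    The saturation plane tends to highlight the blue tint of immature
    granulocytes, so a sub-classifier separating blast cells from
    immature granulocytes could be fed this channel instead of the
    raw colour patch.
    """
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 1]  # channel order is H=0, S=1, V=2
```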
In an embodiment, selecting the one or more new sub-classes includes identifying the presence of one or more second confusion sets from the confusion set data 218, based on the confusion matrix associated with the sub-classification and the second probability score 215. The confusion matrix associated with the sub-classification may be stored as the confusion matrix data 217 in the hierarchical classification system 101. In an embodiment, the confusion matrix associated with the sub-classification may be obtained by performing confusion matrix analysis on the sub-classification on a validation data set. The confusion matrix associated with the sub-classification may include the one or more second confusion sets. Each of the one or more second confusion sets may include one or more classes which may be identified to be confused during the sub-classification. The confusion matrix-based analysis is used during training time. During inference, the presence of the one or more second confusion sets may be identified using the second probability score 215. In an embodiment, one or more classes associated with the sub-classification, with a probability score greater than the predefined threshold value, may be identified to be the one or more second confusion sets. One or more methods, known to a person skilled in the art, may be implemented for identifying the presence of the one or more second confusion sets. During training, the confusion sets help in deciding which sub-classifiers, consisting of which sub-classes, are to be trained, while during inference the confusion sets help in deciding which sub-classifier is to be invoked next.
In an embodiment, the image processor module 205 may be configured to process the microscopic image for performing each of the root classification and the sub-classification. The microscopic image may be processed based on the one or more first properties 210 for performing the root classification. The microscopic image may be processed based on the one or more second properties 213 for performing the sub-classification. For example, consider that the one or more first properties 210 for performing the root classification include the size parameter. The image processor module 205 may be configured to process the microscopic image to extract a patch or region with a predetermined size, or a size found through cross-validation during training. Similarly, consider that the one or more second properties 213 for performing the sub-classification include the shape parameter. The image processor module 205 may be configured to process the microscopic image to focus on the shape of the objects.
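By way of example only, size-parameter based processing may amount to cropping a fixed-size patch around each detected object, as in the following Python sketch; the centre coordinates and the clamping behaviour are illustrative assumptions.

```python
import numpy as np

def extract_patch(image: np.ndarray, center: tuple, size: int) -> np.ndarray:
    """Crop a square patch of the given size around an object's centre.

    `size` stands in for the size parameter: a predetermined value or
    one found through cross-validation during training. The clamping
    keeps the patch inside the image bounds.
    """
    half = size // 2
    cy, cx = center
    y0 = max(0, min(cy - half, image.shape[0] - size))
    x0 = max(0, min(cx - half, image.shape[1] - size))
    return image[y0:y0 + size, x0:x0 + size]

# Usage: a 128x128 patch around an object centred at (300, 420).
patch = extract_patch(np.zeros((960, 1280, 3)), (300, 420), 128)
```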
In the present disclosure, initiation of the sub-classification may be performed until no new sub-classes are selected. The initiation may be an iterative process that continues until the absence of confusion sets is identified. Each of the initiated sub-classifications may be associated with corresponding one or more second properties and a corresponding second probability score. The one or more second properties of each of the initiated sub-classifications may be stored as the second property data 213 in the hierarchical classification system 101. The second probability score of each of the initiated sub-classifications may be stored as the second probability score data 215 in the hierarchical classification system 101.
Figure 6b shows a flow chart illustrating an exemplary method for hierarchical classification of objects in the microscopic image 208 of the sample.
Consider that hierarchical classification of a plurality of objects in a urine sample is to be performed by the hierarchical classification system 101. By the hierarchical classification, the accurate class associated with each of the plurality of objects may be identified.
At block 601, the microscopic image 208 of the urine sample, comprising the plurality of objects, may be received. The size of the patch of the microscopic image 208 may be selected to be large enough to include the boundaries of most of the plurality of objects. Also, the size of the patch may be selected such that neighbouring objects from the plurality of objects do not appear significantly closer in the patch. In an embodiment, the size of the patch may be selected using histograms, where a histogram may be drawn for a plurality of patch sizes for the plurality of object classes 209. One size may be selected based on a percentile in the histogram. For example, a patch size of 128 pixels may contain most of the plurality of objects in 7 out of 10 classes, while a bigger patch size of 512 pixels may also be able to contain the plurality of objects from the remaining three classes, but would include neighbouring objects of other classes. In this case, the patch size of 128 pixels may be chosen. The remaining three classes may correspond to bigger objects in this setting. One or more techniques, known to a person skilled in the art, may be implemented for selecting the size of the patch.
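By way of illustration only, the histogram-and-percentile selection may be sketched as follows in Python; the 95th percentile, the 7-out-of-10 coverage rule and the candidate sizes are illustrative assumptions consistent with the example above.

```python
import numpy as np

def choose_patch_size(object_sizes_per_class: dict,
                      percentile: float = 95.0,
                      candidates: tuple = (64, 128, 256, 512),
                      coverage: float = 0.7) -> int:
    """Pick the smallest candidate patch size that covers most objects
    in a majority of the object classes.

    `object_sizes_per_class` maps a class name to the pixel sizes of
    its training objects (e.g. bounding-box side lengths).
    """
    for size in candidates:
        covered = sum(
            1 for sizes in object_sizes_per_class.values()
            if np.percentile(sizes, percentile) <= size
        )
        # Accept once most classes fit, e.g. 7 out of 10 at 128 pixels.
        if covered >= coverage * len(object_sizes_per_class):
            return size
    return candidates[-1]
```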
At block 602, the root classification may be performed on the microscopic image 208. By the root classification, each of the plurality of objects in the urine sample may be identified to be associated with one of the plurality of object classes 209. Upon root classification, the first probability score 212 associated with each of the plurality of object classes 209 may be computed.
At block 603, a check is performed to determine whether the reject class is the most probable class or the second most probable class. The most probable class is the class associated with the highest value of the first probability score 212 among the plurality of object classes 209. The second most probable class is the class associated with the second highest value of the first probability score 212 among the plurality of object classes 209. If the reject class is identified to be one of the most probable class and the second most probable class, the step at block 604 is performed. If the reject class is not identified to be one of the most probable class and the second most probable class, the step at block 606 is performed.
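By way of example only, the check at block 603 may be expressed as follows in Python; the class names and scores are illustrative.

```python
import numpy as np

def reject_in_top2(scores: np.ndarray, class_names: list) -> bool:
    """Return True when the reject class is the most probable or the
    second most probable class for an object."""
    top2 = np.argsort(scores)[::-1][:2]  # indices of the two highest scores
    return any(class_names[i] == "reject" for i in top2)

scores = np.array([0.10, 0.35, 0.30, 0.25])  # RBC, epithelial, reject, cast
print(reject_in_top2(scores, ["RBC", "epithelial", "reject", "cast"]))  # True
```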
At block 604, upon identifying the reject class to be one of the most probable class and the second most probable class, the microscopic image 208 may be processed for sub-classification of the reject class and the other classes. In an embodiment, the microscopic image 208 may be processed such that objects associated with the reject class may be differentiated from objects associated with the other object classes 209. In an embodiment, the microscopic image 208 may be processed based on the shape parameter.
At block 605, the sub-classification for classifying the reject class and the other classes may be performed. By said sub-classification, the plurality of objects may be identified to be associated with one of the reject class and the other class among the top two most probable classes from the plurality of object classes 209. In an embodiment, the sub-classification may be trained to classify the objects based on the shape parameter. In an embodiment, a dedicated binary classifier may be invoked for performing said sub-classification.
At block 606, a check is performed to determine whether the bigger object class is the most probable class or the second most probable class. If the bigger object class is identified to be one of the most probable class and the second most probable class, the step at block 607 is performed; otherwise, the step at block 609 is performed.
At block 607, upon identifying the bigger object class to be one of the most probable class and the second most probable class, the microscopic image 208 may be processed for classification of the bigger object class. In an embodiment, the microscopic image 208 may be processed such that objects associated with the bigger object class may be differentiated from objects associated with another bigger object class or another class. In an embodiment, the microscopic image 208 may be processed based on the texture parameter. In an embodiment, processing of the microscopic image 208 may include changing the size of the patch of the microscopic image 208 to cover most of the plurality of objects for performing the sub-classification.
At block 608, the sub-classification for classifying the bigger object class may be performed. By said sub-classification, the plurality of objects may be identified to be associated with one of the bigger object class and the other class. In an embodiment, the sub-classification may be trained to classify the objects based on the texture parameter.
At block 609, upon identifying the bigger object class to not be one of the most probable class and the second most probable class, output of the root classification may be predicted to be an accurate output and provided to a user.
In an embodiment, upon identifying the bigger object class to not be one of the most probable class and the second most probable class, one or more other sub-classes 211 may be considered for performing further sub-classification of the microscopic image 208. The one or more other sub-classes may be selected for further sub-classification based on previously classified objects. For example, upon identifying the bigger object class to not be one of the most probable class and the second most probable class, a check for a class which is the most probable class or the second most probable class may be performed. The selected class may be any other class, such as the immature granulocyte class, the blast cell class and so on. Further, based on the check, the sub-classification for the selected class may be performed.
Figure 6c shows a flow chart illustrating an exemplary method for hierarchical classification of objects in the microscopic image 208 of the sample.
Consider that hierarchical classification of a plurality of objects in a blood sample is to be performed by the hierarchical classification system 101. By the hierarchical classification, an accurate class associated with each of the plurality of objects may be identified.
At block 610, the microscopic image 208 of the blood sample, comprising the plurality of objects, may be received. The size of the patch of the microscopic image 208 may be selected as disclosed previously. In an embodiment, one or more techniques, known to a person skilled in the art may be implemented for selecting the size of the patch.
At block 611, the root classification may be performed on the microscopic image 208. By the root classification, each of the plurality of objects may be identified to be associated with one of the plurality of object classes 209. Upon root classification, the probability score associated with each of the plurality of object classes 209 may be computed. Also, confusion matrix analysis may be performed in the root classification to find confusion sets. Based on the confusion matrix during training time and the probability scores during inference time, presence of the one or more first confusion sets may be identified.
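For illustration, a minimal Python sketch of the training-time confusion matrix analysis follows, assuming scikit-learn is available; grouping classes whose off-diagonal confusion rate exceeds 10% of a class's samples into a confusion set is an illustrative rule, not a value from this disclosure.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def find_confusion_sets(y_true, y_pred, labels, min_fraction=0.1):
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    # Row-normalise so entry (i, j) is the fraction of class i predicted as j.
    rates = cm / np.maximum(cm.sum(axis=1, keepdims=True), 1)
    confusion_sets = []
    for i, label in enumerate(labels):
        confused = [labels[j] for j in range(len(labels))
                    if j != i and rates[i, j] >= min_fraction]
        if confused:
            confusion_sets.append({label, *confused})
    return confusion_sets
```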
At block 612, a check is performed to determine whether the RBC class is associated with the most probable class, based on the confusion matrix of the root classification. If the RBC class is associated with the most probable class, the step at block 613 may be performed. If the RBC class is not associated with the most probable class, the step at block 619 may be performed.
At block 613, upon identifying the most probable class to be the RBC class, the microscopic image 208 may be processed for performing sub-classification associated with the RBC class. In an embodiment, the microscopic image 208 may be processed such that objects associated with the RBC class may be further classified between sub-classes associated with the RBC class. In an embodiment, the microscopic image 208 may be processed to input a grayscale image of the microscopic image 208 to the sub-classification associated with the RBC class. In an embodiment, the sub-classes associated with the RBC class may include, but are not limited to, round object class, elliptocyte class, echinocyte class, target class, teardrop class, broken class, invalid class and so on.
At block 614, the sub-classification for classifying the RBC class may be performed. By said sub-classification, the plurality of objects may be identified to be associated with one of the sub-classes associated with the RBC class. Upon said sub-classification, the probability score associated with each of the sub-classes associated with the RBC class may be computed. Also, confusion matrix analysis may be performed in said sub-classification to retrieve the confusion matrix associated with said sub-classification. Based on the confusion matrix and the probability score, presence of the one or more second confusion sets may be identified.
Upon performing the step at block 614, consider that the confusion matrix associated with the sub-classification includes the round object class. The step at block 615 may then be performed.
At block 615, a check is performed to determine whether the round object class and the target object class are associated with the most probable classes. If the round object class and the target object class are associated with the most probable classes, the step at block 616 may be performed. If the round object class and the target object class are not associated with the most probable classes, the step at block 620 may be performed.
At block 616, upon identifying the round object class and the target object class to be associated with the most probable classes, the microscopic image 208 may be processed for sub-classification to classify the round object class and the target object class. In an embodiment, the microscopic image 208 may be processed such that objects associated with sub-classes of the RBC class may be further classified between the round object class and the target object class. In an embodiment, the microscopic image 208 may be processed to input a binary image of the microscopic image 208 to the sub-classification of the round object class and the target object class.
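For illustration, a minimal Python sketch of such preprocessing follows, assuming OpenCV and a BGR colour patch; the use of Otsu's method to binarise the patch is an illustrative choice, not prescribed by this disclosure.

```python
import cv2

def to_binary(patch_bgr):
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method selects the threshold automatically from the histogram.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```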
At block 617, the sub-classification for classifying the round object class and the target object class may be performed. By said sub-classification, the plurality of objects may be identified to be associated with one of the round object class and the target object class. Upon said sub-classification, the probability score associated with each of the round object class and the target object class may be computed. Also, confusion matrix analysis may be performed in said sub-classification to retrieve the confusion matrix associated with said sub-classification. Based on the confusion matrix and the probability score, presence of the one or more second confusion sets may be identified. In the given example, absence of the one or more second confusion sets is identified.
At block 618, when the absence of the one or more second confusion sets is identified upon performing the step at block 617, the output of the sub-classification associated with the round object class and the target object class may be predicted to be an accurate output and provided to the user.
At block 619, when the RBC class is not associated with the most probable class upon performing the step at block 612, the output of the root classification may be predicted to be an accurate output and provided to the user.
At block 620, when the round object class and the target object class are not associated with the most probable classes upon performing the step at block 615, the output of the sub-classification associated with the RBC class may be predicted to be an accurate output and provided to the user.
The other data 219 may store data, including temporary data and temporary files, generated by modules for performing the various functions of the hierarchical classification system 101. The one or more modules 106 may also include other modules 206 to perform various miscellaneous functionalities of the hierarchical classification system 101. It will be appreciated that such modules may be represented as a single module or a combination of different modules.
Figure 3 illustrates a flowchart showing an exemplary method for performing the hierarchical classification of objects in the microscopic image 208.
At block 301, the image receive module 201 may be configured to receive the microscopic image 208 of the sample comprising the plurality of objects. In an embodiment, the plurality of objects in the microscopic image 208 may be observed at their relative natural scale.
At block 302, the root classifier module 202 may be configured to perform the root classification on the microscopic image 208 to identify each of the plurality of objects to be associated with one of plurality of object classes 209. The root classification may be trained using the one or more first properties 210 selected based on the one or more features of the plurality of object classes 209.
At block 303, the sub-class selection module 203 may be configured to check if the one or more sub-classes 211 are selected based on the first probability score 212 associated with each of the plurality of object classes 209. If the one or more sub-classes 211 are selected based on the first probability score 212, step at block 304 may be performed. If the one or more sub-classes 211 are not selected based on the first probability score 212, step at block 308 may be performed.
Figures 4a and 4b illustrate flowcharts showing exemplary methods for selecting the one or more sub-classes 211. Figure 4a corresponds to the train time of the hierarchical classification, while Figure 4b corresponds to the inference time of the hierarchical classification.
In Figure 4a, at block 401, the sub-class selection module 203 may be configured to compare the first probability score 212 of each of the plurality of object classes 209 with the predefined threshold value.
At block 402, the sub-class selection module 203 may be configured to identify the one or more first classes associated with values of the first probability score 212 greater than the predefined threshold value.
At block 403, the sub-class selection module 203 may be configured to select the one or more sub-classes 211 based on the one or more features of the one or more first classes.
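For illustration, a minimal Python sketch of the train-time selection of blocks 401-403 follows; the 0.3 threshold and the class names are illustrative assumptions, not values from this disclosure.

```python
def select_subclasses(first_scores, threshold=0.3):
    # Classes whose first probability score exceeds the predefined
    # threshold value become the candidate sub-classes.
    return [cls for cls, score in first_scores.items() if score > threshold]

first_scores = {"rbc": 0.45, "wbc": 0.35, "cast": 0.15, "crystal": 0.05}
print(select_subclasses(first_scores))  # ['rbc', 'wbc']
```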
In Figure 4b, at block 404, the sub-class selection module 203 may be configured to receive the confusion matrix associated with the root classification.
At block 405, the sub-class selection module 203 may be configured to identify presence of the one or more first confusion sets based on the confusion matrix associated with the root classification and the first probability score 212.
At block 406, the sub-class selection module 203 may be configured to select the one or more classes associated with the one or more first confusion sets to be the one or more sub-classes 211.
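For illustration, a minimal Python sketch of the inference-time selection of blocks 404-406 follows, reusing the `find_confusion_sets` sketch given earlier; combining the most probable class with the confusion sets in this way is an illustrative assumption.

```python
def select_subclasses_inference(first_scores, confusion_sets):
    most_probable = max(first_scores, key=first_scores.get)
    # The sub-classes are the members of any confusion set that contains
    # the most probable class.
    for confusion_set in confusion_sets:
        if most_probable in confusion_set:
            return sorted(confusion_set)
    return []  # no confusion set found: the root output is taken as accurate
```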
Referring to Figure 3, at block 304, upon verifying the selection of the one or more sub-classes, the sub-classifier module 204 may be configured to initiate the sub-classification in the hierarchical classification for each of the one or more sub-classes 211 on the microscopic image 208.
At block 305, the sub-classifier module 204 may be configured to perform the sub-classification to identify each object associated with the one or more sub-classes 211 to be associated with one of the one or more sub-classes 211. The sub-classification may be trained using the one or more second properties 213 selected based on the one or more features of the corresponding sub-class from the one or more sub-classes 211.
At block 306, the sub-classifier module 204 may be configured to check if the one or more new sub-classes 214 are selected based on the second probability score 215 associated with each of the one or more sub-classes. If the one or more new sub-classes 214 are selected based on the second probability score 215, step at block 307 may be performed. If the one or more new sub-classes 214 are not selected based on the second probability score 215, step at block 309 may be performed.
Figures 5a and 5b illustrate flowcharts showing exemplary methods for selecting the one or more new sub-classes 214. Figure 5a corresponds to train time, while Figure 5b corresponds to inference time.
In Figure 5a, at block 501, the sub-classifier module 204 may be configured to compare the second probability score 215 of each of the one or more sub-classes 211 with the predefined threshold value.
At block 502, the sub-classifier module 204 may be configured to identify the one or more second classes associated with values of the second probability score 215 greater than the predefined threshold value.
At block 503, the sub-classifier module 204 may be configured to select the one or more new sub-classes 214 based on the one or more features of the one or more second classes.
In Figure 5b, at block 504, the sub-classifier module 204 may be configured to receive the confusion matrix associated with the sub-classification.
At block 505, the sub-classifier module 204 may be configured to identify presence of the one or more second confusion sets based on the confusion matrix associated with the sub-classification and the second probability score 215.
At block 506, the sub-classifier module 204 may be configured to select the one or more classes associated with the one or more second confusion sets to be the one or more new sub-classes 214.
Referring to Figure 3, at block 307, upon verifying the selection of the one or more new sub-classes, the sub-classifier module 204 may be configured to update the one or more new sub-classes 214 as the one or more sub-classes 211 for initiating the sub-classification. Upon updating, steps at blocks 304-307 may be repeated. The steps at blocks 304-307 may be repeated until the sub-classifier module 204 is unable to select one or more new sub-classes 214 from the previous sub-classification.
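For illustration, a minimal Python sketch of this loop over blocks 302-309 follows, assuming `classify` returns a mapping from class names to probability scores and `select` behaves like the selection sketches above; the `max_depth` safeguard is an illustrative assumption, not part of this disclosure.

```python
def hierarchical_classification(patch, root_classes, classify, select,
                                max_depth=10):
    # Block 302: root classification over all object classes.
    scores = classify(patch, root_classes)
    # Block 303: select sub-classes from the first probability scores.
    subclasses = select(scores)
    for _ in range(max_depth):
        if not subclasses:
            # Blocks 308/309: no (new) sub-classes are selected, so the
            # latest classification output is provided to the user.
            return max(scores, key=scores.get)
        # Blocks 304-305: sub-classification restricted to the sub-classes.
        scores = classify(patch, subclasses)
        # Blocks 306-307: new sub-classes become the sub-classes.
        subclasses = select(scores)
    return max(scores, key=scores.get)
```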
At block 308, upon verifying that the one or more sub-classes are not selected, the output of the root classification performed at block 302 may be provided to the user.
At block 309, upon verifying that the one or more new sub-classes are not selected, the output of the sub-classification performed at block 305 may be provided to the user.
As illustrated in Figures 3, 4a-4b and 5a-5b the methods 300, 400a-400b and 500a-500b may include one or more blocks for executing processes in the hierarchical classification system 101. The methods 300, 400a-400b and 500a-500b may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
The order in which the methods 300, 400a-400b and 500a-500b are described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the methods. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the methods can be implemented in any suitable hardware, software, firmware, or combination thereof.
Computing System
Figure 7 illustrates a block diagram of an exemplary computer system 700 for implementing embodiments consistent with the present disclosure. In an embodiment, the computer system 700 is used to implement the hierarchical classification system 101. The computer system 700 may include a central processing unit ("CPU" or "processor") 702. The processor 702 may include at least one data processor for executing processes of the hierarchical classification system 101. The processor 702 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processor 702 may be disposed in communication with one or more input/output (I/O) devices 709 and 710 via I/O interface 701. The I/O interface 701 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
Using the I/O interface 701, the computer system 700 may communicate with one or more I/O devices 709 and 710. For example, the input devices 709 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output devices 710 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma display panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
In some embodiments, the computer system 700 may comprise the hierarchical classification system 101. The processor 702 may be disposed in communication with the communication network 711 via a network interface 703. The network interface 703 may communicate with the communication network 711. The network interface 703 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network 711 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface 703 and the communication network 711, the computer system 700 may communicate with a microscopic system 712 for performing hierarchical classification of a microscopic image received from the microscopic system 712.
The communication network 711 includes, but is not limited to, a direct interconnection, an e-commerce network, a peer-to-peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, and such. The communication network 711 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 711 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
In some embodiments, the processor 702 may be disposed in communication with a memory 705 (e.g., RAM, ROM, etc., not shown in Figure 7) via a storage interface 704. The storage interface 704 may connect to the memory 705 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fibre channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The memory 705 may store a collection of program or database components, including, without limitation, a user interface 706, an operating system 707, etc. In some embodiments, the computer system 700 may store user/application data, such as the data, variables, records, etc., as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
The operating system 707 may facilitate resource management and operation of the computer system 700. Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (e.g., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8/10, etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
Advantages
An embodiment of the present disclosure discloses an effective classification methodology for classifying objects in a microscopic image. A hierarchical classification is implemented to dynamically improve the accuracy of a classification system.
An embodiment of the present disclosure discloses a technique to reduce propagation of error arising in naive hierarchical classification. This may be achieved by training each classifier based on the output of the previous classifier and by training a classifier using a property different from that of the other classifiers. By this, the error of the previous classifier may be corrected rather than carried forward.
An embodiment of the present disclosure is configured to adapt the microscopic image based on the classification which is to be performed. This may be achieved by processing the microscopic image before performing a classification. By this, error in output of the classification may be reduced.
The described operations may be implemented as a method, system or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a "non-transitory computer readable medium", where a processor may read and execute the code from the computer readable medium. The processor is at least one of a microprocessor and a processor capable of processing and executing the queries. A non-transitory computer readable medium may include media such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), etc. Further, non-transitory computer-readable media may include all computer-readable media except for a transitory, propagating signal. The code implementing the described operations may further be implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.).
Still further, the code implementing the described operations may be implemented in “transmission signals”, where transmission signals may propagate through space or through a transmission media, such as, an optical fibre, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The transmission signals in which the code or logic is encoded is capable of being transmitted by a transmitting station and received by a receiving station, where the code or logic encoded in the transmission signal may be decoded and stored in hardware or a non-transitory computer readable medium at the receiving and transmitting stations or devices. An “article of manufacture” includes non-transitory computer readable medium, hardware logic, and/or transmission signals in which code may be implemented. A device in which the code implementing the described embodiments of operations is encoded may include a computer readable medium or hardware logic. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the invention, and that the article of manufacture may include suitable information bearing medium known in the art.
The terms “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean “one or more (but not all) embodiments of the invention(s)” unless expressly specified otherwise.
The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise.
The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.
The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
The illustrated operations of Figures 3, 4a-4b and 5a-5b show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Referral numerals:
Reference Number Description
100 Environment
101 Hierarchical classification system
102 Communication network
103 Microscopic system
104 Processor
105 I/O interface
106 Modules
107 Memory
201 Image receive module
202 Root classifier module
203 Sub-class selection module
204 Sub-classifier module
205 Image processor module
206 Other modules
207 Data
208 Microscopic image data
209 Object class data
210 First property data
211 Sub-class data
212 First probability score data
213 Second property data
214 New sub-class data
215 Second probability score data
216 Predefined threshold value
217 Confusion matrix data
218 Confusion set data
219 Other data
700 Computer System
701 I/O Interface
702 Processor
703 Network Interface
704 Storage Interface
705 Memory
706 User Interface
707 Operating System
708 Web Server
709 Input Devices
710 Output Devices
711 Communication Network
712 Microscopic system