Abstract: The present invention is a system and method for a spatio-temporal neural network for non-modular high content pathological screening. The method includes the following steps: obtaining images from various sources, annotating the images by a medical expert through a user interface, extracting features from the annotated images by a feature extractor, classifying the lumen area on each of the annotated images, re-authenticating the classified lumens through the user interface, exporting images in the requisite format, and training online and offline without user intervention. The annotating of images is manual. The method also includes steps of data collection and domain expert support, de-noising of the noise that arises from the staining process, morphological feature extraction, and template based learning for classification and synthesis of images. The system for a spatio-temporal neural network for non-modular high content pathological screening includes a high content screening (HCS) device, an annotating device, a feature extractor and a user interface. The HCS device provides the data required for screening. The annotating device is used for annotation of the data by medical expert practitioners, using the user interface, to formulate ground truth, and the feature extractor provides stage I optics. The user interface is a display with a server.
Field of the invention:
The present invention relates to a system and method for a spatio-temporal neural network for non-modular high content pathological screening.
Background of the invention:
Multiple biomedical imaging modalities form an essential part of cancer clinical protocols and furnish morphological, structural, metabolic and functional information. Early detection of cancer through imaging-based screening is probably the major contributor to the reduction in mortality for certain cancers. Diagnosis usually requires the histological examination of biopsy samples; a pathologist typically assesses the deviation in the cell structures and/or the change in the distribution of the cells. Integration with diagnostic tools assists in clinical decision-making. For such applications it is desirable to have intelligent systems that can handle temporal data by dynamically adjusting the learnt pattern using the underlying criticalities. Evolutionary Artificial Neural Networks (EANNs) have the ability to adapt to the situation, evolution being another fundamental form of adaptation in addition to learning.
Section II presents a brief review of work in cancer detection and diagnosis. Section III presents the proposed methodology and a mathematical model for it. Section IV reports the experimentation carried out using parametric methods of image processing and template based learning for pattern classification. Section V discusses the results from the experimentation, and finally Section VI presents the conclusion and future directions of work.
Prior art:
Digital image processing and pattern recognition techniques are widely used in pathological screening for cancer, and a special issue on the subject compiles the recent trends. A reported study on cancer diagnosis and Gleason grading of histological images of the prostate dealt with color, texture, and morphometric features at both the image (global) and histological object levels. The paper compared the performance of Gaussian, nearest neighbor, and Support Vector Machine (SVM) classifiers together with the sequential forward feature selection algorithm.
Yu-Len Huang et al. employed an image retrieval technique to classify breast tumors as benign or malignant lesions. The oncologist/pathologist located regions of interest (ROI) in ultrasound images. Textural features from the manually annotated ROI sub-images are used to classify the breast tumors, and principal component analysis (PCA) is used to reduce the dimension of the textural features.
In an attempt to reduce the diagnosis time and classify breast masses as either benign or malignant with high accuracy, Afzan Adam et al. used a Back Propagation Neural Network (BPNN) model combined with a Genetic Algorithm (GA), namely GAwNN, to obtain a faster classifier without downgrading the classification performance.
C. Scott introduces a hierarchical wavelet-based framework for modeling patterns in digital images. Wavelets are used for efficient image representations. Unknown model parameters are inferred from labeled training data using TEMPLAR, a template learning algorithm with linear complexity that employs the minimum description length (MDL) principle. After sufficient training with different patterns, it provides a low-dimensional subspace classifier that is invariant to unknown pattern transformations and background clutter. TEMPLAR essentially assumes a pattern confined at the centre, or else requires preprocessing, as it aims at making the template invariant to global transformations; local transformations need to be taken care of by the wavelet models.
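As an illustration only (not the TEMPLAR algorithm itself), the sketch below shows the kind of sparse, low-dimensional wavelet representation such template-learning frameworks build on: a depth-5 decomposition in which only the largest-magnitude coefficients are retained. PyWavelets is assumed to be available; the image, wavelet family and coefficient budget are illustrative assumptions.

```python
# Illustrative sketch: depth-5 wavelet decomposition of a grey-scale pattern,
# keeping only the largest coefficients to form a sparse representation.
import numpy as np
import pywt

def sparse_wavelet_representation(image: np.ndarray, keep: int = 256, level: int = 5):
    """Decompose `image` with a Haar wavelet and zero all but the `keep`
    largest-magnitude coefficients, then reconstruct the approximation."""
    coeffs = pywt.wavedec2(image, wavelet="haar", level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)            # flatten coefficients
    threshold = np.sort(np.abs(arr).ravel())[-keep]        # keep-th largest magnitude
    arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)
    coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_sparse, wavelet="haar")    # low-dimensional approximation

template = sparse_wavelet_representation(np.random.rand(64, 64))   # synthetic 64x64 pattern
```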
Therefore, there is a need to provide a spatio-temporal neural network for non-modular high content pathological screening, which will avoid all the drawbacks mentioned above.
Object of the invention:
Object of the present invention is to provide a system and method for a spatio-temporal neural network for non-modular high content pathological screening.
Another object of the present invention is to provide a system and method for a spatio-temporal neural network for non-modular high content pathological screening, which provides scalability and flexibility to the system.
Yet another object of the present invention is to provide a system and method for a spatio-temporal neural network for non-modular high content pathological screening, which is self-learning in nature, thereby providing an adaptive quotient.
Still another object of the present invention is to provide a system and method for a spatio-temporal neural network for non-modular high content pathological screening, which brings out region of interest through pruning.
One more object of the present invention is to provide a system and method for a spatio-temporal neural network for non-modular high content pathological screening, which quantifies lumen information more objectively.
Another object of the present invention is to provide a system and method for a spatio-temporal neural network for non-modular high content pathological screening, which provides effective user interface to blend best of the machine tasks (heuristic) and manual tasks (holistic, perceptions).
Summary of the invention
The present invention is a system and method for a spatio-temporal neural network for non-modular high content pathological screening. The method includes the following steps: obtaining images from various sources, annotating the images by a medical expert through a user interface, extracting features from the annotated images by a feature extractor, classifying the lumen area on each of the annotated images, re-authenticating the classified lumens through the user interface, exporting images in the requisite format, and training online and offline without user intervention. The annotating of images is manual. The method also includes steps of data collection and domain expert support, de-noising of the noise that arises from the staining process, morphological feature extraction, and template based learning for classification and synthesis of images. The system for a spatio-temporal neural network for non-modular high content pathological screening includes a high content screening (HCS) device, an annotating device, a feature extractor and a user interface. The HCS device provides the data required for screening. The annotating device is used for annotation of the data by medical expert practitioners, using the user interface, to formulate ground truth, and the feature extractor provides stage I optics. The user interface is a display with a server.
Brief description of the figures:
The advantages and features of the present invention will be better understood with reference to the following detailed description and claims taken in conjunction with the accompanying drawings, wherein like elements are identified with like symbols, and in which:
Figure 1 shows a schematic view of system and method for a spatio-temporal neural network for non-modular high content pathological screening;
Figure 2 shows (top left) an original image of a cancerous cell, (top right) its grey-scale version, (bottom left) the de-noised image after convolution, and (bottom right) the image after un-sharp filtering;
Figure 3 shows (top row) the change with dilation parameter values of 08, 100 and 1000 from left to right respectively, and (bottom row) the change in shape of the object for different morphological constructors; and
Figure 4 shows an illustration of the translation operation for generating training images; the translation is from left to right in the images, proceeding from top left and top right to bottom right successively (original image: Figure 2, top right).
Detailed description of the invention:
An embodiment of this invention, illustrating its features, will now be described in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
The terms “first,” “second,” and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another, and the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.
The present invention is a system and method for a spatio-temporal neural network for non-modular high content pathological screening. The system and method provide scalability and flexibility. Further, the system and method are self-learning in nature, thereby providing an adaptive quotient. Furthermore, the system and method bring out the region of interest through pruning. Moreover, the system and method quantify lumen information more objectively. Lastly, the system and method provide an effective user interface to blend the best of machine tasks (heuristic) and manual tasks (holistic, perception).
Now referring to figure 1, a system (hereinafter referred to as "system 100") and method for a spatio-temporal neural network for non-modular high content pathological screening in accordance with the present invention is illustrated. The system 100 includes a high content screening device 10, an annotating device 20, a feature extractor 30 and a user interface 40. Data is acquired from the HCS (High Content Screening) device 10 and used for further processing. The data is annotated by medical expert practitioners using the annotating device 20 to formulate the ground truth. The feature extractor 30 provides stage I optics. The user interface 40 displays all the information. Depending on the user feedback, system 100 learns on the fly and adapts to provide a more accurate and flexible system.
The method 200 for the working of system 100 is provided below. The method 200 includes the following steps. A set of images is loaded into the system through the HCS device 10. A batch process is run using the feature extractor 30 (Phase I) as an offline process. The feature extractor 30 extracts the features for each image loaded into the system. The extracted features are then fed to the ANN based classifier proposed in Phase II (on the fly) for classifying the lumen area on each of the annotated images. A parametric algorithm is provided to classify all possible lumen (as an example) areas in each image. Lumen areas are classified as positive or negative: positive as high confidence, medium confidence or low confidence; negative as background, cytoplasm or other areas.
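A minimal sketch of this Phase-II step is given below: an ANN that maps per-region feature vectors to the lumen classes named above. The class names, feature dimensionality and network size are illustrative assumptions, not the patented classifier.

```python
# Hedged sketch of the Phase-II ANN classifier for lumen areas.
import numpy as np
from sklearn.neural_network import MLPClassifier

CLASSES = [
    "positive_high_confidence", "positive_medium_confidence", "positive_low_confidence",
    "negative_background", "negative_cytoplasm", "negative_other",
]

def train_lumen_classifier(features: np.ndarray, labels: np.ndarray) -> MLPClassifier:
    """features: (n_regions, n_features) array from the Phase-I feature extractor;
    labels: integer indices into CLASSES."""
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(features, labels)
    return clf

# Illustrative usage with random placeholder data (12 morphological features per region).
rng = np.random.default_rng(0)
X = rng.random((300, 12))
y = rng.integers(0, len(CLASSES), size=300)
model = train_lumen_classifier(X, y)
predicted = [CLASSES[i] for i in model.predict(X[:5])]
```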
Again, according to method 200, the user interface 40 is provided to qualify the automatic processing. There are two main conditions: a Pass condition, where the user agrees with the identification and classification by the algorithm, and a Fail condition, where the user greatly disagrees with the identification and classification by the algorithm. The pass condition further includes the following steps:
i. 100% agreement… NEXT
ii. False Negatives
1. User clicks on missed lumens and system will annotate those automatically…. NEXT
iii. False Positives
1. User clicks on falsely identified lumens and system will remove its annotation automatically… NEXT
iv. Incorrect pick up of borders
1. User clicks on lumen and system will remove its annotation
2. User drags and marks the border and system will identify it as lumen
3. NEXT
The fail condition further includes following steps:
i. User clicks on FAIL button and all annotations will be removed
ii. User can manually annotate using PEN feature
After this, the images are exported in the requisite format for further usage. Online training of the ANN is provided for PASS cases while the user is working, and offline training is provided for FAIL cases, where the FAIL cases are automatically picked up by the ANN along with the manual annotations provided by the user, without further user intervention.
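The sketch below illustrates, under stated assumptions, how this re-authentication workflow could be wired together: PASS feedback patches the automatic annotations and queues the image for on-line training, while FAIL feedback clears them, accepts the user's manual (PEN) annotations and queues the image for off-line retraining. The data structures and field names are hypothetical, not the patented implementation.

```python
# Hedged sketch of the PASS/FAIL feedback handling described above.
from dataclasses import dataclass, field

@dataclass
class ReviewedImage:
    image_id: str
    annotations: list                               # lumen annotations proposed by the ANN
    online_queue: list = field(default_factory=list)
    offline_queue: list = field(default_factory=list)

def apply_review(img: ReviewedImage, verdict: str, corrections=None, manual=None):
    corrections = corrections or []
    if verdict == "PASS":
        for c in corrections:                       # false negatives / positives / borders
            if c["kind"] == "missed_lumen":
                img.annotations.append(c["region"])
            elif c["kind"] == "false_lumen":
                img.annotations.remove(c["region"])
            elif c["kind"] == "border_fix":
                img.annotations.remove(c["old"])
                img.annotations.append(c["new"])
        img.online_queue.append((img.image_id, list(img.annotations)))   # on-line training
    elif verdict == "FAIL":
        img.annotations.clear()                     # all automatic annotations removed
        img.annotations.extend(manual or [])        # PEN feature: manual annotations
        img.offline_queue.append((img.image_id, list(img.annotations))) # off-line training
    return img

img = ReviewedImage("slide_001", annotations=["lumen_A", "lumen_B"])
apply_review(img, "PASS", corrections=[{"kind": "missed_lumen", "region": "lumen_C"}])
```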
The interconnection algorithm for back propagation is twofold. First, the data flow is separated from the local computations, which brings flexibility; this gradient-descent methodology is also valid for training recurrent networks and networks through time. Second, the local error is available as a signal at the processing elements (PEs) of the dual topology, which means that the system 100 does not need to write equations to compute the local error, the biggest problem when simulating arbitrary neural networks with back propagation. Only the topology of the network needs to be specified by the user; the flow of errors through the dual network topology performs the back propagation computations. Implementing the back propagation algorithm with the dual network is much more versatile than directly coding the corresponding equations, since the dual network can be programmed very simply from the user's specified topology.
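A minimal illustration of this view is sketched below: the user specifies only the layer sizes (the topology), the forward pass performs the local computations, and the backward pass simply pushes local errors back through the transposed weights, i.e. through the dual topology. The layer sizes, activation and learning rate are illustrative assumptions.

```python
# Sketch of back-propagation as error flow through the dual (reversed) topology.
import numpy as np

def init_network(sizes, seed=0):
    rng = np.random.default_rng(seed)
    return [rng.normal(0, 0.1, (n_in, n_out)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    activations = [x]
    for W in weights:
        x = np.tanh(x @ W)                          # local computation at each layer
        activations.append(x)
    return activations

def backward(weights, activations, target, lr=0.01):
    delta = (activations[-1] - target) * (1 - activations[-1] ** 2)   # local output error
    for i in reversed(range(len(weights))):         # error flows through the dual topology
        grad = activations[i].T @ delta
        delta = (delta @ weights[i].T) * (1 - activations[i] ** 2)
        weights[i] -= lr * grad
    return weights

weights = init_network([12, 16, 6])                 # the topology is all the user specifies
acts = forward(weights, np.random.rand(4, 12))
weights = backward(weights, acts, np.zeros((4, 6)))
```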
A mathematical model for system 100 is provided as follows:
Notations: quantities known a priori, state parameter uncertainties, measurement disturbances, and the vector responsible for matching.
The model comprises: the evolutionary dynamic update strategy for network learning; wavelet differential evolutions with an orthogonal basis; the NN approximation error condition; an upper bound on the modeling error; the estimation error; the nominal dynamics of the wavelet neural network; and the observer structure derived from the NN estimator, where the estimation error converges asymptotically, w1 should converge and w2 should be adaptive. The parameter adjustment w(i,t) provides the function approximation, where P1 and P2 are positive solutions of the Riccati equation. The model further specifies the average estimation error (β), the training of the network, the training quality, and a stability analysis of the learning laws by the Lyapunov method (bounded weight behavior).
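The update laws themselves appear in the specification's figures and are not reproduced here, so the following is only a generic, heavily hedged illustration of the idea stated above: an adaptive weight adjustment driven by the estimation error and the orthogonal basis outputs, with the weights clipped to a known bound (the bounded weight behavior examined by the Lyapunov analysis). All symbols, gains and bounds are illustrative assumptions.

```python
# Generic adaptive weight update with bounded weights (illustrative only).
import numpy as np

def adaptive_update(w, basis_output, estimation_error, gain=0.05, w_max=10.0):
    """w: current weight vector of the wavelet network;
    basis_output: outputs of the orthogonal wavelet basis functions;
    estimation_error: difference between observed and estimated state."""
    w_new = w + gain * estimation_error * basis_output      # gradient-like adaptation
    return np.clip(w_new, -w_max, w_max)                    # keep weights bounded

w = np.zeros(8)
w = adaptive_update(w, basis_output=np.random.rand(8), estimation_error=0.3)
```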
The system 100 is implemented using method 200 as shown in the following experiment. The first step of the experiment is data collection and domain expert support. A huge volume of data is required for the experimentation. Data is collected from the Histopathology Department, Cancer Research Centre, Ruby Hall Clinic, Pune, in the form of glass slides. Images are processed using a microscope-CCD setup. Manual annotation plays a key role in establishing the ground truth. The results are validated against the state of the art, which may lead to the best predicted pathological solutions by avoiding subjectivity in the processing and underlying algorithms. Due to the evolutionary nature of the algorithm mentioned above, manual annotation is required for training and validation for every new class of pattern.
Referring now to figure 2, the second step of the experiment is de-noising. Due to a considerable amount of noise arising from the staining process, it is usually necessary to reduce the noise prior to the focal area identification. The noise can be removed using methods such as thresholding, filtering, morphology operations and the like. A filter is implemented using the Gaussian probability density function (PDF) for one dimension and repeated for the other, with
g(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²)) = (1/σ) φ((x − µ)/σ),
where σ > 0 is the standard deviation, the real parameter µ is the expected value and φ(x) = (2π)^(−1/2) exp(−x²/2) is the density of the "standard normal" distribution. Convolution filtering is used to remove the noise from the image. The resulting image is free from noise, but contains blurred and un-sharp objects.
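A minimal sketch of this de-noising step is given below, assuming the separable Gaussian filter above is realised by convolution; scipy's gaussian_filter applies the 1-D Gaussian along one axis and then the other, as described. The image and σ value are placeholders.

```python
# Sketch of Gaussian convolution de-noising (cf. Figure 2, bottom left).
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(grey_image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Convolve the grey-scale image with a Gaussian of standard deviation sigma."""
    return gaussian_filter(grey_image.astype(float), sigma=sigma)

noisy = np.random.rand(256, 256)   # placeholder for the grey-scale slide image
smooth = denoise(noisy)            # de-noised but blurred and un-sharp
```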
An un-sharpening filter is used for sharpening and contrast enhancement. Referring to figure 3, the third step of the experimentation is morphological feature extraction, i.e. addressing subjectivity. Morphological feature extraction is the foremost common and important step in cancer detection. A parametric approach is selected to extract the features. Images are filtered using anisotropic diffusion thresholding. Staining methods are routinely used in pathology; this causes problems in image segmentation for the quantitative analysis and detection of cancer. Morphological features are extracted using dilation and erosion with different parameter values, KP = 1000, 100 and 08 respectively. In the next operation, the morphological constructor is changed and the change in shape of the extracted object can be seen. The morphological constructors used are disk, arbitrary and diamond.
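The sketch below illustrates this morphological step under stated assumptions: dilation and erosion of a thresholded (binary) image with disk and diamond structuring elements, recording simple area features so the change of shape with the constructor can be compared. scikit-image is assumed to be available, and the structuring-element radii are placeholders.

```python
# Illustrative sketch of morphological feature extraction with different constructors.
import numpy as np
from skimage.morphology import disk, diamond, binary_dilation, binary_erosion

def morphological_features(binary_image: np.ndarray):
    results = {}
    for name, selem in {"disk": disk(3), "diamond": diamond(3)}.items():
        dilated = binary_dilation(binary_image, selem)
        eroded = binary_erosion(binary_image, selem)
        results[name] = {"dilated_area": int(dilated.sum()),
                         "eroded_area": int(eroded.sum())}
    return results

mask = np.random.rand(128, 128) > 0.5   # placeholder for the thresholded image
features = morphological_features(mask)
```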
Referring now to figure 4, the fourth step of the experiment is TEMPLAR, template based learning for classification and synthesis. During the process of capture, an image can appear at any location, orientation or scale. These uncertainties in pattern observations are modeled with a hierarchical framework based on the notion of deformable templates from pattern theory. A template is a noise-free observation of a pattern that can be transformed into an arbitrary observation of the same pattern by applying a deformation to the template in the form of rotation, scaling, translation and shear, as well as adding observation noise. For translation, the image is treated as a torus: when an image is translated to the right, the pixels on the right edge of the original image wrap around and appear on the left edge of the translated image (a sketch of this wrap-around translation is given after the processing list below). For rotation, when an image is rotated through an angle which is not a multiple of 90 degrees, the corners of the original image that are cropped off by the rotation are mapped in a one-to-one manner back to the empty corners of the rotated image. The number of training images used is 10, with a wavelet depth of 5. A translation is applied to the input image, resulting in a non-parametric way of template learning as shown in figure 4. When the observed image and the training images are passed through NIST-STS, it is observed that the generated training images pass certain non-parametric tests which the original captured image could not pass. Step five of the experiment presents the on-the-fly learning results: a test example of automatic segmentation of lumens in gland units recognized on 20x microscope images of H&E stained prostate carcinoma tissue. A template based approach is used to extract the features and then classify them as non-lumen and lumen areas. The various techniques used in processing these images were as follows:
- Background subtraction
- Extracting white areas
- Identification and classification of cytoplasm areas
- Identification and classification of nucleated areas
- White area processing
  - Size and shape filter
- Classification of lumens
- Exporting the image in "Adobe Photoshop layered" image format
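As mentioned above, the wrap-around (torus) translation used to generate additional TEMPLAR training images can be sketched as follows: pixels shifted off the right edge re-appear on the left, which np.roll implements directly. The shift amounts and image are illustrative.

```python
# Sketch of the torus translation used to generate training images (cf. Figure 4).
import numpy as np

def torus_translate(image: np.ndarray, shift_right: int, shift_down: int = 0) -> np.ndarray:
    """Translate the image on a torus: columns wrap left-to-right, rows top-to-bottom."""
    return np.roll(image, shift=(shift_down, shift_right), axis=(0, 1))

original = np.random.rand(128, 128)                           # placeholder grey-scale image
training_images = [torus_translate(original, s) for s in range(0, 128, 16)]
```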
40 images were used to define the feature vectors and the algorithm was manually trained to consider the boundary conditions. A different set of 80 images was then used to perform the acceptance test.
The accuracy of system 100 as reported from the experiment is as follows:
1. In one case 99% of lumens were identified, while in the other case 97% of lumens were identified.
2. On average,
- Around 98% of lumens are correctly identified
  i. Around 90% of lumens are correctly identified along with the border
  ii. Around 8% of lumens needed to be corrected for border pickup
- Around 1% of lumens were not identified (false negatives)
- Around 1% of lumens were not true lumens (false positives)
This produces a system with:
1. EER (Equal Error Rate) of 1%
2. Accuracy of 98% (average)
The Result of the experiment is as follows:
During the process of image acquisition it was observed that cell images can appear anywhere in the glass slide, whether attempted with manual or motorized microscopy. HCS data also suffers from the acquisition related problems of clutter and occlusion, and of touching and overlapping cells. Furthermore, there is no fixed shape, size or color for a particular type of cell or level of deviation; most of the parameters are non-modular. Segmentation therefore becomes a tedious task, and none of the algorithms claims confidence as a general framework for pathological data in general; rather, most of the algorithms report success for a particular disease. Staining is very much required for pathology, but adds the burden of filtering the image. Morphological feature extraction, which is based mainly on thresholding, segmentation and detection of discontinuities, gets trapped into subjectivity. The results show high subjectivity because the methods are strictly modular. Morphological operations do not represent the physical meaning of the processes and suffer from dependence on the non-topographical content in an image.
In light of these results from the experimentation, non-parametric approaches are followed. Template based learning using a hierarchical wavelet-based model is used for pattern modeling. It is observed that the model is not enough to take care of local transformations in images and also requires pre-processing in order to confine the pattern to the centre. The training images generated from the TEMPLAR framework are observed to be invariant to translation and rotation. Tested with the NIST-STS suite, the training images pass certain non-parametric tests, whereas the original image from which the patterns are made fails the same tests. This, in a way, confirms the non-parametric approach and its ability to deal with non-modular data. Another important issue with network training is the use of different datasets and the differing opinions of experts. Manual annotation plays an important role in establishing the ground truth. The proposed dynamic update model, which evolves on a wavelet framework, is based on the nominal model and the Riccati matrix, and on the Stone-Weierstrass theorem of uniform approximation by polynomials for continuous real valued functions on a compact interval. The on-the-fly learning mechanism is similar to the human way of understanding and taking decisions. The evolving system produced an EER of 1% under challenging situations.
The conclusion of the experiment:
In the present invention, various challenges in the detection and diagnosis of cancer cells from histopathological data are discussed with respect to de-noising and feature extraction. In particular, subjectivity in segmentation and morphological feature extraction with modular methods is discussed, along with classification methodologies based on image analysis algorithms and their limitations. In light of the results of the parametric methods, the TEMPLAR framework, a hierarchical wavelet based approach, is adopted to take advantage of sparse representation and to achieve invariance to global transformations. Working with the wavelet based model, we arrive at a mathematical model for future work along the lines of the Riccati matrix and the Stone-Weierstrass approximation theorem. The experimentation demonstrated the shortfalls of the traditional process of image analysis. The approach of designing a class of EANN with the objective of classifying cell images based on more objective features gives a system accuracy of 98%.
The present invention is a system and method for a spatio-temporal neural network for non-modular high content pathological screening. The system and method provide scalability and flexibility. Further, the system and method are self-learning in nature, thereby providing an adaptive quotient. Furthermore, the system and method bring out the region of interest through pruning. Moreover, the system and method quantify lumen information more objectively. Lastly, the system and method provide an effective user interface to blend the best of machine tasks (heuristic) and manual tasks (holistic, perception).
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present invention and its practical application, to thereby enable others skilled in the art to best utilize the present invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omission and substitutions of equivalents are contemplated as circumstance may suggest or render expedient, but such are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present invention.
Claims:
We Claim:
1. A method for a spatio-temporal neural network for non-modular high content pathological screening, the method comprising steps of:
obtaining images from various sources;
annotating the images by medical expert through user interface;
extracting features from the annotated images by a feature extractor;
classifying lumen area on each of the annotated images;
re-authenticating the classified lumen through user interface;
exporting images in requisite format and
training online and offline with manual annotations by user without user intervention.
2. The method as claimed in claim 1, wherein the method comprises data collection and domain expert support.
3. The method as claimed in claim 1, wherein the method comprises de-noising of the noise that arises from the staining process.
4. The method as claimed in claim 1, wherein the method comprises morphological feature extraction.
5. The method as claimed in claim 1, wherein the method comprises a template based learning for classification and synthesis of images.
6. The method as claimed in claim 1, wherein the annotating of images is manual.
7. A system for a spatio-temporal neural network for non-modular high content pathological screening, the system comprising:
a high content screening (HCS) device, wherein the HCS device provides data required for screening;
an annotating device, wherein in the annotating device the data is annotated by medical expert practitioners using a user interface to formulate ground truth; and
a feature extractor for providing stage I optics.
8. The system as claimed in claim 7, wherein the user interface is a display with server.