
System And Method For Identifying And Analysing Behaviour Of A Cell Colony Using Microscopic Images

Abstract: A system and method for identifying and analyzing behavior of a cell colony using image series classification is provided. The method includes receiving a plurality of images associated with the cell colony, and creating an image encoding vector for each of the plurality of images, using a downsampling convolutional neural network (D-CNN) model. The method further includes generating a binary mask for each of the image encoding vectors, using an upsampling convolutional neural network (U-CNN), generating a mask encoding vector based on an encoding of each of the binary masks, and generating a cell colony encoding vector by concatenating each of the image encoding vectors with the corresponding mask encoding vectors. The method further comprises analyzing the cell colony by determining an image series category of the cell colony from a set of categories, using a sequence based neural network model.


Patent Information

Application #: 202141028960
Filing Date: 28 June 2021
Publication Number: 52/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: mohammed.faisal@ltts.com
Parent Application:

Applicants

L&T TECHNOLOGY SERVICES LIMITED
DLF IT SEZ Park, 2nd Floor – Block 3, Mount Poonamallee Road, Ramapuram, Chennai

Inventors

1. ARANGARAJAN PALANIAPPAN
50A, Sindhu Nagar, Aindhu Panai, Kadachanallur, Tiruchegode, Namakkal – 638008
2. SWAROOP KUMAR MYSORE LOKESH
68, 5th Cross, Silver Springs Layout, Mysuru – 570026
3. USHA SAIDAPPA DIGGI
8-1545/81A/1/12, Shivaji Nagar, Kalaburagi – 585104

Specification

Technical Field
[001] This disclosure relates generally to bioinformatics, and more particularly to a system and method for identifying and analyzing behavior of an entity (such as, a cell colony) using microscopic images.

Background
[002] A cell colony corresponds to a cluster of identical cells (clones) on the surface of (or within) a solid medium, usually derived from a single parent cell, as in a bacterial colony. Conventional image processing mechanisms for cell colony detection and behavior analysis are very challenging. The conventional mechanisms need a clinical operator to track cell colonies and a clinical expert to interpret the behavior of cell colonies, and the challenges include cost, time consumption, and inaccuracy. In many cases, such conventional mechanisms may require precautions to be taken for specific diagnosis. For example, conventional mechanisms may require human intervention with manual validation steps. As a result, cell colonies may get damaged in a cultivated environment.
[003] Accordingly, there is a need for a system and method with an artificial neural network (ANN) based image series classification model for identifying and analyzing a cell colony.

SUMMARY
[004] In accordance with an embodiment, a method of identifying and analyzing cell colony using microscopic images is disclosed. The method includes receiving a plurality of images associated with the cell colony, wherein the plurality of images are captured at subsequent instances of time, and creating an image encoding vector for each of the plurality of images, using a downsampling convolutional neural network (D-CNN) model. The method further includes generating a binary mask for each of the image encoding vectors, using an upsampling convolutional neural network (U-CNN), and generating a mask encoding vector based on an encoding of each of the binary masks. The method further includes generating a cell colony encoding vector by concatenating each of the image encoding vectors with corresponding mask encoding vectors, and analyzing the cell colony with determination of an image series category of the cell colony from a set of categories, using a sequence based neural network model, based on the cell colony encoding vector.
[005] In an embodiment, a system for identifying and analyzing a cell colony using image series classification is disclosed. The system may include a processor and a memory communicatively coupled to the processor. The memory may be configured to store processor-executable instructions. The processor-executable instructions, on execution, cause the processor to receive a plurality of images associated with the cell colony, wherein the plurality of images are captured at subsequent instances of time, and create an image encoding vector for each of the plurality of images, using a downsampling convolutional neural network (D-CNN) model. The processor-executable instructions, on execution, further cause the processor to generate a binary mask for each of the image encoding vectors, using an upsampling convolutional neural network (U-CNN), and generate a mask encoding vector based on an encoding of each of the binary masks. The processor-executable instructions, on execution, further cause the processor to generate a cell colony encoding vector by concatenating each of the image encoding vectors with corresponding mask encoding vectors, and analyze the cell colony with determination of an image series category of the cell colony from a set of categories, using a sequence based neural network model, based on the cell colony encoding vector.
[006] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[007] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[008] FIG. 1 is a block diagram that illustrates an environment for a system for identifying and analyzing cell colony, in accordance with an embodiment of the disclosure.
[009] FIG. 2 is a block diagram that illustrates an exemplary system for identifying and analyzing cell colony using a temporal Convolution Neural Network model, in accordance with an embodiment of the disclosure.
[010] FIG. 3 illustrates exemplary operations for training and testing of a deep learning neural network model for a prediction of image series classification of an entity, in accordance with an embodiment of the disclosure.
[011] FIG. 4 is a block diagram that illustrates an architecture of a deep learning model used by a system to predict a class of an entity from an image series, in accordance with an embodiment of the disclosure.
[012] FIG. 5 is a flowchart that illustrates an exemplary method for identifying and analyzing an entity (such as, a cell colony) using image series classification, in accordance with an embodiment of the disclosure.

DETAILED DESCRIPTION
[013] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims. Additional illustrative embodiments are listed below.
[014] The following described implementations may be found in the disclosed system and method for identifying and analyzing a cell colony using image series classification, based on a deep learning Neural Network (NN) model (which can broadly be termed an ANN model). Exemplary aspects of the disclosure provide a system that trains a temporal deep learning model to be suitable for real-time inference. The disclosed system makes use of temporal differences amongst classes (such as, but not limited to, a healthy class, an unhealthy class, and an inconclusive class) from image series associated with one or more cell colonies. In contrast with conventional NNs, the disclosed system may also dynamically diagnose a disease using image series classification based on the deep learning NN model.
[015] The disclosed system may help doctors and clinical operators to locate and understand the cell colony behavior in a cultivated environment and hence facilitate time saving, reduction in cell damage, reduction in handling errors, reduction in false classification due to human bias, easy cell colony detection and behavior analysis in clinical labs, effective cell quality analysis, and rapid analysis owing to reduced dependency on a specialist. Exemplary aspects of the disclosure provide the system to facilitate regenerative therapies and toxicological studies. Exemplary aspects of the disclosure provide the system that non-invasively extracts information on cell quality and cellular processes from time-lapse phase-contrast videos or images.
[016] FIG. 1 is a block diagram that illustrates an environment for a system for identifying and analysing a cell colony, in accordance with an embodiment of the disclosure. With reference to FIG.1, there is shown an environment 100. The environment 100 includes a system 102, an image sensor 104, an external device 106, and a communication network 108. The system 102 may include the image sensor 104.
[017] The system 102 may be communicatively coupled to the external device 106, via the communication network 108. The system 102 may include a deep learning neural network model 110, for example, as part of an application stored in memory of the system 102.
[018] The system 102 may include suitable logic, circuitry, interfaces, and/or code that may be configured to train the deep learning neural network model 110 for identifying and analyzing one or more cell colonies in a cultivated environment. Once trained, the deep learning neural network model 110 may determine a class for predicting the behavior of the cell colony. Additionally, the deep learning neural network model 110, once trained, may be deployable for applications (such as, a toxicological diagnostic application and a regenerative therapy application) which may take actions or generate real-time or near real-time inferences. The deep learning neural network model 110 may include a combination of a downsampling Convolutional Neural Network, an upsampling Convolutional Neural Network, and a Long Short Term Memory (LSTM) classifier. Other examples of implementation of the system 102 may include, but are not limited to, medical diagnostic equipment, a microscopic device, and a Consumer Electronic (CE) device.
[019] The image sensor 104 may include suitable logic, circuitry, interfaces, and/or code that may be configured to capture a plurality of images (also referred as a series of images) corresponding to a cell colony. The plurality of images may correspond to a sequence of image frames taken at subsequent instances of time associated with the cell colony. The plurality of images may be used, for example, to train the deep learning neural network model 110, or as an input to the trained deep learning neural network model 110 in a test environment (e.g., for benchmarking) or in an application-specific deployment, e.g., applications related to behavior analysis of cell colonies or a diagnosis of diseases.
[020] By way of an example, and not limitation, the image sensor 104 may have suitable optical instruments, such as lenses and actuators for the lenses, to capture the plurality of images. Examples of implementation of the image sensor 104 may include, but are not limited to, high-definition scanners and cameras (such as, microscopic cameras). Although in FIG. 1, the system 102 and the image sensor 104 are shown as a single entity, this disclosure is not so limited. Accordingly, in some embodiments, the entire functionality of the image sensor 104 may be implemented in an entity separate from the system 102, without a deviation from the scope of the disclosure.
[021] The external device 106 may include suitable logic, circuitry, interfaces, and/or code that may be configured to deploy the deep learning neural network model 110, as part of an application engine that may use the output of the deep learning neural network model 110 to generate real or near-real time inferences, take decisions, or output prediction results for identifying and analysing behavior of cell colonies and/or diagnosis of diseases. The deep learning neural network model 110 may be deployed on the external device 106 once the deep learning neural network model 110 is trained on the system 102. The functionalities of the external device 106 may be implemented in portable devices, such as a high-speed computing device, and/or non-portable devices, such as a server. Examples of the external device 106 may include, but are not limited to, medical diagnosis equipment, a smart phone, a mobile device, or a laptop.
[022] The communication network 108 may include a communication medium through which the system 102, the image sensor 104, and the external device 106 may communicate with each other. Examples of the communication network 108 may include, but are not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), or a Metropolitan Area Network (MAN). Various devices in the environment 100 may be configured to connect to the communication network 108, in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, a Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, light fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless access point (AP), device-to-device communication, cellular communication protocols, and Bluetooth (BT) communication protocols.
[023] The deep learning neural network model 110 may be referred to as a computational network or a system of artificial neurons, where each Neural Network (NN) layer of the deep learning neural network model 110 includes artificial neurons as nodes. Outputs of all the nodes in the deep learning neural network model 110 may be coupled to at least one node of preceding or succeeding NN layer(s) of the deep learning neural network model 110. Similarly, inputs of all the nodes in the deep learning neural network model 110 may be coupled to at least one node of preceding or succeeding NN layer(s) of the deep learning neural network model 110. Node(s) in a final layer of the deep learning neural network model 110 may receive inputs from at least one previous layer. A number of NN layers and a number of nodes in each NN layer may be determined from hyperparameters of the deep learning neural network model 110. Such hyperparameters may be set before or while training the deep learning neural network model 110 on a training dataset of images.
[024] Each node in the deep learning neural network model 110 may correspond to a mathematical function with a set of parameters, tunable while the deep learning neural network model 110 is trained. These parameters may include, for example, a weight parameter, a regularization parameter, and the like. Each node may use the mathematical function to compute an output based on one or more inputs from nodes in other layer(s) (e.g., previous layer(s)) of the deep learning neural network model 110.
[025] The deep learning neural network model 110 may include electronic data, such as, for example, a software program, code of the software program, libraries, applications, scripts, or other logic/instructions for execution by a processing device, such as the system 102 and the external device 106. Additionally, or alternatively, the deep learning neural network model 110 may be implemented using hardware, such as a processor, a microprocessor (e.g., to perform or control performance of one or more operations), a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). In some embodiments, the deep learning neural network model 110 may be implemented using a combination of both the hardware and the software program.
[026] In operation, a pre-trained deep learning neural network model 110 may be deployed on the system 102. Alternatively, in accordance with an embodiment, a process may be initialized to train the deep learning neural network model 110 on a behavior classification task from image series, for example, predicting a class from the captured images. In training of the deep learning neural network model 110, one or more of the set of parameters for each node of the deep learning neural network model 110 may be updated. In the training phase of the deep learning neural network model 110, ground truth masks may be required as an input for each image where the value of each pixel of each image represents whether the corresponding pixel of the image belongs to the entity whose behavior is under analysis.
[027] As part of the process, the system 102 may input a video with a plurality of images captured at subsequent instances of time to the deep learning neural network model 110. By way of example, the deep learning neural network model 110 may be trained to understand a complex structure of growing cell colonies from the captured images associated with classes for predicting the state of the cell colony, based on classification by the deep learning neural network model 110.
[028] The system 102 may be configured to localize the entity (cell colony/cell colonies) in each image frame of the plurality of frames. Further, the system 102 may be configured to analyze cell colony behavior. A defined time sequence is given as input to the trained deep learning neural network model 110, which predicts the entity status (healthy or unhealthy) as a probability score.
[029] In accordance with an embodiment, the system 102 may be configured to classify segmented entity (such as, a cell colony) into a set of behavioral categories. Once trained, the deep learning neural network model 110 may be also referred to as the trained deep learning neural network model 110, ready to be deployed on the system 102. After the training, the deep learning neural network model 110 may be used to generate image series classification results for identifying and analyzing behavior of entities, such as a cell colony and/or diagnosing diseases for the diagnostic images that are inputted to the deep learning neural network model 110. The system 102 may deploy the trained deep learning neural network model 110. Additionally, or alternatively, the system 102 may deploy the trained deep learning neural network model 110 on external devices, such as the external device 106.
[030] FIG. 2 is a block diagram of an exemplary system for identifying and analyzing an entity (such as, a cell colony) using a deep learning neural network model, in accordance with an embodiment of the disclosure. FIG. 2 is explained in conjunction with elements from FIG. 1.
[031] With reference to FIG. 2, there is shown a block diagram 200 of the system 102. The system 102 may include a processor 202, a memory 204, an input/output (I/O) device 206, a network interface 208, an application interface 210, and a persistent data storage 212. The system 102 may also include the deep learning neural network model 110, as part of, for example, a software application for image-based decisioning in identifying and analyzing an entity (such as, a cell colony). The processor 202 may be communicatively coupled to the memory 204, the I/O device 206, the network interface 208, the application interface 210, and the persistent data storage 212. In one or more embodiments, the system 102 may also include a provision/functionality to capture images/videos via one or more image sensors, for example, the image sensor 104.
[032] The processor 202 may include suitable logic, circuitry, interfaces, and/or code that may be configured to train the deep learning neural network model 110 for multi-class classification task on input diagnostic images. Once trained, the deep learning neural network model 110 may be either deployed on other electronic devices (e.g., the external device 106) or on the system 102 for real time prediction of class of images of a pre-captured images or video feed (time series video). The processor 202 may be implemented based on a number of processor technologies, which may be known to one ordinarily skilled in the art. Examples of implementations of the processor 202 may be a Graphics Processing Unit (GPU), a Reduced Instruction Set Computing (RISC) processor, an Application-Specific Integrated Circuit (ASIC) processor, a Complex Instruction Set Computing (CISC) processor, a microcontroller, Artificial Intelligence (AI) accelerator chips, a co-processor, a central processing unit (CPU), and/or a combination thereof.
[033] The memory 204 may include suitable logic, circuitry, and/or interfaces that may be configured to store instructions executable by the processor 202. Additionally, the memory 204 may be configured to store program code of the deep learning neural network model 110 and/or the software application that may incorporate the program code of the deep learning neural network model 110. Examples of implementation of the memory 204 may include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Hard Disk Drive (HDD), a Solid-State Drive (SSD), a CPU cache, and/or a Secure Digital (SD) card.
[034] The I/O device 206 may include suitable logic, circuitry, and/or interfaces that may be configured to act as an I/O interface between a user and the system 102. The user may include a clinical operator who operates the system 102. The I/O device 206 may include various input and output devices, which may be configured to communicate with different operational components of the system 102. Examples of the I/O device 206 may include, but are not limited to, a touch screen, a keyboard, a mouse, a joystick, a microphone, and a display screen.
[035] The network interface 208 may include suitable logic, circuitry, interfaces, and/or code that may be configured to facilitate different components of the system 102 to communicate with other devices, such as the external device 106, in the environment 100, via the communication network 108. The network interface 208 may be configured to implement known technologies to support wired or wireless communication. Components of the network interface 208 may include, but are not limited to an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, an identity module, and/or a local buffer.
[036] The network interface 208 may be configured to communicate via offline and online wireless communication with networks, such as the Internet, an Intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (WLAN), personal area network, and/or a metropolitan area network (MAN). The wireless communication may use any of a plurality of communication standards, protocols and technologies, such as Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), LTE, time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (such as IEEE 802.11, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or any other IEEE 802.11 protocol), voice over Internet Protocol (VoIP), Wi-MAX, Internet-of-Things (IoT) technology, Machine-Type-Communication (MTC) technology, a protocol for email, instant messaging, and/or Short Message Service (SMS).
[037] The application interface 210 may be configured as a medium for the user to interact with the system 102. The application interface 210 may be configured to have a dynamic interface that may change in accordance with preferences set by the user and configuration of the system 102. In some embodiments, the application interface 210 may correspond to a user interface of one or more applications installed on the system 102.
[038] The persistent data storage 212 may include suitable logic, circuitry, and/or interfaces that may be configured to store program instructions executable by the processor 202, operating systems, and/or application-specific information, such as logs and application-specific databases. The persistent data storage 212 may include a computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media may include any available media that may be accessed by a general-purpose or special-purpose computer, such as the processor 202.
[039] By way of example, and not limitation, such computer-readable storage media may include tangible or non-transitory computer-readable storage media including, but not limited to, Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices (e.g., Hard-Disk Drive (HDD)), flash memory devices (e.g., Solid State Drive (SSD), Secure Digital (SD) card, other solid state memory devices), or any other storage medium which may be used to carry or store particular program code in the form of computer-executable instructions or data structures and which may be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable storage media.
[040] Computer-executable instructions may include, for example, instructions and data configured to cause the processor 202 to perform a certain operation or a set of operations associated with the system 102. The functions or operations executed by the system 102, as described in FIG. 1, may be performed by the processor 202. In accordance with an embodiment, additionally, or alternatively, the operations of the processor 202 are performed by various modules.
[041] FIG. 3 illustrates exemplary operations for training and testing of a deep learning neural network model for a prediction of a class of an entity, in accordance with an embodiment of the disclosure. FIG. 3 is explained in conjunction with elements from FIG. 1 to FIG. 2. FIG. 3 illustrates a set of operations for training of a deep learning neural network model for prediction of a class of an entity, as described herein. The deep learning neural network model includes a combination of a downsampling Convolutional Neural Network, an upsampling Convolutional Neural Network, and a Long Short Term Memory (LSTM) classifier.
[042] The deep learning neural network model may correspond to the deep learning neural network model 110 of FIG. 1 and may be, for example, modelled on a deep neural network architecture with multiple neural network models.
[043] At 302, a data acquisition and segmentation operation may be performed. In the data acquisition operation, the processor 202 may acquire a training dataset which may include segmentation of a sequence of diagnostic images (also referred as training images) from a video feed. An example of the training dataset may be used to train deep neural network models using semantic segmentation for identifying an entity (such as, cell colonies) in the video feed.
[044] The training dataset may be used to benchmark relative performance and accuracy of the trained deep neural network models for segmenting cell colonies from the video feed.
[045] The training images may correspond to, but are not limited to, a set of cell colonies (which may include stem cell colonies). In accordance with an embodiment, the training images may also include, but are not limited to, microscopy images, and a medical modality for detecting functional abnormalities using temporally differing images for the entity. The training images may be received to generate a model (for example, the deep learning neural network model or AI model). Such a deep learning neural network model or AI model may be trained to localize entities (such as, the cell colonies) in the video frame and analyze the behavior of the entities.
[046] In accordance with an embodiment, the processor 202 may receive the sequence of images as signals from the image sensor 104. In accordance with an embodiment, a signal may correspond to a stem cell colony image. The signal may further correspond to a 1-dimensional signal or a 2-dimensional signal. The 1-dimensional signal may be converted into the 2-dimensional signal by using techniques, such as, but not limited to, Short-term Fourier Transform, Wavelet Transform, and Mel Spectrogram. The processor 202 may perform further processing on the converted 2-dimensional signal. The processing may include measures of the degree of growth of the stem cells, variations in lung vibrations captured with a chest image to screen for pneumonia, a 1-dimensional spot signal to measure variations in oxygen level for diagnosis, or the like.
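By way of a non-limiting illustration, the following minimal sketch shows one such 1-dimensional to 2-dimensional conversion using the Short-term Fourier Transform named above; the sampling rate, test signal, and window length are arbitrary example values assumed for this sketch only.

```python
# Illustrative sketch: converting a 1-D signal into a 2-D time-frequency
# representation via a Short-Time Fourier Transform. All numeric values
# (sampling rate, tone frequency, window length) are example assumptions.
import numpy as np
from scipy.signal import stft

fs = 1000                                   # example sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
signal_1d = np.sin(2 * np.pi * 50 * t)      # placeholder 1-D signal

# STFT yields a 2-D matrix that can be treated like an image downstream
freqs, times, Zxx = stft(signal_1d, fs=fs, nperseg=128)
signal_2d = np.abs(Zxx)                     # magnitude spectrogram
print(signal_2d.shape)                      # (frequency bins, time frames)
```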
[047] In accordance with an embodiment, the sequence of images may correspond to good optical quality images captured at 140 times magnification and an imaging resolution of 2 microns, which may enable minute patterns to be captured in the sequence of images.
[048] At 304, data pre-processing is performed. In accordance with an embodiment, the processor 202 may be configured to pre-process the sequence of images using cropping and other image operations (such as, image resizing and noise removal) to make them suitable for processing by the deep learning neural network model or AI model.
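By way of a non-limiting illustration, a minimal pre-processing sketch along the lines described above (cropping, resizing, noise removal), assuming OpenCV; the crop box, target size, and blur kernel are example assumptions rather than values from this disclosure.

```python
# Illustrative pre-processing sketch: crop, resize, and denoise one frame
# so it is suitable for the model. Crop box and sizes are example values.
import cv2

def preprocess(image, crop_box=(0, 0, 512, 512), size=(256, 256)):
    x, y, w, h = crop_box
    cropped = image[y:y + h, x:x + w]                # crop region of interest
    resized = cv2.resize(cropped, size)              # resize to the model input size
    denoised = cv2.GaussianBlur(resized, (3, 3), 0)  # simple noise removal
    return denoised
```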
[049] At 306, a model training operation may be performed. The pre-processed sequence of images may be received by the deep learning neural network model. The processor 202 may be configured to train the deep learning neural network model on the complexity of the entity (cell colonies) for segmenting the cell colony from the video feed. In accordance with an embodiment, training images of a large dataset (with thousands of images) may correspond to a predefined class and a binary class. The binary class may either correspond to a healthy class or an unhealthy class. In accordance with an embodiment, the deep learning neural network model may be trained with a labelled dataset, and the model hyperparameters may be fine-tuned for accuracy.
[050] Features relevant to localizing cell colonies may be extracted by the deep learning neural network model. Such features may correspond to, but are not limited to, contours (edges) of the training images, intensity variations, image orientation, and degree of tilt. Further, dynamic features, morphological features, and texture features may be extracted by the deep learning neural network model, based on the appearance of the entity (such as, the cell colony), for behavior analysis of the entity. Such features may use the area, centroid, and protrusions associated with the entity for understanding entity growth (such as, cell colony growth). By way of an example, for a healthy cell colony, the area of the detected cell colony increases linearly with time, which is not the case for an unhealthy cell colony. By way of another example, a change in the centroid of the cell colony is used to track the cell colony movement. When the cell colony is healthy, the movement or oscillation of the cell colony is lower because the cell colony includes active cells, and the healthy cell colony may be denser as compared to the unhealthy cell colony. Further, protrusions are dynamic cell processes that extend off cell colonies and take a variety of shapes. The protrusions may allow cell colonies to attach, spread, and migrate. The number of protrusions may increase on healthy cell colonies and may decrease on unhealthy cell colonies during incubation. The protruding-to-total area ratio and the bright-to-total area ratio corresponding to cell colonies may also be useful for extracting features. In accordance with an embodiment, the value of the protruding-to-total area ratio may be larger for the healthy cell colony as compared to the unhealthy cell colony. In accordance with an embodiment, the value of the bright-to-total area ratio may be smaller for the healthy cell colony as compared to the unhealthy cell colony.
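By way of a non-limiting illustration, a minimal sketch of computing a few of the features named above (area, centroid, and protruding-to-total area ratio) from a binary colony mask; approximating the protruding region by subtracting a morphologically opened "core" from the mask is an assumption made solely for this sketch, not a technique stated in this disclosure.

```python
# Illustrative feature extraction from a binary colony mask. Area and
# centroid follow the description above; the morphological-opening
# approximation of the protruding region is an illustrative assumption.
import numpy as np
from scipy import ndimage

def colony_features(mask):
    """mask: 2-D array, nonzero = colony pixel, 0 = background."""
    m = mask.astype(bool)
    area = int(m.sum())                                   # colony area in pixels
    ys, xs = np.nonzero(m)
    centroid = (float(xs.mean()), float(ys.mean()))       # (x, y) centre of mass
    core = ndimage.binary_opening(m, iterations=3)        # smoothed "body" of the colony
    protruding = m & ~core                                # thin extensions off the body
    ratio = protruding.sum() / max(area, 1)               # protruding-to-total area ratio
    return {"area": area, "centroid": centroid, "protruding_to_total": float(ratio)}
```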
[051] A number of NN layers and a number of nodes in each NN layer may be determined from hyperparameters of the deep learning model or the AI model. Such hyperparameters may be set before or while training the deep learning neural network model 110 on a training dataset of diagnostic images.
[052] Each node in the deep learning neural network model or the AI model may correspond to a mathematical function with a set of parameters, tunable while the deep learning neural network model or the AI model is trained. These parameters may include, for example, a weight parameter, a regularization parameter, and the like. Each node may use the mathematical function to compute an output based on one or more inputs from nodes in other layer(s) (e.g., previous layer(s)) of the deep learning model or the AI model. Once trained, the AI model may be suitable for understanding of complex diagnostic images to segment the entity (such as, the cell colonies).
[053] At 308, a model testing operation is performed. In accordance with an embodiment, the processor 202 may be configured to perform testing with the deep learning neural network model or the AI model which has already been trained. After the training, the trained deep learning neural network model may be tested with real time data to generate prediction results to segment cell colonies for diagnostic images given as input to the trained deep learning neural network model.
[054] FIG. 4 is a block diagram that illustrates an architecture of a deep learning model used by a system to predict a class of an entity from an image series, in accordance with an embodiment of the disclosure. With reference to FIG. 4, there is shown a block diagram 400 with a series of images 402, a downsampling Convolutional Neural Network (D-CNN) 404 using feature extraction, image encoding feature vectors 406, an upsampling Convolutional Neural Network (U-CNN) 408 for segmentation, a binary mask 410, a set of mask encoding vectors 412, concatenation 414, a Long Short Term Memory (LSTM)/transformer classifier 416, prediction of a class of the entity 418, a classification loss 420, and a segmentation loss 422.
[055] With reference to FIG. 4, in the training phase, the input consists of the series of images of the entity (such as, a cultivated stem cell colony) captured at subsequent time intervals. For the training phase, the behavior or state of the entity in the form of one or more categories may be judged by a human expert (such as, a clinical operator). Further, the training phase may require input of segmentation masks for each image, where the value of each pixel of each image represents whether the corresponding pixel of the image belongs to the entity whose behavior is under analysis. For each image from the series of images, the training phase may proceed in two steps.
[056] Each image of the series of images 402 (input a) may be given as an input to the D-CNN 404, whose output for an image is a low-dimensional vector, an “image-encoding vector” 406, resulting in a series of vectors, one for each image. Each vector of the series of vectors is given as input to the U-CNN 408, whose output for a vector is a high-dimensional matrix with dimensions similar to those of the original image, which is further thresholded to produce a binary mask 410, resulting in a series of masks, one for each image. Each pixel of an output mask represents whether the corresponding pixel of the image belongs to the entity whose behavior is under analysis (for example, “0” representing “entity” and “1” representing the background). The degree of match between each output mask and the corresponding segmentation mask (also referred as input c) is determined using a loss function (or segmentation loss 422) such as, but not limited to, Mean Square Error (MSE), and Cross Entropy. This measure is then back-propagated through both the D-CNN 404 and the U-CNN 408 using the Backpropagation algorithm, and the parameters of these models are updated in order to maximize the degree of match.
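By way of a non-limiting illustration, a minimal PyTorch sketch of the flow just described (image to image-encoding vector, upsampling back to a mask, segmentation loss, backpropagation); the layer layout, 64-dimensional encoding size, 256x256 frame size, and the binary cross-entropy variant of the segmentation loss are assumptions made for this sketch rather than parameters of this disclosure.

```python
# Illustrative D-CNN encoder / U-CNN decoder sketch. Only the overall flow
# (image -> encoding vector -> upsampled mask -> segmentation loss) follows
# the description above; all sizes and layer choices are example assumptions.
import torch
import torch.nn as nn

class DCNN(nn.Module):                        # downsampling encoder (hypothetical layout)
    def __init__(self, enc_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, enc_dim)      # low-dimensional image-encoding vector

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class UCNN(nn.Module):                        # upsampling decoder (hypothetical layout)
    def __init__(self, enc_dim=64):
        super().__init__()
        self.fc = nn.Linear(enc_dim, 32 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=4))

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 16, 16)
        return self.deconv(h)                 # logits with the same size as the input image

d_cnn, u_cnn = DCNN(), UCNN()
seg_loss = nn.BCEWithLogitsLoss()             # one possible segmentation loss (a cross-entropy)

image = torch.rand(1, 1, 256, 256)            # one example grayscale frame
gt_mask = torch.randint(0, 2, (1, 1, 256, 256)).float()   # ground-truth mask (input c)

encoding = d_cnn(image)                       # image-encoding vector
mask_logits = u_cnn(encoding)                 # high-dimensional matrix
loss = seg_loss(mask_logits, gt_mask)         # degree of mismatch with the ground truth
loss.backward()                               # back-propagated through both D-CNN and U-CNN
binary_mask = torch.sigmoid(mask_logits) > 0.5    # thresholded binary mask
```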
[057] Each output mask is then encoded into a low-dimensional vector, a “mask-encoding vector”. In accordance with an embodiment, the encoding of the output mask may be done using any parametrized or non-parametrized differentiable function. The x- and y- positions of each pixel in the output mask 410 whose value is indicative of “entity” are determined, and the average x- and y- values are computed. Further, the edge pixels of the entity are determined by scanning the output mask 410 from the top-left to the bottom-right, determining the minimum and maximum x- values for each row where the value is indicative of “entity”. Further, the Euclidean distance from the center to each of the set of edge pixels or a subset of edge pixels is computed. The distances and the x- and y- positions of the center are then normalized by dividing them by the length of the diagonal of the image. The mask-encoding vector 412 comprises the normalized x- and y- positions of the center and the normalized distances to the edge pixels.
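By way of a non-limiting illustration, a minimal sketch of this mask-encoding computation: the centre as the average x- and y- position of entity pixels, the per-row minimum and maximum x- values as edge pixels, and the centre position and Euclidean distances normalized by the image diagonal. Treating a pixel value of 1 as “entity” is a convention assumed only for this sketch.

```python
# Illustrative mask-encoding computation following the description above.
import numpy as np

def mask_encoding_vector(mask):
    """mask: 2-D array where nonzero marks an 'entity' pixel (assumed convention)."""
    h, w = mask.shape
    diagonal = np.hypot(h, w)                        # length of the image diagonal
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                    # average x- and y- positions (centre)

    edges = []
    for y in range(h):                               # scan rows from top to bottom
        row = np.nonzero(mask[y])[0]
        if row.size:                                 # row contains entity pixels
            edges.append((row.min(), y))             # minimum x- value (left edge)
            edges.append((row.max(), y))             # maximum x- value (right edge)

    # Euclidean distance from the centre to each edge pixel
    distances = [np.hypot(ex - cx, ey - cy) for ex, ey in edges]

    # centre position and distances normalized by the diagonal length
    return np.array([cx / diagonal, cy / diagonal] + [d / diagonal for d in distances])
```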
[058] The “image-encoding vector” 406 and the “mask-encoding vector” 412 are concatenated to produce an “entity-encoding vector”. The “entity-encoding vector” for each image of the image series is input to a Sequence Network model, such as the LSTM or Transformer 416, which classifies the series into one or more of a fixed set of categories or behaviors. The degree of match between the categorization and the corresponding actual categorization (input b) is determined using a loss function (also referred as the classification loss 420) such as, Mean Square Error (MSE), and Cross Entropy. This measure is then back-propagated through all three models (the D-CNN 404, the U-CNN 408, and the sequence model LSTM 416) using the Backpropagation algorithm, and the parameters of these models are updated in order to maximize the degree of match.
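By way of a non-limiting illustration, a minimal PyTorch sketch of this classification stage: per-frame image-encoding and mask-encoding vectors are concatenated into entity-encoding vectors, the sequence is fed to an LSTM, and a classification loss is back-propagated. The hidden size, the fixed mask-encoding length, the three example categories, and the use of the last time step's output are assumptions made for this sketch.

```python
# Illustrative classification-stage sketch. Sizes and category count are
# example assumptions; only the flow (concatenate -> LSTM -> loss) follows
# the description above.
import torch
import torch.nn as nn

ENC_DIM, MASK_DIM, NUM_CLASSES = 64, 34, 3    # example sizes; three categories could be
                                              # healthy, unhealthy, and inconclusive

class SequenceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(ENC_DIM + MASK_DIM, 128, batch_first=True)
        self.head = nn.Linear(128, NUM_CLASSES)

    def forward(self, entity_encodings):      # shape: (batch, time, ENC_DIM + MASK_DIM)
        out, _ = self.lstm(entity_encodings)
        return self.head(out[:, -1])          # class logits from the last time step

classifier = SequenceClassifier()
cls_loss = nn.CrossEntropyLoss()              # one possible classification loss

# one series of 10 frames: concatenate per-frame image- and mask-encoding vectors
image_enc = torch.rand(1, 10, ENC_DIM)
mask_enc = torch.rand(1, 10, MASK_DIM)
entity_enc = torch.cat([image_enc, mask_enc], dim=-1)   # entity-encoding vectors

label = torch.tensor([0])                     # actual categorization (input b)
loss = cls_loss(classifier(entity_enc), label)
loss.backward()                               # in the joint model this gradient also flows
                                              # back into the D-CNN and U-CNN parameters
```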
[059] FIG. 5 is a flowchart that illustrates an exemplary method for identifying and analyzing behavior of an entity (such as, a cell colony) using image series classification, in accordance with an embodiment of the disclosure. With reference to FIG. 5, there is shown a flowchart 500. The operations of the exemplary method may be executed by any computing system, for example, by the system 102 of FIG. 1. The operations of the flowchart 500 may start at 502 and proceed to 504.
[060] At 502, a plurality of images associated with the cell colony may be received. In accordance with an embodiment, the processor 202 may be configured to receive the plurality of images associated with the cell colony. The plurality of images are captured at subsequent instances of time.
[061] At 504, a set of image encoding vectors may be extracted from the plurality of images. In accordance with an embodiment, the processor 202 may be configured to extract a set of image encoding vectors from the plurality of images, using a downsampling convolutional neural network (D-CNN) model.
[062] At 506, a set of binary masks may be generated from the set of image encoding vectors. In accordance with an embodiment, the processor 202 may be configured to generate a set of binary masks from the set of image encoding vectors, using an upsampling convolutional neural network (U-CNN).
[063] At 508, a set of mask encoding vectors may be generated. In accordance with an embodiment, the processor 202 may be configured to generate a set of mask encoding vectors based on an encoding of each of the set of binary masks.
[064] At 510, a concatenated cell colony encoding vector may be generated. In accordance with an embodiment, the processor 202 may be configured to generate a concatenated cell colony encoding vector by concatenating each of the set of image encoding vectors with the corresponding mask encoding vector.
[065] At 512, the cell colony may be analyzed with determination of a class of the cell colony from a set of categories. In accordance with an embodiment, the processor 202 may be configured to analyze the cell colony with determination of a class of the cell colony from a set of categories, using a sequence based neural network model, based on the concatenated cell colony encoding vector.
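By way of a non-limiting illustration, a minimal sketch tying steps 502 to 512 together at inference time; it assumes the hypothetical components from the earlier sketches (d_cnn, u_cnn, classifier, and mask_encoding_vector) are already trained, and that the variable-length mask encodings are padded or truncated to a fixed length, both of which are assumptions made only for this sketch.

```python
# Illustrative end-to-end inference sketch over one image series, reusing the
# hypothetical components defined in the earlier sketches.
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify_colony(frames, mask_dim=34):
    """frames: tensor of shape (time, 1, H, W) holding one image series."""
    entity_encodings = []
    for frame in frames:                                         # 502: received image series
        enc = d_cnn(frame.unsqueeze(0)).squeeze(0)               # 504: image encoding vector
        logits = u_cnn(enc.unsqueeze(0))                         # 506: U-CNN output ...
        mask = (torch.sigmoid(logits) > 0.5).squeeze().numpy()   # ... thresholded binary mask
        mvec = torch.as_tensor(mask_encoding_vector(mask), dtype=torch.float32)   # 508
        mvec = F.pad(mvec, (0, max(0, mask_dim - mvec.numel())))[:mask_dim]       # fixed length
        entity_encodings.append(torch.cat([enc, mvec]))          # 510: entity encoding vector
    series = torch.stack(entity_encodings).unsqueeze(0)          # (1, time, features)
    return classifier(series).softmax(dim=-1)                    # 512: category probabilities
```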
[066] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[067] It will be appreciated that, for clarity purposes, the above description has described embodiments of the disclosure with reference to different functional units and processors. However, it will be apparent that any suitable distribution of functionality between different functional units, processors or domains may be used without detracting from the disclosure. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controller. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organization.
[068] Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present disclosure is limited only by the claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the disclosure.
[069] Furthermore, although individually listed, a plurality of means, elements or process steps may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather the feature may be equally applicable to other claim categories, as appropriate.
CLAIMS:

1. A method of analyzing behavior of a cell colony using image series classification, the method comprising:
receiving, by a system for analyzing behavior of the cell colony using image series classification, a plurality of images associated with the cell colony, wherein the plurality of images are captured at subsequent instances of time;
creating, by the system, an image encoding vector for each of the plurality of images, using a downsampling convolutional neural network (D-CNN) model;
generating, by the system, a binary mask for each of the image encoding vectors, using an upsampling convolutional neural network (U-CNN);
generating, by the system, a mask encoding vector based on an encoding of each of the binary masks;
generating, by the system, a cell colony encoding vector by concatenating each of the image encoding vectors with corresponding mask encoding vectors; and
analyzing, by the system, the behavior of the cell colony with determination of an image series category of the cell colony from a set of categories, using a sequence based neural network model, based on the cell colony encoding vector.

2. The method as claimed in claim 1, wherein generating the binary mask for each of the image encoding vectors comprises:
feeding the image encoding vectors for the plurality of images to the U-CNN;
for each image encoding vector, receiving a high-dimensional matrix with similar dimensions as the original image, from the U-CNN; and
thresholding the high-dimensional matrix to generate the binary mask, for each image encoding vector.

3. The method as claimed in claim 1, wherein the sequence based neural network model is trained to predict a behavior state of the cell colony based on determining of features associated with temporal differences amongst the plurality of images.

4. The method as claimed in claim 1, wherein each pixel of the binary mask corresponds to either the cell colony or a background.

5. The method as claimed in claim 4, further comprising determining a degree of match between each of the binary masks and a corresponding ground truth mask using a segmentation loss function.

6. The method as claimed in claim 5, further comprising back propagating the determined degree of match through the D-CNN model and the U-CNN model using a backpropagation algorithm for hyperparameter tuning of the D-CNN model and the U-CNN model.

7. The method as claimed in claim 1, further comprising determining a degree of match between categorization of class and a corresponding actual categorization using a classification loss function.

8. The method as claimed in claim 7, further comprising back propagating the determined degree of match through the D-CNN model, the U-CNN model, and the sequence based neural network model using a backpropagation algorithm for hyperparameter tuning of the D-CNN model, the U-CNN model, and the sequence based neural network model.

Documents

Application Documents

# Name Date
1 202141028960-STATEMENT OF UNDERTAKING (FORM 3) [28-06-2021(online)].pdf 2021-06-28
2 202141028960-PROVISIONAL SPECIFICATION [28-06-2021(online)].pdf 2021-06-28
3 202141028960-POWER OF AUTHORITY [28-06-2021(online)].pdf 2021-06-28
4 202141028960-FORM 1 [28-06-2021(online)].pdf 2021-06-28
5 202141028960-DRAWINGS [28-06-2021(online)].pdf 2021-06-28
6 202141028960-DECLARATION OF INVENTORSHIP (FORM 5) [28-06-2021(online)].pdf 2021-06-28
7 202141028960-Proof of Right [06-12-2021(online)].pdf 2021-12-06
8 202141028960-Correspondence_Amend the email addresses_14-12-2021.pdf 2021-12-14
9 202141028960-DRAWING [18-03-2022(online)].pdf 2022-03-18
10 202141028960-CORRESPONDENCE-OTHERS [18-03-2022(online)].pdf 2022-03-18
11 202141028960-COMPLETE SPECIFICATION [18-03-2022(online)].pdf 2022-03-18
12 202141028960-Proof of Right [22-03-2022(online)].pdf 2022-03-22
13 202141028960-FORM-26 [13-10-2022(online)].pdf 2022-10-13
14 202141028960-Form-18_Examination Request_13-10-2022.pdf 2022-10-13
15 202141028960-Correspondence_Form-18_13-10-2022.pdf 2022-10-13
16 202141028960-FER.pdf 2023-04-18
17 202141028960-OTHERS [17-10-2023(online)].pdf 2023-10-17
18 202141028960-FORM-26 [17-10-2023(online)].pdf 2023-10-17
19 202141028960-FER_SER_REPLY [17-10-2023(online)].pdf 2023-10-17
20 202141028960-DRAWING [17-10-2023(online)].pdf 2023-10-17
21 202141028960-COMPLETE SPECIFICATION [17-10-2023(online)].pdf 2023-10-17
22 202141028960-CLAIMS [17-10-2023(online)].pdf 2023-10-17
23 202141028960-ABSTRACT [17-10-2023(online)].pdf 2023-10-17
24 202141028960-RELEVANT DOCUMENTS [18-02-2025(online)].pdf 2025-02-18
25 202141028960-MARKED COPIES OF AMENDEMENTS [18-02-2025(online)].pdf 2025-02-18
26 202141028960-FORM 13 [18-02-2025(online)].pdf 2025-02-18
27 202141028960-AMENDED DOCUMENTS [18-02-2025(online)].pdf 2025-02-18

Search Strategy

1 searchAE_25-01-2024.pdf
2 202141028960searchE_17-04-2023.pdf