
Methods And Systems For Quantifying And Analyzing Follicles Using A Deep Neural Network

Abstract: Methods and systems for performing assessment of anatomical objects using a neural network. A method disclosed herein includes training a neural network using transfer learning to perform an assessment of at least one anatomical object. Training the neural network using the transfer learning includes generating a plurality of task models to perform the assessment of the at least one anatomical object in medical media data of a plurality of dimensions and recursively refining the generated plurality of task models until convergence criteria in performing the assessment of the at least one anatomical object is achieved. FIG. 4


Patent Information

Application #:
Filing Date: 18 July 2019
Publication Number: 04/2021
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-08-08
Renewal Date:

Applicants

Samsung Medison
3366, Hanseo-ro, Yangdeokwon-ri, Nam-myeon, Hongcheon-gun, Gangwon-do

Inventors

1. Nitin Singhal
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037
2. Srinivas Rao Kudavelly
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037
3. Karan Kakwani
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037

Specification

CROSS REFERENCE TO RELATED APPLICATION
This application is based on and derives the benefit of Indian Provisional Application IN201941028954 as filed on 18th July, 2019, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
[001] The present disclosure relates to the field of medical media assessment and more particularly to performing an assessment of anatomical objects in medical media data using a neural network.
BACKGROUND
[002] In current clinical practice, medical media data is collected to perform an assessment of anatomical objects. The assessment of the anatomical objects involves quantifying and analyzing the anatomical objects, classifying the anatomical objects, and so on.
[003] In conventional approaches, for the assessment of the anatomical objects, the medical media data of the anatomical objects can be collected using one of several medical media modalities such as, but not limited to, X-ray, computed tomography (CT), magnetic resonance (MR), ultrasound, and so on. The collected medical media data can be analyzed to perform the assessment of the anatomical objects. However, in the conventional approaches, analyzing the medical media data may involve manual analysis, which can be a complicated and demanding task. The manual analysis depends on the expertise of an operator/lab technician and can therefore be highly subjective.
[004] In addition, the assessment of the anatomical objects using the manual analysis is not always robust, especially when the anatomical objects exhibit large variations in anatomy, shape, or appearance in the medical media data or when the medical media data includes low quality media of the anatomical objects with noise and other artifacts.

OBJECTS
[005] The principal object of the embodiments herein is to disclose methods and systems for performing assessment of at least one anatomical object in medical media data using a neural network.
[006] Another object of the embodiments herein is to disclose methods and systems for training the neural network using a transfer learning method and using the trained neural network to perform the assessment of the at least one anatomical object.
SUMMARY
[007] Accordingly, the embodiments herein disclose methods and systems for performing an anatomical object assessment using a neural network. A method disclosed herein includes training a neural network for the at least one anatomical object by generating a plurality of task models of the neural network using supervised medical media data of the at least one anatomical object and transfer learning. The method further includes processing the trained neural network to perform the assessment of the at least one anatomical object, on receiving medical media data including the at least one anatomical object.
[008] Accordingly, the embodiments herein provide an electronic device for performing an anatomical object assessment using a neural network, wherein the electronic device includes a memory and a controller. The controller is configured to generate a plurality of task models of the neural network using supervised medical media data of the at least one anatomical object and transfer learning. The controller is further configured to process the trained neural network to perform the assessment of the at least one anatomical object, on receiving medical media data including the at least one anatomical object.
[009] These and other aspects of the example embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating example embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the example embodiments herein without departing from the spirit thereof, and the example embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0010] Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0011] FIGs. 1a and 1b depict an anatomical object assessment system 100, according to embodiments as disclosed herein;
[0012] FIG. 2 is a block diagram depicting various components of an electronic device for performing assessment of anatomical objects, according to embodiments as disclosed herein;
[0013] FIG. 3 is a flow diagram depicting a method for performing the assessment of the anatomical object using the neural network, according to embodiments as disclosed herein;
[0014] FIG. 4 depicts an example use case scenario of training the neural network for quantifying and classifying an ovarian follicle, according to embodiments as disclosed herein; and
[0015] FIGs. 5a and 5b are example graphs depicting increased performance in quantifying and analyzing follicles using the neural network, which is trained using the transfer learning method, according to embodiments as disclosed herein.

DETAILED DESCRIPTION
[0016] The example embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The description herein is intended merely to facilitate an understanding of ways in which the example embodiments herein can be practiced and to further enable those of skill in the art to practice the example embodiments herein. Accordingly, this disclosure should not be construed as limiting the scope of the example embodiments herein.
[0017] Embodiments herein disclose methods and systems for performing an assessment of anatomical objects in medical media data using a neural network.
[0018] Embodiments herein disclose methods and systems for training the neural network using a “ping pong transfer learning method” and using the trained neural network to perform the assessment of the anatomical objects.
[0019] Referring now to the drawings, and more particularly to FIGS. 1a through 5b, where similar reference characters denote corresponding features consistently throughout the figures, there are shown example embodiments.
[0020] FIGs. 1a and 1b depict an anatomical object assessment system 100, according to embodiments as disclosed herein.
[0021] The anatomical object assessment system 100 referred herein can be configured to perform an assessment of anatomical objects in medical media data using a neural network. The assessment of the anatomical objects includes at least one of detecting the anatomical objects, quantifying the anatomical objects, and classifying/segmenting the anatomical objects into at least one class.
[0022] Examples of the anatomical objects referred herein can be, but are not limited to, an abdomen, a kidney, a liver, a head, an ovary, or any other part of the anatomy. Examples of the medical media data referred herein can be, but are not limited to, images of the anatomical objects, videos of the anatomical objects, and so on. The medical media data can include data of one or more dimensions. For example, the medical media data can include at least one of, but is not limited to, two-dimensional (2D) medical media data, three-dimensional (3D) medical media data, four-dimensional (4D) medical media data, and so on.
[0023] Examples of the neural network referred herein can be, but are not limited to, a deep neural network, a convolutional neural network, or the like. The neural network includes a plurality of nodes, which can be arranged in layers. Examples of the layers can be, but are not limited to, a convolutional layer, an activation layer, an average pool layer, a max pool layer, a concatenated layer, a dropout layer, a fully connected layer, a SoftMax layer, and so on. A topology of the layers of the neural network may vary based on the type of the neural network. In an example, the neural network may include an input layer, an output layer, and one or more hidden layers. The input layer receives an input (for example, the medical media data) and forwards the received input to the hidden layer. The hidden layer transforms the input received from the input layer into a representation, which can be used for generating the output in the output layer. The hidden layers extract useful/low-level features from the input, introduce non-linearity into the network, and reduce the feature dimension to make the features equivariant to scale and translation. The nodes of the layers can be fully connected via edges to the nodes in adjacent layers. The input received at the nodes of the input layer can be propagated to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients/weights respectively associated with each of the edges connecting the layers.
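By way of a non-limiting illustration only (not part of the original disclosure), a minimal topology using the layer types named above could be sketched as follows, assuming PyTorch; the single-channel 28x28 input, channel counts, and two output classes are purely hypothetical choices:

    import torch.nn as nn

    # Hypothetical topology using the layer types named above; sizes assume
    # a 1-channel 28x28 input and two output classes, for illustration only.
    tiny_net = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
        nn.ReLU(),                                   # activation layer
        nn.MaxPool2d(2),                             # max pool layer (28x28 -> 14x14)
        nn.Dropout(0.25),                            # dropout layer
        nn.Flatten(),                                # 16 x 14 x 14 = 3136 features
        nn.Linear(3136, 2),                          # fully connected layer
        nn.Softmax(dim=1),                           # SoftMax layer (class probabilities)
    )

In practice, the topology would vary with the type of the neural network and the dimension of the medical media data, as noted above.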
[0024] As depicted in FIG. 1a, the anatomical object assessment system 100 includes at least one external entity 102, at least one medical media modality device 104, and an electronic device 106.
[0025] The at least one external entity 102 referred herein can be configured to store supervised medical media data of the anatomical objects. Examples of the at least one external entity 102 can be, but are not limited to, an external server, an external database, and so on. In an embodiment, the supervised medical media data can be annotated medical media data. The annotated medical media data can include at least one of, but is not limited to, the medical media data with the detected anatomical objects, the medical media data with the quantified anatomical objects, the medical media data with the segmented/classified anatomical objects, and so on. In an embodiment, the supervised medical media data can be non-annotated medical media data. The supervised medical media data can include data of the one or more dimensions. For example, the supervised medical media data can include at least one of, but is not limited to, 2D supervised medical media data, 3D supervised medical media data, 4D supervised medical media data, and so on.
[0026] The at least one medical media modality device 104 can be configured to collect the medical media data of the anatomical objects of the human body. Examples of the medical media modality device 104 can be, but are not limited to, an X-ray device, a computed tomography (CT) scan device, a magnetic resonance imaging (MRI) device, an ultrasound device, and so on.
[0027] The electronic device 106 can be configured to perform the assessment of the anatomical objects in the medical media data (received from the medical media modality device 104) using the neural network. Examples of the electronic device 106 can be, but are not limited to, a mobile phone, a smart phone, a tablet, a handheld device, a phablet, a laptop, a computer, a wearable computing device, medical equipment, an Internet of Things (IoT) device, and so on. The electronic device 106 can also be a special-purpose computing system such as, but not limited to, a server, a cloud, a multiprocessor system, a microprocessor-based programmable consumer electronic device, a network computer, a minicomputer, a mainframe computer, a medical device, the medical media modality device 104, and so on.
[0028] In an embodiment, the electronic device 106 can be the at least one medical media modality device 104, as depicted in FIG. 1b, which can be configured to perform the assessment of the anatomical objects in the medical media data using the neural network.
[0029] The electronic device 106 can further connect with the at least one external entity 102 and the at least one medical media modality device 104 using a communication network 108. Examples of the communication network 108 can be, but are not limited to, the Internet, a wired network (a Local Area Network (LAN), Ethernet, and so on), a wireless network (a Wi-Fi network, a cellular network, a Wi-Fi Hotspot, Bluetooth, Zigbee, and so on), and so on.
[0030] For performing the assessment of the anatomical object(s), the electronic device 106 can train/configure the neural network for the anatomical object and utilize the trained neural network for performing the assessment of the anatomical object.
[0031] In order to train the neural network, the electronic device 106 can access the at least one external entity 102 and fetch the supervised medical media data of the anatomical object for which the neural network has to be trained. The fetched supervised medical media data can include the supervised medical media data of the one or more dimensions.
[0032] For training the neural network, the electronic device 106 generates one or more task models using the supervised medical media data of the anatomical object of the one or more dimensions. The one or more task models correspond to the anatomical object in the medical media data of the one or more dimensions. A task model includes the layers of the neural network that are to be used for performing the assessment on the medical media data of a specific dimension and the weights associated with each layer.
[0033] The electronic device 106 generates a first task model of the one or more task models by adjusting weights of the layers of the neural network defined for the anatomical object using the received supervised medical media data of the anatomical object of the first dimension. The electronic device 106 further generates each subsequent task model by adjusting the weights of the layers of the neural network defined for the anatomical object using the received supervised medical media data of the subsequent dimension and training data of the previously generated task model. The previously generated task model can be one of the first task model or a subsequent task model of a different dimension. The training data of the previously generated task model can be the adjusted weights of the neural network in the previously generated task model.
[0034] In an embodiment, on generating the one or more task models for the anatomical object in the medical media data of the one or more dimensions, the electronic device 106 recursively improves/refines the one or more task models until convergence criteria is satisfied. In an embodiment, the convergence criteria can be a pre-defined loss function. The electronic device 106 can improve each task model by adjusting the weights of the layers of the neural network based on the supervised medical media data of the corresponding dimension and the training data of the previously generated task model or the previously improved task model of different dimension.
[0035] In an embodiment, for adjusting the weights of the task model(s), the electronic device 106 identifies default/prior weights of the task model, wherein the task model can be the first/subsequent task model. On identifying the default/prior weights, the electronic device 106 receives the supervised medical media data. The electronic device 106 calculates a loss function based on the identified weights and the received supervised medical media data. The electronic device 106 then calculates gradients of an output error with respect to all the weights in the neural network in a back-propagation phase and uses gradient descent to update the weights of the neural network based on the calculated gradients, so that the output error is minimized. The electronic device 106 recursively uses the received supervised medical media data to adjust the weights of the task model, until the loss function is minimized.
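A minimal sketch of this weight-adjustment procedure, assuming PyTorch; the hypothetical train_one_model helper, the choice of loss function, learning rate, and convergence tolerance are all illustrative assumptions, and the task models are assumed to share one architecture so that prior weights can be loaded directly:

    import torch

    def train_one_model(model, loader, prior_weights=None, lr=1e-3,
                        max_epochs=50, tol=1e-4):
        # Hypothetical helper: adjust one task model's weights using its own
        # supervised medical media data and, optionally, the "training data"
        # (adjusted weights) of the previously generated task model.
        if prior_weights is not None:
            model.load_state_dict(prior_weights)               # default/prior weights
        optimizer = torch.optim.SGD(model.parameters(), lr=lr) # gradient descent
        criterion = torch.nn.CrossEntropyLoss()                # illustrative loss
        prev_loss = float("inf")
        for _ in range(max_epochs):
            epoch_loss = 0.0
            for media, labels in loader:                       # supervised data
                optimizer.zero_grad()
                loss = criterion(model(media), labels)         # loss from weights + data
                loss.backward()                                # back-propagation: gradients
                optimizer.step()                               # update the weights
                epoch_loss += loss.item()
            if prev_loss - epoch_loss < tol:                   # loss no longer decreasing
                break
            prev_loss = epoch_loss
        return epoch_loss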
[0036] Thus, in an embodiment herein, the neural network can be trained by:
- generating the first task model using the supervised medical media data of the anatomical object of the first dimension;
- generating the at least one subsequent task model using the supervised medical media data of the anatomical object of the at least one subsequent dimension and the training data of the previously generated task model; and
- recursively improving the first and at least one subsequent task models by reusing the supervised medical media data of the anatomical object of the associated dimension and the training data of the previously generated/improved task model. Such a method of training the neural network is referred to herein as a “ping pong transfer learning method” or a transfer learning method throughout this document.
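Purely as an illustrative sketch (not the claimed implementation), the alternating schedule described above could be expressed as follows, reusing the hypothetical train_one_model helper sketched earlier; the round limit and tolerance are assumptions standing in for the pre-defined convergence criteria:

    import copy

    def ping_pong_transfer_learning(models, loaders, max_rounds=10, tol=1e-4):
        # models/loaders: one task model and one supervised data loader per
        # dimension, ordered from the first dimension to the last.
        prior = None
        # Generate the first task model, then each subsequent task model,
        # reusing the adjusted weights of the previously generated model.
        for model, loader in zip(models, loaders):
            train_one_model(model, loader, prior_weights=prior)
            prior = copy.deepcopy(model.state_dict())
        # Recursively improve every task model, each pass reusing the most
        # recently generated/improved model's weights, until the total loss
        # stops improving (the convergence criteria).
        best = float("inf")
        for _ in range(max_rounds):
            total = 0.0
            for model, loader in zip(models, loaders):
                total += train_one_model(model, loader, prior_weights=prior)
                prior = copy.deepcopy(model.state_dict())
            if best - total < tol:
                break
            best = total
        return models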
[0037] Consider an example scenario, wherein the electronic device 106 fetches the supervised medical media data of an abdomen of a human body, wherein the supervised medical media data includes 2D, 3D and 4D supervised medical media data of the abdomen. In such a scenario, the electronic device 106 generates a first task model for performing the assessment of the abdomen in 2D abdomen media (which is an example of the medical media data). The electronic device 106 generates the first task model by adjusting the weights of the layers of the neural network defined for the abdomen using the received 2D supervised medical media data of the abdomen. On generating the first task model, the electronic device 106 generates a second task model for performing the assessment of the abdomen in 3D abdomen data. The electronic device 106 generates the second task model by adjusting the weights of the layers of the neural network defined for the abdomen using the training data of the generated first task model and the received 3D supervised medical media data. On generating the second task model, the electronic device 106 generates a third task model for performing the assessment of the abdomen in 4D abdomen data. The electronic device 106 generates the third task model by adjusting the weights of the layers of the neural network defined for the abdomen using the training data of the generated second task model and the received 4D supervised medical media data.
[0038] On generating the task models, the electronic device 106 recursively improves the first, second and third task models until the convergence criteria is satisfied. In an example herein, the electronic device 106 can improve the first task model by adjusting the weights of the layers of the neural network defined for the abdomen by reusing the training data of the generated third task model and the received 2D supervised medical media data. The electronic device 106 can improve the second task model by adjusting the weights of the layers of the neural network defined for the abdomen by reusing the training data of the improved first task model and the received 3D supervised medical media data. The electronic device 106 can improve the third task model by adjusting the weights of the layers of the neural network defined for the abdomen by reusing the training data of the improved second task model and the received 4D supervised medical media data. Thus, each task model of the neural network can be trained by reusing the training data (i.e., the adjusted weights) of the previous task model and the supervised medical media data of the corresponding dimension.
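Continuing the illustration, the 2D/3D/4D abdomen schedule above might be driven as in the following sketch; every model, loader, and shape here is a hypothetical stand-in (real task models would be dimension-specific networks trained on actual supervised abdomen data):

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def make_stand_in(num_features=64, samples=8):
        # Hypothetical stand-in for one dimension's task model and its
        # supervised abdomen data; identical shapes let weights transfer.
        model = nn.Linear(num_features, 2)
        data = TensorDataset(torch.randn(samples, num_features),
                             torch.randint(0, 2, (samples,)))
        return model, DataLoader(data, batch_size=4)

    pairs = [make_stand_in() for _ in ("2D", "3D", "4D")]  # one per dimension
    models = [model for model, _ in pairs]
    loaders = [loader for _, loader in pairs]
    ping_pong_transfer_learning(models, loaders)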
[0039] In an embodiment, on training the neural network for each of the anatomical objects, the electronic device 106 deploys the trained neural network onto a target device (for example: at least one computing device, the at least one medical media modality device 104, or the like). The target device can use the trained neural network to perform the assessment of the anatomical objects, on receiving the medical media data including the anatomical objects.
[0040] In an embodiment, on training the neural network for each of the anatomical objects, the electronic device 106 receives the medical media data of one of the anatomical objects from the at least one medical media modality device 104. The electronic device 106 performs the assessment on the received medical media data by processing the task model of the neural network generated for the received medical media data of the specific dimension.
[0041] FIGs. 1a and 1b show exemplary blocks of the anatomical object assessment system 100, but it is to be understood that other embodiments are not limited thereon. In other embodiments, the anatomical object assessment system 100 may include fewer or more blocks. Further, the labels or names of the blocks are used only for illustrative purposes and do not limit the scope of the embodiments herein. One or more blocks can be combined together to perform the same or a substantially similar function in the anatomical object assessment system 100.
[0042] FIG. 2 is a block diagram depicting various components of the electronic device 106 for performing the assessment of the anatomical objects, according to embodiments as disclosed herein. The electronic device 106 includes an interface 202, a display 204, a memory 206, and a controller 208.
[0043] The interface 202 can be configured to enable the electronic device 106 to communicate with the at least one external entity 102 using the communication network 108. The interface 202 can also include one or more physical ports that can be configured to enable the electronic device 106 to communicate with additional devices/modules. Examples of the physical ports can be, but are not limited to, general-purpose input/output (GPIO), Universal Serial Bus (USB), Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI), and so on. Examples of the additional devices/modules can be, but are not limited to, On-board diagnostics (OBD) ports, the at least one medical media modality device 104, and so on.
[0044] The display 204 can be configured to enable the user/technician to interact with the electronic device 106. The display 204 can be used to provide information about the assessment of the anatomical objects to the user in the form of text, visual alerts, and so on. The information can be at least one of, but is not limited to, the medical media data, the detected anatomical object, the quantified anatomical object, the classification of the anatomical object, and so on.
[0045] The memory 206 can store at least one of, but is not limited to, the at least one neural network for the at least one anatomical object, the task models of the neural network, the medical media data, the supervised medical media data, and so on. Examples of the memory 206 can be, but are not limited to, NAND, embedded Multi Media Card (eMMC), Secure Digital (SD) cards, micro SD cards, Compact Flash (CF) cards, Universal Serial Bus (USB), Serial Advanced Technology Attachment (SATA), solid-state drive (SSD), and so on. The memory 206 may also include one or more computer-readable storage media. The memory 206 may also include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory 206 may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the memory 206 is non-movable. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
[0046] In an embodiment, the neural network stored in the memory 206 can be trained using at least one transfer learning method to perform the assessment of the anatomical objects. A function associated with the transfer learning method may be performed through the non-volatile memory, the volatile memory, and the controller 208.
[0047] The controller 208 may include one or a plurality of processors. At this time, one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an Artificial Intelligence (AI)-dedicated processor such as a neural processing unit (NPU).
[0048] The one or a plurality of processors performs the assessment of the anatomical objects in accordance with a predefined operating rule of the neural network stored in the non-volatile memory and the volatile memory. The predefined operating rule of the neural network is provided through training the neural network using the transfer learning method.
[0049] Here, being provided through learning means that, by applying the transfer learning method to a plurality of learning data (for example: the supervised medical media data), a predefined operating rule or AI model of a desired characteristic is made. The assessment of the at least one anatomical object may be performed in the electronic device 106 itself in which the learning according to an embodiment is performed, and/or may be implemented through a separate server/system.
[0050] The neural network may comprise a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation on the output of the previous layer using the plurality of weights. Examples of the neural network include, but are not limited to, a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
[0051] The neural network is used for training a predetermined target device (for example, a robot, an IoT device, a wearable device, a user equipment, at least one medical media modality device, or any other computing device) using the supervised medical media data to cause, allow, or control the target device to perform the assessment of the anatomical objects. Examples of the transfer learning method include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
[0052] The controller 208 can be configured to perform the assessment of the anatomical objects using the neural network. The controller 208 trains/configures the neural network for each of the anatomical objects using the supervised medical media data of the anatomical object. In an embodiment, the controller 208 can deploy the trained neural network on the target device to perform the assessment of the anatomical objects in the medical media data. In an embodiment, the controller 208 can process the trained neural network on at least one of the processors to perform the assessment of the anatomical objects in the medical media data.
[0053] The controller 208 includes a training module 208a, and a processing module 208b for performing the assessment of the anatomical objects using the neural network.
[0054] The training module 208a can be configured to train/configure the neural network by generating one or more task models for the anatomical object and recursively improving the one or more task models until the convergence criteria is achieved. For training the neural network using the transfer learning method, the training module 208a connects with the at least one external entity 102 through the interface 202 and receives the supervised medical media data of the anatomical object of the one or more dimensions. The training module 208a generates the one or more task models for performing the assessment of the anatomical object in the one or more dimensions. The training module 208a generates the first task model of the one or more task models using the supervised medical media data of the anatomical object of the first dimension. The training module 208a generates the at least one subsequent task model using the supervised medical media data of the anatomical object of the at least one subsequent dimension and reusing the training data of the previously generated task model. The training module 208a further recursively improves the first and at least one subsequent task models using the supervised medical media data of the anatomical object of the associated dimension and reusing the training data of the previously generated/improved task model, until the convergence criteria is achieved.
[0055] Embodiments herein further explain the training of the neural network by considering that the received supervised medical media data of the anatomical object includes 2D and 3D supervised medical media data of the anatomical object as an example, but it may be obvious to a person skilled in the art that any other dimension of the supervised medical media data of the anatomical object may be considered.
[0056] On receiving the 2D and 3D supervised medical media data of the anatomical object, the training module 208a generates a first task model for performing the assessment of the anatomical object in 2D medical media data. The training module 208a generates the first task model by adjusting the weights of the neural network using the 2D supervised medical media data. On generating the first task model, the training module 208a generates a second task model for performing the assessment of the anatomical object in 3D medical media data. The training module 208a generates the second task model by adjusting the weights of the neural network using the received 3D supervised medical media data and reusing the training data of the previously generated task model (i.e., the first task model). The training data can be the adjusted weights of the neural network in the first task model.
[0057] On generating the first task model and the second task model, the training module 208a recursively improves/refines the training of the first task model and the second task model until the convergence criteria is achieved. The convergence criteria can be the pre-defined loss function, wherein achieving the convergence criteria refers to minimizing the loss function. In an embodiment, the loss function can be, but is not limited to, a dice accuracy or the like. In the case of the dice accuracy, achieving the convergence criteria implies that no further accuracy maximum can be reached. The training module 208a improves the first task model by adjusting the weights of the neural network using the received 2D supervised medical media data of the anatomical object and the training data of the previously generated/improved task model (i.e., the second task model). On improving the first task model, the training module 208a improves the second task model by adjusting the weights of the neural network using the received 3D supervised medical media data of the anatomical object and the training data of the previously generated/improved task model (i.e., the improved first task model).
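A minimal sketch of a dice-based loss of the kind referenced above, assuming PyTorch; the soft-Dice formulation and smoothing term are illustrative assumptions:

    import torch

    def dice_loss(pred, target, eps=1e-6):
        # Soft Dice loss: 1 - Dice coefficient of the predicted foreground
        # probabilities against the binary ground-truth mask. Minimizing it
        # drives the dice accuracy toward its maximum, i.e., convergence.
        pred, target = pred.reshape(-1), target.reshape(-1)
        intersection = (pred * target).sum()
        return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)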
[0058] In an embodiment, the training module 208a can adjust the weights of the task model(s) by:
- identifying the default/prior weights of the task model;
- receiving the supervised medical media data;
- calculating the loss function based on the identified weights and the received supervised medical media data;
- calculating the gradients of the output error with respect to all the weights in the neural network in the back-propagation phase and using gradient descent to update the weights of the neural network based on the calculated gradients, so that the output error is minimized; and
- recursively using the received supervised medical media data to adjust the weights of the task model, until the loss function is minimized.
[0059] The training module 208a may store the generated one or more task models of the neural network for performing the assessment of the anatomical object in the medical media data of the one or more dimensions in the memory 206. Alternatively, the training module 208a may provide the generated one or more task models of the neural network to the target device to perform the assessment of the anatomical objects in the medical media data of the one or more dimensions.
[0060] The processing module 208b can be configured to perform the assessment of the medical media data by processing the neural network. On receiving the medical media data including the anatomical object for the assessment, the processing module 208b determines the dimension of the received medical media data. The processing module 208b fetches the task model of the neural network stored in the memory 206 for the anatomical object in the medical media data of the determined dimension. The processing module 208b processes the layers of the task model on the at least one processor associated with the controller 208 to perform the assessment of the anatomical object in the received medical media data of the determined dimension.
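A minimal sketch of this dimension-based dispatch, assuming PyTorch tensors whose rank encodes the dimension and a hypothetical task_models mapping:

    import torch

    def assess(media, task_models):
        # Determine the dimension of the received medical media data from its
        # tensor rank (H x W -> "2d", D x H x W -> "3d", T x D x H x W -> "4d"),
        # fetch the matching stored task model, and process its layers.
        dim = {2: "2d", 3: "3d", 4: "4d"}[media.dim()]
        model = task_models[dim]
        model.eval()                          # inference mode
        with torch.no_grad():
            return model(media.unsqueeze(0))  # add a batch dimension and assess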
[0061] FIG. 2 shows exemplary blocks of the electronic device 106, but it is to be understood that other embodiments are not limited thereon. In other embodiments, the electronic device 106 may include fewer or more blocks. Further, the labels or names of the blocks are used only for illustrative purposes and do not limit the scope of the embodiments herein. One or more blocks can be combined together to perform the same or a substantially similar function in the electronic device 106.
[0062] FIG. 3 is a flow diagram 300 depicting a method for performing the assessment of the anatomical object using the neural network, according to embodiments as disclosed herein.
[0063] At step 302, the method includes training, by the electronic device 106, a neural network for classifying the at least one anatomical object using the transfer learning. The transfer learning includes generating the one or more task models for performing the assessment of the at least one anatomical object in the medical media data and refining the generated one or more task models until the convergence criteria is achieved. The one or more task models can be generated and refined using the supervised medical media data of the corresponding dimensions and the training data of the previously generated/refined task model.
[0064] At step 304, the method includes processing, by the electronic device 106, the trained neural network to perform the assessment of the at least one anatomical object, on receiving the medical media data including the at least one anatomical object. The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
[0065] FIG. 4 depicts an example use case scenario of training the neural network for quantifying and classifying an ovarian follicle, according to embodiments as disclosed herein.
[0066] Consider an example scenario, wherein the electronic device 106 receives the supervised medical media data of the ovarian follicle (an example of an anatomical object) to train the neural network for quantifying and classifying the ovarian follicle. The supervised medical media data includes supervised/trained 2D ovarian data and supervised/trained 3D ovarian data. In such a scenario, the electronic device 106 generates a first task model for quantifying and classifying the ovarian follicle in 2D ovarian media. The electronic device 106 generates the first task model by adjusting the weights of the neural network stored for the ovarian follicle using the received supervised 2D ovarian data. On generating the first task model, the electronic device 106 generates a second task model for quantifying and classifying the ovarian follicle in 3D ovarian data. The electronic device 106 generates the second task model by adjusting the weights of the neural network stored for the ovarian follicle using the received supervised 3D ovarian data and reusing the training data of the first task model. The training data can be the adjusted weights of the neural network in the first task model.
[0067] On generating the first and second task models, the electronic device 106 recursively improves the training of the first and second task models until the convergence criteria is achieved. In an example herein, on generating the second task model, the electronic device 106 improves the first task model by adjusting the weights of the neural network stored for the ovarian follicle using the received supervised 2D ovarian data and reusing the training data of the second task model. On improving the first task model, the electronic device 106 improves the second task model by adjusting the weights of the neural network stored for the ovarian follicle using the received supervised 3D ovarian data and reusing the training data of the improved first task model.
[0068] The electronic device 106 can further use the first task model of the neural network to quantify and classify the ovarian follicle into at least one class, such as foreground (follicle) and background (non-follicle), on receiving the 2D ovarian data from the at least one medical media modality device 104 for the assessment. The electronic device 106 can further use the second task model of the neural network to classify the ovarian follicle as the follicle or the non-follicle, on receiving the 3D ovarian data for the assessment.
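Purely as an illustration of this quantify-and-classify step, a sketch assuming PyTorch and SciPy; the sigmoid output, fixed threshold, and connected-component counting are assumptions, not the disclosed method:

    import torch
    from scipy import ndimage

    def quantify_follicles(ovarian_media, task_models, threshold=0.5):
        # Classify each pixel/voxel as foreground (follicle) or background
        # (non-follicle) with the dimension-matched task model, then quantify
        # the follicles by counting connected foreground regions.
        dim = {2: "2d", 3: "3d"}[ovarian_media.dim()]
        model = task_models[dim]
        model.eval()
        with torch.no_grad():
            probs = torch.sigmoid(model(ovarian_media.unsqueeze(0))).squeeze(0)
        mask = (probs > threshold).cpu().numpy()   # follicle vs non-follicle
        _, follicle_count = ndimage.label(mask)    # one label per follicle
        return mask, follicle_count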
[0069] FIGs. 5a and 5b are example graphs depicting increased performance in quantifying and analyzing follicles using the neural network, which is trained using the transfer learning method, according to embodiments as disclosed herein.
[0070] Embodiments herein train the neural network using the transfer learning method for quantifying and analyzing the ovarian follicles, which results in the increased performance. The increased performance is depicted in the example graphs of FIGs. 5a and 5b, which plot the iterations required to train the task model of the neural network on the X-axis against the dice parameter on the Y-axis.
[0071] Embodiments herein train a neural network with supervised medical media data in accordance with a ping pong transfer learning method for performing an assessment of anatomical objects in medical media data. The neural network can also be trained with a set of feature values in accordance with the ping pong transfer learning method for other tasks, such as detecting objects, detecting speech utterances, and so on.
[0072] Training the neural network using the ping pong transfer learning method results in:
- a requirement for less supervised medical media data to train the neural network;
- a small memory footprint, low latency, and low computational cost; and
- increased accuracy and computational efficiency.
[0073] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements. The elements shown in FIG. 1 and FIG. 2 can be at least one of a hardware device, or a combination of hardware device and software module.
[0074] The embodiments disclosed herein describe methods and systems for performing assessment of anatomical objects using a neural network. Therefore, it is understood that the scope of the protection is extended to such a program and in addition to a computer readable means, having a message therein, such computer readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment through or together with a software program written in, e.g., Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules being executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may also include means which could be, e.g., hardware means like an ASIC, or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[0075] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

STATEMENT OF CLAIMS
I/We claim:
1. A method for performing assessment of at least one anatomical object, the method comprising:
training, by an electronic device (106), a neural network for the at least one anatomical object by generating a plurality of task models of the neural network using supervised medical media data of the at least one anatomical object and transfer learning; and
processing, by the electronic device (106), the trained neural network to perform the assessment of the at least one anatomical object, on receiving medical media data including the at least one anatomical object.

2. The method of claim 1, wherein training, by the electronic device (106), the neural network includes:
receiving the supervised medical media data of the at least one anatomical object, wherein the supervised medical media data includes supervised medical media data of a plurality of dimensions;
generating a first task model of the plurality of task models by adjusting weights of layers of the neural network using the received supervised medical media data of a first dimension of the plurality of dimensions; and
generating at least one subsequent task model of the plurality of task models by adjusting the weights of the layers of the neural network using the supervised medical media data of at least one subsequent dimension of the plurality of dimensions and training data of a previously generated task model, wherein the previously generated task model includes one of the first task model and a subsequent task model of a different dimension.
3. The method of claim 2, wherein
the plurality of task models is generated to perform the assessment of the at least one anatomical object in the medical media data of the plurality of dimensions; and
the training data of the previously generated task model includes the adjusted weights of the layers of the neural network in the previously generated task model.

4. The method of claim 2, further comprising: recursively refining the first task model and the at least one subsequent task model until convergence criteria is achieved, wherein the convergence criteria include a pre-defined loss function.

5. The method of claim 4, wherein refining the first task model and the at least one subsequent task model includes:
adjusting the weights of the layers of the neural network of the first and the at least one subsequent task model by reusing the supervised medical media data of the first and at least one subsequent dimension and the training data of the previously generated task model or the previously refined task model.

6. An electronic device (106) comprising:
a memory (206); and
a controller (208) coupled to the memory (206) configured to:
train a neural network for the at least one anatomical object by generating a plurality of task models of the neural network using supervised medical media data of the at least one anatomical object and transfer learning; and
process the trained neural network to perform the assessment of the at least one anatomical object, on receiving medical media data including the at least one anatomical object.

7. The electronic device (106) of claim 6, wherein the controller (208) is further configured to:
receive the supervised medical media data of the at least one anatomical object, wherein the supervised medical media data includes supervised medical media data of a plurality of dimensions;
generate a first task model of the plurality of task models by adjusting weights of layers of the neural network using the received supervised medical media data of a first dimension of the plurality of dimensions; and
generate at least one subsequent task model of the neural network by adjusting the weights of the layers of the neural network using the supervised medical media data of at least one subsequent dimension of the plurality of dimensions and training data of a previously generated task model, wherein the previously generated task model includes one of the first task model and a subsequent task model of a different dimension.

8. The electronic device (106) of claim 7, wherein
the plurality of task models is generated to perform the assessment of the at least one anatomical object in the medical media data of the plurality of dimensions; and
the training data of the previously generated task model includes the adjusted weights of the layers of the neural network in the previously generated task model.

9. The electronic device (106) of claim 7, wherein the controller (208) is further configured to: recursively refine the first task model and the at least one subsequent task model until convergence criteria is achieved, wherein the convergence criteria include a pre-defined loss function.

10. The electronic device (106) of claim 9, wherein the controller (208) is further configured to: adjust the weights of the layers of the neural network of the first and the at least one subsequent task model by reusing the supervised medical media data of the first and at least one subsequent dimension and the training data of the previously generated task model or the previously refined task model.

Documents

Application Documents

# Name Date
1 201941028954-STATEMENT OF UNDERTAKING (FORM 3) [18-07-2019(online)].pdf 2019-07-18
2 201941028954-PROVISIONAL SPECIFICATION [18-07-2019(online)].pdf 2019-07-18
3 201941028954-POWER OF AUTHORITY [18-07-2019(online)].pdf 2019-07-18
4 201941028954-FORM 1 [18-07-2019(online)].pdf 2019-07-18
5 201941028954-DRAWINGS [18-07-2019(online)].pdf 2019-07-18
6 201941028954-DECLARATION OF INVENTORSHIP (FORM 5) [18-07-2019(online)].pdf 2019-07-18
7 201941028954-Proof of Right (MANDATORY) [06-08-2019(online)].pdf 2019-08-06
8 Correspondence by Agent_Form-1_08-08-2019.pdf 2019-08-08
9 201941028954-FORM 18 [18-07-2020(online)].pdf 2020-07-18
10 201941028954-DRAWING [18-07-2020(online)].pdf 2020-07-18
11 201941028954-CORRESPONDENCE-OTHERS [18-07-2020(online)].pdf 2020-07-18
12 201941028954-COMPLETE SPECIFICATION [18-07-2020(online)].pdf 2020-07-18
13 201941028954-FER.pdf 2021-10-17
14 201941028954-OTHERS [02-03-2022(online)].pdf 2022-03-02
15 201941028954-FER_SER_REPLY [02-03-2022(online)].pdf 2022-03-02
16 201941028954-CORRESPONDENCE [02-03-2022(online)].pdf 2022-03-02
17 201941028954-CLAIMS [02-03-2022(online)].pdf 2022-03-02
18 201941028954-ABSTRACT [02-03-2022(online)].pdf 2022-03-02
19 201941028954-US(14)-HearingNotice-(HearingDate-07-06-2024).pdf 2024-04-23
20 201941028954-FORM-26 [30-05-2024(online)].pdf 2024-05-30
21 201941028954-Correspondence to notify the Controller [30-05-2024(online)].pdf 2024-05-30
22 201941028954-Annexure [30-05-2024(online)].pdf 2024-05-30
23 201941028954-US(14)-ExtendedHearingNotice-(HearingDate-25-06-2024).pdf 2024-06-05
24 201941028954-FORM-26 [24-06-2024(online)].pdf 2024-06-24
25 201941028954-Correspondence to notify the Controller [24-06-2024(online)].pdf 2024-06-24
26 201941028954-Written submissions and relevant documents [10-07-2024(online)].pdf 2024-07-10
27 201941028954-Annexure [10-07-2024(online)].pdf 2024-07-10
28 201941028954-PatentCertificate08-08-2024.pdf 2024-08-08
29 201941028954-IntimationOfGrant08-08-2024.pdf 2024-08-08

Search Strategy

1 5(2)E_05-08-2021.pdf

ERegister / Renewals

3rd: 07 Nov 2024

From 18/07/2021 - To 18/07/2022

4th: 07 Nov 2024

From 18/07/2022 - To 18/07/2023

5th: 07 Nov 2024

From 18/07/2023 - To 18/07/2024

6th: 07 Nov 2024

From 18/07/2024 - To 18/07/2025

7th: 16 Jul 2025

From 18/07/2025 - To 18/07/2026