Abstract: A SYSTEM AND METHOD OF DETERMINING AND MONITORING ANIMAL HEALTH. A system and method for determining a health score of an animal in a farm is provided. A health monitoring device receives a set of 2D images corresponding to a set of views of a posterior of the animal. The images are pre-processed to generate a set of corresponding input images of a pre-defined size. A health score of the animal is determined by determining a score for each of the set of corresponding input images using a pre-trained feed forward CNN model. An overall health score of the animal is computed based on a weighted average of the health scores for the set of corresponding input images, and the animal is classified into an identified class based on the overall health score. The identified class and the overall health score of the animal are rendered to a user via a graphical user interface. (To be published with FIG. 1)
DESCRIPTION
Technical Field
[001] This disclosure relates generally to techniques for determining and monitoring the health of animals, and more particularly to a system and method for determining and monitoring the health of animals in a farm based on image processing and artificial intelligence.
BACKGROUND
[002] Animal farming is a major part of agriculture. However, farmers face many problems when practicing animal farming due to unstructured and undefined management processes. Dairy cattle in a farm may include cows and buffalos and are prone to different types of diseases. Monitoring the health of dairy cattle is essential to sustain milk output. Feed requirements and the care an animal needs also depend on its health condition. However, there is no proper means to monitor the health of dairy cattle, which makes it difficult to control the spread of communicable diseases from one animal to another due to lack of knowledge and resources. This may lead to the death of animals, due to which the farmer may lose revenue. Further, farmers are dependent on their experience-based knowledge and may not be able to afford regular veterinary assessment for each animal. Therefore, there is a requirement for a robust and cost-effective tool which may monitor and determine the health of farm animals on a regular basis.
SUMMARY
[003] This disclosure relates generally to techniques for determining and monitoring the health of animals, and more particularly to a system and method for determining and monitoring the health of animals in a farm based on image processing and artificial intelligence.
[004] In an aspect, a method for determining a health score of an animal in a farm may be provided. A health monitoring device may receive a set of 2D images corresponding to a set of views of a posterior of the animal. The images may be pre-processed to generate a set of corresponding input images of a pre-defined size, each comprising a region of interest within a view of the set of views. A health score of the animal is determined by determining a score for each of the set of corresponding input images using a feed forward CNN model, which may be a pre-trained VGG19 model comprising a set of sixteen convolution layers and a set of three fully connected layers, and which is based on domain knowledge with respect to the animal. The health score may be determined by extracting a set of classification feature vectors from the corresponding input image using the set of sixteen convolution layers. The set of classification feature vectors may correspond to a distinguishing physical feature of the animal. After extracting the set of classification feature vectors, the health score of the animal is determined using the set of three fully connected layers. An overall health score of the animal may be computed based on a weighted average of the health scores for the set of corresponding input images, and the animal may be classified into an identified class based on the overall health score. The identified class and the overall health score of the animal may be rendered to a user via a graphical user interface.
[005] In another aspect, a system for determining a health score of an animal in a farm may be provided. A health monitoring device may receive a set of 2D images corresponding to a set of views of a posterior of the animal. The images may be pre-processed to generate a set of corresponding input images of a pre-defined size, each comprising a region of interest within a view of the set of views. A health score of the animal is determined by determining a score for each of the set of corresponding input images using a feed forward CNN model, which may be a pre-trained VGG19 model comprising a set of sixteen convolution layers and a set of three fully connected layers, and which is based on domain knowledge with respect to the animal. The health score may be determined by extracting a set of classification feature vectors from the corresponding input image using the set of sixteen convolution layers. The set of classification feature vectors may correspond to a distinguishing physical feature of the animal. After extracting the set of classification feature vectors, the health score of the animal is determined using the set of three fully connected layers. An overall health score of the animal may be computed based on a weighted average of the health scores for the set of corresponding input images, and the animal may be classified into an identified class based on the overall health score. The identified class and the overall health score of the animal may be rendered to a user via a graphical user interface.
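The overall-score computation described in the aspects above can be sketched as follows. The per-view scores and weights shown are illustrative assumptions, not values prescribed by this disclosure:

```python
# Sketch of the weighted-average overall health score. The individual
# per-view scores and per-view weights below are hypothetical examples.

def overall_health_score(view_scores, view_weights):
    """Weighted average of the per-view health scores."""
    total_weight = sum(view_weights)
    return sum(s * w for s, w in zip(view_scores, view_weights)) / total_weight

scores = [3.0, 3.5, 2.5]   # hypothetical scores for three posterior views
weights = [0.5, 0.3, 0.2]  # hypothetical per-view weights (e.g. by image quality)
overall = overall_health_score(scores, weights)
```

The animal would then be classified into an identified class (e.g. healthy or unhealthy) by comparing this overall score against pre-defined ranges.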
BRIEF DESCRIPTION OF DRAWINGS
[006] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[007] FIG. 1 illustrates a network diagram of a monitoring system for monitoring and determining health score of farm animals, in accordance with an embodiment of the present disclosure.
[008] FIG. 2 illustrates an input image sample for a corresponding score value, in accordance with an embodiment of the present disclosure.
[009] FIG. 3 illustrates a functional diagram of the monitoring device of FIG. 1, in accordance with an embodiment of the present disclosure.
[010] FIG. 4a illustrates an exemplary input image of an animal, in accordance with an embodiment of the present disclosure.
[011] FIG. 4b illustrates determination of a test dataset based on VGG-19 framework, in accordance with an embodiment of the present disclosure.
[012] FIG. 5 illustrates the layers in VGG-19 model framework, in accordance with an embodiment of the present disclosure.
[013] FIG. 6 illustrates a flowchart of the methodology of determining health score of an animal, in accordance with an embodiment of the present disclosure.
[014] FIG. 7 illustrates an exemplary computer system to implement the proposed system, in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION OF DRAWINGS
[015] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims. Additional illustrative embodiments are listed.
[016] Presently, there is no efficient way of monitoring and determining the health of animals in a farm. In general, a caretaker may manually assess the health of the animals based on any physical symptoms shown by an animal. An animal's health may be monitored based on various physical symptoms depicting physiological parameters of the animal, for example, muscle and fat thickness, eye color, eye discharge, fur loss patches, physical deformation, nasal discharge, mouth discharge, etc. By regularly monitoring animal health, a farmer may be able to increase economic returns through improved reproductive performance and management of feeding costs. It may also enable the farmer to take timely disease control measures to prevent a disease from spreading.
[017] A health monitoring device is disclosed which may include one or more cameras installed in a farm for capturing images of the farm animals on a daily basis. The cameras may be installed at an entrance to the feeding station or at an exit towards the pasture through which all animals may pass regularly. The cameras may capture multiple views of the animals from different directions. The camera utilized may be a panoramic camera which may capture a panoramic image of an animal.
[018] The animals may be identified based on radio-frequency identifier (RFID) tags attached to each animal. The RFID tag may be attached on suitable body part such as ears. The RFID tag of each animal may store identification information of an animal such as animal variety, family information, lifecycle stage, identifier, age, etc.
[019] The camera may also capture the RFID tag of an animal, and the images captured may be linked to the corresponding animal whose RFID tag is detected. All data related to the images may be transmitted through a network for storage in a central database of the health monitoring device.
[020] The health monitoring device may pre-process the captured images to remove noise and discard images which are unclear or blurry. The pre-processed images may be processed further to determine visual features related to the anatomy of the animal, such as eye discharge, fur loss patches, muscle and fat thickness, skeletal shape, physical deformation, nasal discharge, mouth discharge, etc. The visual features may be determined based on training data that may be used to determine a probability score corresponding to a disease, nutritional status, lifecycle stage, etc. of the animal whose images are captured.
[021] In an embodiment, the images may be pre-processed by the health monitoring device using image processing algorithms. The pre-processed images may be inputted to the health monitoring device, which may then determine a region of interest for each animal based on animal type and other physiological factors. The health monitoring device may determine a health score or a body condition score (BCS) by using a machine learning (ML) model known in the art. The ML models may be trained using training data which may include images of each type of animal depicting various visual features corresponding to various BCS levels. The training data may be fed into the health monitoring device and analyzed by the image processing module to determine or learn common features across the training data corresponding to the animal for a health parameter. The health monitoring device may determine regions of interest corresponding to the visual and anatomical features of the animal. The health monitoring device may determine the correlated image features corresponding to various health conditions and categorize the image features, based on which the health score of the animal may be determined.
[022] The health monitoring device may then output a probability score of all health parameters depicting a likelihood for an animal being assessed to have a health condition, nutritional status, lifecycle stage, etc.
[023] In an exemplary embodiment, the BCS of an animal may be determined based on the detection and monitoring of certain anatomical features of the animal. The BCS is used in milking animals for monitoring milk production, fertility, and diseases. The BCS is determined based on the determination of fat reserves in farm cattle. Accordingly, based on determination of a BCS for an animal, feed can be optimized based on the nutritional parameters determined and other parameters such as weight, age, lifecycle stage, etc.
[024] In an exemplary embodiment, the anatomical features to be assessed by the monitoring device in detecting the region of interest may be from the backbone, loin, and rump areas. The regions around the pin bone, the hip bone, the top of the backbone, and the ends of the short ribs are devoid of muscle tissue and may be used as regions of interest to monitor the deposition of fat.
[025] In an embodiment, the health monitoring device may utilize the images captured by the camera of cattle and determine the BCS score based on input parameters depicting the health conditions of cattle associated to a BCS score level. The health monitoring device may determine the BCS for an animal based on training data corresponding to each score range and the analysis of region of interests of the images captured by the camera.
[026] The output from the health monitoring device may be used to determine condition scores for an animal as well as for a herd, depicting a lifecycle stage of each animal in the herd or individually, such as lactating, pregnant, etc. The individual scores may be processed using various analytical tools, and reports depicting the results may be generated. The reports may include plotted charts or data sheets, etc., including details pertaining to the type of report and information such as the ID of the animal or group of animals and information related to each animal such as lactation number, production level, or health problems. In an exemplary embodiment, the cattle may be categorized based on the lifecycle stage, such as lactating cattle, which may be determined based on an average of the determined BCS scores corresponding to the lactating cattle. The information may be used to provide custom feed for the lactating cattle, which may be customized as required.
[027] FIG. 1 is a block diagram of a monitoring system for monitoring and determining health of farm animals, in accordance with an embodiment of the present disclosure.
[028] As shown in FIG. 1, camera 114 may capture images of each of the animals 118 in a farm associated to the monitoring system 100. The images captured may be transmitted to a monitoring device 102 through a network 112. In an embodiment, the network 112 may be a wireless network, a wired network or a combination thereof. The network 112 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, Wi-Fi, LTE network, CDMA network, and the like. Further, the network 112 can either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network 112 can include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[029] In an embodiment, the camera 114 may capture multiple images of an animal, capturing various views of the animal. The views may capture images of the animal from various sides and angles. In an embodiment, the camera 114 may be located at an appropriate location in order for it to capture images of the dorsal and posterior portions of the animal, including, but not limited to, the spine, the hook bones, the pin bones, and the tail head.
[030] In an embodiment, the camera may also capture an identifier such as, but not limited to an RFID tag attached to the body of the animal. In an embodiment, the RFID tag may be provided on the ears of the animal. In an embodiment, the camera may be associated with an RFID scanner or reader in order to read the RFID information associated to the tag scanned.
[031] In an aspect, a monitoring device 102 may be implemented in any computing device and can be configured/operatively connected with a server (not shown). In an aspect, the monitoring device 102 may comprise one or more processor(s) 108. The one or more processor(s) 108 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 108 are configured to fetch and execute computer-readable instructions stored in a memory 110 of the monitoring device 102. The memory 110 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 110 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[032] The monitoring device 102 may also comprise one or more interface(s) 106 for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 106 may facilitate communication of monitoring device 102 with the user devices 120. The interface(s) 106 may also provide a communication pathway for one or more components of the monitoring device 102 such as, but not limited to, memory 110.
[033] The monitoring device 102 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the monitoring device 102. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the monitoring device 102 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the monitoring device 102 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource of the processor(s) 108, implement the monitoring device 102. In such examples, the monitoring device 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to monitoring device 102 and the processing resource. In other examples, the monitoring device 102 may be implemented by electronic circuitry.
[034] The memory 110 may store data that is either stored or generated as a result of functionalities implemented by any of the components of the monitoring device 102 or the monitoring device 102.
[035] Further, multiple user devices 120-1, 120-2 … 120-N (collectively referred to as user devices 120 and individually referred to as the user device 120, hereinafter) can communicate with the monitoring device 102 through the network 112. The user devices may be operated by users who can be customers of an entity or a farm, where the entity can be a farm manager. The user devices 120 can include a variety of computing systems, including but not limited to, a laptop computer, a desktop computer, a notebook, a workstation, a portable computer, a personal digital assistant, a handheld device, and a mobile device.
[036] Further, the network 112 can be a wireless network, a wired network, or a combination thereof, as described above with reference to FIG. 1.
[037] In an aspect, the monitoring device 102 can receive one or more items of multimedia content, including but not limited to images, a video file, or a video signal, from the camera 114 through the network 112 using HTTP. Further, the monitoring device 102 can be configured to process the multimedia content to generate output information corresponding to each animal in the farm and transmit the output information to the user device 120. The multimedia content may include streaming content pertaining to live videos that may be delivered to the user devices 120; alternatively, the video file can also be a file stored on a server or a database (not shown) of the monitoring system 100.
[038] In an embodiment, the user devices 120 can register themselves directly with the monitoring device 102 using any or a combination of a mobile number, date of birth, place of birth, first name and last name, a biometric or any other such unique identifier based input. On successful registration, the user of the user device 120 can be provided with a username and password, which can be used for accessing the monitoring device 102 for monitoring animal health information.
[039] In an embodiment, the monitoring device 102 may receive the images capturing different views of the animal. In an embodiment, the images may be extracted from the recorded video based on detection of a region of interest in a frame of the video. In an embodiment, the region of interest may include, but is not limited to, the spine, the hook bones, the pin bones, and the tail head. In an embodiment, the camera 114 may capture the lateral/dorsal portion of the animal showing the area between the pin bones and the hook bones, and the edge of the spinal processes. In an embodiment, the views of the animals may be associated to a unique identifier of the animal detected or read based on RFID scanner information.
[040] The animal health information may be calculated by calculating a body condition score (BCS) of each animal in a farm. The body condition score may be calculated by evaluating fatness or thinness according to a pre-defined scale, as shown in FIG. 2, where a score of one denotes a very thin buffalo, while five denotes an excessively fat buffalo. Research and field experiments have shown that the body condition score is indicative of the body condition of animals, which may be used to determine productivity, reproduction, health, and longevity of the animals. Thus, thinness or fatness can indicate underlying nutritional deficiencies, health problems, or improper herd management. Hence, BCS may be used to detect problems in animals within the farm, which may help in improving the health and productivity of the animals in a farm when done on a regular basis. Body condition scoring is better for monitoring body energy reserves than body weight, as an animal's body weight may change due to changes in body fat, frame size, gut size, udder size, pregnancy status, and intake of food and water. The monitoring device 102 may score each animal based on the fat reserve measure determined in each animal, assigning a BCS score to the animal.
[041] In an embodiment, the score allocated to an animal is in a range of 1-5 with increments of 0.5. Further, an animal with a score in the range of 2.5-3.5 may be classified as healthy. An animal with a score in the range of 1-2 or 4-5 may be classified as unhealthy.
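The classification rule of this embodiment can be sketched as a simple mapping from the allocated score to a class label; the function name is illustrative:

```python
# Sketch of the scoring rule in this embodiment: scores run from 1 to 5
# (in 0.5 increments), and a score of 2.5-3.5 is classified healthy.

def bcs_class(score):
    """Map a BCS value to the healthy/unhealthy class of this embodiment."""
    if not 1.0 <= score <= 5.0:
        raise ValueError("BCS must lie in the range 1-5")
    return "healthy" if 2.5 <= score <= 3.5 else "unhealthy"

label = bcs_class(3.0)  # a mid-range score falls in the healthy band
```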
[042] In an exemplary embodiment, a BCS may range from 1 to 5, where a BCS of 1 may depict a malnourished animal with no fat reserves and 5 may depict a severely over-conditioned cow. The ideal BCS ranges from 2.5 to 3.5 at the dry-off and calving lifecycle stages, and from 2.25 to 2.75 at the peak lactation stage.
[043] In an embodiment, the BCS may also depict the amount of fur cover in terms of hair and wool. This may be used to determine the type of housing required for the animal based on temperature and weather conditions. For example, an animal with a high BCS may have increased insulation due to fat reserves and fur and may be housed outside or in open areas comfortably. However, animals with a low BCS may not have enough insulation and may require covered housing.
[044] The BCS score may also be used by farmers to determine the health of an animal for trading of animals. The monitoring device 102 may be able to determine a BCS score and other health parameters of an animal, based on which the animal may be appropriately rated for being traded. Farmers may be able to buy healthy animals based on the results provided by the monitoring device 102.
[045] In an embodiment, the animals classified as healthy may exhibit fewer metabolic diseases. Hence, feed consumption for such animals may be kept normal. Further, a normal yield, such as fur, wool, milk, etc., is obtained from healthy animals. A healthy score may indicate that the needs of the animal are fully met. Further, the animal health score in the form of BCS also indicates the energy balance estimation in dairy cattle. It is sometimes required to ensure that animals with an undesirable or unhealthy classification are made healthy so that they reach the desired levels through additional intervention by providing the required care to the animals.
[046] In an embodiment, the monitoring device 102 can receive data packets of information related to one or more interactions between the camera 114 and the user device 120. The interactions can be related to setting the camera 114 to an appropriate field of view or enabling monitoring of a particular part of the farm through one or more cameras 114 installed in the farm. The interactions may enable switching the one or more cameras on or off and controlling the cameras to zoom, pan, or tilt in order to set the cameras 114 to the desired configuration. Further, the health information outputted by the monitoring device 102 may be viewed by the users on the user devices 120 as per their preference.
[047] In an embodiment, the monitoring device 102 may enable a ML model such as, but not limited to, a pre-trained VGG-19 model as described in detail in FIG. 3.
[048] FIG. 3 illustrates a functional diagram of the monitoring device 102 of FIG. 1, in accordance with an embodiment of the present disclosure. The monitoring device 102 may include a pre-processing module 302, a scoring module 304, an output module 306 and other module 308.
[049] The pre-processing module 302 may process the various images forming the set of images received from the camera 114. The images captured may include two-dimensional images which may capture a set of views of a posterior of the animal. In an embodiment, the set of views may also include dorsal portions of the animal.
[050] In an embodiment, the set of images may be pre-processed to capture a set of input images based on detection of a region of interest within the images capturing the set of views. In an embodiment, the region of interest may be based on detection of body parts such as, but not limited to, the spine, the hook bones, the pin bones, the thighs, the ribs, the rump, a short rib, a long rib, and/or the tail head. In an embodiment, the detection of the region of interest in the images is performed using any image processing technique. In an alternative embodiment, the detection of the region of interest in the images is enabled by a pre-trained machine learning model such as, but not limited to, ConvNets or CNNs. In an embodiment, the size of the image outputted as a result of the pre-processing by the pre-processing module 302 may be selected as, but not limited to, 224-by-224 pixels. In an embodiment, each of the set of images may be assigned a weight based on, but not limited to, the accuracy of the region of interest detected in each image, the image quality, etc.
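The resize step of the pre-processing described above can be sketched as follows. A region-of-interest crop is scaled to the pre-defined 224-by-224 input size using nearest-neighbour sampling; a real deployment would typically use a library such as OpenCV or Pillow, and the input crop shown is a hypothetical example:

```python
# Pure-Python nearest-neighbour resize of a single-channel crop to the
# pre-defined 224x224 input size. Illustrative sketch only.

def resize_nearest(image, out_h=224, out_w=224):
    """image: list of rows of pixel values; returns an out_h x out_w image."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

roi = [[r * 10 + c for c in range(10)] for r in range(8)]  # hypothetical 8x10 crop
inp = resize_nearest(roi)                                   # 224x224 input image
```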
[051] The scoring module 304 may utilize an ML model such as, but not limited to, a pre-trained VGG-19. The scoring module 304 may determine a health score of the animal detected in the images based on determining a health score for each of the set of images. The ML model may be pre-trained on training data which may include images of animals depicting various regions of interest corresponding to a BCS score. In an embodiment, the training database may comprise tens of thousands of images which may be scored based on expert opinions. In an embodiment, the ML model may be pre-trained to detect the regions of interest by training on millions of images from a database such as, but not limited to, ImageNet.
[052] In an embodiment, the weight assigned to each of the set of images may be determined based on the training data to perform the scoring with higher accuracy. In an embodiment, a feed forward convolution neural network (CNN) model, such as a pre-trained VGG-19 model comprising a set of 16 convolution layers and a set of three fully connected layers, may be used and is based on domain knowledge with respect to the animal. In an embodiment, a set of classification feature vectors is extracted from the input set of images using the set of 16 convolution layers. The set of classification feature vectors may be based on the region of interest determined through detection of distinguishing physical features of the animal. In an embodiment, the pre-trained VGG19 model may further comprise a set of pooling layers to perform down-sampling of the set of classification feature vectors. Further, the health score of the animal is determined from the set of classification feature vectors using the set of three fully connected layers. Further, the pre-trained VGG19 model may comprise a SoftMax layer to determine a probability index corresponding to each of the pre-defined health scores. Softmax is a mathematical function that converts a vector of numbers into a vector of probabilities, where the probability of each value is proportional to the relative scale of each value in the vector. Further, the feed forward CNN model may be optimized using the Adam optimizer. The Adam optimizer is an optimization algorithm for gradient descent; it requires less memory and is efficient. Intuitively, it is a combination of the 'gradient descent with momentum' algorithm and the 'RMSProp' algorithm.
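The SoftMax step described above can be sketched as follows. The two logits from the final fully connected layer are converted into one probability per class (e.g. healthy versus unhealthy); the logit values shown are illustrative:

```python
import math

# Pure-Python sketch of the SoftMax function: converts a vector of
# numbers (logits) into a vector of probabilities summing to 1.

def softmax(logits):
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 0.5])  # hypothetical logits for [healthy, unhealthy]
```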
[053] In an embodiment, the weights of the pre-trained network may be used as a head start to assign the weights of the neural network. In an embodiment, a 19-layer-deep convolutional neural network is implemented by the VGG-19 model. As a result, the network learns feature representations for a wide range of images. In an embodiment, the other module 308 may supplement the functionalities of the other modules or the monitoring device 102 as required.
[054] FIG. 4a illustrates an exemplary input image of an animal, in accordance with an embodiment of the present disclosure. Image 402 depicts one view of the animal which may be processed by the pre-processing module 302 to extract the corresponding input image based on the detection of one or more pre-defined region of interests. In an embodiment, the image 404 may be 224-by-224 in dimension. In an embodiment, image 406 depicts a region of interest detected in the image 404.
[055] FIG. 4b illustrates determination of a test dataset based on the VGG-19 framework, in accordance with an embodiment of the present disclosure. In an embodiment, a convolution neural network (CNN) may be used to extract features from the multi-dimensional array of the images. In an embodiment, down-sampling may be done using techniques such as, but not limited to, MaxPooling on the input, making the CNN shift-invariant. The convolution layer is the main building block of a CNN; the mathematical operation of convolution is applied in this layer. A kernel/filter of size NxN slides over the input image, and a dot product is computed between the kernel and the corresponding part of the image. As an output, features of the image such as the pin bone, the hip bone, the top of the backbone, the ends of the short ribs, etc. are obtained to train the model further.
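The sliding-kernel dot product described above can be sketched as follows for a single-channel image with a 'valid' (no padding) convolution; the stride of 1 and the example values are assumptions for illustration:

```python
# Pure-Python 2D convolution: an NxN kernel slides over the image and
# the dot product of the kernel and the overlapped patch is taken.

def conv2d(image, kernel):
    n = len(kernel)                       # N x N kernel
    out_h = len(image) - n + 1
    out_w = len(image[0]) - n + 1
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            acc = 0
            for i in range(n):
                for j in range(n):
                    acc += image[y + i][x + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

feature_map = conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]])
```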
[056] In the pooling layer, the objective is to reduce the size of the features coming from the convolution layer. This reduces the computational cost while preserving the salient properties of the feature vector; the pooling layer may also be implemented as a subsampling layer. In an embodiment, pooling may use techniques such as, but not limited to, MaxPooling, MinPooling, and AvgPooling.
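The down-sampling performed by the pooling layer can be sketched for the MaxPooling case, again as a generic illustration rather than the disclosed code:

```python
def max_pool(feature_map, size=2, stride=2):
    """Down-sample a 2D feature map by keeping the maximum of each window."""
    rows = (len(feature_map) - size) // stride + 1
    cols = (len(feature_map[0]) - size) // stride + 1
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = [feature_map[i * stride + a][j * stride + b]
                      for a in range(size) for b in range(size)]
            out[i][j] = max(window)
    return out

# A 4x4 map is reduced to 2x2; each output keeps the strongest activation.
fm = [[1, 3, 2, 1],
      [4, 6, 5, 0],
      [7, 2, 9, 8],
      [1, 0, 3, 4]]
pooled = max_pool(fm)
```

MinPooling and AvgPooling differ only in replacing `max(window)` with `min(window)` or the window average.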
[057] The CNN is primarily made of two types of layers: the convolution layer and the pooling layer. FIG. 5 illustrates the layers in the VGG-19 model framework, in accordance with an embodiment of the present disclosure. In an embodiment, the model comprises the following layers:
• Conv3x3 (64)
• Conv3x3 (64)
• MaxPool
• Conv3x3 (128)
• Conv3x3 (128)
• MaxPool
• Conv3x3 (256)
• Conv3x3 (256)
• Conv3x3 (256)
• Conv3x3 (256)
• MaxPool
• Conv3x3 (512)
• Conv3x3 (512)
• Conv3x3 (512)
• Conv3x3 (512)
• MaxPool
• Conv3x3 (512)
• Conv3x3 (512)
• Conv3x3 (512)
• Conv3x3 (512)
• Global Average Pooling
• Fully Connected (128)
• Dropout(0.3)
• Fully Connected (32)
• Fully Connected (2)
• SoftMax
[058] It is important to note that the Global Average Pooling, Fully Connected (128), Dropout (0.3), Fully Connected (32), Fully Connected (2) and SoftMax layers have been added as a customization of the VGG-19 model.
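As a sanity check on the stack listed above, the spatial size of a 224x224 input can be traced through it: 3x3 convolutions with stride 1 and padding 1 preserve height and width, while each 2x2 max-pool with stride 2 halves them. The following is an illustrative trace under those standard assumptions, not part of the disclosed model code:

```python
# The sixteen Conv3x3 layers and four MaxPool layers, in the listed order.
layers = (["conv"] * 2 + ["pool"] + ["conv"] * 2 + ["pool"]
          + ["conv"] * 4 + ["pool"] + ["conv"] * 4 + ["pool"]
          + ["conv"] * 4)

size = 224
for layer in layers:
    if layer == "pool":       # Conv3x3 with padding 1 keeps the size;
        size //= 2            # each 2x2/stride-2 max-pool halves it.

# After the four max-pools, the 512-channel feature maps are 14x14.
# Global Average Pooling then collapses each channel to one value,
# producing a 512-vector for the fully connected head.
```

This trace also shows why a fixed 224x224 input size (paragraph [061]) matters: it guarantees the feature maps reaching the custom head always have the same shape.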
[059] In an embodiment, the monitoring device 102 may train the model with a sample containing 1841 images, in which 1081 images are labelled for the healthy classification and 757 for the unhealthy classification. The network is trained using several folds of data, which are ensembled to make the model more generalized. In an embodiment, a K-fold cross-validation technique may be used with k selected as 5 or more to obtain an unbiased model. The k value may be pre-defined depending on how many folds the dataset is to be divided into. Every fold gets a chance to appear in the training set (k-1) times, which in turn ensures that every observation in the dataset appears in a training set, thus enabling the model to learn the underlying data distribution better. In another embodiment, a stratified K-fold may be used depending on the balance of the datasets.
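The K-fold splitting described above can be sketched in pure Python; this is a generic illustration of the technique (libraries such as scikit-learn provide equivalent utilities), not the disclosed training code:

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k contiguous folds. Each fold serves once
    as the validation set while the remaining k-1 folds form the training set."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = [indices[i * fold_size:(i + 1) * fold_size] for i in range(k)]
    folds[-1].extend(indices[k * fold_size:])   # leftovers go to the last fold
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

# With k = 5, every sample is validated exactly once and trained on 4 times.
splits = k_fold_indices(10, 5)
```

A stratified variant would additionally preserve the healthy/unhealthy class ratio within each fold, which matters when the classes are imbalanced as in the 1081/757 sample above.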
[060] In an embodiment, the monitoring device 102 may ensemble several models trained on different folds of data, and the final prediction is obtained by taking an average over all the models. Finally, the model is optimized to provide a scoring based on precision and recall determination by testing the model on a number of images. In an exemplary embodiment, the model, when tested on 376 images, provided a precision value of 0.78 and a recall value of 0.86. Accordingly, the monitoring device 102 may determine the health score of each image of the animal based on the trained VGG-19 model.
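The precision and recall metrics used above can be computed from paired true and predicted labels as follows. The labels in the example are toy values for illustration, not the reported test data:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives, how many were found
    return precision, recall

# Toy labels (1 = healthy, 0 = unhealthy); illustrative values only.
p, r = precision_recall([1, 1, 1, 0, 0, 1], [1, 1, 0, 1, 0, 1])
```

High recall is particularly relevant here, since failing to flag an unhealthy animal is costlier to the farmer than a false alarm.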
[061] In an embodiment, a fixed-size (224 * 224) RGB image was given as input to this network, which means that the input matrix was of shape (224, 224, 3). The kernels used are of (3 * 3) size with a stride of 1 pixel, which enables them to cover the entire image. Spatial padding may be used to preserve the spatial resolution of the image. Max-pooling was performed over 2 * 2 pixel windows with a stride of 2. This was followed by a Rectified Linear Unit (ReLU) to introduce non-linearity, to make the model classify better and to improve computational time. Additional layers were added: Global Average Pooling, Fully Connected (128), Dropout (0.3), Fully Connected (32), and Fully Connected (2). Further, the model was trained on 1501 images and tested on 376 images. In an embodiment, the Adam optimizer with a 0.001 learning rate is utilized. The model is trained on 5 different folds, and all these trained models are ensembled to provide generalized results. Therefore, the model provides a continuous goodness level of a BCS, which provides an exact determination of the health of the animal based on the processing of the images.
[062] A health score is assigned to each image of the animal. Further, based on a weight assigned to each view of the animal, a weighted average of the scores of all the images of the animal is computed to determine an overall health score of the animal.
[063] Further, the overall health score of the animal may depict whether the animal is determined to be healthy or unhealthy. If the overall health score is determined to be between 2.5 and 3.5, then the animal is classified as healthy; if the overall health score of the animal is determined to be between 1-2.5 or 3.5-5, then the animal is determined to be unhealthy. More particularly, the score may be normalized to provide a continuous probability score of each animal being healthy or unhealthy. For example, a score of 80% healthy may depict that the BCS score of an animal is above 3 and that the animal is mostly healthy based on its fat reserve. In an embodiment, the health score of each animal may be associated with a unique identifier associated with the animal.
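The weighted averaging and the healthy band of 2.5-3.5 described above can be sketched together. The per-view scores and weights below are hypothetical illustrative values, not data from the disclosure:

```python
def overall_health_score(view_scores, view_weights):
    """Weighted average of per-view health scores. Weights are per-view
    (e.g. pre-defined from training data); values here are illustrative."""
    total_weight = sum(view_weights)
    return sum(s * w for s, w in zip(view_scores, view_weights)) / total_weight

def classify(score):
    """Classify on the 1-5 BCS-style scale: scores in [2.5, 3.5] are healthy."""
    return "healthy" if 2.5 <= score <= 3.5 else "unhealthy"

# Three posterior views, with hypothetical weights favouring the rear view.
score = overall_health_score([3.2, 3.0, 2.8], [0.5, 0.3, 0.2])
label = classify(score)
```

Note that the band is two-sided: both emaciated (below 2.5) and over-conditioned (above 3.5) animals are flagged as unhealthy, consistent with body condition scoring practice.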
[064] In an embodiment, the monitoring device 102 may calculate the overall health score of each animal in the farm on a periodic basis, such as daily, weekly, and/or monthly. The monitoring device 102 may calculate an average of the health scores of animals of each type. Therefore, it may provide an average health of all animals in a farm. The average score may also depict the beneficial conditions provided by a farm for a particular type of animal for which the average health score is higher.
[065] In an embodiment, the monitoring device 102 may also provide a cost of each animal based on its health score. In an embodiment, the monitoring device 102 may provide a feed type or amount for each animal, or for each classification of animals as healthy or unhealthy.
[066] The output module 306 may include an output in the form of an interface or a display which may display a score card of the animal image. The score card of the animal may include a unique identifier of the animal, the overall health score of the animal, and a statistical display in the form of a graph depicting the variation of the health score over a pre-defined period. The score card may also display a cost of the animal.
[067] Although in various embodiments, the implementation of monitoring device 102 is explained with regard to the network 112, those skilled in the art would appreciate that, the monitoring device 102 can fully or partially be implemented in other computing devices operatively coupled with network 112 such as user devices 120 with minor modifications, without departing from the scope of the present disclosure.
[068] FIG. 6 illustrates a flowchart of the methodology of determining health score of an animal, in accordance with an embodiment of the present disclosure.
[069] The methodology may be performed by the monitoring device 102. At step 602, a set of two-dimensional (2D) images corresponding to a set of views of a posterior of the animal may be received. At step 604, the set of images may be pre-processed to generate a set of corresponding input images of a pre-defined size. Further, the set of corresponding input images may comprise a region of interest within a view of the set of views. At step 606, a health score of the animal for each of the set of the corresponding input images may be determined based on the corresponding input image using a feed forward convolution neural network (CNN) model. The feed forward CNN model may be a pre-trained VGG19 model comprising a set of sixteen convolution layers and a set of three fully connected layers and is based on domain knowledge with respect to the animal. At step 606a, the health score of the animal may be determined by extracting a set of classification feature vectors from the corresponding input image using the set of sixteen convolution layers. In an embodiment, each of the set of classification feature vectors corresponds to a distinguishing physical feature of the animal. At step 606b, the health score of the animal may be determined based on the set of classification feature vectors using the set of three fully connected layers. At step 608, an overall health score of the animal may be computed based on a weighted average of the health score for each of the set of the corresponding input images. At step 610, the animal may be classified into an identified class from a set of pre-defined classes based on the overall health score. At step 612, the identified class and the overall health score of the animal may be rendered to a user via a graphical user interface.
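The flow of steps 602-612 can be sketched end to end. The `preprocess` and `cnn_score` functions below are hypothetical stand-ins for the pre-processing module and the trained VGG-19 scorer; they are placeholders for illustration, not the disclosed implementation:

```python
def preprocess(image):
    """Stand-in for step 604: crop to a region of interest, resize to 224x224."""
    return image  # placeholder; real code would crop and resize

def cnn_score(image):
    """Stand-in for step 606: a trained model would return a 1-5 health score."""
    return image["score"]  # placeholder using a pre-attached toy score

def determine_health(views, weights):
    inputs = [preprocess(v) for v in views]                  # step 604
    scores = [cnn_score(img) for img in inputs]              # steps 606a/606b
    overall = (sum(s * w for s, w in zip(scores, weights))
               / sum(weights))                               # step 608
    label = "healthy" if 2.5 <= overall <= 3.5 else "unhealthy"  # step 610
    return overall, label                                    # rendered at step 612

# Two posterior views with toy per-view scores and hypothetical weights.
views = [{"score": 3.0}, {"score": 3.4}]
overall, label = determine_health(views, [0.6, 0.4])
```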
[070] FIG. 7 illustrates an exemplary computer system 700 to implement the proposed system in accordance with embodiments of the present disclosure.
[071] As shown in FIG. 7, computer system can include an external storage device 710, a bus 720, a main memory 730, a read only memory 740, a mass storage device 750, communication port 760, and a processor 770. A person skilled in the art will appreciate that computer system may include more than one processor and communication ports. Examples of processor 770 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor 770 may include various modules associated with embodiments of the present invention. Communication port 760 can be any of an RS-232 port for use with a modem based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 760 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which computer system connects.
[072] Memory 730 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read only memory 740 can be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chips for storing static information e.g., start-up or BIOS instructions for processor 770. Mass storage 750 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g. those available from Seagate (e.g., the Seagate Barracuda 7102 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g. an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[073] Bus 720 communicatively couples processor(s) 770 with the other memory, storage and communication blocks. Bus 720 can be, e.g. a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects processor 770 to the software system.
[074] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to bus 720 to support direct operator interaction with computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 760. External storage device 710 can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Video Disk - Read Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[075] Embodiments of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware, which may all generally be referred to herein as a "circuit," "module," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product comprising one or more computer readable media having computer readable program code embodied thereon.
[076] Thus, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer.
[077] As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document the terms "coupled to" and "coupled with" are also used euphemistically to mean "communicatively coupled with" over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.
[078] It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C …. and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[079] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
CLAIMS:
I/We Claim:
1. A method for determining a health score of an animal, comprising:
receiving, by a health monitoring device, a set of two-dimensional (2D) images corresponding to a set of views of a posterior of the animal;
pre-processing, by the health monitoring device, the set of images to generate a set of corresponding input images, wherein each of the set of corresponding input images is of a pre-defined size and comprises a region of interest within a view of the set of views;
for each of the set of the corresponding input images:
determining, by the health monitoring device, a health score of the animal based on the corresponding input image using a feed forward convolution neural network (CNN) model, wherein the feed forward CNN model is a pre-trained VGG19 model comprising a set of sixteen convolution layers and a set of three fully connected layers and is based on domain knowledge with respect to the animal, and wherein determining the health score comprises:
extracting a set of classification feature vectors from the corresponding input image using the set of sixteen convolution layers, wherein each of the set of classification feature vectors corresponds to a distinguishing physical feature of the animal; and
determining the health score of the animal based on the set of classification feature vectors using the set of three fully connected layers;
computing, by the health monitoring device, an overall health score of the animal based on a weighted average of the health score for each of the set of the corresponding input images;
classifying, by the health monitoring device, the animal into an identified class from a set of pre-defined classes based on the overall health score;
rendering, by the health monitoring device, the identified class and the overall health score of the animal to a user via a graphical user interface.
2. The method of claim 1, wherein the pre-trained VGG19 model further comprises a set of pooling layers to perform down-sampling of the set of classification feature vectors.
3. The method of claim 1, wherein the pre-trained VGG19 model further comprises a SoftMax layer to determine a probability index corresponding to each of the pre-defined health scores.
4. The method of claim 1, wherein the distinguishing physical feature of the animal comprises at least one of a hip, a tailhead, a backbone, a pin, and a thigh.
5. The method of claim 1, wherein the set of 2D images further correspond to a set of views of a side of the animal, and wherein the distinguishing physical feature of the animal comprises at least one of a hip, a backbone, a thigh, a rump, a short rib, and a long rib.
6. The method of claim 1, comprising optimizing the feed forward CNN model using an Adam optimizer.
7. The method of claim 1, wherein each weight in the weighted average is pre-defined for the view captured in the corresponding input image.
8. A system for determining a health score of an animal, comprising:
one or more processors; and
a memory communicatively coupled to the one or more processors, wherein the memory stores processor-executable instructions which, upon execution by the one or more processors, cause the one or more processors to:
receive, from one or more image sensors, a set of two-dimensional (2D) images corresponding to a set of views of a posterior of the animal;
pre-process the set of images to generate a set of corresponding input images, wherein each of the set of corresponding input images is of a pre-defined size and comprises a region of interest within a view of the set of views;
for each of the set of the corresponding input images:
determine a health score of the animal based on the corresponding input image using a feed forward convolution neural network (CNN) model, wherein the feed forward CNN model is a pre-trained VGG19 model comprising a set of sixteen convolution layers and a set of three fully connected layers and is based on domain knowledge with respect to the animal, and wherein the determination of the health score comprises:
extraction of a set of classification feature vectors from the corresponding input image using the set of sixteen convolution layers, wherein each of the set of classification feature vectors corresponds to a distinguishing physical feature of the animal; and
determination of the health score of the animal based on the set of classification feature vectors using the set of three fully connected layers;
compute an overall health score of the animal based on a weighted average of the health score for each of the set of the corresponding input images; and
classify the animal into an identified class from a set of pre-defined classes based on the overall health score; and
render the identified class and the overall health score of the animal to a user via a graphical user interface.
9. The system of claim 8, wherein the pre-trained VGG19 model further comprises:
a set of pooling layers to perform down-sampling of the set of classification feature vectors; and
a SoftMax layer to determine a probability index corresponding to each of the pre-defined health scores.
10. The system of claim 9, wherein the feed forward CNN model is optimized using an Adam optimizer.
| # | Name | Date |
|---|---|---|
| 1 | 202211040675-STATEMENT OF UNDERTAKING (FORM 3) [15-07-2022(online)].pdf | 2022-07-15 |
| 2 | 202211040675-PROVISIONAL SPECIFICATION [15-07-2022(online)].pdf | 2022-07-15 |
| 3 | 202211040675-PROOF OF RIGHT [15-07-2022(online)].pdf | 2022-07-15 |
| 4 | 202211040675-FORM FOR STARTUP [15-07-2022(online)].pdf | 2022-07-15 |
| 5 | 202211040675-FORM FOR SMALL ENTITY(FORM-28) [15-07-2022(online)].pdf | 2022-07-15 |
| 6 | 202211040675-FORM 1 [15-07-2022(online)].pdf | 2022-07-15 |
| 7 | 202211040675-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-07-2022(online)].pdf | 2022-07-15 |
| 8 | 202211040675-EVIDENCE FOR REGISTRATION UNDER SSI [15-07-2022(online)].pdf | 2022-07-15 |
| 9 | 202211040675-DRAWINGS [15-07-2022(online)].pdf | 2022-07-15 |
| 10 | 202211040675-DECLARATION OF INVENTORSHIP (FORM 5) [15-07-2022(online)].pdf | 2022-07-15 |
| 11 | 202211040675-DRAWING [14-09-2022(online)].pdf | 2022-09-14 |
| 12 | 202211040675-CORRESPONDENCE-OTHERS [14-09-2022(online)].pdf | 2022-09-14 |
| 13 | 202211040675-COMPLETE SPECIFICATION [14-09-2022(online)].pdf | 2022-09-14 |