
Machine Learning Based System And Method For Analyzing Livestock Animals

Abstract: An ML-based system (100) for analyzing livestock animals (102) is disclosed. The ML-based system (100) is configured to: (a) receive the images associated with the livestock animals (102), (b) detect face regions (704) of the livestock animals (102), based on the images of livestock animals (102), by a face detection model, (c) detect parts (706) in face regions (704) of livestock animals (102) for reorienting and normalizing second images (708) corresponding to face regions (704) of the livestock animals (102), based on the detected face regions (704), by a keypoint detection model, (d) generate a face identity for the livestock animals (102) by correlating the second images (708) corresponding to the face regions (704), with one or more vectors (710), by an embedding generation model, and (e) analyze the livestock animals (102), based on the face identity generated for the livestock animals (102), by a machine learning (ML) model. FIG. 1


Patent Information

Application #
Filing Date
10 July 2023
Publication Number
05/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

FLOK LIVESTOCK TECHNOLOGIES PRIVATE LIMITED
NO 28, MCHS COLONY, 5TH ‘B’ CROSS, 16TH MAIN B.T.M, 2ND STAGE, BANGALORE, KARNATAKA – 560076, INDIA

Inventors

1. Manjunath Dodda Venkatappa
28/1, MCHS Colony, 5th B Cross, 16th Main, B.T.M 2nd Stage, Bengaluru, Karnataka - 560076, India
2. Sucheendra Kumar P
2522, E Block, 9th A Cross, 13th Main, Sahakaranagar, Bengaluru, Karnataka 560092, India
3. Aryaman Shrey
No 28, MCHS Colony, 5th ‘B’ Cross, 16th Main B.T.M, 2nd Stage, Bangalore, Karnataka - 560076, India

Specification

Description: FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to an artificial intelligence (AI)/machine learning (ML) based system and more particularly relates to a machine learning based system and method for analyzing one or more livestock animals.
BACKGROUND
[0002] Animal identification has been practiced for millennia; the Code of Hammurabi, some 3,800 years ago, already records the marking of animals. At that time, identification chiefly served to deter theft of highly valuable animals such as goats, horses, and the like. In recent years, beyond confirming ownership as a defence against theft, identification of animals has become essential for controlling livestock production, monitoring the occurrence of animal disease on farms, and managing endangered and protected animals. Identification of animals is also used for quarantining livestock. With a unified global economy, consumption of meat is increasing worldwide, not merely within single countries, and the use of animals for purposes beyond food, such as pets, is also increasing.
[0003] As a result, large numbers of animals are being raised and moved, which has turned epidemics once confined to specific farms, regions, or countries (e.g., mad cow disease) into global outbreaks. Therefore, individual countries and international bodies such as the United Nations have established effective and reliable animal tracking and identification systems in order to control the risks that may occur in production and distribution. In recent years, various attempts and research efforts have been carried out to build better systems through advanced information technology, in addition to traditional methods.
[0004] Traditional methods of managing animals include (a) ear notching (particularly used for pigs), (b) attaching plastic, bar-coded ear tags to an ear of a sheep, (c) inserting a radio frequency identifier (RFID) or microchip into the livestock animals, (d) winding a chain with numeric tags around the neck (a neck chain), (e) freeze branding, in which a numbered or textured metal piece is cooled with liquid nitrogen, dry ice, or alcohol and applied to the animal, (f) paint branding, (g) tattooing, and (h) toe clipping.
[0005] However, a microchip inserted into a livestock animal, together with the material surrounding its antenna, may cause phenomena such as tumours or tissue necrosis in the animal's body, and is therefore not entirely safe for the livestock animals. Further, ear punching may result in fever and weight loss of the livestock animals. Furthermore, livestock animals with defects such as ear-tag holes are not preferred for halal slaughtering. Even where the traditional methods are used for identifying the livestock animals, they do not achieve accurate identification, nor is identification by the traditional methods cost-effective.
[0006] Therefore, there is a need for an improved machine learning (ML) based system and method for analyzing one or more livestock animals, to address the aforementioned issues.
SUMMARY
[0007] This summary is provided to introduce a selection of concepts, in a simple manner, which is further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the subject matter nor to determine the scope of the disclosure.
[0008] In accordance with one embodiment of the disclosure, a machine learning based (ML-based) system for analyzing one or more livestock animals is disclosed. The machine learning based (ML-based) system includes an image capturing device configured to capture one or more images associated with the one or more livestock animals. The machine learning based (ML-based) system further includes one or more hardware processors, and a memory unit coupled to the one or more hardware processors. The memory unit comprises a set of program instructions in form of a plurality of subsystems, configured to be executed by the one or more hardware processors. The plurality of subsystems comprises an image receiving subsystem, a face detection subsystem, a keypoint detection subsystem, a face identity generation subsystem, and a livestock animal analyzing subsystem.
[0009] The image receiving subsystem is configured to receive the one or more images associated with the one or more livestock animals, captured by the image capturing device. The face detection subsystem is configured to detect one or more face regions of each livestock animal of the one or more livestock animals, based on the one or more images associated with the one or more livestock animals, by a face detection model.
[0010] The keypoint detection subsystem is configured to detect one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals, comprising at least one of: at least one eye, a nose, and at least one ear, for at least one of: reorienting and normalizing second one or more images corresponding to one or more face regions of the one or more livestock animals, based on the one or more detected face regions, by a keypoint detection model.
[0011] The face identity generation subsystem is configured to generate a face identity for each livestock animal of the one or more livestock animals by correlating the second one or more images corresponding to the one or more face regions of the one or more livestock animals, with one or more vectors, by an embedding generation model. The livestock animal analyzing subsystem is configured to analyze each livestock animal of the one or more livestock animals, based on the face identity generated for each livestock animal of the one or more livestock animals, by a machine learning (ML) model.
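By way of a non-limiting illustration, the four-stage flow summarized above (face detection, keypoint detection, embedding generation, and analysis) may be sketched as follows. All function bodies are hypothetical stubs standing in for the trained models of the disclosure; only the ordering of the stages reflects the text:

```python
from dataclasses import dataclass

@dataclass
class FaceRegion:
    # bounding box of a detected face region (illustrative)
    x: int
    y: int
    w: int
    h: int

def detect_face(image):
    # face detection model stub: returns one face region per image
    return FaceRegion(10, 10, 100, 100)

def detect_keypoints(image, region):
    # keypoint detection model stub: facial parts used for alignment
    return {"left_eye": (30, 40), "right_eye": (80, 40), "nose": (55, 90)}

def embed(aligned_face):
    # embedding generation model stub: a fixed-length vector (the "face identity")
    return [0.1, 0.2, 0.3]

def analyze(identity_vector, registry):
    # ML model stub: match the identity vector against registered identities
    # by smallest squared distance
    return min(registry, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(identity_vector, registry[name])))

def pipeline(image, registry):
    region = detect_face(image)
    keypoints = detect_keypoints(image, region)
    vector = embed((region, keypoints))
    return analyze(vector, registry)
```

A caller would register known identity vectors and pass a new image through `pipeline` to obtain the best-matching animal.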
[0012] In an embodiment, the machine learning model is trained by (a) obtaining the second one or more images corresponding to the one or more face regions of the one or more livestock animals, (b) obtaining one or more labels related to the second one or more images corresponding to the one or more face regions of the one or more livestock animals, at the machine learning model, and (c) training the machine learning model by correlating the obtained second one or more images corresponding to the one or more face regions of the one or more livestock animals with the one or more labels related to the second one or more images corresponding to the one or more face regions of the one or more livestock animals. The one or more labels comprise information related to at least one of: breed, sex, colour, and age of the one or more livestock animals. The one or more labels further comprise the information related to at least one of: face of a first livestock animal, faces of the one or more livestock animals except the first livestock animal, information associated with eyes, mouth, and nose, of the one or more livestock animals, and a quality of the one or more images of the one or more livestock animals captured from one or more angles.
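As a non-limiting illustration of "correlating images with labels", the sketch below groups embedding vectors by label and averages them into per-label centroids. This nearest-centroid stand-in is an assumption for illustration only; the disclosure contemplates training a machine learning model rather than simple averaging, and the breed labels are hypothetical:

```python
def train_by_label(embeddings, labels):
    """Group embedding vectors by label and average them into per-label
    centroids. A minimal illustrative stand-in for correlating the second
    images (via their embeddings) with labels such as breed, sex, or age."""
    sums, counts = {}, {}
    for vec, label in zip(embeddings, labels):
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # divide each accumulated sum by the number of samples for that label
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}
```

A trained model of this kind would later classify a new embedding by its nearest centroid.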
[0013] In another embodiment, in analyzing, by the machine learning model, each livestock animal of the one or more livestock animals, based on the face identity generated for each livestock animal of the one or more livestock animals, the livestock animal analyzing subsystem is configured to: (a) obtain a second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals, at the machine learning model, (b) compare the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals, with pre-determined data associated with the face identity generated for each livestock animal of the one or more livestock animals, and (c) analyze each livestock animal of the one or more livestock animals, based on the comparison of the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals with the pre-determined data associated with the face identity generated for each livestock animal of the one or more livestock animals.
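The comparison of generated face-identity data with pre-determined (enrolled) data may be sketched, purely illustratively, as a cosine-similarity match with a rejection threshold. The threshold value and entry names are assumptions, not values from the disclosure:

```python
import math

def identify(candidate, enrolled, threshold=0.8):
    """Compare a generated face-identity vector against pre-determined
    (enrolled) vectors using cosine similarity; return the best match at or
    above the threshold, or None if no enrolled animal matches."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best_name, best_score = None, threshold
    for name, vec in enrolled.items():
        score = cosine(candidate, vec)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

Returning None for below-threshold candidates models the case where an animal is not yet registered.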
[0014] In yet another embodiment, in generating the face identity for each livestock animal of the one or more livestock animals, the embedding generation model is configured to: (a) generate the one or more vectors corresponding to the second one or more images of the one or more face regions of the one or more livestock animals, (b) correlate the second one or more images of the one or more face regions of the one or more livestock animals, with the one or more vectors, and (c) generate the face identity for each livestock animal of the one or more livestock animals based on the correlation of the second one or more images corresponding to the one or more face regions of the one or more livestock animals, with the one or more vectors.
[0015] In yet another embodiment, each of the one or more vectors associated with each of the second one or more images corresponding to the one or more face regions of the one or more livestock animals, comprises a unique identity for each livestock animal of the one or more livestock animals.
[0016] In yet another embodiment, in detecting, by the keypoint detection model, the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals, the keypoint detection subsystem is configured to: (a) obtain the one or more face regions of each livestock animal of the one or more livestock animals, from the face detection subsystem, (b) generate one or more keypoints for each part of the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals, and (c) detect one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals comprising at least one of: at least one eye, a nose, and at least one ear, based on the one or more keypoints generated for each part of the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals. The one or more keypoints remain constant when the one or more images of the one or more parts in the one or more face regions are at least one of: rotated, and distorted.
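The reorientation step that keypoints enable may be illustrated with elementary geometry: compute the rotation that makes the line between the two detected eyes horizontal, then rotate points by that angle. This is a minimal sketch of one common alignment strategy, offered as an assumption rather than the disclosed keypoint detection model:

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle (radians) by which to rotate the face so the eye line is
    horizontal; used to reorient/normalize a detected face region."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return -math.atan2(dy, dx)

def rotate_point(p, angle, center=(0.0, 0.0)):
    """Rotate a 2-D point about a center by the given angle (radians)."""
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s + center[0], x * s + y * c + center[1])
```

Applying `rotate_point` with `eye_alignment_angle` to every pixel (or, in practice, to the image via an affine warp) yields the normalized second image fed to the embedding model.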
[0017] In yet another embodiment, the face detection subsystem is configured to detect the one or more face regions of each livestock animal of the one or more livestock animals, based on the one or more images of the one or more livestock animals, captured from the one or more angles.
[0018] In yet another embodiment, each livestock animal of the one or more livestock animals is analyzed by: (a) training one or more machine learning models for one or more poses of each livestock animal of the one or more livestock animals, on a large dataset of the one or more livestock animals, (b) adding data associated with at least one of: breed, sex and age, of each livestock animal of the one or more livestock animals into the one or more machine learning models during training the one or more machine learning models, and (c) generating a combination of two or more machine learning models trained for the one or more poses of each livestock animal of the one or more livestock animals to analyze the one or more livestock animals.
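By way of a non-limiting illustration, combining two or more pose-specific models may be sketched as routing to the models trained for the detected pose and averaging their scores. The pose labels, scores, and fallback rule below are hypothetical assumptions, not the disclosed models:

```python
def ensemble_predict(models, pose, features):
    """Combine pose-specific models: select the models trained for the
    detected pose and average their scores. `models` is a list of
    (pose_label, model_fn) pairs; model functions are illustrative stubs."""
    relevant = [m for p, m in models if p == pose]
    if not relevant:
        relevant = [m for _, m in models]  # unknown pose: fall back to all models
    scores = [m(features) for m in relevant]
    return sum(scores) / len(scores)
```

Averaging is one simple combination rule; weighted voting or stacking would fit the same interface.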
[0019] In yet another embodiment, the trained one or more machine learning models are fine-tuned based on the one or more images of the one or more livestock animals by a transfer learning method, which adapts the trained one or more machine learning models so that less training data is required for training the one or more machine learning models.
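The transfer-learning idea can be illustrated with a toy sketch in which the feature extractor is frozen (its outputs are precomputed) and only a small linear head is updated by gradient descent, which is why far less data is needed than training end to end. The learning rate, epoch count, and data are illustrative assumptions:

```python
def fine_tune(head_weights, frozen_features, targets, lr=0.1, epochs=50):
    """Transfer-learning sketch: the backbone is frozen, so `frozen_features`
    are fixed precomputed vectors; only the linear head weights are updated
    by stochastic gradient descent on a squared-error loss."""
    w = list(head_weights)
    for _ in range(epochs):
        for feats, target in zip(frozen_features, targets):
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - target
            # gradient step on the head only; the features never change
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w
```

In a real system the frozen backbone would be a pretrained face-embedding network and the head a classifier over the farm's animals.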
[0020] In one aspect, a machine learning based (ML-based) method for analyzing one or more livestock animals is disclosed. The machine learning based (ML-based) method includes capturing, by an image capturing device, one or more images associated with the one or more livestock animals. The machine learning based (ML-based) method further includes receiving, by one or more hardware processors, the one or more images associated with the one or more livestock animals, captured by the image capturing device. The machine learning based (ML-based) method further includes detecting, by the one or more hardware processors, one or more face regions of each livestock animal of the one or more livestock animals, based on the one or more images associated with the one or more livestock animals, by a face detection model.
[0021] The machine learning based (ML-based) method further includes detecting, by the one or more hardware processors, one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals, comprising at least one of: at least one eye, a nose, and at least one ear, for at least one of: reorienting and normalizing second one or more images corresponding to one or more face regions of the one or more livestock animals, based on the one or more detected face regions, by a keypoint detection model.
[0022] The machine learning based (ML-based) method further includes generating, by the one or more hardware processors, a face identity for each livestock animal of the one or more livestock animals by correlating the second one or more images corresponding to the one or more face regions of the one or more livestock animals, with one or more vectors, by an embedding generation model. The machine learning based (ML-based) method further includes analyzing, by the one or more hardware processors, each livestock animal of the one or more livestock animals, based on the face identity generated for each livestock animal of the one or more livestock animals, by a machine learning (ML) model.
[0023] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0025] FIG. 1 is a schematic representation of a machine learning based (ML-based) system for analyzing one or more livestock animals, in accordance with an embodiment of the present disclosure;
[0026] FIG. 2 is a detailed view of a server, such as those shown in FIG. 1, in accordance with an embodiment of the present disclosure;
[0027] FIGS. 3A-3D are user interface views depicting one or more images and videos captured by the image capturing device using an in-house mobile application, in accordance with an embodiment of the present disclosure;
[0028] FIG. 4 is a schematic representation depicting that a user annotates information associated with the one or more livestock animals through the user device, in accordance with an embodiment of the present disclosure;
[0029] FIG. 5 is a user interface view depicting that the user annotates the information associated with the one or more livestock animals through the user device, such as those shown in FIG. 4, in accordance with an embodiment of the present disclosure;
[0030] FIG. 6 is a process flow for generating a face identity (ID) for the one or more livestock animals and identifying the one or more livestock animals based on the face identity of the one or more livestock animals, in accordance with an embodiment of the present disclosure;
[0031] FIG. 7 is an exemplary view depicting generation of the face identity for the one or more livestock animals, in accordance with an embodiment of the present disclosure;
[0032] FIG. 8 is an exemplary view depicting the identification of the one or more livestock animals based on the face identity of the one or more livestock animals, in accordance with an embodiment of the present disclosure;
[0033] FIG. 9 is a schematic representation depicting an initial setup of registering the one or more images of the one or more livestock animals by capturing the one or more images of the one or more livestock animals using an image capturing device, in accordance with an embodiment of the present disclosure;
[0034] FIG. 10 is a process flow depicting the identification of the one or more livestock animals using the server, in accordance with an embodiment of the present disclosure;
[0035] FIG. 11 is a schematic representation depicting generation of the face identity (ID) for the one or more livestock animals using an embedding generation model, in accordance with an embodiment of the present disclosure;
[0036] FIG. 12 is a process flow for analyzing the one or more livestock animals based on a combination of two or more learning models, in accordance with an embodiment of the present disclosure;
[0037] FIG. 13 is a process flow for fine-tuning the trained one or more machine learning models, in accordance with an embodiment of the present disclosure;
[0038] FIG. 14 is a graphical representation showing an accuracy in the face identity of the one or more livestock animals, in accordance with an embodiment of the present disclosure; and
[0039] FIG. 15 is a flow chart illustrating a computer-implemented method for analyzing the one or more livestock animals, such as those shown in FIG. 1, in accordance with an embodiment of the present disclosure.
[0040] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0041] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art, are to be construed as being within the scope of the present disclosure.
[0042] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or subsystems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, subsystems, elements, structures, components, additional devices, additional subsystems, additional elements, additional structures or additional components. Appearances of the phrases "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0043] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0044] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0045] A computer system (standalone, client or server computer system) configured by an application may constitute a “module” (or “subsystem”) that is configured and operated to perform certain operations. In one embodiment, the “module” or “subsystem” may be implemented mechanically or electronically, so a module includes dedicated circuitry or logic that is permanently configured (within a special-purpose processor) to perform certain operations. In another embodiment, a “module” or “subsystem” may also comprise programmable logic or circuitry (as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations.
[0046] Accordingly, the term “module” or “subsystem” should be understood to encompass a tangible entity, be that an entity that is physically constructed permanently configured (hardwired) or temporarily configured (programmed) to operate in a certain manner and/or to perform certain operations described herein.
[0047] FIG. 1 is a schematic representation of a machine learning based (ML-based) system 100 for analyzing one or more livestock animals 102, in accordance with an embodiment of the present disclosure. The machine learning based (ML-based) system 100 is configured to uniquely identify each livestock animal 102 for standardization of a supply chain of the one or more livestock animals 102. The machine learning based (ML-based) system 100 includes the one or more livestock animals 102, an image capturing device 104, a user 106, a user device 108, a server 110, a livestock image database 112, and a local storage 116.
[0048] The image capturing device 104 is configured to capture one or more images associated with the one or more livestock animals 102. In an embodiment, the image capturing device 104 is configured to capture at least thirteen images and two videos of the one or more livestock animals 102, from one or more angles (e.g., at least five angles). In an embodiment, the image capturing device 104 is configured to capture the one or more images associated with the one or more livestock animals 102 in a 70:30 male-to-female ratio. In an embodiment, the visual diversity (e.g., colour) of the one or more livestock animals 102 is captured in the one or more images.
[0049] In an embodiment, the image capturing device 104 may be configured in the user device 108 to capture the one or more images and videos of the one or more livestock animals 102. In another embodiment, the one or more images and videos are collected from the local storage 116. The user device 108 is configured to transmit the one or more images to the server 110 through a communication network 114. The server 110 is configured to receive the one or more images associated with the one or more livestock animals 102, captured by the image capturing device 104. The server 110 is further configured to detect one or more face regions of each livestock animal of the one or more livestock animals 102, based on the one or more images associated with the one or more livestock animals 102, by a face detection model.
[0050] The server 110 is further configured to detect one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102, for at least one of: reorienting and normalizing second one or more images corresponding to one or more face regions of the one or more livestock animals 102, based on the one or more detected face regions, by a keypoint detection model. In an embodiment, the one or more parts in the one or more face regions includes at least one of: at least one eye, a nose, and at least one ear, of the one or more livestock animals 102. The server 110 is further configured to generate a face identity for each livestock animal of the one or more livestock animals 102 by correlating the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, with one or more vectors, by an embedding generation model.
[0051] The server 110 is further configured to analyze each livestock animal of the one or more livestock animals 102, based on the face identity generated for each livestock animal of the one or more livestock animals 102, by a machine learning (ML) model. In an embodiment, the image capturing device 104 may store the one or more images of the one or more livestock animals 102 in the livestock image database 112. The server 110 may be configured to receive the one or more images of the one or more livestock animals 102 from the livestock image database 112. In an embodiment, the livestock image database 112 is a cloud database storing the one or more images of the one or more livestock animals 102. In an embodiment, a mobile application 118 displays the information associated with the one or more images and videos corresponding to the one or more livestock animals 102.
[0052] In an embodiment, the user device 108 may be at least one of: a mobile phone, a Smartphone, a laptop, a personal computer (PC), an electronic notebook, and the like. In an embodiment, the one or more livestock animals 102 may be at least one of: a goat, a sheep, a pig, a cow, and the like. In an embodiment, the communication network 114 may be at least one of: a wired communication network and a wireless communication network. In another embodiment, the communication network 114 may be at least one of: a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), and the like.
[0053] FIG. 2 is a detailed view of the server 110, such as those shown in FIG. 1, in accordance with an embodiment of the present disclosure. The server 110 includes one or more hardware processor(s) 220. The server 110 further includes a memory unit 202 coupled to the one or more hardware processor(s) 220. The memory unit 202 includes a set of program instructions in the form of the plurality of subsystems 204.
[0054] The one or more hardware processor(s) 220, as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0055] The memory unit 202 includes the plurality of subsystems 204 stored in the form of executable program which instructs the one or more hardware processor(s) 220 via a system bus 216 to perform the above-mentioned method steps. The plurality of subsystems 204 includes following subsystems: an image receiving subsystem 206, a face detection subsystem 208, a keypoint detection subsystem 210, a face identity generation subsystem 212, and a livestock animal analyzing subsystem 214.
[0056] The one or more hardware processor(s) 220 may further include a graphics processing unit and embedded controllers, such as generic or programmable logic devices or arrays, application specific integrated circuits, single-chip computers, and the like.
[0057] The memory unit 202 may be non-transitory volatile memory unit and non-volatile memory unit. The memory unit 202 may be coupled for communication with the one or more hardware processor(s) 220, such as being a computer-readable storage medium. The one or more hardware processor(s) 220 may execute machine-readable instructions and/or source code stored in the memory unit 202. A variety of machine-readable instructions may be stored in and accessed from the memory unit 202. The memory unit 202 may include any suitable elements for storing data and machine-readable instructions, such as read only memory, random access memory, erasable programmable read only memory, electronically erasable programmable read only memory, a hard drive, a removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, and the like. In the present embodiment, the memory unit 202 includes the plurality of subsystems 204 stored in the form of machine-readable instructions on any of the above-mentioned storage media and may be in communication with and executed by the one or more hardware processor(s) 220.
[0058] Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. An executable program stored on any of the above-mentioned storage media may be executable by the one or more hardware processor(s) 220.
[0059] The database 218 may be at least one of: cloud database, a structured query language (SQL) data store, and a location on a file system directly accessible by the plurality of subsystems 204.
[0060] The plurality of subsystems 204 includes the image receiving subsystem 206 that is communicatively connected to the one or more hardware processor(s) 220. The image receiving subsystem 206 is configured to receive the one or more images associated with the one or more livestock animals 102, captured by the image capturing device 104. The plurality of subsystems 204 includes the face detection subsystem 208 that is communicatively connected to the one or more hardware processor(s) 220. The face detection subsystem 208 is configured to detect the one or more face regions of each livestock animal of the one or more livestock animals 102, based on the one or more images associated with the one or more livestock animals 102, by the face detection model. In an embodiment, the face detection subsystem 208 is configured to detect the one or more face regions of each livestock animal of the one or more livestock animals 102, based on the one or more images of the one or more livestock animals 102, captured from the one or more angles.
[0061] The plurality of subsystems 204 includes the keypoint detection subsystem 210 that is communicatively connected to the one or more hardware processor(s) 220. The keypoint detection subsystem 210 is configured to detect the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102 for at least one of: reorienting and normalizing the second one or more images corresponding to one or more face regions of the one or more livestock animals 102, based on the one or more detected face regions, by the keypoint detection model. In an embodiment, the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102, includes at least one of: at least one eye, a nose, and at least one ear, of the one or more livestock animals 102.
[0062] For detecting the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals, the keypoint detection subsystem 210 is configured to obtain the one or more face regions of each livestock animal of the one or more livestock animals 102, from the face detection subsystem 208. The keypoint detection subsystem 210 is further configured to generate one or more keypoints for each part of the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102. The one or more keypoints are constant when the one or more images of the one or more parts in the one or more face regions are at least one of: rotated, distorted, and the like. The keypoint detection subsystem 210 is further configured to detect the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102 comprising at least one of: at least one eye, a nose, and at least one ear, based on the one or more keypoints generated for each part of the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102.
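By way of illustration, the reorientation enabled by the keypoints may be sketched as follows: the angle of the line joining the two eye keypoints is measured, and the face crop (and its keypoints) is rotated by the opposite angle about the midpoint of the eyes so that the eyes become level. This is a minimal sketch with hypothetical coordinates and function names, not the claimed keypoint detection model itself.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle (degrees) of the eye line; rotating by the negative of this
    angle levels the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(pt, center, angle_deg):
    """Rotate a keypoint about a center point by angle_deg degrees."""
    a = math.radians(angle_deg)
    x, y = pt[0] - center[0], pt[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))

# Hypothetical eye keypoints detected at a slight tilt
left, right = (30.0, 42.0), (70.0, 50.0)
angle = eye_alignment_angle(left, right)
center = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
levelled_left = rotate_point(left, center, -angle)
levelled_right = rotate_point(right, center, -angle)
```

Because the keypoints stay constant when the image is rotated or distorted, the same alignment can be recovered from any crop of the same face.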
[0063] The plurality of subsystems 204 includes the face identity generation subsystem 212 that is communicatively connected to the one or more hardware processor(s) 220. The face identity generation subsystem 212 is configured to generate the face identity for each livestock animal of the one or more livestock animals 102 by correlating the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, with one or more vectors, by the embedding generation model. For generating, by the embedding generation model, the face identity for each livestock animal of the one or more livestock animals 102, the face identity generation subsystem 212 is configured to generate the one or more vectors corresponding to the second one or more images of the one or more face regions of the one or more livestock animals 102.
[0064] The face identity generation subsystem 212 is further configured to correlate the second one or more images of the one or more face regions of the one or more livestock animals 102, with the one or more vectors. The face identity generation subsystem 212 is further configured to generate the face identity for each livestock animal of the one or more livestock animals based on the correlation of the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, with the one or more vectors. In an embodiment, each of the one or more vectors associated with each of the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, comprises a unique identity for each livestock animal of the one or more livestock animals 102.
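As an illustrative sketch of the embedding step, each second image may be mapped to a fixed-length vector that is scaled to unit length; two crops of the same animal then yield vectors with a high dot product, while crops of different animals do not. The vectors below are hypothetical values, not outputs of the claimed embedding generation model.

```python
import math

def l2_normalize(vec):
    """Scale an embedding vector to unit length so that cosine
    similarity reduces to a plain dot product."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two unit-normalized embeddings."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical embeddings: two crops of animal A, one crop of animal B
emb_a1 = l2_normalize([0.9, 0.1, 0.4])
emb_a2 = l2_normalize([0.85, 0.15, 0.38])
emb_b = l2_normalize([0.1, 0.9, 0.2])

same = cosine_similarity(emb_a1, emb_a2)   # same animal: high score
diff = cosine_similarity(emb_a1, emb_b)    # different animals: low score
```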
[0065] The plurality of subsystems 204 includes the livestock animal analyzing subsystem 214 that is communicatively connected to the one or more hardware processor(s) 220. The livestock animal analyzing subsystem 214 is configured to analyze each livestock animal of the one or more livestock animals 102, based on the face identity generated for each livestock animal of the one or more livestock animals 102, by the machine learning (ML) model. In an embodiment, the machine learning model is trained by (a) obtaining the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, (b) obtaining one or more labels related to the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, at the machine learning model, and (c) training the machine learning model by correlating the obtained second one or more images corresponding to the one or more face regions of the one or more livestock animals 102 with the one or more labels related to the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102.
[0066] In an embodiment, the one or more labels includes information related to at least one of: breed, sex, colour, and age of the one or more livestock animals 102. In another embodiment, the one or more labels further includes the information related to at least one of: face of a first livestock animal, faces of the one or more livestock animals 102 except the first livestock animal, information associated with eyes, mouth, and nose, of the one or more livestock animals 102, and a quality of the one or more images of the one or more livestock animals 102 captured from one or more angles.
[0067] In an embodiment, for analyzing, by the machine learning model, each livestock animal of the one or more livestock animals 102, based on the face identity generated for each livestock animal of the one or more livestock animals 102, the livestock animal analyzing subsystem 214 is configured to obtain a second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals 102, at the machine learning model. The livestock animal analyzing subsystem 214 is further configured to compare the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals 102, with pre-determined data associated with face identity generated for each livestock animal of the one or more livestock animals 102.
[0068] The livestock animal analyzing subsystem 214 is further configured to analyze each livestock animal of the one or more livestock animals 102, based on the comparison of the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals 102 with the pre-determined data associated with the face identity generated for each livestock animal of the one or more livestock animals 102.
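The comparison with the pre-determined data may be sketched as a one-to-one verification: the freshly generated identity vector is compared with the enrolled vector, and the animal is accepted when the similarity clears a threshold. The vectors and the threshold value below are illustrative assumptions, not parameters disclosed by the specification.

```python
def dot(a, b):
    """Dot product; equals cosine similarity for unit-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def matches_known_identity(query_vec, stored_vec, threshold=0.8):
    """Accept the query when its similarity to the pre-determined
    (enrolled) identity vector reaches the decision threshold."""
    return dot(query_vec, stored_vec) >= threshold

enrolled = [0.6, 0.8, 0.0]        # pre-determined identity (unit length)
query_same = [0.64, 0.768, 0.0]   # a new crop of the same animal
query_other = [0.0, 0.0, 1.0]     # a different animal

accepted = matches_known_identity(query_same, enrolled)
rejected = matches_known_identity(query_other, enrolled)
```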
[0069] FIGS. 3A-3D are user interface views 300 depicting one or more images and videos captured by the image capturing device 104 using the in-house mobile application 118, in accordance with an embodiment of the present disclosure. In an embodiment, the mobile application 118 is configured in the user device 108. The in-house mobile application 118 displays the captured one or more images and videos of the one or more livestock animals 102 in at least one of: day-wise 302, flock-wise 304, animal-wise 306, and angle-wise 308 collections. The day-wise collection 302 shows statistics including the number of animals present and the number of images and videos of the one or more livestock animals 102 captured per day.
[0070] The in-house mobile application 118 further displays information associated with the number of images and videos of the one or more livestock animals 102 in flocks in the flock-wise collection 304. The in-house mobile application 118 further displays information associated with the images and videos of the one or more livestock animals in the animal-wise collection 306. The in-house mobile application 118 further displays information associated with the one or more images and videos of the one or more livestock animals 102 in different angles, in angle-wise collection 308.
[0071] FIG. 4 is a schematic representation 400 depicting that the user 106 annotates information associated with the one or more livestock animals 102 through the user device 108, in accordance with an embodiment of the present disclosure. The user 106 provides the one or more labels related to the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102 to the user device 108 through an annotation application 402.
[0072] In an embodiment, the one or more labels comprise information related to at least one of: breed, sex, colour, and age of the one or more livestock animals 102, a face of a first livestock animal, faces of the one or more livestock animals 102 except the first livestock animal, information associated with eyes, mouth, and nose, of the one or more livestock animals 102, and a quality of the one or more images of the one or more livestock animals 102 captured from one or more angles. In an embodiment, the one or more labels provided by the user 106 may be utilized by the machine learning model for training the machine learning model in order to analyze the one or more livestock animals 102. In an embodiment, the one or more labels are stored in a livestock labels database 404.
[0073] FIG. 5 is a user interface view 500 depicting that the user 106 annotates information 502 associated with the one or more livestock animals 102 through the user device 108, such as those shown in FIG. 4, in accordance with an embodiment of the present disclosure. The user 106 annotates the information 502 associated with the one or more livestock animals 102 through the mobile application 118 configured in the user device 108. The mobile application 118 enables the user 106 to annotate the information 502 associated with the one or more labels for the one or more livestock animals 102 in the herd/flock 504. In an embodiment, the user 106 is adapted to annotate the information 502 for an individual livestock animal 102. In another embodiment, the user 106 is adapted to annotate the information 502 for the one or more livestock animals 102 in the herd/flock 504.
[0074] FIG. 6 is a process flow 600 for generating the face identity (ID) for the one or more livestock animals 102 and identifying the one or more livestock animals 102 based on the face identity of the one or more livestock animals 102, in accordance with an embodiment of the present disclosure. The one or more images of the livestock animals 102 is inputted at the face detection model, as shown in step 602. At step 604, the face detection model is configured to detect the one or more face regions of each livestock animal of the one or more livestock animals 102, based on the one or more images associated with the one or more livestock animals 102. At step 606, the keypoint detection model is configured to detect one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102 including at least one of: at least one eye, a nose, and at least one ear, for at least one of: reorienting and normalizing the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, based on the one or more detected face regions, as shown in step 608.
[0075] At step 610, the one or more vectors corresponding to the second one or more images of the one or more face regions of the one or more livestock animals 102, is generated, by the embedding generation model. At step 612, the face identity of the one or more livestock animals 102 is generated using the embedding generation model. At step 614, the face identity of the one or more livestock animals 102 is stored at the database 218. In an embodiment, the face identity of the one or more livestock animals 102 is stored at the livestock image database 112.
[0076] At step 616, one or more new images of the one or more livestock animals 102 are inputted at the face detection model, as shown in step 618. The face detection model, as described above, is configured to detect the one or more face regions of each livestock animal of the one or more livestock animals 102, based on the one or more new images associated with the one or more livestock animals 102. At step 620, the keypoint detection model is configured to detect the one or more parts in the one or more face regions of each livestock animal of the one or more livestock animals 102 for reorienting and normalizing the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, based on the one or more detected face regions, as shown in step 622.
[0077] At step 624, the face identity for the one or more new images of the one or more livestock animals 102 is generated by correlating the second one or more images corresponding to the one or more face regions of the one or more livestock animals 102, with one or more vectors. At step 626, the face identity generated for the one or more new images of the one or more livestock animals 102, is compared with the face identity generated for the one or more livestock animals 102 to determine presence of the one or more livestock animals 102, as shown in step 628.
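The comparison at steps 626 and 628 may be sketched as a one-to-many lookup: the identity vector generated for the new image is scored against every enrolled identity, and the best-scoring animal is returned, or nothing when no stored identity is close enough. The gallery identifiers, vectors, and threshold below are hypothetical.

```python
def dot(a, b):
    """Dot product; equals cosine similarity for unit-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def identify(query_vec, gallery, threshold=0.8):
    """Return the best-matching animal id from the enrolled gallery,
    or None when no stored identity clears the threshold."""
    best_id, best_score = None, threshold
    for animal_id, stored_vec in gallery.items():
        score = dot(query_vec, stored_vec)
        if score >= best_score:
            best_id, best_score = animal_id, score
    return best_id

# Hypothetical enrolled gallery of face identities
gallery = {
    "goat_5": [1.0, 0.0, 0.0],
    "goat_7": [0.0, 1.0, 0.0],
}
found = identify([0.95, 0.31, 0.0], gallery)    # near goat_5's identity
missing = identify([0.0, 0.0, 1.0], gallery)    # an unseen animal
```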
[0078] FIG. 7 is an exemplary view 700 depicting generation of the face identity for the one or more livestock animals 102, in accordance with an embodiment of the present disclosure. FIG. 7 depicts that one or more images 702 of the one or more livestock animals 102 are inputted at the face detection model to detect the one or more face regions 704 of each livestock animal of the one or more livestock animals 102. The one or more parts 706 including at least one of: at least one eye, a nose, and at least one ear, of the one or more livestock animals 102 is detected for at least one of: reorienting and normalizing second one or more images 708 corresponding to the one or more face regions 704 of the one or more livestock animals 102.
[0079] The one or more vectors 710 are uniquely generated corresponding to the second one or more images 708 of the one or more face regions 704 of the one or more livestock animals 102. The face identity for each livestock animal of the one or more livestock animals 102 is generated by correlating the second one or more images 708 corresponding to the one or more face regions 704 of the one or more livestock animals 102, with the one or more vectors 710, by the embedding generation model.
[0080] FIG. 8 is an exemplary view 800 depicting the identification 802 of the one or more livestock animals 102 based on the face identity of the one or more livestock animals 102, in accordance with an embodiment of the present disclosure. FIG. 8 depicts that the one or more livestock animals 102 is identified by comparing the face identity of the one or more livestock animals 102 with the predetermined face identity of the one or more livestock animals 102. The one or more livestock animals 102 is then identified with the corresponding face ID. For example, the livestock animal (e.g., a goat) 5 is identified by comparing the face identity generated for the livestock animal 5 among the one or more livestock animals 102 with the pre-determined data associated with the face identity generated for the livestock animal 5 among the one or more livestock animals 102.
[0081] FIG. 9 is a schematic representation 900 depicting an initial setup of registering the one or more images of the one or more livestock animals 102 by capturing the one or more images of the one or more livestock animals 102 using the image capturing device 104, in accordance with an embodiment of the present disclosure. FIG. 9 shows that the one or more images of the one or more livestock animals 102 are registered by capturing the face, eyes, nose, and mouth of the one or more livestock animals 102 using the image capturing device 104. In an embodiment, the image capturing device 104 or the user device 108 may be moved in a circle 902 around the livestock animal to cover the one or more angles including right, left, and head, in a 360-degree manner.
[0082] FIG. 10 is a process flow 1000 depicting the identification of the one or more livestock animals using the server 110, in accordance with an embodiment of the present disclosure. At step 1002, the image capturing device 104 is configured to capture the one or more images of the one or more livestock animals 102 and to detect at least one of: the face, eyes, nose, and mouth of the one or more livestock animals 102. In an embodiment, the image capturing device 104 is configured in the user device 108. At step 1004, the user device 108 sends a request to the server 110 to analyze the one or more livestock animals 102. The server 110 compares the face identity of the one or more livestock animals 102 with the predetermined face identity of the one or more livestock animals 102. If no match is found, the server 110 sends a “no match found” message to the user device 108, as shown in step 1006. Otherwise, the server 110 sends information related to “the matched livestock animal and its flock” to the user device 108, as shown in step 1008.
[0083] FIG. 11 is a schematic representation 1100 depicting generation of the face identity (ID) for the one or more livestock animals 102 using the embedding generation model, in accordance with an embodiment of the present disclosure. Upon receiving the one or more images of the one or more livestock animals 102, a face detection process 1102 is performed for detecting the one or more face regions 704 of each livestock animal of the one or more livestock animals 102, based on the one or more images associated with the one or more livestock animals 102, by the face detection model. The embedding generation process 1104 is performed to generate the face identity for each livestock animal of the one or more livestock animals 102 by correlating the second one or more images 708 corresponding to the one or more face regions 704 of the one or more livestock animals, with the one or more vectors 710, by the embedding generation model. In an embodiment, the one or more vectors 710 corresponding to the second one or more images 708 of the one or more face regions 704 of the one or more livestock animals 102, is stored in a livestock vector database 1110. A face recognition process 1106 is performed to recognize/analyze the one or more livestock animals 102 and the one or more analyzed livestock animals is displayed in a display 1108 of the user device 108.
[0084] FIG. 12 is a process flow 1200 for analyzing the one or more livestock animals 102 based on a combination of two or more learning models, in accordance with an embodiment of the present disclosure. The one or more livestock animals 102 is analyzed by (a) training one or more machine learning models for one or more poses (i.e., a front pose, a side pose, and the like) of each livestock animal of the one or more livestock animals 102, on a very large dataset of the one or more livestock animals 102 (shown in steps 1202 and 1204), (b) adding data associated with at least one of: breed, sex, age, and the like, of each livestock animal of the one or more livestock animals 102 into the one or more machine learning models during training, and (c) generating a combination of two or more machine learning models (ensemble machine learning models) (shown in steps 1206, 1208) trained for the one or more poses of each livestock animal of the one or more livestock animals 102 to analyze the one or more livestock animals 102 in a robust and accurate manner.
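The ensemble step may be sketched as combining the scores of the pose-specific models, here by a simple weighted average; the score values and equal weighting are illustrative assumptions, not the disclosed ensemble method.

```python
def ensemble_score(pose_scores, weights=None):
    """Combine per-pose model scores (e.g. a front-pose model and a
    side-pose model) into one decision score by weighted averaging."""
    if weights is None:
        weights = [1.0 / len(pose_scores)] * len(pose_scores)
    return sum(w * s for w, s in zip(weights, pose_scores))

# Hypothetical scores: front-pose model is confident, side-pose less so
combined = ensemble_score([0.92, 0.74])
```

Averaging over pose-specific models makes the final decision less sensitive to the angle at which any single image was captured.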
[0085] FIG. 13 is a process flow 1300 for fine-tuning the trained one or more machine learning models, in accordance with an embodiment of the present disclosure. The fine-tuning of the trained one or more machine learning models (shown in step 1306) is used for building the one or more accurate machine learning models such that less training data is required for training the one or more machine learning models. In an embodiment, the trained one or more machine learning models is trained on the very large dataset of the one or more livestock animals 102. The trained one or more machine learning models is then fine-tuned using the one or more images of the one or more livestock animals 102 (i.e., the one or more images in a dataset shown in step 1302), by a transfer learning method, thereby needing less training data for training the one or more machine learning models and also analyzing the one or more livestock animals 102 in an accurate manner, as shown in step 1308. In an embodiment, the one or more images of the one or more livestock animals 102 are preprocessed (shown in step 1304) prior to the fine-tuning of the trained one or more machine learning models.
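The transfer learning idea may be sketched as follows: a heavy pretrained feature extractor is assumed frozen, and only a small head (here a linear layer) is adjusted from its pretrained weights on a handful of labeled samples, which is why little new training data is needed. The weights, features, and learning rate are hypothetical, and the gradient-descent loop stands in for whatever training procedure the models actually use.

```python
def fine_tune(weights, samples, lr=0.1, epochs=50):
    """Adjust only the small linear head starting from pretrained
    weights; `features` are assumed to come from a frozen pretrained
    feature extractor, so only these few weights are updated."""
    w = list(weights)
    for _ in range(epochs):
        for features, target in samples:
            pred = sum(wi * xi for wi, xi in zip(w, features))
            err = pred - target
            # Gradient step on squared error for the head weights only
            w = [wi - lr * err * xi for wi, xi in zip(w, features)]
    return w

# Head pretrained on the large generic dataset, adapted with 3 labels
pretrained = [0.5, -0.2]
samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]
tuned = fine_tune(pretrained, samples)
```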
[0086] FIG. 14 is a graphical representation 1400 showing an accuracy in the face identity of the one or more livestock animals 102, in accordance with an embodiment of the present disclosure. The graphical representation 1400 depicts a score distribution 1402 between the one or more images of one or more same livestock animals 102, and the score distribution 1404 between the one or more images of one or more different livestock animals 102, in a first model. The graphical representation 1400 further depicts the score distribution 1406 between the one or more images of the one or more same livestock animals 102, and the score distribution 1408 between the one or more images of the one or more different livestock animals 102, in a second model. The first and second models are constantly being retrained on more data for better accuracy.
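When the same-animal and different-animal score distributions are well separated, as in such a plot, a decision threshold can be read off between them. The sketch below picks the midpoint between the lowest same-animal score and the highest different-animal score; the score values are invented for illustration and assume the two distributions do not overlap.

```python
def separation_threshold(same_scores, diff_scores):
    """Midpoint between the lowest same-animal score and the highest
    different-animal score; valid when the distributions are separable."""
    return (min(same_scores) + max(diff_scores)) / 2

same_scores = [0.91, 0.88, 0.95, 0.90]   # image pairs of the same animal
diff_scores = [0.12, 0.35, 0.28, 0.22]   # image pairs of different animals
t = separation_threshold(same_scores, diff_scores)
```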
[0087] FIG. 15 is a flow chart illustrating a computer-implemented method 1500 for analyzing the one or more livestock animals 102, such as those shown in FIG. 1, in accordance with an embodiment of the present disclosure. At step 1502, the one or more images associated with the one or more livestock animals 102, is captured by the image capturing device 104. In an embodiment, the image capturing device 104 may be configured in the user device 108 to capture the one or more images associated with the one or more livestock animals 102.
[0088] At step 1504, the one or more images associated with the one or more livestock animals 102 captured by the image capturing device 104, is received. At step 1506, the one or more face regions 704 of each livestock animal of the one or more livestock animals 102, is detected based on the one or more images associated with the one or more livestock animals 102, by the face detection model. At step 1508, the one or more parts 706 in the one or more face regions 704 of the one or more livestock animals 102, is detected for at least one of: reorienting and normalizing the second one or more images 708 corresponding to one or more face regions 704 of the one or more livestock animals 102, based on the one or more detected face regions 704, by the keypoint detection model. In an embodiment, the one or more parts 706 in the one or more face regions 704 of the one or more livestock animals 102, includes at least one of: at least one eye, a nose, and at least one ear, of the one or more livestock animals 102.
[0089] At step 1510, the face identity for each livestock animal of the one or more livestock animals 102 is generated by correlating the second one or more images 708 corresponding to the one or more face regions 704 of the one or more livestock animals 102, with the one or more vectors 710, by the embedding generation model. At step 1512, each livestock animal of the one or more livestock animals 102 is analyzed, based on the face identity generated for each livestock animal of the one or more livestock animals 102, by the machine learning (ML) model.
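The order of operations in the method 1500 can be sketched end to end with stand-in stubs for the three models; every function body below is a placeholder returning fixed illustrative values, not an implementation of the disclosed models.

```python
def detect_face(image):
    """Stand-in for the face detection model: returns a face-region crop."""
    return {"crop": image["pixels"], "bbox": (0, 0, 10, 10)}

def detect_keypoints(face_region):
    """Stand-in for the keypoint detection model: eye/nose keypoints
    used to reorient and normalize the crop."""
    return {"left_eye": (3, 4), "right_eye": (7, 4), "nose": (5, 6)}

def embed(crop):
    """Stand-in for the embedding generation model: a fixed-length
    identity vector (here a trivial normalization of the pixels)."""
    s = sum(crop) or 1.0
    return [v / s for v in crop]

def analyze(image):
    """Pipeline order from method 1500: detect face, find keypoints,
    then generate the identity vector for analysis."""
    face = detect_face(image)
    keypoints = detect_keypoints(face)
    identity = embed(face["crop"])
    return {"keypoints": keypoints, "identity": identity}

result = analyze({"pixels": [2.0, 4.0, 2.0]})
```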
[0090] The present invention has the following advantages. The machine learning based (ML-based) system 100 of the present invention is configured to identify each livestock animal of the one or more livestock animals 102 uniquely, enabling the identification of the one or more livestock animals 102 in multiple applications (e.g., foolproof trading, insurance, and finance) and supporting the standardization of trading of the one or more livestock animals 102.
[0091] The present invention provides the first commercial-grade face identity system in the world for the livestock animals 102 (e.g., goats and sheep). The machine learning based (ML-based) system 100 of the present invention provides better accuracy by utilizing the largest dataset of the one or more livestock animals 102. The machine learning based (ML-based) system 100 of the present invention is further configured to utilize different machine learning models that are trained for the one or more poses, including at least one of: the front pose and the side pose, of the one or more images of the one or more livestock animals 102, so that the identification of the one or more livestock animals 102 becomes more robust and accurate.
[0092] The machine learning based (ML-based) system 100 of the present invention is further configured to utilize the transfer learning method for building accurate machine learning models, such that lesser training data is required for training the machine learning models. The trained one or more machine learning models (i.e., pretrained machine learning models) that are trained using a very large data set of livestock animals, is used. The pretrained one or more machine learning models is then fine-tuned using the one or more images of the one or more livestock animals 102, thereby needing lesser training data and at the same time, improving the accuracy for identification of the one or more livestock animals 102.
[0093] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0094] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, and the like. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, an apparatus, or a device.
[0095] The medium can be an electronic, a magnetic, an optical, an electromagnetic, an infrared, or a semiconductor system (or an apparatus or a device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W) and a DVD.
[0096] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, and the like.) can be coupled to the ML-based system 100 either directly or through intervening I/O controllers. Network adapters may also be coupled to the ML-based system 100 to enable a data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
[0097] A representative hardware environment for practicing the embodiments may include a hardware configuration of an information handling/ML-based system 100 in accordance with the embodiments herein. The ML-based system 100 herein comprises at least one of: a processor or a central processing unit (CPU). The CPUs are interconnected via the system bus 216 to various devices such as a random-access memory (RAM), read-only memory (ROM), and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, such as disk units and tape drives, or other program storage devices that are readable by the ML-based system 100. The ML-based system 100 can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
[0098] The ML-based system 100 further includes a user interface adapter that connects a keyboard, a mouse, a speaker, a microphone, and/or other user interface devices such as a touch screen device (not shown) to the bus to gather user input. Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device such as a monitor, a printer, or a transmitter, for example.
[0099] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[0100] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, and the like, of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0101] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Claims:
WE CLAIM:
1. A machine learning based (ML-based) system (100) for analyzing one or more livestock animals (102), the machine learning based (ML-based) system (100) comprising:
an image capturing device (104) configured to capture one or more images associated with the one or more livestock animals (102); and
a server (110) comprising:
one or more hardware processors (220); and
a memory unit (202) coupled to the one or more hardware processors (220), wherein the memory unit (202) comprises a set of program instructions in form of a plurality of subsystems (204), configured to be executed by the one or more hardware processors (220), wherein the plurality of subsystems (204) comprises:
an image receiving subsystem (206) configured to receive the one or more images associated with the one or more livestock animals (102), captured by the image capturing device (104);
a face detection subsystem (208) configured to detect one or more face regions (704) of each livestock animal of the one or more livestock animals (102), based on the one or more images associated with the one or more livestock animals (102), by a face detection model;
a keypoint detection subsystem (210) configured to detect one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), comprising at least one of: at least one eye, a nose, and at least one ear, for at least one of: reorienting and normalizing second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), based on the one or more detected face regions (704), by a keypoint detection model;
a face identity generation subsystem (212) configured to generate a face identity for each livestock animal of the one or more livestock animals (102) by correlating the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), with one or more vectors (710), by an embedding generation model; and
a livestock animal analyzing subsystem (214) configured to analyze each livestock animal of the one or more livestock animals (102), based on the face identity generated for each livestock animal of the one or more livestock animals (102), by a machine learning (ML) model.
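The data flow claimed above (image receiving → face detection → keypoint detection → embedding generation → analysis) can be sketched as follows. This is an illustrative sketch only: the model internals are stubbed out, and all function names, coordinates, and gallery entries are hypothetical, not taken from the specification.

```python
import numpy as np

def detect_faces(image):
    # Stand-in for the claimed face detection model: returns (x, y, w, h)
    # bounding boxes for each face region found in the image.
    return [(10, 10, 64, 64)]

def detect_keypoints(face_crop):
    # Stand-in for the keypoint detection model: locates eyes and nose
    # inside the face crop (coordinates are illustrative).
    return {"left_eye": (20, 24), "right_eye": (44, 24), "nose": (32, 40)}

def embed(aligned_face):
    # Stand-in for the embedding generation model: maps an aligned face
    # crop to a fixed-length identity vector (here, mean channel values).
    return aligned_face.mean(axis=(0, 1)) / 255.0

def analyze(identity_vector, gallery):
    # Analysis step: match the identity vector against known animals by
    # cosine similarity and return the best-matching animal id.
    def cos(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(gallery, key=lambda aid: cos(identity_vector, gallery[aid]))

# Data flow between the subsystems (206) -> (208) -> (210) -> (212) -> (214):
image = np.full((128, 128, 3), 128, dtype=np.uint8)
x, y, w, h = detect_faces(image)[0]
crop = image[y:y + h, x:x + w]
keypoints = detect_keypoints(crop)   # would drive reorientation/normalization
identity = embed(crop)               # one identity vector per face
gallery = {"cow_A": np.array([0.5, 0.5, 0.5]),
           "cow_B": np.array([0.9, 0.1, 0.0])}
best = analyze(identity, gallery)
```

Only the interfaces between subsystems are shown; in the claimed system each stub would be replaced by a trained model.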

2. The machine learning based (ML-based) system (100) as claimed in claim 1, wherein the machine learning model is trained by:
obtaining the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102);
obtaining one or more labels related to the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), at the machine learning model, wherein the one or more labels comprises information related to at least one of: breed, sex, colour, and age of the one or more livestock animals (102); and
training the machine learning model by correlating the obtained second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102) with the one or more labels related to the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102),
wherein the one or more labels further comprise the information related to at least one of: face of a first livestock animal, faces of the one or more livestock animals (102) except the first livestock animal, information associated with eyes, mouth, and nose, of the one or more livestock animals (102), and a quality of the one or more images of the one or more livestock animals (102) captured from one or more angles.
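One way the attribute labels named in claim 2 (breed, sex, colour, age) could be turned into numeric training targets is sketched below. The label vocabularies are hypothetical examples; the claim fixes no particular values.

```python
import numpy as np

# Hypothetical label vocabularies for illustration only.
BREEDS = ["gir", "sahiwal", "jersey"]
SEXES = ["female", "male"]
COLOURS = ["brown", "white", "black"]

def encode_labels(record):
    # Turn one animal's attribute labels into a numeric training target:
    # one-hot vectors for the categorical fields, age kept as a scalar.
    breed = np.eye(len(BREEDS))[BREEDS.index(record["breed"])]
    sex = np.eye(len(SEXES))[SEXES.index(record["sex"])]
    colour = np.eye(len(COLOURS))[COLOURS.index(record["colour"])]
    return np.concatenate([breed, sex, colour, [float(record["age"])]])

target = encode_labels({"breed": "jersey", "sex": "female",
                        "colour": "brown", "age": 4})
```

Training then correlates each aligned face image with its encoded target, as the claim recites.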

3. The machine learning based (ML-based) system (100) as claimed in claim 1, wherein in analyzing, by the machine learning model, each livestock animal of the one or more livestock animals (102), based on the face identity generated for each livestock animal of the one or more livestock animals (102), the livestock animal analyzing subsystem (214) is configured to:
obtain a second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals (102), at the machine learning model;
compare the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals (102), with pre-determined data associated with face identity generated for each livestock animal of the one or more livestock animals (102); and
analyze each livestock animal of the one or more livestock animals (102), based on the comparison of the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals (102) with the pre-determined data associated with the face identity generated for each livestock animal of the one or more livestock animals (102).
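The comparison step of claim 3 (query data against pre-determined face-identity data) can be sketched as a thresholded nearest-neighbour match. The threshold value and the `None` return for an unknown animal are illustrative assumptions, not claim limitations.

```python
import numpy as np

def match_identity(query, enrolled, threshold=0.8):
    # Compare the query face-identity vector with the pre-determined
    # (enrolled) vectors; return the best animal id when its cosine
    # similarity clears the threshold, else None (unknown animal).
    best_id, best_sim = None, -1.0
    for animal_id, ref in enrolled.items():
        sim = float(np.dot(query, ref) /
                    (np.linalg.norm(query) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = animal_id, sim
    return best_id if best_sim >= threshold else None
```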

4. The machine learning based (ML-based) system (100) as claimed in claim 1, wherein in generating, by the embedding generation model, the face identity for each livestock animal of the one or more livestock animals (102), the face identity generation subsystem (212) is configured to:
generate the one or more vectors (710) corresponding to the second one or more images (708) of the one or more face regions (704) of the one or more livestock animals (102);
correlate the second one or more images (708) of the one or more face regions (704) of the one or more livestock animals (102), with the one or more vectors (710); and
generate the face identity for each livestock animal of the one or more livestock animals (102) based on the correlation of the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), with the one or more vectors (710).
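One plausible reading of claim 4's correlation step is fusing the per-image vectors (710) of a single animal into one face identity; a minimal sketch, assuming L2-normalized averaging (the specification does not fix the fusion method):

```python
import numpy as np

def make_identity(vectors):
    # Fuse the per-image vectors of one animal into a single face
    # identity: average the L2-normalized vectors, then renormalize so
    # every identity lies on the unit sphere and is directly comparable.
    normed = [np.asarray(v, dtype=float) / np.linalg.norm(v) for v in vectors]
    identity = np.mean(normed, axis=0)
    return identity / np.linalg.norm(identity)
```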

5. The machine learning based (ML-based) system (100) as claimed in claim 4, wherein each of the one or more vectors (710) associated with the each of the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), comprises a unique identity for each livestock animal of the one or more livestock animals (102).

6. The machine learning based (ML-based) system (100) as claimed in claim 1, wherein in detecting, by the keypoint detection model, the one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), the keypoint detection subsystem (210) is configured to:
obtain the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), from the face detection subsystem (208);
generate one or more keypoints for each part of the one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), wherein the one or more keypoints are constant when the one or more images of the one or more parts (706) in the one or more face regions (704) are at least one of: rotated, and distorted; and
detect one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102) comprising at least one of: at least one eye, a nose, and at least one ear, based on the one or more keypoints generated for each part of the one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102).
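Claim 6 recites keypoints that stay constant under rotation and distortion; because each keypoint remains attached to the same anatomical part, the angle of the eye line recovers the face tilt used for reorientation. A minimal sketch (function name and coordinates are illustrative):

```python
import numpy as np

def face_tilt_degrees(left_eye, right_eye):
    # The eye keypoints follow the same anatomical parts when the image
    # is rotated, so the angle of the line between them recovers the
    # face tilt; rotating the crop by -angle levels the eyes and
    # normalizes the face image.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))
```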

7. The machine learning based (ML-based) system (100) as claimed in claim 1, wherein the face detection subsystem (208) is configured to detect the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), based on the one or more images of the one or more livestock animals (102), captured from the one or more angles.

8. The machine learning based (ML-based) system (100) as claimed in claim 1, wherein each livestock animal of the one or more livestock animals (102) is analyzed by:
training one or more machine learning models for one or more poses of each livestock animal of the one or more livestock animals (102), on a large dataset of the one or more livestock animals (102);
adding data associated with at least one of: breed, sex and age, of each livestock animal of the one or more livestock animals (102) into the one or more machine learning models during training the one or more machine learning models; and
generating a combination of two or more machine learning models trained for the one or more poses of each livestock animal of the one or more livestock animals (102) to analyze the one or more livestock animals (102).
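The combination of pose-specific models in claim 8 can be sketched as a confidence-weighted ensemble. The weighting scheme below is one assumed possibility; the claim does not fix how the models are combined.

```python
def combine_pose_models(predictions):
    # Each pose-specific model reports (pose_confidence, class_scores);
    # the combination weights every model's scores by how confident it
    # is in the detected pose, then picks the top class overall.
    total = {}
    for confidence, scores in predictions:
        for label, score in scores.items():
            total[label] = total.get(label, 0.0) + confidence * score
    return max(total, key=total.get)
```

For example, a frontal-pose model that is highly confident dominates a profile-pose model that barely fires, so the ensemble follows the model best matched to the animal's pose.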

9. The machine learning based (ML-based) system (100) as claimed in claim 8, wherein the trained one or more machine learning models are fine-tuned based on the one or more images of the one or more livestock animals (102), which adapts the trained one or more machine learning models to require less training data for training the one or more machine learning models, by a transfer learning method.
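Transfer learning as recited in claim 9 can be sketched as an update step that freezes the pretrained backbone and adapts only a small task head, which is why far less livestock-specific data is needed. Parameter names and the `frozen_prefix` convention are illustrative assumptions.

```python
def fine_tune_step(params, grads, lr=0.01, frozen_prefix="backbone."):
    # Parameters whose names carry the frozen prefix keep their
    # pretrained values; only the task-head parameters take a gradient
    # step on the new livestock images.
    return {name: w if name.startswith(frozen_prefix) else w - lr * grads[name]
            for name, w in params.items()}
```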

10. A machine learning based (ML-based) method (1500) for analyzing one or more livestock animals (102), the machine learning based (ML-based) method (1500) comprising:
capturing (1502), by an image capturing device (104), one or more images associated with the one or more livestock animals (102);
receiving (1504), by one or more hardware processors (220), the one or more images associated with the one or more livestock animals (102), captured by the image capturing device (104);
detecting (1506), by the one or more hardware processors (220), one or more face regions (704) of each livestock animal of the one or more livestock animals (102), based on the one or more images associated with the one or more livestock animals (102), by a face detection model;
detecting (1508), by the one or more hardware processors (220), one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), comprising at least one of: at least one eye, a nose, and at least one ear, for at least one of: reorienting and normalizing second one or more images (708) corresponding to one or more face regions (704) of the one or more livestock animals (102), based on the one or more detected face regions (704), by a keypoint detection model;
generating (1510), by the one or more hardware processors (220), a face identity for each livestock animal of the one or more livestock animals (102) by correlating the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), with one or more vectors (710), by an embedding generation model; and
analyzing (1512), by the one or more hardware processors (220), each livestock animal of the one or more livestock animals (102), based on the face identity generated for each livestock animal of the one or more livestock animals (102), by a machine learning (ML) model.

11. The machine learning based (ML-based) method (1500) as claimed in claim 10, wherein the machine learning model is trained by:
obtaining, by the one or more hardware processors (220), the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102);
obtaining, by the one or more hardware processors (220), one or more labels related to the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), at the machine learning model, wherein the one or more labels comprises information related to at least one of: breed, sex, colour, and age of the one or more livestock animals (102), face of a first livestock animal, faces of the one or more livestock animals (102) except the first livestock animal, information associated with eyes, mouth, and nose, of the one or more livestock animals (102), and a quality of the one or more images of the one or more livestock animals (102) captured from one or more angles; and
training, by the one or more hardware processors (220), the machine learning model by correlating the obtained second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102) with the one or more labels related to the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102),

12. The machine learning based (ML-based) method (1500) as claimed in claim 10, wherein analyzing (1512), by the machine learning model, each livestock animal of the one or more livestock animals (102), based on the face identity generated for each livestock animal of the one or more livestock animals (102), comprises:
obtaining, by the one or more hardware processors (220), a second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals (102), at the machine learning model;
comparing, by the one or more hardware processors (220), the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals (102), with pre-determined data associated with face identity generated for each livestock animal of the one or more livestock animals (102); and
analyzing, by the one or more hardware processors (220), each livestock animal of the one or more livestock animals (102), based on the comparison of the second plurality of data associated with the face identity generated for each livestock animal of the one or more livestock animals (102) with the pre-determined data associated with the face identity generated for each livestock animal of the one or more livestock animals (102).

13. The machine learning based (ML-based) method (1500) as claimed in claim 10, wherein generating (1510), by an embedding generation model, the face identity for each livestock animal of the one or more livestock animals (102), comprises:
generating, by the one or more hardware processors (220), the one or more vectors (710) corresponding to the second one or more images (708) of the one or more face regions (704) of the one or more livestock animals (102);
correlating, by the one or more hardware processors (220), the second one or more images (708) of the one or more face regions (704) of the one or more livestock animals (102), with the one or more vectors (710); and
generating, by the one or more hardware processors (220), the face identity for each livestock animal of the one or more livestock animals (102) based on the correlation of the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), with the one or more vectors (710).

14. The machine learning based (ML-based) method (1500) as claimed in claim 13, wherein each of the one or more vectors (710) associated with the each of the second one or more images (708) corresponding to the one or more face regions (704) of the one or more livestock animals (102), comprises a unique identity for each livestock animal of the one or more livestock animals (102).

15. The machine learning based (ML-based) method (1500) as claimed in claim 10, wherein detecting (1508), by the keypoint detection model, the one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), comprises:
obtaining, by the one or more hardware processors (220), the one or more face regions (704) of each livestock animal of the one or more livestock animals (102);
generating, by the one or more hardware processors (220), one or more keypoints for each part of the one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), wherein the one or more keypoints are constant when the one or more images of the one or more parts (706) in the one or more face regions (704) are at least one of: rotated, and distorted; and
detecting, by the one or more hardware processors (220), one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102) comprising at least one of: at least one eye, a nose, and at least one ear, based on the one or more keypoints generated for each part of the one or more parts (706) in the one or more face regions (704) of each livestock animal of the one or more livestock animals (102).

16. The machine learning based (ML-based) method (1500) as claimed in claim 10, wherein the one or more face regions (704) of each livestock animal of the one or more livestock animals (102), is detected based on the one or more images of the one or more livestock animals (102), captured from the one or more angles.

17. The machine learning based (ML-based) method (1500) as claimed in claim 10, wherein analyzing (1512) each livestock animal of the one or more livestock animals (102) comprises:
training (1202, 1204), by the one or more hardware processors (220), one or more machine learning models for one or more poses of each livestock animal of the one or more livestock animals (102), on one or more large datasets of the one or more livestock animals (102);
adding, by the one or more hardware processors (220), data associated with at least one of: breed, sex and age, of each livestock animal of the one or more livestock animals (102) into the one or more machine learning models during training the one or more machine learning models; and
generating (1206, 1208), by the one or more hardware processors (220), a combination of the one or more machine learning models trained for the one or more poses of each livestock animal of the one or more livestock animals (102) to analyze the one or more livestock animals (102).

18. The machine learning based (ML-based) method (1500) as claimed in claim 17, wherein the trained one or more machine learning models are fine-tuned based on the one or more images of the one or more livestock animals (102), which adapts the trained one or more machine learning models to require less training data for training the one or more machine learning models, by a transfer learning method.

Dated this 10th Day of July 2023

Vidya Bhaskar Singh Nandiyal
Patent Agent (IN/PA-2912)
IPexcel Services Private Limited
AGENT FOR APPLICANTS

Documents

Application Documents

# Name Date
1 202341046413-STATEMENT OF UNDERTAKING (FORM 3) [10-07-2023(online)].pdf 2023-07-10
2 202341046413-PROOF OF RIGHT [10-07-2023(online)].pdf 2023-07-10
3 202341046413-FORM FOR STARTUP [10-07-2023(online)].pdf 2023-07-10
4 202341046413-FORM FOR SMALL ENTITY(FORM-28) [10-07-2023(online)].pdf 2023-07-10
5 202341046413-FORM 1 [10-07-2023(online)].pdf 2023-07-10
6 202341046413-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-07-2023(online)].pdf 2023-07-10
7 202341046413-EVIDENCE FOR REGISTRATION UNDER SSI [10-07-2023(online)].pdf 2023-07-10
8 202341046413-DRAWINGS [10-07-2023(online)].pdf 2023-07-10
9 202341046413-DECLARATION OF INVENTORSHIP (FORM 5) [10-07-2023(online)].pdf 2023-07-10
10 202341046413-COMPLETE SPECIFICATION [10-07-2023(online)].pdf 2023-07-10
11 202341046413-FORM-26 [13-07-2023(online)].pdf 2023-07-13