
System And Method For Identification Of Cattle & Buffaloes

Abstract: A method and a system are disclosed herein for facilitating identification of cattle. A cattle identification module is configured to receive, in real time, a muzzle image of the cattle, being shot in real time by a user via a camera module 102 associated with an electronic device 104. The received muzzle image is processed in real time to identify whether the muzzle image is a ‘good quality image’ or a ‘bad quality image’ based on a plurality of predefined parameters. A feedback notification is sent to the electronic device 104 upon identifying the muzzle image as a ‘bad quality image’. Thereafter, the user is prompted to align the camera module 102 as indicated in the received feedback notification. The camera module 102 is activated to automatically capture the muzzle image with good quality, without requiring the user to manually trigger the camera module 102.


Patent Information

Application #
Filing Date
21 April 2021
Publication Number
43/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
aditya@ira.law
Parent Application

Applicants

Dvara E-Dairy Solutions Private Limited
10th Floor, IIT M Research park, Kanagam, Taramani, Chennai

Inventors

1. Ravi. K.A
Plot No. 9, First floor, Mathru Krupa, PVS Nagar 2nd Street, Thoraipakkam, Chennai 600097

Specification

SYSTEM AND METHOD FOR IDENTIFICATION OF CATTLE & BUFFALOES
TECHNICAL FIELD
[0001] The present subject matter generally relates to digital identification of cattle and buffaloes and, more particularly, to an automated system and method for identifying and recording unique features of cattle and buffaloes.
BACKGROUND
[0002] Accurate identification of cattle is an important facet of growth of the dairy industry. Providing cattle with a unique and tamper-proof digital identity is critical for tracking the productivity of cattle and managing their health. Moreover, providing a unique identity to each animal avoids duplication of assets while offering financial services such as loans and insurance, thereby mitigating the risk of multiple loans against the same cattle and ensuring correct identification of the cattle at the time of insurance claim settlement.
[0003] Traditionally, tracking cattle has involved tattooing, ear notching, ear tags and branding. These older methods lacked accuracy and techniques such as tattooing, branding, ear piercing could also cause physical pain to the cattle. In recent times and with the advent of digital architecture, technologies such as barcodes and radio frequency identification (RFID) tags have been used. These modern technologies are expensive and often unaffordable for developing countries such as India.
[0004] Studies have shown that the ridges and valleys of the skin of bovines (cows and buffaloes, all breeds, calf to adult) are unique, much like human fingerprints. Thus, bovine muzzle patterns have been used as a unique identifier for cattle identification. However, capturing images of muzzle patterns has proven to be challenging. Since animals are prone to move their heads frequently and are not trained for their images to be clicked, it is difficult to capture an image which contains all the requisite attributes such as (i) a full image of the muzzle, (ii) a muzzle image without foreign objects/materials such as water, mucus, flies, mud or other particles, and (iii) suitable lighting. In view of these challenges, a user handling the imaging device needs to be trained to improve the quality of muzzle images at the time of collecting the images. The user is expected to gain the requisite skill to achieve this, and skill building takes time. This impacts scalability of any solution which is largely dependent on human skill in capturing images.
[0005] To be usable for the purposes of cattle identification, the muzzle images are required to be of a quality suitable for various algorithms used for matching the muzzle image against a given database. The methods available in the art are unable to accurately capture muzzle images. Since the images being captured are of a moving object, this poses several challenges. The muzzles captured as per the solutions available in the art are often not captured in full. Sometimes they are captured with water, mucus, flies, mud or other particles in/on the muzzle. Sometimes there is poor lighting or reflection of the sun. These factors significantly impair the usefulness of the muzzle image for identification.
[0006] The key challenges faced while developing a solution for the problems as identified in the prior art are:
(i) capturing quality images of the cattle;
(ii) identifying appropriate orientation of the muzzle in the image and appropriate environmental conditions (lighting, background, time of day, artificial light), which have an impact on the accuracy;
(iii) developing a solution which is easy to use at last mile;
(iv) establishing accuracy of match at the time of claim validation by various stakeholders in dairy ecosystem;
(v) choosing an algorithm which is capable of being scaled at an affordable price given the constraints in the industry;
(vi) leveraging existing infrastructure at last mile; and
(vii) capturing muzzle images within a short time.
[0007] Therefore, there is a well-felt need for a system and method for accumulating improved quality bovine muzzle patterns, thereby increasing the utility of such accumulated bovine patterns for cattle identification purposes.
SUMMARY
[0008] In view of the above, it is an object of the present subject matter to capture improved quality images of cattle.
[0009] It is another object of the present subject matter to identify specific body parts of the cattle with unique identifiers such as bovine muzzle with significantly improved quality suitable for matching the images against a database of bovine muzzle images.
[00010] It is yet another object of the present subject matter to capture images of body parts of the cattle with unique identifiers such as the bovine muzzle after identifying the appropriate orientation of the muzzle in the image and appropriate environment lighting.
[00011] It is yet another object of the present subject matter to classify, based on quality, the images in real-time as being appropriate or not and providing feedback to the user.
[00012] According to an embodiment of the present disclosure, a method is disclosed for facilitating identification of cattle. The method comprises configuring an electronic device to execute a cattle identification module via a server in a communication network; configuring the cattle identification module for: receiving in real-time, a muzzle image of the cattle, the muzzle image being captured live by a user via a camera module associated with the electronic device; processing, in real-time, the received muzzle image to thereby identify the muzzle image either as ‘good quality image’ or ‘bad quality image’ based on a plurality of predefined parameters; sending a feedback notification to the electronic device upon identifying the muzzle image as a ‘bad quality image’; prompting the user to align the camera module as indicated in the received feedback notification; and triggering the camera module to automatically capture the muzzle image with good quality, without requiring the user to manually trigger the camera module.
[00013] According to an embodiment of the present disclosure the cattle identification module is further configured to identify unique patterns and/or unique identification marks present in the muzzle image and/or any body parts of the cattle being clicked by the camera module.
[00014] According to an embodiment of the present disclosure the cattle identification module is an Artificial Intelligence based (AI-based) module that executes machine learning algorithms for learning unique biometric identification of the cattle over a period of time.
[00015] According to an embodiment of the present disclosure the cattle identification module is further configured to generate a guiding frame that is displayed on the electronic device to enable the user to align the camera module in correct position to capture the muzzle image with good quality in real time.
[00016] According to an embodiment of the present disclosure the cattle identification module is further configured to process the image being captured inside the guiding frame in real time.
[00017] According to an embodiment of the present disclosure, the cattle identification module is further configured to filter out any portion of the image captured outside the guiding frame.
[00018] According to an embodiment of the present disclosure, the guiding frame is turned ‘green’ in colour to indicate that the muzzle image being captured is a good quality image, whereas the guiding frame is turned ‘red’ in colour to indicate that the muzzle image being captured is a bad quality image.
[00019] According to an embodiment of the present disclosure, a system for facilitating identification of cattle, the system comprising: an electronic device configured to execute a cattle identification module via a server in a communication network; a camera module associated with the electronic device; the cattle identification module configured to: receive in real-time, a muzzle image of the cattle, the muzzle image being captured live by a user via the camera module; process, in real-time, the received muzzle image to thereby identify the muzzle image either as ‘good quality image’ or ‘bad quality image’ based on a plurality of predefined parameters; send a feedback notification to the electronic device upon identifying the muzzle image as a ‘bad quality image’; prompt the user to align the camera module as indicated in the received feedback notification; trigger the camera module to automatically capture the muzzle image with good quality, without requiring the user to manually trigger the camera module.
[00020] The afore-mentioned objectives and additional aspects of the embodiments herein will be better understood when read in conjunction with the following description and accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. This section is intended only to introduce certain objects and aspects of the present invention, and is therefore, not intended to define key features or scope of the subject matter of the present invention.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[00021] The present invention, both as to its organization and manner of operation, together with further objects and advantages, may best be understood by reference to the following description, taken in connection with the accompanying drawings. These and other details of the present invention will be described in connection with the accompanying drawings, which are furnished only by way of illustration and not in limitation of the invention, and in which drawings:
[00022] FIG. 1 illustrates a system for facilitating identification of cattle, in accordance with one embodiment of the present subject matter.
[00023] FIG. 2 illustrates screenshots of bovine images depicting “good” and “bad” quality image, in accordance with one embodiment of the present subject matter.
[00024] FIG. 3 is a flow diagram illustrating bovine validation algorithm, in accordance with one embodiment of the present subject matter.
[00025] FIG. 4 is a flow diagram illustrating various steps involved in a method for facilitating identification of cattle, according to an embodiment of the present subject matter.
[00026] Like reference numerals refer to like parts throughout the description of several views of the drawings.
DETAILED DESCRIPTION
[00027] The following presents a detailed description of various embodiments of the present subject matter with reference to the accompanying drawings.
[00028] The embodiments of the present subject matter are described in detail with reference to the accompanying drawings. However, the present subject matter is not limited to these embodiments which are only provided to explain more clearly the present subject matter to a person skilled in the art of the present disclosure. In the accompanying drawings, like reference numerals are used to indicate like components.
[00029] The specification may refer to “an”, “one”, “different” or “some” embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
[00030] As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being “attached” or “connected” or “coupled” or “mounted” to another element, it can be directly attached or connected or coupled to the other element or intervening elements may be present. As used herein, the term “and/or” includes all combinations and arrangements of one or more of the associated listed items.
[00031] The figures depict a simplified structure only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown.
[00032] As used herein, ‘server’ is a computer-based device or a system that facilitates providing data, services, or programs to various client devices over a communication network. The server may include one or more intelligent processing devices or modules, capable of processing digital logic and also possessing analytical capabilities for analyzing and processing various data or information, according to the embodiments of the present invention.
[00033] As used herein, ‘electronic device’ is a smart electronic device capable of communicating with various other electronic devices and applications via one or more communication networks. Examples of said electronic device include, but are not limited to, a wireless communication device, a smart phone, a tablet, a desktop, a laptop, etcetera. The electronic device comprises: an input unit to receive one or more input data; an operating system to enable the electronic device to operate; a processing unit to process various data and information; a memory unit to store initial data, intermediary data and final data; and an output unit having a graphical user interface (GUI).
[00034] As used herein, ‘storage device’ refers to a local or remote storage device, docket system, or database system capable of storing information such as captured images, user profiles, historical data, etcetera. In an embodiment, the storage device may be a database server, a cloud storage, a remote database, a local database, or a storage unit.
[00035] As used herein, ‘module’ or ‘unit’ refers to a device, a system, a hardware, a computer application configured to execute specific functions or instructions according to the embodiments of the present invention. The module or unit may include a single device or multiple devices configured to perform specific functions according to the present invention disclosed herein.
[00036] As used herein, ‘communication network’ includes a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a virtual private network (VPN), an enterprise private network (EPN), Internet, and a global area network (GAN).
[00037] Terms such as ‘connect’, ‘integrate’, ‘configure’, and other similar terms include a physical connection, a wireless connection, a logical connection, or a combination of such connections including electrical, optical, RF, infrared, Bluetooth, or other transmission media, and include configuration of software applications to execute computer program instructions, as specific to the presently disclosed embodiments, or as may be obvious to a person skilled in the art.
[00038] Terms such as ‘send’, ‘transfer’, ‘transmit’, ‘receive’, ‘collect’, ‘obtain’, ‘access’ and other similar terms refer to transmission of data between various modules and units via wired or wireless connections across a communication network.
[00039] FIG. 1 illustrates a system for facilitating identification of cattle, in accordance with one embodiment of the present subject matter. The system comprises a server 108, an electronic device 104, a camera module 102 associated with the electronic device 104, and a database 112 communicably connected to each other in a communication network 106. An executable cattle identification module 110 may be configured on the server 108. The electronic device 104 may be configured to operate the cattle identification module 110 via the server 108. The cattle identification module 110 may be configured on the electronic device 104 by following download instructions by respective registered users associated with the electronic device 104.
[00040] The cattle identification module 110 is configured for facilitating a user to capture good quality images of cattle. The electronic device 104 is configured to execute the cattle identification module 110 via the server 108 connected in the communication network 106. The camera module 102 is associated with the electronic device 104 to capture images of the cattle and buffaloes in real time. The cattle identification module 110 is configured to receive in real-time, a muzzle image of the cattle. The muzzle image can be captured live by a user via the camera module 102 associated with the electronic device 104. The camera module 102 may be an in-built camera of the electronic device 104 or may be an external or separate camera device. The muzzle images received by the cattle identification module 110 are processed, in real-time, to thereby identify the muzzle image either as ‘good quality image’ or ‘bad quality image’ based on a plurality of predefined parameters.
[00041] Further, the cattle identification module 110 sends a feedback notification to the electronic device 104 upon identifying the muzzle image as a ‘bad quality image’ and prompts the user to align the camera module 102 as indicated in the received feedback notification. The feedback notification is sent in real time based on the prediction of the cattle identification module 110 being executed by the electronic device 104. Once the cattle identification module 110 acknowledges that the muzzle image being captured is of good quality, it triggers the camera module 102 to automatically capture the muzzle image with good quality, without requiring the user to manually trigger the camera module 102. An option for manually clicking the camera button is also provided to allow the user to capture the images manually.
[00042] According to an embodiment, the live images or video feed is obtained through the electronic device 104 having an in-built camera or a separate camera device. The live images of the bovine include the biometric muzzle pattern that is analysed in order to determine bovine identity. Any other image in the video feed which does not show the biometric pattern may be ignored. Using lightweight classification models (Model: ResNet50, Model size: 4 Mb, Classes: Muzzle/Non-Muzzle, Image size: 128*128, fp16 quantized, latency of ~200 ms, trained on a data size of 3000+ images with a 2:1 ratio, 99%+ accuracy in field testing) deployed via the cattle identification module 110, the live video feed may be analysed frame-wise and only the muzzle image in a frame is recognised in real time. Any frames with random background are not processed.
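The frame-wise filtering described above can be sketched as a simple loop. The classifier here is a hypothetical stub standing in for the quantized ResNet50 model named in the specification; only the control flow (keep muzzle frames, drop random-background frames) is illustrated.

```python
def classify_frame(frame):
    """Hypothetical stand-in for the muzzle/non-muzzle classifier.
    Returns a (label, confidence) pair, as a real model wrapper might."""
    # Toy rule for illustration only: trust a per-frame annotation.
    return ("muzzle", 0.97) if frame.get("has_muzzle") else ("non_muzzle", 0.88)

def filter_muzzle_frames(video_feed, threshold=0.9):
    """Keep only frames classified as 'muzzle' above a confidence threshold;
    frames with random background are dropped without further processing."""
    kept = []
    for frame in video_feed:
        label, score = classify_frame(frame)
        if label == "muzzle" and score >= threshold:
            kept.append(frame)
    return kept

feed = [{"has_muzzle": True}, {"has_muzzle": False}, {"has_muzzle": True}]
print(len(filter_muzzle_frames(feed)))  # 2
```

In a deployment, `classify_frame` would wrap an on-device inference call (e.g. a TFLite interpreter); the threshold value shown is an assumption, not from the specification.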
[00043] An ‘Auto Capture’ toggle option may be provided with the cattle identification module 110 to facilitate the user to choose the option of triggering the camera module 102 automatically. Once the muzzle image is recognized in a frame, the ‘Auto Capture’ toggle option becomes active and the user may turn it ‘ON’ to automatically capture a muzzle image without having to manually click the camera button. This helps the user to capture the best frame as there could be a delay from the user in capturing the same or the bovine may move its head during the time frame.
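The ‘Auto Capture’ behaviour above can be modelled as a small state machine: the toggle only becomes selectable once a muzzle is recognised in frame, and when the user has turned it ON, the first good-quality frame is captured without a manual button press. Class and method names are illustrative, not from the patent.

```python
class AutoCapture:
    def __init__(self):
        self.active = False   # toggle stays inactive until a muzzle is seen
        self.enabled = False  # the user's ON/OFF choice
        self.captured = None

    def on_frame(self, muzzle_detected, good_quality, frame):
        if muzzle_detected:
            self.active = True  # 'Auto Capture' toggle becomes selectable
        if self.active and self.enabled and good_quality and self.captured is None:
            self.captured = frame  # auto-capture: no manual trigger needed
        return self.captured

ac = AutoCapture()
ac.on_frame(muzzle_detected=True, good_quality=False, frame="f1")
ac.enabled = True  # user turns the toggle ON
ac.on_frame(muzzle_detected=True, good_quality=True, frame="f2")
print(ac.captured)  # f2
```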
[00044] According to an embodiment of the present disclosure the cattle identification module 110 is further configured to generate a guiding frame that is displayed on the electronic device 104 to enable the user to align the camera module 102 in correct position to capture the muzzle image with good quality in real time. The image being captured inside the guiding frame is processed in real time. Any portion of the image captured outside the guiding frame is filtered out.
[00045] The guiding frame plane works preferably on 2x2 blocks. Accordingly, all dimensions with odd size are rounded up. Each 2x2 block takes 2 bytes to encode, one each per frame. The images are processed by the cattle identification module 110 by clipping RGB values to lie within the boundaries between 0 and 26,02,143.
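A minimal sketch of the two operations just described: rounding odd dimensions up so the plane tiles into 2x2 blocks, and clipping pixel values into the stated range (26,02,143 in the specification's Indian digit grouping). Function names are illustrative assumptions.

```python
def round_up_even(n):
    """Odd dimensions are rounded up so the plane divides into 2x2 blocks."""
    return n + (n % 2)

def clip_frame_values(values, lo=0, hi=2602143):
    """Clip values into [0, 26,02,143], the boundaries given in the text."""
    return [min(max(v, lo), hi) for v in values]

print(round_up_even(5))                          # 6
print(clip_frame_values([-5, 100, 9999999]))     # [0, 100, 2602143]
```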
[00046] The database 112 of muzzle images stores the muzzle images of cattle that are uploaded by various service providers, for example financial service providers, insurance service providers etc., while offering services to farmers, in order to store the records for validating bovine identity. In an event, for example the death of the cattle, the validation of the bovine may be required against muzzle images of dead cattle. Accordingly, muzzle images of the dead cattle may be analysed and compared with the data stored in the database 112 as mentioned above. Subsequently, based on the comparison results, further processing of insurance claims or the like may be carried out by the service providers.
[00047] FIG. 2 illustrates screenshots of bovine images depicting “good” and “bad” quality images, in accordance with one embodiment of the present subject matter. The cattle identification module 110 is further configured to set the frame configuration using previewWidth, previewheight and sensor orientation. According to an embodiment, the electronic device may be associated with a sensor to detect the modes of orientation of the muzzle image. The orientation modes may include landscape orientation and portrait orientation. All the images may preferably be captured in portrait mode, except for the side view of the cattle. The sensor detects the orientation of the image and may send feedback to the user to switch to portrait/landscape based on the type of image capture. Further, the sensor may be provided with an in-built gyroscope to provide the orientation and angle of the camera and of the image captured.
[00048] A frame counter is initiated, which may be incremented during detection of images for every frame. Based on the frame counter, the threshold value will be reduced by 10% to suit the environment. Thereafter, the cattle identification module 110 detects the muzzle/non-muzzle images. If the received image is of a muzzle, it will be highlighted with a red or green box/guiding frame 212 depending on the quality of the muzzle image. As shown in the figure, the guiding frame 212 is turned ‘green’ in colour to indicate that the muzzle image being captured is a good quality image 202, 206. On the other hand, the guiding frame 212 is turned ‘red’ in colour to indicate that the muzzle image being captured is a bad quality image 204, 208.
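One reading of the threshold relaxation above is a 10% multiplicative reduction per counted frame, sketched below. The starting threshold and the multiplicative interpretation are assumptions; the specification gives only the 10% figure.

```python
def decayed_threshold(initial, frame_count, decay=0.10):
    """Relax the detection threshold by 10% of its current value per frame,
    so detection becomes progressively more permissive in hard environments."""
    t = initial
    for _ in range(frame_count):
        t *= (1.0 - decay)
    return t

print(round(decayed_threshold(0.9, 2), 4))  # 0.729
```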
[00049] According to an embodiment of the present disclosure, the muzzle image 202, 204, 206, 208 is processed to check if it is from a bovine on the field (alive or dead bovine) or a digital image (Image of a muzzle captured from a laptop/phone screen). This is done using a custom model (Model ResNet50, Model size : 4Mb, Classes : Field/ Digital, Image size : 128*128, fp16 Quantized, latency of ~200ms, trained on data size of 10000+ images from each class) trained on real field images and images captured using several different electronic devices 104 from different digital screens.
[00050] Once the muzzle image in a frame is identified as a field muzzle image from a bovine, the quality of the muzzle image is analysed using a classification model (Model: EfficientNet Lite2, Model size: 7 Mb, Classes: Good/Poor_Lighting/Glossy/Dirty/Bad_angle/Blurry, Image size: 128*128, fp16 quantized, latency of ~300 ms, trained on a data size of 500+ images from each class, 97%+ accuracy in field testing). A good quality muzzle image is a muzzle which has distinct and clear patterns without any dirt, sweat, mucus, water droplets, sun reflection or blurriness, and is captured under good lighting. The good quality muzzle image is highlighted with a green frame. Images not meeting the above criteria are highlighted with a red box and the user is provided with relevant real-time feedback/toast messages or notifications 210. For example, if the muzzle is under low lighting, feedback notification 210 may be given as “Please move the cattle to a better lighting condition or use artificial lighting”. In another example, if the muzzle has sweat droplets, feedback notification 210 may be sent to the electronic device 104, asking the user to wipe the muzzle with a dry cloth so that better images may be recorded.
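The class-to-feedback mapping above can be sketched as a lookup table. The class names follow the specification's list (Good/Poor_Lighting/Glossy/Dirty/Bad_angle/Blurry); the message texts other than the low-lighting example are paraphrased assumptions.

```python
FEEDBACK = {
    # Message taken from the specification's example:
    "Poor_Lighting": "Please move the cattle to a better lighting condition or use artificial lighting",
    # Remaining messages are illustrative paraphrases:
    "Glossy": "Please wipe the muzzle with a dry cloth",
    "Dirty": "Please clean the muzzle before capturing",
    "Bad_angle": "Please hold the camera parallel to the muzzle",
    "Blurry": "Please hold the camera steady",
}

def frame_colour_and_feedback(quality_class):
    """Good images get a green guiding frame and no toast; every other
    class gets a red frame plus a corrective feedback notification."""
    if quality_class == "Good":
        return "green", None
    return "red", FEEDBACK.get(quality_class, "Please retake the image")

print(frame_colour_and_feedback("Good"))               # ('green', None)
print(frame_colour_and_feedback("Poor_Lighting")[0])   # red
```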
[00051] In addition to the process of filtration as discussed above, the angle and distance of the muzzle with respect to the camera module 102 are analysed using OpenCV algorithms (using the focal length and the muzzle width and height from the detection model as described above). The muzzle is expected to be parallel to the camera module 102 and at a distance of 2-3 ft, so that the muzzle patterns are identifiable. The user is prompted in real time with 2D arrows to move the camera left, right, top or bottom, and with toast messages to move closer/farther, to position the muzzle at the desired angle and distance to get a good quality muzzle image highlighted by a green box.
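The distance check can be derived from the pinhole-camera relation distance = focal_length x real_width / pixel_width. The sketch below uses that relation with an assumed average bovine muzzle width; the 2-3 ft target range is from the specification, the other numbers are illustrative.

```python
def estimate_distance_ft(focal_px, muzzle_width_px, real_width_ft=0.33):
    """Pinhole-camera distance estimate. real_width_ft (~4 in) is an
    assumed typical muzzle width, not a figure from the specification."""
    return focal_px * real_width_ft / muzzle_width_px

def distance_feedback(distance_ft, lo=2.0, hi=3.0):
    """Map the estimate to the closer/farther toast messages."""
    if distance_ft < lo:
        return "move farther"
    if distance_ft > hi:
        return "move closer"
    return "ok"

d = estimate_distance_ft(focal_px=1500, muzzle_width_px=200)
print(distance_feedback(d))  # ok
```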
[00052] In an event when any of the above criteria or the predefined parameters are not met, the next frame is analysed again. However, if all the criteria are met and a good muzzle image is detected, it is highlighted with a green box. The image frame is then auto-captured and sent for further processing.
[00053] The accepted good quality muzzle image is automatically cropped using lightweight detection models (YoloV5 Small, model size: 7 Mb, Int8 quantized, latency ~150 ms, Image size: 256*256, Classes: Muzzle, trained on a dataset of 5000+ images) marking the four corner points of the muzzle. The detection algorithm is trained to mark the corner points and crop automatically. The nostrils and other areas in the image are removed and only the portion with patterns is auto-cropped for further analysis.
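The corner-point crop in the next step — cutting the region bounded by (x1,y1)..(x4,y4) out of the captured frame — can be sketched as an axis-aligned bounding-box crop. A perspective warp over the four points would be a closer fit; the plain crop is shown for brevity and is an assumption about the implementation.

```python
import numpy as np

def crop_muzzle(image, corners):
    """Crop the axis-aligned region spanned by the four detected corner
    points, clamped to the image bounds."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    x_min, x_max = max(min(xs), 0), min(max(xs), image.shape[1])
    y_min, y_max = max(min(ys), 0), min(max(ys), image.shape[0])
    return image[y_min:y_max, x_min:x_max]

img = np.zeros((100, 100), dtype=np.uint8)
patch = crop_muzzle(img, [(10, 20), (90, 22), (12, 80), (88, 78)])
print(patch.shape)  # (60, 80)
```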
[00054] Once the cattle identification module 110 validates the good muzzle from the bovine, it crops the muzzle from the captured area based on the (x1,y1), (x2,y2), (x3,y3), (x4,y4) frame corner co-ordinates. Thereafter, the cropped muzzle image is validated. In a muzzle detector file, as configured by the cattle identification module, a value of 1 is returned if the muzzle percentage value is greater than 90; otherwise, 0 is returned. If the value is 0, then a new muzzle image is captured live and all the process steps as explained above are repeated.
[00055] The cattle identification module analyses and automatically captures a plurality of good muzzle images, preferably three images. According to various embodiments of the present subject matter as disclosed herein, the process of analysing the images takes a few seconds, if performed under suitable conditions. However, if the user is not able to capture a muzzle image for a longer period of time due to various constraints, the thresholds for processing the images may be modified temporarily. In an event, if the automated triggering of the camera module 102 is not activated, the user is facilitated to use the option of manually clicking the capture button. Thus, the user captures the muzzle images manually using the ‘Manual Mode’. This is only done in corner cases.
[00056] Further, the front and side images of the cattle are also collected and the colour of the cattle is analysed in real time for secondary validation and classification. The cattle identification module 110 detects a front face or side body from the received image and classifies the detected part into one of the colour categories (black, white, black & white, brown). Random images or images captured with very poor lighting are rejected. The classification and detection models provided by the AI-based cattle identification module 110 are similar to the models used for the muzzle.
[00057] In one embodiment herein, the captured muzzle images are saved in the server 108 as tagging database images. The metadata, such as the prediction classes and scores from the various algorithms used, is also saved and sent to the database 112 for further fine-tuning of the models. As mentioned earlier, multiple images may be collected during onboarding of the cattle to store as records in the database 112, also referred to as tagging database images.
[00058] Further, multiple parameters are predicted by the machine learning models (ML models) on the device during this process, for example, the quality of muzzle images (good/bad) with confidence scores, the validity of the image (muzzle/non-muzzle) with scores, the colour of the front and side views of the cattle with confidence scores, etcetera. These values are sent and stored as metadata along with the images in the database 112. The metadata is needed to further fine-tune the machine learning (ML) models in order to collect better quality data.
[00059] FIG. 3 is a flow diagram illustrating the bovine validation algorithm, in accordance with one embodiment of the present subject matter. The bovine validation algorithm is a combination of image processing, pattern recognition, image encoding and machine learning algorithms provided by the cattle identification module 110. It compares test muzzle images with muzzle images in the tagging database 112. It leverages a two-stream algorithm to make accurate predictions. The various steps of the validation algorithm are discussed as follows:

STEP 1 – Filtering 302

[00060] All tagging (database) and test muzzle images are read from the server 108. Images are rejected if they fall into one of the bad quality categories, using multiple classification algorithms. Firstly, digital images of the muzzle are filtered out (digital images are images of muzzles captured from a laptop/phone screen). The non-muzzle images, or random images that have been captured in Manual Mode, are rejected as these are not considered muzzle images. The muzzle images captured (using Manual Mode) under the following predefined parameters, where patterns are not discernible, are filtered out. These include (i) very bad orientation, (ii) extreme low lighting, (iii) muzzle smeared with dirt, dung or fodder, (iv) very glossy with sweat, and (v) partially occluded muzzle (with hand/rope/cloth/tongue etc. across the muzzle).
[00061] Both front and side bovine images of the tagging and test sets are read from the server 108 and their colours may be predicted using a combination of detection and multi-label classification algorithms. By research, the most common front and side colours of bovines are black, white, black&white and brown. If the colour of the test bovine is the same as the tagging bovine in either front or side view, the process proceeds to Step 2; else the result is returned as a ‘No Match’.
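The colour pre-filter above amounts to a cheap short-circuit before the expensive muzzle comparison: match on front OR side colour, otherwise return ‘No Match’. A minimal sketch, with illustrative function names:

```python
COLOURS = {"black", "white", "black&white", "brown"}  # classes from the text

def colour_prefilter(test_front, test_side, tag_front, tag_side):
    """Proceed to Step 2 if the test bovine matches the tagging bovine in
    either the front or the side colour; otherwise short-circuit."""
    assert {test_front, test_side, tag_front, tag_side} <= COLOURS
    if test_front == tag_front or test_side == tag_side:
        return "proceed_to_step_2"
    return "No Match"

print(colour_prefilter("brown", "black", "brown", "white"))  # proceed_to_step_2
print(colour_prefilter("black", "black", "white", "brown"))  # No Match
```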

STEP 2 – Preprocessing 304

[00062] All the tagging (database) and test muzzle images are read from the server 108 and contrast-enhanced using CLAHE (Contrast Limited Adaptive Histogram Equalization). This is done to enhance the edges of the beads and ridges on the muzzle, which may sometimes be obscured by poor lighting, glossiness of the muzzle due to sweat, and the like. The ROI (the muzzle area between the nostrils only, containing the patterns) is extracted for all the tagging (database) and test muzzle images using an object detection algorithm.
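The contrast enhancement step may be sketched in Python. As a minimal stand-in for CLAHE (which additionally operates on local tiles with a clip limit), plain global histogram equalisation is shown here:

```python
import numpy as np

def equalize(gray):
    """Global histogram equalisation as a simplified stand-in for CLAHE.
    CLAHE would additionally work per-tile with a contrast clip limit.
    gray: uint8 grayscale image. Returns a contrast-stretched uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the CDF to the full [0, 255] range (guard against flat images).
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[gray].astype(np.uint8)
```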
[00063] According to an embodiment of the present subject matter, the muzzle images are resized to 256 × 256 for faster computation. This size was arrived at after extensive analysis with different resizing factors and has been chosen as the optimal size, retaining sufficient pattern information without overloading the system. Each of the 3 database tagging muzzle images is regenerated into 25 different variations of the muzzle image using the following augmentations:
• rotation from -15 to +15 degrees in steps of 3 degrees (11)
• brightness: 2 darker variants at factors 0.8 and 0.9, and 2 brighter at 1.1 and 1.2 (4)
• grayscale (1)
• shift and shear (3)
• padded (3)
• translated (3)
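The 25 augmentations enumerated above can be written out as a plan; the transform names and parameterisation are illustrative, and the actual image operations (which would use an imaging library) are omitted:

```python
# Hypothetical enumeration of the 25 augmentations per tagging image.
# Each entry is (name, parameter); only the bookkeeping is shown here.

def augmentation_plan():
    plan = []
    plan += [("rotate", deg) for deg in range(-15, 16, 3)]     # 11 rotations
    plan += [("brightness", f) for f in (0.8, 0.9, 1.1, 1.2)]  # 4 brightness factors
    plan += [("grayscale", None)]                              # 1
    plan += [("shift_shear", i) for i in range(3)]             # 3
    plan += [("pad", i) for i in range(3)]                     # 3
    plan += [("translate", i) for i in range(3)]               # 3
    return plan                                                # 25 variations total
```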

STEP 3 – Stream 1: Image Vector Similarity 306

[00064] According to an embodiment of the present subject matter, each regenerated tagging image is encoded into a unique vector that represents its features (patterns) by passing it through a model trained with a triplet loss. The model may be trained using a hard and semi-hard negative mining methodology. Further, an Image Vector Similarity Matrix (IVSM) is computed and stored for all possible pairs of test and tagging images by calculating the distance between the feature vectors in Euclidean/cosine space. Thereafter, an Image Vector Similarity Binary Matrix (IVSBM) is initialised with the same size as the IVSM, with all zeros. For each element in the IVSM, if the distance is less than vector_threshold*, the corresponding element in the IVSBM is set to 1.
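A minimal sketch of the IVSM/IVSBM computation, assuming precomputed embedding vectors and Euclidean distance. An element is binarised to 1 when the distance falls below the threshold, i.e. when the pair is deemed similar (under a triplet loss, embeddings of the same muzzle lie close together):

```python
import numpy as np

def build_ivsbm(test_vecs, tag_vecs, vector_threshold):
    """Compute the Image Vector Similarity Matrix (pairwise Euclidean
    distances between test and tagging embeddings) and binarise it.

    test_vecs: array of shape (n_test, d); tag_vecs: array of shape (n_tag, d).
    Returns (ivsm, ivsbm), where ivsbm[i, j] == 1 means pair (i, j) is similar.
    """
    # IVSM[i, j] = ||test_vecs[i] - tag_vecs[j]||, via broadcasting.
    diff = test_vecs[:, None, :] - tag_vecs[None, :, :]
    ivsm = np.linalg.norm(diff, axis=-1)
    ivsbm = (ivsm < vector_threshold).astype(np.uint8)
    return ivsm, ivsbm
```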

STEP 4 – Stream 2: Sub-Image Pattern Similarity 308

[00065] Each test image is divided into an 8 × 8 grid (each cell of 32 × 32 pixels), and sub-blocks/templates spanning a 4 × 3 grid (each of size 127 × 95 pixels) are extracted and stored for comparison. This results in 30 templates per test image, totalling 90 templates for all 3 test images. As shown in the figure, the process iterates through each sub-block and performs a sliding-window template match across each of the tagging images. A sub-block that is blurred or devoid of patterns is rejected by the classification algorithm.
[00066] The sub-block similarities (the match confidence of finding a sub-block of a test image in a particular tagging image) are stored for each of the pairs in a Sub Block Similarity Matrix (SBSM). Further, a Sub Block Similarity Binary Matrix (SBSBM) may be initialised with the same size as the SBSM, with all zeros.
[00067] For each element in the SBSM, if the match confidence is greater than subblock_threshold*, the corresponding element in the SBSBM is set to 1.
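The Stream 2 matrices can be sketched as follows. `best_template_match` uses a simple mean-absolute-difference score as an illustrative stand-in for the actual sliding-window template matching; the confidence metric and helper names are assumptions, not from the source:

```python
import numpy as np

def best_template_match(template, image):
    """Slide a template over every position of an image and return the
    best match confidence in [0, 1], where 1 means an identical patch.
    Confidence here is 1 / (1 + mean absolute difference)."""
    th, tw = template.shape
    ih, iw = image.shape
    best = 0.0
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw].astype(float)
            mad = np.abs(patch - template).mean()
            best = max(best, 1.0 / (1.0 + mad))
    return best

def build_sbsbm(sbsm, subblock_threshold):
    """Binarise the Sub Block Similarity Matrix: 1 where the match
    confidence exceeds the threshold, else 0."""
    return (np.asarray(sbsm, dtype=float) > subblock_threshold).astype(np.uint8)
```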

STEP 5 – Final Result 310

[00068] The final result is decided from the results of these two streams, with two possible outcomes: ‘Match’ and ‘No Match’. To determine the result, the following steps are performed:
• Initialise match_counter to 0.
• Find the indexes of all elements in the SBSBM which are 1.
• Iterate through all indexes from the previous step; for each, find which tagging and test images the matched sub-block belongs to and look up the corresponding IVSBM value. If the value is 1, increment match_counter by 1.
• If match_counter is greater than or equal to the final_match_threshold*, return result as Match else No Match.
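The decision steps above can be sketched as follows; `pair_of` is an assumed bookkeeping helper (not from the source) that maps a SBSBM index to the (test, tagging) image pair the matched sub-block belongs to:

```python
import numpy as np

def final_decision(sbsbm, ivsbm, pair_of, final_match_threshold):
    """Combine the two streams into a 'Match' / 'No Match' result.

    sbsbm: binary sub-block similarity matrix (Stream 2).
    ivsbm: binary image-vector similarity matrix (Stream 1).
    pair_of: maps an SBSBM index to its owning (test, tagging) image pair.
    """
    match_counter = 0
    for idx in zip(*np.nonzero(sbsbm)):   # indexes of all elements that are 1
        t, g = pair_of(idx)               # owning test and tagging images
        if ivsbm[t, g] == 1:              # corroborated by Stream 1?
            match_counter += 1
    return "Match" if match_counter >= final_match_threshold else "No Match"
```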
[00069] The different thresholds (vector_threshold, subblock_threshold, final_match_threshold) used in the algorithm are arrived at by analysing extensive true-positive (TP) and false-positive (FP) field validations performed on varied muzzle images under different settings (time of day, geography, phones, types of bovine, types of muzzle, etc.). The machine learning algorithm executed by the cattle identification module 110 identifies cattle based on the “muzzle pattern” through images of muzzles.
[00070] FIG. 4 is a flow diagram illustrating various steps involved in a method for facilitating identification of cattle, according to an embodiment of the present subject matter.
[00071] At step 402, a muzzle image of the cattle is received by a cattle identification module 110 in real time. The muzzle image is captured live by a user via a camera module 102 associated with an electronic device 104. The electronic device 104 is configured to execute the cattle identification module 110 via a server 108 in a communication network 106. The cattle identification module 110 is further configured to identify unique patterns and/or unique identification marks present in the muzzle image and/or in any body part of the cattle captured by the camera module 102. The cattle identification module 110 is an Artificial Intelligence based (AI-based) module that executes machine learning algorithms for learning unique biometric identification of the cattle over a period of time.
[00072] At step 404, the received muzzle image is processed in real-time, to thereby identify the muzzle image either as ‘good quality image’ or ‘bad quality image’ based on a plurality of predefined parameters.
[00073] At step 406, a feedback notification is sent to the electronic device 104, upon identifying the muzzle image as a ‘bad quality image’. A guiding frame 212 is set and displayed on the electronic device 104 to enable the user to align the camera module 102 in correct position to capture the muzzle image with good quality in real time. The image being captured inside the guiding frame 212 is processed in real time. Further, the cattle identification module 110 removes any portion of the image captured outside the guiding frame 212.
[00074] At step 408, the user is prompted to align the camera module 102 as indicated in the received feedback notification. The guiding frame 212 is turned ‘green’ in colour to indicate that the muzzle image being captured is a good quality image. The guiding frame 212 is turned ‘red’ in colour to indicate that the muzzle image being captured is a bad quality image.
[00075] At step 410, the camera module 102 is triggered automatically to capture the muzzle image with good quality, without requiring the user to manually trigger the camera module 102.
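The capture flow of steps 402–410 may be sketched as a loop; `classify_frame`, `show_frame_colour` and `auto_capture` are assumed interfaces standing in for the quality classifier, guiding frame 212 feedback, and camera module 102 trigger respectively:

```python
# Hypothetical sketch of the FIG. 4 capture loop; all callables are
# assumed interfaces, not part of the source specification.

def capture_loop(frames, classify_frame, show_frame_colour, auto_capture):
    """For each live frame: classify quality, colour the guiding frame
    green/red as feedback, and auto-capture on the first good frame
    without any manual trigger by the user."""
    for frame in frames:
        if classify_frame(frame) == "good":
            show_frame_colour("green")   # good quality feedback
            return auto_capture(frame)   # automatic trigger (step 410)
        show_frame_colour("red")         # bad quality feedback (step 406)
    return None                          # no good frame seen
```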
[00076] While the preferred embodiments of the present invention have been described hereinabove, various changes, adaptations, and modifications may be made therein without departing from the spirit of the invention and the scope of the appended claims. It will be obvious to a person skilled in the art that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive.

CLAIMS:

1. A method for facilitating identification of cattle, the method comprising:
configuring an electronic device 104 to execute a cattle identification module via a server 108 in a communication network 106;
configuring the cattle identification module for:
receiving in real-time, a muzzle image of the cattle, the muzzle image being captured live by a user via a camera module 102 associated with the electronic device 104;
processing, in real-time, the received muzzle image to thereby identify the muzzle image either as ‘good quality image’ or ‘bad quality image’ based on a plurality of predefined parameters;
sending a feedback notification to the electronic device 104 upon identifying the muzzle image as a ‘bad quality image’;
prompting the user to align the camera module 102 as indicated in the received feedback notification; and
triggering the camera module 102 to automatically capture the muzzle image with good quality, without requiring the user to manually trigger the camera module 102.

2. The method of claim 1, wherein the cattle identification module is further configured to identify unique patterns and/or unique identification marks present in the muzzle image and/or in any body part of the cattle captured by the camera module 102.

3. The method of claim 1, wherein the cattle identification module is an Artificial Intelligence based (AI-based) module that executes machine learning algorithms to identify cattle based on a plurality of “muzzle patterns” through images of muzzles.

4. The method of claim 1, wherein the cattle identification module is further configured to generate a guiding frame 212 that is displayed on the electronic device 104 to enable the user to align the camera module 102 in correct position to capture the muzzle image with good quality in real time.

5. The method of claim 4, wherein the cattle identification module is further configured to process the image being captured inside the guiding frame 212 in real time.

6. The method of claim 4, wherein the cattle identification module is further configured to filter out any portion of the image captured outside the guiding frame 212.

7. The method of claim 4, wherein the guiding frame 212 is turned ‘green’ in colour to indicate that the muzzle image being captured is a good quality image.

8. The method of claim 4, wherein the guiding frame 212 is turned ‘red’ in colour to indicate that the muzzle image being captured is a bad quality image.

9. A system for facilitating identification of cattle, the system comprising:
an electronic device 104 configured to execute a cattle identification module via a server 108 in a communication network 106;
a camera module 102 associated with the electronic device 104;
the cattle identification module configured to:
receive in real-time, a muzzle image of the cattle, the muzzle image being captured live by a user via the camera module 102;
process, in real-time, the received muzzle image to thereby identify the muzzle image either as ‘good quality image’ or ‘bad quality image’ based on a plurality of predefined parameters;
send a feedback notification to the electronic device 104 upon identifying the muzzle image as a ‘bad quality image’;
prompt the user to align the camera module 102 as indicated in the received feedback notification;
trigger the camera module 102 to automatically capture the muzzle image with good quality, without requiring the user to manually trigger the camera module 102.

10. The system of claim 9, wherein the cattle identification module is further configured to identify unique patterns and/or unique identification marks present in the muzzle image and/or in any body part of the cattle captured by the camera module 102.

11. The system of claim 9, wherein the cattle identification module is an Artificial Intelligence based (AI-based) module that executes machine learning algorithms to identify cattle based on a plurality of “muzzle patterns” through images of muzzles.

12. The system of claim 9, wherein the cattle identification module is further configured to generate a guiding frame 212 that is displayed on the electronic device 104 to enable the user to align the camera module 102 in correct position to capture the muzzle image with good quality in real time.

13. The system of claim 12, wherein the cattle identification module is further configured to process the image being captured inside the guiding frame 212 in real time.

14. The system of claim 12, wherein the cattle identification module is further configured to filter out any portion of the image captured outside the guiding frame 212.

15. The system of claim 12, wherein the guiding frame 212 is turned ‘green’ in colour to indicate that the muzzle image being captured is a good quality image.

16. The system of claim 12, wherein the guiding frame 212 is turned ‘red’ in colour to indicate that the muzzle image being captured is a bad quality image.

Documents

Application Documents

# Name Date
1 202141018446-PROVISIONAL SPECIFICATION [21-04-2021(online)].pdf 2021-04-21
2 202141018446-POWER OF AUTHORITY [21-04-2021(online)].pdf 2021-04-21
3 202141018446-FORM 1 [21-04-2021(online)].pdf 2021-04-21
4 202141018446-DRAWINGS [21-04-2021(online)].pdf 2021-04-21
5 202141018446-Correspondence, Form-1 and POA_04-10-2021.pdf 2021-10-04
6 202141018446-DRAWING [30-05-2022(online)].pdf 2022-05-30
7 202141018446-COMPLETE SPECIFICATION [30-05-2022(online)].pdf 2022-05-30
8 202141018446-FORM FOR STARTUP [07-12-2023(online)].pdf 2023-12-07
9 202141018446-FORM 18 [07-12-2023(online)].pdf 2023-12-07
10 202141018446-FER.pdf 2025-04-09
11 202141018446-FORM-5 [09-10-2025(online)].pdf 2025-10-09
12 202141018446-FORM 3 [09-10-2025(online)].pdf 2025-10-09
13 202141018446-FER_SER_REPLY [09-10-2025(online)].pdf 2025-10-09
14 202141018446-DRAWING [09-10-2025(online)].pdf 2025-10-09
15 202141018446-CLAIMS [09-10-2025(online)].pdf 2025-10-09

Search Strategy

1 Document1E_22-03-2024.pdf