
A System And A Method For Validating An Image For A Medical Test

Abstract: A system and a method for validating an image of a test strip for a medical test is disclosed. The system receives an image of a test strip. The system identifies a set of regions in the image. The set of regions comprises a code, a set of position markers, a set of colour markers, and a set of chemical pads. Subsequently, the system obtains test data based on the code. The test data comprises a set of health parameters and chemical pad data. Further, the system determines an extent of a chemical reaction by analysing the set of chemical pads based on the test data. Furthermore, the set of health parameters may be computed based on the extent of the chemical reaction. [To be published with Figure 1]


Patent Information

Application #: 202321026609
Filing Date: 10 April 2023
Publication Number: 19/2023
Publication Type: INA
Invention Field: BIO-MEDICAL ENGINEERING
Grant Date: 2024-10-01

Applicants

NEODOCS HEALTHCARE PVT. LTD.
502, Maskar House, Naikwadi Aarey Road, Goregaon East Mumbai, Maharashtra, 400063, India

Inventors

1. MEENA, Anurag
1201, P-1, The Address, LBS Marg, Ghatkopar West, Mumbai, Maharashtra, India-400086
2. LODHA, Pratik
502, Maskar House, Naikwadi Aarey Rd, Goregaon East, Mumbai, Maharashtra, India-400063
3. MALPANI, Nikunj
1506, Brighton Tower, 2nd Cross Lane, Lokhandwala Complex, Andheri West, Mumbai, Maharashtra, India-400053

Specification

Description: FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
A SYSTEM AND A METHOD FOR VALIDATING AN IMAGE FOR A MEDICAL TEST

Applicant:
NEODOCS HEALTHCARE PVT. LTD.
An Indian Company,
Having their Address – 502, Maskar House, Naikwadi Aarey Road, Goregaon East Mumbai, Maharashtra, 400063, India

The following specification describes the invention and the manner in which it is to be performed.
PRIORITY INFORMATION
[001] The present application does not claim priority from any other application.
TECHNICAL FIELD
[002] The present subject matter described herein, in general, relates to the field of image processing for medical purposes. More specifically, the present disclosure relates to systems and methods for validating images for urine tests.
BACKGROUND
[003] Urine testing is a common diagnostic tool used to evaluate various aspects of health, such as kidney function, liver function, and the presence of infections or other medical conditions. Typically, fluid samples are analysed using test strips that detect the presence of specific substances or biomarkers in the fluid sample. The fluid samples may be blood, urine, saliva, and other body fluids. However, traditional testing methods involve manual interpretation of test strip colours, which may be subjective and prone to human error. Furthermore, manual interpretation is time-consuming and may not be practical in high-throughput clinical settings.
[004] Computer vision has been used to collect quantitative and qualitative clinical data in medical testing. Conventionally, regulatory agencies-approved clinical devices include specialised hardware, such as pre-calibrated scanners that work under controlled capture and lighting circumstances. In addition, these devices have classifiers that operate based on the calibrated pictures generated by the scanners.
[005] Smartphones now feature tremendous computing power, wireless Internet connectivity, and high-resolution cameras. With these features, many dipstick-based home tests can be performed by dipping the dipstick in the fluid sample and using an image of the dipped dipstick for analysis of health parameters. Using smartphones as regulatory-approved clinical devices is difficult, however, because users are not always medically trained.
[006] For most of the dipstick-based tests, a particular chemical on the dipstick changes colour after reacting with a compound in the fluid sample and the image of the dipstick must be clicked at a particular point during the reaction to produce accurate test results. Even with algorithms and software that can leverage computer vision, smartphone cameras, and image processing, it may be difficult for an untrained person to click a valid image or verify an image that they will feed to the algorithm or software to produce the test results.
SUMMARY
[007] Before the present system(s) and method(s) are described, it is to be understood that this application is not limited to the particular system(s) and methodologies described, as there can be multiple possible embodiments that are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular implementations or versions or embodiments only and is not intended to limit the scope of the present application. This summary is provided to introduce aspects related to a system and a method for analysing a urine sample. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[008] In one implementation, a method for validating an image of a test strip for a medical test is disclosed. The method comprises receiving an image of a test strip. The image may comprise a dipped region and a non-dipped region. The dipped region is a region of the test strip dipped in a fluid sample. The non-dipped region may comprise a code such as a Quick Response code, a bar code, and the like. The code may be deciphered by processing the image. Further, a colour change of the dipped region is determined using a machine learning model. The machine learning model may be referred to as a chroma-net model. The colour change may be indicated by a percentage value between 0 and 100. Further, an elapsed time may be determined based on the colour change using a prediction model. The elapsed time may be a time duration between dipping of the test strip in the fluid sample and clicking of the image. Finally, the elapsed time may be compared with an ideal time to compute a validity score of the image. The ideal time may be based on the code present in the non-dipped region. The validity score may be indicated by a value between 0 and 100. The validity score may be inversely proportional to a difference between the elapsed time and the ideal time. The method may comprise generating feedback for a user to click a new image based on the validity score. The feedback may be generated when the validity score is less than a threshold.
[009] In another implementation, a non-transitory computer readable medium embodying a program executable in a computing device for validating an image of a test strip for a medical test is disclosed. The program may comprise a program code for receiving an image of a test strip. The image may comprise a dipped region and a non-dipped region. The dipped region is a region of the test strip dipped in a fluid sample. The non-dipped region may comprise a code such as a Quick response code, a bar code, and the like. Further, the program may comprise a program code for determining a colour change of the dipped region using a chroma-net model. The colour change may be indicated by a percentage value between 0 and 100. Subsequently, the program may comprise a program code for determining an elapsed time based on the colour change using a prediction model. The elapsed time may be a time duration between dipping of the test strip in the fluid sample and clicking of the image. Further, the program may comprise a program code for comparing the elapsed time with an ideal time to compute a validity score of the image. The ideal time may be based on the code present in the non-dipped region. The validity score may be indicated by a value between 0 and 100. The validity score may be inversely proportional to a difference between the elapsed time and the ideal time. Finally, the program may comprise a program code for generating feedback for a user to click a new image based on the validity score. The feedback may be generated when the validity score is less than a threshold.
BRIEF DESCRIPTION OF THE DRAWINGS
[010] The detailed description of embodiments is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present subject matter, an example of a construction of the present subject matter is provided as figures; however, the invention is not limited to the specific method and system for validating an image of a test strip for a medical test disclosed in the document and the figures.
[011] The present subject matter is described in detail with reference to the accompanying figures.
[012] Figure 1 illustrates a network implementation for validating an image of a test strip used for a medical test, in accordance with an embodiment of the present subject matter.
[013] Figure 2 illustrates a method for validating an image of a test strip used for a medical test, in accordance with an embodiment of the present subject matter.
[014] Figure 3 illustrates an example of a test strip dipped in a fluid sample and a test strip not dipped in a fluid sample, in accordance with an embodiment of the present subject matter.
[015] Figure 4 illustrates an example view of a vector space, in accordance with an embodiment of the present subject matter.
[016] Figure 5 illustrates an example artificial neural network, in accordance with an embodiment of the present subject matter.
[017] The figure depicts an embodiment of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.

DETAILED DESCRIPTION

[018] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "receiving," "determining," "comparing," "computing," and other forms thereof, are intended to be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the exemplary system and methods are now described.
[019] The disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure is not intended to be limited to the embodiments described but is to be accorded the widest scope consistent with the principles and features described herein.
[020] The present subject matter discloses a method and a system for validating an image of a test strip for a medical test. The system may receive an image of a test strip that has been dipped in a fluid sample. The image comprises a dipped region and a non-dipped region. The dipped region is a region of the test strip dipped in a fluid sample. The non-dipped region may comprise a code that may be scannable using image processing. The code may comprise at least one of a Quick Response (QR) code, a bar code, and the like. The dipped region may comprise a set of chemical pads.
[021] The system further fetches an ideal time based on the code present on the non-dipped region of the test strip. Furthermore, a colour change of the dipped region is determined using a machine learning model (referred to as a chroma-net model). Subsequently, an elapsed time may be determined based on the colour change using a prediction model. Finally, the elapsed time may be compared with the ideal time to compute a validity score of the image.
[022] The disclosed method provides an automated and objective approach to decrease potential for human error in fluid sample analysis. Furthermore, the use of machine learning models trained using a large training dataset allows accurate validation of images clicked by human users for a medical test.
[023] Referring now to Figure 1, a network implementation 100 of a system 102 for validating an image of a test strip for a medical test is disclosed. Initially, the system 102 receives an image of a test strip. In an example, an application of the system 102 may be installed on a user device 104-1. It may be noted that one or more users may access the system 102 through one or more user devices 104-2, 104-3…104-N, collectively referred to as user devices 104 hereinafter, or applications residing on the user devices 104. The system 102 receives an image of a test strip from one or more user devices 104. Further, the system 102 may also receive feedback from a user using the user devices 104.
[024] Although the present disclosure is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a virtual environment, a mainframe computer, a server, a network server, a cloud-based computing environment. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2, 104-3…104-N. In one implementation, the system 102 may comprise the cloud-based computing environment in which the user may operate individual computing systems configured to execute remotely located applications. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[025] In one implementation, the network 106 may be a wireless network, a wired network, or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[026] In one embodiment, the system 102 may include at least one processor 108, an Input/Output (I/O) interface 110, and a memory 112. The at least one processor 108 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, Central Processing Units (CPUs), state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 108 is configured to fetch and execute computer-readable instructions stored in the memory 112.
[027] The I/O interface 110 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 110 may allow the system 102 to interact with the user directly or through the client devices 104. Further, the I/O interface 110 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 110 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as Wireless Local Area Network (WLAN), cellular, or satellite. The I/O interface 110 may include one or more ports for connecting a number of devices to one another or to another server.
[028] The memory 112 may include any computer-readable medium or computer program product known in the art, including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, Solid State Disks (SSD), optical disks, and magnetic tapes. The memory 112 may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The memory 112 may include programs or coded instructions that supplement applications and functions of the system 102. In one embodiment, the memory 112, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the programs or the coded instructions.
[029] As there are various challenges observed in the existing art, the challenges necessitate the need to build the system 102 for validating an image of a test strip used for a medical test. At first, a user may use the user device 104 to access the system 102 via the I/O interface 110. The user may register the user devices 104 using the I/O interface 110 in order to use the system 102. In one aspect, the user may access the I/O interface 110 of the system 102. The detailed functioning of the system 102 is described below with the help of figures.
[030] The present subject matter discloses a system 102 for validating an image of a test strip used for a medical test. The system receives an image (also referred to as a received image) of a test strip. It may be noted that a region of the test strip is dipped in a fluid sample. The test strip may also be referred to as a dipstick or a fluid test strip. The test strip is a basic diagnostic tool used to determine pathological changes in a patient's fluid in standard urinalysis. In an embodiment, a user or a patient may click an image of the test strip using a smartphone. Further, the user may upload the image to the system. The image may comprise a dipped region and a non-dipped region. The dipped region may be the region of the test strip that is dipped in the fluid sample. The dipped region may comprise a set of chemical pads. The set of chemical pads may comprise one or more chemicals. The non-dipped region may comprise a set of colour markers, a set of position markers and a code. The set of colour markers may comprise one or more colour boxes having defined colour values. The code may be used to fetch data including a test, a set of health parameters to be tested, and chemical pad data. The set of health parameters may comprise a Potential of Hydrogen (pH) in fluid, concentration of Glucose, Bilirubin, Ketone (Acetoacetic Acid), Blood, Protein, Urobilinogen, Leukocytes, Nitrites, Ascorbic Acid, and Specific Gravity. The chemical pad data may comprise positions of the set of chemical pads with a list of chemicals on each chemical pad from the set of chemical pads, and a set of colour values for each chemical pad.
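As an illustration only, the test data fetched via the code may be organised along the following lines (a Python sketch); the field names and types are assumptions made for readability and are not part of the disclosure.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ChemicalPad:
    position: Tuple[int, int]                             # assumed pad location on the strip (x, y)
    chemicals: List[str]                                   # chemicals coated on this pad
    reference_colours: List[Tuple[float, float, float]]   # set of colour values for this pad (e.g. CIELAB)

@dataclass
class TestData:
    test_name: str                    # e.g. "Urinary Tract Infection"
    health_parameters: List[str]      # e.g. ["pH", "Glucose", "Leukocytes", "Nitrites"]
    chemical_pads: List[ChemicalPad]  # positions, chemicals, and colour values per pad
    ideal_time_seconds: float         # ideal dip-to-capture duration fetched via the code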
[031] In an embodiment, the set of colour markers and the set of position markers may be used to standardize the received image. The position markers may have a specific location on the test strip. Further, the specific location of the set of position markers and a size of the set of position markers may be defined for a standardized image. The system may use image processing to compare the size and location of the set of position markers in the received image with the size and location of the set of position markers for the standardized image. Finally, the system may use image processing to adjust the received image based on the comparison to match the size and the location of the set of position markers in the received image with the size and the location of the set of position markers for the standardized image.
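A minimal sketch of the geometric standardization step described above, assuming OpenCV is available and that four corresponding corner points of the position markers have already been located in the received image; how the markers are detected is outside this sketch.

import cv2
import numpy as np

def standardize_geometry(image, detected_corners, reference_corners, out_size=(400, 1200)):
    # Warp the received image so that the position markers land at their defined
    # locations and size in the standardized image. Both corner lists hold four
    # (x, y) points, e.g. the outer corners of the two position markers.
    src = np.asarray(detected_corners, dtype=np.float32)
    dst = np.asarray(reference_corners, dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)       # 4-point perspective mapping
    return cv2.warpPerspective(image, H, out_size)  # adjusted (standardized) image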
[032] Further, the system may compare, using image processing, colour values of the one or more colour boxes from the set of colour markers with the defined colour values. Subsequently, the system may adjust colour values of the received image based on the comparison in order to match the colour values of the one or more colour boxes from the set of colour markers to the defined colour values. The colour values may be at least one of Red Green Blue (RGB) colour values and International Commission on Illumination L*a*b* (CIELAB) colour space values. The CIELAB colour space values comprise a Lightness value (L*), a green-red colour component value (a*), and a blue-yellow component value (b*). In an embodiment, the system may convert RGB colour values to CIELAB colour values for processing.
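The colour adjustment and RGB-to-CIELAB conversion could be sketched as follows; the least-squares linear correction fitted on the colour-marker patches is an assumption about how the adjustment is done, since the disclosure does not fix a particular correction method (scikit-image is assumed for the colour-space conversion).

import numpy as np
from skimage import color

def fit_colour_correction(measured_rgb, reference_rgb):
    # Least-squares 3x3 matrix mapping the colour-marker patches as measured in
    # the received image onto their defined reference values. Both inputs are
    # (N, 3) arrays of RGB values scaled to [0, 1].
    M, _, _, _ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
    return M

def standardize_colours(image_rgb, M):
    # Apply the correction to every pixel, then convert the adjusted RGB image
    # to CIELAB (L*, a*, b*) values for downstream processing.
    corrected = np.clip(image_rgb.reshape(-1, 3) @ M, 0.0, 1.0).reshape(image_rgb.shape)
    return color.rgb2lab(corrected)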
[033] After standardizing the received image, the system 102 may determine an elapsed time based on a colour change of the dipped region using a machine learning model referred to as a chroma-net model. The elapsed time may be a time duration between dipping of the test strip in the fluid sample and clicking of the received image. In an embodiment, the machine learning model may be trained using a training dataset of a set of images of a plurality of test strips dipped in a fluid sample. The set of images comprises one or more images of the plurality of test strips, wherein the one or more images are clicked at one or more predetermined elapsed times. Consider an example of a test strip used to perform a Urinary Tract Infection (UTI) test by dipping the test strip in a urine sample. The chroma-net model may be trained using one or more images of the test strip dipped in the urine sample. The one or more images may comprise a plurality of images clicked 0 seconds after the test strip is dipped in the urine sample, a plurality of images clicked 30 seconds after the test strip is dipped in the urine sample, and a plurality of images clicked 60 seconds, 90 seconds, 120 seconds, 150 seconds, and 180 seconds after the test strip is dipped in the urine sample. Each of the one or more images may be annotated with the corresponding elapsed time. Further, the chroma-net model may be trained to detect a colour change between two images using a training dataset comprising a plurality of images of dipped test strips, a plurality of images of non-dipped test strips, and a corresponding colour change value for a pair of images comprising an image from the plurality of dipped test strips and an image from the plurality of non-dipped test strips. The pair of images comprises images of test strips for a common test.
[034] In an embodiment, the system may extract features from an image from the training dataset. Further, the system may use the features to train the machine learning model or the chroma-net model. The machine learning model may identify features in a new image and find a similar image in the training dataset based on the identified features. The features may be plotted in a vector space as shown in Figure 4. Similar images may be identified based on the distance between features of the images plotted in the vector space. Using the features to train the machine learning algorithm reduces training time and memory consumption.
[035] The system may use the chroma-net model to determine the elapsed time for the received image by comparing the received image with the plurality of images in the training dataset of the chroma-net model. The system may identify an image, from the training dataset, similar to the received image. Further, the system may obtain the elapsed time of the similar image from the training dataset to determine the elapsed time for the received image based on the chroma-net model.
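A minimal sketch of the lookup described above: the received image's feature vector is compared against the training images' feature vectors and the elapsed time of the closest one is returned. Euclidean distance is used here as one possible similarity measure, and the feature extraction itself (the chroma-net embedding) is assumed to have been done already.

import numpy as np

def estimate_elapsed_time(query_features, training_features, training_elapsed_times):
    # training_features: (N, d) array of feature vectors from the training dataset
    # training_elapsed_times: length-N sequence of annotated elapsed times (seconds)
    distances = np.linalg.norm(training_features - query_features, axis=1)
    nearest = int(np.argmin(distances))       # index of the most similar training image
    return training_elapsed_times[nearest]    # its elapsed time is used for the received image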
[036] In another embodiment, the system may determine the colour change of the received image as a percentage value between 0 and 100 using the chroma-net model. The colour change may be determined based on the chemical pad data. The chroma-net model may be trained using a plurality of images of a plurality of test strips, corresponding colour values for a set of chemical pads in the plurality of images of the plurality of test strips, corresponding chemical pad data for the plurality of test strips, and a corresponding colour change for each of the plurality of images of the plurality of test strips. The system may obtain the chemical pad data comprising colour values of the set of chemical pads using the code from the received image. Further, the system may determine colour values of the set of chemical pads in the received image. Subsequently, the system may compare the colour values of the chemical pads in the received image with the colour values comprised in the chemical pad data. Finally, the system may determine the colour change for the dipped region of the received image as the percentage value based on the chroma-net model. After determining the colour change for the received image, the system may determine the elapsed time based on the colour change percentage value and the chroma-net model. The chroma-net model may be trained using a training dataset of a plurality of colour change percentage values and corresponding elapsed times.
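One plausible reading of the per-pad colour change computation is sketched below, under the assumption that the chemical pad data carries an unreacted and a fully reacted reference colour for each pad; the percentage is how far the measured pad colour has travelled between the two references.

import numpy as np

def colour_change_percentage(pad_lab, unreacted_lab, reacted_lab):
    # All three arguments are CIELAB triples; the two reference colours per pad
    # are an assumption about the chemical pad data, not a stated requirement.
    pad, start, end = (np.asarray(x, dtype=float) for x in (pad_lab, unreacted_lab, reacted_lab))
    total = np.linalg.norm(end - start)
    if total == 0.0:
        return 0.0
    progressed = np.dot(pad - start, end - start) / total  # signed distance along the start-to-end axis
    return float(np.clip(100.0 * progressed / total, 0.0, 100.0))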
[037] After determining the elapsed time for the received image, the system may compare the elapsed time with the ideal time. The ideal time may be based on the code present on the non-dipped region. The ideal time may be different for different types of tests. The ideal time corresponds to a time duration between dipping a test strip in a fluid sample and clicking an image of the test strip after dipping it. The ideal time may be the time duration required for the chemicals on the chemical pads to completely react with compounds in the fluid sample. Finally, the system may compute a validity score based on the comparison of the elapsed time with the ideal time. The validity score may be indicated by a value between 0 and 100. The validity score may be inversely proportional to the absolute value of the difference between the elapsed time and the ideal time. The validity score may be computed using a pre-stored database of elapsed times, differences between an elapsed time and an ideal time, and corresponding validity scores, together with at least one of a matching, sorting, and searching algorithm.
[038] In an embodiment, the system may generate a feedback for a user to click a new image based on the validity score. In case the validity score is less than a threshold, the system may request the user to click a new image and upload the new image as the received image to the system.
[039] For example, let us assume that the threshold for the validity score is 60, the ideal time is 60 seconds and the determined elapsed time of the received image in the example is 120 seconds. The validity score of the image is 20. The system will generate a feedback for the user to click a new image.
[040] Consider another example, let us assume that the threshold for the validity score is 60, the ideal time is 60 seconds and the determined elapsed time of the received image in the example is 20 seconds. The validity score of the image is 30. The system will generate a feedback for the user to click a new image.
[041] Consider another example, let us assume that the threshold for the validity score is 60, the ideal time is 60 seconds and the determined elapsed time of the received image in the example is 60 seconds. The validity score of the image is 100. The system will not generate a feedback.
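The pre-stored database of differences and validity scores mentioned in paragraph [037], together with the threshold check, can be illustrated as below. The table entries are chosen only so that the three worked examples above are reproduced; they are not values taken from the disclosure.

# Hypothetical lookup table: |elapsed - ideal| in seconds -> validity score (0-100).
SCORE_TABLE = {0: 100, 20: 60, 40: 30, 60: 20, 120: 0}

def validity_score(elapsed_s, ideal_s):
    # Simple "searching" strategy: use the score stored for the closest difference.
    diff = abs(elapsed_s - ideal_s)
    nearest_key = min(SCORE_TABLE, key=lambda k: abs(k - diff))
    return SCORE_TABLE[nearest_key]

def feedback(elapsed_s, ideal_s, threshold=60):
    score = validity_score(elapsed_s, ideal_s)
    return score, ("Please click a new image of the test strip." if score < threshold else None)

# The three examples above (ideal time 60 s): elapsed 120 s -> 20, 20 s -> 30, 60 s -> 100.
assert validity_score(120, 60) == 20 and validity_score(20, 60) == 30 and validity_score(60, 60) == 100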
[042] In case the validity score is greater than the threshold, the system may validate the received image for a medical test. The medical test may be performed by analysing the received image using one or more machine learning algorithms to determine the set of health parameters.
[043] Referring now to Figure 2, a method 200 for validating an image of a test strip for a medical test is shown, in accordance with an embodiment of the present subject matter. The method 200 may be described in the general context of computer-executable instructions. Generally, computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types.
[044] The order in which the method 200 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 200 or alternate methods for validating an image of a test strip used for a medical test. Additionally, individual blocks may be deleted from the method 200 without departing from the scope of the subject matter described herein. Furthermore, the method 200 for validating an image of a test strip for a medical test can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 200 may be considered to be implemented in the above-described system 102.
[045] At block 202, an image of a test strip may be received. The image may comprise a dipped region and a non-dipped region. It may be noted that the dipped region in the image is the region of the test strip dipped in a fluid sample.
[046] At block 204, an elapsed time is determined based on a colour change of the dipped region using a machine learning model.
[047] At block 206, the elapsed time may be compared with an ideal time to compute a validity score of the image.
[048] Referring now to Figure 3, an example of the test strip not dipped in a fluid sample 300-a and an example of a test strip dipped in a fluid sample 300-b are illustrated. The test strip comprises a dipped region 310 and a non-dipped region 312. The non-dipped region 312 comprises a code 302, a set of position markers 304 A and 304 B, and a set of colour markers 306. The dipped region comprises a set of chemical pads 308. The code comprises information related to a type of test being performed and details of the test strip, such as lot number and manufacturing information. In an embodiment, the system may validate an image of the test strip based on the set of regions by checking the presence of each region. The set of position markers 304 A and 304 B helps the system 102 with pose and orientation standardization. Further, the set of colour markers 306 is placed to ensure that there is no loss of information in the image of the test strip when compared to a physical copy of the test strip. Furthermore, the set of chemical pads 308 may comprise one or more chemicals. It may be noted that the user dips the test strip 300 in the fluid. The set of chemical pads 308 react with compounds in the fluid sample. Further, the user, after a defined time, may click an image of the test strip 300 and upload it to the system. In an example, the user may click the image of the test strip after a couple of minutes or as prescribed in a fluid analysis kit. It may be noted that the fluid analysis kit comprises the test strip and a manual for performing the fluid analysis.
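A minimal sketch of the presence check for the code region 302, assuming the code is a QR code and that OpenCV's built-in detector is used; presence checks for the position markers, colour markers, and chemical pads would follow the same pattern with detectors of their own.

import cv2

def check_code_region(image_bgr):
    # Returns (True, decoded payload) when the code region is present and
    # readable in the received image, otherwise (False, "").
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(image_bgr)
    return (data != "", data)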
[049] In an embodiment, an automated system may be used to dip the test strips in the fluid sample. In another embodiment, the fluid sample may be dropped over the dipped region of the test strip using a controlled robotic and automatic mechanism.
[050] Figure 4 illustrates an example view of a vector space 400. In particular embodiments, an object may be represented in a d-dimensional vector space, where d denotes any suitable number of dimensions. Although the vector space 400 is illustrated as a three-dimensional space, this is for illustrative purposes only, as the vector space 400 may be of any suitable dimension. In particular embodiments, an image may be represented in the vector space 400 as a collection of feature vectors referred to as a term embedding. Each feature vector may comprise coordinates corresponding to a particular point in the vector space 400 (i.e., the terminal point of the vector). As an example and not by way of limitation, feature vectors 410, 420, and 430 may be represented as points in the vector space 400, as illustrated in Figure 4.
[051] As another example and not by way of limitation, an image database trained to map images to a feature vector representation may be utilized, or such an image database may itself be generated via training. As another example and not by way of limitation, a model, such as KMeans, may be used to map an image feature to a vector representation in the vector space 400. In particular embodiments, the image feature may be mapped to a vector representation in the vector space 400 by using a machine learning model (e.g., a neural network). The machine learning model may have been trained using a sequence of training data (e.g., a corpus of objects each comprising images and image features).
[052] In particular embodiments, an object may be represented in the vector space 400 as a vector referred to as a feature vector or an object embedding. In particular embodiments, an object may be mapped to a vector based on one or more properties, attributes, or features of the object, relationships of the object with other objects, or any other suitable information associated with the object. As an example and not by way of limitation, an object comprising a video or an image may be mapped to a vector by using an algorithm to determine a colour change between images of one or more test strips. Each of the one or more test strips may be dipped in different fluid samples. Features used to calculate the vector may be based on information obtained from edge detection, corner detection, blob detection, ridge detection, scale-invariant feature transformation, edge direction, changing intensity, autocorrelation, motion detection, optical flow, thresholding, blob extraction, template matching, Hough transformation (e.g., lines, circles, ellipses, arbitrary shapes), or any other suitable information. Although this disclosure describes representing an n-gram or an object in a vector space in a particular manner, this disclosure contemplates representing an n-gram or an object in a vector space in any suitable manner.
[053] In particular embodiments, the system 102 may calculate a similarity metric of vectors in vector space 400. A similarity metric may be a cosine similarity, a Minkowski distance, a Mahalanobis distance, a Jaccard similarity coefficient, or any suitable similarity metric. The similarity metric of two vectors may represent how similar the two objects or n-grams corresponding to the two vectors, respectively, are to one another, as measured by the distance between the two vectors in the vector space 400. As an example and not by way of limitation, vector 410 and vector 420 may correspond to objects that are more similar to one another than the objects corresponding to vector 410 and vector 430, based on the distance between the respective vectors. Although this disclosure describes calculating a similarity metric between vectors in a particular manner, this disclosure contemplates calculating a similarity metric between vectors in any suitable manner.
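For instance, cosine similarity between two feature vectors can be computed as below; the three vectors are illustrative stand-ins for the feature vectors 410, 420, and 430, not values from the disclosure.

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two feature vectors; values closer to 1
    # indicate objects that are more similar in the vector space 400.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

v410, v420, v430 = [1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]
assert cosine_similarity(v410, v420) > cosine_similarity(v410, v430)  # 410 is closer to 420 than to 430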
[054] Figure 5 illustrates an example artificial neural network (“ANN”) 500 used to train at least one of the machine learning models including the chroma-net model. In particular embodiments, an ANN may refer to a computational model comprising one or more nodes. Example ANN 500 may comprise an input layer 510, hidden layers 520, 530, 540, and an output layer 550. Each layer of the ANN 500 may comprise one or more nodes, such as a node 505 or a node 515. In particular embodiments, each node of an ANN may be connected to another node of the ANN. As an example and not by way of limitation, each node of the input layer 510 may be connected to one or more nodes of the hidden layer 520. In particular embodiments, one or more nodes may be a bias node (e.g., a node in a layer that is not connected to and does not receive input from any node in a previous layer). In particular embodiments, each node in each layer may be connected to one or more nodes of a previous or subsequent layer. Although Figure 5 depicts a particular ANN with a particular number of layers, a particular number of nodes, and particular connections between nodes, this disclosure contemplates any suitable ANN with any suitable number of layers, any suitable number of nodes, and any suitable connections between nodes. As an example and not by way of limitation, although Figure 5 depicts a connection between each node of the input layer 510 and each node of the hidden layer 520, one or more nodes of the input layer 510 may not be connected to one or more nodes of the hidden layer 520.
[055] In particular embodiments, an ANN may be a feedforward ANN (e.g., an ANN with no cycles or loops where communication between nodes flows in one direction beginning with the input layer and proceeding to successive layers). As an example and not by way of limitation, the input to each node of the hidden layer 520 may comprise the output of one or more nodes of the input layer 510. As another example and not by way of limitation, the input to each node of the output layer 550 may comprise the output of one or more nodes of the hidden layer 540. In particular embodiments, an ANN may be a deep neural network (e.g., a neural network comprising at least two hidden layers). In particular embodiments, an ANN may be a deep residual network. A deep residual network may be a feedforward ANN comprising hidden layers organized into residual blocks. The input into each residual block after the first residual block may be a function of the output of the previous residual block and the input of the previous residual block. As an example and not by way of limitation, the input into residual block N may be F(x)+x, where F(x) may be the output of residual block N-1 and x may be the input into residual block N-1. Although this disclosure describes a particular ANN, this disclosure contemplates any suitable ANN.
[056] In particular embodiments, an activation function may correspond to each node of an ANN. An activation function of a node may define the output of a node for a given input. In particular embodiments, an input to a node may comprise a set of inputs. As an example and not by way of limitation, an activation function may be an identity function, a binary step function, a logistic function, or any other suitable function.
[057] In particular embodiments, the input of an activation function corresponding to a node may be weighted. Each node may generate output using a corresponding activation function based on weighted inputs. In particular embodiments, each connection between nodes may be associated with a weight. As an example and not by way of limitation, a connection 525 between the node 505 and the node 515 may have a weighting coefficient of 0.4, which may indicate that 0.4 multiplied by the output of the node 505 is used as an input to the node 515. In particular embodiments, the input to nodes of the input layer may be based on a vector representing an object. Although this disclosure describes particular inputs to and outputs of nodes, this disclosure contemplates any suitable inputs to and outputs of nodes. Moreover, although this disclosure may describe particular connections and weights between nodes, this disclosure contemplates any suitable connections and weights between nodes.
[058] In particular embodiments, the ANN may be trained using training data. As an example and not by way of limitation, training data may comprise inputs to the ANN 500 and an expected output. As another example and not by way of limitation, training data may comprise vectors each representing a training object and an expected label for each training object. In particular embodiments, training the ANN may comprise modifying the weights associated with the connections between nodes of the ANN by optimizing an objective function. As an example and not by way of limitation, a training method may be used (e.g., the conjugate gradient method, the gradient descent method, or stochastic gradient descent) to backpropagate the sum-of-squares error measured as a distance between each vector representing a training object and its expected output (e.g., using a cost function that minimizes the sum-of-squares error). In particular embodiments, the ANN may be trained using a dropout technique. As an example and not by way of limitation, one or more nodes may be temporarily omitted (e.g., receive no input and generate no output) while training. For each training object, one or more nodes of the ANN may have some probability of being omitted. The nodes that are omitted for a particular training object may be different than the nodes omitted for other training objects (e.g., the nodes may be temporarily omitted on an object-by-object basis). Although this disclosure describes training the ANN in a particular manner, this disclosure contemplates training the ANN in any suitable manner.
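A minimal sketch of such a feedforward ANN in the shape of Figure 5 (an input layer, three hidden layers, and an output layer), trained with stochastic gradient descent on a sum-of-squares (MSE) objective and with dropout, assuming PyTorch; the layer sizes, dropout rate, and synthetic data are illustrative assumptions only.

import torch
from torch import nn

model = nn.Sequential(                       # input layer -> three hidden layers -> output layer
    nn.Linear(8, 16), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(16, 16), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()                       # sum-of-squares style objective

features = torch.randn(64, 8)                # stand-in feature vectors (training objects)
targets = torch.rand(64, 1) * 180.0          # stand-in elapsed-time labels in seconds

for _ in range(100):                         # backpropagate the error and update the connection weights
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()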
[059] Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features.
[060] The system and the method enable image processing and machine learning algorithms to provide a more accurate and reliable method for analysing fluid samples compared to the traditional manual interpretation of test strips, which is subjective and prone to human error.
[061] The system and the method enable automated validation of images used for a medical test.
[062] The system and the method eliminate invalid images in medical tests performed using images captured through smartphone cameras.
[063] The system and method enable a user to perform medical tests at home without professional clinical supervision.
[064] The system and the method enable real-time fluid test results, which can help to improve patient outcomes by allowing for faster diagnosis and treatment.
[065] Although implementations for methods and system for validating an image of a test strip for a medical test have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for validating an image of a test strip used for a medical test.

Claims:

We Claim:

1. A method for validating an image of a test strip used for a medical test, the method comprising:
receiving, by a processor, the image of the test strip, wherein the image comprises a dipped region and a non-dipped region, and wherein the dipped region is a region of the test strip dipped in a fluid sample;
determining, by the processor, an elapsed time based on a colour change of the dipped region using a machine learning model, wherein the elapsed time is a time duration between dipping of the test strip in the fluid sample and clicking of the image; and
comparing, by the processor, the elapsed time with an ideal time to compute a validity score of the image.

2. The method as claimed in claim 1, wherein the colour change is indicated by a percentage value between 0 and 100.

3. The method as claimed in claim 1, wherein the machine learning model is trained to determine the colour change and the elapsed time using a training dataset comprising at least one of a set of images of a plurality of test strips dipped in a fluid sample, a plurality of images of a plurality of test strips not dipped in a fluid sample, and corresponding chemical pad data for the plurality of test strips dipped in the fluid sample.

4. The method as claimed in claim 3, wherein the set of images comprises one or more images of the plurality of test strips, and wherein the one or more images are clicked at one or more predetermined elapsed times, and wherein the chemical pad data is based on a code present in the non-dipped region.

5. The method as claimed in claim 1, wherein the validity score is inversely proportional to a difference between the elapsed time and the ideal time.

6. The method as claimed in claim 1, wherein the colour values are Commission on Illumination (CIELAB) colour space values, and wherein the CIELAB colour space values comprise a Lightness value (L*), a green-red colour component value (a*), and a blue-yellow component value (b*).

7. The method as claimed in claim 3, wherein the ideal time is based on the code, wherein the code is deciphered by processing the image.

8. The method as claimed in claim 3, wherein the code is scannable using image processing, and wherein the code comprises at least one of a Quick Response (QR) code and a Bar Code.

9. The method as claimed in claim 1, wherein the validity score is a value between 0 and 100.

10. The method as claimed in claim 1, further comprising generating feedback for a user to click a new image based on the validity score, wherein the feedback is generated when the validity score is less than a threshold.

11. A system for validating an image of a test strip used for a medical test, the system comprising:
a memory; and
a processor coupled to the memory, wherein the processor is configured to execute program instructions stored in the memory for:
receiving an image of a test strip, wherein the image comprises a dipped region and a non-dipped region, and wherein the dipped region is a region of the test strip dipped in a fluid sample;
determining an elapsed time based on a colour change of the dipped region using a machine learning model, wherein the elapsed time is a time duration between dipping of the test strip in the fluid sample and clicking of the image; and
comparing the elapsed time with an ideal time to compute a validity score of the image.

12. A non-transitory computer program product having embodied thereon a computer program for validating an image of a test strip used for a medical test, the computer program product storing instructions for:
receiving an image of a test strip, wherein the image comprises a dipped region and a non-dipped region, and wherein the dipped region is a region of the test strip dipped in a fluid sample;
determining an elapsed time based on a colour change of the dipped region using a machine learning model, wherein the elapsed time is a time duration between dipping of the test strip in the fluid sample and clicking of the image; and
comparing the elapsed time with an ideal time to compute a validity score of the image.

Documents

Application Documents

# Name Date
1 202321026609-STATEMENT OF UNDERTAKING (FORM 3) [10-04-2023(online)].pdf 2023-04-10
2 202321026609-REQUEST FOR EARLY PUBLICATION(FORM-9) [10-04-2023(online)].pdf 2023-04-10
3 202321026609-PROOF OF RIGHT [10-04-2023(online)].pdf 2023-04-10
4 202321026609-POWER OF AUTHORITY [10-04-2023(online)].pdf 2023-04-10
5 202321026609-FORM-9 [10-04-2023(online)].pdf 2023-04-10
6 202321026609-FORM FOR SMALL ENTITY(FORM-28) [10-04-2023(online)].pdf 2023-04-10
7 202321026609-FORM FOR SMALL ENTITY [10-04-2023(online)].pdf 2023-04-10
8 202321026609-FORM 1 [10-04-2023(online)].pdf 2023-04-10
9 202321026609-FIGURE OF ABSTRACT [10-04-2023(online)].pdf 2023-04-10
10 202321026609-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-04-2023(online)].pdf 2023-04-10
11 202321026609-EVIDENCE FOR REGISTRATION UNDER SSI [10-04-2023(online)].pdf 2023-04-10
12 202321026609-DRAWINGS [10-04-2023(online)].pdf 2023-04-10
13 202321026609-DECLARATION OF INVENTORSHIP (FORM 5) [10-04-2023(online)].pdf 2023-04-10
14 202321026609-COMPLETE SPECIFICATION [10-04-2023(online)].pdf 2023-04-10
15 202321026609-MSME CERTIFICATE [11-04-2023(online)].pdf 2023-04-11
16 202321026609-FORM28 [11-04-2023(online)].pdf 2023-04-11
17 202321026609-FORM 18A [11-04-2023(online)].pdf 2023-04-11
18 Abstract.jpg 2023-05-09
19 202321026609-FER.pdf 2023-06-21
20 202321026609-OTHERS [09-08-2023(online)].pdf 2023-08-09
21 202321026609-FER_SER_REPLY [09-08-2023(online)].pdf 2023-08-09
22 202321026609-DRAWING [09-08-2023(online)].pdf 2023-08-09
23 202321026609-COMPLETE SPECIFICATION [09-08-2023(online)].pdf 2023-08-09
24 202321026609-CLAIMS [09-08-2023(online)].pdf 2023-08-09
25 202321026609-ABSTRACT [09-08-2023(online)].pdf 2023-08-09
26 202321026609-Response to office action [12-03-2024(online)].pdf 2024-03-12
27 202321026609-PatentCertificate01-10-2024.pdf 2024-10-01
28 202321026609-IntimationOfGrant01-10-2024.pdf 2024-10-01
29 202321026609-FORM 4 [16-05-2025(online)].pdf 2025-05-16
30 202321026609-POST GRANT EVIDENCE OPPOSITION [01-10-2025(online)].pdf 2025-10-01
31 202321026609-OTHERS [01-10-2025(online)].pdf 2025-10-01
32 Post Grant Opposition INTIMATION 551820.pdf 2025-11-24

Search Strategy

1 searchstrategy_202321026609E_15-06-2023.pdf

ERegister / Renewals

3rd: 16 May 2025

From 10/04/2025 - To 10/04/2026