
System And Method For Improving Medication Adherence

Abstract: The invention relates to a system (100) and method for improving medication adherence. The method includes sending an invitation key from a practitioner to a user (116); registering the user (116) upon successfully receiving the invitation key from the user (116); assigning a medicine schedule to the user (116) through the practitioner; notifying the user (116) to take each of a set of medicines based on the associated dosage and the medicine schedule via a notification; determining whether the notification is checked by the user (116); communicating with at least one of the user (116) and the practitioner when the notification is not checked by the user (116) for a predefined threshold time; receiving real-time data corresponding to administration of each of the set of medicines by the user (116); and validating administration of each of the set of medicines by the user (116) based on the associated dosage and the medicine schedule from the real-time data through an Artificial Intelligence (AI) model (110a). [To be published with FIG. 1]


Patent Information

Application #
Filing Date
09 December 2021
Publication Number
31/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
docketing@inventip.in
Parent Application

Applicants

Dexium Technologies Private Limited
Mukund bld, 1st Floor Nr St. Sebastian Chapel Murida, Fatorda, Margao Goa INDIA 403602

Inventors

1. Dimple Dang
J1309, Amethyst Tower PBEL City, Peeramcheruvu Village, Rajendra Nagar Mandal Hyderabad, Telangana India 500091
2. Amit Dang
J1309, Amethyst Tower PBEL City, Peeramcheruvu Village, Rajendra Nagar Mandal Hyderabad, Telangana India 500091
3. Pawan Rane
Mukund bld, 1st Floor Nr St. Sebastian Chapel Murida, Fatorda Margao Goa India 403602

Specification

CLAIMS
WHAT IS CLAIMED IS:

1. A method for improving medication adherence, the method comprising:
sending, by an adherence improvement device (100a), an invitation key from a practitioner to a user (116);
registering, by the adherence improvement device (100a), the user (116) upon successfully receiving the invitation key from the user (116);
assigning, by the adherence improvement device (100a), a medicine schedule to the user (116) through the practitioner, wherein the medicine schedule comprises a set of medicines corresponding to the user (116), and wherein each of the set of medicines is associated with a dosage prescribed by the practitioner;
notifying, by the adherence improvement device (100a), the user (116) to take each of the set of medicines based on the associated dosage and the medicine schedule via a notification;
determining, by the adherence improvement device (100a), whether the notification is checked by the user (116);
communicating, by the adherence improvement device (100a), with at least one of the user (116) and the practitioner when the notification is not checked by the user (116) for a predefined threshold time;
receiving, by the adherence improvement device (100a), real-time data corresponding to administration of each of the set of medicines by the user (116) through at least one camera; and
validating, by the adherence improvement device (100a), administration of each of the set of medicines by the user (116) based on the associated dosage and the medicine schedule from the real-time data through an Artificial Intelligence (AI) model (110a).

2. The method of claim 1, further comprising:
determining an adherence score based on an adherence of the user (116) to the medicine schedule; and
assigning reward points to the user (116) based on the adherence score when the adherence score is equal to or above a predefined threshold adherence score, wherein each of the reward points corresponds to a predefined monetary value.

3. The method of claim 1, wherein, for each of the set of medicines, the real-time data comprise a face of the user (116) administering a medicine, and wherein, for each of the set of medicines, validating administration of each of the set of medicines by the user (116) comprises:
identifying the face of the user (116) from the real-time data through the AI model (110a), wherein the AI model (110a) is based on at least one of a Histogram of Oriented Gradients (HOG) algorithm and a linear Support Vector Machine (SVM) algorithm;
identifying a medicine administration action of the user (116) from the real-time data; and
establishing a successful administration of the medicine by the user (116) from the real-time data through the AI model (110a) upon successfully identifying the medicine administration action of the user (116).

4. The method of claim 1, wherein, for each of the set of medicines, the real-time data comprise a palm of the user (116) comprising a medicine, and wherein, for each of the set of medicines, validating administration of each of the set of medicines by the user (116) comprises:
identifying the medicine in the palm of the user (116) from the real-time data through the AI model (110a), wherein the AI model (110a) is based on at least one of a You Only Look Once (YOLO) series of algorithms;
determining a number of medicine units in the palm of the user (116) from the real-time data through the AI model (110a);
comparing the determined number of medicine units with the associated dosage of the medicine for the user (116); and
establishing a successful administration of the medicine by the user (116) from the real-time data through the AI model (110a) when the determined number of medicine units is in accordance with the associated dosage of the medicine for the user (116).

5. The method of claim 1, further comprising assigning reward points to the user (116) upon successfully receiving the invitation key from the user (116), wherein each of the reward points corresponds to a predefined monetary value.

6. The method of claim 1, wherein communicating with the at least one of the user (116) and the practitioner further comprises one of:
communicating through a text message with the at least one of the user (116) and the practitioner when the notification is not checked by the user (116); or
communicating through a voice call when the message is not seen within a predefined time period.

7. A system (100) for improving medication adherence, the system (100) comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which when executed by the processor, cause the processor to:
send an invitation key from a practitioner to a user (116);
register the user (116) upon successfully receiving the invitation key from the user (116);
assign a medicine schedule to the user (116) through the practitioner, wherein the medicine schedule comprises a set of medicines corresponding to the user (116), and wherein each of the set of medicines is associated with a dosage prescribed by the practitioner;
notify the user (116) to take each of the set of medicines based on the associated dosage and the medicine schedule via a notification;
determine whether the notification is checked by the user (116);
communicate with at least one of the user (116) and the practitioner when the notification is not checked by the user (116) for a predefined threshold time;
receive real-time data corresponding to administration of each of the set of medicines by the user (116) through at least one camera; and
validate administration of each of the set of medicines by the user (116) based on the associated dosage and the medicine schedule from the real-time data through an Artificial Intelligence (AI) model (110a).

8. The system (100) of claim 7, wherein the processor instructions, on execution, further cause the processor to:
determine an adherence score based on an adherence of the user (116) to the medicine schedule; and
assign reward points to the user (116) based on the adherence score when the adherence score is equal to or above a predefined threshold adherence score, wherein each of the reward points corresponds to a predefined monetary value.

9. The system (100) of claim 7, wherein, for each of the set of medicines, the real-time data comprise a face of the user (116) administering a medicine, and wherein, for each of the set of medicines, to validate administration of each of the set of medicines by the user (116), the processor instructions, on execution, further cause the processor to:
identify the face of the user (116) from the real-time data through the AI model, wherein the AI model (110a) is based on at least one of a Histogram of Oriented Gradients (HOG) algorithm and a linear Support Vector Machine (SVM) algorithm;
identify a medicine administration action of the user (116) from the real-time data; and
establish a successful administration of the medicine by the user (116) from the real-time data through the AI model (110a) upon successfully identifying the medicine administration action of the user (116).

10. The system (100) of claim 7, wherein, for each of the set of medicines, the real-time data comprise a palm of the user (116) comprising a medicine, and wherein, for each of the set of medicines, to validate administration of each of the set of medicines by the user (116), the processor instructions, on execution, further cause the processor to:
identify the medicine in the palm of the user (116) from the real-time data through the AI model (110a), wherein the AI model (110a) is based on at least one of a You Only Look Once (YOLO) series of algorithms;
determine a number of medicine units in the palm of the user (116) from the real-time data through the AI model (110a);
compare the determined number of medicine units with the associated dosage of the medicine for the user (116); and
establish a successful administration of the medicine by the user (116) from the real-time data through the AI model (110a) when the determined number of medicine units is in accordance with the associated dosage of the medicine for the user (116).
DESCRIPTION
Technical Field
[001] Generally, the invention relates to Artificial Intelligence (AI). More specifically, the invention relates to a system and method for improving medication adherence using an AI-enabled adherence improvement device.
BACKGROUND
[002] Typically, a doctor prescribes an appropriate routine of medication to a patient based on an evaluation of the patient’s problem. Medication adherence may be referred to as usage of the prescribed medication routine in the correct way. The correct way may include intake of the correct dosage on a regular basis, which may lead to successful treatment and management. In other words, sincere adherence to a medication schedule is a key element of treatment success. Further, medication non-adherence may affect the patient and significantly worsen the patient’s condition, which may further lead to increased health care expenditure and, in some cases, death.
[003] Various methods are available for medication adherence. However, the available methods address lack of adherence only through counselling, pill tracking, and self-reported medication intake. Further, the available methods are not sufficient to keep track of the patient accurately, and there may be chances of error. Therefore, there is a need to develop a system that may automatically monitor the patient, remind the patient to take medicine based on the prescribed medication schedule, and also encourage the patient, by providing incentives, to follow the medication schedule.

SUMMARY
[004] In one embodiment, a method for improving medication adherence is disclosed. In one example, the method includes sending an invitation key from a practitioner to a user. Further, the method includes registering the user upon successfully receiving the invitation key from the user. Further, the method includes assigning a medicine schedule to the user through the practitioner. The medicine schedule includes a set of medicines corresponding to the user. Each of the set of medicines is associated with a dosage prescribed by the practitioner. Further, the method includes notifying the user to take each of the set of medicines based on the associated dosage and the medicine schedule via a notification. Further, the method includes determining whether the notification is checked by the user. Further, the method includes communicating with at least one of the user and the practitioner when the notification is not checked by the user for a predefined threshold time. Further, the method includes receiving real-time data corresponding to administration of each of the set of medicines by the user through at least one camera. Further, the method includes validating administration of each of the set of medicines by the user based on the associated dosage and the medicine schedule from the real-time data through an Artificial Intelligence (AI) model.
[005] In one embodiment, a system for improving medication adherence is disclosed. In one example, the system includes a processor and a computer-readable medium communicatively coupled to the processor. The computer-readable medium may store processor-executable instructions, which, on execution, may cause the processor to send an invitation key from a practitioner to a user. The processor-executable instructions, on execution, may further cause the processor to register the user upon successfully receiving the invitation key from the user. The processor-executable instructions, on execution, may further cause the processor to assign a medicine schedule to the user through the practitioner. The medicine schedule includes a set of medicines corresponding to the user. Each of the set of medicines is associated with a dosage prescribed by the practitioner. The processor-executable instructions, on execution, may further cause the processor to notify the user to take each of the set of medicines based on the associated dosage and the medicine schedule via a notification. The processor-executable instructions, on execution, may further cause the processor to determine whether the notification is checked by the user. The processor-executable instructions, on execution, may further cause the processor to communicate with at least one of the user and the practitioner when the notification is not checked by the user for a predefined threshold time. The processor-executable instructions, on execution, may further cause the processor to receive real-time data corresponding to administration of each of the set of medicines by the user through at least one camera. The processor-executable instructions, on execution, may further cause the processor to validate administration of each of the set of medicines by the user based on the associated dosage and the medicine schedule from the real-time data through an Artificial Intelligence (AI) model.
[006] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[007] The present application can be best understood by reference to the following description taken in conjunction with the accompanying drawing figures, in which like parts may be referred to by like numerals.
[008] FIG. 1 illustrates a block diagram of an exemplary system for improving medication adherence, in accordance with some embodiments of the present disclosure.
[009] FIG. 2 illustrates a flow diagram of an exemplary process for improving medication adherence from a practitioner perspective, in accordance with some embodiments of the present disclosure.
[010] FIGS. 3A-B illustrate flow diagrams of an exemplary process for improving medication adherence from a user perspective, in accordance with some embodiments of the present disclosure.
[011] FIG. 4 illustrates an exemplary scenario for facial detection and identification of the user from an image through an AI model, in accordance with some embodiments of the present disclosure.
[012] FIGS. 5A-C illustrate detecting objects from an image through an AI model, in accordance with some embodiments of the present disclosure.
[013] FIGS. 6A-C illustrate detecting objects from an image through an AI model, in accordance with some embodiments of the present disclosure.
[014] FIG. 7 illustrates counting of an object in an image through an AI model, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION OF THE DRAWINGS
[015] The following description is presented to enable a person of ordinary skill in the art to make and use the invention and is provided in the context of particular applications and their requirements. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention. Moreover, in the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill in the art will realize that the invention might be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features disclosed herein.
[016] While the invention is described in terms of particular examples and illustrative figures, those of ordinary skill in the art will recognize that the invention is not limited to the examples or figures described. Those skilled in the art will recognize that the operations of the various embodiments may be implemented using hardware, software, firmware, or combinations thereof, as appropriate. For example, some processes can be carried out using processors or other digital circuitry under the control of software, firmware, or hard-wired logic. (The term “logic” herein refers to fixed hardware, programmable logic and/or an appropriate combination thereof, as would be recognized by one skilled in the art to carry out the recited functions.) Software and firmware can be stored on computer-readable storage media. Some other processes can be implemented using analog circuitry, as is well known to one of ordinary skill in the art. Additionally, memory or other storage, as well as communication components, may be employed in embodiments of the invention.
[017] Referring now to FIG. 1, a block diagram of an exemplary system 100 for improving medication adherence is illustrated, in accordance with some embodiments of the present disclosure. The system 100 may be configured to provide reminders, incentives, and other information (for example, information therapy and side effects of a medicine) required to improve medication adherence. Further, the system 100 may automatically identify intake of medicine by a user (i.e., a patient).
[018] To improve medication adherence, the system 100 may include an adherence improvement device 100a. Examples of the adherence improvement device 100a may include, but are not limited to, a server, a desktop, a laptop, a notebook, a tablet, a smartphone, a mobile phone, an application server, or the like. Further, the adherence improvement device 100a may include, but is not limited to, a registration module 102a, a medicine schedule checker 102, a reminder module 104, a first interaction module 106, a second interaction module 108, a validation module 110, a score generation module 112, a score analyzer 112a, and a reward transferring module 114. The user(s) 116, as illustrated in the system 100, may be a doctor and/or a patient who may use the adherence improvement device 100a.
[019] In some embodiments, the registration module 102a may be used by the patient for registration purposes. In some other embodiments, the registration module 102a may be used by the doctor to generate a registration code for various patients. The patient may enter the registration code shared by the doctor and complete the registration by providing details (for example, contact information, name, and age). Thus, the patient may then be considered in a registered patient list of the respective doctor. Additionally, there may be some other data fields in the registration module 102a that need to be filled in by the patient, such as bank account number, UPI ID, blood group information, and the like.
[020] The medicine schedule checker 102 may be configured to identify the medicine schedules of the patients. For example, there may be a number of registered patients with different medicine schedules. Therefore, the medicine schedule checker 102 may help the doctor in accurately identifying the medicine schedule for each of the registered patients. Through this, the doctor may further be able to keep track of the patients’ activity (i.e., whether the patients are regularly taking their medicines or not). Further, the medicine schedule checker 102 may be communicatively connected to a reminder module 104. In some embodiments, the reminder module 104 may generate reminders and transmit them to the patients to take medicine based on their respective identified medicine schedules. Further, the reminder module 104 may verify whether the patients have seen the reminder or not. Additionally, in some other embodiments, the reminder module 104 may be configured to receive a reminder for taking prescribed medicines on time.
[021] The reminder module 104 may be communicatively coupled to the first interaction module 106, the second interaction module 108, and the validation module 110. The first interaction module 106 may be a message module. The message module may automatically generate a note (for example, a text note) and send it to the patient when the reminder is not seen by the patient. The message may also be sent to a caretaker of the patient. In some embodiments, the message module may be used by the doctor to manually write advice or send a query to the patient. Further, the message module may check whether the message has been seen by the patient or not. In case the message is not seen by the patient, the second interaction module 108 may initiate a voice call to the patient. Therefore, the second interaction module 108 may be a voice call module. The sequence of sending the message and placing the voice call may be changed based on the application requirement. Once at least one of the verification of the reminder, the verification of the message, and the interaction with the user 116 or the caretaker through the voice call is successful, the adherence improvement device 100a may monitor whether the patient is taking medicine or not.
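By way of a minimal, non-limiting sketch, the escalation sequence described above (reminder, then text message, then voice call) may be expressed as follows; the helper function and channel names are illustrative placeholders and are not tied to any specific messaging or telephony service.

def notify(channel: str, recipient: str, text: str) -> bool:
    """Hypothetical delivery stub; returns True if the recipient acknowledged."""
    print(f"[{channel}] to {recipient}: {text}")
    return False  # assume no acknowledgement, to show the full escalation path

def escalate(patient: str, caretaker: str, medicine: str) -> str:
    # Reminder -> SMS to patient -> SMS to caretaker -> voice call to caretaker.
    if notify("reminder", patient, f"Time to take {medicine}"):
        return "reminder_seen"
    if notify("sms", patient, f"You missed the reminder for {medicine}"):
        return "sms_seen"
    if notify("sms", caretaker, f"{patient} has not confirmed {medicine}"):
        return "caretaker_sms_seen"
    notify("voice_call", caretaker, f"Please check on {patient}")
    return "voice_call_placed"

print(escalate("patient_1", "caretaker_1", "medicine_1"))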
[022] Further, the validation module 110 may monitor the patient. In particular, the validation module 110 may check whether the patient is taking medicines on time or not. For this, the validation module 110 may employ Artificial Intelligence (AI) model(s) 110a. The AI model 110a may use computer vision techniques for identification and validation of administration of medicine by the user 116. In some embodiments, two separate AI models may be used for face detection and pill detection.
[023] In an embodiment, the AI model 110a may include an input layer, an output layer, and one or more hidden layers. The AI model 110a may receive input data (images) from the input layer. In case of face recognition, the input data may include real-time image data of a face of the user 116 administering a medicine and in case of pill detection, the input data may include real-time image data of a palm of the user 116 comprising a medicine. Each of the input layer, the output layer, and the one or more hidden layers includes one or more nodes. In the AI model 110a, information flow is from the input layer to the output layer. In some embodiments, for facial recognition, the AI model 110a may be based on at least one of a Histogram of Oriented Gradients (HOG) algorithm and a linear Support Vector Machine (SVM) algorithm. In some embodiments, for pill detection, the AI model 110a may be based on at least one of a You Only Look Once (YOLO) series of algorithms.
[024] For facial recognition, validation of administration of medicines through the AI model 110a may include identifying the face of the user 116 from the real-time data through the AI model 110a. Further, validation of administration of medicines through the AI model 110a may include identifying a medicine administration action of the user 116 from the real-time data. Further, validation of administration of medicines through the AI model 110a may include establishing a successful administration of the medicine by the user 116 from the real-time data through the AI model 110a upon successfully identifying the medicine administration action of the user 116.
[025] For pill detection, validation of administration of medicines through the AI model 110a may include identifying the medicine in the palm of the user 116 from the real-time data through the AI model 110a. Further, validation of administration of medicines through the AI model 110a may include determining a number of medicine units in the palm of the user 116 from the real-time data through the AI model 110a. Further, validation of administration of medicines through the AI model 110a may include comparing the determined number of medicine units with the associated dosage of the medicine for the user 116. Further, validation of administration of medicines through the AI model 110a may include establishing a successful administration of the medicine by the user from the real-time data through the AI model 110a when the determined number of medicine units is in accordance with the associated dosage of the medicine for the user 116. Validation of administration of medicines using the AI model 110a may be further explained in detail in conjunction with FIGS. 5-7.
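A minimal sketch of the dosage check described above is given below, assuming the detector output is available as a list of (label, confidence) pairs; the function and variable names are illustrative and do not correspond to any particular library.

def validate_administration(detections, medicine, prescribed_units, min_confidence=0.5):
    """Return True when the detected pill count matches the prescribed dosage."""
    detected_units = sum(
        1 for label, confidence in detections
        if label == medicine and confidence >= min_confidence
    )
    return detected_units == prescribed_units

# Example: two confident pill detections checked against a two-unit dosage.
sample = [("pill", 0.91), ("pill", 0.88), ("hand", 0.72)]
print(validate_administration(sample, "pill", prescribed_units=2))  # True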
[026] In some embodiments, nodes of the input layer may be passive nodes (nodes that do not modify the input data). It may be noted that the nodes of the input layer may receive a single value and duplicate the value to multiple outputs. Further, the nodes of the hidden layers and the output layer may be active nodes (nodes that modify the input data). Further, variables (x1-xp) may hold data to be evaluated. For example, the variables may be pixel values from an image. In some embodiments, the variables may be output (such as, diameter, brightness, edge sharpness, etc.) of some other classification algorithms, such as classifiers in cancer detection.
[027] Further, the score generation module 112 may interact with the AI model(s) 110a and determine an adherence score for the patient based on the validation. The adherence score may be calculated based on points gathered by the patient in a predefined period of time (for example, points gathered in one month). Further, the reward transferring module 114 may analyze the adherence score determined by the score generation module 112. Further, the reward transferring module 114 may transfer a reward, when the adherence score is greater than or equal to a predefined percentage, to at least one of a wallet of the user 116 and a linked bank account. The reward may depend on the total points secured by the patient, and in each iteration the points may be added to the previous balance. By way of an example, for one point, the user may earn Rupee 1 as a reward.
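As a minimal sketch of this scoring and reward logic, assuming one point per validated dose, the 80% threshold used as an example later in the description, and the one-rupee-per-point conversion mentioned above:

def adherence_score(points_earned, points_possible):
    """Adherence expressed as a percentage of the doses validated in the period."""
    return 100.0 * points_earned / points_possible if points_possible else 0.0

def reward_in_rupees(points_earned, points_possible, threshold_percent=80.0):
    score = adherence_score(points_earned, points_possible)
    # Transfer a reward only when the score meets the threshold; one point ~ Rupee 1.
    return points_earned if score >= threshold_percent else 0

# Example: 25 of 28 scheduled doses validated in a month (about 89% adherence).
print(reward_in_rupees(points_earned=25, points_possible=28))  # 25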
[028] It should be noted that the adherence improvement device 100a may be implemented in programmable hardware devices such as programmable gate arrays, programmable array logic, programmable logic devices, or the like. Alternatively, the adherence improvement device 100a may be implemented in software for execution by various types of processors. An identified engine/module of executable code may, for instance, include one or more physical or logical blocks of computer instructions which may, for instance, be organized as a component, module, procedure, function, or other construct. Nevertheless, the executables of an identified engine/module need not be physically located together but may include disparate instructions stored in different locations which, when joined logically together, include the identified engine/module and achieve the stated purpose of the identified engine/module. Indeed, an engine or a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices.
[029] As will be appreciated by one skilled in the art, a variety of processes may be employed for improving medication adherence. For example, the exemplary system 100 and associated adherence improvement device 100a may improve the medication adherence, by the process discussed herein. In particular, as will be appreciated by those of ordinary skill in the art, control logic and/or automated routines for performing the techniques and steps described herein may be implemented by the system 100 and the associated adherence improvement device 100a either by hardware, software, or combinations of hardware and software. For example, suitable code may be accessed and executed by the one or more processors on the system 100 to perform some or all of the techniques described herein. Similarly, application specific integrated circuits (ASICs) configured to perform some or all the processes described herein may be included in the one or more processors on the system 100.
[030] Referring now to FIG. 2, an exemplary process 200 for improving medication adherence from a practitioner perspective is depicted via a flow diagram, in accordance with some embodiments of the present disclosure. Fig. 2 is explained in conjunction with FIG. 1. Each step of the process 200 may be performed using various modules within an adherence improvement device (similar to the modules 102-114 of adherence improvement device 100a).
[031] At step 202, a user may log into an account by providing details. The details may include, but are not limited to, a username and a password. In some embodiments, another mode of authentication (for example, face recognition) may also be used instead of entering the details. Further, in some embodiments, there may also be a 'forgot password' option for a user who forgets the password. It should be noted that, here, the user may be a doctor. At step 204, the doctor may share the invitation code with a patient. In some embodiments, the invitation code may be used by the patient for registration. The invitation code may include digits, alphabets, special characters, or a combination thereof. Thereafter, at step 206, it is checked whether the patient has used the provided invitation code or not. In case the invitation code is used by the patient, points may be added to the patient’s balance, at step 208. In other words, the patient may earn pre-specified points upon using the invitation code. Otherwise, no points may be added to the patient’s balance.
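A minimal sketch of generating such an invitation code is shown below; the eight-character length and the upper-case letter/digit alphabet are assumptions made to mirror the example code 'KYT002S1' used later in the description.

import secrets
import string

def generate_invitation_code(length=8):
    """Generate a random invitation code of upper-case letters and digits."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_invitation_code())  # e.g. a 'KYT002S1'-style code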
[032] Further, at step 210, current statistics may be checked. At step 212, the patient details may be accessed. The patient details may include, but are not limited to, general details, treatment details, shared media, adherence score, and contact details. The patient details may be accessed to identify the medicine schedule of the patient. At step 214, the patient may be followed up, and at step 216, a call or a message may be sent to the patient depending on the situation.
[033] Referring now to FIGS. 3A and 3B, an exemplary process 300 for improving medication adherence from a user perspective is depicted via a flow diagram, in accordance with some embodiments of the present disclosure. Fig. 3A and 3B are explained in conjunction with FIG. 1. Each step of the process 300 may be executed by various modules within an adherence improvement device (such as the modules 102a-114 of the adherence improvement device 100a).
[034] In FIG. 3A, at step 302, a registration may be done by a patient using an invitation code provided by the doctor. It should be noted that a registration module (for example, the registration module 102a) may be used for the registration. By way of an example, the invitation code may be ‘KYT002S1’. At step 304, a medication schedule required to be followed by the user may be identified by checking the user details or prescription. For example, a patient may be required to take a specific pill prescribed by the doctor three times a day. At step 306, a reminder may be generated and sent to the patient. The reminder may be sent based on the medicine schedule of the patient. For the above-mentioned example, in some embodiments, the reminder may be sent to the user three times per day at pre-defined intervals of time. In some embodiments, it may be verified whether the reminder is seen by the receiver (i.e., the user) or not.
[035] Thereafter at step 308, a message or SMS may be sent to the patient if the patient does not see the reminder within a pre-defined period of time. For example, after one hour of sending the reminder, the SMS may be sent to the patient when the reminder verification is unsuccessful. In some embodiments, it may also be checked if the user has seen the SMS or not. Further, at step 310, the SMS may be transmitted to a caretaker of the user. In some embodiments, it may be checked whether the message is seen by the caretaker or not. Thereafter, at step 312, a voice call may be initiated and connected to the caretaker. The voice call may be connected based on the contact information provided in the user details. Further, it may be checked if the voice call is attended by the caretaker or not. If none of the above-mentioned verifications (i.e., reminder, SMS, and call verification) is successful, the user may not earn any point.
[036] On the other hand, in case of successful verification (any of the above-mentioned verifications), various steps illustrated in FIG. 3B may be executed sequentially. At step 314, medicine in the patient’s palm may be checked using a camera. Further, at step 316, an AI validation may be performed, and at step 316a, it may be checked whether the AI validation is successful. In some embodiments, object detection may be performed by the AI models to check whether the patient is taking the medicine. In particular, detection of the patient’s face and detection of the pill may be performed by the AI models. In some embodiments, a YOLO neural network technique may be used for pill detection. Additionally, in some embodiments, a Convolutional Neural Network (CNN)-based model or a model based on HOG and linear SVM may be used to detect the patient’s face. This is further explained in detail in conjunction with FIG. 4.
[037] In case of successful validation, at step 318, points may be added to the patient’s wallet. After that, at step 320, an adherence percentage score may be determined based on the total earned points within a specific time period (for example, after one month). In some embodiments, it may be checked whether the determined score is greater than or equal to a threshold (e.g., 80%). At step 322, a reward based on the points earned after successful validation may be transferred to the bank account of the patient. Further, if the percentage adherence score is less than the threshold, the patient may not earn any reward.
[038] Referring now to FIG. 4, an exemplary scenario 400 for facial detection and identification of the user from an image through an AI model (such as, the AI model 110a) is illustrated, in accordance with some embodiments of the present disclosure. The image may include one or more faces at various orientations and angles. It may be noted that Dlib includes facial recognition algorithms to perform face detection and facial landmark detection of the user. The face detection and facial landmark detection of the patient may be performed based on HOG and linear SVM. It should be noted that the combination of HOG and linear SVM may enable accurate detection in real-time video at a higher speed.
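A minimal sketch of HOG-based face detection using Dlib's built-in frontal face detector (HOG features with a linear SVM) is shown below; the image file name is a placeholder.

import dlib

detector = dlib.get_frontal_face_detector()       # HOG + linear SVM detector
image = dlib.load_rgb_image("patient_frame.jpg")   # placeholder file name

# The second argument upsamples the image once to help find smaller faces.
faces = detector(image, 1)
print("Faces detected:", len(faces))
for rect in faces:
    print(rect.left(), rect.top(), rect.right(), rect.bottom())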
[039] Further, a Convolutional Neural Network (CNN)-based face detector may also be available in Dlib. The CNN-based face detector may detect faces from a plurality of angles. However, the CNN-based face detector is not suitable for real-time face detection. Further, the CNN-based face detector requires a Graphics Processing Unit (GPU) for execution. To achieve the same speed as the HOG-based detector, the CNN-based face detector may require a powerful GPU.
[040] On the other hand, the facial recognition algorithms of Dlib may allow the user to implement face detection, facial recognition, and real-time face tracking in multiple projects.
[041] It may be noted that, for each of a plurality of users, at least 5 training images may be required by the Support Vector Classifier (SVC) to produce meaningful results. Further, a directory may be created with training images for each of the plurality of users in the format below:
face_recognize.py
test_image.jpg
train_dir/
    person_1/
        person_1_face-1.jpg
        person_1_face-2.jpg
        ...
        person_1_face-n.jpg
    person_2/
        person_2_face-1.jpg
        person_2_face-2.jpg
        ...
        person_2_face-n.jpg
    ...
    person_n/
        person_n_face-1.jpg
        person_n_face-2.jpg
        ...
        person_n_face-n.jpg
[042] Further, the face_recognition Application Programming Interface (API) may generate face encodings for a face detected in the images. It should be noted that a face encoding is a way to represent a face using a set of 128 computer-generated measurements. Two different pictures of the same person may have similar face encodings, while pictures of two different people may have different face encodings.
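A minimal sketch of comparing two such 128-d encodings with the face_recognition API is shown below; a smaller Euclidean distance indicates more similar faces, and the image file names are placeholders taken from the directory format above.

import face_recognition

enrolled = face_recognition.load_image_file("train_dir/person_1/person_1_face-1.jpg")
candidate = face_recognition.load_image_file("test_image.jpg")

enrolled_encoding = face_recognition.face_encodings(enrolled)[0]
candidate_encoding = face_recognition.face_encodings(candidate)[0]

# compare_faces applies a Euclidean-distance threshold (default tolerance 0.6).
match = face_recognition.compare_faces([enrolled_encoding], candidate_encoding)[0]
distance = face_recognition.face_distance([enrolled_encoding], candidate_encoding)[0]
print("Match:", match, "distance:", round(float(distance), 3))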
[043] Upon generating face encodings for each of the plurality of users, Support Vector Classifier (SVC) with scikit-learn may be trained using the face encodings and labels associated with the face encodings for known faces in the training directory. Further, the API may detect faces in a test image and the trained SVC may predict each of the known faces in the test image.
[044] By way of an example, facial recognition may be implemented through the following code:
"""Face recognition using a Support Vector Classifier (SVC).

Usage:
    face_recognize.py -d <train_dir> -i <test_image>

Options:
    -h, --help                       Show this help.
    -d, --train_dir <train_dir>      Directory with one sub-directory of training images per person.
    -i, --test_image <test_image>    Image containing the unknown faces to identify.
"""

# importing libraries
import face_recognition
import docopt
from sklearn import svm
import os


def face_recognize(dir, test):
    # Training the SVC classifier
    # The training data would be all the
    # face encodings from all the known
    # images and the labels are their names
    encodings = []
    names = []

    # Training directory
    if dir[-1] != '/':
        dir += '/'
    train_dir = os.listdir(dir)

    # Loop through each person in the training directory
    for person in train_dir:
        pix = os.listdir(dir + person)

        # Loop through each training image for the current person
        for person_img in pix:
            # Get the face encodings for the face in each image file
            face = face_recognition.load_image_file(
                dir + person + "/" + person_img)
            face_bounding_boxes = face_recognition.face_locations(face)

            # If the training image contains exactly one face
            if len(face_bounding_boxes) == 1:
                face_enc = face_recognition.face_encodings(face)[0]
                # Add the face encoding for the current image
                # with the corresponding label (name) to the training data
                encodings.append(face_enc)
                names.append(person)
            else:
                print(person + "/" + person_img + " can't be used for training")

    # Create and train the SVC classifier
    clf = svm.SVC(gamma='scale')
    clf.fit(encodings, names)

    # Load the test image with unknown faces into a numpy array
    test_image = face_recognition.load_image_file(test)

    # Find all the faces in the test image using the default HOG-based model
    face_locations = face_recognition.face_locations(test_image)
    no = len(face_locations)
    print("Number of faces detected: ", no)

    # Predict all the faces in the test image using the trained classifier
    print("Found:")
    for i in range(no):
        test_image_enc = face_recognition.face_encodings(test_image)[i]
        name = clf.predict([test_image_enc])
        print(*name)


def main():
    args = docopt.docopt(__doc__)
    train_dir = args["--train_dir"]
    test_image = args["--test_image"]
    face_recognize(train_dir, test_image)


if __name__ == "__main__":
    main()
[045] In an embodiment, the above exemplary code may be executed on a terminal through the following command:
python face_recognize.py -d train_dir -i test_image.jpg
[046] Referring now to FIGS. 5A-C, detecting objects from an image through an AI model (such as, the AI model 110a) is illustrated, in accordance with some embodiments of the present disclosure. By way of an example, detecting objects from the image may include three stages (500a, 500b and 500c). As will be appreciated, object detection is a computer vision task that involves both localizing one or more objects within an image and classifying each object in the image. Object detection requires both successful object localization to locate and draw a bounding box around each object in an image and object classification to predict the correct class of object that was localized.
[047] In some embodiments, a model from the YOLO family of models may be used as the AI model for detecting objects. The YOLO family of models includes a series of end-to-end deep learning models designed for unified and real-time object detection. Typically, a YOLO model may involve a single deep CNN (originally a version of GoogLeNet, later updated and called DarkNet, based on VGG) that splits an input image into a grid of cells. Further, each of the grid cells directly predicts a bounding box and an object classification. A large number of candidate bounding boxes are consolidated into the final prediction by a post-processing step to produce the output result.
[048] The YOLO family of models may include multiple variations (for example, YOLOv1, YOLOv2, YOLOv3, and YOLOv4). YOLOv1 proposed the general architecture. YOLOv2 refined the design and made use of predefined anchor boxes to improve the bounding box proposals. YOLOv3 further refined the model architecture and the training process.
[049] The accuracy of the YOLO family of models is close to, but not as good as, that of Region-Based Convolutional Neural Networks (R-CNNs). However, the YOLO family of models is suitable for object detection because of its detection speed, often demonstrated in real time on video or on a camera feed input.
[050] Object detection is a computer vision technique that works to identify and locate objects within an image or video. Specifically, object detection draws bounding boxes around detected objects, enabling localization of the said objects in a given scene.
[051] It should be noted that object detection is not the same as image recognition. As will be appreciated, image recognition assigns a label to an image. For example, a picture of a dog receives the label “dog”. A picture of two dogs still receives the label “dog”. Object detection, on the other hand, draws a box around each of the two dogs and labels the box as “dog”. An object detection model predicts where each object is and what label should be applied. Therefore, object detection provides more information about an image than image recognition.
[052] Object detection algorithms may be based on classification or regression. Classification algorithms are implemented in two stages. First, the classification algorithms select regions of interest in an image. Second, the classification algorithms classify the regions of interest using CNN. The classification algorithms may be slow due to running predictions for each of the regions of interest. By way of an example, the classification algorithms may include Region-based Convolutional Neural Network (RCNN), Fast-RCNN, Faster-RCNN, Mask-RCNN, and RetinaNet. Regression algorithms predict classes and bounding boxes for entire image in one run instead of selecting regions of interest in the image. By way of an example, the regression algorithms may include YOLO family of algorithms, and Single Shot Multibox Detector (SSD). In general, the regression algorithms trade a bit of accuracy for large improvements in speed. Therefore, the regression algorithms are commonly used for real-time object detection.
[053] In FIG. 5A, a bounding box is generated around a detected object in the image for preprocessing the image. The bounding box in the image may be described using descriptors such as, center of a bounding box (bx, by), width (bw), height (bh), and a value “c” corresponding to a class of an object (such as, car, traffic lights, etc.). Further, a pc value (probability that there is an object in the bounding box) may be predicted via the AI model.
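By way of a minimal sketch of the descriptor above, a centre-format box (bx, by, bw, bh) may be converted to corner coordinates; the sample values, the object probability pc, and the class c shown here are illustrative.

def to_corners(bx, by, bw, bh):
    """Convert a centre/width/height box into (x1, y1, x2, y2) corner coordinates."""
    return (bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2)

# Example box: centred at (0.5, 0.6), 20% of the image wide, 30% tall,
# with object probability pc = 0.92 and class c = "pill".
box = {"bx": 0.5, "by": 0.6, "bw": 0.2, "bh": 0.3, "pc": 0.92, "c": "pill"}
print(to_corners(box["bx"], box["by"], box["bw"], box["bh"]))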
[054] In FIG. 5B, the preprocessed image is received as an input. The YOLO algorithm (based on a deep CNN with a reduction factor of 32) may not search for regions of interest that may potentially contain an object in the image. Instead, the YOLO algorithm splits the image into encoding cells (typically using a 19×19 grid). Each of the encoding cells is responsible for predicting 5 bounding boxes (in case there is more than one object in the cell). Therefore, 1805 bounding boxes (19×19×5) may be obtained for one image.
[055] In FIG. 5C, the encoding cells and bounding boxes are received. Most of the encoding cells and bounding boxes in the image may not contain an object. Therefore, boxes with a low object probability (pc) and bounding boxes that share the highest overlapping area with a stronger box are removed by a process called Non-Maximum Suppression (NMS).
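A minimal sketch of Non-Maximum Suppression as described above is shown below: boxes with a low object probability are discarded, and any remaining box that overlaps a stronger box beyond an IoU threshold is suppressed. Boxes use an (x1, y1, x2, y2, pc) corner format, and the thresholds and sample values are illustrative.

def iou(a, b):
    """Intersection over Union of two corner-format boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, pc_threshold=0.5, iou_threshold=0.5):
    boxes = [b for b in boxes if b[4] >= pc_threshold]   # drop low-probability boxes
    boxes.sort(key=lambda b: b[4], reverse=True)          # strongest first
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept

candidates = [(10, 10, 50, 50, 0.9), (12, 12, 52, 52, 0.8), (100, 100, 140, 140, 0.7)]
print(nms(candidates))  # the second box is suppressed by the first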
[056] Referring now to FIG. 6A-C, detecting objects from an image through an AI model (such as, the AI model 110a) is illustrated, in accordance with some embodiments of the present disclosure. In FIG. 6A, an exemplary scenario 600a of detecting an object (such as, tomatoes or pills) through a YOLO model is shown. As will be appreciated, YOLOv4 is a state-of-the-art object detection method that evolved from YOLOv3 and YOLOv2. Unlike Faster R-CNN, YOLOv4 is a single-stage detector that formulates a detection problem as a regression problem.
[057] The input image of the YOLO model may be divided into an S×S grid 602, and detections may be made in each grid cell. It should be noted that each grid cell may predict ‘B’ bounding boxes along with the confidence of the boxes. For example, a confidence score may reflect the presence of the pill in the grid cell. When there is no pill in the grid cell, the confidence score may be 0, and when there is a pill in the grid cell, the confidence score may be equal to the Intersection over Union (IoU) between the predicted box and the ground truth (GT). The confidence may be evaluated using equation (1), given below:
Confidence = Pr(Object) × IoU(GT, pred),     (1)
where Pr(Object) ∈ [0, 1].
[058] In FIG. 6B, a network architecture of a YOLOv3 model for pill detection is shown. Based on the YOLO model, a dense architecture 604 may be incorporated for better feature reuse and representation. Furthermore, a Circular Bounding Box (C-Bbox) 606 may match the shape of a pill, consequently making localization more precise. Moreover, the C-Bbox 606 may derive a more accurate IoU between the predictions, which may play an important role in the NMS process and thus improve the detection results. In an embodiment, the total training time for the YOLO model is about 10 h to about 15 h. The confidence score of the YOLO model may reach up to about 95.99%. In FIG. 6C, a network architecture of a YOLOv4 model for pill detection is shown.
[059] Referring now to FIG. 7, counting of an object (such as, pills) in an image through an AI model (such as, the AI model 110a) is illustrated, in accordance with some embodiments of the present disclosure. The image may include a pill on a palm of the user. The AI model 110a may detect the object (i.e., the pill) and enclose the detected object with a bounding box. Further, the AI model 110a may display the number of objects detected (i.e., 1) in the top left corner of the image. By way of an example, the AI model may be YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny, or the like. Commands for performing object detection may be run using TensorFlow 2.0, TensorFlow Lite, or TensorRT models on images, video, and a webcam. Two custom functions may be created with YOLOv4. The first function is for counting objects within images and videos, and the second function is a custom flag to show detailed information on YOLOv4 detections (class, confidence score, and bounding box coordinates).
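A minimal sketch of the counting step is shown below; the (label, confidence, box) detection structure is an assumption made for illustration and does not correspond to the output format of any particular YOLOv4 implementation.

from collections import Counter

def count_objects(detections, min_confidence=0.5):
    """Count detections per class above a confidence threshold."""
    return Counter(
        label for label, confidence, _box in detections
        if confidence >= min_confidence
    )

sample = [
    ("pill", 0.97, (120, 80, 180, 140)),
    ("pill", 0.42, (300, 60, 340, 100)),   # below threshold, not counted
    ("hand", 0.88, (40, 20, 400, 380)),
]
print(count_objects(sample))  # Counter({'pill': 1, 'hand': 1})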
[060] YOLO-style Darknet weights may be converted into saved TensorFlow models, and the saved models may then be executed. Further, simple command line flags may be used to enable custom functions such as object counting. It may be noted that YOLOv4, as a TensorFlow Lite model, is lightweight, which makes YOLOv4 well suited for mobile and edge devices such as a Raspberry Pi. Additionally, with a GPU, running YOLOv4 with TensorFlow TensorRT may increase performance by up to about 8 times.
[061] In some embodiments, the AI models may be trained from scratch. However, a large amount of data (images) is needed to train the AI models from scratch. Thus, in some other embodiments, pre-trained AI models may be fine-tuned (fine-tuning of weights). The pre-trained model may be used to construct a 128-d embedding for each of the 218 faces in the dataset. During classification, a simple k-NN model with voting may be used to make the final face classification.
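A minimal sketch of such k-NN classification over 128-d embeddings using scikit-learn is shown below; the random placeholder embeddings stand in for real face encodings.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(10, 128))   # 10 enrolled face embeddings
train_labels = ["person_1"] * 5 + ["person_2"] * 5

knn = KNeighborsClassifier(n_neighbors=3)        # simple majority vote over neighbours
knn.fit(train_embeddings, train_labels)

query = rng.normal(size=(1, 128))                # embedding of a new frame
print(knn.predict(query))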
[062] In the present disclosure, Dlib may be used for face landmark detection, and YOLOv3 may be used for pill detection. A comparison may be performed among various models, including SSD, CNN, MobileNetV3, and YOLO. Tiny-YOLO is a lightweight implementation which may be used as an alternative structure to YOLOv2 or YOLOv3 in scenarios where the demand for precision is not high. Its detection speed is fast; however, on non-GPU based devices, Tiny-YOLO still encounters difficulty in meeting the requirements of real-time detection. The YOLO-LITE model is a lightweight version of YOLOv3, which may be faster than Tiny-YOLOv3 but with a lower average precision.
[063] A large number of model parameters, heavy computation, high requirements on device performance, and slow inference speed make it difficult to migrate complex networks to embedded devices. Although lightweight networks such as MobileNet and YOLO-LITE have greatly improved detection speed, their accuracy is comparatively lower.
[064] Thus, the present disclosure may overcome the drawbacks of the traditional systems discussed above. The disclosed system and method for improving medication adherence may help in keeping a patient healthier and in decreasing hospitalization rates, thereby decreasing cost. The disclosed system rewards the patient based on earned points, which helps the patient earn money and may therefore automatically decrease the treatment cost. The disclosed system may be used in public health to monitor patients undergoing treatment for tuberculosis, where insurance agencies may be able to track patient medication to keep them healthy.
[065] It will be appreciated that, for clarity purposes, the above description has described embodiments of the invention with reference to different functional units and processors. However, it will be apparent that any suitable distribution of functionality between different functional units, processors or domains may be used without detracting from the invention. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controller. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organization.
[066] Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the invention.
[067] Furthermore, although individually listed, a plurality of means, elements or process steps may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather the feature may be equally applicable to other claim categories, as appropriate.

Documents

Application Documents

# Name Date
1 202121057345-STATEMENT OF UNDERTAKING (FORM 3) [09-12-2021(online)].pdf 2021-12-09
2 202121057345-STATEMENT OF UNDERTAKING (FORM 3) [09-12-2021(online)]-1.pdf 2021-12-09
3 202121057345-PROOF OF RIGHT [09-12-2021(online)].pdf 2021-12-09
4 202121057345-POWER OF AUTHORITY [09-12-2021(online)].pdf 2021-12-09
5 202121057345-FORM FOR STARTUP [09-12-2021(online)].pdf 2021-12-09
6 202121057345-FORM FOR SMALL ENTITY(FORM-28) [09-12-2021(online)].pdf 2021-12-09
7 202121057345-FORM 1 [09-12-2021(online)].pdf 2021-12-09
8 202121057345-FIGURE OF ABSTRACT [09-12-2021(online)].jpg 2021-12-09
9 202121057345-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-12-2021(online)].pdf 2021-12-09
10 202121057345-EVIDENCE FOR REGISTRATION UNDER SSI [09-12-2021(online)].pdf 2021-12-09
11 202121057345-DRAWINGS [09-12-2021(online)].pdf 2021-12-09
12 202121057345-DECLARATION OF INVENTORSHIP (FORM 5) [09-12-2021(online)].pdf 2021-12-09
13 202121057345-COMPLETE SPECIFICATION [09-12-2021(online)].pdf 2021-12-09
14 Abstract1.jpg 2022-04-01
15 202121057345-FORM-9 [29-07-2022(online)].pdf 2022-07-29
16 202121057345-STARTUP [02-08-2022(online)].pdf 2022-08-02
17 202121057345-FORM28 [02-08-2022(online)].pdf 2022-08-02
18 202121057345-FORM 18A [02-08-2022(online)].pdf 2022-08-02
19 202121057345-FER.pdf 2022-09-01
20 202121057345-FORM 4(ii) [27-02-2023(online)].pdf 2023-02-27
21 202121057345-RELEVANT DOCUMENTS [18-03-2023(online)].pdf 2023-03-18
22 202121057345-POA [18-03-2023(online)].pdf 2023-03-18
23 202121057345-MARKED COPIES OF AMENDEMENTS [18-03-2023(online)].pdf 2023-03-18
24 202121057345-FORM 13 [18-03-2023(online)].pdf 2023-03-18
25 202121057345-AMENDED DOCUMENTS [18-03-2023(online)].pdf 2023-03-18
26 202121057345-FORM 3 [29-03-2023(online)].pdf 2023-03-29
27 202121057345-FER_SER_REPLY [29-03-2023(online)].pdf 2023-03-29
28 202121057345-DRAWING [29-03-2023(online)].pdf 2023-03-29
29 202121057345-CORRESPONDENCE [29-03-2023(online)].pdf 2023-03-29
30 202121057345-COMPLETE SPECIFICATION [29-03-2023(online)].pdf 2023-03-29
31 202121057345-CLAIMS [29-03-2023(online)].pdf 2023-03-29
32 202121057345-US(14)-HearingNotice-(HearingDate-19-03-2024).pdf 2024-02-19
33 202121057345-Correspondence to notify the Controller [14-03-2024(online)].pdf 2024-03-14

Search Strategy

1 202121057345E_31-08-2022.pdf