
System And Method For Detection Of Drug Abuse Using Retinal Imaging

Abstract: The present disclosure provides a system and method for detecting the presence of weed in a person using retinal imaging. The system includes an image acquisition unit 102 to capture a retinal image of a person, and a computing unit 110 to receive the retinal image, extract one or more retinal attributes from it, generate an electronic signature, and compare the generated electronic signature with pre-existing electronic signatures associated with a set of retinal images of weed-consuming persons. The system uses a deep learning model 118 to detect the presence of weed.


Patent Information

Filing Date: 14 November 2019
Publication Number: 21/2021
Publication Type: INA
Invention Field: BIO-MEDICAL ENGINEERING
Email: info@khuranaandkhurana.com
Grant Date: 2023-12-14

Applicants

Chitkara Innovation Incubator Foundation
SCO: 160-161, Sector -9c, Madhya Marg, Chandigarh- 160009, India.

Inventors

1. GERA, Ashish
Third Year Student, Department of Computer Science and Engineering, Chitkara University, Chandigarh Patiala National Highway (NH-64), Village, Jansla, Rajpura, Punjab-140401, India.
2. AHUJA, Rakesh
Professor-CSE, Chitkara University, Chandigarh Patiala National Highway (NH-64), Village, Jansla, Rajpura, Punjab-140401, India.

Specification

The present disclosure relates to the field of drug abuse detection. More
particularly, the present disclosure relates to systems and methods for detection of drug abuse using retinal imaging.
BACKGROUND
[0002] Background description includes information that may be useful in
understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] In recent years, with the exponential spread of the menace of drug intake, overall societal health has deteriorated steadily. Drugs can have short- and long-term effects on the brain and disrupt the brain's communication pathways. These can influence mood, behaviour and other cognitive functions, which may lead to violent, intolerable and inappropriate behaviour. Brain damage may also occur through alcohol-induced nutritional deficiencies, alcohol-induced seizures and liver disease. In pregnant women, drug exposure can affect the brains of unborn babies, resulting in fetal alcohol spectrum disorders. The most severe health consequence of drug abuse is death.
[0004] Signs of drug addiction include changes in sleeping or eating habits, loss of interest in sex, neglect of personal hygiene and appearance, mood swings, a downward spiral in general attitude or not caring about the future, anger and irritability, mistreating others, sneaky behavior, lying or stealing, deteriorating relationships with family, friends or co-workers, problems at work or school, legal or money problems, loss of interest in activities one used to enjoy, and reluctance to introduce new friends to family members and old friends.
[0005] Eye infections fall into three specific categories based on their cause: viral, bacterial, or fungal, and each is treated differently. Common symptoms of eye infections include red eyes, eye pain, eye discharge, watery eyes, dry eyes, sensitivity to light, swollen eyes, swelling around the eyes, itching and blurry vision.
[0006] The conventional diagnostic methods are not sufficiently accurate, effective or reliable. There are many invasive methods to detect the presence of a drug in the body of a person, or an eye infection; however, they may introduce a new infection into the body of the person due to poor hygiene, whereas the non-invasive methods lack accuracy and are also time-consuming. Considerable time elapses in diagnosis once a sample is taken, and this delay makes the situation worse.
[0007] There is, therefore, a need for a system and method for detecting the presence of drugs, specifically weed, in the minimum possible time.
OBJECTS OF THE PRESENT DISCLOSURE
[0008] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0009] It is an object of the present disclosure to provide a system and method to detect drug abuse by a person.
[0010] It is another object of the present disclosure to provide a system and method to detect drug abuse by a person non-invasively and/or without contact, using retinal imaging.
[0011] It is another object of the present disclosure to provide a system and method to detect drug abuse by a person that is fast, reliable and cost-effective.
[0012] It is another object of the present disclosure to provide a system and method to detect drug abuse by a person that can help the government detect drug abusers efficiently and thereby lower the incidence of drug abuse.
SUMMARY
[0013] The present disclosure relates to the field of drug abuse detection. More
particularly, the present disclosure relates to systems and methods for detection of drug abuse using retinal imaging.
[0014] An aspect of the present disclosure pertains to a system for detecting drug
abuse by a person using retinal imaging, the system comprising: an image acquisition unit configured to capture a retinal image of the person; an image-processing unit operatively coupled to the image acquisition unit, the image-processing unit comprising one or more filters configured to remove noise from the captured retinal image to generate a filtered retinal image; and a computing unit operatively coupled with the image-processing unit, the computing unit comprising one or more processors configured with a deep learning model and coupled with a memory, the memory storing instructions executable by the one or more processors and configured to: receive the filtered retinal image from the image-processing unit; extract one or more retinal attributes from the filtered retinal image; evaluate a weighted

amalgamation of the one or more extracted retinal attributes; generate an electronic signature
based on the weighted amalgamation of the one or more extracted retinal attributes; and
compare the generated electronic signature with a dataset comprising a set of pre-existing
electronic signatures of the weighted amalgamation of one or more retinal attributes
associated with the filtered retinal image of eyes of the person that abused drugs; wherein the
computing unit is configured to generate a first set of signals based on the positive
comparison, the generated first set of signals is indicative of detection of the person that
abused drugs.
[0015] In an aspect, the one or more retinal attributes of the retinal images comprises
any or a combination of colour and shape of the optic nerve, texture of macula, colour and
shape of eye, colour and texture of retina, size and area of pupil, and size and area of blood
vessels.
[0016] In an aspect, the system is configured to generate a second set of signals based
on the negative comparison.
[0017] In an aspect, the system comprises a communication unit operatively coupled
to the computing unit to enable communicative coupling of the computing unit to a
computing device, and wherein the first set of signals, and the second set of signals are
processed by the computing unit so as to be represented on the computing device.
[0018] In an aspect, the system comprises a display unit operatively coupled with the computing unit, and wherein the display unit is configured to display any or a combination of the captured retinal image, the filtered retinal image, and a corresponding message based on whichever of the first set of signals and the second set of signals is received.
[0019] In an aspect, the corresponding message comprises an indication of positive drug abuse or negative drug abuse by the person.
[0020] In an aspect, the image-processing unit is configured to calculate a pixel
density of the captured retinal image, and wherein the computing unit is configured to extract
the one or more attributes from the captured retinal image when the calculated pixel density
is above a predefined threshold pixel density.
[0021] In an aspect, the system is configured to store the captured retinal image, the
electronic signatures of the weighted amalgamation of the one or more retinal attributes of the
captured retinal image, and results corresponding to the captured retinal images, to update a
training and testing data set for the deep learning model.

[0022] In an aspect, the image acquisition unit is selected from a group consisting of
an iris scanner, a fundus camera, a digital camera and a digital single-lens reflex (DSLR) camera.
[0023] According to another aspect, a method for detecting drug abuse by a person using a retinal image comprises the steps of: capturing, by an image acquisition unit, a retinal image of the person; removing, by an image processing unit, noise from the captured retinal image to generate a filtered retinal image; receiving, by one or more processors of a computing unit operatively coupled to the image processing unit, the filtered retinal image from the image-processing unit, wherein the computing unit comprises one or more processors with a deep learning model; extracting, by the one or more processors, one or more retinal attributes from the filtered retinal image; evaluating, by the one or more processors, a weighted amalgamation of the one or more extracted retinal attributes; generating, by the one or more processors, an electronic signature based on the weighted amalgamation of the one or more extracted retinal attributes; comparing, by the one or more processors, the generated electronic signature with a dataset comprising a set of pre-existing electronic signatures of the weighted amalgamation of one or more retinal attributes associated with the filtered retinal image of eyes of the person that abused drugs; and detecting, by the one or more processors, drug abuse by the person using the deep learning model, wherein, based on the positive comparison, a first set of signals is generated that is indicative of detection of the person that abused drugs.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings are included to provide a further understanding
of the present disclosure, and are incorporated in and constitute a part of this specification.
The drawings illustrate exemplary embodiments of the present disclosure and, together with
the description, serve to explain the principles of the present disclosure.
[0025] The diagrams are for illustration only, which thus is not a limitation of the
present disclosure, and wherein:
[0026] FIG. 1 illustrates an exemplary overall architecture of the proposed system for
detection of weed using retinal imaging to elaborate its working, in accordance with an
exemplary embodiment of the present disclosure.

[0027] FIG. 2 illustrates an exemplary flow diagram of the proposed method for
detection of weed using retinal imaging, in accordance with an exemplary embodiment of the present disclosure.
DETAILED DESCRIPTION
[0028] The following is a detailed description of embodiments of the disclosure
depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0029] Various terms as used herein are shown below. To the extent a term used in a
claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0030] In some embodiments, the numerical parameters set forth in the written
description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
[0031] As used in the description herein and throughout the claims that follow, the
meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0032] The recitation of ranges of values herein is merely intended to serve as a
shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by

context. The use of any and all examples, or exemplary language (e.g. "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[0033] Groupings of alternative elements or embodiments of the invention disclosed
herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all groups used in the appended claims.
[0034] The present disclosure relates to the field of drug abuse detection. More
particularly, the present disclosure relates to systems and methods for detection of drug abuse using retinal imaging.
[0035] An aspect of the present disclosure pertains to a system for detecting drug abuse by a person using retinal imaging, the system comprising: an image acquisition unit configured to capture a retinal image of the person; an image-processing unit operatively coupled to the image acquisition unit, the image-processing unit comprising one or more filters configured to remove noise from the captured retinal image to generate a filtered retinal image; and a computing unit operatively coupled with the image-processing unit, the computing unit comprising one or more processors configured with a deep learning model and coupled with a memory, the memory storing instructions executable by the one or more processors and configured to: receive the filtered retinal image from the image-processing unit; extract one or more retinal attributes from the filtered retinal image; evaluate a weighted amalgamation of the one or more extracted retinal attributes; generate an electronic signature based on the weighted amalgamation of the one or more extracted retinal attributes; and compare the generated electronic signature with a dataset comprising a set of pre-existing electronic signatures of the weighted amalgamation of one or more retinal attributes associated with the filtered retinal image of eyes of the person that abused drugs; wherein the computing unit is configured to generate a first set of signals based on the positive comparison, the generated first set of signals being indicative of detection of the person that abused drugs.

[0036] In an aspect, the one or more retinal attributes of the retinal images comprises
any or a combination of colour and shape of optic nerve, texture of macula, colour and shape
of eye, colour and texture of retina, size and area of pupil, and size and area of blood vessels.
[0037] In an aspect, the system is configured to generate a second set of signals based
on the negative comparison.
[0038] In an aspect, the system comprises a communication unit operatively coupled
to the computing unit to enable communicative coupling of the computing unit to a
computing device, and wherein the first set of signals, and the second set of signals are
processed by the computing unit so as to be represented on the computing device.
[0039] In an aspect, the system comprises a display unit operatively coupled with the computing unit, and wherein the display unit is configured to display any or a combination of the captured retinal image, the filtered retinal image, and a corresponding message based on whichever of the first set of signals and the second set of signals is received.
[0040] In an aspect, the corresponding message comprises an indication of positive drug abuse or negative drug abuse by the person.
[0041] In an aspect, the image-processing unit is configured to calculate a pixel
density of the captured retinal image, and wherein the computing unit is configured to extract
the one or more attributes from the captured retinal image when the calculated pixel density
is above a predefined threshold pixel density.
[0042] In an aspect, the system is configured to store the captured retinal image, the
electronic signatures of the weighted amalgamation of the one or more retinal attributes of the
captured retinal image, and results corresponding to the captured retinal images, to update a
training and testing data set for the deep learning model.
[0043] In an aspect, the image acquisition unit is selected from a group consisting of
an iris scanner, a fundus camera, a digital camera and a digital single-lens reflex
(DSLR) camera.
[0044] Another aspect of the present disclosure pertains to a method for detecting drug abuse by a person using a retinal image, the method comprising the steps of: capturing, by an image
acquisition unit, a retinal image of the person; removing, by an image processing unit, noise
from the captured retinal image to generate a filtered retinal image; receiving, by one or more
processors of a computing unit operatively coupled to the image processing unit, the filtered
retinal image from the image-processing unit, wherein the computing unit comprises one or
more processors with a deep learning model; extracting, by the one or more processors, one
or more retinal attributes from the filtered retinal image; evaluating, by the one or more

processors, a weighted amalgamation of the one or more extracted retinal attributes; generating, by the one or more processors, an electronic signature based on the weighted amalgamation of the one or more extracted retinal attributes; comparing, by the one or more processors, the generated electronic signature with a dataset comprising a set of pre-existing electronic signatures of the weighted amalgamation of one or more retinal attributes associated with the filtered retinal image of eyes of the person that abused drugs; and detecting, by the one or more processors, drug abuse by the person using the deep learning model, wherein, based on the positive comparison, a first set of signals is generated that is indicative of detection of the person that abused drugs.
[0045] FIG. 1 illustrates an exemplary overall architecture of the proposed system for
detection of weed using retinal imaging to elaborate its working, in accordance with an exemplary embodiment of the present disclosure.
[0046] As illustrated, in an aspect, the proposed system can include an image acquisition unit 102 configured to capture a retinal image of a person. The image acquisition unit 102 can be an iris scanner, a fundus camera, and/or any device that captures retinal images of an eye, but is not limited to these.
[0047] In an embodiment, the system can include an image processing unit 104
operatively coupled to the image acquisition unit 102. The image processing unit 104 can include one or more filters and can be configured to remove noise from the captured retinal image.
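The disclosure does not specify which filters the image processing unit 104 applies. As a hedged sketch, a simple 3x3 median filter (a common choice for suppressing salt-and-pepper noise) could serve as one such filter; the function name and kernel size below are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def median_filter(image, k=3):
    """Suppress salt-and-pepper noise with a k x k median filter --
    one possible filter the image processing unit 104 could apply."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Example: a flat 5x5 patch with one noisy pixel is smoothed out.
patch = np.full((5, 5), 100, dtype=np.uint8)
patch[2, 2] = 255  # simulated salt noise
filtered = median_filter(patch)
```

In practice, a library routine such as OpenCV's `medianBlur` would typically replace this explicit loop.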
[0048] In an embodiment, the system can include a computing unit 110 operatively
coupled to the image-processing unit 104 and configured to receive the filtered retinal image.
The computing unit can include one or more processors configured with a deep learning
model 118 to process the filtered retinal image to detect a presence of weed in the person.
[0049] In an embodiment, the one or more processor(s) 112 can be implemented as
one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 112 can be configured to fetch and execute computer-readable instructions stored in a memory 114 of the computing unit. The memory 114 can store one or more computer-readable instructions or routines, which can be fetched and executed to create or share the data units over a network service. The memory 114 can be any non-transitory storage device including, for example,

volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0050] The computing unit 110 can include an interface(s) 116. The interface(s) 116 can include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 116 can facilitate communication of the computing unit with various devices coupled to the computing unit, such as an input unit and an output unit. The interface(s) 116 can also provide a communication pathway for one or more components of the computing unit and the proposed system 100. Examples of such components include, but are not limited to, the processing engine(s) 120 and a database 130.
[0051] The processing engine(s) 120 can be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 120. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 120 can be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 120 can include a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 120. In such examples, the processing engine(s) 120 can include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the computing unit 110 and the processing resource. In other examples, the processing engine(s) 120 can be implemented by electronic circuitry.
[0052] The database 130 can include data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 120.
[0053] In an embodiment, the processing engine 120 can include a retinal attribute
extraction engine 122, an electronic signature generation engine 124, a detection engine 126 and other engines 127.
[0054] In an embodiment, the retinal attribute extraction engine 122 can enable the one or more processors 112 to execute a set of instructions to extract one or more retinal attributes from the filtered retinal image. In an exemplary embodiment, the one or more retinal attributes of the retinal images comprise any or a combination of colour and shape of the optic nerve, texture of the macula, colour and shape of the eye, colour and texture of the retina, size and area of the pupil, and size and area of the blood vessels, but are not limited to these.
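The extraction step itself is not detailed in the disclosure. As a hedged illustration, one listed attribute, pupil size and area, could be approximated by counting dark pixels below an intensity threshold; the function name `pupil_area` and the threshold value are assumptions for illustration only:

```python
import numpy as np

def pupil_area(image, threshold=40):
    """Estimate the pupil's area as the count of dark pixels below an
    intensity threshold -- a hypothetical stand-in for one of the
    retinal attributes named in paragraph [0054]."""
    return int(np.sum(image < threshold))

# Synthetic retinal image: bright background with a dark 10x10 "pupil".
retina = np.full((64, 64), 200, dtype=np.uint8)
retina[20:30, 20:30] = 10
area = pupil_area(retina)
```

A real extraction engine would use segmentation models rather than a fixed threshold; this sketch only shows the attribute-to-number mapping.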
[0055] In an embodiment, the electronic signature generation engine 124 can enable
the one or more processors 112 to execute a set of instructions to evaluate a weighted
amalgamation of the one or more extracted retinal attributes. The weighted amalgamation of
one or more retinal attributes can be calculated by analyzing the one or more extracted retinal
attributes of the filtered retinal image, assigning specific values/weights to the extracted one
or more retinal attributes, and layering the one or more retinal attributes based on the
assigned weights to evaluate the weighted amalgamation.
[0056] In an embodiment, the electronic signature generation engine 124 can be
configured to generate an electronic signature based on the evaluated weighted
amalgamation.
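The disclosure leaves the weighting scheme and the signature format unspecified. The sketch below assumes hypothetical per-attribute weights and derives the electronic signature as a hash of the weighted feature vector; all names, weights, and the hashing choice are illustrative assumptions, not the claimed implementation:

```python
import hashlib
import numpy as np

# Hypothetical per-attribute weights; the disclosure assigns
# "specific values/weights" but does not state them.
WEIGHTS = {"pupil_area": 0.40, "vessel_area": 0.35, "macula_texture": 0.25}

def weighted_amalgamation(attributes):
    """Layer the extracted attributes by their assigned weights
    (paragraph [0055]) into a single feature vector."""
    return np.array([attributes[name] * w for name, w in WEIGHTS.items()])

def electronic_signature(attributes, decimals=4):
    """Derive a reproducible signature from the weighted amalgamation --
    here, a SHA-256 digest of the rounded feature vector."""
    vec = np.round(weighted_amalgamation(attributes), decimals)
    return hashlib.sha256(vec.tobytes()).hexdigest()

attrs = {"pupil_area": 1.2, "vessel_area": 0.8, "macula_texture": 0.5}
sig_a = electronic_signature(attrs)
sig_b = electronic_signature(dict(attrs))  # same attributes, same signature
```

Rounding before hashing makes the signature tolerant of tiny floating-point differences while keeping it deterministic.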
[0057] In an embodiment, a detection engine 126 can enable the one or more
processors to execute a set of instructions to compare the generated electronic signature with
pre-existing electronic signatures of the weighted amalgamation of one or more retinal
attributes associated with a set of retinal images of weed consumed eyes. Further, the deep
learning model 118 can enable detection of weed in the person when the generated electronic
signature matches the pre-existing electronic signatures of the weighted amalgamation of the
one or more retinal attributes associated with the set of retinal images of weed consumed
persons.
[0058] The retinal attributes of a weed-consuming person's eye can include a bloodshot appearance through the optic nerves, macula dryness, bloodshot eyes, retinal haemorrhage, shrinkage or expansion of the pupil, and dilation of the retinal blood vessels, but are not limited to these.
[0059] In an embodiment, the computing unit 110 can be configured to generate a
first set of signals upon detection of weed in the person, and a second set of signals when no
weed is detected in the person.
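The comparison and signalling logic of paragraphs [0057] and [0059] can be sketched as follows; the signature strings and signal labels are placeholders for illustration, not values from the disclosure:

```python
def compare_signature(generated, known_signatures):
    """Compare the generated electronic signature against the set of
    pre-existing signatures (paragraph [0057]) and emit a first-set
    signal on a positive comparison or a second-set signal otherwise
    (paragraph [0059])."""
    if generated in known_signatures:
        return "FIRST_SET_SIGNAL"   # positive comparison: weed detected
    return "SECOND_SET_SIGNAL"      # negative comparison: no weed detected

# Placeholder pre-existing signatures from the stored dataset.
known = {"a3f1", "9c2e"}
positive = compare_signature("a3f1", known)  # matching signature
negative = compare_signature("ffff", known)  # unmatched signature
```

A deployed system would likely use a similarity threshold from the deep learning model 118 rather than exact set membership; exact matching is the simplest reading of "matches the pre-existing electronic signatures".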
[0060] In an embodiment, the image processing unit can be configured to calculate a
pixel density of the captured retinal image, and configured to extract the one or more
attributes from the captured retinal image when the calculated pixel density is above a
predefined threshold pixel density, to improve the detection efficiency of the system.
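The disclosure does not define how the pixel density is calculated. One plausible reading, assumed here purely for illustration, is the fraction of non-zero pixels in the captured image, gating attribute extraction on image quality:

```python
import numpy as np

def passes_quality_gate(image, threshold_density=0.5):
    """Quality gate of paragraph [0060]: proceed to attribute extraction
    only when the calculated pixel density exceeds a predefined
    threshold. 'Pixel density' is interpreted here as the fraction of
    non-zero pixels -- an assumption, as the disclosure leaves it
    undefined."""
    density = np.count_nonzero(image) / image.size
    return density > threshold_density

good_capture = np.full((8, 8), 120, dtype=np.uint8)  # fully populated frame
bad_capture = np.zeros((8, 8), dtype=np.uint8)       # mostly empty frame
```

Rejecting low-density captures before extraction avoids wasting the deep learning model's inference on unusable frames.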
[0061] In an embodiment, the system can be configured to store the captured retinal image, the electronic signatures of the weighted amalgamation of the one or more retinal attributes of the captured retinal image, and results corresponding to the captured retinal images, to update a training and testing dataset for the deep learning model 118. The updated training and testing dataset can be stored in the system 100 for use in future weed detection processes.
[0062] In an embodiment, the system 100 can include a display unit 106 operatively coupled with the computing unit 110. The computing unit 110 can be configured to process the first set of signals and the second set of signals so as to be represented on the display unit 106. The display unit can be configured to display any or a combination of the captured retinal image, the filtered retinal image, and the presence or absence of weed in the person, but is not limited to these.
[0063] In an exemplary embodiment, the display unit 106 can be any or a combination of an LCD, an LED display, an OLED display, and a monochromic display, but is not limited to these.
[0064] In an embodiment, the system can include one or more communication units 108 operatively coupled to the computing unit 110 to communicatively couple the system 100 to one or more mobile computing devices, such as any or a combination of a smartphone, a computer, and a cloud-based server, but not limited to these.
[0065] In an embodiment, the computing unit 110 can be configured to transmit the
first set of signals, and the second set of signals to the one or more mobile computing
devices. The computing unit 110 can be configured to process the first set of signals, and the
second set of signals so as to be represented on the one or more mobile computing devices.
[0066] FIG. 2 illustrates an exemplary flow diagram of the proposed method for
detection of weed using retinal imaging, in accordance with an exemplary embodiment of the present disclosure.
[0067] As illustrated, the proposed method for detection of weed in a person can
include a step 202 of capturing, by an image acquisition unit, a retinal image of a person. The
image acquisition unit can be any or a combination of an iris scanner and a fundus camera.
[0068] In an embodiment, the method can include a step 204 of removing, by an
image processing unit, noise from the retinal image captured in the step 202. The image
processing unit can include one or more filters to remove noise from the captured image.
[0069] In an embodiment, the method can include a step 206 of extracting, by the
computing unit, one or more retinal attributes from the retinal image filtered in the step 204.
[0070] In an embodiment, the method can include a step 208 of evaluating, by the
computing unit, a weighted amalgamation of the one or more retinal attributes extracted in the step 206. The step 208 can include generation of an electronic signature based on the weighted amalgamation.

[0071] In an embodiment, the method can include a step 210 of comparing, by the
computing unit, the generated electronic signature of the step 208 with pre-existing electronic signatures of the weighted amalgamation of one or more retinal attributes associated with a set of retinal images of weed consumed persons.
[0072] In an embodiment, the method can include a step 212 of detecting, by the
computing unit, a presence of weed using the deep learning model when the generated electronic signature matches the pre-existing electronic signatures of the weighted amalgamation of the one or more retinal attributes associated with the set of retinal images of weed consumed persons.
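The flow of steps 202 through 212 above can be sketched as a simple pipeline; each stage callable below is a hypothetical stand-in for the corresponding unit, and the toy inputs are illustrative only:

```python
def detect_weed(raw_image, known_signatures, filter_fn, extract_fn, sign_fn):
    """Sketch of the FIG. 2 flow: filter (step 204), extract attributes
    (step 206), generate a signature (step 208), then compare against
    pre-existing signatures (steps 210-212). All callables are
    hypothetical placeholders for the units described above."""
    filtered = filter_fn(raw_image)       # step 204: noise removal
    attributes = extract_fn(filtered)     # step 206: attribute extraction
    signature = sign_fn(attributes)       # step 208: electronic signature
    return signature in known_signatures  # steps 210-212: compare & detect

# Toy run with trivial stand-ins for each stage.
result = detect_weed(
    raw_image=[3, 1, 2],
    known_signatures={"1-2-3"},
    filter_fn=sorted,                              # "filtering" = sorting
    extract_fn=lambda img: [str(v) for v in img],  # "attributes" = strings
    sign_fn="-".join,                              # "signature" = joined string
)
```

Structuring the pipeline around injected callables keeps each step independently replaceable, mirroring how the units of FIG. 1 are described as operatively coupled but separate.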
[0073] While the foregoing describes various embodiments of the invention, other
and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE INVENTION
[0074] The present disclosure provides a system and method to detect drug abuse by a
person.
[0075] The present disclosure provides a system and method to detect drug abuse by a person non-invasively and/or without contact, using retinal imaging.
[0076] The present disclosure provides a system and method to detect drug abuse by a person that is fast, reliable and cost-effective.
[0077] The present disclosure provides a system and method to detect drug abuse by a person that can help the government detect drug abusers efficiently and thereby lower the incidence of drug abuse.


We Claim:

1. A system for detecting drug abuse by a person using retinal imaging, the system
comprising:
an image acquisition unit configured to capture a retinal image of the person;
an image-processing unit operatively coupled to the image acquisition unit, the image-processing unit comprising one or more filters configured to remove noise from the captured retinal image to generate a filtered retinal image; and
a computing unit operatively coupled with the image-processing unit, the computing unit comprising one or more processors configured with a deep learning model and coupled with a memory, the memory storing instructions executable by the one or more processors and configured to:
receive the filtered retinal image from the image-processing unit;
extract one or more retinal attributes from the filtered retinal image;
evaluate a weighted amalgamation of the one or more extracted retinal attributes;
generate an electronic signature based on the weighted amalgamation of the one or more extracted retinal attributes; and
compare the generated electronic signature with a dataset comprising a set of pre-existing electronic signatures of the weighted amalgamation of one or more retinal attributes associated with filtered retinal images of eyes of persons that abused drugs;
wherein the computing unit is configured to generate a first set of signals based on a positive comparison, the generated first set of signals being indicative of detection of the person that abused drugs.
2. The system as claimed in claim 1, wherein the one or more retinal attributes of the retinal images comprise any or a combination of colour and shape of optic nerve, texture of macula, colour and shape of eye, colour and texture of retina, size and area of pupil, and size and area of blood vessels.
3. The system as claimed in claim 1, wherein the system is configured to generate a second set of signals based on the negative comparison.
4. The system as claimed in claim 3, wherein the system comprises a communication unit operatively coupled to the computing unit to enable communicative coupling of the computing unit to a computing device, and wherein the first set of signals, and the

second set of signals are processed by the computing unit so as to be represented on the computing device.
5. The system as claimed in claim 4, wherein the system comprises a display unit
operatively coupled with the computing unit, and wherein the display unit is configured
to display any or a combination of the captured retinal image, the filtered retinal
image, and a corresponding message based on any received one of the first set of signals
and the second set of signals.
6. The system as claimed in claim 5, wherein the corresponding message comprises an indication of positive drug abuse by the person or negative drug abuse by the person.
7. The system as claimed in claim 1, wherein the image-processing unit is configured to calculate a pixel density of the captured retinal image, and wherein the computing unit is configured to extract the one or more attributes from the captured retinal image when the calculated pixel density is above a predefined threshold pixel density.
8. The system as claimed in claim 1, wherein the system is configured to store the captured retinal image, the electronic signatures of the weighted amalgamation of the one or more retinal attributes of the captured retinal image, and results corresponding to the captured retinal images, to update a training and testing data set for the deep learning model.
9. The system as claimed in claim 1, wherein the image acquisition unit is selected from a group consisting of an iris scanner, a fundus camera, a digital camera and a digital single-lens reflex (DSLR) camera.
10. A method for detecting drug abuse by a person using retinal imaging, the method comprising the steps of:
capturing, by an image acquisition unit, a retinal image of the person;
removing, by an image processing unit, noise from the captured retinal image to generate a filtered retinal image;
receiving, by one or more processors of a computing unit operatively coupled to the image-processing unit, the filtered retinal image from the image-processing unit, wherein the one or more processors are configured with a deep learning model;
extracting, by the one or more processors, one or more retinal attributes from the filtered retinal image;
evaluating, by the one or more processors, a weighted amalgamation of the one or more extracted retinal attributes;

generating, by the one or more processors, an electronic signature based on the weighted amalgamation of the one or more extracted retinal attributes;
comparing, by the one or more processors, the generated electronic signature with a dataset comprising a set of pre-existing electronic signatures of the weighted amalgamation of one or more retinal attributes associated with filtered retinal images of eyes of persons that abused drugs;
detecting, by the one or more processors, drug abuse by the person by using the deep learning model, wherein, based on a positive comparison, a first set of signals is generated that is indicative of detection of the person that abused drugs.
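The pixel-density gate recited in claim 7 can be illustrated with a small sketch; expressing density as pixels per unit sensor area, and the specific parameter names and threshold values below, are assumptions for illustration only:

```python
def pixel_density(width_px, height_px, sensor_area_mm2):
    """Pixels per square millimetre of the captured retinal image (claim 7 gate)."""
    return (width_px * height_px) / sensor_area_mm2

def should_extract_attributes(width_px, height_px, sensor_area_mm2, threshold):
    """Proceed to attribute extraction only when the calculated pixel density
    is above the predefined threshold pixel density."""
    return pixel_density(width_px, height_px, sensor_area_mm2) > threshold
```

Gating on density before extraction avoids running the deep learning model on captures too coarse to resolve attributes such as vessel area or macula texture.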

Documents

Application Documents

# Name Date
1 201911046436-IntimationOfGrant14-12-2023.pdf 2023-12-14
2 201911046436-STATEMENT OF UNDERTAKING (FORM 3) [14-11-2019(online)].pdf 2019-11-14
3 201911046436-FORM FOR STARTUP [14-11-2019(online)].pdf 2019-11-14
4 201911046436-PatentCertificate14-12-2023.pdf 2023-12-14
5 201911046436-FORM FOR SMALL ENTITY(FORM-28) [14-11-2019(online)].pdf 2019-11-14
6 201911046436-CLAIMS [25-11-2022(online)].pdf 2022-11-25
7 201911046436-FORM 1 [14-11-2019(online)].pdf 2019-11-14
8 201911046436-COMPLETE SPECIFICATION [25-11-2022(online)].pdf 2022-11-25
9 201911046436-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-11-2019(online)].pdf 2019-11-14
10 201911046436-DRAWING [25-11-2022(online)].pdf 2022-11-25
11 201911046436-FER_SER_REPLY [25-11-2022(online)].pdf 2022-11-25
12 201911046436-EVIDENCE FOR REGISTRATION UNDER SSI [14-11-2019(online)].pdf 2019-11-14
13 201911046436-FER.pdf 2022-05-30
14 201911046436-DRAWINGS [14-11-2019(online)].pdf 2019-11-14
15 201911046436-FORM 18 [01-09-2021(online)].pdf 2021-09-01
16 201911046436-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2019(online)].pdf 2019-11-14
17 201911046436-COMPLETE SPECIFICATION [14-11-2019(online)].pdf 2019-11-14
18 201911046436-FORM-26 [06-12-2019(online)].pdf 2019-12-06
19 201911046436-Proof of Right (MANDATORY) [06-12-2019(online)].pdf 2019-12-06
20 abstract.jpg 2019-11-15

Search Strategy

1 201911046436E_27-05-2022.pdf

ERegister / Renewals