ABSTRACT
“A System for Automatic Screening, Prediction and Detection of Glaucoma and a Method thereof”
The present invention discloses a system for automatic screening, prediction and detection of glaucoma and a method thereof. The invention provides an automatic Computer Aided Detection (CADe) system and method that uses deep learning to generate a Gaze exploration index for the detection of glaucoma. The detection of visual field loss in glaucoma patients while performing different day-to-day activities is done using exploratory eye gaze data. The system incorporates an eye-tracking system to identify eye movements of patients while performing different activities depicted as visual exploration tasks. The components of the system are data acquisition, feature analysis and clinical validation, model creation, explainability of the model and features, visualization, and the screening index. The CADe system is cost effective, portable, and reliable, enabling professionals at primary health care facilities to detect glaucoma early, resulting in timely medical interventions. Figure 1
FIELD OF THE INVENTION
The present invention relates to a system for automatic screening, prediction and detection of glaucoma and a method thereof. More particularly, the present invention discloses an explainable computer aided detection system that uses deep learning to generate a Gaze exploration index, based on eye movement analytics applied to eye-tracker data acquired in response to visual exploration tasks, for the screening, prediction, and detection of glaucoma to assist healthcare professionals.
BACKGROUND OF THE INVENTION
Glaucoma is a medical condition caused by an increase in intraocular pressure due to fluid buildup in the front part of the eye. This results in optic nerve damage and blind spots developing over time, which may go unnoticed until significant nerve fiber loss has occurred. If left untreated, glaucoma can eventually lead to complete blindness.
Several studies have shown that individuals with severe glaucoma exhibit a loss in their ability to execute tasks that require visual function, such as reading newspapers, climbing stairs, searching for objects, communicating with others, engaging in leisure activities, adapting to darkness, and other outdoor activities, resulting in heavy treatment expenses, loss of employment, and reduced productivity.
A number of patent and non-patent documents have been published in this domain. A non-patent literature by S. Dubey, H. Bedi, M. Bedi, P. Matah, J. Sahu, S. Mukherjee, and L. Chauhan, titled “Impact of visual impairment on the wellbeing and functional disability of patients with glaucoma in India”, published in J. Current Ophthalmol., 2019, describes visual impairment or vision loss as the decreased ability to see, which is not correctable using glasses or lenses and leads to difficulties with day-to-day activities. Globally, glaucoma is the second leading cause of blindness after cataract and needs early detection and diagnosis.
According to the World Health Organization (WHO), there are 62 million visually impaired people. As per a publication by E. W. Chan, X. Li, Y.-C. Tham, J. Liao, T. Y. Wong, T. Aung, and C.-Y. Cheng, “Glaucoma in Asia: Regional prevalence variations and future projections,” in Brit. J. Ophthalmol., 2016, more than 90 percent of glaucoma cases remain undiagnosed, in contrast to 40-60 % in developed countries. This is because most patients experience no early symptoms or pain, and hence the disease goes unnoticed at the initial stages. Hence, regular examination and screening are necessary to detect glaucoma at an initial stage.
As per a non-patent literature by R. Krishnan, V. Sekhar, J. Sidharth, S. Gautham, and G. Gopakumar, titled “Glaucoma detection from retinal fundus images”, published in Proc. Int. Conf. Commun. Signal Process. (ICCSP), Jul. 2020, detection is traditionally carried out using an existing pipeline in which segmentation of the optic disc and cup is performed first, followed by cup-to-disc ratio (CDR) calculation, based on which a prediction is made. This approach relies only on structural analysis of the eyes using fundus images and does not address functional deficit analysis. Besides, cup segmentation is a challenging problem entailing high computational requirements. Hence, the traditional methods have the drawback of being computationally demanding and less effective.
Numerous patents and non-patent literature exist that aim to improve traditional screening methods for detecting glaucoma. In another non-patent literature by Y.S. Kim, M.Y. Yi, Y.J. Hong and K.H. Park, titled “The impact of visual symptoms on the quality of life of patients with early to moderate glaucoma”, published in International Ophthalmology, 2018, it was found that clinical testing for glaucoma is typically carried out in laboratory settings where individuals with glaucoma do not significantly differ from healthy individuals. Visual field perimetry, performed in laboratory settings, detects loss in the central and peripheral field of vision. Even slight vision impairment due to this disease can cause difficulties for patients when performing daily activities. This highlights the need for further research into alternative testing methods that better reflect real-world scenarios faced by those living with glaucoma. Besides, clinical tests to screen for glaucoma are not available in primary health care centers; they also need expensive instruments and devices besides expertise in diagnosis. In other words, eye care testing and diagnostic services in larger centers are under-utilized due to limited accessibility, affordability, and availability of services.
Researchers have investigated visual functional deficits among glaucoma patients, discovering variations in eye movement patterns while performing various visual exploration tasks like reading, face recognition, watching TV/video, driving, walking, and shopping. As per the findings of a visual search experiment published by E. Wiecek, L. R. Pasquale, J. Fiser, S. Dakin and P. J. Bex, titled “Effects of peripheral visual field loss on eye movements during visual search”, 2012, saccadic eye movements, including their number per trial, amplitude, size, and fixation duration, do not appear to correlate with peripheral visual field loss. During visual search, glaucoma patients compensated for visual field loss by shifting their saccades in a different direction. Therefore, the approach in the state-of-the-art methods of analyzing visual impairment due to glaucoma by using only saccadic gaze parameters, without considering exploratory eye gaze parameters, does not give an accurate and effective method of detection of glaucoma.
In another non-patent literature by D. P. Crabb, N.D. Smith and H. Zhu, titled “What’s on TV? Detecting age-related neurodegenerative eye disease using eye movement scanpaths”, in Frontiers Aging Neurosci., 2014, saccadic movements that land within the visual field are considered to be indicative of vision loss, which may help distinguish between normal visual fields and those affected by glaucoma. This study found that saccadic map features and field loss were not linked or correlated with each other, thus indicating a limitation in determining the extent of vision loss solely based on these characteristics. This is a sizable drawback and implies that further research is needed to determine the relationship between clinical measurements and eye gaze characteristics in order to enable early screening for this disease.
In another study by K. Sippel, E. Kasneci, K. Aehling, M. Heister, W. Rosenstiel, U. Schiefer et al., titled “Binocular glaucomatous visual field loss and its impact on visual exploration - A supermarket study”, in PLoS ONE, 2014, it was found that age plays a significant role in eye movement patterns during driving scenarios among individuals with glaucoma. Eye movement scanning of young participants differs from that of older adults with glaucoma in driving scenes. The study also revealed that only a small percentage of young and senior glaucoma patients use compensatory strategies like increasing their head movements or saccades to improve vision function. This reiterates the need for further research into alternative testing methods that better reflect practical scenarios faced by those living with glaucoma across different age groups.
Certain studies have shown a correlation between clinical measures and eye gaze parameters among glaucoma patients compared to normal participants based on the level of severity. A non-patent literature by M. C. C. Sousa, L. G. Biteli, S. Dorairaj, J. S. Maslin, M. T. Leite, and T. S. Prata, titled “Suitability of the visual field index according to glaucoma severity”, published in J. Current Glaucoma Pract., 2015, attempts to correlate standard automated perimetry (SAP) test results with the visual field index for the evaluation of glaucoma patients with various levels of severity. However, this uses a laboratory testing method and has its limitations. Hence, further research is needed to better understand the influence of varying degrees of severity in visual field loss on eye movement behavior, with more accurate screening tools, treatment methods, and strategies that reflect practical problems faced by those living with glaucoma across different stages of severity.
A patent document US-2018018564-W, titled “Eye examination device, system and method”, relates to eye examination devices, systems and methods for detecting disorders like glaucoma. The method involves an eye examination performed virtually or remotely using a device provided to the patient or user, facilitating examinations without requiring a visit to an examination clinic. The patent document discusses optical examination routines and does not include visual exploration tasks. In certain other prior arts, in computer aided detection that includes a method of diagnosis using computer-based tasks, glaucoma affected patients tend to ignore regions of visual field loss and do not focus on all parts of the scene. By ignoring certain areas, or by not including eye gaze exploration data, these computer-based tasks may not be capable of providing comprehensive insights into an individual's visual function, which could result in inadequate diagnosis or treatment strategies being developed. Another limitation of such CADe systems is that the quality of the data captured and extracted is not all-inclusive, as these systems normally incorporate low-cost eye trackers.
Prior research shows that the advancement of artificial intelligence-based tools in the field of eye care is quite promising, as seen in a non-patent publication by M. Sushil, G. Suguna, R. Lavanya, and M. N. Devi, titled “Performance comparison of pre-trained deep neural networks for automated glaucoma detection,” in Proc. Int. Conf. ISMAC Comput. Vis. Bio-Eng., Cham, Switzerland: Springer, 2018. This non-patent literature is centered on structure-based analysis of fundus images for glaucoma detection. Besides, it also acknowledges a significant drawback present in many such systems, namely the black-box nature of machine learning systems, which makes it difficult to understand how they arrive at their outputs or predictions. Another patent document IN-201941009943, titled “An eye-gaze system and a method for operating the same”, uses eye tracking for analyzing the eye movement patterns and the visual field using a deep learning neural network. This document also has a black-box approach wherein its machine learning computation and results are not explained or understood by the subject or the health care professionals. Therefore, these methods are less transparent and lack explanation regarding how they arrive at their conclusions or predictions.
Therefore, there is a need for a trustworthy automatic computer aided detection system that is cost effective, portable, easy for non-expert health care providers to use, and effective in screening, prediction and detection of glaucoma conditions in subjects.
Accordingly, the present invention discloses an explainable computer aided detection system that fuses task performance parameters and eye gaze parameters during visual exploration tasks onto images using deep learning, to guide health care professionals of primary eye care centers in automatic screening, prediction and detection of glaucoma in subjects.
OBJECT OF THE INVENTION
In order to overcome the shortcomings in the existing state of the art, the main object of the present invention is to provide an automatic computer aided detection system using deep learning to generate a Gaze exploration index for screening, prediction and detection of glaucoma in subjects.
Yet another object of the present invention is to provide a method for automatic computer aided detection using deep learning to generate a Gaze exploration index for screening, prediction and detection of glaucoma in subjects.
Yet another object of the present invention is to provide a Gaze exploration index to measure and categorize the results of the computer aided detection system and method to assist the primary health care providers to interpret the outcomes for an early screening, prediction and detection of glaucoma in subjects easily and accurately.
Yet another object of this invention is to provide a Gaze exploration index for the purpose of monitoring the variation of conditions or symptoms of glaucoma in the glaucoma patients, including before and after treatment or during different periods of time of a particular patient, in order to assess and understand the progression or regression of said patient's visual parameters.
Yet another object of the present invention is to provide a comprehensive and reliable method to screen, predict, and detect glaucoma in subjects by extracting and amalgamating exploratory eye gaze and task performance parameters from eye-tracker data in response to visual exploration tasks as stimuli and applying deep learning algorithms.
Yet another objective of this invention is to provide an Explainable trustworthy computer aided detection system for screening, prediction and detection of glaucoma in subjects achieving interpretability and easy understanding of the outcomes.
Yet another objective of this invention is to present the influence of age and severity on the performance in day to day activities of subjects affected by glaucoma by comparing their visual exploration responses.
Yet another object is to provide a cost effective, portable, accurate and reliable Computer Aided Detection (CADe) system that is easy to use by non-experts to screen, predict and detect glaucoma in subjects and assist healthcare professionals at primary health care facilities.
SUMMARY OF THE INVENTION
Accordingly, the present invention relates to an automatic Computer aided detection system and method using deep learning that generates a Gaze exploration index for screening, prediction and detection of glaucoma in subjects.
The present invention provides an automatic Computer Aided Detection (CADe) system that uses deep learning to screen visual field loss in glaucoma patients while performing different day-to-day activities such as searching for objects, viewing photographs, etc. The system of the present invention incorporates an eye-tracking system to identify eye movements of glaucoma patients while performing different activities. An eye tracking system comprises an eye tracker device or a remote eye tracking system. The different day-to-day activities are depicted in the form of visual exploration tasks. The CADe system of the invention fuses task performance parameters and eye gaze parameters during visual exploration tasks onto images, to guide health care professionals of primary eye care centers in glaucoma screening. The pertinent eye gaze and task performance parameters are visualized in the form of fusion maps, including but not limited to a Gaze Fusion Map (GFM), a Gaze Fusion Reaction Time (GFRT) map, a Gaze Convex Hull Map (GCHM), etc., that are the outcomes of different visual exploration tasks.
The present invention also discloses explainability techniques applied to the CADe system, which generates a useful index called the Gaze Exploration - index (GE-i) to easily and reliably discriminate glaucoma from normal, providing a detection system for early screening, prediction and detection of glaucoma in subjects. The explainable detection system includes a pretrained deep neural network and gaze exploration visualization. The explainability of the detection system is based on the contribution of different features in the generation of the Gaze Exploration - index (GE-i) and the visualization of exploration tasks of different subgroups of participating subjects. This technique facilitates achieving interpretability and easy comprehension of the outcomes.
This invention provides a Gaze exploration index for the purpose of screening, prediction and detection of glaucoma in subjects. This index also facilitates monitoring of the variation of conditions or symptoms of glaucoma in the glaucoma patients, including but not limited to before and after treatment or during different periods of time of a particular patient, in order to assess and understand the progression or regression of said patient's visual parameters such as visual exploration etc.
The invention provides for an explainable CADe system that addresses the limitation of the black-box approach of such systems by providing more transparency into the decision-making process behind this Artificial Intelligence (AI) powered system. This dimension of the system helps build trust among healthcare professionals and patients while promoting better understanding and explanation of results.
The system comprises modules including but not limited to visual exploration tasks, an estimation of Extensive Gaze and Performance (EXGP) module, EXGP feature analysis, an explainable detection system, etc. The method of the present invention comprises steps including but not limited to data acquisition, feature analysis and clinical validation, model creation, explainability, and visualization, which subsequently output the screening index. The explainability of the system is performed on a platform such as a dashboard in the form of various plots such as waterfall plots, contribution plots, etc. The platform or dashboard is also programmed to generate the Gaze Exploration-index (GE-i).
The present invention provides a method of detection of glaucoma that illustrates and incorporates the influence of age and severity on performance of subjects affected by glaucoma in their day to day activities. This was demonstrated by comparing the performance of glaucoma affected subjects during the visual exploration tasks and understanding whether compensatory eye movement patterns reflect in such different tasks.
Accordingly, the present invention provides a cost effective, portable, accurate and reliable Computer Aided Detection (CADe) system that is easy to use by non-experts for early screening, prediction and detection of glaucoma in subjects and assisting healthcare professionals at primary health care facilities.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 displays an overview of Gaze Exploration - index (GE-i) based explainable detection model.
Figure 2 displays flowchart of data collection.
Figure 3 displays a flow diagram of subgroup analysis.
Figure 4 displays samples of visual exploration tasks (V31, V32, V33): (a) V31- Simple Dot task, (b) V32- Visual search, (c) V33- Free viewing task.
Figure 5 displays histogram plot. L_MD and R_MD (left and right eye mean deviation), L_VFI and R_VFI (left and right eye visual field index), L_PSD and R_PSD (left and right eye pattern standard deviation).
Figure 6 displays workflow in explainable detection model module (D).
Figure 7 displays iterative improvement in accuracy of DNN model based on feature relevance.
Figure 8 displays feature relevance on detection model.
Figure 9 displays feature interaction between dot_average miss and star_fixation connection length.
Figure 10 displays feature interaction between fv_convex hull area and star_fixation connection length.
Figure 11 displays waterfall plot of feature relevance.
Figure 12 displays a comparison of fusion maps generated for different tasks.
Figure 13 displays box plot of GE-i value.
Figure 14(a) displays contribution plot of relevant features towards the prediction: sub_44 participant.
Figure 14(b) displays contribution plot of relevant features towards the prediction: sub_64 participant.
Figure 14(c) displays contribution plot of relevant features towards the prediction: sub_45 participant.
Figure 14(d) displays contribution plot of relevant features towards the prediction: sub_75 participant.
DETAILED DESCRIPTION OF THE INVENTION WITH ILLUSTRATIONS AND EXAMPLES
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of “a”, “an”, and “the” include plural references. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
The abbreviations used in the invention are represented in table 1 as below:
Table 1: Legend of abbreviations
S.no. Particulars Legend
1 Primary Open-Angle Glaucoma POAG
2 Computer aided detection system CADe
3 Artificial Intelligence AI
4 Human computer interaction system HCI
5 Gaze Fusion Map GFM
6 Gaze Fusion Reaction Time GFRT
7 Gaze Convex Hull Map GCHM
8 User interface UI
9 User experience UX
10 Saccade Count SC
11 Fixation Duration FD
12 Fixation Count FC
13 Fixation/Saccade ratio F/S ratio
14 Scanpath Length SL
Some of the technical terms used in the specification are elaborated as below:
Glaucoma- Glaucoma is a class of diseases that indicates irrevocable and continuous damage to the optic nerve. The aqueous outflow system of the eye is impaired, leading to an imbalance in aqueous production and drainage. The imbalance results in increased intraocular pressure that damages the optic nerve. The central vision may be clear, but a scotoma (blind spot) will appear in the peripheral vision; the vision loss increases from the peripheral vision towards the central vision, gradually leading to blindness if untreated. Three-fourths of reported glaucoma cases are Primary Open-Angle Glaucoma (POAG), a silent type of glaucoma that does not show visible symptoms in the early stages. Some of the glaucoma risk factors are high internal eye pressure, family history, and prolonged use of corticosteroids.
Scotoma- An area of partial alteration in the field of vision with partially diminished or entirely degenerated visual acuity which is surrounded by a field of normal vision.
Computer aided detection (CADe) system- Systems that assist doctors in the interpretation of medical images are called CADe systems. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information in the form of digital images or videos that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process such digital images or videos for typical appearances and highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional. These systems are usually confined to marking conspicuous structures and sections. Here in the specification, the computer aided detection system is also termed the computer aided detection model, explainable detection model, or Gaze exploration-index (GE-i) explainable detection model.
Explainability- The term explainability, also referred to as “interpretability”, is the concept that a machine learning model and its output can be explained to a human being at an acceptable level. AI algorithms such as deep learning algorithms are often perceived as black boxes making inexplicable decisions, as these systems, while being more performant, remain much harder to explain. In practical circumstances it becomes difficult to explain the outcome of a machine learning model to a business stakeholder, regulator, or customer. This lack of transparency can lead to significant losses if AI models are misunderstood and improperly applied, resulting in bad business decisions and leading to user distrust and refusal to use AI applications. Therefore, explainability is an aspect of machine learning systems that provides means and tools towards improving the ability to explain AI systems.
Human computer interaction (HCI) system– It is the study of how people interact with computers. User-centered design, User Interface (UI) and User Experience (UX) are combined with HCI to provide intuitive technology and products. A human computer interaction (HCI) system in this specification means a system that assists healthcare professionals to assess gaze exploration of glaucoma patients. The HCI of the present invention enhances clinical equipment by making the assessment of visual field loss portable and flexible for patients.
Visual field or Field of vision: This term refers to the entire area that can be seen when the eye is directed forward, including that which is seen with peripheral vision.
Visual field testing- A visual field test is an examination that may be performed to analyze a patient's visual field. The test may be performed by a technician directly, with the assistance of a machine, or completely by an automated machine. Machine based tests aid diagnostics by allowing a detailed printout of the patient's visual field.
Fovea- In the human eye the term fovea (or fovea centralis) is the "pit" in the retina that allows for maximum acuity of vision. The human fovea has a diameter of about 1.0 mm with a high concentration of cone photoreceptors.
Cupping (Disk cupping)- An enlargement of the cup or central depression in the optic nerve head. Cupping is visible when viewing the back of the eye with an ophthalmoscope. An enlarged cup especially if accompanied by a notch or a small spot of bleeding is a sign of glaucoma. Cupping is a clinical sign that indicates that a large number of nerve fibers in the optic nerve have been lost.
Gaze- The term refers to looking in one direction for a period of time. It also means the act or state of looking steadily in one direction.
Eye tracking system- An eye tracking system comprises an eye tracker device or a remote eye tracking system. An eye tracker is a device for measuring eye positions and eye movements. Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, in marketing, as an input device for human-computer interaction, and in product design. The process of eye tracking can also be carried out remotely.
Eye gaze tracking- Eye gaze tracking is the process of measuring and analyzing the movements of a person's eyes to determine where they are looking. It involves the use of sensors and algorithms to track the movement of the eyes and determine where the person is looking in relation to their environment. The data acquired during eye gaze tracking is called eye gaze data or samples. Gaze exploration is a term defined in the present invention meaning the data of eye gaze when the gaze is involved in exploring to look for or comprehend useful information in the environment or ongoing activities.
Visual search- Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature (the target) among other objects or features (the distractors). Visual search can take place with or without eye movements. It is the ability to consciously locate an object or target amongst a complex array of stimuli.
Visual exploration - Visual exploration refers to the active process of looking around for acquisition of information in the environment through coordinated movements of the eyes, head, and body. Observers or subjects survey the environment by shifting their gaze from one location to another (“scanning”) to gather visual information that supports ongoing activities. The predominant paradigm for measuring visual exploration is recording eye movements in observers or subjects who look at screens. Visual exploration is different from visual search in that it does not involve looking for specific information or objects. Gaze exploration is a measure or parameter that is part of the present invention.
Eye gaze parameters - These are parameters used to measure vision characteristics of eye gaze of a person through processes of eye tracking. Some parameters are called basic parameters that are obtained directly from testing processes such as eye tracking. Some of the basic eye gaze parameters are fixations, saccades etc. Some parameters are derived from the basic eye gaze parameters for further analysis and studies such as Fixation Count (FC), Saccade Count (SC), Saccade rate, Fixation Duration (FD), Fixation/Saccade ratio (F/S ratio), Saccade Velocity, Scanpath, Scanpath Length (SL), Saccadic Direction, Convex Hull Area etc.
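By way of a non-limiting illustration, the derivation of a few of these parameters from detected fixation and saccade events may be sketched in Python as below; the function name, the data layout (fixations as (x, y, duration) tuples) and the returned labels are illustrative assumptions and not mandated by this specification.

import math

def derived_gaze_parameters(fixations, saccade_count, trial_duration_s):
    """Illustrative computation of a few derived eye gaze parameters.

    fixations: list of (x, y, duration_ms) tuples for one trial.
    saccade_count: number of saccade events detected in the same trial.
    trial_duration_s: total trial duration in seconds.
    """
    fixation_count = len(fixations)
    mean_fixation_duration = (
        sum(d for _, _, d in fixations) / fixation_count if fixation_count else 0.0
    )
    # Scanpath length: sum of Euclidean distances between consecutive fixations.
    scanpath_length = sum(
        math.dist(fixations[i][:2], fixations[i + 1][:2])
        for i in range(fixation_count - 1)
    )
    fs_ratio = fixation_count / saccade_count if saccade_count else float("inf")
    saccade_rate = saccade_count / trial_duration_s if trial_duration_s else 0.0
    return {
        "FC": fixation_count,
        "FD_mean_ms": mean_fixation_duration,
        "SL_px": scanpath_length,
        "F/S": fs_ratio,
        "SC_rate_per_s": saccade_rate,
    }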
Fixation- Fixations are estimated as clusters of still eye movements or eye gaze samples or data over a time period. The number of fixations or gaze points that are directed towards a certain part of an image (relative to other parts) shows that more visual attention has been directed there.
Saccade- A saccade or saccadic eye movement is a rapid, conjugate, eye movement that shifts the center of gaze from one part of the visual field to another or from one fixation to another. Saccades are mainly used for orienting gaze towards an object of interest. Saccades may be horizontal, vertical, or oblique. They can be both voluntarily executed at will or involuntary and reflexive.
Performance parameters- The term refers to the performance of glaucoma subjects or other subjects during the visual exploration tasks that assist in research and studies about vision especially to understand whether compensatory eye movement patterns reflect in such different exploration tasks. Some examples are, monocular performance as hit or miss, binocular performance as reaction time, fixation connection length, convex hull area, saccade rate etc.
Visual exploration tasks- Visual exploration tasks are screen-based tasks that are designed to depict specific tasks such as searching for an object, watching T.V., and viewing photographs in daily life.
Monocular vision- Type of vision where an individual is reliant on only one eye for their vision. This may be due to the loss of vision in one eye due to a disease process, or as a result of a need to cover (occlude) one eye using a patch or similar to stop double vision (diplopia).
Binocular vision- Type of vision in which an animal/ human subject has two eyes capable of facing the same direction to perceive a single three-dimensional image of its surroundings. Binocular vision offers several advantages compared to monocular vision. The visual field is extended by having two eyes and the offset in the overlapping retinal images allows the brain to discriminate depth in the visual scene. Having two visual inputs of the same stimulus also allows for binocular summation which results in higher visual acuity, greater contrast sensitivity and faster processing speed of visual stimuli.
Fusion maps- The term is used when multiple images/visual output of a subject/patient are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality or by combining information from multiple modalities. The single fused image is more informative and accurate than any single source image, and it consists of all the necessary information. The purpose of image fusion is not only to reduce the amount of data but also to construct images that are more appropriate and understandable for human and machine perception. Fusion maps are created through this image fusion process.
Gaze Fusion Map (GFM)- The Gaze Fusion Map in the present invention was generated by fusing relevant information of at least 30 images. It is the outcome of monocular performance of different participants, obtained by fusing the ‘hit/miss’ results of 30 images. A dark spot represents that the target was ‘not seen’ and a red spot represents that the target was ‘seen’. The GFM in the invention highlighted the monocular ability of search performance.
Gaze Fusion Reaction Time (GFRT) map- The Gaze Fusion Reaction Time map in the present invention was generated by fusing relevant information or targets onto a single image. Two variables, hit/miss and reaction time during the visual search task, were overlaid onto the image. It is the outcome of binocular performance across 20 images. The GFRT visualization helped to understand the position of different targets, the hit/miss of the target, and the average reaction time. Reaction times and misses of different participants/subjects, visualized onto a single image, highlighted the difficult regions irrespective of exploratory gaze pattern.
Gaze Convex Hull Map (GCHM)- Gaze Convex Hull Map in the present invention was generated from the convex hull during the search tasks. It is the polygonal space that covers all the fixations of a participant. This is calculated at the end of a task for each participant.
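A minimal illustrative sketch of computing the convex hull area underlying the GCHM from the fixations of a participant is given below, assuming the fixations are available as (x, y) pixel coordinates and using the SciPy convex hull routine; the helper name and units are illustrative and not part of the specification.

import numpy as np
from scipy.spatial import ConvexHull

def gaze_convex_hull_area(fixation_points):
    """Area of the polygon covering all fixations of a participant for one task.

    fixation_points: iterable of (x, y) fixation coordinates in pixels.
    Returns the convex hull area in square pixels (0 if fewer than 3 points).
    """
    pts = np.asarray(list(fixation_points), dtype=float)
    if len(pts) < 3:
        return 0.0
    hull = ConvexHull(pts)
    # For 2-D input, ConvexHull.volume is the enclosed area
    # (ConvexHull.area would be the perimeter).
    return float(hull.volume)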
Subjects- A person or organism that is the object of research, treatment, experimentation, or dissection. Here in the description the term subjects include humans or animals who have been researched upon for screening, prediction and detection of medical conditions such as glaucoma. The term includes the patients who undergo tests or any screening for glaucoma.
Experimenter/health care person handling the participants/subjects- The persons who are assisting in conducting the experiments related to the working of the present invention. This term also includes the persons who are assisting in explaining to the participating subjects the modalities of the working of the system of present invention and helping in conducting the activities and collecting responses from the subjects participating.
The reference numerals used in the present invention are tabulated below in table 2.
Table 2: Legend of Reference numerals
Ser no. Item description Reference numerals
1 System S
2 Visual exploration task module V
Display screen V1
Eye tracker system/ eye tracker submodule V2
Visual exploration tasks submodule V3
Visual exploration task 1 V31
Visual exploration task 2 V32
Visual exploration task 3 V33
3 Extensive gaze and performance module (EXGP) E
Submodule to assess basic eye gaze parameters E1
Basic eye gaze measures/parameters E1P1,E1P2,E1P3,…
Submodule to assess eye gaze derived parameters E2
Eye gaze derived measures/parameters E2P1,E2P2,E2P3,…
Task performance measures submodule E3
Performance measures /parameters E3P1,E3P2,E3P3,…
EXGP Feature set (Output) submodule E4
4 EXGP feature analysis module F
Analysis of feature set submodule F1
Analysis based on severity(severity based analysis) F11
Analysis based on age(age based analysis) F12
Clinical validation submodule F2
Assessment of progression of Glaucoma in subjects submodule F3
Estimation of compensation for visual field loss submodule F4
5 Explainable detection model module D
Computation algorithm DA
Detection model/ Gaze exploration detection model DM
Explainable AI tool DE
Deep neural network D1
Dashboard D2
Explainability D21
Plot 1 D211
Plot 2 D212
Plot 3 D213
Gaze exploration (GE ) index D22
Glaucoma D221
Normal D222
Gaze exploration Visualization D3
Based on task 1(V31)- GFM Map D31
Based on task 2(V32)- GFRT Map D32
Based on task 3(V33)- GCHM Map D33
The present invention discloses a system (S) for automatic screening, prediction and detection of glaucoma and a method thereof. The present invention provides an automatic computer aided detection (CADe) system also termed as Gaze exploration-index (GE-i) explainable detection model hereafter that uses deep learning to screen visual field loss in glaucoma patients evaluated while they perform different day-to-day activities such as searching objects, viewing photographs, etc. depicted as visual exploration tasks (V3). The CADe system fuses performance parameters and eye gaze parameters during visual exploration tasks (V3) onto images, to guide health care professionals of primary eye care centers in glaucoma screening.
The method of the present invention comprises the steps of data acquisition, feature analysis and clinical validation, detection model creation, explainability with visualization, and further providing the final output such as but not limited to the screening index. The components in the AGE-i system are data acquisition, feature analysis and clinical validation, model creation, explainability of the model and features, visualization, and the screening index. The detection model, apart from generating the gaze exploration screening index, also creates fusion maps based on the information derived from visual exploration task performance and eye movement behavior. The explainability techniques applied in the CADe model help to comprehend the relevant features that contribute to the prediction of glaucoma in subjects or patients.
The development of the present invention involved human subjects or animals during the research stage, and all the experimental work was done following the necessary protocols. The approval of all ethical and experimental procedures and protocols was obtained from Narayana Nethralaya, Narayana Health City, Bengaluru.
The system (S) and method for automatic screening, prediction and detection of glaucoma is as described below.
As per an embodiment of the invention, the overall architecture of the system (S) of the present invention, also termed hereafter as the Gaze Exploration-index (GE-i) Explainable Detection Model, is shown in figure 1. The system (S) comprises at least four main modules, namely a visual exploration tasks module (V), an extensive gaze and performance (EXGP) module (E), an EXGP feature analysis module (F), an explainable detection model module (D), etc. The various modules are elaborated upon in the following paragraphs.
The visual exploration tasks module (V) is configured to capture vision exploration data to understand the visual functional skills of subjects during daily activities. The daily activities are depicted in the form of visual exploration tasks (V3). The data captured is based on at least three daily activities or day-to-day tasks that are depicted as visual exploration tasks: a simple dot (T1) task, herein called visual exploration task 1 (V31), a visual search (T2) task, herein called visual exploration task 2 (V32), and a free-viewing (T3) task, herein called visual exploration task 3 (V33). The system first presents visual exploration task 1 (V31), the simple dot task (T1), which analyzes the performance of each eye (monocular vision) and identifies the more defected eye and the less defected eye of the subjects. It subsequently checks the contribution of binocular vision by presenting visual exploration task 2 (V32), the visual search (T2) task, and visual exploration task 3 (V33), the free viewing (T3) task, to assess the visual performance of subjects during day-to-day tasks such as visual search and image viewing and how they compensate for their visual field loss. There can be similar visual exploration tasks besides V31, V32, V33 that depict or simulate day-to-day real life activities and that can be utilized as part of the visual exploration module (V) to capture vision exploration data to understand the visual functional skills of subjects.
As per an embodiment of the invention, the system includes at least one estimation of extensive gaze and performance (EXGP) module (E) that comprises at least one submodule to assess basic eye gaze parameters (E1) and at least one submodule to assess eye gaze derived parameters (E2). Either open source or closed source software (E1) may be utilized to evaluate the basic eye gaze parameters or measures (E1P1, …, E1Pn). A customized software (E2) is utilized to evaluate the eye gaze derived parameters (E2P1, …, E2Pn). Based on these eye gaze parameters or measures, which include basic eye gaze parameters and eye gaze derived parameters, the extensive gaze and performance (EXGP) module (E) estimates the values for the submodule of task performance (E3). Thus, the extensive gaze and performance (EXGP) module is capable of estimating 20-45 parameters, for example 28 parameters, that include eye gaze parameters (E1P1, …, E1Pn) and performance parameters (E3P1, …, E3Pn) of different participating subjects after viewing the visual exploration tasks (V3). The output of the extensive gaze and performance (EXGP) module is depicted by the EXGP feature set submodule (E4).
The EXGP feature analysis module (F) is the next module of the invention, wherein the EXGP feature set (E4) of the previous module is analyzed based on severity grade (F11) and age (F12). The analysis of the feature set (F1) is subjected to clinical validation (F2) based on certain clinical measures. The feature analysis was based on statistical measures. The module also includes a submodule for assessment of the progression (F3) of glaucoma in subjects already diagnosed with glaucoma. The module includes another submodule for estimation of the compensation (F4) exhibited by the subjects for visual field loss.
As per an embodiment of the present invention, in the next module, namely the explainable detection model module (D), the EXGP feature set (E4) is fed to at least one pretrained deep neural network (D1) that is part of the explainable detection or Computer-Aided Detection (CADe) module (D). The deep neural network (D1) predicts whether unseen data from the visual exploration tasks corresponds to glaucoma or not. The accuracy of the model is based on the feature relevance.
The DNN (D1) was pretrained using a sample dataset that included 98 cases, wherein 67% of the dataset was set as the training dataset and 33% was set as the testing dataset. The Sequential Deep Neural Network (DNN) model fits the training dataset, and the accuracy score on the testing dataset is recorded after every iteration of feature relevance. The final list of five relevant features predicted the unseen samples and improved the accuracy score of the model to 0.83.
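A minimal, non-limiting sketch of such pretraining in Python with Keras is given below; the layer sizes, optimizer and training schedule are illustrative assumptions, as the specification only fixes the 67%/33% split and the use of a sequential DNN over the EXGP feature set.

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

def train_detection_dnn(X, y, seed=42):
    """Minimal sketch of training a sequential DNN on the EXGP feature set.

    X: array of shape (n_subjects, n_features) holding EXGP features.
    y: array of 0/1 labels (normal / glaucoma).
    """
    # 67% training / 33% testing split, as described in the specification.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=seed, stratify=y
    )
    model = keras.Sequential([
        keras.layers.Input(shape=(X.shape[1],)),
        keras.layers.Dense(16, activation="relu"),    # layer sizes are illustrative
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # glaucoma vs. normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=100, batch_size=8, verbose=0)
    _, accuracy = model.evaluate(X_test, y_test, verbose=0)
    return model, accuracy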
The explainability (D21) of the model was performed in a dashboard (D2) using an explainable AI tool (DE), that is also part of the explainable detection model module (D), in the form of various plots such as plot 1 (D211) including waterfall plots, plot 2 (D212) including contribution plots, etc. The dashboard (D2) is also designed to generate a screening index called the Gaze exploration-index (GE-i) (D22). The explainable detection model module (D) comprises gaze exploration visualization (D3) corresponding to different visual exploration tasks (V31, V32, V33, … V3n) in the form of fusion maps.
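By way of a non-limiting illustration, per-subject waterfall/contribution style explanations of the kind plotted in the dashboard (D2) could be produced with a model-agnostic explainability toolkit such as SHAP, as sketched below; the specification itself refers to an explainer dashboard, so the particular toolkit and function shown here are illustrative assumptions rather than the explainable AI tool (DE) itself.

import shap  # one possible explainability toolkit; illustrative choice

def explain_prediction(model, X_train, X_test, feature_names):
    """Sketch of generating waterfall / contribution style explanations (D211, D212)
    for individual predictions of the detection model."""
    # Model-agnostic explainer over the model's prediction function,
    # using the training data as the background distribution.
    explainer = shap.Explainer(
        lambda data: model.predict(data).ravel(), X_train, feature_names=feature_names
    )
    shap_values = explainer(X_test)
    # Waterfall plot of feature contributions for the first test subject.
    shap.plots.waterfall(shap_values[0])
    return shap_values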
During the working of the present invention, the subjects are engaged and oriented to different visual exploration tasks (V3) displayed on the display screen (V1), such as but not limited to a laptop screen. The participating subjects explore the relevant information driven into the visual field using saccadic eye movements. An eye tracker system (V2) is connected to the display screen (V1) and to the gaze exploration model (DM) in the AGE-i system (S), and the eye tracking data or gaze samples obtained are converted by the extensive gaze and performance module (E) into events or basic eye gaze measures such as but not limited to fixations and saccades.
The underlying computer-aided detection model also called explainable detection model module (D) of the present invention generates screening index and creates fusion maps, that utilize the information obtained from task performance and eye movement behavior of the subjects. The explainable detection model module (D) of the system estimates exploratory gaze patterns that reflect in all tasks for the glaucoma group.
The software of the extensive gaze and performance module (E), including open source software (E1) and customized software (E2), derives different eye gaze measures from the basic eye gaze events that assist in comprehending and evaluating the visual exploration skills.
The visual exploration skills are evaluated based on how the subjects perform global and local scanning during visual exploration tasks (V3) such as visual search (V32) or free-viewing of images (V33). Participants explored the semantically important regions while viewing images and searched efficiently by utilizing more fixations and saccadic eye movements. The software of the extensive gaze and performance module (E) estimates different eye gaze parameters of the subjects based on both the exploration and the performance of the subjects.
A computational algorithm (DA) in the gaze exploration model (DM) identifies the target for searching and the salient features in the images. The computational algorithm identifies the region of interest based on the target or salient features. If the Euclidean distance between a fixation and the target location is within a threshold (for example, 100 pixels), the participant is considered to have identified the target. Search duration and different eye gaze parameters are associated based on the region of interest. The gaze exploration of the subjects is compared with the output of the computational algorithm (DA). The comparison is expressed in terms of search duration and eye gaze parameters of the subjects during different visual exploration tasks (V3). This is visualized by overlaying the attention of the participants and output as feature maps of the different tasks.
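An illustrative sketch of this hit/miss rule is given below, assuming fixations are available as (x, y, timestamp) tuples; the function name and the default threshold are illustrative, the 100-pixel value being only the example given above.

import math

def classify_target_hit(fixations, target_xy, threshold_px=100):
    """Sketch of the hit/miss rule described above: a target counts as identified
    when any fixation falls within a Euclidean-distance threshold of the target.

    fixations: list of (x, y, timestamp_ms) fixation events for one image.
    target_xy: (x, y) location of the target in screen pixels.
    threshold_px: radius around the target (e.g. 100 pixels as in the example above).
    Returns (hit, time_ms): whether the target was found and the timestamp of the
    first fixation that landed on it (None if missed).
    """
    for x, y, t in fixations:
        if math.dist((x, y), target_xy) <= threshold_px:
            return True, t  # first fixation on target
    return False, None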
Visualization based on performance and eye gaze measures helps to understand gaze exploration skills. The feature maps depict the exploration and performance of different participants. The AGE-i system is validated based on different clinical tests. The gaze exploration of different participating subjects is expressed as the gaze exploration index. The explainability of the AGE-i system is done using an ‘explainer dashboard’, which makes the system trustworthy.
The various hardware, devices and software which together form the system (S) comprise, but are not limited to, the following:
Hardware Design
- Eye tracker system (V2)
- Monitor or display screen
- Webcam enabled computer system adapter charger
- Processor
- Touch control board (Printed Circuit Board (PCB))
- Power supply
- User interface/Communication interface
- Cloud interface to interact with server
Software design
- Open or closed source software
- Customized software
- Explainable AI tools
The development of the model of the present invention through research, experimental work, etc., and the training of the model were performed as described in the following paragraphs.
Hospital-based research and experiments for the development of the model were conducted at Narayana Nethralaya, Narayana Health City, Bengaluru, and were ethically approved by the Ethics Committee of the hospital. The purpose of the experiment was explained to each participating subject, and after he/she signed the consent sheet, he/she was invited for the research.
Research cum experiments were conducted on a group of participants diagnosed with glaucoma by the standard tests (clinical evaluation, visual field test, imaging techniques) and the same number of age-related controls, within an age group of 30-70 years and with no constraint on gender. The participants were selected during their regular glaucoma screening. After the visual field test, the Humphrey Field Analyzer (HFA) with the 24-2 program produced a visual field report for the different participants. The experimenter/health care person handling the participants maintained a copy of the visual field report for clinical validation at a later stage. The flowchart of data collection is as depicted in figure 2.
The research excluded participants who had undergone ocular surgery in the past three months, had a history of squint or retinal surgery, or were glaucoma suspects. Participants satisfying the inclusion criteria for the study were recruited from the outpatient department (OPD) in said hospital. The Glaucoma Hemifield Test (GHT) on the perimetry provides the label of ‘outside normal limits’ for glaucoma participants.
Glaucoma in the participants was graded as mild, moderate, or severe based on the Visual Field Index (VFI) values obtained in the experiments. A VFI value less than 40 was considered the severe category, a VFI between 40 and 60 was considered the moderate category, and a VFI value between 60 and 100 was considered the mild category. The subgroups of glaucoma and normal participants were also identified based on age groups such as young (age less than 45), middle-age (age between 46 and 60) and elder (age greater than 60) subgroups. The subgroup analysis based on severity grade and age group broadened the understanding of exploratory gaze patterns in different tasks. The flow diagram of subgroup analysis is shown in figure 3.
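The severity and age subgrouping described above may be sketched, for illustration only, as below; the handling of the exact boundary values (for example, a VFI of exactly 40 or 60, or an age of exactly 45) is an assumption, as the specification leaves these boundaries implicit.

def severity_grade(vfi):
    """Severity grading from the Visual Field Index, as used in the subgroup analysis.
    Boundary values (VFI of exactly 40 or 60) are assigned here by assumption."""
    if vfi < 40:
        return "severe"
    elif vfi <= 60:
        return "moderate"
    else:  # 60 < VFI <= 100
        return "mild"

def age_group(age):
    """Age subgrouping used alongside severity (young / middle-age / elder).
    Age 45 is assigned to the young group here by assumption."""
    if age <= 45:
        return "young"
    elif age <= 60:
        return "middle-age"
    else:
        return "elder"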
The experimenter or health care person conducting the eye-tracking experiment explained the different tasks to the participants. The distance of the participant from the monitor or the display screen (V1) was maintained at 60 cm. A non-invasive eye tracker system (V2) such as but not limited to Eye Tribe 60 Hz with accuracy 0.5° and spatial resolution 0.1° was attached to the display screen (V1). The eye tracker (V2) used infrared illumination to capture the eye movements of the participants when viewing the stimulus on the computer screen also referred to as display screen (V1) here. Before starting the experiment, 9-point calibration was run to get the Pupil-Corneal Reflex (PCR) of participants and if required, re-calibration was done to get the pupil position correctly. The system (S) of present invention focused on the strategy of exploratory eye gaze patterns of different participants or subjects in different visual exploration tasks (V3).
Visual exploration tasks (V3) are screen-based tasks that depict specific tasks such as but not limited to searching for an object, watching T.V. and viewing photographs in daily life. These tasks are based on images that include certain scenes or contain a target with distractors. Participants engage and explore images based on the instructions either displayed on the display screen (V1) or given by the experimenter/ health care person. These images are designed in such a way that the participants would utilize all parts of the image on display screen (V1). Figure 4 shows a few sample images of various visual exploration tasks (V3).
One of the visual exploration tasks, named the simple dot task, is the first task (T1), in which the stimulus includes a dot of a certain size on an image to be displayed on the display screen (V1). This dot is preferably a white dot of size in the range of 15 to 20 pixels, preferably 12 pixels. A plurality of these dots was displayed randomly on the screen, one dot on one image, for a certain period of time. The position of each dot is arranged or positioned in any of the four quadrants of the display screen (V1), namely Top Left (TL), Bottom Left (BL), Top Right (TR) and Bottom Right (BR). The images in this task range in number from 1 to 50, and each image with said dot is displayed for a time period ranging from 0.5 seconds to 5 seconds, to be viewed by said subjects monocularly without being required to provide any response after viewing. As per an embodiment there are 30 images that are part of visual exploration task (V31), the simple dot task, and each image was displayed for 1.5 sec. The participant views the image monocularly (each eye separately) and is not required to provide any response. The output of this task (V31) is collected as gaze samples by the eye tracker system (V2). Figure 4(a) shows a sample image of visual exploration task (V31), also called T1.
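A non-limiting sketch of positioning one stimulus dot per image in a randomly chosen quadrant of the display screen (V1), as described for this task, is shown below; the screen resolution and margin used are illustrative assumptions and not fixed by the specification.

import random

QUADRANTS = {"TL": (0.0, 0.0), "TR": (0.5, 0.0), "BL": (0.0, 0.5), "BR": (0.5, 0.5)}

def random_dot_position(screen_w=1366, screen_h=768, margin=50):
    """Sketch of placing one stimulus dot in a randomly chosen quadrant
    (TL, TR, BL, BR) of the display screen, as in the simple dot task (V31).
    Screen size and margin are illustrative values."""
    quadrant = random.choice(list(QUADRANTS))
    qx, qy = QUADRANTS[quadrant]
    x = random.randint(int(qx * screen_w) + margin, int((qx + 0.5) * screen_w) - margin)
    y = random.randint(int(qy * screen_h) + margin, int((qy + 0.5) * screen_h) - margin)
    return quadrant, (x, y)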
In an embodiment of the present invention, a second task of the visual exploration task module (V), herein called visual exploration task 2 (V32), is presented to the subjects. This visual search task is a task-oriented activity that includes a set of images, such as but not limited to cartoon images, in which each image is displayed on the display screen (V1) for a short period of time and the subjects are prompted to search for a specified target in each of said images. A target in any form, such as but not limited to a ‘star’, is placed in various positions on each of said images.
The subjects are engaged in said visual exploration task 2 (V32) in the following ways:
One of the set of images is displayed on the display screen (V1) for a short period of time ranging from 5 seconds to 30 seconds preferably 20 seconds.
The subjects are prompted or instructed to search for said specified target such as but not limited to ‘star’ that is placed in various positions of said displayed image.
The subject is allowed to search for the target in the displayed image in order that when he/she finds the target he/she responds either by clicking mouse button on said target or by indicating his/her finding about position of said target to the experimenter/ health care person.
After the display of the above image, one central dot is displayed for a short period of time in the range of 0.5 seconds to 2 seconds preferably for 1 second.
The above four steps are repeated to complete the activity of search in each of the set of images of said visual exploration task 2 (V32).
The responses are collected and stored as a data frame to be fed to the Extensive Gaze and Performance (EXGP) module (E). The responses of the subjects are collected as gaze samples by the eye tracker system (V2) in the following steps:
Responses are collected as x, y positions with respect to time, which are called raw eye gaze data.
The raw eye gaze data is processed to find events such as fixations and saccades.
The events are processed to find derived eye gaze parameters.
The number of such images as part of visual exploration task 2 (V32) ranges from 5 to 30, preferably 20 images, including color and gray scale images, wherein said images of visual exploration task 2 (V32) may be selected from open sources such as search engines including the Bing search engine, datasets, any image repositories, or any image sources. The target includes different modalities such as size, orientation, position, and opacity in four quadrants viz. top left, top right, bottom right, bottom left. The size of the target may vary between 1 to 20 pixels, preferably between 10 and 14 pixels, to match the background of the images. Visual exploration task 2 (V32) is designed in such a way that at least four images of the set have the target positioned in each quadrant. A central dot is displayed for a short span of time in the range of 0.5 seconds to 2 seconds, preferably 1 sec, after every image. Figure 4(b) shows an illustration of, but not limited to, a sample image of visual exploration task 2 (V32 or T2).
In an embodiment of the present invention, another task T3, herein termed visual exploration task 3 (V33), forms part of the visual exploration task module (V). The task includes a set of images ranging from 5 to 40 in number, wherein the images are presented to the subjects one after the other and the subjects are instructed to observe different salient features in these images, such as but not limited to traffic lights, people, animals, etc. The subjects are not required to provide any responses to the task (V33). These images may be selected from search engines such as but not limited to the Bing image search engine, or datasets such as but not limited to the CAT2000 benchmark dataset with selected India-based images, any image repositories, any image sources, etc. The task includes images that are color and grayscale, and the viewing time for each image ranges between 1 second to 10 seconds, preferably 4 seconds. The images may be of different sizes, including but not limited to half HD (1280 x 720), standard (1366 x 768) or full HD (1920 x 1080), or any other size that can be displayed fully on the display screen (V1).
A central dot is displayed for a short span of time in the range of 0.5 seconds to 5 seconds preferably 1 sec after every image displayed of visual exploration task (V33).
The subjects are engaged in said visual exploration task 3 (V33) in the following ways.
One of the set of images is displayed on the display screen (V1) for a short period of time ranging from 1 second to 10 seconds preferably 4 seconds.
The subjects are prompted or instructed to view the image presented to them, without being asked for any response.
After the display of the above image, one central dot is displayed for a short period of time in the range of 0.5 seconds to 2 seconds preferably for 1 second.
The above three steps are repeated to complete the activity of viewing of each of the set of images of said visual exploration task 3 (V33).
The outputs of visual exploration are collected as gaze samples by the eye tracker system (V2), wherein the responses of the subjects are collected in the following steps:
Responses are collected as x, y positions with respect to time, which are called raw eye gaze data.
The raw eye gaze data is processed to find events such as fixations and saccades.
The events are processed to find derived eye gaze parameters.
In order to capture or invite the attention of the subjects to the task, a few of the images can be presented inverted or be augmented with sound effects or noise. Figure 4(c) shows an illustration of, but not limited to, a sample image of visual exploration task 3 (V33 or T3) with social scenes and applied sound effects/noise, as well as a sample image of visual exploration task V33 or T3 that has background scenes.
Eye gaze samples or the responses from eye tracker are collected as x, y positions with respect to time, which are called raw eye gaze data. The raw eye gaze data is processed to find events such as fixations and saccades. The events are processed to find derived eye gaze parameters to provide the exploratory gaze patterns in the EXGP module (E).
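By way of a non-limiting illustration, a minimal dispersion-threshold (I-DT style) sketch in Python is given below for grouping raw (time, x, y) gaze samples into fixation events, with saccades treated as the transitions between successive fixations; the dispersion and duration thresholds and the sample layout are illustrative assumptions and do not limit the eye gaze software described herein.

from typing import List, Tuple

def detect_fixations(samples: List[Tuple[float, float, float]],
                     max_dispersion: float = 30.0,   # pixels (assumed threshold)
                     min_duration: float = 0.1) -> List[dict]:
    """samples: list of (time_in_seconds, x, y) tuples ordered by time."""
    fixations, i = [], 0
    while i < len(samples):
        j = i
        # Grow the window while the points stay within the dispersion limit.
        while j + 1 < len(samples):
            xs = [p[1] for p in samples[i:j + 2]]
            ys = [p[2] for p in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            xs = [p[1] for p in samples[i:j + 1]]
            ys = [p[2] for p in samples[i:j + 1]]
            fixations.append({"start": samples[i][0], "duration": duration,
                              "x": sum(xs) / len(xs), "y": sum(ys) / len(ys)})
            i = j + 1
        else:
            i += 1
    return fixations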
The system (S) of the present invention includes a module called the Estimation of Extensive Gaze and Performance (EXGP) module (E). This module comprises at least one submodule to assess basic eye gaze parameters, comprising open source or closed source software (E1) selected from a group including OGAMA 5.0, Gazepoint Cloud, WebGazer, Gazerecorder, any Python based eye gazing software, etc. The open source software (E1) used in an embodiment herein is OGAMA 5.0. The open source or closed source software (E1) estimates basic eye gaze parameters (E1P) from the gaze samples of the various visual exploration tasks (V3) captured by the eye tracker system (V2). The basic eye gaze parameters (E1P) are fed as inputs to a customized software (E2). The customized software (E2) estimates extensive eye gaze parameters, also called eye gaze derived parameters (E2P), that assist in differentiating eye movement behavior and identifying it as glaucoma or normal. The customized software for estimating eye gaze derived parameters can be a proprietary software that is part of the present invention. The inclusion of the open source or closed source software (E1) in the EXGP module (E) is required to calculate basic eye gaze parameters (E1P) from gaze samples that cannot be calculated using the customized software (E2).
The open-source software (E1) preprocesses gaze samples from the eye tracker system (V2) to remove artefacts and outliers. This software (E1) estimates events such as fixations and saccades from the gaze samples from the eye tracker system (V2). The basic eye gaze parameters (E1P) include parameters also termed as events herein such as but not limited to fixations and saccades.
Since visual exploration tasks (V3) such as free viewing herein called as visual exploration task 3 (V33) and goal oriented tasks (visual search) herein called as visual exploration task 2 (V32) expect different manner of eye movements, the customized software (E2) is needed to calculate different comprehensive or extensive eye gaze measures also called as eye gaze derived parameters (E2P) from the basic eye gaze parameters (E1P). The summary of the relationship of eye gaze parameters with the visual field is depicted in table 3.
Table 3: Relation of eye gaze measures and its outcome.
Derived Parameters Definition Inference
Horizontal and Vertical Ratio (HV-ratio) Check of dominance of horizontal and vertical saccades Restriction in the visual field
Scan path Length Adding saccade amplitudes in a scan path
Convex Hull Area Polygonal space that covers all fixations of a participant
Saccadic direction Direction of saccades
Fixation Count Number of fixations Exploratory eye movement
Saccade Count Number of saccades towards the region of interest
Saccade rate Number of saccades per second
Saccade velocity Eye movement rate
Fixation Duration Duration taken by the eye to be still at a particular area Visual processing of the stimulus
Fixation/Saccade ratio Number of saccades greater than the amplitude threshold divided by the number of saccades smaller than the amplitude threshold Global scanning and detailed inspection
The calculation of eye gaze derived parameters (E2P) and their notations are described as follows:
Fixation Count (FC) – FC is calculated as the count of fixations done by the subject while viewing a stimulus. Average value of all trials is estimated for
each subject participating. Average FC for visual search also called as visual exploration task 2 (V32) and free-viewing tasks also called as visual exploration task 3 (V33) are denoted as Star_Avg_FC and fv_Avg_FC respectively.
Saccade Count (SC) – SC is calculated as the number of saccades made by the subject or participant while viewing a stimulus. There can be saccades along a reverse direction, called regressions. Averages of SC for the visual search task, also called visual exploration task 2 (V32), and the free-viewing task, also called visual exploration task 3 (V33), are abbreviated as Star_Avg Saccade Count and fv_Avg Saccade Count respectively.
Saccade rate – It is calculated as the number of saccades made by the subject or participant per second, and it is also known as the eye movement rate. Average saccade rates for the visual search task (V32) and the free-viewing task (V33) are denoted as star_Avg Saccade Rate and fv_Avg Saccade Rate respectively.
Fixation Duration (FD) – The duration taken by the eye of a subject to be still at a particular area. While viewing the stimulus, the subjects or participants initially show a smaller number of fixations and a higher saccade amplitude to take in visual information; later the fixation duration increases and the saccade amplitude reduces in order to understand the semantics of the stimulus. It is known in the art that the visual field is longer horizontally than vertically. Average FD for the visual search task (V32) and the free-viewing task (V33) are denoted as Star_Avg FD and fv_FD mean respectively.
Fixation/Saccade ratio (F/S ratio) – It indicates the number of saccades greater than the amplitude threshold divided by the number of saccades smaller than the amplitude threshold. It shows the difference between global scanning and detailed inspection. F/S ratios of the visual search and free-viewing tasks are prefixed with star and fv as star_F/S ratio and fv_F/s ratio respectively.
Saccade Velocity – Saccade velocity (SV) is the eye movement speed. It is calculated by dividing the saccade amplitude by the saccade duration. SV of the visual search and free-viewing tasks are prefixed with star and fv as star_SV and fv_SV respectively.
Scanpath Length (SL) – This parameter is calculated by adding saccade amplitudes in a scan path. SL of the visual search task, also called visual exploration task 2 (V32), and the free-viewing task, also called visual exploration task 3 (V33), are prefixed with star and fv as star_SL and fv_SL respectively. The length between fixations is denoted as Fix Conn Length, prefixed with star and fv.
Saccadic Direction – Saccadic orientation is the direction of saccades; while viewing scenes, participants generally show a horizontal orientation, and the dominance of horizontal over vertical saccades can be inspected using the Horizontal and Vertical Ratio (HV-ratio). Saccadic direction of the visual search task (V32) and the free-viewing task (V33) are prefixed with star and fv as star_Saccadic direction and fv_Saccadic direction respectively.
Scanpath - Scanpath is a graph containing fixations as vertices and saccades as edges between vertices.
Convex Hull Area – It refers to the polygonal space that covers all fixations of a subject or participant across all trials. It shows the shape of the scanpath done by a subject or participant across all trials. Average of convex hull area for visual search task called as visual exploration task 2 (V32) and free viewing called as visual exploration task 3 (V33) are denoted as star_Convex Hull Area and fv_Convex Hull Area respectively. The derived parameters can include other parameters that are based on fixation and saccade with respect to duration, position and velocity.
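By way of a non-limiting illustration, a minimal Python sketch of a few of the above eye gaze derived parameters (E2P) is given below; the event dictionaries and their field names are illustrative assumptions, and the customized software (E2) is not limited to these computations.

import numpy as np
from scipy.spatial import ConvexHull

def scanpath_length(saccades):
    # Scanpath length: sum of saccade amplitudes along the scan path.
    return sum(s["amplitude"] for s in saccades)

def saccade_velocity(saccades):
    # Saccade velocity: saccade amplitude divided by saccade duration, averaged.
    return float(np.mean([s["amplitude"] / s["duration"] for s in saccades]))

def hv_ratio(saccades):
    # Dominance of horizontal over vertical saccade components.
    dx = sum(abs(s["x_end"] - s["x_start"]) for s in saccades)
    dy = sum(abs(s["y_end"] - s["y_start"]) for s in saccades)
    return dx / dy if dy else float("inf")

def convex_hull_area(fixations):
    # Polygonal area covering all fixation positions across trials
    # (requires at least three non-collinear fixation points).
    points = np.array([(f["x"], f["y"]) for f in fixations])
    return ConvexHull(points).volume  # for a 2-D hull, .volume is the area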
The performance of subjects or participants during the visual exploration tasks (V3) is estimated based on the parameters from the EXGP module (E). The monocular performance in the simple dot task, called visual exploration task 1 (V31), and the binocular performance in the visual search task, called visual exploration task 2 (V32), are estimated based on the fixation over the target within the trial time.
The monocular performance of participants in the simple dot task (V31) is estimated using the average miss. This performance parameter (E3P) is calculated by summing the misses across the different trials and dividing by the number of images. It is estimated for each eye, giving a left eye miss and a right eye miss. If the subject or participant is able to fixate on the target, it is considered as seen/hit. The performance measure (E3P) in the simple-dot task is denoted as Dot_Avg Miss. The task is followed by binocular performance in daily-routine tasks, namely the visual search (V32) and free-viewing (V33) tasks. The reaction time to identify the target is calculated from the act of clicking the mouse on the target or region of interest, such as but not limited to a ‘star’. The participating subjects can also communicate the target's (for example, the star's) location to the experimenter and fixate on the target for five seconds. The threshold of fixation duration was decided beforehand based on a pilot study. The average reaction time is calculated by summing the reaction times and dividing by the number of images. The performance measure (E3P) in the visual search task is denoted as Star_Avg RT.
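By way of a non-limiting illustration, the two performance measures (E3P) described above may be sketched in Python as below, assuming one record per displayed image with hypothetical field names.

def dot_avg_miss(trials):
    # Average miss in the simple dot task (V31) for one eye:
    # number of images where the target was not fixated, divided by the
    # number of images.
    return sum(1 for t in trials if not t["hit"]) / len(trials)

def star_avg_rt(trials):
    # Average reaction time in the visual search task (V32):
    # sum of reaction times divided by the number of images.
    return sum(t["reaction_time"] for t in trials) / len(trials)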
The free-viewing task (V33) is not a goal-oriented activity and hence no performance measure (E3P) is estimated for it. The extensive eye gaze measures (E2P) and performance measures (E3P) together form the feature set (E4) of the EXGP module (E), which is provided as the output EXGP Feature Set (E4). The feature set (E4) includes at least 28 features that are fed to the EXGP feature analysis module (F) and to the detection model (DM) of the Explainable detection model module (D).
The mean, standard deviation (in parenthesis) and p-value between glaucoma and normal for the different features in the EXGP feature set (E4) are shown in Table 4. Entries with p-value < 0.05 are shown in boldface and are significant.
Table 4. Mean and standard deviation (in parenthesis) of EXGP features and p-value between glaucoma group and normal group.
Sl.No. EXGP features Glaucoma Normal p-value
1 Dot_Avg Miss 0.6(0.3) 0.2(0.2) p<0.001
2 Star_Avg RT 7.0(3.6) 5.6(2.9) 0.048
3 Star_Avg FC 5.5(4.7) 6.1(5.4) 0.59
4 Star_Avg FD 309.1(175.2) 240.9(97.7) 0.029
5 star_FC/s 0.6(0.4) 0.9(0.8) 0.0052
6 star_F/S ratio 185.1(141.3) 251.2(209.4) 0.09
7 star_Avg SL 255.9(77.8) 263.1(67.4) 0.64
8 star_Avg SV 1.2(0.9) 1.8(1.1) 0.004
9 star_Fix Conn Length 1391.0(1015.2) 2124.5(1418.5) 0.007
10 star_HV ratio 36.4(131.5) -6.6(52.9) 0.051
11 star_Saccadic direction 0.1(0.4) 0.4(1.4) 0.2
12 star_Convex Hull Area 3120.2(625.8) 3379.8(584.6) 0.05
13 star_Avg Saccade Count 6.5(4.4) 5.1(3.7) 0.11
14 star_Avg Saccade Rate 1.4(0.3) 1.3(0.3) 0.42
15 fv_FC 4.6(3.2) 6.1(4.3) 0.08
16 fv_FC/s 1.4(0.7) 1.8(0.9) 0.007
17 fv_FD mean 197.1(73.2) 185.3(77) 0.46
18 fv_SD mean 143.4(42.6) 135.5(32.7) 0.33
19 fv_F/s ratio 263.8(169.5) 282.6(208.2) 0.64
20 fv_Avg SL 184.8(95.9) 203.1(84.8) 0.35
21 fv_Avg SV 1.5(1.2) 1.7(1) 0.47
22 fv_Fix Conn Length 734.4(521.3) 1230.3(932.6) 0.003
23 fv_Regressions 0.7(0.3) 0.8(0.5) 0.3
24 fv_HV ratio 571.6(2763.9) 0.4(28.6) 0.18
25 fv_Saccadic direction 0.1(0.5) 0.1(0.2) 0.66
26 fv_Convex Hull Area 2400.7(632.6) 2934.2(463.8) p<0.001
27 fv_Avg Saccade Count 4.5(3.0) 5.8(3.6) 0.06
28 fv_Avg Saccade Rate 1.6 1.8(0.5) 0.03
The performance measures (E3P) Dot_Avg Miss and Star_Avg RT are significantly different between glaucoma and normal with p<0.001 and p<0.05 respectively. There is also a significant difference in fixation duration, fixation count per second, saccade velocity and fixation connection length with p<0.05 between the Glaucoma and Normal groups in the visual search task (V32). It has been observed that the Glaucoma group shows longer fixation duration than the Normal group with a smaller fixation count, which makes their performance in the visual search task (V32) poorer.
There is also a significant difference in fixation count per second, fixation connection length, convex hull area and saccade rate between Glaucoma and Normal in the free-viewing task (V33). Glaucoma subjects show a lower fixation count and a smaller convex hull area than Normal subjects in the free-viewing task (V33).
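The statistical test used to obtain the p-values reported in Table 4 is not named herein; by way of a non-limiting illustration, a two-sample Mann-Whitney U test, a common choice for comparing an EXGP feature between groups, is sketched below with illustrative inputs.

import numpy as np
from scipy.stats import mannwhitneyu

def group_difference(glaucoma_values: np.ndarray, normal_values: np.ndarray):
    # Two-sided test of whether the feature differs between the two groups.
    stat, p_value = mannwhitneyu(glaucoma_values, normal_values,
                                 alternative="two-sided")
    return stat, p_value

# Illustrative call with made-up values (not data from the study):
# stat, p = group_difference(np.array([0.6, 0.7, 0.5]), np.array([0.2, 0.3, 0.1]))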
The exploratory gaze patterns are estimated based on the responses of the subjects to the visual exploration tasks (V3). The responses are collected as x, y positions with respect to time, called raw eye gaze data. The raw eye gaze data is given to Python-based eye gaze software that identifies fixation and saccade events.
The EXGP module (E) in the proposed GE-i model estimates a number of parameters ranging from 20 to 45. In an embodiment of the present invention, at least 28 parameters are estimated, which include 26 comprehensive eye gaze measures (E2P) estimated during the visual search task (V32) and the free-viewing task (V33) together, the average miss estimated during the simple dot task (V31), and the average reaction time estimated during the visual search task (V32). These parameters together form the EXGP feature set (E4). The feature set (E4) from the Extensive gaze and performance (EXGP) module (E) is analyzed in the Analysis of feature set submodule (F1) of the EXGP feature analysis module (F) based on different severity (F11) grade subgroups and age (F12) based subgroups of the glaucoma identified subjects, also called the glaucoma group. The summary of significance testing is tabulated in Table 5, Table 6 and Table 7, and explained in the subsequent paragraphs.
Table 5: Summary of significance testing and impact in severe glaucoma subgroup.
Table 6: Summary of significance testing and impact in moderate glaucoma subgroup.
Table 7: Summary of significance testing and impact in elderly glaucoma subgroup
The Glaucoma group is categorized into three subgroups, viz. severe, moderate and mild, for the purpose of research and development of the present invention. During research it was seen that comparison of eye tracking or gaze measures (E1P, E2P) between the severe and mild subgroups shows a significant difference in saccade count and saccade rate (p<0.001) during both the visual search (V32) and free-viewing (V33) tasks, and in the reaction time during visual search (V32). No significant difference has been identified in average miss or in the other eye gaze measures (E1P, E2P) between the severe and mild subgroups.
The comparison between the moderate and mild subgroups shows a significant difference in saccade count and saccade rate in the free-viewing task and the visual search task with p<0.001, but no significant difference has been identified in the other EXGP features. Saccade rate and saccade count are thus exploratory eye gaze patterns shown more by the mild glaucoma subgroup than by the moderate and severe glaucoma subgroups.
Based on age, the Glaucoma group is categorized into elder, middle-age and young subgroups. The comparison between the elder and middle-age subgroups shows a significant difference in average miss in the simple dot task (V31) with p=0.001 and in convex hull area in the free-viewing task (V33) with p-value 0.037.
The elder and young subgroups show a significant difference in the performance measures average miss and average reaction time with p-value=0.018 and p-value=0.014 respectively. There is a significant difference in fixation duration in both the visual search (V32) and free-viewing (V33) tasks with p-value=0.031 and p-value=0.035 respectively. Convex hull area in the visual search task shows a p-value of 0.04 between the elder and young subgroups.
No significant difference shows up in EXGP features between the young and middle-age subgroups. There is a significant difference in the performance during tasks between the elder and young subgroups. The elderly glaucoma subgroup shows longer fixation duration, which leads to limited exploration during the performance of both tasks.
The Humphrey Field Analyzer (HFA) visual field test is done for the left and right eyes separately. The parameters in the visual field report, which estimate the retinal sensitivity of each eye and help clinicians understand functional deficits in terms of the visual field, are as follows.
Mean Deviation (MD) - the average deviation from the age-matched normal in terms of retinal sensitivity. More negative values indicate worse field defects.
Pattern Standard Deviation (PSD) - Clinicians use PSD to understand irregular depression in the visual field defect. Higher positive values indicate greater functional loss.
Visual Field Index (VFI) - the percentage of visual field status. Lower values indicate worse field defects.
Glaucoma is graded as mild, moderate, or severe based on the Visual Field Index (VFI). If the two eyes have different severity grades, the higher severity grade is used as the label.
The distribution of data in different clinical measures is shown in Figure 5. The descriptive statistics of clinical measures and p-value between glaucoma group and normal group are given in Table 8.
Table 8: Mean and standard deviation (in parenthesis) of age and clinical features and p-value between glaucoma group and normal group.
Clinical Information Glaucoma (N=50) Normal (N=48) p-value
Age 54.9(13.7) 51.84(11.78) 0.264
L_MD -12.0(9.5) -3.04(2.34) p<0.001
L_PSD 6.2(3.3) 1.94(0.94) p<0.001
L_VFI 70.2(28.6) 97.26(3.14) p<0.001
L_G 1.5(1.2) 3.51(1.70) p<0.001
R_MD -9.5(9.1) -3.36(3.74) p<0.001
R_PSD 4.8(3.7) 2.25(1.74) p<0.001
R_VFI 78.2(28.2) 95.28(11.45) p<0.001
R_G 2.0(1.5) 3.35(1.72) p<0.001
EXGP features were validated against clinical features using the Spearman Correlation Coefficient. Since the visual search and free-viewing tasks were performed using both eyes, but clinical testing was performed for each eye, for validation purposes the clinical measures MD, PSD and VFI were taken from the higher-severity eye only.
Average Reaction Time is positively correlated with age (correlation coefficient 0.44). Convex hull area in the visual search task is positively correlated with VFI (correlation coefficient 0.42). The other EXGP features have weak correlations with the clinical measures.
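By way of a non-limiting illustration, the Spearman correlation used for clinical validation may be computed as below; the data frame and column names are illustrative assumptions.

import pandas as pd
from scipy.stats import spearmanr

def validate_feature(df: pd.DataFrame, exgp_col: str, clinical_col: str):
    # Spearman correlation between one EXGP feature and one clinical measure
    # taken from the higher-severity eye.
    rho, p_value = spearmanr(df[exgp_col], df[clinical_col])
    return rho, p_value

# e.g. validate_feature(data, "Star_Avg RT", "Age")            # reported correlation ~ 0.44
#      validate_feature(data, "star_Convex Hull Area", "VFI")  # reported correlation ~ 0.42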
The EXGP feature analysis module (F) also incorporates a submodule for assessment of progression (F3) of glaucoma in subjects already diagnosed with glaucoma. The submodule indicates whether the condition of glaucoma in such subjects is progressing or regressing, and the pace of that progression or regression, by comparison with the previously recorded condition of said subjects.
Compensation exhibited by subjects with visual field loss is reflected in shorter reaction times during the tasks and is estimated by the submodule for estimating compensation for visual field loss. The visual exploration task 2 (V32) checks the contribution of binocular vision of said subjects to their visual performance during day-to-day tasks and further assesses how the subjects compensate for their visual field loss.
Explainable Detection Model (DM) includes explainable AI tool (DE), Deep Neural Network (D1) and Gaze Exploration Visualization (D3). The explainability of the detection model is based on the contribution of different features in generation of Gaze Exploration - index (GE-i) and the visualization of exploration tasks of different subgroups.
The eye gaze parameters (E1P, E2P) and performance measures (E3P) of the visual exploration tasks are unified to form the EXGP feature set, which includes 20 to 45 parameters, of which at least 28 are input parameters. The input feature vectors are fed to a sequential DNN model, which predicts the class label (glaucoma or normal).
The sequential DNN model is a stack of layers that produces output values based on the input feature vectors x1, x2, .., xm, where m is the number of feature vectors. The input shape of the DNN model was the 28 features in the EXGP feature set (E4). The feature set is fed into a stack of fully connected (dense) layers of sizes 28, 24 and 22, with a dropout value of 0.5 after every dense layer.
The dropout technique helps to drop or retain nodes for the next layer. The Rectified Linear Unit (ReLU) activation function is applied to every dense layer to activate the nodes; ReLU is calculated as f(x) = max(0, x). The final dense layer outputs a probability between 0 and 1 with a threshold of 0.5, where class 0 refers to Normal and class 1 refers to Glaucoma. The activation function is selected from the Rectified Linear Unit (ReLU), the Sigmoid or Logistic activation function, the SoftMax function, the Leaky ReLU function and the Tanh function, preferably the ReLU activation function.
The DNN model was compiled using Keras libraries with TensorFlow as the backend. The loss of the model was defined as mean squared error and the optimizer as the Adam stochastic gradient descent algorithm. The model was fitted on the training dataset over 200 epochs. The DNN model finally predicted on the test dataset and generated evaluation metrics such as accuracy score, sensitivity and specificity. The summary of the sequential model is given in Table 9, and an illustrative sketch of this architecture follows Table 9.
Table 9: Summary of DNN architecture
Layer Shape Activation Parameters
Dense 28 ReLU 812
Dropout 28 - 0
Dense 24 ReLU 696
Dropout 24 - 0
Dense 22 ReLU 550
Dropout 22 - 0
Dense 2 - 46
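By way of a non-limiting illustration, a minimal Keras sketch of the sequential DNN (D1) summarized in Table 9 is given below; the layer sizes, dropout value, loss, optimizer, number of epochs and the 67%/33% split follow the description above, while the data-loading names are illustrative assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

def build_exgp_dnn(n_features: int = 28) -> keras.Model:
    model = keras.Sequential([
        layers.Dense(28, activation="relu", input_shape=(n_features,)),  # 812 parameters
        layers.Dropout(0.5),
        layers.Dense(24, activation="relu"),                             # 696 parameters
        layers.Dropout(0.5),
        layers.Dense(22, activation="relu"),                             # 550 parameters
        layers.Dropout(0.5),
        layers.Dense(2),                                                 # 46 parameters, class scores
    ])
    # Mean squared error loss with the Adam optimizer, as described above.
    model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
    return model

# Hypothetical usage over the 98-case EXGP dataset with a 67%/33% split:
# X, y = load_exgp_features()                  # placeholder loader, not part of this disclosure
# y_onehot = keras.utils.to_categorical(y, 2)  # class 0 = Normal, class 1 = Glaucoma
# X_tr, X_te, y_tr, y_te = train_test_split(X, y_onehot, test_size=0.33)
# model = build_exgp_dnn()
# model.fit(X_tr, y_tr, epochs=200, verbose=0)
# print(model.evaluate(X_te, y_te))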
The decision of the input shape or input features fed to the DNN model was based on SHapley Additive explanations (SHAP) Kernel Explainer. Kernel Explainer computed the relevance of each feature towards DNN model based on SHAP values. Positive SHAP values inferred that the feature has a positive impact towards the model, otherwise it has a negative impact towards the model. SHAP values were generated mathematically using (1).
\phi_i(f, x) = \sum_{z' \subseteq x'} \frac{(|z'| - 1)!\,(|x'| - |z'|)!}{|x'|!} \left( f_x(z') - f_x(z' \setminus i) \right) \quad \ldots (1)
The Explainer() in the SHAP library returned the relevant features from the EXGP dataset. The relevant features were fed to the DNN (D1) model and the performance of the model was evaluated over different iterations. An iteration in the detection model (DM) is the repeated selection of features from the EXGP dataset and estimation of the performance metrics of the DNN (D1) model after feeding the pertinent features into the model.
In iteration number t=1, all 28 features, i.e., f=28, in the EXGP feature set were fed to the DNN (D1) model. The model was evaluated based on criteria such as accuracy, sensitivity and specificity. The explainability of the DNN (D1) model was checked based on the Kernel Explainer, and the top 10 features, i.e., f=10, which contributed most towards the evaluation, were selected for iteration number t=2.
The evaluation metrics of the model were calculated and recorded. The final feature list, which includes the top 5 features (f=5), was selected and fed to the DNN model in iteration number t=3. The final Explainable Detection Model (DM) for the prediction of glaucoma is based on the top 5 features: fv_Convex Hull Area, star_Fix Conn Length, star_Avg FC, Dot_Avg Miss and fv_Fix Conn Length. The workflow pipeline of the Explainable Detection Model is shown in Figure 6.
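By way of a non-limiting illustration, the iterative feature relevance step may be sketched with the SHAP KernelExplainer as below, assuming a trained Keras model and a pandas DataFrame X_train of EXGP features; the background sample size, nsamples value and helper name are illustrative assumptions.

import numpy as np
import shap

def top_k_features(model, X_train, k):
    # KernelExplainer approximates Shapley values for an arbitrary predictor.
    explainer = shap.KernelExplainer(model.predict, shap.sample(X_train, 50))
    shap_values = explainer.shap_values(X_train, nsamples=100)
    # Older SHAP versions return one array per model output; adapt the
    # aggregation below if your version returns a single stacked array.
    importance = np.abs(np.array(shap_values)).mean(axis=(0, 1))
    order = np.argsort(importance)[::-1]
    return [X_train.columns[i] for i in order[:k]]

# Iterations t=1 (f=28), t=2 (f=10), t=3 (f=5): retrain on the selected
# features and record accuracy, sensitivity and specificity at each step.
# top10 = top_k_features(model, X_train, 10)
# top5 = top_k_features(model_top10, X_train[top10], 5)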
The explainability of the feature importance was amalgamated with an interactive dashboard (D2) using the ‘explainerdashboard’ library. The ‘RegressionExplainer’ performed explainability of the final list of 5 features using a scikit-learn based machine learning model on the test data. The dashboard (D2) helped to answer different ‘what if’ questions by showing a feature dependence plot, a feature contribution plot and a table based on the actual class: glaucoma or normal. Certain weights were assigned to the relevant features to discriminate between glaucoma and normal. Thus, a screening index called the Gaze Exploration-index (GE-i) is generated.
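By way of a non-limiting illustration, the interactive dashboard (D2) step may be sketched with the explainerdashboard library as below; the choice of a scikit-learn random forest regressor and the variable names are illustrative assumptions.

from sklearn.ensemble import RandomForestRegressor
from explainerdashboard import RegressionExplainer, ExplainerDashboard

# reg = RandomForestRegressor().fit(X_train[top5], y_train)
# explainer = RegressionExplainer(reg, X_test[top5], y_test)
# ExplainerDashboard(explainer, title="GE-i feature explainability").run()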
Another task of the Explainable Detection Model (DM) was the visualization of gaze exploration. The pertinent or exploratory gaze patterns were visualized onto a single image. This helped to identify the regions that were difficult for participants during the performance of the tasks.
The Gaze Fusion Map (GFM) was generated by fusing relevant information of 30 images. It is the outcome of the monocular performance of different participants, obtained by fusing the ‘hit/miss’ of 30 images. A dark spot represents the target being ‘not seen’ and a red spot represents the target being ‘seen’.
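By way of a non-limiting illustration, the fusion of hit/miss outcomes of the simple dot task images into a Gaze Fusion Map may be sketched as below; the record fields, screen size and the use of matplotlib are illustrative assumptions.

import matplotlib.pyplot as plt

def plot_gfm(dot_trials, screen_size=(1366, 768)):
    """dot_trials: list of dicts with keys 'x', 'y', 'hit' (one per image, one eye)."""
    fig, ax = plt.subplots()
    ax.set_xlim(0, screen_size[0])
    ax.set_ylim(screen_size[1], 0)  # screen coordinates: origin at top-left
    for t in dot_trials:
        # Red spot = target seen, dark spot = target not seen.
        ax.scatter(t["x"], t["y"], c=("red" if t["hit"] else "black"), s=60)
    ax.set_title("Gaze Fusion Map (GFM)")
    return fig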
The Gaze Fusion Reaction Time (GFRT) map was generated by fusing relevant information about the targets into a single image. Two variables, hit/miss and reaction time during the visual search task, were overlaid onto the image. It is the outcome of binocular performance across 20 images. The GFRT visualization helps to understand the position of different targets, the hit/miss of the target, and the average reaction time.
Reaction time and miss of different participants visualized on to a single image, highlighted the difficult regions irrespective of exploratory gaze patterns.
The Gaze Exploration-index (GE-i) Explainable Detection Model is an interactive platform, an explainability technique written in a Colaboratory notebook. Thus, it is open source software, which makes it very portable. The sample dataset included 98 cases; 67% of the dataset was set as the training dataset and 33% as the testing dataset. The Sequential Deep Neural Network (DNN) model fits the training dataset, and the accuracy score on the testing dataset is recorded after every iteration of feature relevance. The final list of 5 relevant features predicted the unseen samples and improved the accuracy score of the model to 0.80.
The Kernel Explainer of SHAP (SHapley Additive exPlanations) explained different attributes of the detection model (DM). The summary plot of SHAP depicted the feature relevance in descending order. The accuracy of the DNN (D1) model is improved based on the relevant features given as input variables. In each iteration, f=28, f=10, and f=5 pertinent features of the training dataset were fed to the model, and the accuracy, sensitivity, and specificity were recorded. The summary plot of the iterative improvement in the accuracy of the DNN (D1) model after feature relevance is shown in Figure 7.
The final list of five relevant features is shown in the bar graph Figure 8.
A positive interaction existed between average miss during simple dot task and fixation connection length generated during visual search. The dependency plot between average miss and fixation length of visual search is shown in the Figure 9. There is a linear and negative trend between convex hull area and fixation connection length. The dependency plot between convex hull area generated during free-viewing task and fixation connection length during visual search is shown in Figure 10.
The waterfall plot shows the feature relevance towards the prediction of class. The waterfall plot of relevant features is depicted in Figure 11. The base value or E[f(X)] is the expected value that calculates the model output’s average.
Convex Hull Area estimated during the free-viewing task had a negative trend towards prediction result. Fixation connection length generated during the visual search task had a positive trend towards the prediction. Fixation count in visual search, fixation connection length generated during visual search task, and average miss in simple dot task had a positive trend towards the prediction result. Fixation connection length and convex hull area during the free-viewing task had a negative trend from the actual class label.
The comparison between the fusion maps of the studied eyes is shown in Figure 12. The fusion maps of the three subgroups of glaucoma (severe, moderate and mild) and of normal are shown in the figure. The GFM of the severe glaucoma participant, Sub_73, showed more dark spots at the edges of the screen and towards the center. The GFRT map of severe glaucoma participants did not identify the target in most of the stimuli, and the reaction time was longer towards the edge of the screen. This subgroup showed restriction in the field of view with a limited number of fixations in the GCHM map. The GFM and GFRT maps of the moderate subgroup, Sub_42, showed that the miss of the target is less than that of the severe subgroup. The reaction time to find the target is longer towards the screen.
The GCHM map of the moderate subgroup showed that the fixations are concentrated on a specific part of the screen. The participant in the mild subgroup, Sub_97, showed fewer misses in the simple-dot and visual search tasks than the severe and moderate subgroups. On the other hand, the GCHM map depicted that the moderate subgroup showed more fixations than the higher severity subgroups. The normal group (Sub_86) could find almost all the target points in the simple-dot and visual search tasks. The reaction time of the normal group during the visual search task is shorter than that of the other subgroups of the glaucoma group. The GCHM map of the normal group occupied fixations over the entire portion of the screen without any restriction.
A dashboard (D2) was created to reveal model explainability and to generate the Gaze Exploration-index (GE-i). The final relevant features are fv_Convex Hull Area, star_Fix Conn Length, star_Avg FC, Dot_Avg Miss and fv_Fix Conn Length. fv_Convex Hull Area showed high predictive power towards the class label (Glaucoma or Normal). The equation is derived based on the weights obtained from the regression.
The Gaze Exploration-Index (GE-i) is a single parameter based on the top five relevant features. The GE-i equation is generated as in (2); its value is significantly different between glaucoma and normal.
GE-i = w1 * fv_Convex Hull Area + w2 * star_Fix Conn Length + w3 * star_Avg FC + w4 * Dot_Avg Miss + w5 * fv_Fix Conn Length + b0 …………… (2)
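By way of a non-limiting illustration, the computation of GE-i from equation (2) may be sketched as below; the weights w1 to w5 and the bias b0 come from the regression described above and are represented only as placeholders, since their numeric values are not recited herein.

def gaze_exploration_index(features: dict, weights: dict, b0: float) -> float:
    # GE-i = w1*fv_Convex Hull Area + w2*star_Fix Conn Length + w3*star_Avg FC
    #        + w4*Dot_Avg Miss + w5*fv_Fix Conn Length + b0
    names = ["fv_Convex Hull Area", "star_Fix Conn Length",
             "star_Avg FC", "Dot_Avg Miss", "fv_Fix Conn Length"]
    return sum(weights[n] * features[n] for n in names) + b0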
The mean and standard deviation of GE-i value of glaucoma and normal is shown in Table 10. There is a significant difference between glaucoma and normal in GE-i value. The box plot showed the distribution of GE-i of glaucoma and normal in figure 13.
Table 10: Mean and standard deviation (in parenthesis) of GE-i value
Measure Normal Glaucoma
GE-i value -0.92(1.04) 0.50(0.82)
The Gaze Exploration-index (GE-i) Explainable Detection Model comprised data acquisition, estimation of performance and eye gaze parameters, explainable detection model, and generation of screening index. The visual exploration tasks
were displayed on the screen to understand the exploratory eye movement patterns to compensate for the visual field loss. The feature extraction process included estimating basic eye gaze parameters using open source software or closed source software and estimating derived parameters using customized software in the proposed model. The explainable detection model determined the relevant features for predicting glaucoma and generated a screening index on gaze exploration.
During the visual exploration tasks, glaucoma participants showed a slower response in the search tasks because of longer fixation duration. The simple dot task is a short trial-time task, and hence the difference in performance between glaucoma and normal is enormous. Nevertheless, glaucoma subjects performed the task-oriented activity better than the free-viewing task. The decrease in performance is due to longer fixation duration, as a result of which they did not respond within the trial time. However, in the free-viewing task the restriction in eye movement behavior was explicitly seen in a lower fixation count and shorter fixation connection length.
Analysis of the EXGP parameters also reveals compensatory eye gaze patterns among the different glaucoma subgroups. Statistical measures showed that age reduced the ability to ignore distractors and affected the performance of glaucoma participants during the simple-dot task and visual search task. Young glaucoma participants improved their search performance by coping with the difficulty through an increased frequency and duration of fixations, despite their limitation in the field of view. Mild glaucoma or early-stage glaucoma patients showed many saccades to compensate for the visual field loss. These exploratory gaze patterns are involved in both the free-viewing and visual search tasks.
Elderly glaucoma participants show restricted eye movement behavior compared to middle-aged and young glaucoma participants. Subgroup analysis of EXGP features highlights that visual exploration is worsened due to the impact of age rather than severity. Restriction or convex hull area is positively correlated with the visual field index or the indicator of severity grade. The pertinent features of each task were visualized onto fusion maps. Gaze Fusion Map (GFM) highlighted the monocular ability of search performance. The binocular performance of all glaucoma subgroups was improved due to compensatory eye movement patterns, which can be seen in the Gaze Fusion Reaction Time (GFRT) map. The visual processing was reduced during the free-viewing task which is depicted in Gaze Convex Hull Map (GCHM).
All EXGP features are initially fed to the deep learning model, and the performance is tuned based on the input of relevant features. The relevant features to discriminate between glaucoma and normal are present in different visual exploration tasks. In simple image-viewing tasks, glaucoma patients occupied their fixations in a limited field of view, and during task-oriented activity, they utilized compensatory eye movements in the form of a greater number of fixations and increased fixation connection length. All these features were dependent on each other and contributed to the detection of glaucoma.
Table 10 and Figure 14 (a) show the contribution table and plot of the top relevant EXGP features of a normal participant, Sub_44 respectively. The normal participant showed a large number of fixations in performing visual search tasks and free-viewing tasks and the average miss is less in the simple dot task. FC in the free-viewing task showed a positive impact on the model.
Table 10: Contribution table of relevant features towards the prediction: true case: normal prediction
Table 11 and Figure 14 (b) show the contribution table and plot of the top relevant EXGP features of a normal participant, Sub_64, towards the final prediction. The subject belongs to the middle-age normal group. The convex hull area parameter during the free-viewing task is less than the mean value of the glaucoma group. The fixation connection length during the visual search and free-viewing tasks is shorter than that of glaucoma participants. Hence the GE-i Explainable Detection Model misclassified participant Sub_64 as a glaucoma participant.
Table 11: Contribution table of relevant features towards the prediction: false prediction as glaucoma
Table 12 and Figure 14 (c) show the contribution table and plot of the top relevant EXGP features of a glaucoma participant, Sub_45, towards the final prediction. The severity of participant Sub_45 was severe in one eye and normal in the other eye, and the participant belonged to the young age group. The explainable detection model misclassified the glaucoma participant as normal because the convex hull area is larger than the mean value of normal, and the fixation connection lengths in both the visual search and free-viewing tasks are longer, similar to normal. Since only one eye is affected with severe glaucoma and the participant belonged to the young age group, the participant used compensatory eye movement patterns and showed a higher fixation count to find the target. The convex hull area also showed that they utilized a large portion of the screen to explore the images.
Table 12: Contribution table of relevant features towards the prediction: false prediction as normal
Table 13 and Figure 14 (d) show the contribution table and plot of the top relevant EXGP features of a glaucoma participant, Sub_75 towards the final prediction. The severity of Sub_75 participant was severe on both eyes and belonged to the middle age group. The value of relevant features shown by Sub_75 are less than the mean value computed for glaucoma, and hence the features positively impact the model and are correctly predicted as glaucoma.
Table 13: Contribution table of relevant features towards the prediction: true case: glaucoma prediction
Young age group participants with no glaucoma condition in one of the eyes show compensatory eye movements with more fixation counts in their defective visual field area.
In some prior research works, a Glaucoma Risk Index (GRI) was formulated based on clinical measures calculated on structural fundus images. The system (S) of the present invention, the Gaze Exploration-index (GE-i) Explainable Detection Model, differs from the previous research works in the aspect of eye movement measures captured while performing visual exploration tasks. The system (S) of the present invention focuses on how the glaucoma group utilizes the field of view in performing day-to-day tasks. GE-i is formulated based on the weights applied to the relevant features. The relevant features are selected based on SHapley Additive exPlanations (SHAP) and the weights are generated using regression. The GE-i screening index discriminates glaucoma from normal based on visual exploration of day-to-day tasks.
The method for automatic screening, prediction and detection of glaucoma wherein said method comprises of steps as mentioned below:
Acquiring eye gaze data of subjects by eye tracker system (V2) while presenting to and engaging said subjects with a plurality of visual exploration tasks (V31,…V3n) on display screen (V1) of visual exploration task module (V).
Estimating monocular performance of subjects by evaluating visual exploration task 1 (V31) and subsequently estimating binocular performance by evaluating other visual exploration tasks such as but not limited to 2 and 3 (V32, V33, …, V3n) to assess how the subjects' eyes compensate for their visual field loss.
Estimating a plurality of basic eye gaze parameters (E1P) from said acquired eye gaze data from eye tracker system (V2) utilizing open source software or closed source software of the submodule to assess basic eye gaze parameters (E1) of extensive gaze and performance module (E).
Estimating a plurality of eye gaze derived parameters (E2P) from said acquired eye gaze data from eye tracker system (V2) and said basic eye gaze parameters (E1P) obtained in previous step, utilizing customized software of submodule to assess eye gaze derived parameters (E2).
Estimating a plurality of performance parameters (E3P1, E3P2,…, E3Pn) by task performance measures submodule (E3) utilizing said basic eye gaze parameters (E1P) and said eye gaze derived parameters (E2P).
Providing output for extensive gaze and performance module (E) in the form of feature set (E4) comprising of said eye gaze derived parameters (E2P) and said performance parameters (E3P) obtained in the previous steps.
Analyzing statistically said feature set (E4) from extensive gaze and performance module (E) to obtain for glaucoma identified subjects forming glaucoma group an analysis based on severity (F11) and an analysis based on age (F12).
Identifying exploratory eye gaze patterns in different subgroups based on severity and age.
Feeding feature set (E4) to pretrained deep neural network (D1) of explainable detection model module to create detection model (DM).
Predicting by deep neural network (D1) any unseen data of subjects engaged in visual exploration tasks (V3) as due to Glaucoma, or as normal in the absence of Glaucoma.
Performing and presenting explainability (D21) of said detection model in a dashboard (D2) in the form of a plurality of plots (D211, D212, …, D21n) such as but not limited to waterfall plots, contribution plots etc.
Providing visualization for said visual exploration tasks such as 1, 2, 3 (V31, V32, V33, …) in the form of fusion maps by high-level abstraction onto said screen based images of visual exploration tasks (V3) based on said eye gaze parameters (E1P, E2P) and said performance parameters (E3P).
Generating and providing a screening index called gaze exploration-index, GE-i (D22), by detection model (DM).
The method of acquiring data for cross validation of detection model (DM) for pretraining of deep neural network (D1) comprises the steps summarized below:
Conducting standard tests such as but not limited to clinical evaluation, visual field test, imaging techniques for glaucoma screening of subjects and the same number of age-related controls with an age group of 30-70 years without constraint on gender.
Producing a visual field report of subject participants, after the visual field test, by the Humphrey Field Analyzer (HFA) with the 24-2 program, which forms the sample dataset for pretraining of the DNN (D1), wherein the pretraining is conducted by
Setting a certain percentage of the sample dataset, such as 67%, as the training dataset and the remaining as the testing dataset, utilizing the field report of the dataset.
Fitting the training dataset by the Sequential Deep Neural Network (DNN) model.
Recording the accuracy score of the testing dataset after every iteration of feature relevance.
Identifying the final list of at least five relevant features.
Predicting the unseen samples and improving the accuracy score of the model.
Identifying subjects as subgroups of glaucoma and normal based on above screening.
Categorizing subjects with glaucoma as mild, moderate, and severe based on the Visual Field Index (VFI).
Categorizing said subgroups of glaucoma and normal based on age as young, middle-age and elder.
Storing said data of glaucoma and normal subjects for clinical validation submodule (F2) of EXGP feature analysis module (F) of system(S) for automatic screening, prediction and detection of glaucoma.
Performing clinical validation of acquired eye gaze data by system(S) for automatic screening, prediction and detection of glaucoma based on said visual field report.
EXAMPLES
The present invention shall now be explained with accompanying examples. These examples are non-limiting in nature and are provided only by way of representation. While certain language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working alterations may be made to the method in order to implement the inventive concept as taught herein. The figures and the preceding description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of steps of methods or processes of data flow described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
In an exemplary embodiment, the various hardware, devices and software which together form the system (S) along with the working of the invention and the method thereof are illustrated below.
Hardware design
Eye tracker system (V2) such as but not limited to Eye Tribe 60 Hz with accuracy 0.5° and spatial resolution 0.1°.
Monitor or display screen such as laptop or desktop screens
Webcam enabled computer system such as a laptop, desktop
Processor such as Intel i7 processor or higher.
Printed circuit board (PCB) - Touch control board such as DSP PCB 7483
Power supply such as HP 65W AC
Computer system adapter charger - 4.5 mm
User interface/Communication interface - RJ-45
Cloud interface to interact with server
Software design
Open source software such as OGAMA 5.0, any python based eye gaze software, such as Gazepoint Cloud, WebGazer, and Gazerecorder
Customized software that can be a proprietary software of the present invention
Explainable AI tools such as SHAP, Lime, Deep Explainer, feature importance techniques.
The table 14 below displays list of hardware and software devices utilized in the present invention.
Table 14: List of hardware and software devices
Ser No. Device/tool List of options Specific name
1 Eye tracker system (V2) Eye Tribe Tracker Eye Tribe 60 Hz with accuracy 0.5° and spatial resolution 0.1°.
2 Monitor or display screen Laptop, desktop screens Laptop
3 Webcam enabled computer system Laptop, desktop, Tablets Laptop
4 Processor Intel i7 processor or higher same
5 Printed circuit board (PCB) Touch control board such as DSP PCB 7483
6 Power supply Regular power supply HP 65W AC
7 Computer system adapter charger Regular specifications 4.5 mm
8 User interface/Communication interface Any user interface RJ-45
9 Cloud interface to interact with server Any cloud interface -
10 Open source software or closed source software any Python based eye gaze software , OGAMA 5.0 , Gazepoint Cloud, WebGazer, and Gazerecorder OGAMA 5.0
11 Customized software Proprietary eye gazing software Proprietary eye gazing software
12 Explainable AI tools SHAP, Lime , Deep Explainer, feature importance techniques SHAP
CLAIMS:
We claim:
1. A system (S) for automatic screening, prediction and detection of glaucoma in subjects wherein said system (S) comprises of
- at least one visual exploration task module (V01, V02, …, Vn), said visual exploration task module (V) comprising of
• at least one display screen submodule (V101, V102, …, V1n),
• at least one eye tracker system submodule (V201, V202, …, V2n) and
• at least one visual exploration tasks submodule (V301, V302, …, V3n),
- at least one extensive gaze and performance module (E01, E02, …, En), said extensive gaze and performance module (E) comprising of
• at least one submodule to assess basic eye gaze parameters (E101, E102, …, E1n),
• at least one submodule to assess eye gaze derived parameters (E201, E202, …, E2n),
• at least one task performance measures submodule (E301, E302, …, E3n) and
• at least one feature set output submodule (E401, E402, …, E4n),
- at least one feature analysis module (F01, F02, …, Fn), said feature analysis module (F) comprising of
• at least one submodule for analysis of features (F101, F102, …, F1n),
• at least one submodule for clinical validation (F201, F202, …, F2n),
• at least one submodule for assessment of progression (F301,F302,…, F30n) of Glaucoma in subjects having Glaucoma and
• at least one submodule for estimating compensation for visual field loss (F401,F402, …, F40n),
- at least one explainable detection model module (D01, D02, …, Dn), said explainable detection model module (D) comprising of
• at least one gaze exploration detection model submodule that functions as detection model (DM),
• at least one computational algorithm (DA),
• at least one explainable AI tool submodule (DE),
• at least one deep neural network submodule (D101, D102, …, D1n) and
• at least one dashboard submodule (D201, D202, …, D2n)
wherein
- said visual exploration task module (V) is configured for presenting a plurality of visual exploration tasks (V31, V32, V33, …, V3n) by said visual exploration task module (V3) on said display screen of said display screen submodule (V1) to subjects, for engaging them to obtain eye tracking data from gaze samples of said subjects, through said eye tracker system submodule (V2),
- said extensive gaze and performance module (E) is configured for
• evaluating a plurality of basic eye gaze parameters (E1P1, E1P2,…, E1Pn) from said eye tracking data obtained from the visual exploration task module (V) utilizing at least one open source software or a closed source software that is part of said submodule to assess basic eye gaze parameters (E1),
• obtaining a plurality of eye gaze derived parameters (E2P1, E2P2, …, E2Pn) from said basic eye gaze parameters (E1P) from said eye tracking data obtained from the visual exploration task module (V) utilizing at least one customized software that is part of said submodule to assess eye gaze derived parameters (E2),
• estimating a plurality of performance parameters (E3P1,E3P2,…, E3Pn) by said task performance measures submodule (E3) utilizing said basic eye gaze parameters (E1P) and said eye gaze derived parameters (E2P) and
• providing said feature set (E4) comprising of said eye gaze derived parameters (E2P) and said performance parameters (E3P), as the output for said extensive gaze and performance module (E)
- said feature analysis module (F) is configured for
• analyzing statistically said feature set (E4) from extensive gaze and performance module (E) to obtain for Glaucoma identified subjects forming Glaucoma group, an analysis based on severity (F11) and on age (F12),
• validating by comparing said analysis with clinical measures at clinical validation submodule (F2) during training phase of said system (S) and
• assessing the progression of Glaucoma conditions at assessment of progression of Glaucoma submodule (F3) in Glaucoma subjects by comparing their Glaucoma conditions with their previous conditions
- said explainable detection model module (D) is configured for
• predicting if said subject is in Glaucoma group or in Normal group by said deep neural network (D1),
• said dashboard (D2) is interactive that can display explainability (D21) of detection model (DM) in the form of plots,
• said dashboard is designed to generate screening index called Gaze exploration index (D22) and
• providing visualization (D23) of eye gaze parameters (E1P) and derived parameters (E2P) as per the respective visual exploration tasks (V31, V32,…, V3n) of said visual exploration task module (V) in the form of corresponding fusion maps (D231, D232, ….D23n)
to provide an automatic computer aided detection system which is cost effective, portable, reliable and facilitates in early detection of Glaucoma.
2. The system (S) as claimed in claim 1, wherein said plurality of visual exploration tasks (V3) are a plurality of screen based tasks comprising of visual exploration task 1 (V31) preferably a simple dot task, visual exploration task 2 (V32) preferably a visual search task, visual exploration task 3 (V33) preferably a free-viewing task.
3. The system (S) as claimed in claim 2, wherein said screen based tasks depict and simulate specific activities performed by subjects in daily life selected from group of but not limited to searching for an object, watching television or other screens, viewing photographs or pictures.
4. The system (S) as claimed in claim 2, wherein said screen based tasks are based on images that include scenes, containing a target with distractors, for engaging said subjects to explore all parts of said display screen, preferably based on instructions either displayed on the display screen (V1) or given by the health care personnel.
5. The system (S) as claimed in claim 2, wherein the visual exploration task 1 (V31) comprises of said simple dot task incorporates a stimulus to include a dot preferably of white color of size ranging from 15 to 20 pixels preferably 12 pixels.
6. The system (S) as claimed in claim 2, wherein said visual exploration task 2 (V32) is a visual search task engaging said subjects to a plurality of images that are displayed one after other on said display screen (V1) each for short periods of time and a specified target is placed in various positions on each of said images.
7. The system (S) as claimed in claim 6, wherein method of engaging said subjects in said visual exploration task 2 (V32) comprises the steps of
- displaying one of said plurality of images for short period of time ranging from 5 seconds to 30 seconds preferably 20 seconds,
- prompting of said subjects to search for said specified target that is placed in various positions of said displayed image,
- allowing subject to search for said specified target in said displayed image to provide his response either by clicking mouse button on said target or by indicating to said health care personnel,
- displaying one central dot for a short period of time in the range of 0.5 seconds to 2 seconds preferably for 1 second,
- repeating the above four steps to complete search in each of said plurality of said images of said visual exploration task 2 (V32) and
- collecting the responses and storing as data frame to be fed to extensive gaze and performance module (EXGP) (E), wherein said responses of the subjects are collected as gaze samples by the eye tracker system (V2) in the following steps
o collecting responses as x, y positions with respect to time that are called raw eye gaze data,
o processing said raw eye gaze data to find events such as fixations , saccades and
o processing said events to find derived eye gaze parameters.
8. The system (S) as claimed in claim 2, wherein said visual exploration task 3 (V33) is a free-viewing task comprising of a plurality of images that are real life scenes engaging said subjects with various visual features and does not require any response from them after completion of said task.
9. The system (S) as claimed in claim 8, wherein said visual exploration task 3 (V33) comprises of a plurality of images ranging in number from 2 to 40 preferably 20.
10. The system (S) as claimed in claim 9, wherein said images are presented as such or inverted or coupled with applied noise in order to draw attention of subjects to said task.
11. The system (S) as claimed in claim 1, wherein said eye tracker system is selected from group of Eye Tribe trackers such as Eye Tribe 60 Hz.
12. The system (S) as claimed in claim 1, wherein said software of submodule to assess basic eye gaze parameters (E1) is an open source or closed source software and is selected from group of OGAMA 5.0, any python based eye gazing software, Gazepoint Cloud, WebGazer, and Gazerecorder.
13. The system (S) as claimed in claim 1, wherein said eye gaze derived parameters (E2P) comprises of parameters selected from group of fixation count, saccade count, saccade rate, fixation duration, fixation/saccade ratio, saccade velocity, scanpath length, saccadic direction, scanpath, convex hull area.
14. The system (S) as claimed in claim 2, wherein simple dot task of visual exploration task module (V) assesses monocular performance of subjects using average miss estimated for each eye such as left eye miss and right eye miss and visual search task of visual exploration task module (V) estimates the binocular performance of subjects based on fixation over said specified target within said activity time.
15. The system (S) as claimed in claim 1, wherein said Glaucoma group is categorized based on severity into severe, moderate and mild subgroups.
16. The system (S) as claimed in claim 1, wherein said Glaucoma group is categorized based on age into elder, middle-age and young subgroups.
17. The system (S) as claimed in claim 1, wherein said visual exploration task 2 (V32) checks the contribution of binocular vision of said subjects in their visual performance during day-to-day tasks and further assesses how the subjects compensate for their visual field loss.
18. The system (S) as claimed in claim 1, wherein said computational algorithm (DA) in the gaze exploration model module (D) performs exploration in said visual exploration tasks (V3), which is compared with the exploration of the subjects.
19. The system (S) as claimed in claim 1, wherein said deep neural network (D1) of explainable detection model module (D) is a sequential deep neural network.
20. The system (S) as claimed in claim 1, wherein said feature set (E4) is fed as input to said deep neural network (D1) for prediction of class labels as Glaucoma or Normal.
21. The system (S) as claimed in claim 1, wherein said deep neural network model utilizes an activation function selected from the Rectified Linear Unit (ReLU) function, the Sigmoid or Logistic activation function, the SoftMax function, the Leaky ReLU function and the Tanh function.
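A hedged sketch of a sequential deep neural network of the kind recited in claims 19 to 21, written here with the Keras Sequential API; the layer sizes, the width of the feature set and the choice of ReLU hidden layers with a sigmoid output are assumptions and not the claimed model.

```python
# Hedged sketch: a sequential DNN classifying a feature vector as Glaucoma vs Normal.
import numpy as np
import tensorflow as tf

n_features = 12                                   # assumed size of feature set (E4)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of Glaucoma
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# toy training call on random data, shown only to illustrate the interface
X = np.random.rand(100, n_features).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:3], verbose=0))            # predicted probabilities
```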
22. The system (S) as claimed in claim 1, wherein said deep neural network model is fed with input features decided based on an explainable AI tool.
23. The system (S) as claimed in claim 1, wherein said explainability is depicted on the dashboard in the form of plots selected from the group of waterfall plots and contribution plots.
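A hedged sketch of how an explainable AI tool such as SHAP can yield the waterfall and contribution plots of claims 22 and 23; the gradient-boosting stand-in classifier, the synthetic data and the feature names are assumptions, not the claimed detection model or its explainability module.

```python
# Hedged sketch: SHAP waterfall plot of per-feature contributions for one subject.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)        # synthetic Glaucoma/Normal labels
feature_names = ["fixation_count", "saccade_rate", "fixation_duration",
                 "scanpath_length", "convex_hull_area", "left_eye_miss"]

clf = GradientBoostingClassifier().fit(X, y)

# model-agnostic explainer over predicted probabilities, with background samples
explainer = shap.Explainer(lambda d: clf.predict_proba(d)[:, 1], X[:100],
                           feature_names=feature_names)
shap_values = explainer(X[:5])

shap.plots.waterfall(shap_values[0])                   # contribution plot for one subject
```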
24. The system (S) as claimed in claim 1, wherein said visualization of said eye gaze parameters (E1P, E2P) and performance parameters (E3P) is provided in the form of said fusion maps selected from the group of Gaze fusion map (GFM), Gaze fusion reaction time (GFRT) map and Gaze convex hull map (GCHM), generated by fusing information of all screen based images of visual exploration task 1, visual exploration task 2 and visual exploration task 3 respectively for said subjects.
25. The system (S) as claimed in claim 1, wherein said visualization (D3) comprises of
• a GFDM map that is the outcome of the monocular performance of subjects, obtained by fusing ‘hit/miss’ of said screen based images of visual exploration task 1 (V31),
• a GFRT map that is the outcome of the binocular performance of subjects, obtained by fusing two variables such as hit or miss and reaction time overlaid onto screen based images of visual exploration task 2 (V32) and
• a GCHM map that is the outcome of the binocular performance of subjects, obtained by fusing variables such as hit or miss and reaction time overlaid onto screen based images of visual exploration task 3 (V33).
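An illustrative sketch, on placeholder data, of a fusion-map style visualization in the spirit of the GFRT map described above: hit/miss markers, coloured by reaction time, overlaid onto a task image.

```python
# Hedged sketch: overlaying hit/miss and reaction time onto a task image.
import numpy as np
import matplotlib.pyplot as plt

scene = np.random.rand(600, 800)                           # placeholder task image
targets = np.array([[120, 300], [640, 150], [400, 500]])   # target (x, y) positions
hit = np.array([True, False, True])
reaction_time = np.array([1.8, np.nan, 3.2])               # seconds; NaN where missed

plt.imshow(scene, cmap="gray")
sc = plt.scatter(targets[hit, 0], targets[hit, 1], c=reaction_time[hit],
                 cmap="viridis", s=120, marker="o", label="hit (colour = reaction time)")
plt.scatter(targets[~hit, 0], targets[~hit, 1], c="red", s=120, marker="x", label="miss")
plt.colorbar(sc, label="reaction time (s)")
plt.legend()
plt.title("Gaze fusion reaction time (GFRT) map - sketch")
plt.savefig("gfrt_map_sketch.png")
```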
26. A method for automatic screening, prediction and detection of glaucoma, wherein said method comprises of the steps of:
- acquiring eye gaze data of subjects by eye tracker system (V2) while presenting to and engaging said subjects with a plurality of visual exploration tasks (V31,…V3n) on display screen (V1) of visual exploration task module (V),
- estimating monocular performance of subjects by evaluating visual exploration task 1 (V31) and subsequently estimating binocular performance by evaluating other visual exploration tasks such as but not limited to tasks 2 and 3 (V32, V33, …, V3n) to assess how subjects' eyes compensate for their visual field loss,
- estimating a plurality of basic eye gaze parameters (E1P) from said acquired eye gaze data from eye tracker system (V2) utilizing open source or closed source software of submodule to assess basic eye gaze parameters (E1) of extensive gaze and performance module (E),
- estimating a plurality of eye gaze derived parameters (E2P) from said acquired eye gaze data from eye tracker system (V2) and said basic eye gaze parameters (E1P) obtained in previous step, utilizing customized software of customized software submodule (E2),
- estimating a plurality of performance parameters (E3P1, E3P2,…, E3Pn) by task performance measures submodule (E3) utilizing said basic eye gaze parameters (E1P) and said eye gaze derived parameters (E2P),
- providing output of the extensive gaze and performance module (E) in the form of feature set (E4) comprising of said eye gaze derived parameters (E2P) and said performance parameters (E3P) obtained in the previous steps,
- statistically analyzing said feature set (E4) from extensive gaze and performance module (E) to obtain, for glaucoma-identified subjects forming the glaucoma group, an analysis based on severity (F11) and on age (F12),
- feeding feature set (E4) to the pretrained deep neural network (D1) of explainable detection model module (D) to create detection model (DM),
- determining relevant features by detection model (DM) for predicting the class label as Glaucoma or Normal for any unseen data of subjects engaged in visual exploration tasks (V3),
- performing and presenting explainability (D21) of said detection model in a dashboard (D2) in the form of a plurality of plots (D211, D212, …, D21n) such as but not limited to waterfall plots and contribution plots,
- providing visualization for said visual exploration tasks such as 1, 2, 3 (V31, V32, V33, …) in the form of fusion maps by high-level abstraction onto said screen based images of visual exploration tasks (V3) based on said eye gaze parameters (E1P, E2P) and said performance parameters (E3P) and
- generating and providing a screening index called the gaze exploration index (GE-I) (D22) by detection model (DM).
27. The method as claimed in claim 26, wherein said performance parameters (E3P) are evaluated in terms of seen or not seen data of said screen based images and response time of visual exploration tasks (V31, V32, V33).
28. The method as claimed in claim 26, wherein the method of acquiring data for cross validation of detection model (DM) for pretraining of deep neural network (D1) comprises the steps of:
- conducting standard tests such as but not limited to clinical evaluation, visual field tests and imaging techniques for glaucoma screening of subjects and the same number of age-related controls in the age group of 30-70 years, without constraint on gender,
- producing a visual field report of subjects, after the visual field test, by the Humphrey Field Analyzer (HFA) with the 24-2 program, which forms the sample dataset for pretraining of the DNN (D1), wherein pretraining comprises of the steps of:
o setting a percentage of sample dataset as training dataset and the remaining as testing dataset,
o utilizing the field report of the dataset and fitting the training dataset by the Sequential Deep Neural Network (DNN) model,
o recording the accuracy score of the testing dataset after every iteration of feature relevance,
o identifying the final list of at least five relevant features and
o predicting the unseen samples and improving the accuracy score of the model,
- identifying subjects as subgroups of Glaucoma or Normal based on above screening,
- categorizing subjects with glaucoma as mild, moderate or severe based on the Visual Field Index (VFI),
- categorizing said subgroups of Glaucoma or normal based on age as young, middle age and elder,
- storing said data of Glaucoma and Normal subjects for the clinical validation submodule (F2) of the EXGP feature analysis module (F) of the system (S) for automatic screening, prediction and detection of glaucoma and
- performing clinical validation of acquired eye gaze data by the system (S) for automatic screening, prediction and detection of glaucoma.
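A hedged sketch of the pretraining loop outlined in the sub-steps of claim 28: split the dataset, fit a classifier, record the test accuracy after each iteration of feature relevance, and stop once at least five relevant features remain. For brevity the sketch substitutes scikit-learn's MLPClassifier for the claimed sequential DNN and uses permutation importance as the relevance measure; these choices, the 80/20 split and the random placeholder data are all assumptions.

```python
# Hedged sketch: iterative feature-relevance selection with accuracy tracking.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X = np.random.rand(200, 12)                 # placeholder feature matrix
y = np.random.randint(0, 2, 200)            # Glaucoma = 1, Normal = 0 (placeholder)
features = list(range(X.shape[1]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

while len(features) > 5:                    # keep at least five relevant features
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X_tr[:, features], y_tr)
    acc = clf.score(X_te[:, features], y_te)
    print(f"{len(features)} features -> test accuracy {acc:.2f}")

    # drop the least relevant feature according to permutation importance
    imp = permutation_importance(clf, X_te[:, features], y_te,
                                 n_repeats=5, random_state=0).importances_mean
    features.pop(int(np.argmin(imp)))

print("final relevant features:", features)
```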
29. The system (S) as claimed in claim 1, wherein said system (S) assists primary health care facilities in early detection of glaucoma.
Dated this the 3rd day of July 2024
_____________________
Daisy Sharma
IN/PA-3879
of SKS Law Associates
Attorney for the Applicant
To
The Controller of Patents,
The Patent Office, Chennai