Abstract: The present invention discloses a system (100) for facilitating vision assessment and correction. The system (100) comprises a computing device (102), an immersive reality device (114), an enterprise device (116), an application module (118) running on the computing device (102), a trained machine learning model (120), and an immersive reality display (122) of the immersive reality device (114). The application module (118) is configured to be operatively coupled with the immersive reality device (114) and the enterprise device (116). The application module (118) is further configured to automatically adjust periodic visual assistance parameters corresponding to at least one activity played by the user. The application module (118) is further configured to automatically determine a weighted score associated with the periodic visual assistance parameters. The application module (118) is further configured to determine vision assessment and correction data associated with the user.
Description:
FIELD OF THE DISCLOSURE
[0001] This invention generally relates to the field of non-invasive therapies used for visual disorders, and more particularly to an immersive system and method for vision assessment and correction while a user is viewing visual content.
BACKGROUND
[0002] The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.
[0003] According to the Centers for Disease Control and Prevention, more than 3.4 million people aged 40 or older in the United States are blind or visually impaired even with correction (visual acuity of 20/40 or less). Almost 7 percent of children under the age of 18 have been diagnosed with an eye disease or other visual defect, and nearly 3 percent of children under 18 are blind or visually impaired. Vision loss is among the top ten causes of disability in adults over the age of 18 and one of the most common disabling conditions in children. One of the most common of these visual defects is “Amblyopia”, which begins in early childhood and requires immediate correction in order to allow a visual pathway to be formed between the amblyopic eye and the brain. In Amblyopia there is no structural visual deformity, but a problem in central fixation persists which forces the brain to stop processing signals from one eye, ultimately leading to partial or full loss of visual acuity in the weaker eye due to loss of the brain-eye pathway.
[0004] Many systems have been developed recently to correct vision defects, particularly Amblyopia. One such system involves a forced treatment requiring patients to patch the strong eye and view fast-moving objects or television with the weak eye. Another forced treatment involves wearing spectacles in which the lens for the stronger eye is blackened, so that the patient sees through the amblyopic eye. One or more variations of these spectacles have a switching feature that switches off the view of the stronger eye without informing the patient, in order to make the process less obvious to the weaker eye. However, a major drawback associated with the above-mentioned treatments is that forced treatment sometimes does not allow patients to learn the process of seeing the world in 3D using both eyes. Another issue is that patching of the strong eye is highly uncomfortable; children feel ashamed and find it difficult to comply with both the patching and the correcting spectacles. Yet another drawback is that the compliance, or time spent following the doctor's guidelines, is logged manually, and doctors have to chart the treatment based on the word of the patient and the patient's parents. In addition to all these limitations, the treatment works only until a critical age of about nine years, after which the amblyopic eye is never corrected, as the forced treatment is not able to create a pathway between the brain and the amblyopic eye.
[0005] Hence, considering the above-mentioned drawbacks in currently developed systems, there is an urgent need for an automated, dedicated, thoroughly designed, and intelligent system which not only ensures effective assessment and correction of vision defects, particularly those associated with Amblyopia, but also facilitates accurate results related to vision assessment and correction. Such a system should effectively track the progress of vision defect correction in the patient, thereby enhancing the accuracy of the treatment provided to the patient having the vision defect, while at the same time creating a pathway between the brain and the amblyopic eye of the patient, or at least provide a useful alternative.
OBJECTIVES OF THE INVENTION
[0006] It is an objective of the invention to provide an immersive system for vision assessment and correction, whilst a patient is viewing visual content.
[0007] It is an objective of the invention to provide the immersive system which ensures compliance with the doctor's treatment by tracking the parameters related to vision defects in the weak eye of the user.
[0008] It is an objective of the invention to provide the immersive system for effective vision assessment and correction, particularly the assessment and correction of vision defects associated with Amblyopia.
[0009] It is an objective of the invention to provide the immersive system which facilitates obtaining accurate results related to vision assessment and correction.
[0010] It is an objective of the invention to provide the immersive system which effectively tracks the progress of vision defect correction in the patient, thereby ensuring enhancement in the accuracy of the treatment provided by the system to the patient having the vision defect.
[0011] It is an objective of the present invention to provide the immersive system which is configured for creating a pathway between the brain and the amblyopic eye of the patient.
[0012] It is an objective of the present invention to provide the immersive system which involves use of interactive, self-adjusting, artificial intelligence (AI) driven, immersive reality based dichoptic visual training that comprises a set of gaming activities to be played in an immersive reality headset.
[0013] It is an objective of the present invention to provide the immersive system which provides a treatment that can be undergone by patients of all ages, and which is as effective for patients past the critical age as for others.
[0014] It is an objective of the present invention to provide the immersive system which is configured to record all datasets related to measurement of the vision defect correction parameters associated with the patient, thereby eliminating verbal bias of the patients and their parents.
[0015] It is an objective of the present invention to provide the immersive system which enables the patient to play interchangeably a set of more than five games, thereby leading to higher treatment compliance and faster recovery.
SUMMARY
[0016] In accordance with some embodiments of the present inventive concepts, an immersive system is claimed, which is configured for performing vision assessment and correction. The immersive system comprises a computing device having a memory and a processor. The immersive system further comprises an immersive reality device, an enterprise device, an application module, and a trained machine learning model. The immersive reality device is configured to be worn by a user and coupled with the computing device. The enterprise device is configured to be operatively coupled with the immersive reality device. The application module running on a screen of the computing device is configured to be operatively coupled with the immersive reality device and the enterprise device. The trained machine learning model is configured to be operatively coupled with the immersive reality device, the enterprise device, and the application module, to perform certain operational steps involved in facilitating vision assessment and correction. The trained machine learning model is configured for displaying, through an immersive reality display, an immersive reality environment. The immersive reality environment comprises a first object having a first property, and the first object is displayed to a user. The trained machine learning model is further configured for receiving, through the processor, a first user input by the immersive reality device to display at least one activity with respect to the first object. The at least one activity is a gaming activity played by the user. The trained machine learning model is further configured for automatically adjusting, through the application module, periodic visual assistance parameters corresponding to the at least one activity played by the user. The trained machine learning model is further configured for automatically determining, through the application module, a weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity. The trained machine learning model is further configured for automatically evaluating, through the application module, whether the weighted score matches a fixed targeted score. The fixed targeted score is a desired score associated with an observation by the user and recorded during playing of the first activity. The trained machine learning model is further configured for determining, through the application module, vision assessment and correction data associated with the user, based on the evaluation with respect to matching between the weighted score and the fixed targeted score.
[0017] In one embodiment, the immersive reality device is configured for displaying, through the immersive reality display, the vision assessment and correction data associated with the user. The periodic visual assistance parameters include, but are not limited to, a size of the first object and a contrast of the first object.
[0018] In another embodiment, the application module is configured for supporting a plurality of quality parameters and for automatically adjusting the plurality of quality parameters, depending on the configuration of the immersive reality device. The plurality of quality parameters includes, but is not restricted to, a contrast associated with the first activity, a level associated with the first activity, a size of the objects associated with the first activity, or the like.
[0019] In accordance with some embodiments of the present inventive concepts, a method is claimed for performing the vision assessment and correction. The method comprises initially displaying, through the immersive reality display, the immersive reality environment. Further, the method comprises receiving, through the processor, the first user input by the immersive reality device to display the at least one activity with respect to the first object. Further, the method comprises automatically adjusting, through the application module, the periodic visual assistance parameters corresponding to the at least one activity played by the user. Further, the method comprises automatically determining, through the application module, a weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity. Further, the method comprises automatically evaluating, through the application module, whether the weighted score matches the fixed targeted score. Further, the method comprises lastly determining, through the application module, the vision assessment and correction data associated with the user, based on the evaluation with respect to the matching between the weighted score and the fixed targeted score.
[0020] In one embodiment, the method further comprises displaying, through the immersive reality display, the vision assessment and correction data associated with the user. Further, the method comprises automatically adjusting, through the application module, the plurality of quality parameters depending on the configuration of the immersive reality device.
[0021] In another embodiment, the method further comprises enabling the processor to receive a second user input by the immersive reality device to display the at least one activity with respect to the second object, in the event that the weighted score does not match the fixed targeted score.
[0022] In yet another embodiment, the step of receiving, through the processor, the first user input by the immersive reality device to display the at least one activity with respect to the first object during the vision assessment and correction comprises tracking, through at least one sensor, movements of the head of the user during the vision assessment and correction.
[0023] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings illustrate various embodiments of systems, methods, and embodiments of various other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. It may be that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Furthermore, elements may not be drawn to scale. Non-limiting and non-exhaustive descriptions are provided with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles.
[0025] FIG. 1 is a block diagram illustrating a system for performing vision assessment and correction, according to an embodiment as disclosed herein;
[0026] FIG. 2 is an example scenario illustrating a system using the computing device for displaying an immersive reality environment and a first object, according to one embodiment as disclosed herein;
[0027] FIG. 3 is an example scenario illustrating a system using the computing device for displaying at least one activity with respect to the first object to be played by the user, according to another embodiment as disclosed herein;
[0028] FIG. 4 is an example scenario illustrating a system in which operation is initiated between the computing device and an immersive reality device to display a desired score or a fixed targeted score, according to one embodiment as disclosed herein;
[0029] FIG. 5 is an example scenario illustrating a system in which the operation is initiated between the computing device and the immersive reality device to compare weighted score with the fixed targeted score, according to another embodiment as disclosed herein;
[0030] FIG. 6 is an example scenario illustrating a system in which the operation is initiated between the computing device and the immersive reality device to display vision assessment and correction data corresponding to the at least one activity played by the user, according to an embodiment as disclosed herein;
[0031] FIG. 7 is an example scenario illustrating a system using the computing device for displaying another activity with respect to a second object to be played by the user, and the fixed targeted score corresponding to the another activity, according to one embodiment as disclosed herein;
[0032] FIG. 8 is an example scenario illustrating a system in which the operation is initiated between the computing device and the immersive reality device to compare weighted score corresponding to the another activity with the fixed targeted score corresponding to the another activity, according to another embodiment as disclosed herein;
[0033] FIG. 9 is an example scenario illustrating a system in which the operation is initiated between the computing device and the immersive reality device to display the vision assessment and correction data corresponding to the another activity played by the user, according to yet another embodiment as disclosed herein;
[0034] FIG. 10 is a flow diagram illustrating various operational steps for performing the vision assessment and correction using the computing device, according to an embodiment as disclosed herein; and
[0035] FIG. 11 is a flow diagram illustrating various operational steps for performing the vision assessment and correction upon comparison between the weighted score and the fixed targeted score, according to the embodiments disclosed herein.
DETAILED DESCRIPTION
[0036] Some embodiments of the disclosure, illustrating all its features, will now be discussed in detail. The words “comprising,” “having,” “containing,” and “including,” and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described. Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which example embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
[0037] While the present invention is described herein by way of example using embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and that the drawings are not intended to represent the scale of the various components. It should be understood that the detailed description thereto is not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters formed part of the prior art base or were common general knowledge in the field relevant to the present invention.
[0038] The present invention is described hereinafter by various embodiments. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, the embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only, and are not intended to limit the scope of the claims. In addition, a number of system architectures are identified as suitable for various facets of the implementations. These system architectures are to be treated as exemplary, and are not intended to limit the scope of the invention.
[0039] The present invention discloses a system configured for performing vision assessment and correction which operates on a principle of non-forced treatment, meaning that the stronger eye of the patient is kept open during the treatment. The disclosed system involves self-adjustment of the contrast and size of an object or objects, based on the performance of the patient in different gaming activities. The different gaming activities played by the user are object shooting activities and object matching activities. The disclosed system only requires a computing device, for instance a user device such as a mobile phone or any handheld device, and does not require manual intervention. The disclosed system integrates principles of immersive reality with machine learning techniques to provide accurate results related to vision assessment and correction, as well as to effectively track the progress of vision defect correction in the patient. The disclosed system is configured to simultaneously ensure both enhancement of the accuracy of the treatment provided to the patient having the vision defect and creation of a pathway between the brain and the amblyopic eye of the patient.
[0040] In the proposed system, the utilization of the principles of artificial intelligence for facilitating enhanced treatment of patients having amblyopia involves a set of different gaming activities to be played by the patient using an immersive reality headset.
[0041] Unlike conventional systems and methods, the proposed system and method utilize the principle of displaying multiple levels of different gaming activities to be played by the patient(s), with score calculation based on specially crafted formulae, thereby making it easy and appropriate for the doctor to effectively analyse the vision assessment and correction data of one or multiple patients remotely, thus eliminating verbal bias of the patients and their parents.
[0042] Accordingly, embodiments herein achieve a method for facilitating vision assessment and correction. The method includes initially displaying, through the immersive reality display, the immersive reality environment. Further, the method includes receiving, through a processor, a first user input by an immersive reality device to display at least one activity with respect to a first object. Further, the method includes automatically adjusting, through an application module, periodic visual assistance parameters corresponding to the at least one activity played by the user. Further, the method includes automatically determining, through the application module, a weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity. Further, the method includes automatically evaluating, through the application module, if the weighted score matches with a fixed targeted score. Further, the method includes lastly determining, through the application module, the vision assessment and correction data associated with the user, based on the evaluation of matching between the weighted score and the fixed targeted score.
[0043] In the proposed system and method, the problem of boredom indicated by many researchers is eliminated by allowing a set of more than five gaming activities to be played by the patients interchangeably, through a common setting panel. The variety in gaming activities leads to even higher treatment compliance, thus leading to faster recovery. The background of the at least one activity is made visible to both eyes of the patient. One part of the at least one activity is made visible to the stronger eye, while another part, connected to the part visible to the stronger eye, is made visible only to the weaker eye. To play the game successfully, patients are required to coordinate both eyes. In order to help the patients achieve this coordination, the periodic visual assistance parameters are provided to the patient in the form of contrast and size changes to the objects or scene visible to the amblyopic eye.
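By way of illustration only, the dichoptic split described above can be sketched as follows. The `EyeLayer` structure, the layer names, and the starting contrast and scale values are assumptions made for this sketch, not details taken from the disclosure; the sketch merely shows how a background may be routed to both eyes while connected parts of an activity are routed to the strong eye and the weak eye separately.

```python
from dataclasses import dataclass

@dataclass
class EyeLayer:
    visible_to: str   # "both", "strong", or "weak"
    contrast: float   # 0.0 (invisible) through 1.0 (full contrast)
    scale: float      # size multiplier applied to the rendered part

def build_dichoptic_scene(weak_eye: str) -> dict:
    """Route the parts of one activity to the appropriate eyes.

    The background is shown to both eyes, one part of the activity only
    to the strong eye, and the connected part only to the weak
    (amblyopic) eye, so the activity can only be completed by
    coordinating both eyes.
    """
    strong_eye = "left" if weak_eye == "right" else "right"
    return {
        "background": EyeLayer(visible_to="both", contrast=1.0, scale=1.0),
        "strong_eye_part": EyeLayer(visible_to=strong_eye, contrast=1.0, scale=1.0),
        # The weak-eye part begins with generous assistance; the
        # application module later re-tunes its contrast and size.
        "weak_eye_part": EyeLayer(visible_to=weak_eye, contrast=0.9, scale=1.2),
    }
```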
[0044] Referring now to the drawings, and more particularly to FIGS. 1 through 11, there are shown preferred embodiments.
[0045] FIG. 1 is a block diagram illustrating a system (100) for performing vision assessment and correction, according to an embodiment as disclosed herein. The system (100) comprises a computing device (102), an immersive reality device (114), an enterprise device (116), an application module (118) running on the computing device (102), a trained machine learning model (120), and an immersive reality display (122) of the immersive reality device (114). The computing device (102) may be, for example, but not limited to, a cellular phone, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a laptop, an Internet of Things (IoT) device, a smart watch, a virtual reality device, a multiple camera system, or the like. The computing device (102) comprises various hardware components serving different functions during the vision assessment and correction. The various hardware components of the computing device (102) are a memory (104), a processor (106), a communicator (108), a display interface (110), and an image sensor (112). The memory (104) is configured to store data related to the periodic visual assistance parameters. The periodic visual assistance parameters include, but are not limited to, a size of the object and a contrast of the object. The memory (104) is further configured to store data related to the weighted score of each of the periodic visual assistance parameters. The memory (104) is further configured to store the fixed targeted score associated with the at least one activity played by the patient. The memory (104) is further configured to store the vision assessment and correction data associated with the patient or the user. The terms “patient” and “user” are used interchangeably and are deemed to have the same meaning, without departing from the scope of the invention or intending to limit the scope of the invention. The memory (104) is further configured to store instructions executed by the processor (106).
[0046] The memory (104) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM). In addition, the memory (104) may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the memory (104) is non-movable. In some examples, the memory (104) can be configured to store large amounts of information. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
[0047] The processor (106) is configured to be operatively coupled with the memory (104), the communicator (108), the display interface (110), and the image sensor (112). The processor (106) is further configured to execute instructions stored in the memory (104) and to perform various processes. The processor (106) is further configured to receive a first user input and send the first user input to the immersive reality device (114), to display the at least one activity with respect to a first object. The at least one activity is a gaming activity played by the user.
[0048] The communicator (108) is configured for communicating internally between the internal hardware components and with external devices via one or more networks. The communicator (108) is further configured for communicating with the computing device (102) by sending the data related to the periodic visual assistance parameters, the weighted score of each of the periodic visual assistance parameters, the fixed targeted score associated with the at least one activity played by the patient, and the vision assessment and correction data associated with the patient.
[0049] The display interface (110) is configured for displaying, through a screen of the computing device (102), the data related to the periodic visual assistance parameters, the weighted score of each of the periodic visual assistance parameters, the fixed targeted score associated with the at least one activity played by the patient, and the vision assessment and correction data associated with the patient.
[0050] The image sensor (112) is configured to capture images of the objects displayed to the user. The image sensor (112) is further configured to capture images of the immersive reality environment. The image sensor (112) is further configured to capture images of the at least one activity.
[0051] The immersive reality device (114) is, for example purposes, illustrated as Virtual Reality (VR) glasses, but is not restricted to the same. The immersive reality device (114) may be, but is not restricted to, Augmented Reality (AR) glasses, or the VR glasses, or a combination thereof (also known as Immersive Reality Glasses). The immersive reality device (114) comprises the immersive reality display (122) connected with the immersive reality device (114) at a glass portion of the immersive reality device (114). The immersive reality device (114) is configured to be worn by the user or the patient and directly coupled with the computing device (102). The immersive reality device (114) is further configured to receive the first user input through the processor (106), for displaying the at least one activity with respect to a first object. The at least one activity is a gaming activity played by the user. The immersive reality device (114) is further configured to display, through the immersive reality display (122), the vision assessment and correction data associated with the user.
[0052] In an embodiment, the immersive reality device (114) is, but not limited to, a VR headset, VR goggles, or any other visual simulating device.
[0053] In one embodiment, the immersive reality display (122) may be either a display of the computing device (102) or a display which is attached with the immersive reality device (114).
[0054] The enterprise device (116) is a database module or a storage server device which stores data records related to the objects and the different gaming activities. The enterprise device (116) is configured to be operatively coupled with the immersive reality device (114) to perform certain operations related to storing of the data records. The enterprise device (116) is further configured to store data records related to the first object, the at least one activity, the periodic visual assistance parameters, the weighted score of each of the periodic visual assistance parameters, the fixed targeted score associated with the at least one activity played by the patient, and the vision assessment and correction data associated with the patient.
[0055] The application module (118) is a software application which runs on the screen of the computing device (102). The application module (118) is configured to be operatively coupled with the immersive reality device (114) and the enterprise device (116) and trained through the trained machine learning model (120), to perform certain operational steps during the vision assessment and correction. The application module (118) is further configured to automatically adjust the periodic visual assistance parameters corresponding to the at least one activity played by the user. The application module (118) is further configured to automatically determine the weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity. The periodic visual assistance parameters include, but are not limited to, a size of the object and a contrast of the object. The application module (118) is further configured to automatically evaluate whether the weighted score matches with the fixed targeted score. The fixed targeted score is a constant score that remains unchanged throughout a given gaming activity played by the user. The fixed targeted score mainly refers to “a desired score” associated with the observation by the user, and is recorded during playing of any gaming activity by the user. The application module (118) is further configured to determine the vision assessment and correction data associated with the user, based on the evaluation of matching between the weighted score and the fixed targeted score. The vision assessment and correction data associated with the user includes, but is not restricted to, vision data associated with the user, the required improved vision level associated with the user, and the desired weighted score associated with the user. If, during the step of evaluation, it is detected that the weighted score does not match with the fixed targeted score, the application module (118) is configured to display, through the immersive reality display (122), the immersive reality environment comprising a second object having a second property, the second object being displayed to the user.
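As a minimal sketch of the evaluation branch just described, the following assumes a simple numeric comparison; the disclosure speaks only of the weighted score "matching" the fixed targeted score, so the greater-or-equal test and the function and field names here are illustrative assumptions rather than the application module's actual logic.

```python
def evaluate_weighted_score(weighted_score: float, targeted_score: float) -> dict:
    """Branch on whether the weighted score matches the fixed targeted score.

    A match yields the vision assessment and correction data; a mismatch
    signals that a second object (or re-tuned visual assistance
    parameters for the first object) should be displayed next.
    """
    if weighted_score >= targeted_score:  # treated as a "match" (FIG. 9: 51 vs 51)
        return {
            "action": "report_correction_data",
            "weighted_score": weighted_score,
        }
    return {                              # mismatch (FIG. 5: 35.2 vs 83)
        "action": "display_second_object_or_retune",
        "shortfall": targeted_score - weighted_score,
    }
```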
[0056] In one embodiment, if the weighted score does not match with the fixed targeted score, the application module (118) is configured to change the periodic visual assistance parameters associated with the first object, or enable the processor (106) to receive a second user input by the immersive reality device (114), to display at least one activity with respect to the second object. The at least one activity with respect to the second object is a gaming activity with respect to the second object played by the user.
[0057] In another embodiment, the application module (118) is configured to support a plurality of quality parameters and to automatically adjust the plurality of quality parameters, depending on the configuration of the immersive reality device (114). The plurality of quality parameters includes, but is not restricted to, a contrast associated with the at least one activity with respect to the first object or the second object, levels associated with the at least one activity with respect to the first object or the second object, a size of the first object associated with the at least one activity, or the like.
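A sketch of how such quality parameters might be keyed to the headset configuration is given below; the device profile names and the concrete values are illustrative assumptions, not values from the disclosure.

```python
def quality_parameters(device_profile: str) -> dict:
    """Pick activity quality parameters from the headset's capabilities.

    Both the profile names and the numbers are assumed for illustration;
    the application module (118) would substitute whatever parameters
    the actual immersive reality device (114) supports.
    """
    profiles = {
        "standalone_headset": {"contrast_steps": 10, "levels": 10, "base_size_cm": 16},
        "phone_in_viewer":    {"contrast_steps": 5,  "levels": 5,  "base_size_cm": 12},
    }
    # Fall back to the most conservative profile for unknown devices.
    return profiles.get(device_profile, profiles["phone_in_viewer"])
```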
[0058] In yet another embodiment, the immersive reality environment displayed to the user through the immersive reality display (122) comprises more than five activities played interchangeably by the user. The more than five activities are associated with the first object and the second object.
[0059] In one exemplary embodiment, the first user input and the second user input are received from at least one sensor selected from the group consisting of a head tracking sensor, a face tracking sensor, a hand tracking sensor, an eye tracking sensor, a body tracking sensor, a voice recognition sensor, a heart rate sensor, a skin capacitance sensor, an electrocardiogram sensor, a brain activity sensor, a geolocation sensor, at least one retinal camera, a balance tracking sensor, a body temperature sensor, a blood pressure monitor, and a respiratory rate monitor. The at least one sensor is configured to track movements of a head of the user during the vision assessment and correction.
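One way the tracked head movement could be reduced to a user input is sketched below; the pose fields, the threshold, and the return values are illustrative assumptions and not part of the disclosure.

```python
from typing import NamedTuple

class HeadPose(NamedTuple):
    yaw: float    # degrees left/right of the displayed object
    pitch: float  # degrees above/below the displayed object

def head_pose_to_input(pose: HeadPose, threshold_deg: float = 5.0) -> str:
    """Interpret a tracked head pose as a coarse selection signal.

    A pose held within the threshold of the object's direction is read
    as the user fixating that object; the processor (106) could then
    forward this as the first (or second) user input.
    """
    if abs(pose.yaw) < threshold_deg and abs(pose.pitch) < threshold_deg:
        return "select_object"
    return "no_input"
```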
[0060] The trained machine learning model (120) is configured to be operatively coupled with the immersive reality device (114), the enterprise device (116), and the application module (118), and to perform certain operational steps for facilitating the vision assessment and correction. The trained machine learning model (120) is configured for displaying, through the immersive reality display (122), the immersive reality environment. Further, the trained machine learning model (120) is configured for receiving, through the processor (106), the first user input by the immersive reality device (114) to display at least one activity with respect to the first object. Further, the trained machine learning model (120) is configured for automatically adjusting, through the application module (118), the periodic visual assistance parameters corresponding to the at least one activity played by the user. Further, the trained machine learning model (120) is configured for automatically determining, through the application module (118), a weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity. Further, the trained machine learning model (120) is configured for automatically evaluating, through the application module (118), if the weighted score matches with the fixed targeted score, wherein the fixed targeted score is a desired score associated with an observation by the user and recorded during playing of the first activity by the user. Further, the trained machine learning model (120) is configured for determining, through the application module (118), the vision assessment and correction data associated with the user, based on the evaluation of matching between the weighted score and the fixed targeted score.
[0061] The computing device (102), the immersive reality device (114), the enterprise device (116), and the trained machine learning model (120) are connected to each other over a communications network (124). The communications network (124) may facilitate a communication link among the components of the system (100). It can be noted that the communications network (124) may be a wired and/or a wireless network. The communications network (124), if wireless, may be implemented using communication techniques such as Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE), Wireless Local Area Network (WLAN), Infrared (IR) communication, Public Switched Telephone Network (PSTN), radio waves, and other communication techniques known in the art.
[0062] Although FIG. 1 depicts an overview of the computing device (102), it is to be understood that other embodiments are not limited thereto. In other embodiments, the computing device (102) may include any number of hardware components. One or more hardware components of the computing device (102) can be combined together in any manner to perform the same or a substantially similar function to facilitate the vision assessment and correction. Further, the labels or names of the hardware components are used only for illustrative purposes and are not intended to limit the scope of the invention in any manner.
[0063] FIG. 2 is an example scenario illustrating a system (200) using the computing device (102) for displaying an immersive reality environment (204) and the first object (206), according to one embodiment as disclosed herein. The system (200) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to display the first object (206). The first object (206) may be displayed through the immersive reality display (122). An application window (208) of the application module (118) is configured to display different gaming activities to a user (202). The user (202) wears the immersive reality device (114) to see the first object (206). The application module (118) is configured to display different levels associated with at least one gaming activity selected from the different gaming activities. For example, the first activity is related to identification of the first object (206) by the user by matching the first object (206) with a fixed symbol. The activity of identifying the first object (206) by way of matching comprises five levels designated “Level-1” to “Level-5”. The levels “Level-1” to “Level-5” are illustrated for example purposes only. However, in accordance with the main aspect of the present invention, there are at least ten levels for each activity, that is, ten levels for the first activity and ten levels for the second activity.
[0064] FIG. 3 is an example scenario illustrating a system (300) using the computing device (102) for displaying the at least one activity with respect to the first object (206) to be played by the user (202), according to another embodiment as disclosed herein. The at least one activity is the gaming activity played by the user (202). Before the user (202) starts playing the at least one activity, the periodic visual assistance parameters corresponding to the at least one activity are adjusted to certain values. The periodic visual assistance parameters include, but are not limited to, the size of the first object (206) and the contrast of the first object (206). Based on the adjustment of the values associated with the periodic visual assistance parameters, the application module (118) determines the weighted score associated with each of the periodic visual assistance parameters. For example, the application window (302) of the application module (118) displays the weighted score associated with the contrast of the first object (206) as 35.2, and the weighted score associated with the size of the first object (206) as 15 cm. The periodic visual assistance parameters, that is, the size of the first object (206) and the contrast of the first object (206), each have at least ten ranges. For example, the sizes of the first object (206) fall in the ranges “S1-S10”, and the contrasts of the first object (206) fall in the ranges “C1-C10”. Based on the weighted scores associated with the size and the contrast of the first object (206), the size and the contrast of the first object (206) are changed to a certain level. For example, based on the weighted score associated with the contrast of the first object (206) being 35.2, and the weighted score associated with the size of the first object (206) being 15 cm, it is detected that the vision of the user is “WEAK”. Hence, based on the vision detected, the size of the first object (206) is decreased to 10 cm, which comes under the range “S2”, and the contrast of the first object (206) is decreased to 29.5, which comes under the range “C1”. However, the reduction of the size of the first object (206) to “S2” and the contrast of the first object (206) to “C1” is only for example purposes and is not restricted to the same. Similarly, if it is detected that the vision of the user is “NORMAL”, the size of the first object (206) is increased to 16 cm, which comes under the range “S3”, and the contrast of the first object (206) is increased to 36.2, which comes under the range “C3”.
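The re-tuning in the FIG. 3 example can be expressed as a small sketch. The concrete figures for the WEAK case (15 cm / 35.2 down to 10 cm / 29.5) and the NORMAL case (up to 16 cm / 36.2) come from the example above; the fixed step sizes used to generalize them are assumptions, not a formula stated in the disclosure.

```python
def retune_assistance(vision: str, size_cm: float, contrast: float) -> tuple[float, float]:
    """Adjust the periodic visual assistance parameters per the FIG. 3 example.

    WEAK vision decreases both size and contrast (15 cm / 35.2 becomes
    10 cm / 29.5, ranges S2 / C1); NORMAL vision increases them
    (16 cm / 36.2, ranges S3 / C3). The deltas below reproduce that
    example and are not prescribed by the disclosure.
    """
    if vision == "WEAK":
        return size_cm - 5.0, contrast - 5.7
    if vision == "NORMAL":
        return size_cm + 1.0, contrast + 1.0
    return size_cm, contrast  # unknown label: leave parameters unchanged
```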
[0065] FIG. 4 is an example scenario illustrating a system (400) in which an operation is initiated between the computing device (102) and the immersive reality device (114) to display the fixed targeted score or a desired score, according to one embodiment as disclosed herein. The system (400) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to display the fixed targeted score through the application window (402). The fixed targeted score is the desired constant score that remains unchanged throughout a given gaming activity played by the user. For example, the desired score associated with the at least one gaming activity corresponding to the first object (206) is 83, which needs to be achieved by the user undergoing the amblyopic eye treatment.
[0066] FIG. 5 is an example scenario illustrating a system (500) in which the operation is initiated between the computing device (102) and the immersive reality device (114) to compare the weighted score with the fixed targeted score, according to another embodiment as disclosed herein. The system (500) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to compare the weighted score with the fixed targeted score. Based on the comparison, a necessary treatment action plan is automatically suggested through the application module (118). The automatic suggestion of the necessary treatment action plan refers to the suggestion regarding adjustment of the periodic visual assistance parameters associated with the at least one activity played by the user. For example, the weighted score of the periodic visual assistance parameters is compared with the desired score, based on which the application module (118) detects that the weighted score (which is 35.2) is less than the desired score (which is 83); this is displayed through an application window (502) of the application module (118). Hence, as a result of this comparison, the contrast and size of the first object (206) may be automatically adjusted through the application module (118), to make the first object (206) visible to the amblyopic eye.
[0067] FIG. 6 is an example scenario illustrating a system (600) in which the operation is initiated between the computing device (102) and the immersive reality device (114) to display the vision assessment and correction data corresponding to the at least one activity played by the user, according to an embodiment as disclosed herein. The system (600) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to display the vision assessment and correction data through an application window (602) of the application module (118). The final weighted score of the user (202) is computed as the average over the number of times the user (202) played the at least one activity corresponding to the first object (206), irrespective of the levels of the at least one activity corresponding to the first object (206). For example, the application module (118) is configured to display, either through the screen of the computing device (102) or through the immersive reality display (122), the vision assessment and correction data, which depicts that the final calculated weighted score is 45.2, obtained as the average over the number of times the user (202) played the at least one activity. The final calculated weighted score of 45.2 implies that the user (202) has the right eye as the amblyopic eye, and that the vision of the user (202) is “BLUR”, which means that the user (202) is not able to see every object clearly while playing the at least one activity corresponding to the first object (206).
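The averaging and labelling described above could be expressed as follows; the BLUR/CLEAR cut at the desired score is inferred from the FIG. 6 and FIG. 9 examples rather than stated as a formula in the disclosure, and the function names are assumptions.

```python
def final_weighted_score(per_play_scores: list[float]) -> float:
    """Average the weighted score over every play of the activity,
    irrespective of level (FIG. 6: an average of 45.2)."""
    return sum(per_play_scores) / len(per_play_scores)

def vision_label(final_score: float, desired_score: float) -> str:
    """Map the averaged score to the labels used in the example scenarios:
    CLEAR when the desired score is reached (FIG. 9: 51 vs 51) and
    BLUR when it falls short (FIG. 6: 45.2 vs a target of 83)."""
    return "CLEAR" if final_score >= desired_score else "BLUR"
```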
[0068] FIG. 7 is an example scenario illustrating a system (700) using the computing device (102) for displaying another activity with respect to a second object (704) to be played by the user (202), and the fixed targeted score corresponding to the another activity, according to one embodiment as disclosed herein. The another activity is the second gaming activity played by the user (202). The system (700) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to display the another gaming activity corresponding to the second object (704) and the desired score to be achieved corresponding to the second object (704). For example, the application module (118) displays, through the immersive reality display (122) or the screen of the computing device (102), the second object (704) corresponding to the another gaming activity. An application window (702) of the application module (118) displays information related to the type of the another gaming activity to be played by the user corresponding to the second object (704), as well as the desired score of “51” to be achieved with respect to the another gaming activity corresponding to the second object (704).
[0069] FIG. 8 is an example scenario illustrating a system (800) in which the operation is initiated between the computing device (102) and the immersive reality device (114) to compare the weighted score corresponding to the another activity with the fixed targeted score corresponding to the another activity, according to another embodiment as disclosed herein. The system (800) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to display the comparison between the weighted score and the fixed targeted score. The weighted score and the fixed targeted score are associated with the another activity corresponding to the second object (704). For example, the application module (118) is configured to display, through an application window (802), the weighted score of the periodic visual assistance parameters associated with the another activity played by the user corresponding to the second object (704). Based on the weighted score, the application window (802) displays the comparison between the weighted score and the desired score. The application window (802) displays that the weighted score of the periodic visual assistance parameters associated with the another activity corresponding to the second object (704) is equal to the desired score associated with the another activity corresponding to the second object (704).
[0070] FIG. 9 is an example scenario illustrating a system (900) in which the operation is initiated between the computing device (102) and the immersive reality device (114) to display the vision assessment and correction data corresponding to the another activity played by the user, according to yet another embodiment as disclosed herein. The system (900) comprises the computing device (102) and the trained machine learning model (120) connected to each other over the communications network (124). The application module (118) is configured to be trained using the trained machine learning model (120) to display the vision assessment and correction data through an application window (902) of the application module (118). The final weighted score of the user (202) is computed as the average over the number of times the user (202) played the another activity corresponding to the second object (704), irrespective of the levels of the another activity corresponding to the second object (704). For example, the application module (118) is configured to display, either through the screen of the computing device (102) or through the immersive reality display (122), the vision assessment and correction data, which depicts that the final calculated weighted score is 51, obtained as the average over the number of times the user (202) played the another activity. The final calculated weighted score of 51 implies that the amblyopic eye of the user (202) exhibits significant improvement and that the vision of the user (202) is “CLEAR”, which means that the user (202) is able to see every object clearly from the amblyopic eye while playing the another activity corresponding to the second object (704). Upon detecting that the weighted score of the visual assistance parameters matches with the desired score associated with the another activity corresponding to the second object (704), the level of the another gaming activity is increased by adjusting the contrast and size of the second object (704) to an increased range of values.
[0071] In an alternate exemplary embodiment, as evident from FIGS. 2-9, upon detecting that the weighted score of the visual assistance parameters does not match with the desired score associated with the at least one activity, or the first gaming activity, corresponding to the first object (206), the user (202) may be presented with the first object (206) having adjusted contrast and size to decrease the level of the first gaming activity, or the user (202) may be presented with the second object (704), with different and adjusted contrast and size, associated with the another activity (the second activity), to enhance the vision of the user (202) by way of correction of the amblyopic eye through the adjusted contrast and size of the second object (704).
[0072] FIG. 10 is a flow diagram illustrating a method (1000) which depicts various operational steps for using the computing device (102), according to the embodiments disclosed herein. As shown in FIG. 10, the operational steps (1002-1012) are performed by the various hardware components of the computing device (102) and other hardware components of the system (100). These hardware components of the computing device (102) and the other hardware components are enabled through the trained machine learning model (120) to perform the operational steps. At step 1002, the method (1000) includes displaying, through the immersive reality display (122), the immersive reality environment. The immersive reality environment comprises the first object having the first property, and the first object is displayed to the user. At step 1004, the method (1000) includes receiving, through the processor (106), the first user input by the immersive reality device (114) to display the at least one activity with respect to the first object. The at least one activity is the gaming activity played by the user. At step 1006, the method (1000) includes automatically adjusting, through the application module (118), the periodic visual assistance parameters corresponding to the at least one activity played by the user. At step 1008, the method (1000) includes automatically determining, through the application module (118), the weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity. At step 1010, the method (1000) includes automatically evaluating, through the application module (118), if the weighted score matches with the fixed targeted score, wherein the fixed targeted score is a desired score associated with the observation by the user and recorded during playing of the at least one activity, which is the first activity. At step 1012, the method (1000) includes determining, through the application module (118), the vision assessment and correction data associated with the user, based on the evaluation of matching between the weighted score and the fixed targeted score. In an embodiment, the method (1000) further includes displaying, through the immersive reality display (122), the vision assessment and correction data associated with the user.
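Steps 1002-1012 can be read as the straight-line pipeline sketched below; `device` and `app` stand in for the immersive reality device (114) and the application module (118), and every method name on them is an assumption made for this sketch rather than an interface given in the disclosure.

```python
def run_assessment(device, app, targeted_score: float) -> dict:
    """One pass through steps 1002-1012 of method (1000)."""
    device.display_environment()                   # step 1002: show environment and first object
    activity = device.receive_user_input()         # step 1004: first user input selects activity
    params = app.adjust_assistance(activity)       # step 1006: adjust visual assistance parameters
    score = app.weighted_score(params, activity)   # step 1008: score the user's performance
    matched = app.evaluate(score, targeted_score)  # step 1010: compare with fixed targeted score
    return app.correction_data(matched, score)     # step 1012: derive assessment/correction data
```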
[0073] In another embodiment, the method (1000) further includes automatically adjusting, through the application module (118), the plurality of quality parameters, depending on the configuration of the immersive reality device (114).
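One plausible reading of this configuration-dependent adjustment is a lookup from device profile to quality parameters, as sketched below. The profile names and values are assumptions; claim 7 names only contrast, activity levels, and object size as examples of the quality parameters:

```python
# Sketch of quality-parameter adjustment by device configuration. The profile
# names and values are assumptions introduced for illustration.

QUALITY_PROFILES = {
    "vr_headset_high": {"contrast": 1.0, "levels": 10, "object_size": 0.8},
    "vr_headset_low":  {"contrast": 0.7, "levels": 5,  "object_size": 1.2},
    "vr_goggles":      {"contrast": 0.8, "levels": 7,  "object_size": 1.0},
}

def adjust_quality(device_config: str) -> dict:
    """Return quality parameters suited to the immersive reality device (114),
    falling back to a conservative default for unknown configurations."""
    return QUALITY_PROFILES.get(device_config, QUALITY_PROFILES["vr_headset_low"])

print(adjust_quality("vr_goggles"))
```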
[0074] In yet another embodiment, if the weighted score does not match with the fixed targeted score, the method (1000) further includes enabling the processor (106) to receive a second user input by the immersive reality device (114), to display at least one activity with respect to the second object. The at least one activity is the gaming activity played by the user.
[0075] The step 1010 of automatically evaluating, through the application module (118), if the weighted score matches with the fixed targeted score comprises firstly determining, through the application module (118), that the weighted score does not match with the fixed targeted score; and lastly, upon determining that the weighted score does not match with the fixed targeted score, causing the application module (118) to display, through the immersive reality display (122), the immersive reality environment comprising a second object having a second property, and the second object being displayed to the user. The immersive reality environment displayed to the user through the immersive reality display (122) comprises at least more than five activities to be played interchangeably by the user, the at least more than five activities associated with the first object and the second object.
[0076] In one embodiment, the step of receiving, through the processor (106), the first user input by the immersive reality device (114) to display the at least one activity with respect to the first object during the vision assessment and correction comprises: tracking, through the at least one sensor, the movements of the head of the user during the vision assessment and correction.
[0077] FIG. 11 is a flow diagram illustrating a method (1100) depicting various operational steps for performing the vision assessment and correction upon comparison between the weighted score and the fixed targeted score, according to the embodiments disclosed herein. The method (1100) starts at step 1102 and ends at step 1114. At step 1102, the method (1100) includes displaying the immersive reality environment through the immersive reality device (114). At step 1104, the method (1100) further includes receiving the first user input by the immersive reality device (114) to display the first activity. At step 1106, the method (1100) further includes automatically adjusting the periodic visual assistance parameters corresponding to the first activity. At step 1108, the method (1100) further includes automatically determining the weighted score associated with the periodic visual assistance parameters, based on the performance of the user recorded during playing of the at least one activity. At step 1110, the method (1100) further includes determining if the weighted score matches the fixed targeted score. If the weighted score does not match the fixed targeted score, the user is displayed the second object having the second property and the second activity corresponding to the second object at step 1112. If, however, the weighted score matches the fixed targeted score, the method (1100) further includes determining the vision assessment and correction data associated with the user at step 1114.
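The branch at steps 1110-1114 reduces to a single comparison, sketched below under the assumption that "matches" means the weighted score reaches the fixed targeted score; the return strings are illustrative placeholders:

```python
# Sketch of the branch at steps 1110-1114 of FIG. 11. "Matches" is treated
# here as reaching the fixed targeted score, which is an assumption.

def evaluate_and_branch(weighted_score: float, target_score: float) -> str:
    if weighted_score >= target_score:                  # step 1110: match
        return "determine vision assessment and correction data"  # step 1114
    # step 1112: no match -> second object with the second property
    return "display second object and start the second activity"

print(evaluate_and_branch(51, 51))   # match path (step 1114)
print(evaluate_and_branch(40, 51))   # branch to the second activity (step 1112)
```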
[0078] The various actions, acts, blocks, steps, or the like in the flow diagram depicting the method (1000) and the method (1100) may be performed in the order presented, or in a different order, or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0079] The embodiments disclosed herein can be implemented using at least one software program running on the at least one hardware device and performing network management functions to control the elements.
[0080] Several modifications and additions are introduced to make the system (100) more tolerant to variance, such as a change in the periodic visual assistance parameters, a change in the type of gaming activity, a change in the weighted score associated with the periodic visual assistance parameters, and a change in the vision assessment and correction data in the deployed environment. Moreover, the entire pipeline of the system (100) comprises independent hardware components combined with one another in such a manner that each independent hardware component works seamlessly to create an automated solution suite that has not been achieved by past automated vision assessment and correction systems for facilitating the assessment and correction of the amblyopic eye of the patient.
[0081] Various modifications to these embodiments will be apparent to those skilled in the art from the description. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the present invention and the appended claims.
REFERENCE NUMERALS FOR DRAWINGS
(100) – System
(102) – Computing Device
(104) – Memory
(106) – Processor
(108) – Communicator
(110) – Display Interface
(112) – Image Sensor
(114) – Immersive Reality Device
(116) – Enterprise Device
(118) – Application Module
(120) – Trained Machine Learning Model
(122) – Immersive Reality Display
(124) – Communications Network
(202) – User/Patient
(204) – Immersive Reality Environment
(206) – First Object
(208) – Application Window
(704) – Second Object
(900) – System of FIG. 9
(902) – Application Window of FIG. 9
Claims:
We Claim:
1. An immersive system (100) for vision assessment and correction, comprising:
a computing device (102) having a memory (104) and a processor (106), characterized in that,
an immersive reality device (114) worn by a user and coupled with the computing device (102);
an enterprise device (116) operatively coupled with the immersive reality device (114);
an application module (118) running on a screen of the computing device (102), and operatively coupled with the immersive reality device (114) and the enterprise device (116); and
a trained machine learning model (120) operatively coupled with the immersive reality device (114), the enterprise device (116), and the application module (118);
wherein the trained machine learning model (120) is configured for:
displaying, through an immersive reality display (122) of the immersive reality device (114), an immersive reality environment, the immersive reality environment comprising a first object having a first property, and the first object being displayed to the user;
receiving, through the processor (106), a first user input by the immersive reality device (114) to display at least one activity with respect to the first object, wherein the at least one activity is a gaming activity played by the user;
automatically adjusting, through the application module (118), periodic visual assistance parameters corresponding to the at least one activity played by the user;
automatically determining, through the application module (118), a weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity;
automatically evaluating, through the application module (118), if the weighted score matches with a fixed targeted score, wherein the fixed targeted score is a desired score associated with an observation by the user and recorded during playing of the first activity by the user; and
determining, through the application module (118), a vision assessment and correction data associated with the user, based on the evaluation of matching between the weighted score and the fixed targeted score.
2. The immersive system (100) as claimed in claim 1, wherein the computing device (102) is, but not limited to, a mobile device, a laptop, a personal computer, a personal digital assistant (PDA), or any other handheld device.
3. The immersive system (100) as claimed in claim 1, wherein the immersive reality device (114) is, but not limited to, a VR headset, VR goggles, or any other visual simulating device.
4. The immersive system (100) as claimed in claim 1, wherein the immersive reality device (114) is configured to display, through the immersive reality display (122), the vision assessment and correction data associated with the user.
5. The immersive system (100) as claimed in claim 1, wherein the periodic visual assistance parameters are, but not limited to, a size of the first object and a contrast of the first object.
6. The immersive system (100) as claimed in claim 1, wherein the application module (118) is configured to support a plurality of quality parameters and to automatically adjust the plurality of quality parameters, depending on the configuration of the immersive reality device (114).
7. The immersive system (100) as claimed in claim 6, wherein the plurality of quality parameters are, but not restricted to, a contrast associated with the at least one activity with respect to the first object or the second object, levels associated with the at least one activity with respect to the first object or the second object, a size of the first object associated with the at least one activity, or the like.
8. The immersive system (100) as claimed in claim 1, wherein if the weighted score does not match with the fixed targeted score, the application module (118) is further configured to display, through the immersive reality display (122), the immersive reality environment comprising a second object having a second property, and the second object being displayed to the user.
9. The immersive system (100) as claimed in claim 1, wherein if the weighted score does not match with the fixed targeted score, the application module (118) is configured to enable the processor (106) to receive a second user input by the immersive reality device (114), to display at least one activity with respect to the second object, wherein the at least one activity is a gaming activity played by the user.
10. The immersive system (100) as claimed in claim 1, wherein the immersive reality environment displayed to the user through the immersive reality display (122) comprises at least more than five activities to be played interchangeably by the user, the at least more than five activities associated with the first object.
11. The immersive system (100) as claimed in claim 9, wherein the immersive reality environment displayed to the user through the immersive reality display (122) comprises at least more than five activities to be played interchangeably by the user, the at least more than five activities associated with the second object.
12. The immersive system (100) as claimed in claim 10, wherein the vision assessment and correction data associated with the user is, but not restricted to, vision data associated with the user, a required improved vision level associated with the user, and a desired weighted score associated with the user.
13. The immersive system as claimed in claim 1, wherein the first user input is received from at least one sensor selected from a group consisting of a head tracking sensor, a face tracking sensor, a hand tracking sensor, an eye tracking sensor, a body tracking sensor, a voice recognition sensor, a heart rate sensor, a skin capacitance sensor, an electrocardiogram sensor, a brain activity sensor, a geolocation sensor, at least one retinal camera, a balance tracking sensor, a body temperature sensor, a blood pressure monitor, and a respiratory rate monitor.
14. The immersive system as claimed in claim 9, wherein the second user input is received from the at least one sensor selected from the group consisting of the head tracking sensor, the face tracking sensor, the hand tracking sensor, the eye tracking sensor, the body tracking sensor, the voice recognition sensor, the heart rate sensor, the skin capacitance sensor, the electrocardiogram sensor, the brain activity sensor, the geolocation sensor, at least one retinal camera, the balance tracking sensor, the body temperature sensor, the blood pressure monitor, and the respiratory rate monitor.
15. The immersive system as claimed in claim 13, wherein the at least one sensor is configured to track movements of a head of the user during the vision assessment and correction.
16. A method for facilitating vision assessment and correction, comprising:
displaying, through an immersive reality display (122), an immersive reality environment, the immersive reality environment comprising a first object having a first property, and the first object being displayed to the user;
receiving, through the processor (106), a first user input by the immersive reality device (114) to display at least one activity with respect to the first object, wherein the at least one activity is a gaming activity played by the user;
automatically adjusting, through the application module (118), periodic visual assistance parameters corresponding to the at least one activity played by the user;
automatically determining, through the application module (118), a weighted score associated with the periodic visual assistance parameters, based on the performance of the user during playing of the at least one activity;
automatically evaluating, through the application module (118), if the weighted score matches with a fixed targeted score, wherein the fixed targeted score is a desired score associated with an observation by the user and recorded during playing of the first activity; and
determining, through the application module (118), a vision assessment and correction data associated with the user, based on the evaluation of matching between the weighted score and the fixed targeted score.
17. The method as claimed in claim 16, wherein the method further comprises:
displaying, through the immersive reality display (122), the vision assessment and correction data associated with the user.
18. The method as claimed in claim 16, wherein the method further comprises:
automatically adjusting, through the application module (118), a plurality of quality parameters, depending on configuration of the immersive reality device (114).
19. The method as claimed in claim 16, wherein the step of automatically evaluating, through the application module (118), if the weighted score matches with a fixed targeted score, wherein the fixed targeted score is a desired score associated with an observation by the user and recorded during playing of the first activity by the user comprises:
determining, through the application module (118), that the weighted score does not match with the fixed targeted score; and
upon determining that the weighted score does not match with the fixed targeted score, causing the application module (118) to display, through the immersive reality display (122), the immersive reality environment comprising a second object having a second property, and the second object being displayed to the user.
20. The method as claimed in claim 19, wherein if the weighted score does not match with the fixed targeted score, the application module (118) is configured to enable the processor (106) to receive a second user input by the immersive reality device (114), to display at least one activity with respect to the second object, wherein the at least one activity is a gaming activity played by the user.
21. The method as claimed in claim 16, wherein the immersive reality environment displayed to the user through the immersive reality display (122) comprises at least more than five activities to be played interchangeably by the user, the at least more than five activities associated with the first object.
22. The method as claimed in claim 19, wherein the immersive reality environment displayed to the user through the immersive reality display (122) comprises at least more than five activities to be played interchangeably by the user, the at least more than five activities associated with the second object.
23. The method as claimed in claim 16, wherein the first user input is received from at least one sensor selected from a group consisting of a head tracking sensor, a face tracking sensor, a hand tracking sensor, an eye tracking sensor, a body tracking sensor, a voice recognition sensor, a heart rate sensor, a skin capacitance sensor, an electrocardiogram sensor, a brain activity sensor, a geolocation sensor, at least one retinal camera, a balance tracking sensor, a body temperature sensor, a blood pressure monitor, and a respiratory rate monitor.
24. The method as claimed in claim 19, wherein the second user input is received from the at least one sensor selected from the group consisting of the head tracking sensor, the face tracking sensor, the hand tracking sensor, the eye tracking sensor, the body tracking sensor, the voice recognition sensor, the heart rate sensor, the skin capacitance sensor, the electrocardiogram sensor, the brain activity sensor, the geolocation sensor, at least one retinal camera, the balance tracking sensor, the body temperature sensor, the blood pressure monitor, and the respiratory rate monitor.
25. The method as claimed in claim 16, wherein the step of receiving, through the processor (106), a first user input by the immersive reality device (114) to display at least one activity with respect to the first object during the vision assessment and correction comprises: tracking, through the at least one sensor, movements of a head of the user during the vision assessment and correction.