
A Mixed Reality Device (MRD) And A Method For Generating An Augmented Reality Or A Virtual Reality Environment

Abstract: A method (500) for generating an augmented reality or a virtual reality environment, comprises steps of receiving (510) 3-Dimensional (3D) spatial image data of a real world environment from a first electromagnetic radiation sensor (101); receiving (520) eye tracking data pertaining to movement of an eye of a user from a second electromagnetic radiation sensor (402); receiving (530) hand tracking data pertaining to movement of one or more hands of the user from a third electromagnetic radiation sensor (404); generating (540) a 3D mesh, pertaining to one or more virtual reality or augmented reality objects, as a function of the spatial image data, eye tracking data and the hand tracking data and displaying (550) the 3D mesh on one or more display sources (116). Further, a Mixed Reality Device (MRD) (100) for generating an augmented reality or a virtual reality environment is also provided. [Figure 5]


Patent Information

Application #: 201821039725
Filing Date: 22 October 2018
Publication Number: 17/2020
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status: Granted
Email: legal@ajnalens.com
Parent Application: -
Patent Number: -
Legal Status: -
Grant Date: 18 November 2022
Renewal Date: -

Applicants

DIMENSION NXG PVT. LTD.
Dimension NXG, 410 & 411, 4th floor, Arcadia, Hiranandani Estate, Thane West- 400607, Maharashtra, India

Inventors

1. RAUT, Pankaj Uday
Sai Krupa Building, Near S.T. Stand Sangli, Maharashtra- 416416, India
2. PATIL, Abhijit Bhagvan
S No.44/2A/2B Plot No.27, Near Borse Nagar CC 5991, Malegaon, District-Nashik, Maharashtra- 423203, India
3. TOMAR, Abhishek
1346, Shivakanksha, Kailashpuri, Pachpedi Road, Jabalpur, Madhya Pradesh- 482001, India
4. BHOSALE, Gaurav Gajanan
Plot No.5, Shivneri, Shri Krishna Colony, Sambhajinagar, Kolhapur- 416012, Maharashtra, India
5. SURI, Yukti
A-201, Samarpan Apartments, Hanuman Mandir Road, Opposite Lijjat Papad, Govandi East, Mumbai- 400088, Maharashtra, India
6. MOMIN, Moaz Munir Ahmad
H.no. 126, 101/C, Sakina Manzil Samad Seth Bagicha Bengalpura Bhiwandi- 421302, Maharashtra, India

Specification

FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10 and rule 13]
“A MIXED REALITY DEVICE (MRD) AND A METHOD FOR GENERATING AN AUGMENTED REALITY OR A VIRTUAL REALITY
ENVIRONMENT”
DIMENSION NXG PVT. LTD, an Indian company, having registered Office at Dimension NXG, 410 & 411, 4th floor, Arcadia, Hiranandani Estate, Thane West-400607, Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD
Embodiments of the present invention relate generally to interactions in augmented reality and virtual reality environments and more specifically to a Mixed Reality Device (MRD) and a method for generating an augmented reality or a virtual reality environment.
BACKGROUND ART
While augmented reality and virtual reality headsets are becoming increasingly popular in the entertainment industry, such as for games and movies, they also hold significant potential in many other application areas such as automotive design, pilot training and medical procedure simulation. However, current-day augmented or virtual reality headsets are limited by their narrow Field of View (FOV) and poor-quality holograms.
Therefore, in light of the discussion above, there is a need for a Mixed Reality Device (MRD) and a method for generating an augmented reality or a virtual reality environment that do not suffer from the above-mentioned deficiencies.
OBJECT OF THE INVENTION
An aspect of the present invention provides a Mixed Reality Device (MRD) for generating an augmented reality or a virtual reality environment.
Another aspect of the present invention provides a method for generating an augmented reality or a virtual reality environment.
SUMMARY OF THE INVENTION
Embodiments of the present invention aim to provide a Mixed Reality Device (MRD) and a method for generating an augmented reality or a virtual reality environment. The MRD and the method offer a much wider Field of View and higher-quality holograms for relatively accurate augmented and/or virtual reality environments, which makes the present invention suitable for a gamut of applications such as medical sciences, defence training and automotive design.
According to a first aspect of the present invention, there is provided a Mixed Reality Device (MRD), for generating an augmented reality or a virtual reality environment, the MRD comprising a first electromagnetic radiation sensor, a second electromagnetic radiation sensor, a third electromagnetic radiation sensor, a control unit and one or more display sources. The first electromagnetic radiation sensor is configured to capture spatial image data of a real world environment and transmit the spatial image data to the control unit. The second electromagnetic radiation sensor is configured to capture eye tracking data pertaining to movement of an eye of a user and transmit the eye tracking data to the control unit. The third electromagnetic radiation sensor is configured to capture hand tracking data pertaining to movement of one or both the hands of the user and transmit the hand tracking data to the control unit. The control unit is configured to generate a 3D mesh, pertaining to one or more virtual reality or augmented reality objects, as a function of the spatial image data, eye tracking data and the hand tracking data. Also, the one or more display sources are configured to display the 3D mesh.
In accordance with an embodiment of the present invention, the MRD further comprises a control unit configured to modify the 3D mesh as a function of the hand tracking data and/or the eye tracking data.
In accordance with an embodiment of the present invention, the MRD further comprises an Inertial Measurement Unit (IMU) configured to capture inertial data and transmit the inertial data to the control unit.
In accordance with an embodiment of the present invention, the control unit is further configured to perform pose fusion as a function of the spatial image data and the inertial data.
In accordance with an embodiment of the present invention, for generating the 3D mesh, the control unit is further configured to utilize Speeded Up Robust Features (SURF) key point detection and Binary Robust Invariant Scalable Keypoints (BRISK) descriptor for determining a description of the detected keypoints.
In accordance with an embodiment of the present invention, for generating the 3D mesh, the control unit is further configured to utilize Truncated Signed Distance Function (TSDF) and marching cube based triangulation for generating the one or more virtual reality or augmented reality objects.
In accordance with an embodiment of the present invention, the first electromagnetic radiation sensor is an active stereo sensor.
In accordance with an embodiment of the present invention, the second electromagnetic radiation sensor comprises a Near Infra-Red (NIR) light source and an Infra-Red (IR) pass camera sensor unit.
In accordance with an embodiment of the present invention, the third electromagnetic radiation sensor is a depth sensor.
In accordance with an embodiment of the present invention, the depth sensor is selected from a group comprising stereoscopic vision based depth sensors and time of flight based depth sensors.
According to a second aspect of the present invention, there is provided a method for generating an augmented reality or a virtual reality environment, the method comprising steps of receiving 3-Dimensional (3D) spatial image data of a real world environment, from a first electromagnetic radiation sensor; receiving eye tracking data pertaining to movement of an eye of a user, from a second electromagnetic radiation sensor; receiving hand tracking data pertaining to movement of one or more hands of the user, from a third electromagnetic radiation sensor; generating a 3D mesh, pertaining to one or more virtual reality or augmented reality objects, as a function of the spatial image data, the eye tracking data and the hand tracking data; and displaying the 3D mesh on one or more display sources.
In accordance with an embodiment of the present invention, the method further comprises a step of modifying the 3D mesh as a function of the hand tracking data and/or the eye tracking data.
In accordance with an embodiment of the present invention, the method further comprises a step of receiving inertial data from an Inertial Measurement Unit (IMU).
In accordance with an embodiment of the present invention, the method further comprises a step of performing pose fusion as a function of the spatial image data and the inertial data.
In accordance with an embodiment of the present invention, the step of generating the 3D mesh includes utilization of Speeded Up Robust Features (SURF) key point detection and Binary Robust Invariant Scalable Keypoints (BRISK) descriptor for determining a description of the detected keypoints.
In accordance with an embodiment of the present invention, the step of generating the 3D mesh includes utilization of Truncated Signed Distance Function (TSDF) and marching cube based triangulation for generating the one or more virtual reality or augmented reality objects.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
These and other features, benefits, and advantages of the present invention will become apparent by reference to the following text figure, with like reference numbers referring to like structures across the views, wherein:
Fig. 1A illustrates a front view of a Mixed Reality Device (MRD) for generating an augmented reality or a virtual reality environment, in accordance with an embodiment of the present invention;
Fig. 1B illustrates a left side view of the MRD, in accordance with an embodiment of the present invention;
Fig. 1C illustrates a top view of the MRD, in accordance with an embodiment of the present invention;
Fig. 2 illustrates a user wearing the MRD, in accordance with an embodiment of the present invention;
Fig. 3 illustrates a control unit of the MRD, in accordance with an embodiment of the present invention;
Fig. 4 illustrates a logical diagram of the MRD, in accordance with an embodiment of the present invention; and
Fig. 5 illustrates a method for generating an augmented reality or a virtual reality environment, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, and that the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to), rather than the mandatory sense (i.e. meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawings correspond to like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, the embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only, and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary, and are not intended to limit the scope of the invention.
It is rare for a Mixed Reality Device (MRD) to be able to include capabilities of generating an augmented reality environment and a virtual reality environment in a single device. However, the present invention aims to achieve just that by use of a number of electromagnetic radiation sensors. The term electromagnetic radiation sensor encompasses all kinds of sensor devices which are able to detect electromagnetic radiation (such as visible light and Infra-Red (IR) radiation). The electromagnetic radiation sensors are used to gather and track spatial data of the real world environment as well as to track eye movement and hand gestures of a user, so that a generated virtual 3D mesh environment can be updated constantly in real time. Wherever occlusion issues result in track loss for the electromagnetic radiation sensors, inertial measurements provided by an Inertial Measurement Unit (IMU) are used to compensate. The algorithms used to process the data and generate the 3D mesh offer robust tracking. Also, additional effects may be provided, such as synthetic occlusion effects and ambient relighting.
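By way of a purely illustrative and non-limiting example, the following Python sketch shows how the above data streams may fit together in a per-frame processing loop. Every function name in this sketch is a hypothetical placeholder standing in for components detailed later in this specification (SLAM, pose fusion, mesh generation); the stubs merely illustrate how the sensor data, inertial data and display output relate to one another.

```python
# Schematic per-frame loop for an MRD-style pipeline. All functions below are
# hypothetical placeholders returning dummy data; they do not form part of the
# claimed invention and only illustrate the flow of data between components.
import numpy as np

def read_spatial_frame():   return np.zeros((480, 640), np.float32)   # depth/RGB stand-in
def read_eye_tracking():    return np.zeros(2)                        # gaze direction stand-in
def read_hand_tracking():   return np.zeros((21, 3))                  # hand joint stand-in
def read_imu():             return np.zeros(6)                        # 6-DOF inertial stand-in

def update_slam(frame, imu):                return np.eye(4)          # fused 6-DOF user pose
def update_mesh(frame, pose):               return {"vertices": [], "faces": []}
def apply_interactions(mesh, eyes, hands):  return mesh               # modify mesh from tracking
def display(mesh, pose):                    pass                      # render to display sources

def run_frame():
    frame, eyes, hands, imu = read_spatial_frame(), read_eye_tracking(), read_hand_tracking(), read_imu()
    pose = update_slam(frame, imu)              # spatial data with inertial compensation
    mesh = update_mesh(frame, pose)             # 3D mesh of virtual/augmented objects
    mesh = apply_interactions(mesh, eyes, hands)
    display(mesh, pose)

run_frame()
```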
Referring to the drawings, the invention will now be described in more detail. Figure 1A illustrates a front view of a Mixed Reality Device (MRD) 100 for generating an augmented reality or a virtual reality environment, in accordance with an embodiment of the present invention. The MRD 100 includes a first electromagnetic radiation sensor 101. The first electromagnetic radiation sensor 101 is configured to capture spatial image data of a real world environment. In one embodiment of the invention, the first electromagnetic radiation sensor 101 is an active stereo sensor. The active stereo sensor will typically have an IR projector 102, an IR camera 106 and an RGB camera 108. While RGB camera 108 captures coloured imagery of the real world environment, the IR projector 102 and the IR camera 106 together capture depth data of the real world environment using any one or more of Time of Flight based and passive stereoscopic depth imaging.
Further, the MRD 100 includes front visors 104. The front visors 104 may be partially or fully reflective surfaces and are therefore used to view virtual reality and/or augmented reality objects. A cooling vent 110 has been provided to ensure that the internal circuitry and devices of the MRD 100 receive enough air for convection cooling. A wire outlet 112 allows connecting wires and cords to run to various components such as the power supply, computational and control units and data acquisition devices. Since the MRD 100 is adapted to be worn on a head of a user, the MRD 100 also includes extendable bands and straps, and for that reason a strap lock 114 has also been provided with the MRD 100.
Figure 1B illustrates a left side view of the MRD 100, in accordance with an embodiment of the present invention. Further, as can be seen from Figure 1B, the MRD 100 includes one or more display sources 116. In various embodiments, the one or more display sources 116 may include LED or LCD based screens with their respective drivers. A front head support 118 allows the MRD 100 to rest properly on the head of the user. An Inertial Measurement Unit (IMU) 120 has also been provided with the MRD 100. The IMU 120 is configured to capture inertial data such as, but not limited to, a specific force on the MRD 100, an angular rate of the MRD 100 and a magnetic field surrounding the MRD 100. The role of the IMU 120 is discussed further below. An MRD driver board 122 includes at least a part of the computational software and hardware needed to run the various devices provided with the MRD 100.
Figure 1C illustrates a top view of the MRD 100, in accordance with an embodiment of the present invention. A front padding 124, comprising a material such as foam, has been provided for cushioned contact with the head of the user. As discussed earlier, an adjustable strap 126 allows a size of the MRD 100 to be varied, so that users with different head sizes may use the MRD 100. Similar to the front padding 124, a back padding 128 provides cushioned contact to the back of the head of the user. A back head support 130 has been provided for additional support. An adjustment knob 132 allows for tensioning and loosening of the adjustable strap 126. Also visible in Figure 1C, though not in Figures 1A or 1B, are cables 134 running from the MRD 100 to other external components.

Figure 2 illustrates a user wearing the MRD 100, in accordance with an embodiment of the present invention. In this embodiment, the MRD 100 is shown to include two distinct units: a head mounted headpiece 210 and a body mounted control unit 220, which is worn on a lower section of the body of the user. In various other embodiments, the control unit 220 and the headpiece 210 may be kept together. In various other embodiments, the control unit 220 may be a part of an external computing device (such as a laptop or a desktop) present locally or at a remote location and connected with the headpiece 210 through a network such as the Internet.
Figure 3 illustrates the control unit 220 of the MRD 100, in accordance with an embodiment of the present invention. The control unit 220 includes a power supply unit 302 for receiving AC power. Further, the control unit 220 includes an upper cooling vent 304 for receiving air, aiding in convection cooling of its internal components. An HDMI output 306 allows data to be transferred between the control unit 220 and the headpiece 210. However, other kinds of output ports corresponding to various other data transfer protocols may also be included. A Universal Serial Bus (USB) connector 308 allows data and power to be transferred in and out of the control unit 220.
A local computing unit 310 includes processors, graphics processors, non-volatile memory units and other computing hardware required for the functioning of the control unit 220. Hinged connections 312 allow the control unit 220 to be folded and conform to the body shape of the user. A battery unit 314 stores power on board the control unit 220. It is contemplated here that the battery unit 314 preferably includes rechargeable batteries, such as, but not limited to, Lithium-Ion or Nickel-Metal-Hydride batteries. Another dedicated cooling vent 316 has been provided especially for cooling the components of the local computing unit 310. An indicator 318, such as an LED, has also been provided to give various kinds of indications such as "charging on", "on AC power", "on Battery power", "low power" and "power out". The indications may be colour coded for differentiation and distinctiveness.
There may be many other components that may be provided in the MRD 100 that have not been illustrated in Figures 1A to 3. They could be located at various locations on the MRD 100 or may be internally provided, without departing from the scope of the invention. However, for the purpose of clarity and enablement of the functioning of the MRD 100, such components have been elucidated by means of a logical diagram, in the following discussion.
Figure 4 illustrates a logical diagram of the MRD 100, in accordance with an embodiment 400 of the present invention. Figure 4 depicts the first electromagnetic radiation sensor 101. As discussed above the first electromagnetic radiation sensor 101 is configured to capture the spatial image data of the real world environment. Moreover, the first electromagnetic radiation sensor 101 is also configured to transmit the spatial image data to the control unit 220. In addition to the first electromagnetic radiation sensor 101, there is also a second electromagnetic radiation sensor 402 configured to capture eye tracking data pertaining to movement of an eye of the user (while wearing the headpiece 210) and transmit the eye tracking data to the control unit 220. In one embodiment of the invention, the second electromagnetic radiation sensor 402 comprises a Near Infra-Red (NIR) light source and an Infra-Red (IR) pass camera sensor unit.
In addition, there is also a third electromagnetic radiation sensor 404 configured to capture hand tracking data pertaining to movement of one or both the hands of the user and transmit the hand tracking data to the control unit 220. It is envisaged here that the third electromagnetic radiation sensor 404 is a depth sensor. There are various methodologies available in the art for depth sensing. Accordingly, in various embodiments, the depth sensor is selected from a group comprising stereoscopic depth sensors and time of flight based depth sensors. Stereoscopic depth sensors involve passive 3D depth estimation, and at least one form of stereoscopic depth sensor is described in United States Patent No. US7433024B2. Time of flight based depth sensors work by measuring the time taken by a beam of light to travel to an object and return, and determining the depth as the product of the known speed of light and half of the total time taken by the beam of light to return. A detailed analysis of the working of a time of flight based depth sensor can be found in United States Patent No. US8786678B2, which is included herein by reference.
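By way of a purely illustrative and non-limiting example, the time of flight relationship stated above may be sketched as follows; the code only demonstrates the stated formula and is not the implementation of any particular sensor.

```python
# Illustrative sketch of the time-of-flight depth relationship: depth equals the
# known speed of light multiplied by half of the measured round-trip time.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_time_s: float) -> float:
    """Depth in metres from the measured round-trip time of a light pulse."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a pulse returning after roughly 13.3 nanoseconds corresponds to ~2 m depth.
print(tof_depth(13.3e-9))  # ~1.99 m
```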
The control unit 220 is configured to generate a 3D mesh as a function of the spatial image data, eye tracking data and the hand tracking data. This can be understood in detail from the following discussion. Figure 4 also depicts an exemplary construction of the control unit 220. The control unit 220 is envisaged to include a Central Processing Unit (CPU) 406. In various embodiments, the CPU 406 is one of, but not limited to, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), general purpose or an ARM based processor. Further, the control unit 220 includes a Graphics Processing Unit (GPU) 408. The GPU 408 is a specialized processor designed for rapid manipulation, generation, modification and optimization of data, resulting in enhanced visual output from corresponding display sources.
However, a person skilled in the art would appreciate that the functionalities of the CPU 406 and the GPU 408 can be combined into a single piece of hardware, either in cases where graphics processing is not very demanding or in the foreseeable future when hardware capable of performing such varying tasks may be developed. In addition to the CPU 406 and the GPU 408, the control unit 220 is also envisaged to include a memory unit 410. The memory unit 410 may be one of, but not limited to, EPROM, EEPROM and Flash memory. The memory unit 410 is also envisaged to store maps and 3D meshes of the environment for future reuse and better efficiency.
The spatial image data from the first electromagnetic radiation sensor 101 is the visual and depth information of the environment and is used for Simultaneous Localization and Mapping (SLAM). SLAM represents a group of algorithms that allows the control unit 220 to construct or update a map of an unknown environment while simultaneously keeping track of a location of an agent within the environment. Further, SLAM allows the control unit 220 to estimate a pose of an agent (in this case, a user wearing the MRD 100) and a 3D map of the real world environment at the same time. Since it is difficult to estimate both the pose and the map of the real world environment exactly, SLAM may use approximation solutions such as, but not limited to, the particle filter, the extended Kalman Filter and Graph Optimization. Further information on SLAM can be found in United States Patents numbered US7774158B2 and US9390344B2 and United States Patent Application Publication numbered US20170212529A1.
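As a purely illustrative and non-limiting example of the graph optimization idea mentioned above, the following sketch refines a chain of noisy one-dimensional odometry measurements against a single loop-closure constraint by nonlinear least squares. The measurement values are invented for illustration, and this is not the SLAM implementation used in the MRD.

```python
# Minimal illustrative pose-graph optimization in one dimension: poses are
# constrained by noisy odometry between neighbours plus one loop-closure
# measurement, and refined jointly with a least-squares solver.
import numpy as np
from scipy.optimize import least_squares

odometry = np.array([1.05, 0.98, 1.02, 0.97])   # measured motion between consecutive poses
loop_closure = (4, 0, -4.0)                     # pose 4 observes pose 0 at roughly -4 m

def residuals(poses):
    poses = np.concatenate(([0.0], poses))      # pose 0 fixed at the origin
    res = [poses[i + 1] - poses[i] - odometry[i] for i in range(len(odometry))]
    i, j, meas = loop_closure
    res.append(poses[j] - poses[i] - meas)      # loop-closure residual
    return np.array(res)

solution = least_squares(residuals, x0=np.cumsum(odometry))   # dead-reckoned initial guess
print(np.concatenate(([0.0], solution.x)))      # refined pose estimates
```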
SLAM utilizes feature detection, feature description and feature matching on the spatial image data. A feature in an image corresponds to highly discriminative and robust information of interest, typically in the form of keypoints, patches, structures or patterns. A detector finds repeatable interest points, and a descriptor is a distinctive specification of a detected interest point, typically describing the surroundings of the interest point. Therefore, in one embodiment of the invention, the control unit 220 is further configured to utilize Speeded Up Robust Features (SURF) key point detection and the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor for determining a description of the detected keypoints. Feature matching employs various techniques to find corresponding interest points between pairs of images using various distance and similarity metrics.
The functioning of the SURF detector can be understood from Herbert Bay, Tinne Tuytelaars and Luc Van Gool, "SURF: Speeded Up Robust Features", ECCV'06 Proceedings of the 9th European Conference on Computer Vision - Volume Part I, Pages 404-417, Graz, Austria, May 07-13, 2006, while the functioning of the BRISK descriptor can be understood from S. Leutenegger, M. Chli and R. Y. Siegwart, "BRISK: Binary Robust invariant scalable keypoints," 2011 International Conference on Computer Vision, Barcelona, 2011, pp. 2548-2555. The aforementioned citations are included herein by reference. Further, pose estimation and bundle adjustment are performed using a graph optimization approach on the local map.
However, a person skilled in the art would appreciate that many other keypoint detectors and descriptors may be implemented without departing from the scope of the invention. Typical examples of keypoint detectors and descriptors include, but are not limited to, the Hessian detector, the Harris corner detector, the Laplacian detector, the FAST detector, Scale Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG) and Gradient Location Orientation Histogram (GLOH).
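A purely illustrative, non-limiting sketch of the SURF detection and BRISK description combination referred to above is given below, using OpenCV. The image file paths are placeholder assumptions, and the SURF module resides in the non-free opencv-contrib package, which may be unavailable in some builds; this sketch is not the control unit's actual implementation.

```python
# Sketch: detect keypoints with SURF, describe them with BRISK, and match the
# binary descriptors with Hamming distance (the natural metric for BRISK).
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder image paths
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # keypoint detection (non-free module)
brisk = cv2.BRISK_create()                                # binary keypoint description

kp1 = surf.detect(img1, None)
kp2 = surf.detect(img2, None)
kp1, des1 = brisk.compute(img1, kp1)   # describe the SURF keypoints with BRISK
kp2, des2 = brisk.compute(img2, kp2)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matched keypoints")
```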
As can further be seen from Figure 4, the control unit 220 is connected with the IMU 120. The IMU 120 is envisaged to include a gyroscope 412, an accelerometer 414 and a magnetometer 416. The IMU 120 is configured to capture the inertial data and transmit the inertial data to the control unit 220. The inertial data is generally in 6-DOF. The inertial data is utilized by the control unit 220 for smoother tracking and pose refinement. The inertial data is also useful for overcoming the track loss problem caused by occlusion of the first electromagnetic radiation sensor 101. In case of track loss, the control unit 220 gets an orientation update from the IMU 120 and keeps tracking. For example, in a typical scenario, SLAM using only visual 3D data usually runs at 30 fps. In short, pose estimation would add roughly 33 ms of latency. In addition to this, Unity rendering and data transfer would add more latency, which is reflected in the visualization on the MRD 100. On the other hand, the IMU 120 usually provides 200 or more measurements every second. Therefore, the control unit 220 is further configured to perform pose fusion as a function of the spatial image data and the higher frequency inertial data, which reduces jitter and lag and provides better visualization at the MRD 100.
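The pose fusion idea may be illustrated, purely by way of a non-limiting example, with a toy complementary filter that integrates the high-rate gyroscope between lower-rate visual SLAM updates and blends in the SLAM estimate when it arrives. The rates, gain and single-axis (yaw only) simplification are assumptions for brevity and are not the MRD's actual parameters or algorithm.

```python
# Toy complementary filter: dead-reckon yaw from the 200 Hz gyroscope and
# correct it with ~30 Hz visual SLAM yaw estimates as they arrive.
import numpy as np

IMU_RATE, SLAM_RATE = 200.0, 30.0      # Hz, as in the example scenario above
BLEND = 0.98                           # weight given to the IMU prediction

def fuse_yaw(gyro_z_samples, slam_yaw_updates):
    """gyro_z_samples: rad/s at IMU_RATE; slam_yaw_updates: dict {imu_sample_index: yaw}."""
    yaw, fused = 0.0, []
    for i, gyro_z in enumerate(gyro_z_samples):
        yaw += gyro_z / IMU_RATE                      # integrate the gyro at high rate
        if i in slam_yaw_updates:                     # roughly every IMU_RATE / SLAM_RATE samples
            yaw = BLEND * yaw + (1.0 - BLEND) * slam_yaw_updates[i]
        fused.append(yaw)
    return np.array(fused)

# Example: half a second of constant 0.5 rad/s rotation with SLAM fixes every 7th sample.
gyro = np.full(100, 0.5)
slam = {i: 0.5 * i / IMU_RATE for i in range(0, 100, 7)}
print(fuse_yaw(gyro, slam)[-1])   # ~0.25 rad after 0.5 s
```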
The 3D mesh that is generated by the control unit 220 pertains to one or more virtual reality or augmented reality objects. In case of virtual reality, the 3D mesh would be a virtual reproduction of the entire real world environment, whereas in case of augmented reality, one or more virtual objects would be made visible against a backdrop of the real world environment. Synthetic occlusion effects and ambient relighting may then be generated by the control unit 220 to make the virtual objects seamlessly integrated with the real world environment. In that manner, the control unit 220 is configured to include albedo and shading estimation, which provides real-time relighting and recolouring applications.
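As a purely illustrative, non-limiting simplification of the albedo and shading estimation referred to above, the sketch below assumes a Lambertian model: shading under a directional light is the clamped dot product of the surface normal and the light direction, albedo is recovered by dividing the image by that shading, and relighting re-shades the albedo under a new light. The synthetic inputs and the Lambertian assumption are illustrative only, not the device's actual algorithm.

```python
# Lambertian relighting sketch: estimate albedo from image and shading, then
# re-shade the albedo under a different directional light.
import numpy as np

def shading(normals, light_dir):
    """normals: HxWx3 unit vectors; light_dir: 3-vector pointing toward the light."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, None)

def relight(image, normals, old_light, new_light, eps=1e-3):
    albedo = image / (shading(normals, old_light)[..., None] + eps)   # albedo estimate
    return albedo * shading(normals, new_light)[..., None]            # re-shaded image

# Example on a tiny synthetic patch of upward-facing surface.
normals = np.tile([0.0, 0.0, 1.0], (4, 4, 1))
image = np.full((4, 4, 3), 0.8)
print(relight(image, normals, old_light=[0, 0, 1], new_light=[1, 0, 1])[0, 0])
```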
Again, there are various kinds of methodologies or algorithms that may be used to generate the 3D mesh. Preferably, however, the control unit 220 is configured to utilize the Truncated Signed Distance Function (TSDF) and marching cube based triangulation for generating the one or more virtual reality or augmented reality objects. As a result, the 3D mesh is more accurate and has good quality and texture information. Utilization of TSDF for mesh reconstruction can be understood from Werner D., Al-Hamadi A., Werner P. (2014), "Truncated Signed Distance Function: Experiments on Voxel Size", in Campilho A., Kamel M. (eds), Image Analysis and Recognition, ICIAR 2014, Lecture Notes in Computer Science, vol 8815, Springer, Cham, which is included herein by reference. An implementation of the marching cube based triangulation can be understood from United States Patent No. US4710876A, which is included herein by reference.
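A compact, purely illustrative and non-limiting sketch of the TSDF-plus-marching-cubes idea is given below using scikit-image. A real system such as the one described above would integrate truncated distances from successive depth maps into the voxel grid; here the truncated signed distance volume of a sphere is filled analytically, and the grid size, voxel size and truncation distance are assumed values for illustration.

```python
# Sketch: build a truncated signed distance volume of a sphere and extract its
# zero level set as a triangle mesh with marching cubes.
import numpy as np
from skimage import measure

GRID, VOXEL, TRUNC = 64, 0.02, 0.06        # voxels per side, voxel size (m), truncation (m)

# Signed distance to a 0.4 m radius sphere centred in the volume, truncated to +/- TRUNC.
coords = (np.indices((GRID, GRID, GRID)) - GRID / 2) * VOXEL
sdf = np.linalg.norm(coords, axis=0) - 0.4
tsdf = np.clip(sdf, -TRUNC, TRUNC)

# Marching cubes triangulates the zero level set of the TSDF.
verts, faces, normals, _ = measure.marching_cubes(tsdf, level=0.0, spacing=(VOXEL,) * 3)
print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")
```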
Further, the control unit 220 is configured to generate the 3D mesh in real time. The one or more display sources 116, connected with the control unit 220, are configured to display the 3D mesh. The one or more display sources 116 are arranged such that they are not directly visible to the user; instead, the 3D mesh is projected onto the front visors 104, which present the 3D mesh to the user. As the user interacts with the one or more virtual reality or augmented reality objects through the movement of his/her hands and eyes, the control unit 220 is further configured to modify the 3D mesh as a function of the hand tracking data and/or the eye tracking data. Thus, the generation and presentation of the augmented reality or virtual reality environment is a consequence of the hardware elements of the MRD 100 and the configuration of the control unit 220. In that manner, exemplary embodiments of various methods for generating the virtual reality or the augmented reality environment are discussed below.
Figure 5 illustrates a method 500 for generating an augmented reality or a virtual reality environment, in accordance with an embodiment of the present invention. The method begins at step 510 when the control unit 220 receives the 3D spatial image data of the real world environment from the first electromagnetic radiation sensor 101. In one embodiment of the invention, the control unit 220 receives the inertial data from the Inertial Measurement Unit (IMU) 120. Further, in one embodiment, the control unit 220 performs the pose fusion as the function of the spatial image data and the inertial data.
Further, at step 520, the control unit 220 receives the eye tracking data pertaining to the movement of the eye of the user, from the second electromagnetic radiation sensor 402. At step 530, the control unit 220 receives the hand tracking data pertaining to the movement of the one or more hands of the user, from the third electromagnetic radiation sensor 404.
At step 540, the control unit 220 generates the 3D mesh pertaining to the one or more virtual reality or augmented reality objects as a function of the spatial image data, the eye tracking data and the hand tracking data. The spatial image data and the inertial data are used by SLAM to map the environment and produce a real-time user pose in the said map, which in turn is used by the control unit 220 to properly place and orient the 3D hologram relative to the position of the user wearing the MRD 100. In that manner, the control unit 220 utilizes Speeded Up Robust Features (SURF) key point detection and the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor for determining a description of the detected keypoints. Further, the control unit 220 utilizes the Truncated Signed Distance Function (TSDF) and marching cube based triangulation for generating the 3D mesh pertaining to the one or more virtual reality or augmented reality objects.
At step 550, the one or more display sources 116 display the 3D mesh. The 3D mesh is then projected onto the front visors 104 that display the 3D mesh to the user. Further, the control unit 220 modifies the 3D mesh as a function of the hand tracking data and/or the eye tracking data, as the user interacts with the 3D mesh.
The present invention, as described by means of the embodiments above, offers a number of advantages. The present invention provides robust tracking in fast motion and rotations. The integration of inertial data inside SLAM improves tracking performance. The present invention offers sparse positional tracking and dense mesh reconstruction. The reconstructed 3D mesh is relatively more accurate than those prepared by devices known in the art and has good quality. The invention includes albedo and shading estimation, which provides real-time relighting and recolouring applications. Also, the proposed pose prediction methodology provides robust and fast estimation of the 6 DOF user pose, which in turn helps in display latency reduction.
In some examples, the systems described herein may include one or more processors, one or more forms of memory, one or more input devices/interfaces, one or more output devices/interfaces, and machine-readable instructions that when executed by the one or more processors cause the system to carry out the various operations, tasks, capabilities, etc., described above.
In some embodiments, the disclosed techniques can be implemented, at least in part, by computer program instructions encoded on a non-transitory computer-readable storage medium in a machine-readable format, or on other non-transitory media or articles of manufacture. Such computing systems (and non-transitory computer-readable program instructions) can be configured according to at least some embodiments presented herein, including the processes described in the above description.
The programming instructions can be, for example, computer executable and/or logic implemented instructions. In some examples, a computing device is configured to provide various operations, functions, or actions in response to the programming instructions conveyed to the computing device by one or more of the computer readable medium, the computer recordable medium, and/or the communications medium. The non-transitory computer readable medium can also be distributed among multiple data storage elements, which could be remotely located from each other. The computing device that executes some or all of the stored instructions can be a microfabrication controller, or another computing platform. Alternatively, the computing device that executes some or all of the stored instructions could be a remotely located computer system, such as a server.
In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof.
Further, the operations need not be performed in the disclosed order, although in some examples, an order may be preferred. Also, not all functions need to be performed to achieve the desired advantages of the disclosed system and method.
Various modifications to these embodiments will be apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to cover all such alternatives, modifications, and variations that fall within the scope of the present invention and the appended claims.

We Claim:
1. A method (500) for generating an augmented reality or a virtual
reality environment, the method (500) comprising steps of:
receiving (510) 3-Dimensional (3D) spatial image data of a real world environment from a first electromagnetic radiation sensor (101);
receiving (520) eye tracking data pertaining to movement of an eye of a user from a second electromagnetic radiation sensor (402);
receiving (530) hand tracking data pertaining to movement of one or more hands of the user from a third electromagnetic radiation sensor (404);
generating (540) a 3D mesh, pertaining to one or more virtual reality or augmented reality objects, as a function of the spatial image data, eye tracking data and the hand tracking data; and
displaying (550) the 3D mesh on one or more display sources (116).
2. The method (500) as claimed in claim 1, further comprising a step of modifying the 3D mesh as a function of the hand tracking data and/or the eye tracking data.
3. The method (500) as claimed in claim 1, further comprising a step of receiving inertial data from an Inertial Measurement Unit (IMU) (120).
4. The method (500) as claimed in claim 3, further comprising a step of performing pose fusion as a function of the spatial image data and the inertial data.

5. The method (500) as claimed in claim 1, wherein the step of generating (540) the 3D mesh includes utilization of Speeded Up Robust Features (SURF) key point detection and Binary Robust Invariant Scalable Keypoints (BRISK) descriptor for determining a description of the detected keypoints.
6. The method (500) as claimed in claim 1, wherein the step of generating (540) the 3D mesh includes utilization of Truncated Signed Distance Function (TSDF) and marching cube based triangulation for generating the one or more virtual reality or augmented reality objects.
7. A Mixed Reality Device (MRD) (100) for generating an augmented
reality or a virtual reality environment, the MRD (100) comprising:
a first electromagnetic radiation sensor (101);
a second electromagnetic radiation sensor (402);
a third electromagnetic radiation sensor (404);
a control unit (220); and
one or more display sources (116);
wherein the first electromagnetic radiation sensor (101) is configured to capture spatial image data of a real world environment and transmit the spatial image data to the control unit (220);
wherein the second electromagnetic radiation sensor (402) is configured to capture eye tracking data pertaining to movement of an eye of a user and transmit the eye tracking data to the control unit (220);
wherein the third electromagnetic radiation sensor (404) is configured to capture hand tracking data pertaining to movement of one or both the hands of the user and transmit the hand tracking data to the control unit (220);
wherein the control unit (220) is configured to generate a 3D

mesh, pertaining to one or more virtual reality or augmented reality objects, as a function of the spatial image data, eye tracking data and the hand tracking data; and
wherein the one or more display sources (116) are configured to display the 3D mesh.
8. The MRD (100) as claimed in claim 7, wherein the control unit (220) is further configured to modify the 3D mesh as a function of the hand tracking data and/or the eye tracking data.
9. The MRD (100) as claimed in claim 7, further comprising an Inertial Measurement Unit (IMU) (120) configured to capture inertial data and transmit the inertial data to the control unit (220).
10. The MRD (100) as claimed in claim 9, wherein the control unit (220) is further configured to perform pose fusion as a function of the spatial image data and the inertial data.
11. The MRD (100) as claimed in claim 7, wherein for generating the 3D mesh, the control unit (220) is further configured to utilize Speeded Up Robust Features (SURF) key point detection and Binary Robust Invariant Scalable Keypoints (BRISK) descriptor for determining a description of the detected keypoints.
12. The MRD (100) as claimed in claim 7, wherein for generating the
3D mesh, the control unit (220) is further configured to utilize
Truncated Signed Distance Function (TSDF) and marching cube
based triangulation for generating the one or more virtual reality or
augmented reality objects.
13. The MRD (100) as claimed in claim 7, wherein the first
electromagnetic radiation sensor (101) is an active stereo sensor.

14. The MRD (100) as claimed in claim 7, wherein the second electromagnetic radiation sensor (402) comprises a Near Infra-Red (NIR) light source and an Infra-Red (IR) pass camera sensor unit.
15. The MRD (100) as claimed in claim 7, wherein the third electromagnetic radiation sensor (404) is a depth sensor.
16. The MRD (100) as claimed in claim 15, wherein the depth sensor is selected from a group comprising stereoscopic vision based depth sensors and time of flight based depth sensors.

Documents

Application Documents

# Name Date
1 201821039725-FORM-27 [10-04-2025(online)].pdf 2025-04-10
2 201821039725-STATEMENT OF UNDERTAKING (FORM 3) [22-10-2018(online)].pdf 2018-10-22
3 201821039725-OTHERS [22-10-2018(online)].pdf 2018-10-22
4 201821039725-PETITION UNDER RULE 137 [09-04-2025(online)].pdf 2025-04-09
5 201821039725-RELEVANT DOCUMENTS [09-04-2025(online)].pdf 2025-04-09
6 201821039725-FORM FOR STARTUP [22-10-2018(online)].pdf 2018-10-22
7 201821039725-IntimationOfGrant18-11-2022.pdf 2022-11-18
8 201821039725-FORM FOR SMALL ENTITY(FORM-28) [22-10-2018(online)].pdf 2018-10-22
9 201821039725-PatentCertificate18-11-2022.pdf 2022-11-18
10 201821039725-FORM 1 [22-10-2018(online)].pdf 2018-10-22
11 201821039725-FORM 13 [12-10-2022(online)].pdf 2022-10-12
12 201821039725-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-10-2018(online)].pdf 2018-10-22
13 201821039725-Response to office action [01-11-2021(online)].pdf 2021-11-01
14 201821039725-DRAWINGS [22-10-2018(online)].pdf 2018-10-22
15 201821039725-Response to office action [08-07-2021(online)].pdf 2021-07-08
16 201821039725-DECLARATION OF INVENTORSHIP (FORM 5) [22-10-2018(online)].pdf 2018-10-22
17 201821039725-CLAIMS [07-01-2021(online)].pdf 2021-01-07
18 201821039725-COMPLETE SPECIFICATION [22-10-2018(online)].pdf 2018-10-22
19 201821039725-CORRESPONDENCE [07-01-2021(online)].pdf 2021-01-07
20 201821039725-FORM-26 [19-11-2018(online)].pdf 2018-11-19
21 201821039725-FER_SER_REPLY [07-01-2021(online)].pdf 2021-01-07
22 Abstract1.jpg 2018-12-07
23 201821039725- ORIGINAL UR 6(1A) FORM 26-221118.pdf 2019-03-15
24 201821039725-PETITION UNDER RULE 137 [07-01-2021(online)].pdf 2021-01-07
25 201821039725-Proof of Right [07-01-2021(online)].pdf 2021-01-07
26 201821039725-STARTUP [08-07-2020(online)].pdf 2020-07-08
27 201821039725-FORM28 [08-07-2020(online)].pdf 2020-07-08
28 201821039725-RELEVANT DOCUMENTS [07-01-2021(online)].pdf 2021-01-07
29 201821039725-FER.pdf 2020-07-31
30 201821039725-FORM 18A [08-07-2020(online)].pdf 2020-07-08

Search Strategy

1 searchE_28-07-2020.pdf
2 2021-04-0814-57-42AE_08-04-2021.pdf

ERegister / Renewals

3rd: 12 Jan 2023 (period 22/10/2020 to 22/10/2021)

4th: 12 Jan 2023 (period 22/10/2021 to 22/10/2022)