
Compact Head Mounted Display With Wide View And Variable Focus

Abstract: The present invention is directed towards a compact and lightweight head mounted device that achieves variable focus with a wider field of view using principles of light field technology, a variable focus optical element and a hybrid waveguide. The head mounted device comprises an optical arrangement that generates a 3D light field and relays the generated light field through relay optics, where the reference depth plane is adjusted to coincide with the location of virtual objects. Finally, the light field with multiple depth cues is transmitted through a hybrid waveguide, where the optical properties of the light field are preserved and the optical path is folded to generate images of higher resolution and contrast ratio. This makes the overall device light, with the vergence-accommodation conflict eliminated.


Patent Information

Application #:
Filing Date: 25 January 2023
Publication Number: 06/2024
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-01-20
Renewal Date:

Applicants

Dimension NXG Private Limited
527 & 528, 5th floor, Lodha Supremus 2 Road no.22, near new passport office, Wagle Estate Thane West, Maharashtra -400604, India

Inventors

1. Abhishek Tomar
527 & 528, 5th floor, Lodha Supremus 2 Road no.22, near new passport office, Wagle Estate Thane West, Maharashtra -400604, India
2. Pankaj Raut
527 & 528, 5th floor, Lodha Supremus 2 Road no.22, near new passport office, Wagle Estate Thane West, Maharashtra -400604, India
3. Abhijit Patil
527 & 528, 5th floor, Lodha Supremus 2 Road no.22, near new passport office, Wagle Estate Thane West, Maharashtra -400604, India
4. Yukti Suri
527 & 528, 5th floor, Lodha Supremus 2 Road no.22, near new passport office, Wagle Estate Thane West, Maharashtra -400604, India
5. Purwa Rathi
527 & 528, 5th floor, Lodha Supremus 2 Road no.22, near new passport office, Wagle Estate Thane West, Maharashtra -400604, India

Specification

FIELD OF THE INVENTION
Embodiments of the present invention relate to head mounted displays having a large field of view, sharp lateral and longitudinal resolution, a wide depth of field and yet a small form factor, and more particularly to a lightweight, compact head mounted display that achieves the aforementioned advantages using principles of light field technology, a tunable depth plane and total internal reflection for variable focus and enhanced depth.
BACKGROUND OF THE INVENTION
The human visual perception system is complex and challenging to serve, especially when it comes to creating a simulated world or augmenting real-world scenes with virtual objects. Wearable glasses that enable a virtual or augmented view of the world often suffer from a large form factor, vision discomfort, dizziness, unnatural settings, motion artefacts and nausea due to the mismatch between vergence and accommodation, all of which must be overcome for natural, easy viewing of images in three dimensions. Recently, considerable research has resulted in displays (head mounted displays, near eye displays, etc.) for presenting digital imagery to a user through a small display with higher precision and resolution.
However, none of the known configurations and combinations of optical elements in wearable displays is capable of presenting a rich, binocular, three-dimensional virtual or augmented reality experience in a manner that is comfortable and maximally useful to the user. Unlike natural stereoscopic vision, which allows a wide field of view and high image resolution, wearable displays have shown little ability to achieve extended depth and high brightness without sacrificing image resolution or image depth. Developers and researchers of AR wearable technology therefore face a fundamental trade-off between the form factor of a device and its functionality. While some current AR wearables can offer both sets of features, they compromise on form factor with bulky and complicated optics. Miniaturised optics is hence a principal requirement for achieving consumer-ready smart glasses.
Further, the majority of known HMDs produce 2D stereoscopic images at a fixed focal distance (typically 2 m in front of the user) or at one focal distance at a time. This often leads to eye fatigue and nausea and does not offer the necessary immersive three-dimensional experience. Furthermore, fabricating the optical structure of an HMD with a form factor as convincing as ordinary glasses adds to the persisting problem.
A focal length inconsistent with the convergence distance is a known problem of optical see-through headsets. The primary reason for this mismatch between focal length and vergence distance is that the image sources of most existing AR displays are 2D planes located at a fixed distance from the eye, regardless of the intended distance of the displayed objects.
If the distance of the presented object differs from the focus distance of the display, the depth cues from parallax also differ from the focus cues, causing the eye either to focus at the wrong distance or to perceive the object as out of focus. For the eye to simultaneously reconcile the cues obtained from the 2D image plane with the cues received to focus and converge at the depth of the actual 3D object on which digital information is overlaid becomes extremely difficult, blurring either the virtual or the real-world information. Without real depth of field, virtual images may appear disconnected from the real world and cause visual discomfort after extended periods of use.
The present disclosure attempts to address the above focal length and vergence mismatch problem and to provide a sense of realism via a lightweight, compact and small wearable display, and may address one or more of the challenges or needs mentioned herein as well as provide other benefits and advantages. Against the background of the foregoing limitations, there exists a need for a low-cost, lightweight and high-performance AR/VR display that does not suffer from the mismatch of focal length and convergence distance between the displayed digital information and the real-world scene (accommodation-convergence discrepancy), which causes vision discomfort and fatigue for users.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the invention, and therefore, it may contain information that does not form prior art.
OBJECT OF THE INVENTION
An object of the present invention is to provide a low-cost, high-performance and lightweight true stereoscopic see-through type head mount display in which visual comfort is improved with correct depth and focus cues.
Another object of the present invention is to provide a see through head mounted display of small form factor and capable of generating 3D light field imaging with large viewing angle, high frequency, high spatial resolution and extended depth.
Yet another object of the present invention is to provide a light weight, compact and easy to manufacture head mounted display providing high brightness, no color break up, wide viewing angle along with high image resolution or image depth with varying focal length.
Yet another object of the present invention is to provide a head mounted display featuring high resolution, higher sharpness, higher contrast ratio and correct optical focal cues to enable user focus on displayed objects as if those objects are located at intended distance from the user.
Yet another object of the present invention is to provide a light weight and compact form factor head mounted display that exhibits high light efficiency, realism, consumes less power and render enhanced stereoscopic light-field to each eye of user.
Yet another object of the present invention is to provide an easy-to-use, ergonomically designed head mounted display capable of enabling viewing of variable-depth 3D content with the freedom to focus as the user desires.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Accordingly, a see-through head mounted device is presented that is characterized by achieving variable focus and an expanded view. The head mounted device comprises an integral imaging optics; a relay optics; and a hybrid waveguide. Firstly, the integral imaging optics is configured to generate a 3D light field for viewing a plurality of virtual objects with variable focus along an adjustable reference depth plane. Next, the relay optics, which is configured to receive the light field from the integral imaging optics, comprises a spatial light modulator, a variable focus element and a collimator.
The spatial light modulator is configured to modulate the light field, while the variable focus optical element is positioned between the spatial light modulator and a collimator that is configured to collimate the modulated light field with variable focus. The variable focus optical element is configured to adjust the reference depth plane such that the position of the plurality of virtual objects matches that of the adjusted reference depth plane. Finally, the hybrid waveguide is configured to fold the modulated and collimated light field and preserve its variable focus while relaying it towards an eye box with the expanded view.
In another exemplary embodiment, a method for achieving variable focus and an expanded view via a see-through head mounted device is proposed. The method comprises generating a 3D light field for viewing a plurality of virtual objects with variable focus along an adjustable reference depth plane using an integral imaging optics; modulating the light field received from the integral imaging optics by a spatial light modulator; collimating the modulated light field carrying variable focus information by the collimator; adjusting the reference depth plane via a variable focus optical element such that the position of the plurality of virtual objects matches the reference depth plane; and transmitting the modulated and collimated light field via a hybrid waveguide such that the variable focus optical properties of the light field are preserved and an expanded view of the real world, blended with the multiple objects of variable depth, is presented at an eye box.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
These and other features, benefits and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein:
Fig. 1 illustrates optical architecture enabled with light field technology, relay optics and hybrid waveguide, in accordance with an embodiment of the present invention.
Fig. 2 illustrates light field generating integral imaging optics, in accordance with an embodiment of the present invention.
Fig. 3 shows adjustable depth plane achieved using variable focus optical element, in accordance with an embodiment of the present invention.
Fig. 4 shows configuration of hybrid waveguide, in accordance with an embodiment of the present invention.
Fig. 5 is a flowchart illustrating a method for achieving variable focus and an expanded view via a see-through head mounted device, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like is included in the specification solely for the purpose of providing a context for the present invention.
It is not suggested or represented that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide an understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
In order to augment the real-world environment, augmented reality based head mounted devices have become quite popular with their see-through displays (HMDs) that allow the user to view the outside world through the display while, at the same time, complementing the real world with virtual objects/information presented on the same display. While many types of display configurations have been worked upon, the present disclosure focuses on see-through light-field displays coupled with a novel arrangement of one or more hybrid waveguides along with additional optical elements to achieve a distinct set of advantages not achievable heretofore.
With the use of a light field display, a wider field of view is achievable with perceptible depth cues and the ability to occlude portions of the real-world environment, besides reducing the overall form factor of the HMD and thereby making it thin and lightweight. Most importantly, light field displays achieve correct convergence (rotation of the visual axes to converge at an object point), accommodation (the capacity of the eye lens to tune its optical power for focus), binocular disparity (the horizontal shift between retinal images of the same object), and retinal defocus or retinal image blur cues (the image blurring effect that varies with distance from the eye's fixation point to points nearer or farther away).
Apropos, the present disclosure uses light field technology primarily to resolve the vergence-accommodation conflict (VAC), oculomotor and other disorientation symptoms which may otherwise cause eye and body discomfort and dizziness. The technology represents a real-world scene as a 360 degree light field depicting all the light rays in a 3D space, flowing through every point and in every direction; such light fields can render different focal planes visible to the human eye and also enable display of virtual pixels in a different focal plane with improved depth perception and depth sharpness.
The vergence-accommodation conflict has been identified as one of the critical factors in visual discomfort while wearing any near eye device such as an HMD. As known, accommodation cues refer to the focusing action of the eye, where the ciliary muscles change the refractive power of the crystalline lens and thereby minimize the amount of blur for the fixated depth of the scene. On the other hand, the vergence cue refers to the rotation action of the eyes to bring the visual axes inward or outward to intersect at a 3D object of interest at near or far distances. The vergence-accommodation mismatch arises primarily because a 2D surface serving as the image source is located at a fixed distance from the eye, leading to incorrect focus cues and several visual cue conflicts.
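The mismatch described above is commonly quantified in diopters (reciprocal metres), the unit in which accommodation and vergence demands are compared. The short sketch below is illustrative only; the 2 m figure echoes the typical fixed focal distance mentioned earlier in this disclosure, and the function name is our own, not taken from the patent.

```python
# Illustrative sketch (not from the patent): quantifying the
# vergence-accommodation mismatch in diopters for a display whose
# image plane is fixed at 2 m while virtual objects are rendered
# at other distances. All numbers are hypothetical.

FIXED_IMAGE_PLANE_M = 2.0  # typical fixed focal distance of conventional HMDs

def vac_mismatch_diopters(object_distance_m: float,
                          image_plane_m: float = FIXED_IMAGE_PLANE_M) -> float:
    """Accommodation demand is set by the fixed image plane (1/d_image),
    vergence demand by the rendered object (1/d_object); the conflict
    is their difference in diopters."""
    accommodation_D = 1.0 / image_plane_m
    vergence_D = 1.0 / object_distance_m
    return vergence_D - accommodation_D

# An object rendered at 0.5 m on a 2 m display: 2.0 - 0.5 = 1.5 D of conflict.
print(vac_mismatch_diopters(0.5))   # 1.5
# An object at the display distance itself produces no conflict.
print(vac_mismatch_diopters(2.0))   # 0.0
```

Conflicts much beyond a fraction of a diopter are the regime in which the discomfort described above is typically reported, which is why matching the depth plane to the object matters.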
With the use of 3D light-field technology, the present disclosure attempts to resolve this accommodation-convergence discrepancy, as the reconstructed 3D scenes create a 3D image source instead of a 2D display surface for the HMD. With a 2D light source, the eyes focus at a wrong, fixed distance, which introduces accommodation errors and consequently eye strain, nausea, and other eye damage. The light field display projects a highly efficient, high-fidelity digital representation of how light exists in the real world. With light field displays, virtual images are presented with depth cues and at correct focal distances, which are eventually preserved by the hybrid waveguide arrangement of the present disclosure to achieve a life-like visual representation.
In accordance with one general embodiment of present disclosure, the present system and method is directed to a true stereoscopic optical see-through head mounted display (HMD) configured with light field technology that enables upgrading conventional augmented/virtual reality (AR/VR) content to light field content. Predominantly, the technology enables users to refocus the image at different depths with the same light field. To achieve the same, spatial-angular information of light incident on image sensors of the head mounted display is captured. By sampling both the spatial and angular domains, the user is given an option to manipulate focus, perspective, and depth of field (DOF) during post-processing.
Referring to Fig. 1, a compact head mounted device (HMD) 100 is presented comprising a light field generating optics 10 that transmits light through relay optics 30, which further comprises a spatial light modulator 15, collimator 20 and variable focus optical element 18, and through a hybrid waveguide 50 before the light is projected onto the retina 60 to obtain a sharp image of better resolution, depth and wide field of view, based on the combined effects of the overall optical configuration and the concepts of a tunable reference depth plane 9 and total internal reflection. High quality acquisition and reproduction of the original image is critical for three-dimensional display, for which the present disclosure makes use of light field technology.
As shown in Fig. 1, a slim, compact HMD 100 with a large field of view is presented, comprising an integral imaging optics 10 based light field display that is operable to produce a 3D light field and present a blended view of the real world with digital imagery of a plurality of virtual objects 3(a), 3(b), 3(c) (collectively referred to by numeral 3) at variable focus and with an expanded view. (As used herein, the term "3D light field" means a field of a 3D scene that has a set of rays appearing to originate from the 3D scene so as to produce the perception of that scene.) An HMD wearer/user sees a different view from each eye; is able to fixate and focus on multiple objects 3 in the virtual scene at their proper depth; and experiences smooth motion parallax when moving relative to the display.
The integral imaging based light field optics 10 may be configured to transmit the illumination light from a light source towards the eyebox/exit pupil via a novel optical arrangement while also enabling a view of the real world, in order to provide a see-through optical path. The aim is to generate a tunable/adjustable reference depth plane to simulate a 3D light field and provide a comfortable viewing condition for the wearer. Any optical system capable of producing a 3D light field in a very compact, consumer-appropriate form factor can be used in the apparatus and method of the present invention. This effectively addresses the problem of focal length and convergence distance mismatch, and the consequent visual fatigue, found in conventional HMD systems, by way of omni-directional parallax light field rendering capability.
Re-referring to Fig. 1, an optical arrangement developed to accommodate and address the root problem of focal distance mismatch in a small form factor is depicted. Accordingly, an integral imaging optics 10 operable to produce a 3D light field and transmit it to the relay optics 30 for further transmission to a hybrid waveguide 50 is shown. This assists in rendering the digital information with correct focus cues regardless of the distance to the viewer (as explained in detail in later sections).
As generally understood, a "light field" is a collection of light rays appearing to be emitted by a 3D scene so as to create a perception thereof. The light field, in one example embodiment, is generated by projecting images to the eye from many slightly shifted, discrete viewpoints to provide monocular depth cues. Alternately, the direction of the light ray bundle emitted by a real 3D scene may be sampled and viewed from different eye positions to render a true 3D image. The light field image of a 3D surface enables replacement of a typical 2D image surface to potentially overcome the vergence-accommodation discrepancy problem.
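The ray-sampling idea in the preceding paragraph can be pictured with the common two-plane light-field parameterization, in which a ray is indexed by where it crosses a viewpoint (aperture) plane and an image plane. The sketch below is a hypothetical illustration of that geometry, not the patent's optics; all names and numbers are assumptions.

```python
# Hypothetical sketch of the two-plane light-field parameterization:
# a ray is indexed by its crossing of a viewpoint plane (u, v) at z = 0
# and an image plane (s, t) at z = image_plane_z. Geometry is
# illustrative, not taken from the patent.

def ray_through_point(point_xyz, u, v, image_plane_z=1.0):
    """Return (s, t): where the ray from viewpoint (u, v, 0) through a
    3D scene point crosses the image plane at z = image_plane_z."""
    x, y, z = point_xyz
    # Parametrize the ray from the viewpoint and intersect the image plane.
    t_param = image_plane_z / z
    s = u + (x - u) * t_param
    t = v + (y - v) * t_param
    return s, t

# The same 3D point seen from slightly shifted viewpoints lands at
# slightly shifted image coordinates -- the parallax a light-field
# display reproduces to give monocular depth cues.
p = (0.0, 0.0, 3.0)  # a point 3 units in front of the viewpoint plane
for u in (-0.01, 0.0, 0.01):  # three shifted viewpoints
    print(ray_through_point(p, u, 0.0))
```

The per-viewpoint shift in (s, t) is exactly the "slightly shifted, multiple discrete viewpoints" the text refers to.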
In one exemplary embodiment, the light field may be generated by a microdisplay 5 and an array of small microlenses or optical apertures 7 positioned in close proximity next to each other, each rapidly projecting an image to the eye from a different position corresponding to its place in the array, thereby generating the light field, as shown in Fig. 2. Specifically, the microdisplay 5 may use an array of lenses or microlenses 7 positioned in front of the image to display the 3D image (discussed in detail later).
The high definition microdisplay 5 renders a set of 2D elemental images 2(a), 2(b), 2(c) (collectively referred to with numeral 2), each of which represents a different perspective of a 3D image. The conical ray bundles emitted by corresponding pixels in the elemental images 2 intersect and optically create the perception of a 3D point that appears to emit light and occupy 3D space.
When such an array of 2D elemental images 2 is placed in front of an array of microlenses 7, the perspectives are integrated, producing 3D images with full parallax information in both x and y coordinates and free of the convergence-accommodation conflict. By reconstructing any point of the real 3D world scene through the intersection of many rays, the arrangement provides the user with full parallax images, renders a true light field of an optically reconstructed 3D scene, and supplies appropriate focus cues for laying virtual information across the variably generated reference depth plane 9.
In the context of the following description, a microdisplay 5 may be a backlit transmissive display such as an LCD (liquid crystal display) panel, an emissive display such as an OLED (organic light-emitting diode) panel or chip, a reflective display, a diffractive light source, a projector, a beam generator, a laser, a light modulator, etc.
For purposes of general disclosure, the multi-focal-plane display, integral-imaging, and computational multi-layer approaches are commonly referred to as light field displays and are suitable for head-mounted applications. The present disclosure chooses one of the aforementioned light field display technologies for use in the proposed HMD, though the disclosure is not limited to any one selected light field display technology.
As described above, each image pixel of an elemental image 2 emits a conical ray bundle; these bundles intersect to cause an integral perception of the 3D scene, in which the object appears to have a variable depth range about the reference plane 9, appearing to emit light and illuminate the 3D space. Each pixel on the elemental image 2 defines the positional information of the light field, while the microlens array 7 defines the directional information of the light field.
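The division of labour stated above (pixel offset setting the ray's direction, lenslet setting its origin) can be sketched with a paraxial model. Everything here, including the 3 mm focal length and the function name, is an illustrative assumption rather than a parameter of the disclosed device.

```python
# Minimal paraxial sketch (hypothetical parameters): in integral
# imaging, a pixel's offset inside its elemental image sets the ray's
# direction through the lenslet, while the lenslet's position on the
# array sets the ray's origin.

import math

def ray_from_pixel(lenslet_center_mm, pixel_offset_mm, f_mm=3.0):
    """Paraxial model: a pixel displaced by pixel_offset from the axis
    of its lenslet (focal length f) emits a ray through the lenslet
    center at an angle of roughly offset / f radians."""
    angle_rad = math.atan2(pixel_offset_mm, f_mm)
    return lenslet_center_mm, angle_rad  # (origin on the array, direction)

# A pixel 0.05 mm off-axis behind a lenslet centered at 0.5 mm emits a
# ray tilted by a small angle; neighbouring lenslets emit differently
# tilted rays that intersect to reconstruct a 3D point.
origin, angle = ray_from_pixel(lenslet_center_mm=0.5, pixel_offset_mm=0.05)
print(origin, angle)
```

Sampling many (origin, direction) pairs in this way is what gives the display its positional plus directional, i.e. 4D, description of the light field.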
In one example embodiment, a microlens array unit 7 (3D integral imaging optics) may be utilized for optically reconstructing a 3D surface shape from omnidirectional parallax information, enabling viewing of different views of an elemental object along the virtual reference plane 9. On the other hand, any of the known multi-view stereoscopic systems (parallax barriers or lenticular sheets) may also be used to generate binocular views, though with only horizontal parallax, to present 3D information to the user.
However, where the former suffers from low lateral and depth resolution and a narrow depth of view, the latter exhibits a limited viewing angle and limited resolution per view, besides poor performance from the focal length and vergence distance mismatch point of view. Other techniques such as holography, stereoscopy, Free-viewpoint TV (FTV) and the like may also be adopted, though integral imaging (as discussed above) is the preferred method for the illumination requirements of pragmatically generating full parallax 3D images.
While the use of an integral imaging based light field has the advantages of low hardware complexity, continuous viewpoints and full parallax, it is accompanied by limitations of bulky form factor, degraded spatial resolution, constrained viewing angles, a screen door effect and limited image depth. Such uncorrected optical errors adversely impact one's visual capabilities and further accentuate a sub-optimal viewing experience and visual acuity.
In general, the microlens array 7 of the integral imaging optics 10 has a fixed focal length and a fixed central depth plane because of the fixed aperture of the lenses in the microlens array. As the depth of the 3D reconstructed points shifts away from the reference depth plane 9, a limited depth range is observed, causing rapid degradation of spatial resolution. The position of the reference depth plane 9 is therefore varied dynamically such that the viewed objects match its position, with the output, in the form of a full parallax light field, centered on the reference depth plane 9 (as will be discussed in a later section).
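Why the central depth plane is fixed follows directly from the thin-lens (Gaussian) equation: with a fixed lenslet focal length and a fixed gap to the microdisplay, the image distance is fixed too. The sketch below uses hypothetical numbers purely to illustrate this relationship; it is not the patent's prescription.

```python
# Hedged sketch of why the microlens array 7 has a fixed central depth
# plane: by the thin-lens equation, a fixed gap g between the
# microdisplay and a lenslet of fixed focal length f images the display
# to one fixed distance. The numbers below are illustrative assumptions,
# not parameters from the patent.

def central_depth_plane_m(f_m: float, gap_m: float) -> float:
    """Thin-lens equation 1/f = 1/gap + 1/L solved for the image
    distance L, i.e. where the fixed central depth plane is formed."""
    return 1.0 / (1.0 / f_m - 1.0 / gap_m)

# A 3 mm focal-length lenslet with the display 3.05 mm behind it images
# the display roughly 183 mm away; only by changing f or the gap can
# that plane move, hence the role of the variable focus element 18.
print(central_depth_plane_m(0.003, 0.00305))
```

Widening the gap pulls the plane closer, so small mechanical tolerances translate into large depth-plane shifts, which is one reason an electrically tunable element is attractive.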
In the next embodiment, the full parallax light field is projected into a relay optics 30 of the head mounted device 100 that is capable of presenting spatially disparate pupils in the optical path, as shown in Figs. 1-3. The relay optics 30 is configured to relay the light field output via a hybrid waveguide 50 to generate a plurality of expanded light beams, enabling pupil replication besides contributing to a favourable, compact form factor. Thus, using the above light field technology in combination with the relay optics 30 and a hybrid waveguide 50, images with a large viewing angle, large depth and high resolution may be generated.
In one preferred embodiment, the relay optics 30 comprises an imaging unit 25, which further comprises a spatial light modulator (SLM) 15 and a collimator 20 to collimate the modulated light from the SLM 15. This configuration is interchangeable, with the collimator 20 positioned before the SLM 15 (as viewed from the light source) such that the collimated beam is projected to illuminate the SLM under a different set of incident angles. The SLM 15 is configured to modulate the amplitude and/or the phase of the image light of the 3D virtual objects behind the microdisplay 5. Each reflected (or transmitted) light beam carries certain image information produced by the modulation of the SLM 15, and thus both amplitude and phase information is relayed. In accordance with an example embodiment, the collimator 20 may be one or more of a singlet or doublet, a traditional rotationally symmetric lens group, or a monolithic freeform prism, for example, capable of magnifying the light field emitted from the integral imaging optics 10.
In one noteworthy embodiment, a variable focus optical element 18 is disposed at a location optically conjugate to the microdisplay 5 of the integral imaging optics 10, between the optical elements (SLM 15 and collimator 20) of the imaging unit 25. Thus, the modulated/collimated light is made to pass through the variable focus optical element 18, inserted at the reconstructed image plane (reference depth plane) 9, which is capable of providing various levels of focus: a collimated flat wavefront to represent optical infinity, and increasing beam divergence/wavefront curvature to represent closer viewing distances relative to the eye. The element is designed to refocus infinity-focused light at specific radial distances and to impart different amounts of wavefront divergence or convergence to light passing through it, based on the reference depth plane 9 recreated by the variable focus element 18. This enables presenting virtual content to the user over a tunable or adjustable reference depth plane 9 along with an expanded view of the real world.
In accordance with one working embodiment, the virtual objects 3 are formed on the tunable reference depth plane 9, which is made axially adjustable using the variable focus optical element 18 such that the accommodation depth of the eye matches the apparent display of the virtual objects on the depth plane. As the light rays emitted from the relay optics 30 are modulated, they intersect at the reference depth plane 9 to form a large 3D volume before being directed towards the hybrid waveguide 50. The spatially separated elemental images on the microdisplay 5 are configurable to be arranged such that the reference depth plane 9 so created coincides with the position of the variable focus optical element 18, which is conjugate to the microdisplay 5. This allows different field angles and depths to be correspondingly rendered irrespective of the distance from the HMD wearer.
Through use of the variable focus optical element 18, the position of the reference depth plane 9 is adjusted, particularly by tuning the focal length; the position of the reference depth plane 9 is selected such that it contains the augmented reality information (comprising the virtual objects) relative to the position of the object in the real-world scene that the user is observing. Fig. 3 provides a schematic illustration of the integration of the variable focus optical element 18 into the optical path of the light field generating optics 10, whereby the axial position of the reference depth plane 9 of the reconstructed 3D scene is dynamically adjusted based on the user's area of interest; consequently, the viewing angle can be controlled with enhanced depth and a widened view. This overcomes the basic limitations of narrow viewing angle, low resolution and constrained depth of field of integral imaging based light field generating optics 10.
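The focal-length tuning that shifts the reference depth plane 9 can be illustrated with the standard thin-lens relation, 1/f = 1/s_o + 1/s_i. The sketch below is ours, not from the disclosure: the fixed conjugate distance and target depths are illustrative assumptions, and real sign conventions for virtual imagery would differ.

```python
def required_focal_length(conjugate_dist_m: float, target_depth_m: float) -> float:
    """Thin-lens sketch: focal length f the variable focus element 18 would
    adopt so a source plane at conjugate_dist_m is imaged at target_depth_m.
    Uses 1/f = 1/s_o + 1/s_i with all distances taken positive."""
    return 1.0 / (1.0 / conjugate_dist_m + 1.0 / target_depth_m)

# Moving the target depth plane closer to the eye demands a shorter focal
# length from the tunable element (assumed 5 cm conjugate distance)
f_far = required_focal_length(0.05, 2.0)   # virtual content at 2 m
f_near = required_focal_length(0.05, 0.5)  # virtual content at 0.5 m
print(f_near < f_far)  # True
```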
As generally known in the art, the variable focus optical element 18 may be a refractive element, such as a liquid crystal lens, an electro-active lens, a conventional refractive lens with moving elements, a mechanical-deformation-based lens, a birefringent lens or a plurality of fluids with different refractive indices. In another optional arrangement, the variable focus optical element may also comprise a switchable diffractive optical element in which, when a voltage is applied, the molecules reorient so that their refractive indices no longer match that of the host medium, thereby creating a high-frequency switchable diffraction pattern.
Thus, the light field is relayed as a single modulated, collimated beam of light with focal cues towards the hybrid waveguide 50, which amplifies the reconstructed 3D scene with focal depth cues and displays different perspective views at different positions with at least some wavefront divergence at the exit pupil (eye box). Specifically, the collimated beams of light are output from the hybrid waveguide 50 at particular angles and amounts of divergence corresponding to the depth plane associated with the particular waveguide, as the waveguides preserve the optical properties of the light received from the relay optics 30 and present the same to the wearer's retina 60 with an expanded view.
Thus, the integral imaging optics 10 in the present disclosure is configured to output light with variable levels of wavefront divergence, wherein each discrete level of wavefront divergence corresponds to a particular adjustable depth plane 9 (the distance from eye relief at which the image is formed). To construct this wavefront divergence, a combination of the relay group 30 coupled with a stacked hybrid waveguide 50 is proposed for the purposes of the present disclosure.
As is well known, the waveguide 50 provides a favourable and compact form factor as it enables folding of the integral imaging based optics 10 (producing the light field) horizontally to the temple sides of the HMD 100. With multiple folds possible, a more balanced weight distribution is achieved, along with a substantially wider field of view than any other optical arrangement. Functionally, waveguides direct or propagate light into the eyes of the viewer by receiving incoming light via an incoupler and outputting the light via an outcoupler towards the eye. However, when waveguides are used as light-propagating and pupil-expansion optical elements, the quality of the image exiting the waveguide is degraded, with ghost-like image artefacts and some optical losses.
Accordingly, in one preferable solution embodiment, a hybrid waveguide 50 is adopted. A material of higher refractive index, with high levels of transparency and homogeneity, is proposed at the interface where light is outcoupled via the output grating 55 out of the waveguide. Here, the incoupling and outcoupling of light may be achieved using diffractive optical elements, in accordance with one example embodiment. The aim is to widen the exit pupil and at the same time make the overall structure transparent enough to avoid adverse rainbow effects; hence the selection of a hybrid waveguide. The exit pupil is the place where the eye is positioned to see the magnified 3D view, and is located on the plane conjugate with the reference depth plane 9 of the integral imaging based optics 10.
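Diffractive in-coupling of the kind described above is governed by the grating equation, n·sin(θ_m) = sin(θ_i) + mλ/Λ: the grating pitch steers incident light to a steep angle inside the guide so it propagates by total internal reflection. A hedged sketch — the wavelength, pitch and index values below are illustrative assumptions, not parameters from the disclosure:

```python
import math

def diffracted_angle_deg(theta_in_deg, wavelength_nm, pitch_nm, n=1.0, order=1):
    """Grating equation n*sin(theta_m) = sin(theta_in) + m*lambda/pitch:
    angle of the diffracted ray inside a medium of refractive index n."""
    s = (math.sin(math.radians(theta_in_deg)) + order * wavelength_nm / pitch_nm) / n
    if abs(s) > 1.0:
        raise ValueError("evanescent order: not coupled")
    return math.degrees(math.asin(s))

# Illustrative: normally incident 532 nm (green) light on a 400 nm pitch
# input grating is steered to a steep angle inside a high-index (n ~ 1.8)
# core, steep enough to be trapped by total internal reflection.
print(round(diffracted_angle_deg(0.0, 532, 400, n=1.8), 1))  # 47.6
```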
In one specific configuration, the hybrid waveguide 50 includes an input grating, a fold grating, and an output grating 55. The input grating receives the light field from the spatial light modulator (SLM) of the relay optics 30 and propagates it through the waveguide 50 via total internal reflection. Next, the fold grating expands the propagating light field in a direction transverse to the direction of propagation, which eventually translates into a larger exit pupil in both horizontal and vertical directions. Lastly, the output grating 55 releases the light field at uniform intensity for transmission towards the eye box 80.
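The pupil expansion performed by the fold and output gratings can be pictured as repeated partial out-coupling: each total-internal-reflection round trip advances the guided ray by 2·t·tan(θ), and every bounce leaks a replica of the input pupil, tiling the eye box. A rough sketch with assumed thickness, bounce angle and eye-box width (none taken from the specification):

```python
import math

def replica_spacing_mm(thickness_mm: float, bounce_angle_deg: float) -> float:
    """Lateral spacing between successive out-coupled beamlet replicas:
    each TIR round trip advances the ray by 2 * t * tan(theta)."""
    return 2.0 * thickness_mm * math.tan(math.radians(bounce_angle_deg))

def replicas_across(eyebox_mm: float, thickness_mm: float, angle_deg: float) -> int:
    """Rough count of beamlet copies tiling an eye box of the given width."""
    return int(eyebox_mm // replica_spacing_mm(thickness_mm, angle_deg)) + 1

# Illustrative: a 0.5 mm-thick guide at a 50 degree bounce angle tiles a
# 10 mm eye box with multiple replicas of the input pupil
print(replicas_across(10.0, 0.5, 50.0))  # 9
```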
Referring to Fig. 4, a detailed working configuration of the hybrid waveguide 50 is proposed. The exiting spatially modulated, collimated and magnified light field having varying focal depth is received at the input coupler, which may be oriented directly towards, or at an angle relative to, the fold grating. The relay optics 30 provides a substantially wider footprint of the elemental view from the integral imaging light source 10, received by the incoupler of the waveguide 50.
One example embodiment of the hybrid waveguide 50 selects a highly transparent core made of polymer, while the auxiliary layers are thinner than the core and preferably made of organic, inorganic or hybrid polymer. Additionally, in accordance with one noteworthy embodiment, the refractive index of the material forming the core is markedly different from the refractive index of the material forming the auxiliary layers, facilitating better propagation of light in the desired direction.
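The stated index contrast between core and auxiliary layers sets the critical angle for total internal reflection at their interface, θ_c = arcsin(n_aux/n_core); a larger contrast gives a smaller critical angle and hence a wider range of guided field angles. A small sketch with assumed index values (the specification gives no numbers):

```python
import math

def critical_angle_deg(n_core: float, n_aux: float) -> float:
    """TIR critical angle at the core/auxiliary-layer interface; rays
    striking the interface beyond this angle stay guided in the core."""
    return math.degrees(math.asin(n_aux / n_core))

# Higher index contrast -> smaller critical angle -> wider guided FOV
low_contrast = critical_angle_deg(1.60, 1.50)   # ~69.6 deg
high_contrast = critical_angle_deg(1.90, 1.50)  # ~52.1 deg
print(high_contrast < low_contrast)  # True
```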
Both the core and the auxiliary layers may be studded with nanophotonic indentations, such as surface relief features including diffractive optical elements such as diffraction gratings; the high difference in refractive indices between the core and the auxiliary layers allows the formation of diffractive optical elements with light-redirecting capabilities. Further, in one embodiment, at least one grating in at least one of the layers is a multiplexed grating operating on more than one wavelength or image field-of-view range.
In one alternate embodiment, a plurality of waveguides 50(a), 50(b), 50(c) (collectively referred to by numeral 50) may be stacked together to achieve different amounts of wavefront divergence for different depth planes and to output light of different wavelengths, in consonance with the variable depth of the reference depth plane 9 that is determined based on the user's area of viewing interest. Accordingly, a series of linear or rectangular cylindrical waveguides may be stacked together to form a planar 2D waveguide corresponding to a different depth plane. The light is received at the input coupling of the first waveguide of the stack, and the hybrid composition of the waveguide causes the light to acquire a slightly spherical wavefront curvature before it is received by the second hybrid waveguide.
At the second hybrid waveguide, the received collimated light of slightly spherical wavefront curvature is further complemented by the composition of the second hybrid waveguide to create another incremental amount of wavefront curvature, and so on for further waveguide configurations in the stack. Such incremental wavefront curvature, created at each layer of the stacked hybrid waveguide, creates an image of varying focal length at the eye of the wearer. This configuration provides as many perceived focal planes as there are hybrid waveguides available within the stack, creating an output with the appropriate amount of divergence or collimation for the particular depth plane associated with the corresponding waveguide. This serves viewing of virtual objects 3 on varying depth planes that are in consonance with the adjustable reference depth plane 9.
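The incremental curvature accumulated across the stack can be tallied in diopters, each increment pushing the perceived focal plane closer to the wearer. A minimal sketch, assuming (for illustration only) that each layer adds a fixed 0.5 D of wavefront curvature:

```python
def perceived_depth_planes(increments_diopters):
    """Cumulative wavefront curvature after each stacked hybrid waveguide,
    paired with the perceived focal distance 1/curvature (metres).
    A 0 D collimated input would correspond to optical infinity."""
    planes, total = [], 0.0
    for inc in increments_diopters:
        total += inc
        planes.append((total, 1.0 / total))
    return planes

# Three layers of 0.5 D each -> perceived planes at 2.0 m, 1.0 m, ~0.67 m
for curvature, depth_m in perceived_depth_planes([0.5, 0.5, 0.5]):
    print(f"{curvature:.1f} D -> {depth_m:.2f} m")
```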
This novel way of stacking and arranging the light field display, the relay optics 30 and the hybrid waveguide 50 together constitutes a large eye box 80 with a large exit pupil, compensating for optical distortions and asymmetric optics, besides preserving the focal depth in the light that exits the waveguide 50, while overcoming the limitations of small exit pupil and low resolution that light field technology suffers when used alone.
Thus, the light field display relayed through the relay optics towards the stack of hybrid waveguide(s) 50 provides cues to vergence by displaying different images to each eye, and cues to accommodation by outputting the light that forms the images with variable levels of wavefront divergence corresponding to a particular depth plane. The waveguides may be arranged in a plurality of columns and rows in a stack of layers to generate a respective depth plane at a respective distance that syncs with the adjustable reference plane 9 to produce a 4D light field. This type of propagation generates a series of exit sub-beamlets that together form one single expanded light beam directed towards the eye box 80.
In the present case of the layered hybrid waveguide 50 with core and auxiliary layers, the stacking is done such that the output gratings of the layers overlay one another, so that the light field is emitted with the maximized optical power of the hybrid waveguides combined (creating higher-resolution images with higher sharpness and contrast ratio), besides illuminating the retina 60 more uniformly and sequentially with a wider field of view.
To provide comfortable viewing with a large field of view without adding to the weight and component count of the head mounted device, in one embodiment, multiple outputs/reflections of various reflective and/or diffractive surfaces can be aggregated and presented in a frame-sequential configuration, wherein a sequence of frames at high frequency is directed to the wearer's eye, providing the perception of a single coherent scene.
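In such a frame-sequential scheme, the effective refresh rate per depth plane is the native display rate divided by the number of time-multiplexed planes, which bounds how many planes can be multiplexed before flicker becomes perceptible. A trivial arithmetic sketch (the rates are assumptions, not figures from the disclosure):

```python
def per_plane_rate_hz(native_rate_hz: float, num_depth_planes: int) -> float:
    """Frame-sequential multiplexing: each depth plane is refreshed at the
    native display rate divided by the number of planes in the sequence."""
    return native_rate_hz / num_depth_planes

# A hypothetical 240 Hz display cycling through 4 depth planes refreshes
# each plane at 60 Hz, comfortably above common flicker thresholds
print(per_plane_rate_hz(240.0, 4))  # 60.0
```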
In one alternate configuration, one or more optical elements between the hybrid waveguide 50 and the eye may be provided. The optical elements may act to, e.g., correct aberrations in image light emitted from the hybrid waveguide 50, magnify image light emitted from the hybrid waveguide, make some other optical adjustment of image light emitted from the hybrid waveguide 50, or some combination thereof.
In one example embodiment, the optical combiner (e.g. waveguide) placed in front of the eye combines the optical paths of the virtual display and the real scene, thereby enabling an augmented view. The hybrid configuration of the waveguide optimally blends the digital world with the real world with no unwanted artefacts such as rainbows or glows. Further, it minimizes the size and aspect ratio of the entire HMD. Furthermore, with a larger exit pupil and an expanded field of view at multiple depth planes showing multiple virtual objects 3, the HMD 100 is less likely to be sensitive to slight misalignments of the display relative to the user's anatomical pupil, thereby easing prolonged wearing of the HMD.
In one preferable embodiment, a compensator 70 is provided that reverses any deflection caused by the light passing through the relay optics 30 and the hybrid waveguide 50, resulting in no net change in vergence and minimizing aberrations and distortion for the see-through path of the HMD 100. For example, the compensator 70 may have an optical power that is opposite in sign to the sum of the optical powers of the combined relay optics 30 and hybrid waveguide 50 but of the same magnitude, negating the effect of distortions introduced by the aforementioned optical assembly. These optics apply an equal but opposite deflection to the light rays, such that they arrive at the user's eye with no net change in vergence.
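The sign-and-magnitude relation stated above amounts to P_comp = -(P_relay + P_waveguide), so the see-through path sums to zero net optical power. A trivial sketch (the diopter values are illustrative assumptions):

```python
def compensator_power(p_relay_d: float, p_waveguide_d: float) -> float:
    """Optical power (diopters) the compensator 70 must have: equal in
    magnitude and opposite in sign to the display-path optics, so the
    see-through (real-world) path experiences zero net power."""
    return -(p_relay_d + p_waveguide_d)

p_relay, p_wg = 1.25, -0.25          # illustrative assumed powers
p_comp = compensator_power(p_relay, p_wg)
print(p_relay + p_wg + p_comp)       # 0.0 -> no net change in vergence
```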
Also, with the optimized optical power of the compensator 70, light coming from the real world is neither magnified nor refocused while passing through the hybrid waveguide 50. Such compensation can be achieved by any of reflective, refractive or diffractive techniques. Thus, in the solution of the present invention, the combination of light field technology with relay optics and a hybrid waveguide can accurately display simultaneous variable-depth 3D content within a user's environment with reduced vergence-accommodation conflict (VAC), all achievable in a small form factor head mounted display. Using said miniaturised optical hardware, a display experience that is entirely natural to the eye is made possible.
Finally, referring to Fig. 5, a method for achieving variable focus and an expanded view via a see-through head mounted device (100) is illustrated. The method comprises: generating a 3D light field for viewing a plurality of virtual objects (3) with variable focus along an adjustable reference depth plane (9) using an integral imaging optics (10) in step 510. In step 520, the light field received from the integral imaging optics (10) is modulated by a spatial light modulator (15). This modulated light field carrying variable focus information is collimated by the collimator (20) in step 530. In step 540, the reference depth plane (9) is adjusted via a variable focus optical element (18) such that the position of the plurality of virtual objects (3) matches the reference depth plane (9). In step 550, the modulated and collimated light field is transmitted via a hybrid waveguide (50) such that the variable focus optical properties of the light field are preserved and an expanded view of the real world blended with the multiple objects (3) of variable depth is presented at an eye box (80).
In accordance with an embodiment, the head mounted device comprises a memory unit configured to store machine-readable instructions. The machine-readable instructions may be loaded into the memory unit from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and flash drives. Alternately, the machine-readable instructions may be loaded in the form of a computer software program into the memory unit. The memory unit in that case may be selected from a group comprising EPROM, EEPROM and flash memory. Further, a processor is operably connected with the memory unit. In various embodiments, the processor is one of, but not limited to, a general-purpose processor, an application specific integrated circuit (ASIC) and a field-programmable gate array (FPGA).
In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof. It should be understood that the techniques of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium. Suitable computer readable media may include volatile (e.g., RAM) and/or non-volatile (e.g., ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
It should also be understood that, unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "controlling" or "obtaining" or "computing" or "storing" or "receiving" or "determining" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Various modifications to these embodiments are apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings, but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the present invention.
CLAIMS:
We Claim:

1) A see-through head mounted device (100) characterized in achieving variable focus and an expanded view, comprising:
an integral imaging optics (10) configured to generate 3D light field for viewing plurality of virtual objects (3) with variable focus along an adjustable reference depth plane (9);
a relay optics (30) comprising:
a spatial light modulator (15) for modulating the light field;
a variable focus optical element (18) positioned between the spatial light modulator (15) and a collimator (20); and
the collimator (20) for collimating the modulated light field with variable focus;
wherein the variable focus optical element (18) is configured to adjust the reference depth plane (9) such that position of the plurality of virtual objects (3) is matched with that of the adjusted reference depth plane (9); and
a hybrid waveguide (50) configured to fold the modulated and collimated light field and preserve the variable focus of said light field to relay towards an eye box (80) with the expanded view.

2) The see-through head mounted device (100), as claimed in claim 1, wherein the integral imaging optics (10) further comprises:
a microdisplay (5) configured to render a set of 2D elemental images (2) from plurality of virtual objects (3); and
a microlens array (7) configured to integrate perspectives from the 2D elemental images (2) to produce true 3D image of the plurality of virtual objects (3) with omnidirectional parallax.

3) The see-through head mounted device (100), as claimed in claim 1, wherein the microdisplay (5) is selected from a liquid crystal display panel, an organic light-emitting diode, a reflective display, a diffractive light source, a projector, a laser or beam generator, or a light modulator.

4) The see-through head mounted device (100), as claimed in claim 1, wherein the spatial light modulator (15) is configured to modulate amplitude and phase of the light field.

5) The see-through head mounted device (100), as claimed in claim 1, wherein the collimator (20) is configured to collimate and magnify the modulated light carrying variable depth cues before relaying towards the hybrid waveguide (50).

6) The see-through head mounted device (100), as claimed in claim 1, wherein the variable focus optical element (18) is disposed between the spatial light modulator (15) and the collimator (20) at a location optically conjugate to the microdisplay (5).

7) The see-through head mounted device (100), as claimed in claim 1, wherein the plurality of virtual objects are rendered at variable depth on the reference depth plane (9) that is axially adjusted by the variable focus optical element (18).

8) The see-through head mounted device (100), as claimed in claim 1, wherein the variable focus optical element (18) is a liquid crystal lens, an electro-active lens, a conventional refractive lens with moving elements, a mechanical-deformation-based lens, birefringent lens, switchable diffractive optical element or a plurality of fluids with different refractive indices.

9) The see-through head mounted device (100), as claimed in claim 1, wherein the hybrid waveguide (50) comprises a stack of a plurality of waveguides, each waveguide comprising an input grating, a fold grating and an output grating, wherein the waveguide is studded with surface relief features to enable uniform light propagation.

10) The see-through head mounted device (100), as claimed in claim 9, wherein the hybrid waveguide (50) comprises a higher refractive index material with higher transparency at an interface thereof compared to the waveguide core, such that total internal reflection is achieved via the waveguide core while the transparent interface minimizes chromatic aberrations.

11) The see-through head mounted device (100), as claimed in claim 9, wherein the hybrid waveguide (50) is configured to create incremental amount of wavefront curvature besides preserving optical properties of incoming light field from the relay optics (30) such that variable depth planes are created in consonance with the adjustable reference depth plane (9).

12) The see-through head mounted device (100), as claimed in claim 1, further comprising a compensator (70) configured to reverse deflection(s) produced in light field travelling through the relay optics (30) and the hybrid waveguide (50) to render no net change in optical power of the light field directed towards the eye box (80).

13) The see-through head mounted device (100), as claimed in claim 1, wherein the reference depth plane (9) is adjusted axially based on the wearer's area of viewing interest.

14) A method for achieving variable focus and an expanded view via a see-through head mounted device (100), comprising:
generating 3D light field for viewing plurality of virtual objects (3) with variable focus along an adjustable reference depth plane (9) using an integral imaging optics (10);
modulating the light field received from the integral imaging optics (10) by a spatial light modulator (15);
collimating the modulated light field carrying variable focus information by the collimator (20);
adjusting the reference depth plane (9) via a variable focus optical element (18) such that the position of the plurality of virtual objects (3) is matched with the reference depth plane (9); and
transmitting the modulated and collimated light field via a hybrid waveguide (50) such that optical variable focus properties of light field are preserved and an expanded view of real world blended with the multiple objects (3) of variable depth is presented at an eye box (80).

Documents

Application Documents

# Name Date
1 202321004884-PROVISIONAL SPECIFICATION [25-01-2023(online)].pdf 2023-01-25
2 202321004884-FORM FOR STARTUP [25-01-2023(online)].pdf 2023-01-25
3 202321004884-FORM FOR SMALL ENTITY(FORM-28) [25-01-2023(online)].pdf 2023-01-25
4 202321004884-FORM 1 [25-01-2023(online)].pdf 2023-01-25
5 202321004884-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-01-2023(online)].pdf 2023-01-25
6 202321004884-DRAWINGS [25-01-2023(online)].pdf 2023-01-25
7 202321004884-ENDORSEMENT BY INVENTORS [23-01-2024(online)].pdf 2024-01-23
8 202321004884-DRAWING [23-01-2024(online)].pdf 2024-01-23
9 202321004884-COMPLETE SPECIFICATION [23-01-2024(online)].pdf 2024-01-23
10 202321004884-FORM-9 [24-01-2024(online)].pdf 2024-01-24
11 202321004884-STARTUP [29-01-2024(online)].pdf 2024-01-29
12 202321004884-FORM28 [29-01-2024(online)].pdf 2024-01-29
13 202321004884-FORM 18A [29-01-2024(online)].pdf 2024-01-29
14 Abstact.jpg 2024-02-01
15 202321004884-FER.pdf 2024-02-21
16 202321004884-OTHERS [26-03-2024(online)].pdf 2024-03-26
17 202321004884-FER_SER_REPLY [26-03-2024(online)].pdf 2024-03-26
18 202321004884-CLAIMS [26-03-2024(online)].pdf 2024-03-26
19 202321004884-ABSTRACT [26-03-2024(online)].pdf 2024-03-26
20 202321004884-US(14)-HearingNotice-(HearingDate-29-08-2024).pdf 2024-08-01
21 202321004884-FORM-26 [07-08-2024(online)].pdf 2024-08-07
22 202321004884-Written submissions and relevant documents [03-09-2024(online)].pdf 2024-09-03
23 202321004884-PatentCertificate20-01-2025.pdf 2025-01-20
24 202321004884-IntimationOfGrant20-01-2025.pdf 2025-01-20
25 202321004884-FORM FOR SMALL ENTITY [09-04-2025(online)].pdf 2025-04-09
26 202321004884-EVIDENCE FOR REGISTRATION UNDER SSI [09-04-2025(online)].pdf 2025-04-09

Search Strategy

1 sserE_21-02-2024.pdf

ERegister / Renewals

3rd: 27 Mar 2025 (25/01/2025 to 25/01/2026)
4th: 27 Mar 2025 (25/01/2026 to 25/01/2027)
5th: 27 Mar 2025 (25/01/2027 to 25/01/2028)
6th: 27 Mar 2025 (25/01/2028 to 25/01/2029)
7th: 27 Mar 2025 (25/01/2029 to 25/01/2030)
8th: 27 Mar 2025 (25/01/2030 to 25/01/2031)
9th: 27 Mar 2025 (25/01/2031 to 25/01/2032)
10th: 27 Mar 2025 (25/01/2032 to 25/01/2033)