Abstract: Disclosed is a method (200) and apparatus (100) for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene. The method comprises: capturing (210) at least a first two-dimensional (2D) image and a second 2D image; obtaining (220) a composite image of a scene by combining the at least first image and the second image; generating (230) a composite luminosity map of the composite image, comprising generating (310, 320) a first luminosity map and a second luminosity map for the captured first image and the captured second image, respectively, normalizing (330) respective first and second luminosity values for each pixel on the at least first and second images using a luminosity processor unit, and combining (340) said first and second luminosity maps; obtaining (240) depth information of the composite image by a depth sensor; and rendering (250) the composite image of the scene surrounding the device onto a 3D rendering environment. Ref. Fig. 2
Description:
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See Section 10, Rule 13)
“METHOD AND APPARATUS FOR GENERATING A STEREOSCOPIC THREE-DIMENSIONAL (3D) VIEWING BASED ON COMPOSITE IMAGE VISUALIZATION OF A SCENE”
TESSERACT IMAGING LIMITED, a corporation organised and existing under the laws of India, whose address is - 5 TTC Industrial Area, Reliance Corporate IT Park, Thane Belapur Road, Ghansoli, Navi Mumbai, Maharashtra – 400701, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present disclosure generally relates to representation of visual information; and more specifically to methods for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene. Furthermore, the present disclosure also relates to apparatuses for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene.
BACKGROUND OF THE INVENTION
Extended reality, or XR, is a collective term that refers to immersive technologies, including virtual reality (VR), augmented reality (AR) and mixed reality (MR). Advancements in the field of extended reality are changing the way humans interact, as they enable users to create, collaborate and explore in computer-generated environments like never before.
In stereoscopic imaging, multiple images are overlaid to create a real-world experience which includes appropriate luminosity, depth or an illusion of a three-dimensional (3D) image. There exist multiple techniques to incorporate stereoscopic information into an existing field of view. However, such existing techniques for creating a 3D view or multiple 3D views of a scene are not efficient in rendering stereoscopic views onto user displays. Therefore, there exists a need for techniques that achieve a sensation of real-world imagery.
Additionally, in augmented reality (AR) applications of stereoscopic imaging, virtual objects are overlaid onto a digital representation of a real-world environment. In some instances, virtual objects may be generated and superimposed in the digital representation of the real-world environment, such that digital representations of real-world physical objects and the virtual objects are displayed together. However, in general, the rendering of virtual objects onto the digital representation of the real-world environment is not done in a realistic manner. Thus, producing AR or MR technology that facilitates a comfortable, natural-feeling presentation of virtual image elements amongst other virtual or real-world imagery elements is challenging.
Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with existing techniques for generating extended-reality images.
SUMMARY OF THE INVENTION
The present disclosure seeks to provide an improved method and apparatus for generating a stereoscopic three-dimensional (3D) viewing. An aim of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in the prior art.
In one aspect, an embodiment of the present disclosure provides a method for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, comprising the steps of:
capturing at least a first two-dimensional (2D) image and a second 2D image using one or more image capturing units;
obtaining a composite image of a scene surrounding an electronic device using an image sensor unit, wherein the composite image comprises a combination of the at least first image and the second image;
generating a composite luminosity map of the composite image, comprising
generating a first luminosity map for the captured first image using a luminosity sensor,
generating a second luminosity map for the captured second image using said luminosity sensor,
normalizing respective first and second luminosity values for each pixel on the at least first and second images using a luminosity processor unit, and
combining said first and second luminosity maps using said luminosity processor unit,
obtaining depth information of the composite image by a depth sensor; and
rendering the composite image of the scene surrounding the device onto a 3D rendering environment using a rendering unit.
In another aspect, an embodiment of the present disclosure provides an apparatus for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, comprising:
an electronic device;
one or more image capturing units configured to acquire at least a first two-dimensional (2D) image and a second 2D image;
an image sensor unit for obtaining a composite image out of a combination of the at least first image and the second image;
a luminosity sensor unit for generating at least a first luminosity map and a second luminosity map;
a luminosity processor unit for processing respective luminosity values and for combining first and second luminosity maps;
a depth sensor for obtaining depth information of the composite image; and
a rendering unit for rendering the composite image of the scene surrounding the device onto a 3D rendering environment.
Embodiments of the present disclosure substantially eliminate or at least partially address the aforementioned problems in the prior art, and facilitate an improved image reconstruction to generate more realistic and stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene.
Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features and characteristics of the disclosure are set forth in the appended claims. The disclosure itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying figures. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers. One or more embodiments are now described, by way of example only, with reference to the accompanying figures wherein like reference numerals represent like elements and in which:
FIG. 1 is a block diagram illustrating a hardware structure of an apparatus for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, according to an embodiment of the present disclosure.
Fig. 2 is a flow chart illustrating a method for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, according to an embodiment of the present disclosure.
Fig. 3 is a flow chart illustrating a method for generating a composite luminosity map of a composite image, according to an embodiment of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF THE INVENTION
While the embodiments in the disclosure are subject to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the figures and will be described below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprises”, “comprising”, or any other variations thereof used in the disclosure, are intended to cover a non-exclusive inclusion, such that a device, system, or assembly that comprises a list of components does not include only those components but may include other components not expressly listed or inherent to such system, assembly, or device. In other words, one or more elements in a system or device preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or device.
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practising the present disclosure are also possible.
The present disclosure relates to a method for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene. The method comprises the steps of: capturing at least a first two-dimensional (2D) image and a second 2D image using one or more image capturing units; obtaining a composite image of a scene surrounding an electronic device using an image sensor unit, wherein the composite image comprises a combination of the at least first image and the second image; generating a composite luminosity map of the composite image; obtaining depth information of the composite image by a depth sensor; and rendering the composite image of the scene surrounding the device onto a 3D rendering environment using a rendering unit. The step of generating the composite luminosity map of the composite image further comprises generating a first luminosity map for the captured first image using a luminosity sensor, generating a second luminosity map for the captured second image using said luminosity sensor, normalizing respective first and second luminosity values for each pixel on the at least first and second images using a luminosity processor unit, and combining said first and second luminosity maps using said luminosity processor unit.
Moreover, the present disclosure relates to an apparatus for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, comprising: an electronic device; one or more image capturing units configured to acquire at least a first two-dimensional (2D) image and a second 2D image; an image sensor unit for obtaining a composite image out of a combination of the at least first image and the second image; a luminosity sensor unit for generating at least a first luminosity map and a second luminosity map; a luminosity processor unit for processing respective luminosity values and for combining first and second luminosity maps; a depth sensor for obtaining depth information of the composite image; and a rendering unit for rendering the composite image of the scene surrounding the device onto a 3D rendering environment.
Fig. 1 is a block diagram illustrating a hardware structure of the apparatus for stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene. The apparatus (100) includes an electronic device (107), one or more image capturing units (101), an image sensor unit (102), a luminosity sensor unit (103), a luminosity processor unit (104), a depth sensor (108) and a rendering unit (105).
The apparatus comprises one or more image capturing units that are configured to capture at least a first two-dimensional (2D) image and a second 2D image of an environment where the electronic device is located. The one or more image capturing units are communicably coupled to the electronic device. In an embodiment, the one or more image capturing units are integrated with the electronic device, i.e., the electronic device may include a camera. In such a case, the electronic device is employed to capture the environment via the one or more image capturing units, thereby acquiring a plurality of images of the environment where the device is located.
In another embodiment, the electronic device and the one or more image capturing units are implemented in distinct devices. In yet another embodiment of the present disclosure, the one or more image capturing units are configured to capture the environment continuously, i.e., capture a video of the environment.
In an embodiment, the one or more image capturing units comprise one or more cameras and camera-pose-tracking means, wherein the camera-pose-tracking means are employed to track a pose of a given image capturing unit (namely, the position and orientation of the given image capturing unit). Notably, a given pose of an image capturing unit is representative of a viewpoint from which a given image is captured.
The apparatus comprises an image sensor unit that is employed for obtaining a composite image by combining the at least first image and the second image. It will be appreciated that such a combination of the at least first image and the second image may be performed such that an extent of an object represented in the composite image exceeds the extent depicted in either single image.
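By way of a non-limiting illustration only, a minimal sketch of one way such a combination could be realised, assuming the two captured images are equal-height RGB NumPy arrays and using simple side-by-side concatenation (the function name combine_images is merely illustrative; a practical implementation would register and blend any overlapping region):

```python
import numpy as np

def combine_images(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Combine two equal-height H x W x 3 images into one composite whose
    horizontal extent exceeds that of either input image on its own."""
    if first.shape[0] != second.shape[0]:
        raise ValueError("images must share the same height")
    # Simple horizontal concatenation; a practical image sensor unit would
    # typically register and blend the overlapping region instead.
    return np.concatenate([first, second], axis=1)
```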
The apparatus comprises a luminosity sensor unit that is employed to generate at least a first luminosity map and a second luminosity map. In this regard, a luminosity sensor is employed to generate the first luminosity map for the first image. Similarly, the luminosity sensor is employed to generate the second luminosity map for the second image. Moreover, the apparatus comprises a luminosity processor unit that is employed to process respective luminosity values, and to combine the first luminosity map and the second luminosity map. The apparatus comprises a depth sensor that is employed to obtain depth information of the composite image. Furthermore, the apparatus comprises a rendering unit (105) that is employed to render the composite image of the scene surrounding the device onto a 3D rendering environment.
In an embodiment, the electronic device is operatively coupled to an extended-reality controller, in whose pointing direction at least one point of interest may be generated.
In an embodiment, the apparatus may further comprise a storage means for temporary storage of scene, image, or image-related information, such as depth or luminosity.
In an exemplary embodiment, the encoded image data may comprise several individual images and their associated luminosity and depth information, the decoding of which may directly yield the information; alternatively, live capture and rendering techniques may be used. In a non-limiting instance, presently known stereo cameras can provide left-eye and right-eye images, and correlation methods may be used to extract depth.
It may be understood by a person skilled in the art that the FIG. 1 includes a simplified architecture of the apparatus 100 for sake of clarity, which should not unduly limit the scope of the claims herein. It is to be understood that the specific implementation of the apparatus 100 is provided as an example and is not to be construed as limiting. The person skilled in the art will recognize many variations, alternatives, and modifications of embodiments of the present disclosure.
The embodiments of the present disclosure provide a method for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene. As illustrated in Fig. 2, the method for generating a stereoscopic 3D viewing includes actions in the following blocks.
At block 210, at least a first two-dimensional (2D) image and a second 2D image are captured using one or more image capturing units.
At block 220, a composite image of a scene surrounding an electronic device is obtained using an image sensor unit. In this regard, the composite image comprises a combination of the at least first image and the second image.
At block 230, a composite luminosity map of the composite image is generated.
In an embodiment, normalizing respective first and second luminosity values for each pixel on the at least first and second images can be done according to a dynamic range of colours in the at least first and second images. This enables the normalized luminosities to correspond to the dynamic range of the colours. When the colour of the virtual object is modulated based on such a normalized luminosity, the colour of the virtual object appears more realistic. As an example, let us assume that the overall luminosities of all the points in at least the first and second images lie in a range of 20 to 80. In case the dynamic range of the colours lies in a range of 0 to 255 (that is, for 8-bit colour values), the overall luminosities of all the points can be normalized according to the range of 0 to 255.
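By way of a non-limiting illustration only, a minimal sketch of such a normalization, assuming the luminosity values are held in a NumPy array and are linearly rescaled onto an 8-bit dynamic range (the function name normalize_luminosity is merely illustrative):

```python
import numpy as np

def normalize_luminosity(luma: np.ndarray,
                         out_min: float = 0.0,
                         out_max: float = 255.0) -> np.ndarray:
    """Linearly rescale per-pixel luminosity values (e.g. spanning 20 to 80)
    onto the dynamic range of the colours (e.g. 0 to 255 for 8-bit colour)."""
    lo, hi = float(luma.min()), float(luma.max())
    if hi == lo:  # flat luminosity map: avoid division by zero
        return np.full_like(luma, out_min, dtype=np.float32)
    scaled = (luma.astype(np.float32) - lo) / (hi - lo)
    return scaled * (out_max - out_min) + out_min
```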
Optionally, in the method, the step of generating a composite luminosity map of the composite image comprises calculating the overall luminosity of each pixel by averaging the different luminosities of corresponding pixels in the at least first and second images. Herein, the term "luminosity" refers to a brightness of a given point. It will be appreciated that the luminosity of a same point can differ when viewed from different directions, due to properties of a material from which that point is made, and a positioning of light source(s) in the real-world environment.
At block 240, depth information of the composite image is obtained by a depth sensor. At block 250, the composite image of the scene surrounding the device is rendered onto a 3D rendering environment using a rendering unit.
Thus, the method comprises obtaining, using a depth sensor, depth information of the composite image; and processing the images, based on the corresponding viewpoints and the depth information, to render the composite image of the scene.
Adding depth information to a signal containing image data allows for the generation of multiple 3D image views (for stereo and autostereo applications), and also allows for post-transmission adjustment of the data to fit any display or rendering unit within the stereoscopic apparatus.
Alternatively, optionally, the images comprise pairs of stereo images, wherein obtaining depth information of the composite image comprises utilizing binocular disparities between the at least first and second images. It will be appreciated that techniques for creating the 3D rendering environment of the real-world environment are well-known in the art.
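By way of a non-limiting illustration only, a minimal sketch of recovering depth from binocular disparity, assuming a rectified stereo pair with a per-pixel disparity map already computed (the focal length in pixels and the camera baseline are illustrative parameters; the function name is hypothetical):

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray,
                         focal_length_px: float,
                         baseline_m: float) -> np.ndarray:
    """Convert a per-pixel disparity map (in pixels) between the first and
    second images of a rectified stereo pair into metric depth: Z = f * B / d."""
    disparity = disparity.astype(np.float32)
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0  # zero disparity corresponds to a point at infinity
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```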
Referring to Fig. 3, generating the composite luminosity map of the composite image comprises the following steps (a sketch of these steps is given after this list):
At block 310, a first luminosity map is generated for the captured first image using a luminosity sensor.
At block 320, a second luminosity map is generated for the captured second image using said luminosity sensor.
At block 330, the respective first and second luminosity values for each pixel on the at least first and second images are normalized using the luminosity processor unit.
At block 340, said first and second luminosity maps are combined using said luminosity processor unit.
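By way of a non-limiting illustration only, a minimal sketch of blocks 310-340, assuming equal-sized RGB images held as NumPy arrays, a Rec. 709 luma approximation for the per-pixel luminosity, and pixel-wise averaging as the combining step (all function names are merely illustrative):

```python
import numpy as np

def luminosity_map(image: np.ndarray) -> np.ndarray:
    """Per-pixel luminosity of an H x W x 3 RGB image (Rec. 709 luma weights)."""
    return image[..., 0] * 0.2126 + image[..., 1] * 0.7152 + image[..., 2] * 0.0722

def normalize(luma: np.ndarray, out_max: float = 255.0) -> np.ndarray:
    """Block 330: rescale luminosity values onto the colour dynamic range 0..out_max."""
    lo, hi = float(luma.min()), float(luma.max())
    return np.zeros_like(luma) if hi == lo else (luma - lo) / (hi - lo) * out_max

def composite_luminosity_map(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Blocks 310-340: generate per-image luminosity maps, normalize them and
    combine them pixel-wise (here by averaging) into one composite map."""
    first_map = normalize(luminosity_map(first.astype(np.float32)))    # blocks 310, 330
    second_map = normalize(luminosity_map(second.astype(np.float32)))  # blocks 320, 330
    return (first_map + second_map) / 2.0                              # block 340
```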
Optionally, the method comprises displaying a virtual object in the said 3D rendering environment.
Additionally, optionally, obtaining a composite image from the at least first image and second image comprises: storing colour values of each pixel in the at least first image and second image; generating an intermediate image based on the 3D rendering environment; identifying at least one point of interest in the intermediate image, wherein said point of interest is based on at least one of: a user's gaze or a direction in which an extended-reality controller is pointing; and superimposing the virtual object on the at least one point of interest in the intermediate image, on a vertical portion of the rendering environment.
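By way of a non-limiting illustration only, a minimal sketch of the superimposing step, assuming the intermediate image and the virtual object are RGB NumPy arrays and the point of interest is given as (row, column) pixel coordinates (the function name is hypothetical):

```python
import numpy as np

def superimpose_virtual_object(intermediate: np.ndarray,
                               virtual_object: np.ndarray,
                               point_of_interest: tuple[int, int]) -> np.ndarray:
    """Paste a small H x W x 3 virtual-object patch centred on the identified
    point of interest (row, col) of the intermediate image."""
    out = intermediate.copy()
    h, w = virtual_object.shape[:2]
    r0 = max(point_of_interest[0] - h // 2, 0)
    c0 = max(point_of_interest[1] - w // 2, 0)
    r1 = min(r0 + h, out.shape[0])
    c1 = min(c0 + w, out.shape[1])
    out[r0:r1, c0:c1] = virtual_object[:r1 - r0, :c1 - c0]
    return out
```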
Additionally, optionally, the step of displaying the virtual object in said 3D rendering environment comprises modulating the virtual object in line with the composite luminosity map of the composite image. This allows for adjusting the colour of the virtual object to match the lighting of the background view in the 3D rendering environment. In other words, this allows the virtual object to blend well with digital representations of real-world physical objects present in the real-world environment. “Blend” here essentially means that the colour intensity of the virtual object matches and adapts as per the actual lighting conditions in the real-world environment.
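By way of a non-limiting illustration only, a minimal sketch of one way such modulation could be carried out, assuming the composite luminosity map has been normalized to 0-255 and the virtual object's colour intensity is scaled by the luminosity at the point of interest (all names are merely illustrative):

```python
import numpy as np

def modulate_virtual_object(virtual_object: np.ndarray,
                            composite_luminosity: np.ndarray,
                            point_of_interest: tuple[int, int]) -> np.ndarray:
    """Scale the virtual object's colour intensity by the normalized luminosity
    of the composite image at the point of interest, so its brightness follows
    the real-world lighting at that location."""
    local_luma = float(composite_luminosity[point_of_interest])  # in 0..255
    factor = local_luma / 255.0
    modulated = virtual_object.astype(np.float32) * factor
    return np.clip(modulated, 0, 255).astype(np.uint8)
```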
Moreover, optionally, the method step of obtaining the at least first and second image comprises identifying at least one point of interest in the 3D rendering environment, based on a first viewpoint and a second viewpoint. In this regard, the first viewpoint and the second viewpoint correspond to a head pose of the user, while the user's gaze corresponds to a gaze direction in which the user is looking.
Alternatively, optionally, identifying the at least one point of interest in the 3D rendering environment is based on a first viewpoint and a direction in which an extended-reality controller is pointing. Throughout the present disclosure, the term "extended reality" encompasses virtual reality (VR), augmented reality (AR) and mixed reality (MR). The extended-reality controller is a controller that allows the user to interact with an extended-reality environment being displayed to the user via an electronic device. Examples of the electronic device include a head-mounted display device and a smart wearable glass.
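By way of a non-limiting illustration only, a minimal sketch of identifying a point of interest from the controller's pointing direction, assuming the 3D rendering environment is available as a set of 3D points and the controller pose provides a ray origin and direction (the function name is hypothetical):

```python
import numpy as np

def point_of_interest_from_controller(origin: np.ndarray,
                                      direction: np.ndarray,
                                      scene_points: np.ndarray) -> np.ndarray:
    """Among the 3D points of the rendering environment (N x 3), pick the one
    closest to the ray cast from the controller along its pointing direction."""
    d = direction / np.linalg.norm(direction)
    t = (scene_points - origin) @ d   # projection of each point onto the ray
    t = np.clip(t, 0.0, None)         # only consider points in front of the controller
    nearest_on_ray = origin + t[:, None] * d
    dist = np.linalg.norm(scene_points - nearest_on_ray, axis=1)
    return scene_points[np.argmin(dist)]
```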
Optionally, in an AR implementation, the virtual object is to be displayed at the electronic device after the modulation on an optical-see-through view of the real-world environment.
Alternatively, optionally, in an MR implementation, the virtual object is to be displayed at the electronic device after the modulation on a video-see-through view of the real-world environment.
Optionally, the rendering of the composite image of the scene surrounding the device onto a 3D rendering environment comprises determining a distance to a nearest foreground object indicated by the depth information of the composite image. In this regard, one or more foreground objects may be determined by the apparatus. Additionally, optionally, the apparatus may determine one or more background objects. A given scene may contain background objects such as distant buildings, trees, etc. The scene may additionally or alternatively include foreground objects, such as pedestrians and vehicles. For objects detected in the scene, the apparatus (based on the depth information obtained using a depth sensor) may determine which objects are foreground objects and/or which objects are background objects. In an embodiment, the apparatus (based on the depth information obtained using the depth sensor) may identify any objects closer than a threshold distance as foreground objects and any objects further than the threshold distance as background objects.
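By way of a non-limiting illustration only, a minimal sketch of such threshold-based classification and of determining the distance to the nearest foreground object, assuming per-object depths in metres and an illustrative 5 m threshold (all names and values are merely illustrative):

```python
import numpy as np

def classify_objects(object_depths_m: dict[str, float],
                     threshold_m: float = 5.0) -> dict[str, str]:
    """Label each detected object as foreground or background: objects nearer
    than the threshold distance are foreground, the rest are background."""
    return {name: ("foreground" if depth < threshold_m else "background")
            for name, depth in object_depths_m.items()}

def nearest_foreground_distance(depth_map_m: np.ndarray,
                                threshold_m: float = 5.0) -> float | None:
    """Distance to the nearest foreground object indicated by the depth map."""
    foreground = depth_map_m[depth_map_m < threshold_m]
    return float(foreground.min()) if foreground.size else None
```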
Examples of implementation of the present invention may also include imaging in scientific and medical applications. In a non-limiting example, in astronomical modelling, a galaxy may be scaled to be presented on the 3D display device, and the nearest and furthest regions of the displayed galaxy can be distorted in order to maintain the geometric perceived depth of the region of interest. Similarly, where medical devices utilise 3D display devices, for example in remote-controlled keyhole surgery, it is important that the geometric perceived depth of the region of interest, the region in which the surgery is taking place, is not distorted. This method allows the entire scene to be displayed without distorting the region in which the surgery is taking place.
It will be appreciated by those skilled in the art that the methods and apparatus disclosed herein are applicable to a variety of formats for stereoscopic content including, but not limited to, multi-tile formats, two-tile formats, interlace formats, side-by-side formats, field-sequential formats, etc. Neither the type of data to be transmitted nor the method of using the data is limited to any specific format.
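By way of a non-limiting illustration only, a minimal sketch of packing a left/right stereo pair into a half-resolution side-by-side frame, one of the transport formats mentioned above, assuming equal-sized RGB views with an even width (the function name is hypothetical):

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack a left/right stereo pair (each H x W x 3, even W) into a single
    H x W x 3 side-by-side frame by keeping every other column of each view."""
    half_left = left[:, ::2]
    half_right = right[:, ::2]
    return np.concatenate([half_left, half_right], axis=1)
```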
Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural.
Equivalents
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding the description may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B”.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated in the description.
CLAIMS:
We claim:
1. A method (200) for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, comprising the steps of:
capturing (210) at least a first two-dimensional (2D) image and a second 2D image using one or more image capturing units (101);
obtaining (220) a composite image of a scene surrounding an electronic device (107) using an image sensor unit (102), wherein the composite image comprises a combination of the at least first image and the second image;
generating (230) a composite luminosity map of the composite image, comprising
generating a first luminosity map (310) for the captured first image using a luminosity sensor unit (103),
generating a second luminosity map (320) for the captured second image using said luminosity sensor unit (103),
normalizing respective first and second luminosity values (330) for each pixel on the at least first and second images using a luminosity processor unit (104), and
combining said first and second luminosity maps (340) using said luminosity processor unit (104);
obtaining (240) depth information of the composite image by a depth sensor (108); and
rendering (250) the composite image of the scene surrounding the device onto a 3D rendering environment using a rendering unit (105).
2. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 1, comprising displaying a virtual object in the said 3D rendering environment.
3. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 2, wherein displaying the virtual object in the said 3D rendering environment comprises modulating the virtual object in line with the composite luminosity map of the composite image.
4. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 1, wherein obtaining the at least first and second image comprises identifying at least one point of interest in the 3D rendering environment, based on a first viewpoint and a second viewpoint.
5. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 4, wherein identifying the at least one point of interest in the 3D rendering environment is based on a first viewpoint and a direction in which an extended-reality controller operatively coupled to the electronic device (107), is pointing.
6. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 2, wherein obtaining a composite image from the at least first image and second image comprises:
storing colour values of each pixel in the at least first image and second image;
generating an intermediate image based on the 3D rendering environment;
identifying at least one point of interest in the intermediate image, wherein said point of interest is based on at least one of: a user's gaze or a direction in which an extended-reality controller operatively coupled to the electronic device (107), is pointing; and
superimposing the virtual object on the at least one point of interest in the intermediate image, on a vertical portion of the rendering environment.
7. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 1, wherein generating a composite luminosity map of the composite image comprises calculating the overall luminosity of each pixel by averaging the different luminosities of corresponding pixels in the at least first and second images.
8. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 1, wherein obtaining depth information of the composite image comprises utilizing binocular disparities between the at least first and second images.
9. The method (200) for generating stereoscopic 3D viewing based on composite image visualization of a scene as claimed in claim 1, wherein rendering the composite image of the scene surrounding the device onto a 3D rendering environment comprises determining a distance to a nearest foreground object indicated by the depth information of the composite image.
10. An apparatus (100) for generating a stereoscopic three-dimensional (3D) viewing based on composite image visualization of a scene, comprising:
an electronic device (107);
one or more image capturing units (101) configured to acquire at least a first two-dimensional (2D) image and a second 2D image;
an image sensor unit (102) for obtaining a composite image out of a combination of the at least first image and the second image;
a luminosity sensor unit (103) for generating at least a first luminosity map and a second luminosity map;
a luminosity processor unit (104) for processing respective luminosity values and for combining first and second luminosity maps;
a depth sensor (108) for obtaining depth information of the composite image; and
a rendering unit (105) for rendering the composite image of the scene surrounding the device onto a 3D rendering environment.