ABSTRACT
A SYSTEM, A METHOD AND A COMPUTER SYSTEM FOR MODIFYING REAL WORLD OBJECT INCLUDING MATERIAL AND VISUAL PROPERTIES IN A MIXED REALITY SPACE
A method (200) for modifying real world object including material and visual properties in a mixed reality space (322) comprises steps of receiving live visuals of a scene (302) having one or more real objects (304) using the HMD (102); determining RGB values, depth value and 6DoF pose estimation of the captured scene (302) and the one or more real objects (304) therein; processing the visuals captured within the scene (302) by conducting intrinsic decomposition on the visuals in real time; separating the visuals into a first plurality of layers; generating and superimposing first one or more intangible virtual objects (324) based on the determined values, having the first plurality of layers; modifying the first one or more intangible virtual objects (324) by substituting the first plurality of layers; and displaying the modified scene (328) in the mixed reality space (322) in real time using the HMD (102).
[FIGURE 2]
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10 and rule 13]
"A SYSTEM, A METHOD AND A COMPUTER SYSTEM FOR MODIFYING REAL WORLD OBJECT INCLUDING MATERIAL AND VISUAL PROPERTIES IN A MIXED REALITY SPACE"
We, DIMENSION NXG PRIVATE LIMITED, an Indian company, having a registered office at 501, Arcadia, Hiranandani Estate, Patlipada, Ghodbunder Road, Thane, Maharashtra -400607, India
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
Embodiments of the present invention relate to mixed reality technologies, and more particularly to a system, a method and a computer system for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD).
BACKGROUND OF THE INVENTION
Virtual reality (VR) is an interactive computer-generated experience taking place within a simulated environment that may incorporate auditory, visual, haptic, and other types of sensory feedback. This technology does not allow the mixing of computer graphics and real-world properties.
There are existing technologies which connect computer-generated graphics and the real world to create a mixed reality image. One such approach relies on intrinsic decomposition, which refers to the fundamentally ambiguous task of separating a video stream into its constituent layers, in particular reflectance and shading layers. Such a decomposition is the basis for a variety of video manipulation applications, such as realistic recolouring or retexturing of objects. This approach presents a variational method to tackle the under-constrained inverse problem at real-time frame rates, which enables online processing of live video footage. The problem of finding the intrinsic decomposition is formulated as a mixed variational optimization problem based on an objective function that is specifically tailored for fast optimization. It tackles the resulting high-dimensional, non-convex optimization problem via a data-parallel iteratively reweighted least squares solver that runs on commodity graphics hardware. Real-time performance is obtained by combining a local-global solution strategy with hierarchical coarse-to-fine optimization. Compelling real-time augmented reality applications, such as recolouring, material editing and retexturing, are demonstrated in a live setup.
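The reflectance-shading factorization underlying the approach above can be illustrated with a minimal sketch. The following toy example is not the variational IRLS solver described in the text; it merely demonstrates the I = R x S model, estimating shading under an assumed smoothness prior (a simple box blur stands in for the real optimization) so that editing the reflectance layer yields a recolouring while lighting is preserved.

```python
import numpy as np

def intrinsic_decompose(image, blur_radius=5):
    """Toy intrinsic decomposition: I = R * S.

    Assumes shading S varies smoothly, so it is estimated as a
    box-blurred luminance; reflectance R is recovered as I / S.
    Illustrative stand-in for the variational, IRLS-based solver
    discussed in the text.
    """
    lum = image.mean(axis=2)                      # luminance proxy, shape (H, W)
    pad = np.pad(lum, blur_radius, mode='edge')   # edge padding for the blur
    k = 2 * blur_radius + 1
    shading = np.zeros_like(lum)
    for dy in range(k):                           # box blur via shifted sums
        for dx in range(k):
            shading += pad[dy:dy + lum.shape[0], dx:dx + lum.shape[1]]
    shading /= k * k
    shading = np.clip(shading, 1e-6, None)        # avoid division by zero
    reflectance = image / shading[..., None]
    return reflectance, shading

# Recolouring works by editing R, then recomposing I' = R' * S
rng = np.random.default_rng(0)
img = rng.uniform(0.1, 1.0, (32, 32, 3))
R, S = intrinsic_decompose(img)
recomposed = R * S[..., None]
assert np.allclose(recomposed, img)  # exact by construction
```

The key property shown here is that any edit confined to the reflectance layer inherits the original shading when recomposed, which is why the decomposition enables realistic recolouring and retexturing.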
Another solution is a real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. It acquires a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to views by re-projection.
Another solution is the first end-to-end approach for real-time material estimation for general object shapes that only requires a single colour image as input. In addition to Lambertian surface properties, it automatically computes the specular albedo, material shininess, and a foreground segmentation. It uses image-to-image translation techniques based on deep convolutional encoder-decoder architectures. The underlying core representations of the approach are specular shading, diffuse shading and mirror images, which allow it to learn an effective and accurate separation of diffuse and specular albedo.
These primitive technologies do not provide a solution to connect computer-generated information and the real world into a mixed reality image without using an explicit correspondence search. The goal is to outperform state-of-the-art approaches in terms of runtime and result quality, even without user guidance such as scribbles.
Hence, there exists a need for a system, a method and a computer system for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD) that does not suffer from above-mentioned deficiencies and provide an effective and viable solution.
OBJECT OF THE INVENTION
An object of the present invention is to provide a system for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD).
Another object of the present invention is to provide a method for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD).
Yet another object of the present invention is to provide a computer system for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD).
Yet another object of the present invention is to enable an interaction between computer generated perceptual information and the real world and to make the final image into a mixed reality image.
Yet another object of the invention is to perform real-time estimation for general object shapes by combining sophisticated local spatial and global spatio-temporal priors, resulting in temporally coherent decompositions at real-time frame rates without the need for an explicit correspondence search or user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor.
SUMMARY OF THE INVENTION
The present invention is described hereinafter by various embodiments. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein.
According to first aspect of the present invention, there is provided a method for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD). The method comprises steps of, but not limited to, receiving live visuals of a scene having one or more real objects using the HMD and determining RGB values, depth value and 6DoF pose estimation of the captured scene and the one or more real objects therein, processing the visuals captured within the scene by conducting intrinsic decomposition on the visuals in real time, separating the visuals into a first plurality of layers, generating and superimposing first one or more intangible virtual objects based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects and one or more visual properties of the first one or more intangible virtual objects, in a mixed reality space, modifying the first one or more intangible virtual objects within the captured scene by substituting the first plurality of layers in the mixed reality space in real time and displaying the modified scene including the modified first one or more intangible virtual objects in the mixed reality space in real time using the HMD.
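The method steps above can be sketched as a data-flow skeleton. This is an illustrative outline only, with hypothetical names (`Frame`, `Layers`, `decompose`, `substitute_reflectance`), not the invention's actual implementation; the decomposition step is a deliberately simplified stand-in for real-time intrinsic decomposition.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    rgb: np.ndarray     # (H, W, 3) RGB values of the captured scene
    depth: np.ndarray   # (H, W) per-pixel depth value
    pose: np.ndarray    # (4, 4) matrix for the 6DoF pose estimate

@dataclass
class Layers:
    reflectance: np.ndarray  # layer representing the object's colour/material
    shading: np.ndarray      # layer representing the scene illumination

def decompose(frame: Frame) -> Layers:
    # Simplified stand-in for real-time intrinsic decomposition
    shading = np.clip(frame.rgb.mean(axis=2), 1e-6, None)
    return Layers(frame.rgb / shading[..., None], shading)

def substitute_reflectance(layers: Layers, new_reflectance: np.ndarray) -> np.ndarray:
    # Modify the virtual object by substituting its reflectance layer,
    # then recompose with the original shading for consistent lighting
    return new_reflectance * layers.shading[..., None]

frame = Frame(rgb=np.full((4, 4, 3), 0.5),
              depth=np.ones((4, 4)),
              pose=np.eye(4))
layers = decompose(frame)
red = np.zeros((4, 4, 3)); red[..., 0] = 1.0   # recolour the object to red
modified = substitute_reflectance(layers, red)
assert modified.shape == frame.rgb.shape
```

The substitution step mirrors the claimed modification: the new layer replaces one of the first plurality of layers while the remaining layers (here, shading) keep the composited result consistent with the captured scene.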
In accordance with an embodiment of the present invention, the method further comprises the steps of suggesting second one or more intangible virtual objects pre-stored in a data repository, based on surroundings in the captured scene, the second one or more intangible virtual objects having a second plurality of layers representing the second one or more intangible virtual objects and one or more visual properties of the second one or more intangible virtual objects, modifying the second one or more intangible virtual objects within the captured scene by substituting the second plurality of layers in the mixed reality space in real time and displaying the modified scene including the modified second one or more intangible virtual objects in the mixed reality space in real time using the HMD. Further, the second one or more intangible virtual objects are, but not limited to, computer generated graphics of the one or more real objects that are not present within the captured scene.
In accordance with an embodiment of the present invention, the method further comprises the steps of suggesting the second one or more intangible virtual objects pre-stored in a data repository, based on a search request received from the HMD, the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects not present within the captured scene, modifying the second one or more intangible virtual objects within the captured scene by substituting the second plurality of layers in the mixed reality space in real time and displaying the modified scene including the modified second one or more intangible virtual objects in the mixed reality space in real time using the HMD.
In accordance with an embodiment of the present invention, the one or more real objects are selected from a group comprising, but not limited to, walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
In accordance with an embodiment of the present invention, the first one or more intangible virtual objects are selected from a group comprising, but not limited to, computer generated graphics of the one or more real objects present within the captured scene.
In accordance with an embodiment of the present invention, the visual properties are selected from a group comprising, but not limited to shape, appearance, texture, material and colour.
In accordance with an embodiment of the present invention, modifying the first one or more intangible virtual objects and the second one or more intangible virtual objects comprises adding or removing the first one or more intangible virtual objects and/or the second one or more intangible virtual objects in the captured scene and/or by altering the visual properties of the first one or more intangible virtual objects and/or the second one or more intangible virtual objects in the mixed reality space.
In accordance with an embodiment of the present invention, altering the visual properties includes reshaping, recolouring, retexturing, relighting of the first one or more intangible virtual objects and/or the second one or more intangible virtual objects.
In accordance with an embodiment of the present invention, the HMD comprises one or more cameras for capturing the visual data, one or more microphones for capturing the audio data, one or more sensors and the data repository having prestored data.
In accordance with an embodiment of the present invention, the prestored data includes a database of the first one or more intangible virtual objects having the first plurality of layers and the second one or more intangible virtual objects having the second plurality of layers.
According to a second aspect of the present invention, there is provided a computer system for modifying real world object including material and visual properties in a mixed reality space, the computer system being connected with the Mixed Reality (MR) based Head Mounted Device (HMD). The computer system comprises a memory unit configured to store machine-readable instructions and a processor operably connected with the memory unit, the processor obtaining the machine-readable instructions from the memory unit, and being configured by the machine-readable instructions to receive live visuals of a scene having one or more real objects using the HMD and determine RGB values, depth value and 6DoF pose estimation of the captured scene and the one or more objects therein, process the visuals captured within the scene by conducting intrinsic decomposition on the visuals in real time, separate the visuals into a first plurality of layers, generate and superimpose first one or more intangible virtual objects based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects and one or more visual properties of the first one or more intangible virtual objects, in a mixed reality space, modify the first one or more intangible virtual objects within the captured scene by substituting the first plurality of layers in the mixed reality space in real time and display the modified scene including the modified first one or more intangible virtual objects in the mixed reality space in real time using the HMD.
In accordance with an embodiment of the present invention, the processor is further configured to suggest second one or more intangible virtual objects pre-stored in a data repository, based on surroundings in the captured scene, the second one or more intangible virtual objects having a second plurality of layers representing the second one or more intangible virtual objects and one or more visual properties of the second one or more intangible virtual objects, modify the second one or more intangible virtual objects within the captured scene by substituting the second plurality of layers in the mixed reality space in real time and display the modified scene including the modified second one or more intangible virtual objects in the mixed reality space in real time using the HMD. Further, the second one or more intangible virtual objects are, but not limited to, computer generated graphics of the one or more real objects that are not present within the captured scene.
In accordance with an embodiment of the present invention, the processor is further configured to suggest the second one or more intangible virtual objects pre-stored in the data repository, based on a search request received from the HMD, the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects not present within the captured scene, modify the second one or more intangible virtual objects within the captured scene by substituting the second plurality of layers in the mixed reality space in real time and display the modified scene including the modified second one or more intangible virtual objects in the mixed reality space in real time using the HMD.
In accordance with an embodiment of the present invention, the one or more real objects are selected from a group comprising, but not limited to, walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
In accordance with an embodiment of the present invention, the first one or more intangible virtual objects are selected from a group comprising, but not limited to, computer generated graphics of the one or more real objects present within the captured scene.
In accordance with an embodiment of the present invention, visual properties are selected from a group comprising, but not limited to, shape, appearance, texture, material and colour.
In accordance with an embodiment of the present invention, the processor is configured to modify the first one or more intangible virtual objects and the second one or more intangible virtual objects by adding or removing the first one or more intangible virtual objects and/or the second one or more intangible virtual objects in the captured scene and/or by altering the visual properties of the first one or more intangible virtual objects and/or the second one or more intangible virtual objects in the mixed reality space.
In accordance with an embodiment of the present invention, the processor is configured to alter the visual properties by reshaping, recolouring, retexturing and relighting of the first one or more intangible virtual objects and/or the second one or more intangible virtual objects.
In accordance with an embodiment of the present invention, the HMD comprises one or more cameras for capturing the visual data, one or more microphones for capturing the audio data, one or more sensors, and the data repository having the prestored data.
In accordance with an embodiment of the present invention, the prestored data includes a database of the first one or more intangible virtual objects having the first plurality of layers and the second one or more intangible virtual objects having the second plurality of layers.
According to a third aspect of the present invention, there is provided a system for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD). The system comprises the Mixed Reality (MR) based Head Mounted Device (HMD) having one or more cameras for capturing the visual data, one or more microphones for capturing the audio data, and one or more sensors, a processing module, an interface module and the data repository having prestored data. Further, the interface module is configured to receive live visuals of a scene having one or more real objects using the HMD and determine RGB values, depth value and 6DoF pose estimation of the captured scene and the one or more objects therein. Additionally, the processing module is configured to process the visuals captured within the scene by conducting intrinsic decomposition on the visuals in real time, separate the visuals into a first plurality of layers and generate and superimpose first one or more intangible virtual objects based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects and one or more visual properties of the first one or more intangible virtual objects, in a mixed reality space and modify the first one or more intangible virtual objects within the captured scene by substituting the first plurality of layers in the mixed reality space in real time. In addition, the interface module is configured to display the modified scene including the modified first one or more intangible virtual objects in the mixed reality space in real time using the HMD.
In accordance with an embodiment of the present invention, the processing module is further configured to suggest second one or more intangible virtual objects pre-stored in the data repository, based on surroundings in the captured scene, the second one or more intangible virtual objects having a second plurality of layers representing the second one or more intangible virtual objects and one or more visual properties of the second one or more intangible virtual objects and modify the second one or more intangible virtual objects within the captured scene by substituting the second plurality of layers in the mixed reality space in real time. Furthermore, the interface module is configured to display the modified scene including the modified second one or more intangible virtual objects in the mixed reality space in real time using the HMD. Herein, the second one or more intangible virtual objects are computer generated graphics of the one or more real objects that are not present within the captured scene.
In accordance with an embodiment of the present invention, the processing module is further configured to suggest the second one or more intangible virtual objects pre-stored in the data repository, based on a search request received from the HMD, the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects not present within the captured scene, and modify the second one or more intangible virtual objects within the captured scene by substituting the second plurality of layers in the mixed reality space in real time. Further, the interface module is configured to display the modified scene including the modified second one or more intangible virtual objects in the mixed reality space in real time using the HMD.
In accordance with an embodiment of the present invention, the one or more real objects are selected from a group comprising walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
In accordance with an embodiment of the present invention, the first one or more intangible virtual objects are selected from a group comprising computer generated graphics of the one or more real objects present within the captured scene.
In accordance with an embodiment of the present invention, the visual properties are selected from a group comprising shape, appearance, texture, material and colour.
In accordance with an embodiment of the present invention, the processing module is configured to modify the first one or more intangible virtual objects and the second one or more intangible virtual objects by adding or removing the first one or more intangible virtual objects and/or the second one or more intangible virtual objects in the captured scene and/or by altering the visual properties of the first one or more intangible virtual objects and/or the second one or more intangible virtual objects in the mixed reality space.
In accordance with an embodiment of the present invention, the processing module is configured to alter the visual properties by reshaping, recolouring, retexturing and relighting of the first one or more intangible virtual objects and/or the second one or more intangible virtual objects.
In accordance with an embodiment of the present invention, the prestored data includes a database of the first one or more intangible virtual objects having the first plurality of layers and the second one or more intangible virtual objects having the second plurality of layers.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
These and other features, benefits and advantages of the present invention will become apparent by reference to the following text figure, with like reference numbers referring to like structures across the views, wherein:
Fig. 1 illustrates an exemplary environment of computing devices to which various embodiments of the present invention may be implemented;
Fig. 2 illustrates a method for modifying real world object including material and visual properties in a mixed reality space using the MR based HMD, in accordance with an embodiment of the present invention;
Fig. 3A illustrates an information flow diagram of receiving and processing visuals of a scene via the HMD, in accordance with an embodiment of the present invention;
Fig. 3B illustrates an information flow diagram of separation of received visuals into layers and generation as well as superimposition of first one or more intangible virtual objects in a mixed reality space, in accordance with an embodiment of the present invention;
Fig. 3C-3D illustrate modification and display of the first one or more intangible virtual objects in the mixed reality space, in accordance with an embodiment of the present invention;
Fig. 4A illustrates an information flow diagram of receiving and processing visuals of a scene via the HMD, in accordance with another embodiment of the present invention;
Fig. 4B illustrates an information flow diagram of displaying the modified scene including a modified second one or more intangible virtual objects in the mixed reality space, in accordance with another embodiment shown in Fig. 4A of the present invention; and
Fig. 5 illustrates a system for modifying real world object including material and visual properties in a mixed reality space using the MR based HMD, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF DRAWINGS
While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments of drawing or drawings described and are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to), rather than the mandatory sense, (i.e. meaning must). Further, the words "a" or "an" mean "at least one” and the word “plurality” means “one or more” unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. 
It is not suggested or represented that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements and vice versa.
The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
Figure 1 illustrates an exemplary environment of computing devices to which various embodiments of the present invention may be implemented.
As shown in figure 1, the environment comprises a computer system (104) connected with a Mixed Reality based Head Mounted Device (102). The HMD (102) may include capabilities of generating an augmented reality (AR) environment, mixed reality (MR) environment and a virtual reality (VR) environment in a single device. The HMD (102) is envisaged to primarily include (although not shown in figure 1), but not limited to, one or more cameras for capturing the visual data, one or more microphones for capturing the audio data, one or more sensors and a data repository having prestored data.
The one or more cameras may be selected from one or more of, but not limited to, an IR camera, an RGB/colour camera or an RGB-D camera to capture coloured imagery of the real-world environment with depth. Further, the one or more sensors may include electromagnetic radiation sensors and air sensors. The electromagnetic radiation sensors may be used to gather and track spatial data of the real-world environment (may be called as a scene) as well as to track eye movement and hand gesture of a user so as to update the 3D generated object in VR, AR and/or MR. The electromagnetic radiation sensors may have an IR projector and an IR camera. The IR projector and IR camera together capture depth data of the real-world environment using any one or more of Time of Flight based and passive stereoscopic depth imaging.
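The spatial data gathered by the depth sensors is typically used by back-projecting each depth pixel into 3D camera space through a pinhole camera model. The sketch below illustrates this standard computation; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are assumed example values, not parameters of any particular camera in the HMD (102).

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image to camera-space 3D points (pinhole model).

    fx, fy are focal lengths in pixels; cx, cy is the principal point.
    Each depth pixel z maps to (x, y, z) with x = (u - cx) * z / fx
    and y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # (H, W, 3) point map

# Example: a flat surface 2 m from the camera (assumed intrinsics)
depth = np.full((2, 2), 2.0)
pts = backproject(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
assert pts.shape == (2, 2, 3)
```

Such a point map, combined with the 6DoF pose estimate, is what allows virtual objects to be anchored to real-world geometry in the mixed reality space.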
Additionally, the one or more microphones are envisaged to receive audio of the real-world environment and/or users of the HMD. Furthermore, the data repository may be local storage or a cloud-based storage.
It will be appreciated by a person skilled in the art that, apart from the above-mentioned components, the HMD (102) may also include other basic components (which are briefed below), without departing from the scope of the present invention. The HMD (102) comprises visors which may have a partially or fully reflective surface. In other words, the visors may have a variable transparency. The visors are used to view a human or object in virtual reality, mixed reality or augmented reality. The HMD (102) may further include a cooling vent to ensure that the internal circuitry and devices of the HMD (102) are provided with a sufficient amount of air for convection cooling. A wire outlet may also be provided to allow connecting wires and cords to connect to various components such as the power supply, computational and control units and data acquisition devices.
Further, the HMD (102) may be envisaged to include extendable bands and straps and a strap lock for securing the HMD (102) positioned on the head. The HMD (102) is envisaged to include one or more display sources which may be LCD, LED or TFT screens with respective drivers. The HMD (102) may have a driver board including a part of the computational software and hardware needed to run devices provided with the HMD (102). The HMD (102) may further include a power supply unit for receiving AC power supply. Moreover, the HMD (102) may include an HDMI output to allow data to be transferred and a Universal Serial Bus (USB) connector to allow data and power transfer. The HMD (102) is also envisaged to include a plurality of electronic components, for example, a graphics processor unit (GPU) and a power source to provide electrical power to the HMD (102).
A Graphics Processing Unit (GPU) is a single-chip processor primarily used to manage and boost the performance of video and graphics such as 2-D or 3-D graphics, texture mapping, hardware overlays etc. The GPU may be selected from, but not limited to, NVIDIA, AMD, Intel and ARM for real-time 3D imaging. The power source may be inbuilt inside the HMD (102). A plurality of indicators such as LEDs to indicate various parameters such as battery level or connection/disconnection may be included in the HMD (102). The indications may be colour coded for differentiation and distinctiveness.
In accordance with an embodiment of the present invention, the HMD (102) is capable of being connected with external controllers to enable interaction with the objects in a mixed reality space and/or the HMD (102) enables interaction using hand gestures/movements of the user.
In accordance with an embodiment of the present invention, the computer system (104) connected with the HMD (102) may be encased inside the HMD (102) itself. The computer system (104) comprises a memory unit (1042) configured to store machine-readable instructions. The machine-readable instructions may be loaded into the memory unit (1042) from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and Flash Drives. Alternately, the machine-readable instructions may be loaded in the form of a computer software program into the memory unit (1042). The memory unit (1042) in that manner may be selected from a group comprising EPROM, EEPROM and Flash memory. Further, the computer system (104) includes a processor (1044) operably connected with the memory unit (1042). In various embodiments, the processor (1044) is one of, but not limited to, a microprocessor, a general-purpose processor, an application specific integrated circuit (ASIC) and a field-programmable gate array (FPGA).
Figure 2 illustrates a method (200) for modifying real world object including material and visual properties in a mixed reality space using a Mixed Reality (MR) based Head Mounted Device (HMD), in accordance with an embodiment of the present invention. The method (200) begins at step 210, when the processor (1044) receives live visuals of a scene (302) using the HMD (102). The one or more cameras of the HMD (102) capture the visuals of the scene (302) which is in the Field of View (FOV) of the one or more cameras. The live visual may be, but is not limited to, an RGB-D image, which is simply a combination of an RGB image and its corresponding depth image. In general, a depth image is an image channel in which each pixel relates to a distance between the image plane and the corresponding object in the RGB image. It uses structured infrared light to determine the depth value of each pixel. Additionally, the scene (302) may be understood to be a real-world indoor or outdoor environment having one or more real objects (304) therein. The one or more real objects (304) are selected from a group comprising, but not limited to, walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
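By way of a non-limiting illustration (not forming part of the claimed invention), the RGB-D representation described above may be sketched as follows, where the resolution and depth values are assumptions chosen only for the example:

```python
import numpy as np

# Illustrative sketch: an RGB-D frame pairs an RGB image with a per-pixel
# depth channel, where each depth value encodes the distance between the
# image plane and the imaged surface.
H, W = 480, 640                                  # assumed sensor resolution
rgb = np.zeros((H, W, 3), dtype=np.uint8)        # colour channels
depth = np.full((H, W), 2.5, dtype=np.float32)   # depth in metres (placeholder)

# Stack into a single 4-channel RGB-D array for downstream processing.
rgbd = np.dstack([rgb.astype(np.float32), depth])
assert rgbd.shape == (H, W, 4)

# Querying the depth of a pixel: the distance (in metres) of the surface
# seen at that pixel.
d = rgbd[240, 320, 3]
```

In practice the depth channel would be populated by the structured-infrared-light sensing mentioned above rather than a constant placeholder.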
Further, the processor (1044) is configured to determine RGB values, depth value and 6DoF pose estimation of the visuals in the captured scene (302) and the detected one or more real objects (304) using the one or more sensors. The same has been illustrated in figure 3A. As shown in figure 3A, the visuals of a room are received by the processor (1044) from the HMD (102). The processor (1044) then detects the presence and the RGB values, depth value and 6DoF pose estimation of one or more real objects (304) within the room, such as a sofa, a door, multiple walls, the floor etc. Then, at step 220, the visuals are processed by conducting the intrinsic decomposition in real-time. Then, as a result of the intrinsic decomposition, at step 230, the visuals are separated into a first plurality of layers (not shown). In general, these are the very layers that are combined to form an image. The first plurality of layers may comprise characteristics of an image/visual such as, but not limited to, a reflectance (albedo) image and a shading (irradiance) image, which multiply to form the original visual/image.
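As a non-limiting illustration of the multiplicative intrinsic-image model referred to above (the numeric layers below are synthetic, chosen only for the example), each visual decomposes into a reflectance layer and a shading layer whose per-pixel product reproduces the original image:

```python
import numpy as np

# Sketch of the intrinsic decomposition model: I = R * S, per pixel and
# per channel, where R is the reflectance (albedo) layer and S is the
# shading (irradiance) layer.
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, size=(4, 4, 3))   # reflectance layer (albedo)
S = rng.uniform(0.1, 1.0, size=(4, 4, 1))   # shading layer (broadcast over channels)

I = R * S                                    # the observed image

# Recovering one layer when the other is known is a per-pixel division;
# real intrinsic decomposition must estimate both layers from I alone,
# typically using priors such as smoothness of shading.
R_recovered = I / S
assert np.allclose(R_recovered, R)
```

The substitution of layers described in the subsequent steps operates on exactly these two factors.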
The same has been illustrated in figure 3B. In addition, the processor (1044) generates first one or more intangible virtual objects (324) based on the values (RGB, depth value, 6DoF pose value etc.) of the one or more real objects (304), determined during step 210. The first one or more intangible virtual objects (324) may be, but are not limited to, computer generated graphics of the one or more real objects (304) present within the captured scene (302). The generated first one or more intangible virtual objects (324) have the same visual properties, such as shape, appearance, texture and colour, as the corresponding one or more real objects (304).
In simple words, the processor (1044) creates a respective replica of each of the one or more real objects (304) in the mixed reality space (322) having the exact same appearance and other visual properties as those of the one or more real objects (304). Each of the visual properties has the associated first plurality of layers (as previously mentioned, shading, reflectance etc.) which may be changed/substituted to bring a change in any of the one or more visual properties. In that sense, each of the first one or more intangible virtual objects (324) is also envisaged to be one of the first plurality of layers which may be added, removed or substituted as and when required.
After generation, the processor (1044) superimposes the first one or more intangible virtual objects (324) over the detected corresponding one or more real objects (304) based on the 6DoF pose estimation. The same has been illustrated in figure 3B. So, it will appear as if the real objects in the room have come to life and may be edited as per the will of the user. As shown in figure 3B, the one or more names (326) of the detected one or more real objects (304) or the first one or more intangible virtual objects (324), along with their visual properties, are displayed as overlaid graphics (326) on the scene (302) in the mixed reality space (322). The user of the HMD (102) may interact with the names and thereby the first one or more intangible virtual objects (324), using the external controllers and/or hand gestures/movements. In another embodiment, the one or more names only appear when the user selects one of the first one or more intangible virtual objects (324) using hand gestures or the external controller (using pointers being displayed in the mixed reality space (322)).
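By way of a non-limiting sketch of the pose-based superimposition described above (the rotation axis, pose values and vertex are assumptions chosen only for the example), a 6DoF pose may be expressed as a 4x4 homogeneous matrix that transforms the virtual replica's vertices into the real object's estimated position:

```python
import numpy as np

def pose_matrix(yaw, t):
    """4x4 homogeneous transform: rotation about the vertical axis by
    `yaw` (radians) followed by translation `t` (metres)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    T[:3, 3] = t
    return T

# A vertex of the virtual replica in its local frame (homogeneous coords).
v_local = np.array([1.0, 0.0, 0.0, 1.0])

# Assumed pose: no rotation, object estimated 2 m in front of the camera.
T = pose_matrix(0.0, [0.0, 0.0, 2.0])
v_world = T @ v_local   # where the vertex is rendered in the scene
```

A full 6DoF pose would carry three rotational and three translational degrees of freedom; only one of each is exercised here for brevity.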
Moreover, at step 240, the processor (1044) is configured to modify the first one or more intangible virtual objects (324) within the captured scene (302) by substituting the first plurality of layers in the mixed reality space (322) in real time. The modification includes adding or removing the first one or more intangible virtual objects (324) in the captured scene (302) and/or altering the visual properties of the first one or more intangible virtual objects (324) in the mixed reality space (322). The visual properties may be altered by, but not limited to, reshaping, recolouring, retexturing and relighting of the first one or more intangible virtual objects (324). Continuing from the example shown in figure 3B, where the one or more real objects (304) were detected to be a sofa, a door, multiple walls, the floor etc., the corresponding first one or more intangible virtual objects (324) were superimposed having the exact appearance. Now, suppose the user wants to change the appearance of the scene (302) by, say, changing the colours of the sofa and/or the wall. Then, referring to figure 3C, the user may select the wall and the sofa, as well as the visual property (i.e. the colour in this example) which he/she wants to modify, one by one.
After selection, the user may be provided with a number of options to select the desired/new colour. Then, the processor (1044), in combination with the GPU (of the HMD (102)), modifies the appearance of the sofa and the wall to the desired colour by substituting/altering the corresponding first plurality of layers (i.e. reflectance, shading etc.) associated with the colour. Now, let us assume the user is not satisfied with the change and wants to further modify the wall and the sofa. Then, referring to figure 3D, the user selects a different wall which is to be painted and is provided with new options. Additionally, the user selects the previously painted wall to further modify it by adding a wall texture. The processor (1044) makes the desired changes to the corresponding first plurality of layers of the walls to facilitate the change in the one or more visual properties. After that, the user may select to change the material, texture and colour of the sofa to another, so the processor (1044) makes the corresponding substitutions/changes in the corresponding first plurality of layers to facilitate the change.
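The recolouring by layer substitution described above may be sketched, in a non-limiting manner, as follows (the albedo values and array sizes are illustrative assumptions): the reflectance layer is replaced with one of the user's chosen colour while the original shading layer is retained, so the scene's lighting remains plausible.

```python
import numpy as np

def recolour(shading, new_albedo):
    """Recompose the visual with a substituted reflectance layer,
    keeping the extracted shading layer intact."""
    return new_albedo * shading

shading = np.full((2, 2, 1), 0.5)                  # shading from step 230
old_albedo = np.full((2, 2, 3), [0.8, 0.2, 0.2])   # reddish sofa (example)
new_albedo = np.full((2, 2, 3), [0.2, 0.2, 0.8])   # user-selected blue

modified = recolour(shading, new_albedo)
# Shadows and highlights are unchanged: only the albedo factor differs.
```

The same substitution scheme extends to retexturing (a spatially varying replacement albedo) and relighting (a replacement shading layer).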
Then, at step 250, the modified scene (328) including the modified first one or more intangible virtual objects (324) is displayed in the mixed reality space (322) in real time using the HMD (102). In accordance with an embodiment, the steps 240 and 250 take place simultaneously in real-time. The above-mentioned method enables such visual-based resurfacing, texture transfer between images, relighting and material recognition tasks which find extensive applications in interior designing. However, it will be appreciated by a skilled addressee that the above-mentioned steps of the method are not limited to any specific order and may be implemented in other ways, without departing from the scope of the present invention.
In accordance with an embodiment of the present invention, the method and the computer system (104) are implementable not just for changing/modifying the visual properties of the first one or more intangible virtual objects (324) generated as replicas of the detected one or more real objects (304), but also for suggesting second one or more intangible virtual objects (408) based on the captured scene (402) and the ambience. The second one or more intangible virtual objects (408) may be, but are not limited to, computer generated graphics of the one or more real objects (304) that are not present within the captured scene (402). Referring to an example shown in figure 4A, the processor (1044) receives live visuals of an empty room and processes the received visuals. The data repository (not shown) is envisaged to have prestored data that includes a database of the various first one or more intangible virtual objects (324) having the first plurality of layers and the second one or more intangible virtual objects (408) having the second plurality of layers, depending upon the scene (402).
Accordingly, the processor (1044) suggests the names of the second one or more intangible virtual objects (408) and overlays the names as graphics (406) on the scene (402) in the mixed reality space (404). In the present example, the captured scene (402) is of the empty room, so an LED TV, a sofa, a table, a chandelier, a plant, a door and a carpet are suggested for the user to select. The user may select and place the selected second one or more intangible virtual objects (408) using the hand gestures or the external controller in the mixed reality space (404). In one embodiment, the corresponding second one or more intangible virtual objects (408) appear in the mixed reality space (404) as soon as the user selects the name. This would give an idea to the user about how the room would look once the one or more real objects (304) (corresponding to the second one or more intangible virtual objects (408)) are placed. The same has been illustrated in figure 4B.
Additionally, the selected objects are presented with their respective visual properties such as material, texture, colour etc. As previously mentioned, the user may modify/change the visual properties and the processor (1044) facilitates the modification by substituting the second plurality of layers associated with the visual properties of each of the second one or more intangible virtual objects (408). Additionally, the user may also remove an object altogether from the mixed reality space (404), as and when required, by removing the corresponding second plurality of layers. The modified scene (410) displaying the modified second one or more intangible virtual objects (408) is displayed in real-time as the modifications take place.
In accordance with another embodiment (not shown) of the present invention, the computer implemented method may also enable the user (using the HMD (102)) to search for the desired second one or more intangible virtual objects (408). The processor (1044) may receive a search request from the one or more microphones (like a voice search) of the HMD (102) or via the external controller/hand gestures (in the form of text). The search request may comprise predefined filters such as, but not limited to, names, usage, colour, durability and size of the desired one or more real objects (304) which are not present within the captured scene (402). The processor (1044) accordingly suggests the second one or more intangible virtual objects (408) (pre-stored in the data repository) corresponding to the requested one or more real objects (304).
For example, the user may ask to be provided with furniture having at least 5 years of durability and a seating capacity of 7. So, the processor (1044), in response, may suggest an L-shaped or a (3+2+2) teak-wood sofa set according to the available space in the room and the colour of the walls, from which the user may select. Additionally, the selected objects are presented with their respective visual properties such as material, texture, colour etc. As previously mentioned, the user may modify/change the visual properties and the processor (1044) facilitates the modification by substituting the second plurality of layers associated with the visual properties of each of the second one or more intangible virtual objects (408). The modified scene (410) displaying the modified second one or more intangible virtual objects (408) may be displayed in real-time as the modifications take place.
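The filter-based suggestion in the example above may be sketched, in a non-limiting manner, as a query over the pre-stored repository (the record structure, field names and catalogue entries below are illustrative assumptions, not the actual repository schema):

```python
# Illustrative sketch: the data repository is modelled as a list of
# records, and the search request's predefined filters (durability and
# seating capacity, per the example) select matching second one or more
# intangible virtual objects.
catalogue = [
    {"name": "L-shaped teak sofa", "durability_years": 8, "seats": 7},
    {"name": "3+2+2 teak sofa set", "durability_years": 6, "seats": 7},
    {"name": "2-seater fabric sofa", "durability_years": 3, "seats": 2},
]

def suggest(items, min_durability, min_seats):
    """Return names of pre-stored objects satisfying the request filters."""
    return [o["name"] for o in items
            if o["durability_years"] >= min_durability
            and o["seats"] >= min_seats]

suggestions = suggest(catalogue, min_durability=5, min_seats=7)
# → ['L-shaped teak sofa', '3+2+2 teak sofa set']
```

Additional filters such as colour, usage or size, and constraints derived from the scene (available space, wall colour), would narrow the result set in the same manner.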
In accordance with another aspect of the invention, figure 5 illustrates a system (500) for modifying real world object including material and visual properties in a mixed reality space (322) using the MR based HMD (102), in accordance with an embodiment of the present invention. As shown in figure 5, the system (500) comprises the Mixed Reality (MR) based Head mounted device (102) (HMD (102)) having one or more cameras (1022) for capturing the visual data, one or more microphones (1026) for capturing the audio data, and one or more sensors (1024), a processing module (504), an interface module (502) and the data repository (not shown) having prestored data. Further, the interface module (502) is configured to receive live visuals of a scene (302) having one or more real objects (304) using the HMD (102) and determine RGB values, depth value and 6DoF pose estimation of the captured scene (302) and the one or more objects therein. Additionally, the processing module (504) is configured to process the visuals captured within the scene (302) by conducting intrinsic decomposition on the visuals in real time, separate the visuals into a first plurality of layers, generate and superimpose first one or more intangible virtual objects (324) based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects (324) and one or more visual properties of the first one or more intangible virtual objects (324), in the mixed reality space (322), and modify the first one or more intangible virtual objects (324) within the captured scene (302) by substituting the first plurality of layers in the mixed reality space (322) in real time. In addition, the interface module (502) is configured to display the modified scene (328) including the modified first one or more intangible virtual objects (324) in the mixed reality space (322) in real time using the HMD (102).
In accordance with an embodiment of the present invention, the processing module (504) is further configured to suggest second one or more intangible virtual objects (408) pre-stored in the data repository, based on surroundings/ambience in the captured scene (402), the second one or more intangible virtual objects (408) having a second plurality of layers representing the second one or more intangible virtual objects (408) and one or more visual properties of the second one or more intangible virtual objects (408) and modify the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time. Furthermore, the interface module (502) is configured to display the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102). Herein, the second one or more intangible virtual objects (408) are, but not limited to, computer generated graphics of the one or more real objects (304) that are not present within the captured scene (402).
In accordance with an embodiment of the present invention, the processing module (504) is further configured to suggest the second one or more intangible virtual objects (408) pre-stored in the data repository, based on a search request received from the HMD (102), the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects (304) not present within the captured scene (402), and modify the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time. Further, the interface module (502) is configured to display the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102).
In accordance with an embodiment of the present invention, the one or more real objects (304) are selected from a group comprising, but not limited to, walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
In accordance with an embodiment of the present invention, the first one or more intangible virtual objects (324) are, but not limited to, computer generated graphics of the one or more real objects (304) present within the captured scene (302).
In accordance with an embodiment of the present invention, the visual properties are selected from a group comprising, but not limited to, shape, appearance, texture, material and colour.
In accordance with an embodiment of the present invention, the processing module (504) is configured to modify the first one or more intangible virtual objects (324) and the second one or more intangible virtual objects (408) by adding or removing the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the captured scene (302 and 402) and/or by altering the visual properties of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the mixed reality space (322 and 404).
In accordance with an embodiment of the present invention, the processing module (504) is configured to alter the visual properties by reshaping, recolouring, retexturing and relighting of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408).
In accordance with an embodiment of the present invention, the prestored data includes a database of the first one or more intangible virtual objects (324) having the first plurality of layers and the second one or more intangible virtual objects (408) having the second plurality of layers.
The present invention provides a number of advantages. Firstly, the present invention provides 2D as well as 3D image segmentation based on colour, edge and mesh information. Further, the segmentation can be improved with the help of deep learning-based approaches. Additionally, the present invention extracts shading and albedo from an RGB image. The above-mentioned method, system and computer system enable visual-based resurfacing, texture transfer between images, relighting and material recognition tasks which find extensive applications in interior designing. Also, the present invention can be used to replace a real object with a virtual object and vice-versa.
Further, one would appreciate that a communication network may also be used in the system. The communication network can be a short-range communication network and/or a long-range communication network, wired or wireless. The communication interface includes, but is not limited to, a serial communication interface, a parallel communication interface or a combination thereof.
In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof. It should be understood that the techniques of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium. Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
It should also be understood that, unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "controlling" or "obtaining" or "computing" or "storing" or "receiving" or "determining" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Various modifications to these embodiments will be apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to embrace all such alternatives, modifications and variations that fall within the scope of the present invention.
CLAIMS:
We claim
1. A method (200) for modifying real world object including material and visual properties in a mixed reality space (322) using a Mixed Reality (MR) based Head mounted device (102) (HMD (102)), the method (200) comprising steps of:
receiving (210) live visuals of a scene (302) having one or more real objects (304) using the HMD (102) and determining RGB values, depth value and 6DoF pose estimation of the captured scene (302) and the one or more real objects (304) therein;
processing (220) the visuals captured within the scene (302) by conducting intrinsic decomposition on the visuals in real time;
separating (230) the visuals into a first plurality of layers;
generating and superimposing (240) first one or more intangible virtual objects (324) based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects (324) and one or more visual properties of the first one or more intangible virtual objects (324), in a mixed reality space (322);
modifying (250) the first one or more intangible virtual objects (324) within the captured scene (302) by substituting the first plurality of layers in the mixed reality space (322) in real time; and
displaying (260) the modified scene (328) including the modified first one or more intangible virtual objects (324) in the mixed reality space (322) in real time using the HMD (102).
2. The method (200) as claimed in claim 1, further comprising the steps of:
suggesting second one or more intangible virtual objects (408) pre-stored in a data repository, based on surroundings in the captured scene (402), the second one or more intangible virtual objects (408) having a second plurality of layers representing the second one or more intangible virtual objects (408) and one or more visual properties of the second one or more intangible virtual objects (408);
modifying the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time; and
displaying the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102);
wherein the second one or more intangible virtual objects (408) are computer generated graphics of the one or more real objects (304) that are not present within the captured scene (402).
3. The method (200) as claimed in claim 2, further comprising the steps of:
suggesting the second one or more intangible virtual objects (408) pre-stored in a data repository, based on a search request received from the HMD (102), the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects (304) not present within the captured scene (402);
modifying the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time; and
displaying the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102).
4. The method (200) as claimed in claim 1, wherein the one or more real objects (304) are selected from a group comprising walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
5. The method (200) as claimed in claim 1, wherein the first one or more intangible virtual objects (324) are computer generated graphics of the one or more real objects (304) present within the captured scene (302).
6. The method (200) as claimed in claim 3, wherein visual properties are selected from a group comprising shape, appearance, texture, material and colour.
7. The method (200) as claimed in claim 6, wherein modifying the first one or more intangible virtual objects (324) and the second one or more intangible virtual objects (408) comprises adding or removing the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the captured scene (302 and 402) and/or by altering the visual properties of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the mixed reality space (322 or 404).
8. The method (200) as claimed in claim 7, wherein altering the visual properties includes reshaping, recolouring, retexturing, relighting of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408).
9. The method (200) as claimed in claim 1, wherein the HMD (102) comprises one or more cameras for capturing the visual data, one or more microphones for capturing the audio data, one or more sensors and the data repository having prestored data.
10. The method (200) as claimed in claim 9, wherein the prestored data includes a database of the first one or more intangible virtual objects (324) having the first plurality of layers and the second one or more intangible virtual objects (408) having the second plurality of layers.
11. A computer system (104) for modifying real world object including material and visual properties in a mixed reality space (322), the computer system (104) being connected with the Mixed Reality (MR) based Head mounted device (102) (HMD (102)), the computer system (104) comprising:
a memory unit (1042) configured to store machine-readable instructions; and
a processor (1044) operably connected with the memory unit (1042), the processor (1044) obtaining the machine-readable instructions from the memory unit (1042), and being configured by the machine-readable instructions to:
receive live visuals of a scene (302) having one or more real objects (304) using the HMD (102) and determine RGB values, depth value and 6DoF pose estimation of the captured scene (302) and the one or more objects therein;
process the visuals captured within the scene (302) by conducting intrinsic decomposition on the visuals in real time;
separate the visuals into a first plurality of layers;
generate and superimpose first one or more intangible virtual objects (324) based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects (324) and one or more visual properties of the first one or more intangible virtual objects (324), in a mixed reality space (322);
modify the first one or more intangible virtual objects (324) within the captured scene (302) by substituting the first plurality of layers in the mixed reality space (322) in real time; and
display the modified scene (328) including the modified first one or more intangible virtual objects (324) in the mixed reality space (322) in real time using the HMD (102).
12. The computer system (104) as claimed in claim 11, wherein the processor (1044) is further configured to:
suggest second one or more intangible virtual objects (408) pre-stored in a data repository, based on surroundings in the captured scene (402), the second one or more intangible virtual objects (408) having a second plurality of layers representing the second one or more intangible virtual objects (408) and one or more visual properties of the second one or more intangible virtual objects (408);
modify the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time; and
display the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102);
wherein the second one or more intangible virtual objects (408) are computer generated graphics of the one or more real objects (304) that are not present within the captured scene (402).
13. The computer system (104) as claimed in claim 12, wherein the processor (1044) is further configured to:
suggest the second one or more intangible virtual objects (408) pre-stored in the data repository, based on a search request received from the HMD (102), the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects (304) not present within the captured scene (402);
modify the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time; and
display the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102).
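As a non-limiting illustration of the search request of claim 13, a repository lookup over the predefined filters (name, usage, colour, durability, size) can be sketched as below. The class and function names are hypothetical; the claimed data repository may be implemented quite differently.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """Pre-stored intangible virtual object with its filterable attributes."""
    name: str
    usage: str
    colour: str
    durability: str
    size: str
    layers: dict = field(default_factory=dict)  # plurality of layers

def search_repository(repository, **filters):
    """Return pre-stored virtual objects matching every supplied filter,
    mirroring the predefined filters of the claimed search request."""
    return [obj for obj in repository
            if all(getattr(obj, key) == value for key, value in filters.items())]
```

A request such as `search_repository(repo, colour="red", size="large")` would return only objects satisfying both filters.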
14. The computer system (104) as claimed in claim 11, wherein the one or more real objects (304) are selected from a group comprising walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
15. The computer system (104) as claimed in claim 11, wherein the first one or more intangible virtual objects (324) are computer generated graphics of the one or more real objects (304) present within the captured scene (302).
16. The computer system (104) as claimed in claim 13, wherein the visual properties are selected from a group comprising shape, appearance, texture, material and colour.
17. The computer system (104) as claimed in claim 16, wherein the processor (1044) is configured to modify the first one or more intangible virtual objects (324) and the second one or more intangible virtual objects (408) by adding or removing the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the captured scene (302 and/or 402) and/or by altering the visual properties of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the mixed reality space (322 and/or 404).
18. The computer system (104) as claimed in claim 17, wherein the processor (1044) is configured to alter the visual properties by reshaping, recolouring, retexturing and relighting of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408).
19. The computer system (104) as claimed in claim 11, wherein the HMD (102) comprises one or more cameras for capturing the visual data, one or more microphones for capturing the audio data, one or more sensors, and the data repository having the prestored data.
20. The computer system (104) as claimed in claim 19, wherein the prestored data includes a database of the first one or more intangible virtual objects (324) having the first plurality of layers and the second one or more intangible virtual objects (408) having the second plurality of layers.
21. A system (500) for modifying real world object including material and visual properties in a mixed reality space (322) using a Mixed Reality (MR) based Head Mounted Device (HMD) (102), the system (500) comprising:
a Mixed Reality (MR) based Head Mounted Device (HMD) (102) having one or more cameras (1022) for capturing the visual data, one or more microphones (1026) for capturing the audio data, and one or more sensors (1024);
a processing module (504);
an interface module (502); and
a data repository having prestored data;
wherein the interface module (502) is configured to receive live visuals of a scene (302) having one or more real objects (304) using the HMD (102) and determine RGB values, depth value and 6DoF pose estimation of the captured scene (302) and the one or more real objects (304) therein;
wherein the processing module (504) is configured to:
process the visuals captured within the scene (302) by conducting intrinsic decomposition on the visuals in real time;
separate the visuals into a first plurality of layers;
generate and superimpose first one or more intangible virtual objects (324) based on the determined values, having the first plurality of layers representing the first one or more intangible virtual objects (324) and one or more visual properties of the first one or more intangible virtual objects (324), in a mixed reality space (322);
modify the first one or more intangible virtual objects (324) within the captured scene (302) by substituting the first plurality of layers in the mixed reality space (322) in real time; and
wherein the interface module (502) is configured to display the modified scene (328) including the modified first one or more intangible virtual objects (324) in the mixed reality space (322) in real time using the HMD (102).
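Purely for illustration of the 6DoF pose determination and superimposition recited in claim 21 (not part of the claims), placing a virtual object into the scene frame can be sketched as a rigid transform built from the HMD's estimated pose. The roll/pitch/yaw parameterisation and all names here are assumptions for the sketch.

```python
import numpy as np

def pose_matrix(rotation_rpy, translation):
    """Build a 4x4 rigid transform from roll/pitch/yaw (radians) and a
    translation vector -- a minimal stand-in for a 6DoF pose estimate."""
    r, p, y = rotation_rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = translation
    return T

def superimpose(vertices: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Map virtual-object vertices into the scene frame using the pose."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (pose @ homo.T).T[:, :3]
```

With the identity rotation, the vertices are simply shifted by the translation, anchoring the intangible virtual object at the determined scene position.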
22. The system (500) as claimed in claim 21, wherein the processing module (504) is further configured to:
suggest second one or more intangible virtual objects (408) pre-stored in the data repository, based on surroundings in the captured scene (402), the second one or more intangible virtual objects (408) having a second plurality of layers representing the second one or more intangible virtual objects (408) and one or more visual properties of the second one or more intangible virtual objects (408);
modify the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time; and
wherein the interface module (502) is configured to display the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102);
wherein the second one or more intangible virtual objects (408) are computer generated graphics of the one or more real objects (304) that are not present within the captured scene (402).
23. The system (500) as claimed in claim 22, wherein the processing module (504) is further configured to:
suggest the second one or more intangible virtual objects (408) pre-stored in the data repository, based on a search request received from the HMD (102), the search request comprising predefined filters such as names, usage, colour, durability and size of the desired one or more real objects (304) not present within the captured scene (402);
modify the second one or more intangible virtual objects (408) within the captured scene (402) by substituting the second plurality of layers in the mixed reality space (404) in real time; and
wherein the interface module (502) is configured to display the modified scene (410) including the modified second one or more intangible virtual objects (408) in the mixed reality space (404) in real time using the HMD (102).
24. The system (500) as claimed in claim 21, wherein the one or more real objects (304) are selected from a group comprising walls, curtains, furniture, frames, electronic appliances, paintings, almirahs, showpieces, musical instruments, pets and flora.
25. The system (500) as claimed in claim 21, wherein the first one or more intangible virtual objects (324) are computer generated graphics of the one or more real objects (304) present within the captured scene (302).
26. The system (500) as claimed in claim 23, wherein the visual properties are selected from a group comprising shape, appearance, texture, material and colour.
27. The system (500) as claimed in claim 26, wherein the processing module (504) is configured to modify the first one or more intangible virtual objects (324) and the second one or more intangible virtual objects (408) by adding or removing the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the captured scene (302) and/or by altering the visual properties of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408) in the mixed reality space (322).
28. The system (500) as claimed in claim 27, wherein the processing module (504) is configured to alter the visual properties by reshaping, recolouring, retexturing and relighting of the first one or more intangible virtual objects (324) and/or the second one or more intangible virtual objects (408).
29. The system (500) as claimed in claim 23, wherein the prestored data includes a database of the first one or more intangible virtual objects (324) having the first plurality of layers and the second one or more intangible virtual objects (408) having the second plurality of layers.
Dated this the 1st day of November 2019
[VIVEK DAHIYA]
AGENT FOR THE APPLICANT- IN/PA 1491
| Section | Controller | Decision Date |
|---|---|---|
| 15 and 43 | Santosh Gupta | 2021-10-27 |
| # | Name | Date |
|---|---|---|
| 1 | 201821041431-FORM-27 [09-04-2025(online)].pdf | 2025-04-09 |
| 2 | 201821041431-PROVISIONAL SPECIFICATION [01-11-2018(online)].pdf | 2018-11-01 |
| 3 | 201821041431-OTHERS [01-11-2018(online)].pdf | 2018-11-01 |
| 4 | 201821041431-PETITION UNDER RULE 137 [09-04-2025(online)].pdf | 2025-04-09 |
| 5 | 201821041431-RELEVANT DOCUMENTS [09-04-2025(online)].pdf | 2025-04-09 |
| 6 | 201821041431-FORM FOR STARTUP [01-11-2018(online)].pdf | 2018-11-01 |
| 7 | 201821041431-IntimationOfGrant27-10-2021.pdf | 2021-10-27 |
| 8 | 201821041431-FORM FOR SMALL ENTITY(FORM-28) [01-11-2018(online)].pdf | 2018-11-01 |
| 9 | 201821041431-PatentCertificate27-10-2021.pdf | 2021-10-27 |
| 10 | 201821041431-FORM 1 [01-11-2018(online)].pdf | 2018-11-01 |
| 11 | 201821041431-US(14)-HearingNotice-(HearingDate-18-08-2021).pdf | 2021-10-18 |
| 12 | 201821041431-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [01-11-2018(online)].pdf | 2018-11-01 |
| 13 | 201821041431-Written submissions and relevant documents [01-09-2021(online)].pdf | 2021-09-01 |
| 14 | 201821041431-DRAWINGS [01-11-2018(online)].pdf | 2018-11-01 |
| 15 | 201821041431-FORM-26 [19-11-2018(online)].pdf | 2018-11-19 |
| 16 | 201821041431-FORM-26 [17-08-2021(online)].pdf | 2021-08-17 |
| 17 | 201821041431-Correspondence to notify the Controller [16-08-2021(online)].pdf | 2021-08-16 |
| 18 | 201821041431-DRAWING [01-11-2019(online)].pdf | 2019-11-01 |
| 19 | 201821041431-CLAIMS [18-02-2021(online)].pdf | 2021-02-18 |
| 20 | 201821041431-COMPLETE SPECIFICATION [01-11-2019(online)].pdf | 2019-11-01 |
| 21 | 201821041431-ENDORSEMENT BY INVENTORS [18-02-2021(online)].pdf | 2021-02-18 |
| 22 | Abstract1.jpg | 2019-11-09 |
| 23 | 201821041431-FER_SER_REPLY [18-02-2021(online)].pdf | 2021-02-18 |
| 24 | 201821041431-STARTUP [08-07-2020(online)].pdf | 2020-07-08 |
| 25 | 201821041431-FORM 3 [18-02-2021(online)].pdf | 2021-02-18 |
| 26 | 201821041431-FORM28 [08-07-2020(online)].pdf | 2020-07-08 |
| 27 | 201821041431-FORM 18A [08-07-2020(online)].pdf | 2020-07-08 |
| 28 | 201821041431-FORM 4(iii) [18-02-2021(online)].pdf | 2021-02-18 |
| 29 | 201821041431-FER.pdf | 2020-08-17 |
| 30 | 201821041431-OTHERS [18-02-2021(online)].pdf | 2021-02-18 |
| 31 | 201821041431-PETITION UNDER RULE 137 [18-02-2021(online)]-1.pdf | 2021-02-18 |
| 32 | 201821041431-RELEVANT DOCUMENTS [18-02-2021(online)].pdf | 2021-02-18 |
| 33 | 201821041431-RELEVANT DOCUMENTS [18-02-2021(online)]-2.pdf | 2021-02-18 |
| 34 | 201821041431-PETITION UNDER RULE 137 [18-02-2021(online)]-2.pdf | 2021-02-18 |
| 35 | 201821041431-PETITION UNDER RULE 137 [18-02-2021(online)].pdf | 2021-02-18 |
| 36 | 201821041431-RELEVANT DOCUMENTS [18-02-2021(online)]-1.pdf | 2021-02-18 |
| 37 | 201821041431-Proof of Right [18-02-2021(online)].pdf | 2021-02-18 |
| 38 | 2020-08-1412-06-36E_14-08-2020.pdf | |