
Method And System For Providing Visual Feedback In A Virtual Reality Environment

Abstract: In one embodiment, a method of providing visual feedback to a user in a virtual reality environment is disclosed. The method may comprise: capturing position and depth information of the user using a depth sensor; determining a potential interaction point between the user and a 3D virtual object associated with the virtual reality environment based on the position and depth information of the user; determining, using a virtual depth camera, depth of the 3D virtual object at the potential interaction point; calculating a distance between the user and the 3D virtual object based on the position and depth information of the user and the depth of the 3D virtual object; and rendering a soft shadow of the user on the 3D virtual object based on the distance between the user and the 3D virtual object.


Patent Information

Application #:
Filing Date: 16 July 2014
Publication Number: 31/2014
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: ipo@knspartners.com
Parent Application:

Applicants

WIPRO LIMITED
Doddakannelli, Sarjapur Road, Bangalore 560035, Karnataka, India.

Inventors

1. SREENIVASA REDDY MUKKAMALA
#1/436-6 Maruthi Nagar, Near Nagarjuna College, Kadapa, Andhra Pradesh
2. MANOJ MADHUSUDHANAN
#980, 1st Cross, 2nd Phase, 5th Stage, BEML Layout, Rajarajeshwari Nagar, Bangalore – 560098, Karnataka, India.

Specification

CLAIMS:
We claim:

1. A method of providing visual feedback to a user in a virtual reality environment, the method comprising:
capturing position and depth information of the user using a depth sensor;
determining a potential interaction point between the user and a 3D virtual object associated with the virtual reality environment based on the position and depth information of the user;
determining, using a virtual depth camera, depth of the 3D virtual object at the potential interaction point;
calculating a distance between the user and the 3D virtual object based on the position and depth information of the user and the depth of the 3D virtual object; and
rendering a soft shadow of the user on the 3D virtual object based on the distance between the user and the 3D virtual object.
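The steps of claim 1 can be sketched in code as follows. This is a minimal illustration, not the patent's reference implementation: the function names, the nearest-joint heuristic for picking the interaction point, and the single-axis (view-depth) distance are all assumptions introduced for clarity.

```python
import numpy as np

def potential_interaction_point(joint_positions, scene_plane_z):
    """Pick the tracked body joint closest in depth to the virtual
    scene as the potential interaction point (illustrative heuristic).
    joint_positions: (N, 3) array of [x, y, z] from the depth sensor."""
    depths = joint_positions[:, 2]
    idx = int(np.argmin(np.abs(depths - scene_plane_z)))
    return joint_positions[idx]

def interaction_distance(user_point, virtual_depth_map, pixel_uv):
    """Distance between the user's point and the 3D virtual object's
    surface, with the object's depth sampled from a virtual depth
    camera's depth map at the interaction point's pixel (u, v)."""
    u, v = pixel_uv
    object_depth = float(virtual_depth_map[v, u])
    return abs(float(user_point[2]) - object_depth)
```

The resulting distance then drives the soft-shadow rendering of the final step: a smaller distance produces a sharper, darker shadow at the interaction point.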

2. The method of claim 1, wherein the position and depth information of the user is determined from at least one of an image and a video captured by the depth sensor.

3. The method of claim 1, wherein the position and the depth information of the user comprises at least one of body joints position information and head position information.

4. The method of claim 1, wherein the soft shadow of the user corresponds to a body part of the user interacting with the virtual reality environment.

5. The method of claim 1, wherein the soft shadow is rendered by looking up a predefined color look-up table to convert the distance between the user and the 3D virtual object to a black color image with alpha transparency.

6. The method of claim 5 further comprising applying a spatially variant blur effect to the black color image, wherein the radius of the blur effect is based on the distance between the user and the 3D virtual object.
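The look-up-table conversion of claim 5 and the distance-dependent blur of claim 6 can be sketched as below. The linear LUT, the distance normalization, and the radius range are assumed example values; the patent does not specify them.

```python
import numpy as np

def distance_to_shadow_alpha(distance_m, lut=None, max_dist=0.5):
    """Convert user-to-object distance to shadow opacity via a
    predefined color look-up table: the closer the user, the more
    opaque the black shadow image. A linear 256-entry LUT is assumed."""
    if lut is None:
        lut = np.linspace(1.0, 0.0, 256)  # near -> opaque, far -> transparent
    idx = int(np.clip(distance_m / max_dist, 0.0, 1.0) * (len(lut) - 1))
    return lut[idx]

def blur_radius(distance_m, r_min=1.0, r_max=15.0, max_dist=0.5):
    """Spatially variant blur: the blur radius grows with distance, so
    the shadow's edges soften as the user moves away from the object."""
    t = np.clip(distance_m / max_dist, 0.0, 1.0)
    return r_min + t * (r_max - r_min)
```

In a full renderer, `blur_radius` would be evaluated per pixel and fed to a blur kernel (e.g. a Gaussian) applied to the black alpha image, giving the "soft" quality of the shadow.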

7. A system for providing visual feedback to a user in a virtual reality environment, comprising:
a processor; and
a memory disposed in communication with the processor and storing processor-executable instructions, the instructions comprising instructions to:
receive position and depth information of the user from a depth sensor;
determine a potential interaction point between the user and a 3D virtual object associated with the virtual reality environment based on the position and depth information of the user;
receive depth of the 3D virtual object at the potential interaction point from a virtual depth camera;
calculate a distance between the user and the 3D virtual object based on the position and depth information of the user and the depth of the 3D virtual object; and
render a soft shadow of the user on the 3D virtual object based on the distance between the user and the 3D virtual object.

8. The system of claim 7, wherein the position and depth information of the user is determined from at least one of an image and a video captured by the depth sensor.

9. The system of claim 7, wherein the position and the depth information of the user comprises at least one of body joints position information and head position information.

10. The system of claim 7, wherein the soft shadow of the user corresponds to a body part of the user interacting with the virtual reality environment.

11. The system of claim 7, wherein the instructions comprise instructions to render the soft shadow by looking up a predefined color look-up table to convert the distance between the user and the 3D virtual object to a black color image with alpha transparency.

12. The system of claim 11, wherein the instructions further comprise instructions to apply a spatially variant blur effect to the black color image, wherein the radius of the blur effect is based on the distance between the user and the 3D virtual object.

13. A non-transitory, computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations comprising:
receiving position and depth information of the user from a depth sensor;
determining a potential interaction point between the user and a 3D virtual object associated with the virtual reality environment based on the position and depth information of the user;
receiving depth of the 3D virtual object at the potential interaction point from a virtual depth camera;
calculating a distance between the user and the 3D virtual object based on the position and depth information of the user and the depth of the 3D virtual object; and
rendering a soft shadow of the user on the 3D virtual object based on the distance between the user and the 3D virtual object.

Dated this 16th day of July, 2014
Swetha S.N
Of K&S Partners
Agent for the Applicant
TECHNICAL FIELD
This disclosure relates generally to providing feedback to a user and more particularly to a method and system for providing visual feedback to a user in a virtual reality environment.

Documents

Application Documents

# Name Date
1 3494-CHE-2014 FORM-9 16-07-2014.pdf 2014-07-16
2 3494-CHE-2014-FER.pdf 2019-10-31
3 3494CHE2014_Certifiedcoyrequest.pdf 2015-03-13
4 3494-CHE-2014 FORM-18 16-07-2014.pdf 2014-07-16
5 3494CHE2014_Certifiedcoyrequest.pdf ONLINE 2015-02-18
6 3494-CHE-2014-Request For Certified Copy-Online(18-07-2014).pdf 2014-07-18
7 IP27891-SPEC.pdf 2014-07-23
8 3494-CHE-2014-Request For Certified Copy-Online(16-02-2015).pdf 2015-02-16
9 IP27891-FIG.pdf 2014-07-23
10 3494-CHE-2014 CORRESPONDENCE OTHERS 02-09-2014.pdf 2014-09-02
11 FORM 5.pdf 2014-07-23
12 3494-CHE-2014 FORM-1 02-09-2014.pdf 2014-09-02
13 FORM 3.pdf 2014-07-23
14 3494-CHE-2014 POWER OF ATTORNEY 02-09-2014.pdf 2014-09-02
15 3494CHE2014_CertifiedCopyRequest.pdf 2014-07-23

Search Strategy

1 SEARCH_STRATEGY_25-10-2019.pdf