Abstract: A mobile-based method wherein image data of the objects or layouts of a real world, or of a wholly artificial one, is received and the images are rendered to two or more Users 101, one of whom acts as a Guide 201 under the present invention. The images are displayed as a 360 degree panoramic view on a mobile device based virtual reality system connected over the internet or Bluetooth. Both the user(s) and the guide initially get the same view of the image data. By viewing through the virtual reality system, the user(s) can experience various objects and scenes in the 360 degree panoramic image data. Further, through the use of the mobile-based method, the Guide 201 sees on his mobile device the same view, i.e. the image data, that is being seen by any user at any particular point of time. The Guide 201 provides his guidance notes and/or commentary on that particular image data. Further, through the mobile-based method and the mobile device application, the Guide 201 can change the image data displayed in the User’s 101 mobile device application.
Claims: We claim:
1. A mobile-based method, comprising:
• receiving image data of the objects or layouts of a real world or of a wholly artificial one;
• rendering a 360 degree panoramic view in a virtual reality environment;
• detecting the exact oriented view seen by the User 101 through a mobile based virtual reality system, using a motion sensor;
• facilitating approximately the same view as the User's in the Guide's 201 mobile device.
2. The method of claim 1, further comprising:
• facilitating, in the Guide's 201 mobile device, detailed views of various internal segments of the virtual objects;
• facilitating options for the Guide to change the view for the User by selecting respective options.
3. A mobile device comprising:
• a software application;
• a display (divided into two parts);
• a motion sensor;
• to detect and obtain the exact view being seen by the user while experiencing the 360 degree panoramic view in a virtual reality environment;
• to transfer the image data to the mobile memory;
• a memory where the image data is stored;
• a Bluetooth facility;
• an internet facility.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
“MONITORING AND GUIDING USER EXPERIENCES IN A VIRTUAL REALITY ENVIRONMENT”
APPLICANT
(a) NAME: INDIACOM LIMITED
(b) NATIONALITY: INDIAN
(c) ADDRESS: HERMES HERITAGE, PHASE 1 (COMMERCIAL), OPP. MSEB, SHASTRI NAGAR, PUNE-NAGAR ROAD, PUNE, MAHARASHTRA, INDIA, 411006
The following specification particularly describes the invention and the manner in which it is to be performed.
MONITORING AND GUIDING USER EXPERIENCES IN A VIRTUAL REALITY ENVIRONMENT
Inventors: Rahul Suresh Kulkarni, Srinivasa Venkoba Rao, Vik Mohapatra & Parag Ramesh Kadam
NOTATION AND NOMENCLATURE
[0001] Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, different technical companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function.
[0002] Further, the singular number shall include the plural and vice versa; the masculine gender shall include the feminine and neuter genders, the feminine gender shall include the masculine and neuter genders, the neuter gender shall include the masculine and feminine genders; and the words include and including, and variations thereof, shall not be deemed to be terms of limitation, but rather shall be deemed to be followed by the words without limitation.
FIELD OF INVENTION
[0003] The present invention relates to virtual reality (VR). More particularly, the present invention relates to a 360 degree panoramic view in a virtual reality experience wherein the experience of user(s) is being monitored and guided by another user (referred to as 'Guide' in this document).
BACKGROUND OF INVENTION (STATE OF ART)
[0004] Mobile based virtual reality systems (e.g. Google Cardboard, Microsoft VR Kit) are used by user(s), wherein total control of exploring the view shown by the mobile based virtual reality system lies with the user(s) only. The user(s) do not get any parallel guidance on the scene they are experiencing through the mobile based virtual reality system. Though the concept of multiple users seeing the same overall scene exists, the exact view of each of them differs based on which part of the scene they are looking at.
[0005] Therefore, there is a need in the art for an improvement that provides a solution to the above said shortcoming.
SUMMARY OF THE INVENTION
[0006] Embodiments of the present invention provide improvements over the state of the art. The present invention is based on a mobile method wherein image data of the objects or layouts of a real world, or of a wholly artificial one, is received. The images are rendered to two or more users, one of whom acts as a guide under the present invention. The images are displayed as a 360 degree panoramic view on a mobile device based virtual reality system connected over the internet or Bluetooth. Both the user(s) and the guide initially get the same view of the image data. By viewing through the virtual reality system, the user(s) can experience various objects and scenes in the 360 degree panoramic image data.
[0007] Further, through the use of a mobile-based method, the guide sees on his mobile device the exact view, that is, the exact image data that is being seen by any user at any particular point of time. The guide provides his guidance notes and/or commentary on that particular image data to individual user(s). Further, through the use of a mobile-based method and the mobile device application, the guide can change the image data displayed in the user's mobile device application.
OBJECT OF INVENTION
[0008] The primary object of this invention is to overcome the drawbacks of the known art.
[0009] An objective of this invention is to provide a medium for two or more users to simultaneously experience, on a virtual reality system, a 360 degree panoramic view of the objects or layouts of a real world or of a wholly artificial one.
[0010] Another object of the invention is to address the important need for a guidance element in this mobile based, 360 degree, panoramic, virtual reality experience shared by two or more users at the same time.
[0011] Another object is to help the users experience different views of the interior scenes of the real world object through the virtual reality system.
[0012] Another object of this invention is to provide sufficient interior scenes of the real world object from the virtual reality system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Having thus generally described the invention, reference will be made to the accompanying drawings, illustrating an embodiment thereof, in which:
Fig. 1 is a flow diagram which illustrates the process wherein the User 101 starts viewing the scene in the mobile based virtual reality system.
Fig. 2 illustrates the procedure adopted by the Guide 201 to start viewing the scene in the mobile device.
Fig. 3 is a flow diagram which illustrates how the exact view seen by the User 101 is transferred to the mobile device of the Guide 201.
Fig. 4 is a flow diagram which illustrates the process of mapping of sensor values into image coordinates.
Fig. 5 is a flow diagram which illustrates the conversion of the image coordinates into image fragments.
Fig. 6 illustrates the process wherein the Guide 201 is able to see the exact view seen by the User 101 and then monitors the viewing stages of the User 101.
Fig. 7 illustrates the process wherein the Guide 201 selects the views for the User 101 and then monitors the viewing stages of the User 101.
[0014] The figures depict a preferred embodiment of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein.
DETAILED DESCRIPTION
[0015] Embodiments of the present invention provide methods by which the user's experience in a virtual reality environment is monitored and guided for each and every view seen by the user.
[0016] Referring to FIG. 1, which illustrates the process wherein the User 101 starts viewing the scene in the mobile based virtual reality system. The key components involved in the invention are the mobile device 103, the mobile app 104 and the mobile based virtual reality system 107. At all times the mobile is connected to the Internet or to Bluetooth 102 and is active. In the beginning, the User installs the mobile app 104 on his mobile device 103. The mobile app is developed particularly in relation to the present invention, and portrays the 360 degree panoramic views of any scene (for example, the inside of a house). The views are shown in two parts on the mobile screen, and at all times both parts show the same particular view of the scene. Once the app starts showing the scene 105, the User loads the mobile device onto the mobile based virtual reality system 106. Subsequently, 108 the User either straps the mobile based virtual reality system onto his head 109 or holds it in his hands in front of his eyes to view. The User can then start turning or tilting his head to watch different interior views of the scene shown through the mobile based virtual reality system. The virtual reality system SDK facility in the app tracks the head movements 111 and renders continuous image fragments (of different interior views of the scene) to the mobile app 112, which ultimately shows the exact view of the scene available at approximately the same degree of turn/tilt of the User's head.
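A minimal sketch of how such head tracking could be wired up, assuming an Android implementation that uses the platform rotation-vector sensor; the class name HeadTracker and the callback are illustrative, not part of the specification:

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Illustrative head tracker: listens to the rotation-vector sensor and
// reports yaw (turn) and roll (tilt) angles, which drive the rendered view.
class HeadTracker(
    private val sensorManager: SensorManager,
    private val onOrientation: (yawRad: Float, rollRad: Float) -> Unit
) : SensorEventListener {

    private val rotationMatrix = FloatArray(9)
    private val orientationAngles = FloatArray(3)

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ROTATION_VECTOR) return
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
        SensorManager.getOrientation(rotationMatrix, orientationAngles)
        // orientationAngles = [azimuth (yaw, Z), pitch (X), roll (Y)], in radians.
        onOrientation(orientationAngles[0], orientationAngles[2])
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```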
[0017] Referring to FIG. 2, which illustrates the process wherein the Guide 201 starts viewing the scene in the mobile device. The mobile is connected to the Internet or to Bluetooth 202, which is active. In the beginning, the Guide installs the mobile app 204 on his mobile device 203. The mobile app is developed particularly in relation to the present invention, to track the actions going on in the User's mobile while the User watches the scenes. The views are shown in two parts on the mobile screen, and at all times both parts show the same particular view of the scene. Once the User starts turning or tilting his head and watches 205 different interior views of the scene through the mobile based virtual reality system, the Guide is able to track 206 and view the same on his mobile device through the mobile app loaded in his mobile. This process is described in further detail in Fig. 3.
[0018] In FIG. 1, the User installs the app and proceeds as explained above. In FIG. 2, the Guide installs the app and proceeds as explained above. It is not necessary, nor a compulsion in the present invention, as to who among the User and the Guide starts the app first: either of them can start first, or both can start simultaneously.
[0019] Referring to FIG. 3, which illustrates how the exact view seen by the User 101 is transferred to the mobile device of the Guide 201. When the User starts viewing the scene through the mobile based virtual reality system 109, the motion sensor listener service runs in the mobile app simultaneously 301. This motion sensor listener service renders the image fragments with each and every head movement of the User 302 and generates continuous sensor values 303. These sensor values get transferred 304 to the mapper algorithm in the app 305, where the sensor values are mapped to image coordinates 306. The detailed process of mapping sensor values to image coordinates is further explained in Fig. 4. The image coordinates are then transferred 307 to the server 308, where they are stored. The image coordinates are then transferred 309 to the mobile app present in the Guide's mobile device 203. With the internal technique present in the mobile app, these image coordinates are inflated into real image fragments 310. The detailed process of converting image coordinates into image fragments is further explained in Fig. 5. The real image fragments finally show the Guide the exact view seen by the User at that particular point of time. The whole process shown in Fig. 3 is continuous and changes primarily with the head movements of the User.
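A minimal sketch of the coordinate message that could be relayed through the server, assuming a JSON payload; the field names (sessionId, timestampMs) are assumptions, as the specification does not define the wire format:

```kotlin
// Illustrative payload relayed from the User's app, via the server,
// to the Guide's app (field names are assumptions, not from the spec).
data class ViewCoordinates(
    val sessionId: String,  // pairs a User with a Guide
    val x: Int,             // X image coordinate derived from yaw (Z)
    val y: Int,             // Y image coordinate derived from roll (Y)
    val timestampMs: Long   // ordering for the continuous stream
) {
    // Hand-rolled JSON to keep the sketch dependency-free; a real app
    // would use a serialization library.
    fun toJson(): String =
        """{"sessionId":"$sessionId","x":$x,"y":$y,"timestampMs":$timestampMs}"""
}

fun main() {
    val msg = ViewCoordinates("demo-session", 2048, 512, System.currentTimeMillis())
    println(msg.toJson())
}
```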
[0020] Referring to FIG. 4, which illustrates the process of mapping sensor values to image coordinates. As described above, when the User starts viewing the scene through the mobile based virtual reality system, the motion sensor listener service runs in the mobile app simultaneously. The motion sensor listener service 301 runs to obtain the Y (Roll) value 401 and the Z (Yaw) value 405 of the motion sensor. The Z (Yaw) value 405 of the motion sensor maps to the horizontal 406 length/path of the image and is thus calibrated (marked) to the appropriate X axis coordinate of the image 407. After determination of the X coordinate of the image, it is transferred 408 to the Server 308 for storage. The Y (Roll) value 401 of the motion sensor maps to the vertical 402 length/path of the image and is thus calibrated (marked) to the appropriate Y axis coordinate of the image 403. After determination of the Y coordinate of the image, it is transferred 404 to the Server 308 for storage.
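A minimal sketch of one plausible mapper, assuming a full equirectangular panorama where yaw spans -π..π across the image width and roll spans -π/2..π/2 across its height; the specification only states that yaw maps to X and roll maps to Y, so these ranges are assumptions:

```kotlin
import kotlin.math.PI

// Illustrative mapping of sensor angles (radians) to pixel coordinates
// on a panoramic image of size widthPx x heightPx.
fun yawToX(yawRad: Double, widthPx: Int): Int {
    val normalized = (yawRad + PI) / (2 * PI)    // -π..π  ->  0..1
    return (normalized * (widthPx - 1)).toInt()
}

fun rollToY(rollRad: Double, heightPx: Int): Int {
    val normalized = (rollRad + PI / 2) / PI     // -π/2..π/2  ->  0..1
    val clamped = normalized.coerceIn(0.0, 1.0)  // guard against overshoot
    return (clamped * (heightPx - 1)).toInt()
}

fun main() {
    // Looking straight ahead and level maps to the image centre.
    println(yawToX(0.0, 4096))   // -> 2047
    println(rollToY(0.0, 2048))  // -> 1023
}
```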
[0021] Referring to FIG. 5, which illustrates the conversion of the image coordinates into image fragments. The preceding paragraph describes the process of determining the X and Y coordinates of the image. In order to determine the exact placement of the view seen by the User, the X coordinate and the Y coordinate help in determining the approximate width and height of the image fragment respectively. As and when the X coordinates 501 are received, the internal process takes them as the left edge 502 of the image fragment to be created, and then the width 503 of the image fragment is constructed. Likewise, as and when the Y coordinates 504 are received, the internal process takes them as the top edge 505 of the image fragment to be created, and then the height 506 of the image fragment is constructed. Once the width and height of the image fragment are constructed, the actual position 507 of the image is determined. This is a continuous process.
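A minimal sketch of this inflation step, assuming the fragment size equals the viewport shown in the headset and that the received coordinates mark the fragment's left and top edges; both are assumptions, since the specification leaves the fragment dimensions open:

```kotlin
// Illustrative image-fragment rectangle reconstructed on the Guide's side
// from the received (x, y) coordinates. viewportW/viewportH are assumed
// to match the User's visible field of view.
data class Fragment(val left: Int, val top: Int, val width: Int, val height: Int)

fun inflateFragment(
    x: Int, y: Int,                 // coordinates received from the server
    viewportW: Int, viewportH: Int, // assumed fragment (viewport) size
    panoramaW: Int, panoramaH: Int  // full panorama size, for clamping
): Fragment {
    val left = x.coerceIn(0, panoramaW - viewportW)  // keep fragment inside the panorama
    val top = y.coerceIn(0, panoramaH - viewportH)
    return Fragment(left, top, viewportW, viewportH)
}

fun main() {
    // Coordinates near the right edge are clamped so the crop stays valid.
    println(inflateFragment(4000, 1000, 1280, 720, 4096, 2048))
    // -> Fragment(left=2816, top=1000, width=1280, height=720)
}
```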
[0022] Referring to FIG. 6, which illustrates the process wherein the Guide 201 is able to see the exact view seen by the User 101 and then monitors the viewing stages of the User 101. The Guide's mobile app would have different selection options in it to change the scenes. For example, in case an apartment is showcased through the app, the selection options can be 1. Drawing room, 2. Kitchen, 3. Bathroom, 4. Bedroom, etc. The Guide can click on any of the options. Suppose the Guide clicks on the Drawing room option; the name "Drawing room" (which is already pre-fed into both the User's and the Guide's apps) gets transferred 601 to the Server, which then transfers it 602 to the User's mobile app. The User then starts viewing 603 the Drawing room and the different things in it. As he looks through different views of the Drawing room, the image coordinates get sent 604 to the server, which in turn sends them 605 to the Guide's mobile app. The image coordinates are then converted 606 into image fragments in the Guide's mobile app, and the Guide starts seeing approximately the exact view 607 seen by the User at that particular point of time. Further, the Guide can comment appropriately on the view seen by the User.
[0023] Referring to FIG. 7, which illustrates the process wherein the Guide 201 selects the views for the User 101 and then monitors the viewing stages of the User 101. The Guide's mobile app 203 would have different selection options 701 in it to change the scenes. For example, in case an apartment is showcased through the app, the selection options can be 1. Drawing room, 2. Kitchen, 3. Bathroom, 4. Bedroom, etc. The Guide can click on any of the options 701. Suppose the Guide clicks on the Drawing room option; the name "Drawing room" (which is already pre-fed into both the User's and the Guide's apps) gets transferred 702 to the Server 308, which then transfers 704 it to the User's mobile app. The User 101 then starts viewing 706 the Drawing room and the different things in it.
[0024] The Guide's mobile app would also have further selection options (spotlight views of the image) within any of the options mentioned above, to change different views within the scenes. In the same example, after the Guide selects the Drawing room, the app will offer further selection options: 1. Painting on the wall, 2. Lamp on table, 3. Television, 4. Toys, etc. The Guide can click on any of these options. Suppose the Guide clicks on the "Painting on wall" option; the name "Painting on wall" (which is already pre-fed into both the User's and the Guide's apps) gets transferred 703 to the Server, which then transfers 705 it to the User's mobile app. After the name is received at the User's mobile app, the User's present view changes 707 to the spotlight view selected by the Guide. The User then starts viewing 708 the Painting on wall in the Drawing room. The User can then look at different parts of the painting, which, following the process described above, are seen by the Guide. Further, the Guide can comment appropriately on the view seen by the User.
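A minimal sketch of the scene/spotlight selection command the Guide's app could push through the server, assuming the pre-fed names double as lookup keys on the User's side; the data class, the asset map and the dispatch function are illustrative, not part of the specification:

```kotlin
// Illustrative scene-change command sent from the Guide's app, via the
// server, to the User's app. The scene and spotlight names are the
// pre-fed labels shared by both apps; everything else is an assumption.
data class SceneSelection(
    val sessionId: String,
    val scene: String,             // e.g. "Drawing room"
    val spotlight: String? = null  // e.g. "Painting on wall", or null for the whole scene
)

// On the User's side, a pre-fed lookup from label to panorama asset.
val sceneAssets = mapOf(
    "Drawing room" to "panoramas/drawing_room.jpg",
    "Kitchen" to "panoramas/kitchen.jpg"
)

fun applySelection(sel: SceneSelection) {
    val asset = sceneAssets[sel.scene] ?: return  // ignore unknown scenes
    println("Loading $asset" + (sel.spotlight?.let { ", spotlight: $it" } ?: ""))
    // A real app would now render this panorama and, if a spotlight is
    // set, re-centre the view on the stored coordinates of that object.
}

fun main() {
    applySelection(SceneSelection("demo-session", "Drawing room", "Painting on wall"))
}
```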
[0025] The above description explains the best mode of the invention with examples, so as to enable a layman to understand the invention. This description does not limit the invention to the precise terms set forth herein. Thus, while the invention has been described in detail with reference to the examples set forth above, those of ordinary skill in the art may effect alterations, modifications and variations to the examples without departing from the scope of the invention.
ADVANTAGES OF THE INVENTION
[0026] The present invention has the following advantages:
• The present invention helps the users experience a 360 degree panoramic view of the objects, layouts, scenarios or monuments of the real world, or of a wholly artificial one, on a mobile based virtual reality system.
• The present invention provides an easy-to-use platform for sellers/businesses to present their products in a better way than before, e.g. real estate builders/agents presenting their building rooms to prospective clients. Similarly, a teacher acting as a Guide is equipped to showcase to his students various locations, scenarios, monuments, objects, etc. Further, as the teacher is able to view exactly what the student is viewing, he is able to comment or guide the student(s) appropriately.
• The present invention improves the chances of success in a sale transaction, as the prospective customer can experience the whole thing or object even while sitting at home.
• The present invention is a great aid to educational institutes/teachers in giving a real-life and immersive experience of a scenario, monument or any object to their students and, more importantly, in guiding/commenting simultaneously on what the student sees and, if the student is not seeing what the teacher desires him to see, taking corrective action by loading the appropriate object/scenario/monument.
• The present invention helps the users experience different views of the interior scenes of the real world object from the mobile based virtual reality system.
• The present invention provides sufficient interior scenes of the real world object from the mobile based virtual reality system.
• The present invention is widely applicable across different industries.