Abstract: A system for interacting with virtual objects is provided. The system includes a display unit, a user-interaction controller, and a processor communicably coupled to each other to enable a user to interact with at least one virtual object displayed on the display unit. The processor is configured to determine a group of virtual objects based on matching a first filter criteria. Further, the processor is configured to detect a position and an orientation of the user-interaction controller based on tracking data, and to detect at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object. The processor determines whether the at least one virtual object of interest satisfies a second filter criteria and provides feedback to the user indicating that the at least one virtual object of interest satisfies the second filter criteria.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[SEE SECTION 10, RULE 13]
SYSTEMS AND METHODS FOR INTERACTING WITH VIRTUAL OBJECTS
TESSERACT IMAGING LIMITED, A CORPORATION ORGANISED AND EXISTING UNDER THE LAWS OF INDIA, WHOSE ADDRESS IS 5 TTC INDUSTRIAL AREA, RELIANCE CORPORATE IT PARK, THANE BELAPUR ROAD, GHANSOLI, NAVI MUMBAI, MAHARASHTRA – 400 701, INDIA
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to systems and methods for interacting with virtual objects, and more particularly relates to systems and methods for interacting with virtual objects in a two-dimensional and/or three-dimensional space.
BACKGROUND OF THE INVENTION
[0002] Nowadays, interactive devices such as, but not limited to, computers, laptops, mobile phones and handheld controllers are used for almost all day-to-day tasks. People use these interactive devices to simplify such tasks. Further, owing to complex lifestyles, a user may be required to complete multiple tasks during the day, wherein each task may be of a different nature and require a different level of interaction.
[0003] The interactive devices display visual interfaces provided with multiple virtual objects to aid users in performing a required task. In most cases, the visual interfaces may be crowded with virtual objects that the user does not require, resulting in an unpleasant user interaction experience.
[0004] Further, multiple applications may be required to be installed on the interactive device to enable multiple tasks to be carried out. Each of the applications may have multiple virtual objects that appear on the display screen that are pre-built with the application, and there may be multiple virtual objects that are created in response to the user's input during the course of usage of the application. Therefore, when the user opens a particular application using the interactive device, the multiple virtual objects that are pre-built and created are required to be loaded, thereby hampering the processing capabilities of the interactive device. Further, filtering the virtual objects of interest for a particular task becomes a cumbersome and time-consuming activity for the user. During the filtering process, there is a tendency that virtual objects mandatorily required for the operation of the application may be erroneously filtered out by the user, or that virtual objects which are not required are left unfiltered.
[0005] In view of the above, there is a dire need for efficient systems and methods for interacting with virtual objects in two-dimensional and/or three-dimensional space.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present invention provide a system and method for interacting with virtual objects.
[0007] In one aspect of the invention, a system is provided. The system comprises a display unit configured to display a visual scene to a user. A user-interaction controller communicably coupled to the display unit is configured to enable the user to interact with at least one virtual object of a plurality of virtual objects displayed on the visual scene of the display unit. A processor, communicably coupled to the display unit and the user-interaction controller, is configured to determine a group of virtual objects from the plurality of virtual objects based on matching a first filter criteria. The information pertaining to the group of virtual objects is stored. Thereafter, the processor detects a position and an orientation of the user-interaction controller based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller. Further, the processor detects, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects. The processor determines whether the at least one virtual object of interest satisfies a second filter criteria and provides feedback to the user indicating that the at least one virtual object of interest satisfies the second filter criteria in response to determining that the at least one virtual object of interest satisfies the second filter criteria.
[0008] In yet another aspect of the invention, a method is provided. The method comprises the steps of: determining, by a processor, a group of virtual objects from a plurality of virtual objects based on matching a first filter criteria, the plurality of virtual objects being displayed on a visual scene of a display unit; storing, by the processor, information pertaining to the group of virtual objects; detecting, by the processor, a position and an orientation of a user-interaction controller based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller; detecting, by the processor, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects; determining, by the processor, whether the at least one virtual object of interest satisfies a second filter criteria; and providing, by the processor, feedback to the user indicating that the at least one virtual object of interest satisfies the second filter criteria in response to determining that the at least one virtual object of interest satisfies the second filter criteria.
[0009] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Reference will be made to embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. The accompanying figures, which are incorporated in and constitute a part of the specification, are illustrative of one or more embodiments of the disclosed subject matter and together with the description explain various embodiments of the disclosed subject matter and are intended to be illustrative. Further, the accompanying figures have not necessarily been drawn to scale, and any values or dimensions in the accompanying figures are for illustration purposes only and may or may not represent actual or preferred values or dimensions. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
[0011] FIG. 1 is an environment for a system for interacting with virtual objects, according to one or more embodiments of the present invention;
[0012] FIG. 2 is an environment illustrating a visual scene displayed on a display unit, according to one or more embodiments of the present invention;
[0013] FIG. 3 illustrates determination of a group of virtual objects, according to one or more embodiments of the present invention;
[0014] FIG. 4 illustrates an exemplary embodiment of determination of a group of virtual objects based on a first filter criteria, according to one or more embodiments of the present invention;
[0015] FIG. 5 illustrates an exemplary embodiment of a visual ray having a geometric shape of a cylinder originating from a user-interaction controller, according to one or more embodiments of the present invention;
[0016] FIG. 6 illustrates an exemplary embodiment of a visual ray having a geometric shape of a cone originating from a user-interaction controller, according to one or more embodiments of the present invention;
[0017] FIG. 7 illustrates an exemplary embodiment of determination of whether a virtual object of interest satisfies a second filter criteria, according to one or more embodiments of the present invention; and
[0018] FIG. 8 illustrates a flowchart of a method for interacting with virtual objects, according to one or more embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] Reference will now be made in detail to specific embodiments or features, examples of which are illustrated in the accompanying drawings. Wherever possible, corresponding or similar reference numbers will be used throughout the drawings to refer to the same or corresponding parts. References to various elements described herein are made collectively or individually when there may be more than one element of the same type. However, such references are merely exemplary in nature. It may be noted that any reference to elements in the singular may also be construed to relate to the plural and vice-versa without limiting the scope of the invention to the exact number or type of such elements unless set forth explicitly in the appended claims. Moreover, relational terms such as first and second, and the like, may be used to distinguish one entity from another, without necessarily implying any actual relationship or order between such entities.
[0020] Various embodiments of the invention provide systems and methods for interacting with virtual objects. The present invention is configured to provide efficient systems and methods to interact with virtual objects in two-dimensional and three-dimensional spaces.
[0021] In one aspect of the invention, a system is provided. The system comprises a display unit configured to display a visual scene to a user. A user-interaction controller communicably coupled to the display unit is configured to enable the user to interact with at least one virtual object of a plurality of virtual objects displayed on the visual scene of the display unit. A processor, communicably coupled to the display unit and the user-interaction controller, is configured to determine a group of virtual objects from the plurality of virtual objects based on matching a first filter criteria. The information pertaining to the group of virtual objects is stored. Thereafter, the processor detects a position and an orientation of the user-interaction controller based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller. Further, the processor detects, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects. The processor determines whether the at least one virtual object of interest satisfies a second filter criteria and provides feedback to the user indicating that the at least one virtual object of interest satisfies the second filter criteria in response to determining that the at least one virtual object of interest satisfies the second filter criteria.
[0022] In yet another aspect of the invention, a method is provided. The method comprises the steps of: determining, by a processor, a group of virtual objects from a plurality of virtual objects based on matching a first filter criteria, the plurality of virtual objects being displayed on a visual scene of a display unit; storing, by the processor, information pertaining to the group of virtual objects; detecting, by the processor, a position and an orientation of a user-interaction controller based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller; detecting, by the processor, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects; determining, by the processor, whether the at least one virtual object of interest satisfies a second filter criteria; and providing, by the processor, feedback to the user indicating that the at least one virtual object of interest satisfies the second filter criteria in response to determining that the at least one virtual object of interest satisfies the second filter criteria.
[0023] In view of the above, indicated below are a few of the technical advantages/technical effects derived therefrom.
[0024] The system and method provided above may be utilized for applications related to two-dimensional as well as three-dimensional spaces. Further, the system and method may also be utilized in a combination of two-dimensional and three-dimensional spaces.
[0025] The visual scene displayed by the display unit may be two-dimensional or three-dimensional in nature, thereby providing flexibility to the user to interact with the virtual objects for a wide range of applications.
[0026] The user-interaction controller of the present invention may be a wired or wireless user-interaction controller. The user-interaction controller may have the capability to provide various controls to the user to interact with the virtual objects displayed in the visual scene.
[0027] In another embodiment, the user-interaction controller may have a plurality of controls which may be customized by the user based on preferences, thereby advantageously enabling the user to seamlessly experience a customized level of interaction.
[0028] The processor of the system/method provided above may group the virtual objects based on matching the first filter criteria, advantageously ensuring that the number of virtual objects to be considered is reduced, thereby improving the processing speed of the processor and reducing the time for the user to complete the task.
[0029] Further, the processor may store the information pertaining to the group of virtual objects in a database. Further, the processor may store recent information of the group of virtual objects in a cache memory to enable quick reference to the group of virtual objects. In this regard, the information of the group of virtual objects may automatically be deleted upon expiration of a pre-defined/user-defined time period, advantageously enabling the processor to quickly access the cache memory to refer to recent information about the group of virtual objects stored within the pre-defined/user-defined time period.
[0030] Further, the processor may detect a position and an orientation of the user-interaction controller, thereby accurately detecting at least one virtual object of interest from amongst the group of virtual objects that the user-interaction controller is pointing towards.
[0031] Further, the processor may determine whether the at least one virtual object of interest satisfies a second filter criteria. In case the second filter criteria are satisfied, the processor provides feedback to the user on the same. In an embodiment, the feedback may be provided in real time for the user to make a prompt decision on carrying out a task/sub-task.
[0032] It is to be noted that the above indicated technical advantages/technical effects are purely exemplary in nature and should nowhere be construed as limiting the scope of the present invention.
[0033] Fig. 1 illustrates an example of an environment for interacting with virtual objects, according to one or more embodiments of the present invention. The environment includes a system 100, a wearable device 150, a means for tracking position and orientation 160, a cache memory 170, a database 180 and an application server 190. The system 100, the wearable device 150, the means for tracking position and orientation 160, the cache memory 170, the database 180 and the application server 190, communicate with each other over a communications network 195.
[0034] The communications network 195 may be one of, but not limited to, a wired local area network (LAN), a wireless local area network (WLAN), a cellular network, or a satellite network.
[0035] In a preferred embodiment, the wearable device 150 is a head mounted display (HMD).
[0036] In an alternate embodiment, the wearable device 150 is a watch.
[0037] In an embodiment, the means for tracking the position and orientation 160 comprises, but is not limited to, a position sensor. The types of position sensors that may be used include at least one of, but are not limited to, linear, rotary and angular position sensors, and a combination thereof.
[0038] Further, the means for tracking the position and orientation 160 comprises, but is not limited to, an orientation sensor. The types of orientation sensors that may be used include at least one of, but are not limited to, relative, absolute and geomagnetic orientation sensors, and a combination thereof.
[0039] In an embodiment, the system 100 includes a display unit 102, a user-interaction controller 104 and a processor 106.
[0040] The display unit 102, the user-interaction controller 104, the wearable device 150, the means for tracking position and orientation 160, the cache memory 170, the database 180 and the application server 190 communicate with the processor 106.
[0041] As shown in Fig. 1, the display unit 102 is configured to display a visual scene 108 to a user.
[0042] In an embodiment, the display unit 102 is one of, but not limited to, a laptop, a desktop, a mobile phone, a television and a combination thereof. With respect to these types of display units, the visual scene is displayed on the display unit 102 itself in a two-dimensional space as shown in Fig. 2.
[0043] In an alternate embodiment, the display unit 102 is part of the wearable device 150 as shown in Fig. 1. With reference to the example environment of Fig. 1, the wearable device 150 is a head mounted display (HMD) that the user is wearing, and the display unit 102 is part of the HMD. The display unit 102 of the HMD may display the visual scene 108 at a pre-defined distance in an Extended Reality (XR) environment in a three-dimensional space. It is known in the art that XR covers representative forms such as, but not limited to, augmented reality, mixed reality and virtual reality. In this regard, the visual scene 108 may be projected by the display unit 102 of the HMD at the pre-defined distance from the HMD.
[0044] In an embodiment, the visual scene 108 is an interface that comprises a plurality of virtual objects 110. As indicated above, if the visual scene 108 is two-dimensional in nature, then the plurality of virtual objects 110 may also be two-dimensional. Alternatively, if the visual scene 108 is three-dimensional in nature, then the plurality of virtual objects 110 may also be three-dimensional.
[0045] In an embodiment, the plurality of virtual objects 110 are icons displayed in the visual scene 108. Each virtual object 110 has at least one function which may be pre-defined or customized by the user based on preferences. The virtual objects 110 enable the user to perform multiple tasks/sub-tasks.
[0046] In an embodiment, the plurality of virtual objects 110 present in the visual scene 108 may be pre-built with an application and may also be created during the course of usage of the application by the user.
[0047] With reference to Fig. 1, the user-interaction controller 104 of the system 100 is communicably coupled to the display unit 102. The user-interaction controller 104 is configured to enable the user to interact with at least one virtual object of a plurality of virtual objects 110 displayed on the visual scene 108 of the display unit 102.
[0048] In an embodiment, the user-interaction controller 104 is embedded with the means for tracking the position and the orientation 160 of the user-interaction controller 104.
[0049] In an alternate embodiment, the means for tracking the position and orientation 160 of the user-interaction controller 104 may be located as a separate entity which is in communication with the user-interaction controller 104 in order to track the position and the orientation of the user-interaction controller 104.
[0050] In yet another alternate embodiment, the user-interaction controller 104 may be a third-party device which may be customized as the user-interaction controller 104 by embedding the position and orientation sensors therein.
[0051] In an embodiment, the user-interaction controller 104 may have the capability to provide at least one control to the user to interact with the virtual objects 110 displayed in the visual scene 108 of the display unit 102.
[0052] In another embodiment, the user-interaction controller 104 may have a plurality of controls which may be customized by the user based on preferences, thereby advantageously enabling the user to experience a customized level of interaction with the virtual objects 110.
[0053] The processor 106 of the system 100 as shown in Fig. 1, which is in communication with the display unit 102 and the user-interaction controller 104, is configured to carry out various steps in order to enable the user to interact with the virtual objects 110 displayed in the visual scene 108 of the display unit 102.
[0054] In an embodiment, the processor 106 is a separate entity, that is in communication with the display unit 102 and the user-interaction controller 104.
[0055] In an alternate embodiment, the processor 106 is embedded in the user-interaction controller 104.
[0056] The processor 106 of the system 100 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 106 is configured to fetch and execute computer-readable instructions stored in a memory.
[0057] The cache memory 170 and the database 180 referred to hereinafter in general include memory and any other storage means and/or units, and may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0058] In an embodiment, the processor 106 is configured to retrieve, from an application server 190, information about the properties of each of the plurality of virtual objects 110 which are displayed or are to be displayed on the visual scene, and to store the same in the database 180. For example, if application A, which is displayed in the visual scene 108 as shown in Fig. 3, includes virtual objects 110 such as electronic gadgets, clothes and shoes, the processor 106 retrieves the relevant information of each of those virtual objects 110 from the application server 190 and stores the same in the database 180. The processor 106 retrieves the relevant information of each of these virtual objects 110 by accessing the information as stored in the application server 190 of the specific application, herein application A.
[0059] In addition, the processor 106 is configured to receive information pertaining to a position and orientation of each virtual object 110 displayed in the visual scene 108 and store the information in the database 180.
[0060] In addition, in case the application server 190 does not have the required information on the properties and the position and orientation of each virtual object 110 displayed in the visual scene 108, the processor 106 is configured to ping other relevant servers/databases in order to retrieve such information.
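By way of a purely illustrative, non-limiting sketch, the property information of paragraphs [0058] to [0060] may be represented as below. The Python record layout, field names and the server call are assumptions made for illustration only.

```python
# Illustrative sketch only: the record layout and the server API are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualObjectRecord:
    object_id: str
    object_class: str                        # e.g. "electronic gadgets/devices"
    sub_classes: List[str] = field(default_factory=list)       # e.g. ["mobile phone", "Apple", "black"]
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)     # position in the visual scene
    orientation: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # orientation in the visual scene

def fetch_object_records(app_server, fallback_servers) -> List[VirtualObjectRecord]:
    """Retrieve property records from the application server 190, pinging other
    relevant servers/databases when the required information is missing."""
    for server in (app_server, *fallback_servers):
        records = server.get_object_properties()   # hypothetical server call
        if records:
            return list(records)
    return []
```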
[0061] The processor 106 is further configured to detect a position and an orientation of the user-interaction controller 104 in real time. In order to do so, the processor 106 is communicably coupled to the means for tracking the position and orientation 160 of the user-interaction controller 104. Accordingly, the means for tracking the position and the orientation 160 communicates the position and the orientation of the user-interaction controller 104 to the processor 106. As discussed above, the means for tracking the position and the orientation 160 of the user-interaction controller 104 may be embedded within the user-interaction controller 104 itself or may be a separate entity communicably coupled to the user-interaction controller 104.
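Similarly, a minimal sketch of how the real-time tracking data of the means for tracking 160 may be consumed is given below; the tracker interface and the pose representation (a unit direction vector for the orientation) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ControllerPose:
    position: Tuple[float, float, float]     # position of the user-interaction controller
    orientation: Tuple[float, float, float]  # assumed unit direction vector

def poll_controller_pose(tracker) -> ControllerPose:
    """Read the latest real-time tracking sample; latest_sample() is a
    hypothetical call standing in for the means for tracking 160."""
    sample = tracker.latest_sample()
    return ControllerPose(tuple(sample.position), tuple(sample.orientation))
```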
[0062] Once the information of the properties of each virtual object 110, the position and orientation of each virtual object 110 and the position and orientation of the user-interaction controller 104 is determined, the processor 106 determines a group of virtual objects 302 from the plurality of virtual objects based on matching a first filter criteria, as shown in Fig. 3. The group of virtual objects 302 is considered to satisfy the first filter criteria if the group of virtual objects 302 belongs to a particular class. Let us consider that the user using application A as displayed in the visual scene 108 is interested in a particular class comprising electronic gadgets/devices. In this regard, the user may indicate the interest in the class “electronic gadgets/devices” via at least one control provided on the user-interaction controller 104 or the wearable device 150, herein the HMD.
[0063] In an embodiment, the user may provide inputs via the at least one control provided on the user-interaction controller 104, the HMD and a combination thereof.
[0064] In an embodiment, the at least one control may be at least one of, but not limited to, input buttons provided on the user-interaction controller 104, voice commands using the user-interaction controller 104 or the wearable device 150 such as the HMD, gesture controls using either the user-interaction controller 104 or the HMD.
[0065] In an alternative embodiment, the user may provide inputs via the at least one control configured on a third-party device which is connected to the processor 106.
[0066] In another alternative embodiment, the user may indicate the interest in the class, by selecting a drop-down menu or by entering/typing the class from the plurality of classes on a search bar displayed on the application, herein ‘application A’ of the visual scene 108.
[0067] Based on the interest indicated by the user via the at least one control, the first filter criteria are set by the processor 106 to include the class “electronic devices/gadgets”. Based on the first filter criteria, the processor 106 performs a match of the properties of each virtual object 110 displayed in the visual scene 108 with the first filter criteria, “electronic devices/gadgets”, and determines the group of virtual objects 302 that belong to the class “electronic devices/gadgets”.
[0068] In an embodiment, the processor 106 matches the properties of each virtual object 110 with the first filter criteria based on comparing the properties of the said virtual object 110, stored in the database 180 as a configuration file or any other relevant file, with the first filter criteria, as shown in Fig. 4.
[0069] In case the processor 106 determines that the virtual object 110 does not satisfy the first filter criteria, the processor 106 may mask the said virtual object 110, such as, but not limited to, by hiding the virtual object 110, marking the virtual object 110 in a particular colour to indicate that the virtual object 110 is not relevant, or deleting the virtual object 110 temporarily.
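The grouping and masking described in paragraphs [0067] to [0069] may, purely by way of example, be sketched as follows over the stored records; the flat class comparison is a simplifying assumption.

```python
def determine_group(records, first_filter_class):
    """Split the plurality of virtual objects into the group matching the first
    filter criteria and the remainder to be masked (hidden, recoloured or
    temporarily deleted)."""
    group, to_mask = [], []
    for record in records:
        if record.object_class == first_filter_class:
            group.append(record)
        else:
            to_mask.append(record)
    return group, to_mask

# e.g. group, to_mask = determine_group(records, "electronic devices/gadgets")
```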
[0070] Once the group of virtual objects 302 is determined from amongst the plurality of virtual objects 110, the processor 106 stores the information of the group of virtual objects 302 in the database 180. In addition, the processor 106 may also store the information of the group of virtual objects 302 in the cache memory 170, as shown in Fig. 1, for immediate reference. Advantageously, the processor 106 may refer to the recent data of the group of virtual objects 302 quickly if required. Further, once the pre-defined or user-defined time period has expired, the processor 106 may ensure that the cache memory 170, including the data of the group of virtual objects 302, is cleared for fresh loading of data therein. Advantageously, the processor 106 ensures that there is no lag in enabling the user to interact with the virtual objects in real time.
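The cache behaviour of paragraph [0070] may be sketched as below; the time-to-live mechanics shown are an illustrative assumption.

```python
import time

class GroupCache:
    """Keeps recent information of the group of virtual objects for quick
    reference and clears it once the pre-defined/user-defined period expires."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._group = None
        self._stored_at = 0.0

    def store(self, group) -> None:
        self._group, self._stored_at = group, time.monotonic()

    def load(self):
        if self._group is not None and time.monotonic() - self._stored_at < self.ttl:
            return self._group   # fresh entry: served from the cache
        self._group = None       # expired: cleared for fresh loading of data
        return None
```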
[0071] Thereafter, the processor 106 is configured to detect at least one virtual object of interest 304 from amongst the group of virtual objects 302 in response to the user-interaction controller 104 pointing to the at least one virtual object amongst the group of virtual objects 302. More specifically, based on the pointing of the user-interaction controller 104 towards the at least one virtual object from amongst the group of virtual objects 302, the processor 106 is configured to control a projection of a visual ray 306 having a dynamically generated or pre-defined shape originating from the user-interaction controller 104 towards the at least one virtual object of interest 304 displayed in the visual scene 108, as shown in Fig. 3.
[0072] In a preferred embodiment, the processor 106 is configured to control the projection of the visual ray 306 having a dynamically generated shape based on the input of the first filter criteria for a particular class, as received from the user via the at least one control of the user-interaction controller 104. For example, if the first filter criteria selected is “electronic devices/gadgets”, then based on that particular class, the processor 106 is configured to dynamically generate the shape of the visual ray 306, such as, but not limited to, a line, a beam, a cylinder and a cone. The processor 106 may utilize the information of the class of the first filter criteria and the position and orientation of the user-interaction controller 104 to control the projection of the visual ray 306 having the dynamically generated geometric shape, in order to ensure the visual ray 306 reaches the at least one virtual object of interest 304 irrespective of crowding and overlap of the group of virtual objects in the visual scene, as seen in Figs. 3, 5 and 6. Advantageously, the processor 106 ensures that the at least one virtual object of interest 304 is targeted, thereby enhancing the accuracy of the interaction of the user with the virtual objects. As seen in these figures, the processor 106 may change the shape of the visual ray dynamically based on the first filter criteria belonging to at least one class.
[0073] In another embodiment, let us consider that the user is interested in a particular virtual object from the group of virtual objects 302 that has been determined. In this regard, the user may point towards the virtual object of interest 304 using the user-interaction controller 104 or the wearable device 150 such as the HMD. Based on the pointing by the user-interaction controller 104 towards the virtual object of interest, the processor 106 may project the visual ray 306 having pre-defined shapes towards the virtual object of interest 304, as shown in Figs. 3, 5 and 6, in a pre-defined order. In particular, in case the virtual object of interest 304 is overlapping or crowded, the processor 106 may project the visual ray having the pre-defined shapes in the pre-defined order to advantageously ensure that the visual ray 306 points to the exact virtual object of interest 304. For example, the processor 106 may first project the visual ray 306 having a pre-defined shape of a line to check if the same reaches the virtual object of interest. In case the visual ray 306 does not reach the virtual object of interest, the processor 106 may project the visual ray having other geometric shapes, such as a cone, a cylinder, etc., in the pre-defined order to ensure the visual ray reaches the virtual object of interest.
[0074] In another embodiment, in response to the user-interaction controller 104 pointing to the at least one virtual object amongst the group of virtual objects 302, the processor 106, based on the combination of the position and orientation of each virtual object of the group of virtual objects 302 and the position and orientation of the user-interaction controller 104, is configured to control the projection of the visual ray 306 having a dynamically generated geometric shape, originating from the user-interaction controller 104 towards the at least one virtual object of interest 304. The processor 106 may utilize the information of the position and orientation of each virtual object and the position and orientation of the user-interaction controller 104 to control the projection of the visual ray 306 having the dynamically generated geometric shape, in order to ensure the visual ray 306 reaches the at least one virtual object of interest 304 irrespective of crowding of the group of virtual objects in the visual scene and overlap of the virtual objects amongst the group of virtual objects. Advantageously, the processor 106 ensures that the at least one virtual object of interest 304 is targeted, thereby enhancing the accuracy of the interaction of the user with the virtual objects.
[0075] In another embodiment, in case the virtual object of interest 304 is hidden by other virtual objects, the processor 106 is configured to dynamically extend the visual ray 306 having a dynamically generated shape to an extent that the visual ray 306 is able to pass through the non-relevant virtual objects and reach the virtual object of interest 304.
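A highly simplified sketch of the shape-ordered hit test of paragraphs [0071] to [0075] follows, reusing the illustrative pose and record types sketched earlier; the geometry, the tolerances and the pre-defined shape order are all assumptions.

```python
import math

SHAPE_ORDER = ("line", "cylinder", "cone")   # assumed pre-defined order of shapes

def ray_hits(pose, record, shape, radius=0.05, half_angle=math.radians(5.0)) -> bool:
    """Project the object position onto the visual ray cast from the controller
    pose and test the perpendicular offset against the shape's cross-section."""
    dx = [o - p for o, p in zip(record.position, pose.position)]
    d = pose.orientation                               # assumed unit direction
    along = sum(a * b for a, b in zip(dx, d))          # distance along the ray
    if along <= 0.0:
        return False                                   # object is behind the controller
    perp = math.sqrt(max(sum(a * a for a in dx) - along * along, 0.0))
    if shape == "line":
        return perp < 1e-3                             # thin line: tight tolerance
    if shape == "cylinder":
        return perp <= radius                          # constant-radius beam
    return perp <= along * math.tan(half_angle)        # cone widens with distance

def detect_object_of_interest(pose, group):
    """Try each shape in the pre-defined order until the visual ray reaches a
    virtual object of interest, even amongst crowded or overlapping objects."""
    for shape in SHAPE_ORDER:
        for record in group:
            if ray_hits(pose, record, shape):
                return record, shape
    return None, None
```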
[0076] In view of the above, the usage of the visual ray of various geometric shapes, the usage of the first filter criteria, the usage of the information of the position and orientation of each virtual object from amongst the group of virtual objects, and the usage of the information of the position and orientation of the user-interaction controller 104 advantageously ensure that the at least one virtual object of interest 304 that the user is interested in is identified substantially accurately by the processor 106, thereby overcoming the issue of inaccurate identification of virtual objects of interest in the prior art.
[0077] In an embodiment, the virtual object of interest 304 is that virtual object that is of interest to the user.
[0078] In an embodiment, the geometric shape of the visual ray is at least one of, but not limited to, a line, a cylinder, a cone, a Bezier curve or a combination thereof.
[0079] Subsequent to detection of the at least one virtual object of interest 304, the processor 106 is configured to determine if the at least one virtual object of interest 304 satisfies a second filter criteria.
[0080] In an embodiment, the at least one virtual object of interest 304 is considered to satisfy the second filter criteria if the at least one virtual object of interest 304 belongs to one or more sub-classes.
[0081] In an embodiment, the second filter criteria, similar to the first filter criteria, is one of user-defined and pre-defined.
[0082] For example, let us consider that the user is interested in a “mobile phone”, being a first filtering parameter of the second filter criteria. Further, the second filtering parameter is that the mobile phone belongs to the brand “Apple”, and the third filtering parameter is that the mobile phone has the colour “black”. In this regard, the user may provide the filtering parameters for the second filter criteria as inputs via the at least one control of the user-interaction controller 104 or the HMD. Based on these inputs, the processor 106 sets the second filter criteria as the sub-class “mobile phone”, the sub-sub-class “Apple” and the sub-sub-sub-class “black”. Based on the second filter criteria, the processor 106 performs a match of the properties of the virtual object of interest 304 displayed in the visual scene 108 with the second filter criteria and determines whether the virtual object of interest 304 satisfies the second filter criteria, as shown in Fig. 7.
[0083] In an embodiment, the processor 106 matches the properties of the virtual object of interest 304 with the second filter criteria based on comparing the properties of the virtual object of interest 304 stored in the database 180 as a configuration file or any other relevant file, with the second filter criteria.
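The sub-class match of paragraphs [0080] to [0083] may be sketched as follows; treating the sub-class/sub-sub-class hierarchy as a flat list of required labels is a simplifying assumption.

```python
def satisfies_second_filter(record, required_sub_classes) -> bool:
    """True when the virtual object of interest carries every filtering
    parameter of the second filter criteria."""
    return all(label in record.sub_classes for label in required_sub_classes)

# e.g. satisfies_second_filter(obj, ["mobile phone", "Apple", "black"])
```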
[0084] The above indicated second filter criteria is just an exemplary embodiment; many more filter criteria may be set. Therefore, merely illustrating two filter criteria should not be construed as limiting the scope of the present invention.
[0085] Pursuant to the determination that the at least one virtual object of interest 304 satisfies the second filter criteria, the processor 106 is configured to provide feedback to the user indicating that the at least one virtual object of interest 304 satisfies the second filter criteria.
[0086] In an embodiment, the feedback provided to the user is at least one of, but not limited to, visual feedback, auditory feedback, haptic feedback and a combination thereof.
[0087] In an embodiment, the visual feedback in the event of satisfying the second filter criteria may be at least one of, but not limited to, highlighting the at least one virtual object of interest, change in colour of the at least one virtual object of interest, brightening of the at least one virtual object of interest and providing a visual indication on the user-interaction controller itself.
[0088] In an embodiment, the auditory feedback in the event of satisfying the second filter criteria may include, but is not limited to, providing a specific sound.
[0089] In an embodiment, the haptic feedback in the event of satisfying the second filter criteria may include, but is not limited to, providing a sense of touch through vibrations or motions on the user-interaction controller 104 or on any other device in contact with the user.
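The feedback channels of paragraphs [0086] to [0089] may be dispatched as in the non-limiting sketch below; each stand-in function represents a device-specific call and is an assumption.

```python
def highlight(record):                       # stand-in for a rendering call
    print(f"highlighting {record.object_id}")

def play_sound(clip: str):                   # stand-in for an audio call
    print(f"playing {clip}")

def vibrate_controller(duration_ms: int):    # stand-in for a haptics call
    print(f"vibrating controller for {duration_ms} ms")

def provide_feedback(record, channels=("visual", "auditory", "haptic")) -> None:
    """Indicate that the virtual object of interest satisfies the second
    filter criteria, over one or more feedback channels."""
    for channel in channels:
        if channel == "visual":
            highlight(record)
        elif channel == "auditory":
            play_sound("match-tone")
        elif channel == "haptic":
            vibrate_controller(200)
```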
[0090] In an embodiment, in the event of determining that the at least one virtual object of interest 304 does not satisfy the second filter criteria, the processor 106 is configured to ignore the at least one virtual object of interest 304, thereby advantageously ensuring that the virtual objects are filtered and the number of virtual objects is reduced for further processing.
[0091] In an embodiment, the processor 106 ignores the at least one virtual object of interest 304 utilizing techniques such as, but not limited to, masking, such as by hiding the virtual object of interest, deleting the virtual object of interest temporarily, or highlighting the virtual object of interest.
[0092] Further, in an example embodiment, the current system 100 may be used for playing games such as, but not limited to, memory-based games. Let us consider that the user is shown five virtual objects in the visual scene 108 in a certain chronological order. The processor 106 ensures that the user-interaction controller 104 scans these five virtual objects in the chronological order, and thereafter the processor 106 stores the chronological order of these five virtual objects in the database 180 or the cache memory 170. During the gaming session, let us consider that ten virtual objects are presented to the user in the visual scene and, as per the rules of the game, the user is required to identify the five virtual objects from the ten virtual objects in the chronological order in which they were originally shown to the user, or in the descending order. When the user provides the response pertaining to the order of the five virtual objects from amongst the ten virtual objects via at least the user-interaction controller 104 or the wearable device 150 such as the HMD, the processor 106 is configured to compare the stored chronological order of the five virtual objects with the response of the user. Thereafter, the processor 106 may provide feedback to the user on the correctness of the response. The above illustrated embodiment is merely exemplary in nature and should nowhere be construed to limit the scope of the present invention.
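The order comparison of the gaming embodiment above may be sketched as follows; accepting either the original chronological order or the descending (reverse) order as correct is an assumption drawn from the example.

```python
def check_memory_response(stored_order, user_order) -> str:
    """Compare the stored chronological order of the scanned virtual objects
    with the order identified by the user."""
    if list(user_order) == list(stored_order):
        return "correct (chronological order)"
    if list(user_order) == list(reversed(stored_order)):
        return "correct (descending order)"
    return "incorrect"
```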
[0093] FIG. 8 shows a flowchart of a method for interacting with virtual objects. For the purpose of description, the method is described with reference to the embodiments illustrated in Fig. 1 to Fig. 7. The method comprises the steps indicated below:
[0094] At step 805, determining, by the processor, a group of virtual objects from a plurality of virtual objects based on matching a first filter criteria, the plurality of virtual objects displayed on a visual scene of a display unit.
[0095] At step 810, storing, by the processor, information pertaining to the group of virtual objects.
[0096] At step 815, detecting, by the processor, a position and an orientation of a user-interaction controller, based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller.
[0097] At step 820, detecting, by the processor, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects.
[0098] At step 825, determining, by the processor, whether the at least one virtual object of interest satisfies a second filter criteria.
[0099] At step 830, providing, by the processor, feedback to the user indicating the at least one virtual object of interest satisfies the second filter criteria in response to determining the at least one virtual object of interest satisfies the second filter criteria.
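Purely by way of illustration, steps 805 to 830 may be chained as in the sketch below, reusing the hypothetical helpers from the earlier sketches; the wiring shown is an assumption, not the only contemplated arrangement.

```python
def interact(records, tracker, first_filter_class, second_filter_labels, cache):
    """One pass of the method of Fig. 8, using the illustrative helpers above."""
    group, _to_mask = determine_group(records, first_filter_class)    # step 805
    cache.store(group)                                                # step 810
    pose = poll_controller_pose(tracker)                              # step 815
    obj, _shape = detect_object_of_interest(pose, group)              # step 820
    if obj is not None and satisfies_second_filter(obj, second_filter_labels):  # step 825
        provide_feedback(obj)                                         # step 830
```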
[00100] While aspects of the present invention have been particularly shown and described with reference to the embodiments above, it will be understood by those skilled in the art that various additional embodiments may be contemplated by the modification of the disclosed machines, systems and methods without departing from the scope of what is disclosed. Such embodiments should be understood to fall within the scope of the present invention as determined based upon the claims and any equivalents thereof.
Claims:
We Claim:
1. A system comprising:
a display unit configured to display a visual scene to a user;
a user-interaction controller communicably coupled to the display unit, configured to enable the user to interact with at least one virtual object of a plurality of virtual objects displayed on the visual scene of the display unit;
a processor, communicably coupled to the display unit and the user-interaction controller, the processor configured to:
determine, a group of virtual objects from the plurality of virtual objects based on matching a first filter criteria;
store, information pertaining to the group of virtual objects;
detect, a position and an orientation of the user-interaction controller, based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller;
detect, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects;
determine, whether the at least one virtual object of interest satisfies a second filter criteria; and
provide feedback to the user indicating the at least one virtual object of interest satisfies the second filter criteria in response to determining the at least one virtual object of interest satisfies the second filter criteria.
2. The system as claimed in claim 1, wherein in response to determining the at least one virtual object of interest does not satisfy the second filter criteria, the processor is configured to ignore the at least one virtual object of interest.
3. The system as claimed in claim 1, wherein the feedback provided to the user comprises at least one of: a visual feedback, an auditory feedback and/or a haptic feedback.
4. The system as claimed in claim 1, wherein the processor is configured to retrieve information about properties of the plurality of virtual objects displayed on the visual scene, wherein the group of virtual objects that match the first filter criteria are determined by matching the properties of the plurality of virtual objects with the first filter criteria.
5. The system as claimed in claim 4, wherein the properties of each virtual object of the group of virtual objects comprise a class and one or more sub-classes to which the virtual object belongs.
6. The system as claimed in claim 1, wherein the first filter criteria and the second filter criteria are provided by the user and/or are pre-defined.
7. The system as claimed in claim 1, wherein the group of virtual objects are considered to satisfy the first filter criteria if the group of virtual objects belong to a particular class.
8. The system as claimed in claim 1, wherein the at least one virtual object of interest is considered to satisfy the second filter criteria if the at least one virtual object of interest belongs to one or more sub-classes.
9. The system as claimed in claim 1, wherein the processor is configured to control a projection of a visual ray originating from the user-interaction controller towards the at least one virtual object of interest displayed on the visual scene of the display unit, based on the position and the orientation of the user-interaction controller.
10. The system as claimed in claim 9, wherein the visual ray has a pre-defined and/or dynamic geometric shape, the geometric shape being at least one of: a line, a cylinder, a cone, a Bezier curve or a combination thereof.
11. The system as claimed in claim 10, wherein the processor is configured to dynamically generate the geometric shape of the visual ray based on at least one of: a position and an orientation of the at least one virtual object of interest in the visual scene, a class/sub-class of the virtual object of interest in the visual scene, and a combination thereof.
12. A method comprising:
determining, by the processor, a group of virtual objects from a plurality of virtual objects based on matching a first filter criteria, the plurality of virtual objects displayed on a visual scene of a display unit;
storing, by the processor, information pertaining to the group of virtual objects;
detecting, by the processor, a position and an orientation of a user-interaction controller, based on tracking data received from a means for tracking the position and the orientation of the user-interaction controller;
detecting, by the processor, based on the position and the orientation of the user-interaction controller, at least one virtual object of interest from amongst the group of virtual objects in response to the user-interaction controller pointing at the at least one virtual object amongst the group of virtual objects;
determining, by the processor, whether the at least one virtual object of interest satisfies a second filter criteria; and
providing, by the processor, feedback to the user indicating the at least one virtual object of interest satisfies the second filter criteria in response to determining the at least one virtual object of interest satisfies the second filter criteria.