
Communication In Virtual Reality

Abstract: Disclosed is a method and system for communication in a virtual reality environment. The method comprises initiating multimedia on a device providing a virtual reality environment to a primary user and receiving a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user. The method further comprises comparing a first relationship and a second relationship and selecting a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship. The method furthermore comprises displaying the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.


Patent Information

Application #:
Filing Date: 07 April 2016
Publication Number: 18/2016
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: ip@legasis.in
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-03-23
Renewal Date:

Applicants

HCL Technologies Limited
B-39, Sector 1, Noida 201 301, Uttar Pradesh, India

Inventors

1. TAMMANA, Sankar Uma
HCL Technologies Limited, A- 8 & 9, Sec-60, Noida, UP-201301, India
2. DHALIWAL, Jasbir Singh
HCL Technologies Limited, A- 8 & 9, Sec-60, Noida, UP-201301, India

Specification

TECHNICAL FIELD
[001] The present subject matter described herein, in general, relates to a system and a method for communication, and more particularly to a system and a method for communication in a virtual reality environment.
BACKGROUND
[002] Technologies such as Virtual Reality (VR) are gaining considerable momentum in today’s world because of the immersive, real-life experience provided by these technologies. Such immersive experiences greatly enhance the user’s experience by reducing the difference between reality and virtual reality.
[003] However, the immersive feeling of a user using these immersive technologies, for example Virtual Reality, is interrupted by external-world events such as a phone call or a text message. Further, regaining the immersive feeling after attending to the phone call or text message is a challenge. Furthermore, the emotions of the user may get disturbed after attending to the phone call or text message, and thus, even after resuming the video or game, it may take time to regain the same immersive feeling or involvement in the game as before. Currently, there is no mechanism to carry the same immersive feeling through real-world events and to act on them based on the user's emotional/psychological state.
SUMMARY
[004] Before the present systems and methods for communication in a virtual reality environment, are described, it is to be understood that this application is not limited to the particular systems, and methodologies described, as there can be multiple possible embodiments which are not expressly illustrated in the present disclosures. It is also to be understood that the terminology used in the description is for the purpose of describing the particular implementations or versions or embodiments only, and is not intended to limit the scope of the present application. This summary is provided to introduce aspects related to a system and a method for communication in a virtual reality environment. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[005] In one implementation, a system for communication in a virtual reality environment is disclosed. In one aspect, the system may initiate multimedia on a device providing a virtual reality environment to a primary user. Upon initiating, the system may receive a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user. The communication may be one or more of a call, a message, and an e-mail. Further to receiving, the system may compare a first relationship and a second relationship. The first relationship exists between the primary user and the secondary user, and the second relationship exists between characters present in the multimedia. Subsequent to comparing, the system may select a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship. Upon selecting, the system may display the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
[006] In one implementation, a method for communication in a virtual reality environment is disclosed. In one aspect, the method may comprise initiating multimedia on a device providing a virtual reality environment to a primary user and receiving a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user. The communication is one or more of a call, a message, and an e-mail. The method may further comprise comparing a first relationship and a second relationship and selecting a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship. The first relationship exists between the primary user and the secondary user, and the second relationship exists between characters present in the multimedia. The method may furthermore comprise displaying the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user. In yet another implementation, a non-transitory computer readable medium embodying a program executable in a computing device for communication in a virtual reality environment is disclosed. In one aspect, the program may comprise a program code for initiating multimedia on a device providing a virtual reality environment to a primary user and receiving a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user. The communication is one or more of a call, a message, and an e-mail. The program may comprise a program code for comparing a first relationship and a second relationship. The first relationship exists between the primary user and the secondary user, and the second relationship exists between characters present in the multimedia. The program may comprise a program code for selecting a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship. The program may comprise a program code for displaying the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
BRIEF DESCRIPTION OF THE DRAWINGS
[007] The foregoing detailed description of embodiments is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present subject matter, an example of construction of the present subject matter is provided as figures; however, the invention is not limited to the specific method and system disclosed in the document and the figures.
[008] The present subject matter is described in detail with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features of the present subject matter.
[009] Figure 1 illustrates a network implementation of a system for communication in a virtual reality environment, in accordance with an embodiment of the present subject matter.
[010] Figure 2 illustrates the system and its subcomponents for communication in a virtual reality environment, in accordance with an embodiment of the present subject matter.
[011] Figure 3 illustrates a method for communication in a virtual reality environment, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[012] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods for communication in a virtual reality environment, similar or equivalent to those described herein, can be used in the practice or testing of embodiments of the present disclosure, the exemplary systems and methods for communication in a virtual reality environment are now described. The disclosed embodiments for communication in a virtual reality environment are merely examples of the disclosure, which may be embodied in various forms.
[013] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments for communication in a virtual reality environment. However, one of ordinary skill in the art will readily recognize that the present disclosure for communication in a virtual reality environment is not intended to be limited to the embodiments described, but is to be accorded the widest scope consistent with the principles and features described herein.
[014] In an implementation, a system and method for communication in a virtual reality environment is described. In the embodiment, biometric data of the primary user may be obtained. In one example, the biometric data may be obtained when the primary user wears a virtual reality headset. The biometric data may comprise fingerprint data, iris data, heartbeat data, and face recognition data. Further to obtaining the biometric data, the mobile phone associated with the user may be identified based on the biometric data, and a communication channel between the mobile phone and the virtual reality device may be created. In one example, the communication channel may be one of a Bluetooth communication channel, a Wi-Fi communication channel, a VoIP communication channel, and a Zigbee communication channel. Furthermore, the virtual reality device may initiate a command signal and trigger a forwarding function and a silent profile on the mobile device. In one example, the forwarding function comprises forwarding the communication to the device via the communication channel. Further, the original profile may be stored before triggering the silent profile.
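By way of illustration only, the pairing and profile-swap steps described above might be sketched as follows. The `Phone` and `VRDevice` classes, their attribute names, and the profile dictionaries are assumptions made for this sketch and do not form part of the disclosure:

```python
# Illustrative sketch (not from the specification): identify the primary
# user's phone from biometric data, open a channel, store the original
# profile, and switch the phone to a silent, call-forwarding profile.

class Phone:
    def __init__(self, owner_biometric_id):
        self.owner_biometric_id = owner_biometric_id
        # Assumed default profile; real phones expose richer settings.
        self.profile = {"sound": "full", "vibration": "on", "forwarding": False}

class VRDevice:
    def __init__(self):
        self.channel = None        # stands in for a Bluetooth/Wi-Fi/Zigbee link
        self.saved_profile = None  # original profile, restored on teardown

    def pair(self, biometric_id, phones):
        """Identify the user's phone from biometric data and open a channel."""
        for phone in phones:
            if phone.owner_biometric_id == biometric_id:
                self.channel = phone
                # Store the original profile before triggering the silent one.
                self.saved_profile = dict(phone.profile)
                phone.profile = {"sound": "silent", "vibration": "off",
                                 "forwarding": True}
                return True
        return False
```

The stored `saved_profile` is what a later teardown step would restore when the headset is removed.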
[015] Further, in an embodiment, multimedia may be initiated on the virtual reality device providing a virtual reality environment to a primary user. Upon initiating the multimedia, a communication initiated by a secondary user may be received via a communication channel between the device and a mobile phone associated with the primary user. The communication may be one or more of a call, a message, and an e-mail. Further to receiving the communication, a first relationship and a second relationship may be compared. The first relationship exists between the primary user and the secondary user, and the second relationship exists between characters present in the multimedia. Subsequent to comparing the relationships, a character from the characters present in the multimedia may be selected based on the comparison of the first relationship and the second relationship.
[016] Subsequent to selection, a first emotion of the primary user and a second emotion of the secondary user may be generated. In one example, the first emotion may be generated based on one or more sensors located on the body of the primary user or on facial recognition using a camera in the VR headset, and the second emotion may be generated based on a speech processing methodology applied to the received communication.
[017] Further to generating the emotions, a final emotion may be associated with the selected character based on one of the first emotion, the second emotion, and a genre of the multimedia. Upon associating, the system may display the character with the emotion as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
[018] In one embodiment, a command may be sent to the mobile device over the established communication channel to trigger the original profile previously stored, or a preconfigured profile such as full volume and vibration, when the user removes the VR headset, stops playing, or pauses the multimedia. Upon triggering the original or the preconfigured profile on the mobile device of the primary user, the system may break the communication channel between the VR headset and the mobile if the VR headset is removed.
[019] Referring now to Figure 1, a network implementation 100 of a system 102 for communication in a virtual reality environment, in accordance with an embodiment of the present subject matter, may be described. In one embodiment, the present subject matter is explained considering that the system 102 may be installed within a device 104 connected to a network 106. It may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, a cloud-based computing environment, a mobile 108, and the like.
[020] In an embodiment, the system 102 may be implemented in the device 104. In one example, the device 104 may be a virtual reality headset such as PlayStation VR™, Oculus Rift™, and HTC Vive™. It may also be understood that the system 102 implemented on the device 104 supports a plurality of browsers and all viewports. Examples of the plurality of browsers may include, but are not limited to, Chrome™, Mozilla™, Internet Explorer™, Safari™, and Opera™. It will also be understood that the system 102 may be accessed by multiple users through one or more controllers. Further, the system may be communicatively coupled to sensors 110 located on a user. Furthermore, the system 102 may be communicatively coupled to a database. In one example, the device 104 may obtain the multimedia to be viewed from the database via the network 106. In one example, the database may be a relational database or the like.
[021] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), Wireless Personal Area Network (WPAN), Wireless Local Area Network (WLAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, MQ Telemetry Transport (MQTT), Extensible Messaging and Presence Protocol (XMPP), Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[022] In one embodiment, the system 100 for communication in the virtual environment may comprise one or more sensors 110 located on a body of the user. In one example, the sensors may be heart rate sensors, body temperature sensors, acceleration sensors, proximity sensors, tilt sensors, gyroscopes, and perspiration sensors. Further, the network system 100 may comprise a wearable virtual reality device for displaying multimedia in a virtual reality environment to the user. In one example, the multimedia may be a video or a game. Furthermore, the network system 100 may comprise a system 102 communicatively coupled with the one or more sensors 110 and the device 104, for example a wearable virtual reality device.
[023] Referring now to Figure 2, the system 102 is illustrated in accordance with an embodiment of the present subject matter. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 may be configured to fetch and execute computer-readable instructions stored in the memory 206.
[024] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with the user directly or through the client devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[025] The memory 206 may include any computer-readable medium or computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210.
[026] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. In one implementation, the modules 208 may include a receiving module 212, a comparing module 214, a selecting module 216, and other modules 218. The other modules 218 may include programs or coded instructions that supplement applications and functions of the system 102. The modules 208 described herein may be implemented as software modules that may be executed in the cloud-based computing environment of the system 102.
[027] The memory 206, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 208. In one implementation, the memory 206 may include data 210. Further, the data 210 may include system data 220 for storing data processed, computed, received, and generated by one or more of the modules 208. Furthermore, the data 210 may include other data 224 for storing data generated as a result of the execution of one or more modules in the other modules 218.
[028] In one implementation, at first, a user may use the controller or mobile 108 to access the system 102 via the I/O interface 204. The user may register using the I/O interface 204 in order to use the system 102. In one aspect, the user may access the I/O interface 204 of the system 102 for obtaining information or providing input information. In one implementation, the system 102 may automatically provide information to the user through the I/O interface 204 of the device 104.
RECEIVING MODULE 212
[029] Referring to Figure 2, in an embodiment, the receiving module 212 may initiate multimedia on the device 104 providing a virtual reality environment to a user. In one other embodiment, the receiving module 212 may obtain the multimedia from a database and provide the multimedia to the device 104 for initiating the multimedia on the device 104. In one example, the device 104 may be a wearable virtual reality headset such as a PlayStation VR™ headset, an Oculus Rift™ headset, or an HTC Vive™ headset. In another example, the multimedia may be an image, a video, a movie, a game, an e-learning module, or a live broadcast.
[030] In the embodiment, upon initiating the multimedia, the receiving module 212 may obtain biometric data of the primary user. The biometric data may comprise fingerprint data, iris data, heartbeat data, and face recognition data. Further, the receiving module 212 may store the biometric data in the system data 220. Further to obtaining the biometric data, the receiving module 212 may identify the mobile phone associated with the user based on the biometric data and create a communication channel between the mobile 108 and the device 104. In one example, the communication channel is one of a Bluetooth communication channel, a Wi-Fi communication channel, a VoIP communication channel, and a Zigbee communication channel. Furthermore, the receiving module 212 may trigger a forwarding function and a silent profile on the mobile device. In one example, the forwarding function comprises forwarding the communication to the device via the communication channel. In one example, prior to triggering the forwarding function and the silent profile, the receiving module 212 may store the current profile as the original vibration and sound profile.
[031] In the embodiment, subsequent to triggering, the receiving module 212 may receive a communication initiated by a secondary user via the communication channel created between the device and a mobile phone associated with the primary user. Further, the communication may be a call, a message, or an e-mail. In one example, on receiving the communication, the receiving module 212 may alert the primary user. In the example, the alert may be a message on the display of the device 104 or a blinking light. Further, the primary user may ignore the communication or take action by pressing a configured button on the device 104 or by issuing a voice command. Based on the action, the receiving module 212 may suspend the multimedia which is currently being played.
COMPARING MODULE 214
[032] Further in the embodiment, upon receiving a communication from the mobile 108, the comparing module 214 may identify the characters in the multimedia and the second relationships between the characters in the multimedia. A second relationship may be understood as a correlation between the characters in the multimedia and the role of each character in the multimedia, for example: father, mother, child, son, daughter, daughter-in-law, hero, anti-hero, villain, police, politician, banker, doctor, friend, girlfriend, wife, grandmother, etc.
[033] In one example, the comparing module 214 may continuously monitor the multimedia upon initiation of the multimedia, identify the characters in the multimedia, such as Robin, Jack, and Jill, and identify the second relationships between the characters, such as: Robin is the son of Jack and Jill, Jack is the father and Jill is the mother of Robin, and Jack and Jill are husband and wife. In one other example, the comparing module 214 may identify the characters and the relations before the initiation of the multimedia and before receiving a call, and may store them in the system data 220. In one example, the identification may be done at the start of the multimedia, while the multimedia is in play, or when a new character is introduced. The identification may be performed based on voice and multimedia processing; in one example, the identification may be performed based on script processing and analysis of the dialogues played in the multimedia.
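By way of illustration only, the identified characters and their second relationships might be stored as a simple mapping; the character names and the data layout below are assumptions for the sketch, not part of the specification:

```python
# Illustrative store of second relationships identified from the multimedia:
# a (character, other-character) pair maps to the role the first plays
# relative to the second.

second_relationships = {
    ("Jack", "Robin"): "father",   # Jack is Robin's father
    ("Jill", "Robin"): "mother",   # Jill is Robin's mother
    ("Jack", "Jill"): "husband",
    ("Jill", "Jack"): "wife",
}

def relation_of(character, other):
    """Return the role `character` plays relative to `other`, if identified."""
    return second_relationships.get((character, other))
```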
[034] In one example, the comparing module 214 may monitor the multimedia being played on the device 104 and identify the different characters, their relations, and the emotions present in the active multimedia. Further, the comparing module 214 may obtain a phonebook comprising contact details and details of the received communication, and identify a first relationship between the primary user and the secondary user based on the phonebook and a stored configuration. In one example, the first relationship identified may be father, mother, brother, sister, or friend.
[035] Upon identifying, the comparing module 214 may compare the first relationship and the second relationship. The first relationship exists between the primary user and the secondary user, and the second relationship exists between characters present in the multimedia. In the implementation, the comparing module 214 may store the characters, the first relationship data, the comparison data, and the second relationship data in the system data 220.
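By way of illustration only, the comparison of the first relationship (derived from the phonebook) with the second relationships (derived from the multimedia) might be sketched as follows; the function name, the `roles` data, and the example values are assumptions:

```python
# Illustrative comparison step: find multimedia characters whose role
# matches the first relationship between the secondary and primary user.

def matching_characters(first_relationship, second_relationships):
    """Return characters whose in-multimedia role equals the caller's role."""
    return [character
            for (character, _other), role in second_relationships.items()
            if role == first_relationship]

# Assumed second relationships identified in the multimedia:
roles = {("Jack", "Robin"): "father", ("Jill", "Robin"): "mother"}
```

For instance, if the secondary user is the primary user's father, `matching_characters("father", roles)` would yield `["Jack"]` under these assumed data.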
SELECTING MODULE 216
[036] Further in the embodiment, upon comparing, the selecting module 216 may select a character based on the comparison of the first relationship and the second relationship. In one example, when multiple characters have similar comparison results, a final character may be selected from among them based on a round-robin, random, least-recently-used, or most-recently-used strategy. In an example, Jack, playing the role of father to Robin in the multimedia, may be selected to represent the secondary user when the relation between the primary user and the secondary user is son and father, respectively.
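By way of illustration only, the tie-breaking policies mentioned above might be sketched with a round-robin selector; the candidate names are assumptions, and random or least/most-recently-used policies could be substituted in the same place:

```python
# Illustrative round-robin tie-breaker: when several characters match the
# first relationship equally well, successive communications cycle through
# the equally matched candidates.

import itertools

def round_robin_selector(candidates):
    """Return a callable that yields candidates in rotating order."""
    cycle = itertools.cycle(candidates)
    return lambda: next(cycle)

pick = round_robin_selector(["Jack", "Uncle Ben"])
first, second, third = pick(), pick(), pick()   # alternates between the two
```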
[037] Subsequent to selection, the selecting module 216 may generate a first emotion of the primary user and a second emotion of the secondary user. In one example, the first emotion may be generated based on one or more sensors located on the body of the primary user, and the second emotion may be generated based on a speech processing methodology applied to the received communication.
[038] Further to generating the emotions, the selecting module 216 may associate a final emotion with the character based on one of the first emotion, the second emotion, and a genre of the multimedia. The genre of the multimedia may be an action genre, a comedy genre, a horror genre, or a fantasy genre. In one example, the selecting module 216 may associate the final emotion based on the primary user's first emotional condition. For example, if the primary user is sad, it may associate the same emotion in one example, or an opposite emotion, such as a comedy/funny emotion, in another example, with the character selected to represent the secondary user. Further to associating, the selecting module 216 may display the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
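By way of illustration only, one possible rule for associating the final emotion is sketched below. The specific mapping (opposing a sad first emotion in a comedy, otherwise mirroring the second emotion) is an assumption, since the specification leaves the exact rule open:

```python
# Illustrative (assumed) rule for choosing the final emotion to display on
# the selected character from the first emotion, second emotion, and genre.

def final_emotion(first_emotion, second_emotion, genre):
    """Pick the emotion the selected character will display."""
    if genre == "comedy" and first_emotion == "sad":
        # One option from the text: oppose the primary user's sad mood.
        return "funny"
    # Otherwise mirror the secondary user's detected emotion.
    return second_emotion
```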
[039] In one example, further to displaying the character to the primary user in the virtual reality environment, the character may introduce itself as the secondary user and may deliver the communication, for example by reading out a received SMS or playing the call. Further, the background of the character while communicating may be the same as that of the current multimedia. After completing the communication, the device 104 may resume the multimedia. This way, the primary user may not get disturbed while playing a game or watching a video with the VR headset.
[040] In one embodiment, the selecting module 216 may send a command to the mobile device to trigger the original sound and vibration profile previously stored by the receiving module 212, or a preconfigured sound and vibration profile such as full volume and vibration, when the user removes the VR headset, stops playing the multimedia, or pauses the multimedia. Upon triggering the original or the preconfigured sound and vibration profile on the mobile device of the primary user, the system may break the communication channel between the VR headset and the mobile if the VR headset is removed from its ideal or defined position.
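By way of illustration only, this teardown step might be sketched as follows; the object attributes (`channel`, `saved_profile`, `profile`) mirror the earlier pairing sketch and are assumptions:

```python
# Illustrative teardown: restore the stored sound-and-vibration profile
# (or a preconfigured one) and break the channel when the headset is removed.

from types import SimpleNamespace

def on_headset_removed(vr_device):
    """Restore the phone's original profile and drop the communication channel."""
    phone = vr_device.channel
    if phone is not None and vr_device.saved_profile is not None:
        phone.profile = dict(vr_device.saved_profile)  # or a preconfigured profile
    vr_device.channel = None                           # break the channel

phone = SimpleNamespace(profile={"sound": "silent", "vibration": "off"})
vr = SimpleNamespace(channel=phone,
                     saved_profile={"sound": "full", "vibration": "on"})
on_headset_removed(vr)
```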
[041] Exemplary embodiments for communication in a virtual reality environment discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features.
[042] Some embodiments of the system and the method enable an ideal immersive experience in the virtual reality environment.
[043] Some embodiments of the system and the method prevent diversion of the user's mood while the user is playing or viewing multimedia in the virtual reality environment.
[044] Some embodiments of the system and the method enable receiving calls while interacting with multimedia.
[045] Some embodiments of the system and the method enable identification of emotional equilibrium in real time.
[046] Referring now to Figure 3, a method 300 for communication in a virtual reality environment is shown, in accordance with an embodiment of the present subject matter. The method 300 may be described in the general context of computer executable instructions.
Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types.
[047] The order in which the method 300 for communication in a virtual reality environment is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 may be considered to be implemented in the above-described system 102.
[048] At block 302, multimedia may be initiated on a device providing a virtual reality environment to a user. In an implementation, the receiving module 212 may initiate the multimedia and store entry-point data in the system data 220.
[049] At block 304, a communication initiated by a secondary user may be received via a communication channel between the device and a mobile phone associated with the primary user. The communication may be one or more of a call, a message, and an e-mail. In an implementation, the receiving module 212 may receive a communication initiated by a secondary user and may store the communication in the system data 220.
[050] At block 306, a first relationship and a second relationship may be compared. The first relationship may exist between the primary user and the secondary user, and the second relationship may exist between characters present in the multimedia. In the implementation, the comparing module 214 may compare a first relationship and a second relationship and store the comparison data, the first relationship, and the second relationship in the system data 220.
[051] At block 308, a character from the characters present in the multimedia is selected based on the comparison of the first relationship and the second relationship. In the implementation, the selecting module 216 may select a character from the characters present in the multimedia and may store the selected character in the system data 220.
[052] At block 310, the character is displayed as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user. In the implementation, the selecting module 216 may display the selected character and may store it in the system data 220.
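By way of illustration only, blocks 306 to 310 might be condensed into the following sketch; every data structure and name here is an assumption used solely to show the flow of the method 300:

```python
# Illustrative walk-through of the method's core blocks. Block 302 initiates
# the multimedia and block 304 receives the communication; the function below
# covers blocks 306-310.

def handle_communication(first_relationship, second_relationships):
    """Compare relationships (block 306), then select and return the
    character to display (blocks 308-310)."""
    candidates = [c for (c, _other), role in second_relationships.items()
                  if role == first_relationship]          # block 306
    return candidates[0] if candidates else None          # blocks 308-310

# Assumed scenario: the multimedia contains Jack (father of Robin) and Jill
# (mother of Robin), and the incoming call is from the primary user's father.
displayed = handle_communication(
    "father", {("Jack", "Robin"): "father", ("Jill", "Robin"): "mother"})
```

Under these assumed data, `displayed` would be the character Jack, shown in the virtual reality environment as the representation of the secondary user.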
[053] Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include a method and system for communication in a virtual reality environment.
[054] Although implementations for methods and systems for communication in a virtual reality environment have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for communication in a virtual reality environment.

WE CLAIM:
1. A method for communication in a virtual reality environment, the method comprising:
initiating, by a processor, multimedia on a device providing a virtual reality environment to a primary user;
receiving, by the processor, a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user, and wherein the communication is one or more of a call, a message, and an e-mail;
comparing, by the processor, a first relationship and a second relationship, wherein the first relationship exists between the primary user and the secondary user, and wherein the second relationship exists between characters present in the multimedia;
selecting, by the processor, a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship;
displaying, by the processor, the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
2. The method of claim 1, further comprising:
generating, by the processor, a first emotion of the primary user based on one or more sensors located on the body of the primary user;
generating, by the processor, a second emotion of the secondary user based on a speech processing methodology; and
associating, by the processor, a final emotion to the character based on one of the first emotion, the second emotion, and a genre of the multimedia, wherein the genre of the multimedia is one of an action genre, a comedy genre, a horror genre, and a fantasy genre.
3. The method of claim 1, further comprising:
obtaining, by the processor, biometric data of the primary user, wherein the biometric data comprises one of fingerprint data, iris data, heartbeat data, and face recognition data;
identifying, by the processor, the mobile phone associated with the primary user based on the biometric data;
creating, by the processor, a communication channel between the mobile phone and the device, wherein the communication channel is one of a Bluetooth communication channel, a Wi-Fi communication channel, and a VoIP communication channel; and
triggering, by the processor, a forwarding function and a silent profile on the mobile phone, wherein the forwarding function comprises forwarding the communication to the device via the communication channel.
4. The method of claim 1, further comprising:
monitoring, by the processor, the multimedia; and
identifying, by the processor, the characters in the multimedia and the second relationships between the characters in the multimedia.
5. A system for communication in a virtual reality environment, the system comprising:
a memory; and
a processor coupled to the memory, wherein the processor is capable of executing instructions to perform steps of:
initiating multimedia on a device providing a virtual reality environment to a primary user;
receiving a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user and wherein the communication is one or more of a call, a message, and an e-mail;
comparing a first relationship and a second relationship, wherein the first relationship exists between the primary user and the secondary user, and wherein the second relationship exists between characters present in the multimedia;
selecting a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship;
displaying the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
6. The system of claim 5, further comprising:
generating a first emotion of the primary user based on one or more sensors located on the body of the primary user;
generating a second emotion of the secondary user based on a speech processing methodology; and
associating a final emotion to the character based on one of the first emotion, the second emotion, and a genre of the multimedia, wherein the genre of the multimedia is one of an action genre, a comedy genre, a horror genre, and a fantasy genre.
7. The system of claim 5, further comprising:
obtaining biometric data of the primary user, wherein the biometric data comprises one of fingerprint data, iris data, heartbeat data, and face recognition data;
identifying the mobile phone associated with the primary user based on the biometric data;
creating the communication channel between the mobile phone and the device, wherein the communication channel is one of a Bluetooth communication channel, a Wi-Fi communication channel, and a VoIP communication channel; and
triggering a forwarding function and a silent profile on the mobile phone, wherein the forwarding function comprises forwarding the communication to the device via the communication channel.
8. The system of claim 5, further comprising:
monitoring the multimedia; and
identifying the characters in the multimedia and the second relationships between the characters in the multimedia.
9. A non-transitory computer program product having embodied thereon a computer program for communication in a virtual reality environment, the computer program product storing instructions, the instructions comprising instructions for:
initiating multimedia on a device providing a virtual reality environment to a primary user;
obtaining biometric data of the primary user, wherein the biometric data comprises one of fingerprint data, iris data, heartbeat data, and face recognition data;
identifying the mobile phone associated with the primary user based on the biometric data;
creating a communication channel between the mobile phone and the device, wherein the communication channel is one of a Bluetooth communication channel, a Wi-Fi communication channel, and a VoIP communication channel;
triggering a forwarding function and a silent profile on the mobile phone, wherein the forwarding function comprises forwarding the communication to the device via the communication channel;
receiving a communication initiated by a secondary user via a communication channel between the device and a mobile phone associated with the primary user and wherein the communication is one or more of a call, a message, and an e-mail;
comparing a first relationship and a second relationship, wherein the first relationship exists between the primary user and the secondary user, and wherein the second relationship exists between characters present in the multimedia;
selecting a character from the characters present in the multimedia based on the comparison of the first relationship and the second relationship;
displaying the character as a representation of the secondary user in the virtual reality environment for communication between the primary user and the secondary user.
10. The non-transitory computer program product of claim 9, further comprising:
generating a first emotion of the primary user based on one or more sensors located on the body of the primary user;
generating a second emotion of the secondary user based on a speech processing methodology; and
associating a final emotion to the character based on one of the first emotion, the second emotion, and a genre of the multimedia, wherein the genre of the multimedia is one of an action genre, a comedy genre, a horror genre, and a fantasy genre.

Documents

Application Documents

# Name Date
1 Form 9 [07-04-2016(online)].pdf 2016-04-07
2 Form 3 [07-04-2016(online)].pdf 2016-04-07
4 Form 18 [07-04-2016(online)].pdf 2016-04-07
5 Drawing [07-04-2016(online)].pdf 2016-04-07
6 Description(Complete) [07-04-2016(online)].pdf 2016-04-07
7 Form 26 [06-07-2016(online)].pdf 2016-07-06
8 201611012320-GPA-(11-07-2016).pdf 2016-07-11
9 201611012320-Form-1-(11-07-2016).pdf 2016-07-11
10 201611012320-Correspondence Others-(11-07-2016).pdf 2016-07-11
11 abstract.jpg 2016-07-18
12 201611012320-FER.pdf 2020-01-30
13 201611012320-OTHERS [13-07-2020(online)].pdf 2020-07-13
14 201611012320-FER_SER_REPLY [13-07-2020(online)].pdf 2020-07-13
15 201611012320-COMPLETE SPECIFICATION [13-07-2020(online)].pdf 2020-07-13
16 201611012320-CLAIMS [13-07-2020(online)].pdf 2020-07-13
17 201611012320-POA [09-07-2021(online)].pdf 2021-07-09
18 201611012320-FORM 13 [09-07-2021(online)].pdf 2021-07-09
19 201611012320-Proof of Right [22-10-2021(online)].pdf 2021-10-22
20 201611012320-US(14)-HearingNotice-(HearingDate-20-02-2024).pdf 2024-02-07
21 201611012320-Correspondence to notify the Controller [11-02-2024(online)].pdf 2024-02-11
22 201611012320-FORM-26 [15-02-2024(online)].pdf 2024-02-15
23 201611012320-Written submissions and relevant documents [06-03-2024(online)].pdf 2024-03-06
24 201611012320-RELEVANT DOCUMENTS [06-03-2024(online)].pdf 2024-03-06
25 201611012320-PETITION UNDER RULE 137 [06-03-2024(online)].pdf 2024-03-06
26 201611012320-PatentCertificate23-03-2024.pdf 2024-03-23
27 201611012320-IntimationOfGrant23-03-2024.pdf 2024-03-23
28 201611012320-FORM 4 [18-07-2024(online)].pdf 2024-07-18

Search Strategy

1 2020-01-2416-40-11_24-01-2020.pdf

ERegister / Renewals

3rd: 18 Jun 2024 (07/04/2018 to 07/04/2019)
4th: 18 Jun 2024 (07/04/2019 to 07/04/2020)
5th: 18 Jun 2024 (07/04/2020 to 07/04/2021)
6th: 18 Jun 2024 (07/04/2021 to 07/04/2022)
7th: 18 Jun 2024 (07/04/2022 to 07/04/2023)
8th: 18 Jun 2024 (07/04/2023 to 07/04/2024)
9th: 18 Jul 2024 (07/04/2024 to 07/04/2025)
10th: 27 Mar 2025 (07/04/2025 to 07/04/2026)