
“A Method And A System For Dynamically Modifying A Virtual World”

Abstract: The present disclosure describes a method (200) and a system (400) to dynamically enhance user experience. The method comprises detecting (402) one or more user activities and creating (404) a mood board (214) by identifying one or more related mood(s) of the user in relation to the detected one or more user activities at given moments. The method further comprises identifying (406) one or more user interest elements from the mood board (214) and dynamically enhancing (408) a virtual world using the identified one or more user interest elements.


Patent Information

Application #:
Filing Date: 07 August 2020
Publication Number: 06/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: ipo@knspartners.com
Parent Application:

Applicants

HIKE PRIVATE LIMITED
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Inventors

1. Anshuman Misra
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
2. Ankur Narang
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
3. Dipankar Sarkar
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
4. Kavin Bharti Mittal
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Specification

[0001] The present invention generally relates to systems and methods for dynamically
enhancing user experience. Particularly, the present invention relates to techniques of
dynamically modifying a virtual world based on moods or emotions of the user.
BACKGROUND
[0002] Some of the technology advancements that have taken place recently are in the fields
of Virtual Reality (VR) and Augmented Reality (AR). Both VR and AR have taken an
important place in the field of technology in a very short time span. These technologies
make use of an artificial environment that simulates a real environment. This artificial
environment is also referred to as a ‘virtual world’ or ‘virtual environment.’ A virtual
environment is an interactive, computer-generated 3D environment experienced by a user
using a display and speakers via which the visual and auditory elements are presented to the user.
The user may interact with the virtual environment via a variety of input devices, such as
head mounted displays, gloves, touchscreens, microphones, keyboards etc.
[0003] The virtual environments may be categorized into static and dynamic virtual
environments. Today most of the virtual worlds are static in nature and they react in an
expected way to the user. In a static virtual world, the environment is pre-created. Though
one can use an avatar to move about in a static virtual world and interact with other objects,
a static virtual world can become predictable and the user may lose interest. On the other
hand, a dynamic virtual world is subject to continual development and changes and thus,
can never be completely known to a user. In known dynamic virtual worlds, the components
change with respect to user inputs. Moods of a user play an important role in retaining
interest of user in the virtual world. However, in known literature there are no techniques
that can dynamically change or update a virtual world based on moods of a user.
[0004] Thus, there exists a need for a technology that facilitates updating the virtual
world based on the moods of the users so as to retain the interest of the user in the virtual world.
SUMMARY OF THE INVENTION
[0005] One or more shortcomings discussed above are overcome, and additional advantages
are provided by the present disclosure. Additional features and advantages are realized
through the techniques of the present disclosure. Other embodiments and aspects of the
disclosure are described in detail herein and are considered a part of the disclosure.
[0006] According to an aspect of the present disclosure, methods and systems are provided
for enhancing the user experience.
[0007] In another aspect, the present disclosure relates to a method and a system for
dynamically updating a virtual world of a user based on mood(s) of the user.
[0008] In a non-limiting embodiment of the present disclosure, the present application
discloses a method for dynamically enhancing the user experience, the method comprises
detecting one or more user activities and creating a mood board by identifying one or more
related mood(s) of the user in relation to the detected one or more user activities at given
moments. The method further comprises identifying one or more user interest elements from
the mood board and dynamically enhancing a virtual world using the identified one or more
user interest elements.
[0009] In another non-limiting embodiment of the present disclosure, the present
application discloses a system for dynamically enhancing the user experience, the system
comprises a detecting unit configured to detect one or more user activities and a creating
unit configured to create a mood board by identifying one or more related mood(s) of the
user in relation to the detected one or more user activities at given moments. The system
further comprises an identifying unit configured to identify one or more user interest
elements from the mood board and a processing unit configured to dynamically enhance a
virtual world using the identified one or more user interest elements.
[0010] The foregoing summary is illustrative only and is not intended to be in any way
limiting. In addition to the illustrative aspects, embodiments, and features described above,
further aspects, embodiments, and features will become apparent by reference to the
drawings and the following detailed description.
OBJECTS OF THE INVENTION
[0011] The main object of the present invention is to enhance the user experience.
[0012] Another main object of the present invention is to facilitate dynamic updating of a
virtual world of a user based on mood(s) of the user.
[0013] Another main object of the present invention is to retain the interest of the user in
the virtual world by personalizing the virtual world based on mood(s) of the user.
BRIEF DESCRIPTION OF DRAWINGS
[0014] Further aspects and advantages of the present disclosure will be readily understood
from the following detailed description with reference to the accompanying drawings.
Reference numerals have been used to refer to identical or functionally similar elements.
The figures together with a detailed description below, are incorporated in and form part of
the specification, and serve to further illustrate the embodiments and explain various
principles and advantages, in accordance with the present disclosure wherein:
[0015] Fig. 1 illustrates an exemplary environment for updating a virtual world based on
mood(s) of a user, in accordance with some embodiments of the present disclosure.
[0016] Fig. 2 is a block diagram illustrating a system for updating a virtual world
based on mood(s) of a user, in accordance with some embodiments of the present disclosure.
[0017] Fig. 3(a) and 3(b) illustrate exemplary views of virtual worlds.
[0018] Fig. 4 depicts a flowchart of a method for updating a virtual world of a user based on
mood(s) of the user, in accordance with some embodiments of the present disclosure.
[0019] Fig. 5 illustrates a block diagram of an exemplary computer system for
implementing embodiments consistent with the present disclosure.
[0020] It should be appreciated by those skilled in the art that any block diagrams herein
represent conceptual views of illustrative systems embodying the principles of the present
subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state
transition diagrams, pseudo code, and the like represent various processes which may be
substantially represented in computer readable medium and executed by a computer or
processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF THE INVENTION
[0021] Referring now to the drawings, there is shown an illustrative embodiment of the
disclosure “a method and system for dynamically modifying a virtual world”. It is
understood that the disclosure is susceptible to various modifications and alternative forms;
specific embodiments thereof have been shown by way of example in the drawings and will
be described in detail below. It will be appreciated as the description proceeds that the
disclosure may be realized in different embodiments.
[0022] The terms “comprises”, “comprising”, or any other variations thereof, are intended
to cover non-exclusive inclusions, such that a setup or device that comprises a list of
components does not include only those components but may include other components
not expressly listed or inherent to such setup or device. In other words, one or more elements
in a system or apparatus preceded by “comprises… a” does not, without more constraints,
preclude the existence of other elements or additional elements in the system, apparatus,
or device. It should be noted with respect to the present disclosure that terms like “a
system for dynamically modifying a virtual world” and “the system” refer to the same system
described in the present disclosure.
[0023] In the present document, the word “exemplary” is used herein to mean “serving as
an example, instance, or illustration.” Any embodiment or implementation of the present
subject matter described herein as “exemplary” is not necessarily to be construed as
preferred or advantageous over other embodiments.
[0024] While the disclosure is susceptible to various modifications and alternative forms,
specific embodiments thereof have been shown by way of example in the drawings and will
be described in detail below. It should be understood, however, that it is not intended to limit
the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover
all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0025] The terms like “virtual world”, and “virtual environment” may be used
interchangeably or in combination throughout the description. Further, the terms like
“moods”, “sentiments”, and “emotions” may be used interchangeably or in combination
throughout the description.
[0026] According to an aspect, the present disclosure provides a technique of enhancing
user experience. Particularly, the present disclosure provides an improved technique for
updating a user's own constructed virtual world based on mood(s) of the user, according
to an aspect of the present disclosure.
[0027] The virtual worlds may be divided into several categories based on their properties
in relation to the user. For example, the virtual worlds may be categorized as persistent and
non-persistent virtual worlds. A persistent virtual world is a virtual world that continuously
exists even when the users are not connected to the virtual world or even when the user exits
the virtual world. On the other hand, a non-persistent virtual world is created and destroyed
each time a user enters and exits the virtual world.
[0028] In another example, the worlds are categorized as static and dynamic virtual worlds,
as described earlier. Today most virtual worlds are static in nature; they react in an
expected way to the user and can become predictable. Thus, a user can easily predict the
elements (i.e. components/places/objects/environment etc.) in the static virtual world and
may get bored soon. Consider an example of a mobile game as a static virtual world. Every
time a user plays the game, he sees the same elements in the game. So, after playing the game
continuously for a few days/hours the user may get bored. To retain the interest of the user,
the virtual worlds should be dynamic in nature.
[0029] A dynamic virtual world is subject to continuous development and changes and thus,
can never be completely known to a user. The dynamic virtual world has elements that may
change based on the user inputs. A dynamic virtual world requires user input for updating
or modifying the virtual world. Each time a user wants to modify the virtual world, he/she
has to manually supply the necessary information to the server for updating the virtual
world, which requires extra effort and time from the user. Some systems update virtual worlds
based on a user's web cache, registry entries, age, gender, geographic location, purchase
history etc. However, such systems do not consider the mood(s) of the user with respect to
other elements for modifying the virtual worlds. Thus, in known literature there are no
effective techniques that can dynamically change or update a virtual world based on the mood
of a user to retain the interest of the user in the virtual world.
[0030] The present disclosure addresses above-mentioned problems and provides an
improved method for updating the virtual world of a user based on mood(s) of the user. To
achieve this the present disclosure describes a system that detects one or more activities of
the user and then creates a mood board by identifying one or more moods of the user in
relation to the one or more activities. The mood board further comprises information
pertaining to one or more user interest elements associated with the identified one or more
moods. The system identifies one or more user interest elements from the mood board and
then updates/enhances a virtual world based on the identified one or more user interest
elements. This way the system enhances user interest in the virtual world and thus, provides
an improved experience of the virtual world to the user.
[0031] Figure 1 illustrates an exemplary communication environment 100 for enhancing
user experience by dynamically updating a virtual world of a user based on mood(s) of the
user relative to other users/persons or elements. The environment 100 comprises a plurality
of user devices 102-1, 102-2, …102-n and a server 104 communicably coupled to each other
via a communication network 106 and a database 108. The server comprises one or more
processing units (not shown) and one or more memory units (not shown) communicably
coupled to each other to implement its one or more functionalities. Further, each of the
plurality of user devices 102 comprises one or more transceivers (not shown), one or more
processing units (not shown), and one or more memory units (not shown) communicably
coupled to each other to implement one or more functionalities of the user devices 102. A
single user may be associated with one or more user devices 102. Alternatively, a single
device 102 may be associated with more than one user. Examples of user devices 102 may
include, but not limited to, a personal computer, a mobile phone, a laptop, a tablet, virtual
reality devices and so forth. Further, each of the one or more user devices 102 and the server
104 may include any number of other components as required for their operation. However,
description of such components has been avoided for the sake of brevity.
[0032] The server 104 may be referred to as a virtual reality (VR) server. For the sake of
illustration, a single server 104 is shown in figure 1. However, there can be a plurality of
servers, each for a different purpose. The database 108 stores data for each user obtained
from various data sources. The database 108 may store data of a user related to, but not
limited to, social media interactions with other users or elements, data from mobile
applications, chat and call logs of the user, voice or text or video communications with other
users, profile information changes, status information changes, interactions with one or
more real-world or virtual world entities.
[0033] The social media interactions may comprise interactions such as, but not limited to,
likes, comments, reactions on various posts on social media. The posts may include photos,
videos, texts etc. The data from various mobile applications may include, but not limited to,
interaction data and usage pattern of the mobile applications, location data, content that is
being played/downloaded/uploaded/browsed on the mobile applications etc.
[0034] The one or more user devices 102 in conjunction with the server 104 may enable
respective users to generate their personal virtual worlds. In some embodiments, the one or
more user devices 102 may be coupled to one or more external devices such as, but not
limited to, cameras, speakers, microphones and so forth. The external devices may aid a
user in generation of a personalized virtual world for the user. In some embodiments, the
virtual world for the user may be generated by monitoring data related to user activities
(interactions, communications etc.) stored in the database 108. In some other embodiments,
the virtual world may be generated based on real-world experiences of the user. The virtual
world may be generated by any of suitable means such as artificial intelligence, virtual
reality technology, augmented reality technology, machine learning, neural networks and so
forth. However, the process of generation of virtual world has been omitted from the present
disclosure for the sake of brevity. Embodiments of the present disclosure cover, or are
intended to cover, any possible and suitable means for generating a user-specific virtual world.
[0035] The generated virtual worlds of the users are stored in the memory unit.
Additionally, the memory unit may be configured to store one or more instructions. The
processing unit may be configured to execute the one or more instructions stored in the
memory unit. The memory unit may include a Random-Access Memory (RAM) unit and/or
a non-volatile memory unit such as a Read Only Memory (ROM), optical disc drive,
magnetic disc drive, flash memory, Electrically Erasable Read Only Memory (EEPROM),
a memory space on a server or cloud and so forth. Example of the processing unit may
include, but not restricted to, a general-purpose processor, a Field Programmable Gate Array
(FPGA), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor
(DSP), microprocessors, microcomputers, micro-controllers, digital signal processors,
central processing units, state machines, logic circuitries, and/or any devices that manipulate
signals based on operational instructions.
[0036] Each user constructs their own virtual world. So, for n users there will be n respective
user-specific virtual worlds stored in the memory unit at the server 104. Further, the memory
unit may store association information between the virtual worlds and the users. The
association information may be used for accessing a virtual world stored in the memory unit.
A user-specific virtual world has an association with the user using one or more of, but not
limited to, phone number, device ID, email ID, username, and authentication credentials etc.
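As an illustration only, the association information described above might be held in a simple registry; the class and method names below (WorldRegistry, register, lookup) are hypothetical and not part of the disclosure:

```python
# Hypothetical sketch of the association information: each user
# identifier (phone number, device ID, email ID, username, etc.)
# maps to the key under which that user's virtual world is stored.
class WorldRegistry:
    def __init__(self):
        self._by_identifier = {}   # identifier -> world_id
        self._worlds = {}          # world_id -> virtual world data

    def register(self, world_id, world, identifiers):
        """Store a user-specific virtual world and link it to the
        user's identifiers."""
        self._worlds[world_id] = world
        for ident in identifiers:
            self._by_identifier[ident] = world_id

    def lookup(self, identifier):
        """Retrieve the virtual world associated with an identifier."""
        world_id = self._by_identifier.get(identifier)
        return self._worlds.get(world_id)

registry = WorldRegistry()
registry.register("world-1", {"components": ["cinema", "bar"]},
                  ["+91-0000000000", "user@example.com"])
world = registry.lookup("user@example.com")
```

Any of the listed identifiers resolves to the same stored world, which matches the idea that a single user may be associated with one or more user devices 102.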
[0037] Turning now to figure 2 which shows a block diagram illustrating a system 200 for
dynamically updating a virtual world of a user based on mood(s) of the user relative to other
users/people and elements. According to an embodiment of the present disclosure, the
system 200 may comprise input/output interface 202, a processing unit 204, a memory 206,
and various units 210. The I/O interface 202 may include a variety of software and hardware
interfaces, for example, a web interface, a graphical user interface, input device, output
device and the like. The I/O interface 202 may allow the system 200 to interact with the user
devices 102 directly or through other devices. The memory 206 is communicatively coupled
to the processing unit 204 and the units 210. Further, the memory 206 comprises data 208
obtained from various data sources, user specific virtual worlds 212, and a mood board 214.
The data 208 includes, but not limited to, social media interactions with other users or
elements, data from mobile applications, chat and call logs of the user, voice or text or video
communications with other users, profile information changes, status information changes,
interactions with one or more real-world or virtual world entities.
[0038] Further, the units 210 comprise a detecting unit 218, a creating unit 220, an
identifying unit 222, and other units 224. Further, the units 218-224 may be dedicated
hardware units capable of performing various operations of the system 200. However,
according to other embodiments, the function of the units 218-224 may be performed by the
processing unit 204 or an application-specific integrated circuit (ASIC) or any circuitry
capable of executing instructions stored in the memory 206 of the system 200. For the sake
of illustration, it is shown here that the user devices 102 are external to the system. However,
in an alternative embodiment the user devices 102 may be a part of the system 200.
[0039] The system 200 receives, via the I/O interface 202, data from the one or more user
devices 102 and stores the received data in the memory 206. The one or more user devices
102 may transmit generated user specific virtual worlds 212 to the system 200 and the
system 200 may receive the generated virtual worlds 212 from the one or more user devices
102. In another embodiment, the system 200 may receive various data 208 from the user
devices 102 and then generates user specific virtual worlds 212 by analyzing the received
data 208. The system 200 may store the received or generated virtual worlds 212 in the
memory 206. Each virtual world corresponding to each user device 102 may be stored along
with association information described above. The system modifies the user-specific virtual
worlds 212 using the data stored in the mood board 214 and stores the updated virtual worlds
216 in the memory. The process of updating the virtual worlds is described below in detail.
[0040] The system detects one or more activities of the user using the detecting unit 218.
The one or more activities may be activities of the user in real world or in virtual world. The
one or more activities may include, but are not limited to, watching a movie or other video
content, listening to audio content, playing video games, studying, dancing, singing, audio or
video or text conversations with other user(s), and other interactions with the users and the
elements. In an embodiment, the one or more activities may be detected by the detecting
unit 218 by analyzing the data 208 received from the user devices 102.
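The disclosure does not specify how the detecting unit 218 analyzes the data 208, so the sketch below uses simple keyword rules as a stand-in for whatever analysis is actually used; all names and keyword lists are illustrative assumptions:

```python
# Illustrative stand-in for the detecting unit 218: map raw data
# records (app logs, chat messages, media history, etc.) to a set
# of user activities via keyword matching.
ACTIVITY_KEYWORDS = {
    "watching video": ["movie", "video", "stream"],
    "listening audio": ["song", "music", "playlist"],
    "playing games": ["game", "level", "score"],
}

def detect_activities(records):
    """Return the set of user activities suggested by raw data records."""
    detected = set()
    for record in records:
        text = record.lower()
        for activity, keywords in ACTIVITY_KEYWORDS.items():
            if any(k in text for k in keywords):
                detected.add(activity)
    return detected

activities = detect_activities(["Played the song 'Yesterday'",
                                "Watched a movie trailer"])
```

A production system would replace the keyword table with trained models, but the interface (records in, activities out) mirrors what the creating unit consumes next.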
[0041] A creating unit 220 of the system creates a mood board 214 for the user based on the
detected one or more activities. The mood board 214 is an indicator of what the user's mood
may be like in relation to the one or more activities. The creating unit 220 identifies one or
more mood(s) of the user with respect to other elements or users in relation to the detected
one or more activities at a given moment. The creating unit 220 uses machine learning
models to derive mood(s) of the user and extract specific entities of interest from text data,
and uses deep learning techniques to extract mood(s) from voice and video data.
machine learning models may comprise various models including a language detection
model and a language specific model. These models may help the system in classifying a
piece of text to a mood. The system may use these models to extract mood(s) of user from
other types of data including, but not limited to, social media interactions with other users
or elements, data from mobile applications, chat and call logs of the user, voice or text or
video communications with other users, profile information changes, status information
changes, interactions with one or more real-world or virtual world entities. The system may
extract time, keywords, location etc. from the above data. The creating unit 220 may take the
help of an external knowledge graph like Wikipedia, IMDB, Google etc. to derive mappings
and to identify the mood(s) of the user. The user will have to interact with at least one element
in the system to generate a mood board. As more interactions take place, the quality of the
generated mood board increases. One of the factors for measuring the quality of the
mood board is the accuracy of the data reflecting the mood of the user. Further, the external
knowledge graphs are selected to cover as many entities as possible that a user talks about.
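The two-stage arrangement described above, a language detection model routing text to a language-specific model that classifies it to a mood, might be sketched as follows; both "models" here are trivial keyword stand-ins, not the trained models the disclosure contemplates:

```python
# Stage 1: a stand-in language detection model (a real system would
# use a trained language-identification model).
def detect_language(text):
    return "hi" if any(w in text for w in ("bahut", "accha")) else "en"

# Stage 2: stand-in language-specific mood lexicons.
MOOD_LEXICON = {
    "en": {"romantic": ["love", "romance"], "sad": ["sad", "miss"]},
    "hi": {"happy": ["accha"]},
}

def classify_mood(text):
    """Classify a piece of text to a mood via the language-specific model."""
    lang = detect_language(text)
    lowered = text.lower()
    for mood, words in MOOD_LEXICON.get(lang, {}).items():
        if any(w in lowered for w in words):
            return mood
    return "neutral"

mood = classify_mood("I miss my dog so much")
```

Routing through a language model first is what gives the system its multi-language coverage, since each language gets its own classifier.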
[0042] Consider an example where a user is watching the ‘Titanic’ movie. The
creating unit 220 uses the knowledge graph and the models to determine that the current
mood of the user is romantic. In another example, if the user is playing sad songs, the
creating unit 220 uses the knowledge graph and the models to determine that the current
mood of the user is sad. Consider another example where a user is watching a football match
and in parallel texting someone about his/her activity. The system can detect that the mood
of the user is delightful by analyzing the conversations using the machine learning models.
[0043] Thus, the mood board comprises information indicating the one or more activities
performed by the user, the time and/or location of the activity, and mood(s) of the user
corresponding to each activity.
[0044] In addition to detecting the mood of the user, the creating unit 220 further detects one
or more elements of interest for the user associated with the identified mood(s). The
elements of interest may comprise, but are not limited to, places, people, animals, and
components. The creating unit 220 uses the machine learning models and the knowledge
graph to detect the one or more elements of interest. Consider the above example, where
during evening time the user is watching the ‘Titanic’ movie and at the same time likes a
post/photo related to caramel popcorn. This could mean that the user has a sweet-tooth mood
in addition to the romantic mood. Further, at the same time or after watching
the movie the user misses his dog. Thus, the creating unit 220 updates the mood board by
correlating the detected one or more elements of interest (i.e. likes caramel popcorn and
misses the dog) with the corresponding user activity and mood(s). Further, the moods of the
user with respect to the users/people and elements are time dependent. Table 1 illustrates a
few user activities, and the mood(s) and elements of interest corresponding to those activities:
User activity                        Mood(s)              Elements of interest
(evening) watching romantic movie    Romantic             Popcorn, Dog, Romantic places
(evening) watching football match    Delightful           Beer, Snacks, Desired Jersey
(night) listening sad songs          Sad and depressed    Water, Motivational content
(morning) studying                   Serious              Books, Notebook, Library
Table 1
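One plausible way to represent the mood board 214 in memory is as a list of records mirroring the columns of Table 1; the class and field names below are illustrative assumptions, not the disclosure's data model:

```python
# A mood board 214 entry: one detected activity, its time of day,
# the identified mood(s), and the correlated elements of interest.
from dataclasses import dataclass, field

@dataclass
class MoodBoardEntry:
    time_of_day: str
    activity: str
    moods: list
    elements_of_interest: list = field(default_factory=list)

mood_board = [
    MoodBoardEntry("evening", "watching romantic movie",
                   ["romantic"], ["popcorn", "dog", "romantic places"]),
    MoodBoardEntry("night", "listening sad songs",
                   ["sad", "depressed"], ["water", "motivational content"]),
]

# The kind of query the identifying unit 222 might run against the board:
romantic_elements = [e for entry in mood_board if "romantic" in entry.moods
                     for e in entry.elements_of_interest]
```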
[0045] This way the system builds the mood board 214 and then updates the virtual world
for the user based on the data present in the mood board. For updating the virtual world, the
identifying unit 222 identifies one or more elements of interest from the mood board. The
one or more elements of interest may comprise, but are not limited to, people, places, animals,
and components, and the processing unit 204 then dynamically updates the virtual world
using the identified one or more user interest elements. The processing unit 204 uses virtual
elements to update the virtual worlds. To update the virtual world, the processing unit 204
performs one or more of: adding one or more user interest elements in the virtual world e.g.,
a popcorn stand and a petting station may be added in the virtual world when the mood of
the user is romantic; modifying one or more existing elements of the virtual world e.g., a
movie board in the virtual world is updated to display the list of movies depending on the
mood of the user; removing one or more elements from the virtual world, e.g., if the user
disliked some item/element when his/her mood was romantic, the same item/element may be
temporarily or permanently removed from the virtual world; and adding one or more
components on avatars of users in the virtual world e.g., if a user is watching a football
match in the virtual world, the dress of the user avatar may be updated based on dress of
his/her favorite team.
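The update operations listed above might be sketched as follows, with the virtual world held as a plain dictionary; this structure and all names are assumptions made for illustration, not the disclosure's actual data model:

```python
# Sketch of the processing unit 204's update step: add interest
# elements, modify an existing element, and remove disliked elements,
# all driven by the mood read from the mood board.
def update_world(world, mood, elements_of_interest):
    """Dynamically update a virtual world in place based on the user's
    mood and the elements of interest from the mood board."""
    # Add user interest elements (e.g. a popcorn stand for a romantic mood).
    for element in elements_of_interest:
        if element not in world["components"]:
            world["components"].append(element)
    # Modify an existing element (e.g. the movie board's listing).
    if "movie board" in world["components"]:
        world["movie_board_filter"] = mood
    # Remove elements the user disliked while in this mood.
    disliked = world.get("disliked", {}).get(mood, [])
    world["components"] = [c for c in world["components"]
                           if c not in disliked]
    return world

world = {"components": ["cinema", "movie board"],
         "disliked": {"romantic": ["bar"]}}
update_world(world, "romantic", ["popcorn stand", "petting station"])
```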
[0046] It should be noted that the virtual worlds described herein are custom virtual worlds.
In other words, each user has their own virtual world. Further, the mood(s) of the user are
relative to other users/people or elements.
Example:
[0047] Figure 3(a) illustrates an exemplary custom virtual world 300-1 of a user. The virtual
world 300-1 shown in figure 3(a) is generated by monitoring data related to user activities
stored in the memory 206. However, the process of generation of virtual world has been
omitted from the present disclosure for the sake of brevity. The virtual world 300-1
comprises various components such as an entry point 310, a cinema 320, a restaurant 330, a
football ground 340, and a bar 350. It should be noted that the virtual world 300-1 shown in
figure 3(a) is for illustration purposes only; in general a virtual world may vary from the
virtual world 300-1 shown in figure 3(a) and may comprise different components than the
components of the virtual world shown in figure 3(a).
[0048] Figure 3(b) shows an exemplary view of the virtual world 300-2 that has been
obtained after updating the virtual world 300-1 based on mood(s) of the user. Consider the
above example, where during evening time a user is watching the ‘Titanic’ movie. The system
determines, using the knowledge graph and/or by analyzing data from various data sources,
that the movie is a romantic movie. Thus, it is determined that the mood of the user is
romantic. While watching the movie the user likes caramel popcorn. At the same time, the
user is also missing their dog. All this information is determined by analyzing the data
obtained from various data sources, and the determined information is stored in the mood
board. Thus, the mood board of the user depicts that the user watches romantic movies during
evening time, misses their dog while watching, and prefers having caramel popcorn during
the movie. This information from the mood board is used in modifying the virtual world of
the user. The next time the user visits the virtual world during an evening, the system shows
a list of romantic movies to the user as shown in figure 3(b). When the movie break happens,
the system shows a popcorn man/stand near the user, and after watching the movie the system
leads the user towards a petting station 360, where they can view short dog videos (dogs
playing, being petted, etc.). This way the system may control where the user goes after each
interaction.
[0049] Thus, based on the mood of the user various components of the virtual worlds such
as, but not limited to, weather, background, color, sounds, texture, billboards etc. may be
changed. In an embodiment, the system may automatically change one or more components
of the virtual world without identifying elements of interest. For example, after determining
that the user has a romantic mood, the system may change weather of the virtual world to
rainy weather and may also add components such as flowers etc. into the virtual world. In
an embodiment, if the user is chatting with their friends about visiting some place, the
system 200 may change the weather of the virtual world to the weather of the place which
the user wants to visit with his/her friends.
[0050] Thus, the system disclosed herein enables dynamic updating of the virtual world
based on mood(s) of the user. This updating of the virtual world does not require any
involvement of the user; thus, the user's time and effort are saved. The system therefore
provides an efficient and convenient technique to dynamically and automatically update the
virtual world of the user based on mood(s) of the user, which helps in retaining the interest
of the user in the virtual world. Thus, the system disclosed herein enhances user experience.
Further, the system described herein provides adequate coverage for a larger section of users
because the system supports multiple languages with the help of a language detection model
and a language-specific model.
[0051] Fig. 4 is a flow chart representing an exemplary method 400 for enhancing user
experience by dynamically updating the virtual world of a user based on mood(s) of the user
according to an embodiment of the present disclosure. The method may be performed by
the system 200 in conjunction with the various units described in figure 2. The method 400
is merely provided for exemplary purposes, and embodiments are intended to include or
otherwise cover any methods or procedures for dynamically updating virtual world of a user
based on mood(s) of the user.
[0052] As illustrated in figure 4, the method 400 includes one or more blocks illustrating a
method to dynamically update a virtual world of a user based on mood(s) of the user. The
method 400 may be described in a general context of computer executable instructions.
Generally, computer executable instructions can include routines, programs, objects,
components, data structures, procedures, modules, and functions, which perform specific
functions or implement specific abstract data types.
[0053] The order in which the method 400 is described is not intended to be construed as a
limitation, and any number of the described method blocks can be combined in any order to
implement the method. Additionally, individual blocks may be deleted from the method
without departing from the spirit and scope of the subject matter described herein.
Furthermore, the method can be implemented in any suitable hardware, software, firmware,
or combination thereof.
[0054] At step 402, the method comprises detecting one or more user activities. The one or
more activities may be activities of the user in the real world or in the virtual world.
[0055] At step 404, the method comprises creating a mood board by identifying one or more
related mood(s) of the user in relation to the detected one or more user activities at given
moments. The mood board 214 is an indicator of what the user's mood may be like in relation
to the one or more activities. The system identifies one or more mood(s) of the user with
respect to other elements or users in relation to the detected one or more activities at a given
moment. The system uses various machine learning models and knowledge graphs to extract
one or more mood(s) of the user from various data sources including, but not limited to,
social media interactions with other users or elements, data from mobile applications, chat
and call logs of the user, voice or text or video communications with other users, profile
information changes, status information changes, interactions with one or more real-world
or virtual world entities.
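The disclosure leaves the machine learning models and knowledge graphs unspecified, so the sketch below substitutes a toy keyword classifier merely to illustrate the aggregation step: each data source contributes items of text, and a mood label is assigned per item. The keyword table and function names are assumptions:

```python
# Toy stand-in for the machine-learning models mentioned in the disclosure:
# a keyword lookup that assigns a mood label to a piece of source text.
MOOD_KEYWORDS = {
    "romantic": ["love", "date", "romantic"],
    "sad": ["miss", "lonely", "sad"],
}

def classify_mood(text):
    text = text.lower()
    for mood, words in MOOD_KEYWORDS.items():
        if any(w in text for w in words):
            return mood
    return "neutral"

def extract_moods(data_sources):
    """data_sources: mapping of source name -> list of text items
    (chat logs, status updates, etc.). Returns a mood per item."""
    return {source: [classify_mood(t) for t in items]
            for source, items in data_sources.items()}

sources = {
    "chat_logs": ["I miss my dog so much"],
    "status_updates": ["Movie date tonight, feeling romantic"],
}
moods = extract_moods(sources)
```

A production system would replace classify_mood with the trained models and knowledge-graph lookups the specification refers to; only the per-source aggregation shape is the point here.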
[0056] In an embodiment, in addition to detecting the mood of the user, the system 200
further detects one or more elements of interest for the user associated with the identified
mood(s) of the user. The elements of interest may comprise, but are not limited to, places,
people, and components. The system analyzes the data 208 stored in the memory 206 using
the machine learning models and the knowledge graph to detect the one or more elements
of interest corresponding to the identified one or more mood(s) of the user. Further, the
system updates the mood board by correlating the detected one or more elements of interest
with the corresponding user activities and mood(s).
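The updating step can be read as merging newly detected interest elements into the entry that matches the same activity and mood, creating a new entry when none matches. A minimal sketch, with the board represented as a plain list of dicts (the representation and function name are assumptions):

```python
def update_mood_board(board, activity, mood, detected_elements):
    """board: list of dicts with 'activity', 'mood', 'elements' keys.
    Correlates detected elements with the matching activity/mood entry,
    appending only elements not already recorded."""
    for entry in board:
        if entry["activity"] == activity and entry["mood"] == mood:
            for element in detected_elements:
                if element not in entry["elements"]:
                    entry["elements"].append(element)
            return board
    # No matching entry: record a fresh correlation.
    board.append({"activity": activity, "mood": mood,
                  "elements": list(detected_elements)})
    return board

mood_board = []
update_mood_board(mood_board, "watching a movie", "romantic", ["caramel popcorn"])
update_mood_board(mood_board, "watching a movie", "romantic", ["dog videos"])
```

After the two calls above, the board holds a single correlated entry for the romantic movie-watching activity.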
[0057] At step 406, after creating the mood board, the method comprises identifying one or
more user interest elements from the mood board. In an embodiment, identifying the one or
more user interest elements comprises analysing a plurality of data sources, including one
or more of: social media interactions, data from mobile applications, chat and call logs,
voice or text or video communications, profile
information changes, status information changes, interactions with one or more real-world
or virtual world entities.
[0058] Finally, at step 408, the method comprises dynamically enhancing the virtual world
using the identified one or more user interest elements. In an embodiment, dynamically
enhancing the virtual world using the identified one or more user interest elements
comprises one or more of: adding one or more user interest elements in the virtual world;
modifying one or more existing elements of the virtual world; removing one or more
elements from the virtual world; and adding one or more components on avatars of users in
the virtual world.
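The four enhancement operations of step 408 can be sketched as a single update over a world state holding named elements and per-avatar components. The representation and the enhance_world signature are illustrative assumptions, not the disclosed implementation:

```python
def enhance_world(world, add=(), modify=None, remove=(), avatar_components=None):
    """Apply the four enhancement operations from step 408: add interest
    elements, modify existing elements, remove elements, and add
    components on user avatars. Returns a new (shallow-copied) world."""
    world = {"elements": dict(world.get("elements", {})),
             "avatars": {k: list(v) for k, v in world.get("avatars", {}).items()}}
    for name in add:                                   # add interest elements
        world["elements"].setdefault(name, {})
    for name, props in (modify or {}).items():         # modify existing ones
        if name in world["elements"]:
            world["elements"][name].update(props)
    for name in remove:                                # remove elements
        world["elements"].pop(name, None)
    for avatar, comps in (avatar_components or {}).items():
        world["avatars"].setdefault(avatar, []).extend(comps)
    return world

updated = enhance_world(
    {"elements": {"billboard": {"text": "ad"}}},
    add=["popcorn_stand"],
    modify={"billboard": {"text": "romantic movies"}},
    avatar_components={"user_1": ["heart_emote"]})
```

Here the popcorn stand from the earlier example is added, the billboard is retargeted to romantic movies, and the user's avatar receives a new component, mirroring the four listed operations.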
[0059] In an embodiment, the one or more user interest elements comprise, but are not
limited to, people, places, and components. Further, the virtual world is a user-specific or custom
virtual world and the mood(s) of the user are with respect to other users/people or elements.
In an alternative embodiment, the method of figure 4 can be performed in a virtual world
application that can be viewed in augmented reality, virtual reality, or a flat display.
[0060] Accordingly, from the above disclosure, it may be worth noting that the present
disclosure provides an easy, convenient, and efficient technique for updating a virtual world
of a user based on mood(s) of the user. Additionally, the disclosed techniques help in
retaining the user interest by personalizing the user’s virtual world based on mood(s) of the
user. Thus, the user will spend more time in the virtual world and hence, the user experience
will be improved.
Computer System
[0061] Figure 5 illustrates a block diagram of an exemplary computer system 500 for
implementing embodiments consistent with the present invention. In an embodiment, the
computer system 500 can be the system 200 which is used for dynamically updating a
custom virtual world of a user based on mood(s) of the user relative to other users/people
and elements. According to an embodiment, the computer system 500 may receive various
data from one or more user devices 510 which may include, for example, custom virtual
worlds and interaction or messaging data received from the one or more of user devices.
The computer system 500 may comprise a central processing unit (“CPU” or “processor”)
502. The processor 502 may comprise at least one data processor for executing program
components for executing user- or system-generated business processes. The processor 502
may include specialized processing units such as integrated system (bus) controllers,
memory management control units, floating point units, graphics processing units, digital
signal processing units, etc.
[0062] The processor 502 may be disposed in communication with one or more input/output
(I/O) devices (511 and 512) via I/O interface 501. The I/O interface 501 may employ
communication protocols/methods such as, without limitation, audio, analog, digital, stereo,
IEEE-1394, serial bus, Universal Serial Bus (USB), infrared, PS/2, BNC, coaxial,
component, composite, Digital Visual Interface (DVI), high-definition multimedia interface
(HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE
802.11 a/b/g/n/x, Bluetooth, cellular (e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System for Mobile Communications (GSM), Long-Term Evolution (LTE), or the like), etc.
[0063] Using the I/O interface 501, the computer system 500 may communicate with one
or more I/O devices (511 and 512).
[0064] In some embodiments, the processor 502 may be disposed in communication with a
communication network 509 via a network interface 503. The network interface 503 may
communicate with the communication network 509. The network interface 503 may employ
connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted
pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token
ring, IEEE 802.11a/b/g/n/x, etc. The communication network 509 can be implemented as
one of the different types of networks, such as an intranet or a Local Area Network (LAN)
within an organization. The communication network 509 may either be a dedicated
network or a shared network, which represents an association of the different types of
networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP),
Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol
(WAP), etc., to communicate with each other. Further, the communication network 509 may
include a variety of network devices, including routers, bridges, servers, computing devices,
storage devices, etc.
[0065] In some embodiments, the processor 502 may be disposed in communication with a
memory 505 (e.g., RAM 513, ROM 514, etc. as shown in FIG. 5) via a storage interface
504. The storage interface 504 may connect to memory 505 including, without limitation,
memory drives, removable disc drives, etc., employing connection protocols such as Serial
Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), Fibre Channel, Small Computer Systems Interface
(SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state
memory devices, solid-state drives, etc.
[0066] The memory 505 may store a collection of program or database components,
including, without limitation, user/application data 506, an operating system 507, web
browser 508 etc. In some embodiments, the computer system 500 may store user/application
data 506, such as the data, variables, records, etc. as described in this invention. Such
databases may be implemented as fault-tolerant, relational, scalable, secure databases such
as Oracle or Sybase.
[0067] The operating system 507 may facilitate resource management and operation of the
computer system 500. Examples of operating systems include, without limitation, Apple
Macintosh OS X, UNIX, Unix-like system distributions (e.g., Berkeley Software
Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red
Hat, Ubuntu, Kubuntu, etc.), International Business Machines (IBM) OS/2, Microsoft
Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry Operating System
(OS), or the like. I/O interface 501 may facilitate display, execution, interaction,
manipulation, or operation of program components through textual or graphical facilities.
For example, I/O interface may provide computer interaction interface elements on a display
system operatively connected to the computer system 500, such as cursors, icons, check
boxes, menus, windows, widgets, etc. Graphical User Interfaces (GUIs) may be employed,
including, without limitation, Apple Macintosh operating systems’ Aqua, IBM OS/2,
Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, web interface libraries (e.g.,
ActiveX, Java, JavaScript, AJAX, HTML, Adobe Flash, etc.), or the like.
[0068] In some embodiments, the computer system 500 may implement a web browser 508
stored program component. The web browser 508 may be a hypertext viewing application,
such as Microsoft™ Internet Explorer, Google™ Chrome, Mozilla™ Firefox, Apple™
Safari™, etc. Secure web browsing may be provided using Secure Hypertext Transport
Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web
browsers may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java,
Application Programming Interfaces (APIs), etc. In some embodiments, the computer
system 500 may implement a mail server stored program component. The mail server 516
may be an Internet mail server such as Microsoft Exchange, or the like. The mail server 516
may utilize facilities such as Active Server Pages (ASP), ActiveX, American National
Standards Institute (ANSI) C++/C#, Microsoft .NET, CGI scripts, Java, JavaScript, PERL,
PHP, Python, WebObjects, etc. The mail server may utilize communication protocols such
as Internet Message Access Protocol (IMAP), Messaging Application Programming
Interface (MAPI), Microsoft Exchange, Post Office Protocol (POP), Simple Mail Transfer
Protocol (SMTP), or the like. In some embodiments, the computer system 500 may
implement a mail client 515 stored program component. The mail client 515 may be a mail
viewing application, such as Apple™ Mail, Microsoft™ Entourage, Microsoft™ Outlook,
Mozilla™ Thunderbird, etc.
[0069] Furthermore, one or more computer-readable storage media may be utilized in
implementing embodiments consistent with the present invention. A computer-readable
storage medium refers to any type of physical memory on which information or data
readable by a processor may be stored. Thus, a computer-readable storage medium may
store instructions for execution by one or more processors, including instructions for causing
the processor(s) to perform steps or stages consistent with the embodiments described
herein. The term “computer-readable medium” should be understood to include tangible
items and exclude carrier waves and transient signals, i.e., non-transitory. Examples include
Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory,
nonvolatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs),
flash drives, disks, and any other known physical storage media.
[0070] The foregoing description of the various embodiments is provided to enable any
person skilled in the art to make or use the present disclosure. Various modifications to these
embodiments will be readily apparent to those skilled in the art, and the generic principles
defined herein may be applied to other embodiments without departing from the spirit or
scope of the disclosure. Thus, the present disclosure is not intended to limit the embodiments
shown herein, and instead the embodiments should be accorded the widest scope consistent
with the principles and novel features disclosed herein.

WE CLAIM:
1. A method (400) for dynamically enhancing user experience, the method comprising:
detecting (402) one or more user activities;
creating (404) a mood board (214) by identifying one or more related mood(s) of the
user in relation to the detected one or more user activities at given moments;
identifying (406) one or more user interest elements from the mood board (214); and
dynamically enhancing (408) a virtual world using the identified one or more user
interest elements.
2. The method as claimed in claim 1, wherein creating the mood board further
comprises:
detecting one or more user interest elements associated with the identified one or
more mood(s) of the user by analysing a plurality of data stored in a memory (206); and
updating the mood board by correlating the detected one or more user interest
elements with the one or more user activities.
3. The method as claimed in claim 2, wherein analysing the plurality of data sources
comprises analysing one or more of:
social media interactions, data from mobile applications, chat and call logs, voice or
text or video communications, profile information changes, status information changes,
interactions with one or more real-world or virtual world entities.
4. The method as claimed in claim 1, wherein dynamically enhancing the virtual world
using the identified one or more user interest elements comprises:
adding one or more user interest elements in the virtual world;
modifying one or more existing elements of the virtual world;
removing one or more elements from the virtual world; and
adding one or more components on avatars of users in the virtual world.
5. The method as claimed in claim 1, wherein the one or more user interest elements
comprises people, places, and components.
6. The method as claimed in claim 1, wherein the virtual world is a user-specific virtual
world, and wherein the one or more related mood(s) of the user correspond to mood(s) of
the user relative to other users or elements.
7. A system (200) for dynamically enhancing user experience, the system comprising:
a detecting unit (218) configured to detect one or more user activities;
a creating unit (220) configured to create a mood board (214) by identifying one or
more related mood(s) of the user in relation to the detected one or more user activities at
given moments;
an identifying unit (222) configured to identify one or more user interest elements
from the mood board (214); and
a processing unit (204) configured to dynamically enhance a virtual world using the
identified one or more user interest elements.
8. The system as claimed in claim 7, wherein the creating unit is further configured to:
detect one or more user interest elements associated with the identified one or more
mood(s) of the user by analysing a plurality of data stored in a memory (206); and
update the mood board by correlating the detected one or more user interest elements
with the one or more user activities.
9. The system as claimed in claim 8, wherein the creating unit is configured to analyse
one or more of:
social media interactions, data from mobile applications, chat and call logs, voice or
text or video communications, profile information changes, status information changes,
interactions with one or more real-world or virtual world entities.
10. The system as claimed in claim 7, wherein the processing unit is further configured
to:
add one or more user interest elements in the virtual world;
modify one or more existing elements of the virtual world;
remove one or more elements from the virtual world; and
add one or more components on avatars of users in the virtual world.
11. The system as claimed in claim 7, wherein the one or more user interest elements
comprises people, places, and components.
12. The system as claimed in claim 7, wherein the virtual world is a user-specific virtual
world, and wherein the one or more related mood(s) of the user correspond to mood(s) of
the user relative to other users or elements.

Documents

Application Documents

# Name Date
1 202011033840-FORM 18 [02-07-2024(online)].pdf 2024-07-02
2 202011033840-STATEMENT OF UNDERTAKING (FORM 3) [07-08-2020(online)].pdf 2020-08-07
3 202011033840-POWER OF AUTHORITY [07-08-2020(online)].pdf 2020-08-07
4 202011033840-Proof of Right [23-10-2020(online)].pdf 2020-10-23
5 202011033840-COMPLETE SPECIFICATION [07-08-2020(online)].pdf 2020-08-07
6 202011033840-FORM 1 [07-08-2020(online)].pdf 2020-08-07
7 202011033840-DECLARATION OF INVENTORSHIP (FORM 5) [07-08-2020(online)].pdf 2020-08-07
8 202011033840-DRAWINGS [07-08-2020(online)].pdf 2020-08-07