
“Method For Dynamic Content Generation And Device Thereof”

Abstract: A method and device for dynamically generating virtual content for a user are disclosed. The device comprises a memory to store one or more user activities and one or more emotional states of the user. The device further comprises one or more processors coupled to the memory to analyse the one or more user activities and detect one or more elements from the analysed user activities. The one or more processors may be configured to detect the one or more emotional states of the user from the user activities and to correlate the emotional states, with respect to the detected one or more elements, with the corresponding user activity. The device is further configured to dynamically generate one or more virtual content depicting varied contextual emotional states.


Patent Information

Application #
202011027699
Filing Date
30 June 2020
Publication Number
01/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ipo@knspartners.com
Parent Application

Applicants

HIKE PRIVATE LIMITED
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Inventors

1. Ankur Narang
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
2. Anshuman Misra
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
3. Dipankar Sarkar
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
4. Kavin Bharti Mittal
Hike Pvt. Ltd., 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Specification

[0001] The present invention generally relates to social networking and, more
specifically, to a method and a device for generating dynamic content on a multimedia
platform.
Background of Invention
[0002] This section is intended to provide information relating to the field of the
invention; accordingly, any approach or functionality described below should not be
assumed to qualify as prior art merely by its inclusion in this section.
[0003] The exponential growth of social media and Internet usage has contributed to
the development of new sources of information and entertainment and, with it, new and
interesting opportunities for communicating on social networks. Instant messaging
applications generally allow users to send and receive text-based messages. However,
a major drawback of text-based messages is the lack of emotional connection. As a
result, users sometimes convey their emotions and/or moods to another user by adding
emoticons to a text-based message. Such emoticons, however, are static and are stored
in the database of the messaging application.
[0004] Thus, the present invention aims at overcoming the drawbacks of conventional
techniques of messaging with emoticons and at generating emoticons or emojis
dynamically.
Objects of the Invention
[0005] An object of the present invention is to provide a method and device for
detecting the mood or current state of mind of a user about a particular person, object,
or activity based on the messages and interactions performed by the user.
[0006] Another object of the present invention is to provide a method and device to
dynamically generate personalized emojis/avatars based on the user’s emotional state
and the context information shared by the user with other users.
Summary of the Invention
[0007] The present disclosure overcomes one or more shortcomings of the prior art and
provides additional advantages discussed throughout the present disclosure. Additional
features and advantages are realized through the techniques of the present disclosure.
Other embodiments and aspects of the disclosure are described in detail herein and are
considered a part of the claimed disclosure.
[0008] In one non-limiting embodiment of the present disclosure, a method for
dynamically generating one or more virtual content is disclosed. The method comprises
analysing one or more user activities. The method further comprises detecting one or
more elements from the analysed one or more user activities. Further, the method
comprises detecting one or more emotional states of the user from the user activities.
The method further comprises correlating the emotional states, with respect to the
detected one or more elements, with the corresponding user activity. Further, the
method comprises dynamically generating one or more virtual content depicting varied
contextual emotional states.
[0009] In another non-limiting embodiment of the present disclosure, the one or more
user activities comprise messages, status messages, the user profile, the usage pattern
of the application, status messages or profiles of contacts, voice, and video.
[0010] In yet another non-limiting embodiment of the present disclosure, the one or
more elements correspond to at least one person or object.
[0011] In still another non-limiting embodiment of the present disclosure, the one or
more elements are detected based on one or more machine learning techniques.
[0012] In another non-limiting embodiment of the present disclosure, the one or more
emotional states of the user may comprise the mood of the user.
[0013] In yet another non-limiting embodiment of the present disclosure, the one or
more virtual content may comprise one or more emojis/avatars.
[0014] In still another non-limiting embodiment of the present disclosure, the method
comprises creating the one or more emojis/avatars based on a selfie image of the at
least one user.
[0015] In another non-limiting embodiment of the present disclosure, a computing
device for dynamically generating one or more virtual content is disclosed. The device
comprises a memory configured to store one or more user activities and one or more
emotional states of the user. The device further comprises one or more processors,
coupled with the memory and configured to analyse the one or more user activities
stored in the memory of the device. Further, the one or more processors are configured
to detect one or more elements from the analysed one or more user activities and to
detect the one or more emotional states of the user from the user activities. Further, the
device is configured to correlate the emotional states, with respect to the detected one
or more elements, with the corresponding user activity and to dynamically generate
one or more virtual content depicting varied contextual emotional states.
[0016] In another non-limiting embodiment of the present disclosure, the one or more
user activities comprise messages, status messages, the user profile, the usage pattern
of the application, status messages or profiles of contacts, voice, and video.
[0017] In yet another non-limiting embodiment of the present disclosure, the one or
more elements correspond to one or more persons or objects.
[0018] In still another non-limiting embodiment of the present disclosure, the one or
more elements are detected based on one or more machine learning techniques.
[0019] In another non-limiting embodiment of the present disclosure, the one or more
emotional states of the user comprise the mood of the user.
[0020] In yet another non-limiting embodiment of the present disclosure, the one or
more virtual content comprises one or more emojis/avatars.
[0021] In still another non-limiting embodiment of the present disclosure, the device
is configured to create the one or more emojis/avatars based on a selfie image of the at
least one user.
[0022] The foregoing summary is illustrative only and is not intended to be in any way
limiting. In addition to the illustrative aspects, embodiments, and features described
above, further aspects, embodiments, and features will become apparent by reference
to the drawings and the following detailed description.
Brief Description of the Drawings
[0023] The embodiments of the disclosure itself, as well as a preferred mode of use,
further objectives and advantages thereof, will best be understood by reference to the
following detailed description of an illustrative embodiment when read in conjunction
with the accompanying drawings. One or more embodiments are now described, by
way of example only, with reference to the accompanying drawings in which:
[0024] Figure 1 illustrates an exemplary environment 100 of a computing device for
dynamically generating one or more virtual content in accordance with an embodiment
of the present disclosure.
[0025] Figure 2 is a block diagram 200 of the computing device for dynamically
generating one or more virtual content in accordance with an embodiment of the
present disclosure.
[0026] Figure 3 illustrates a method 300 for dynamically generating one or more
virtual content in accordance with an embodiment of the present disclosure.
[0027] The figures depict embodiments of the disclosure for purposes of illustration
only. One skilled in the art will readily recognize from the following description that
alternative embodiments of the structures and methods illustrated herein may be
employed without departing from the principles of the disclosure described herein.
Detailed Description of the Invention
[0028] The foregoing has broadly outlined the features and technical advantages of the
present disclosure in order that the detailed description of the disclosure that follows
may be better understood. It should be appreciated by those skilled in the art that the
conception and specific embodiment disclosed may be readily utilized as a basis for
modifying or designing other structures for carrying out the same purposes of the
present disclosure.
[0029] The novel features which are believed to be characteristic of the disclosure, both
as to its organization and method of operation, together with further objects and
advantages will be better understood from the following description when considered
in connection with the accompanying figures. It is to be expressly understood, however,
that each of the figures is provided for the purpose of illustration and description only
and is not intended as a definition of the limits of the present disclosure.
[0030] In the present document, the word "exemplary" is used herein to mean "serving
as an example, instance, or illustration." Any embodiment or implementation of the
present subject matter described herein as "exemplary" is not necessarily to be
construed as preferred or advantageous over other embodiments.
[0031] Terms such as “emoji”, “avatar”, “sticker”, and “emoticon” may be used
interchangeably throughout the description.
[0032] Disclosed herein is a method and a device for dynamically generating virtual
content for a user. Briefly stated, embodiments of the present invention are directed
towards dynamically generating and updating, in real time, a database of emojis/avatars
for a user based on the user’s current state of mind towards any person or object.
Specifically, the user’s current state of mind towards another person or object may be
judged or analyzed by tracking recent activities of the user and/or the user’s emotional
state, i.e. the user’s mood towards a particular user or object. In an embodiment, the
user’s activities may be detected based on the user’s recent communication or
interaction with another person, the user’s liking or disliking of the behavior of other
users, etc. In an exemplary embodiment, the emotional state may be analyzed from the
user’s profile picture, the user’s status changes, or any recent activity performed by the
user, such as checking the profiles of his contacts. In another exemplary embodiment,
the emotional state may also be analyzed by monitoring the user’s facial expressions or
gestures on viewing the other user or object. In an embodiment, real-time changes in
the user’s current state about a person or object may be updated in the emoji database
and presented in the form of new or updated emojis/avatars, enabling the user to express
their feelings about the person or object over a social/messaging platform. The
technique presented in the present invention allows real-time generation and updating
of personalized emojis/stickers to present the user’s current state of mind towards
another person or object. Such real-time generation and depiction of the user’s state of
mind in the form of emojis/avatars may enhance the user’s experience.
[0033] In an exemplary embodiment, the device may detect from the user’s interactions
with his friend that the user is currently angry with his friend over Manchester United
and may analyze that the user’s mood towards his friend is negative in the context of
Manchester United. Accordingly, based on the user’s mood, the device may
dynamically generate an animated emoji of the user which best represents the user’s
present state. The generated emoji(s) may be shown only for the temporary state of the
user. In another exemplary embodiment, the same user, who may have been angry with
his friend over Manchester United, may some time later be having fun with his pet dog
and post a selfie with the dog. The device may then dynamically generate and update
the animated emoji of the user to present the user’s current mood towards his pet. In
both embodiments, the device detects and analyzes the user’s activities and emotional
state to dynamically generate emojis/stickers for sharing with other user(s) that reflect
the real-time status of the user towards a particular person or object based on the
context. Accordingly, the dynamically generated emojis/stickers may be provided to
the user, enabling the user to view and select one or more emojis/stickers for insertion
into a message.
[0034] In the following detailed description of the embodiments of the disclosure,
reference is made to the accompanying drawings that form a part hereof, and in which
are shown by way of illustration specific embodiments in which the disclosure may be
practiced. These embodiments are described in sufficient detail to enable those skilled
in the art to practice the disclosure, and it is to be understood that other embodiments
may be utilized and that changes may be made without departing from the scope of the
present disclosure. The following description is, therefore, not to be taken in a limiting
sense.
[0035] Figure 1 illustrates an exemplary environment 100 in which the invention may
be implemented. It must be understood by a person skilled in the art that the present
invention may also be implemented in various other environments and with
components other than those shown in Fig. 1.
[0036] In the illustrated embodiment, as shown in Figure 1, in the environment 100, a
plurality of user devices (102a, 102b, 102c, 102d, 102e) are communicably coupled to
each other and to a server (106) via a network (104). The user devices (102a, 102b,
102c, 102d, 102e) may be capable of communicating over the network (104) to send
and receive information, including instant messages, and of performing various online
activities or the like. The user devices (102a, 102b, 102c, 102d, 102e) may also be able
to access various computing applications, including a browser, a messenger, or other
web-based applications, from the server (106) using the network (104).
[0037] In an exemplary embodiment, the functions of the user devices (102a, 102b,
102c, 102d, 102e) and/or the server (106) are described in detail in conjunction with
FIG. 2. In an embodiment, FIG. 2 illustrates a user device 200 that may be used to
implement the invention. The user device 200 may include more or fewer components
than those shown in FIG. 2. In an embodiment, the user device 200 may include
additional components that are necessary to perform various other functionalities of
the user device; these are not shown for the sake of brevity. However, the components
shown are sufficient to disclose an illustrative embodiment for practicing the present
invention. User device 200 may represent at least one of the user devices (102a, 102b,
102c, 102d, 102e) presented in Figure 1.
[0038] The user device 200 includes a processing unit 201 in communication with a
memory 203 via a communication bus. The user device 200 also includes a display 205
and input/output network interfaces 207. The memory 203 further includes a detection
unit 203a, an analyzing engine 203b, a mood board unit 203c, a multimedia unit 203d,
and applications such as a web browser, a messenger, and the like resident thereon.
The multimedia unit 203d may include a plurality of emojis that may be selected by the
user for sharing with other users while messaging. The emojis stored in the multimedia
unit 203d can also be used by the user to reflect his profile picture, status message, etc.
The emoji/avatar may be available in the form of text messages, audio clips, video clips,
or a combination thereof. In an embodiment, the presence of emojis in the multimedia
unit is optional in the user device 200. The emojis can be retrieved from a third-party
server and downloaded to the user device 200; the user device 200 may retrieve the
emojis from the server through the network interface. The multimedia unit 203d may
be dynamically updated with various emojis based on the current state of the user. The
current state of the user may be detected by the detection unit 203a present in the
memory 203.
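By way of a purely illustrative, non-limiting sketch, the arrangement of units described above may be organized along the following lines in Python; the class names, method names, and the stub detection logic are assumptions introduced for explanation only and do not form part of the specification:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Emoji:
    """A generated emoji/avatar and the context it was created for."""
    subject: str                                   # person or object the emoji relates to
    mood: str                                      # emotional state it depicts
    attributes: Dict[str, str] = field(default_factory=dict)


@dataclass
class MultimediaUnit:
    """Holds the emoji library (203d); it may start empty and be filled dynamically."""
    emojis: List[Emoji] = field(default_factory=list)

    def add(self, emoji: Emoji) -> None:
        self.emojis.append(emoji)


@dataclass
class DetectionUnit:
    """Detects elements (people/objects) mentioned in a user activity (203a)."""
    def detect_elements(self, activity_text: str, known_entities: List[str]) -> List[str]:
        return [e for e in known_entities if e.lower() in activity_text.lower()]


@dataclass
class AnalyzingEngine:
    """Derives an emotional state from an activity (203b); a stub stands in for the language model."""
    def detect_mood(self, activity_text: str) -> str:
        return "positive" if "playing" in activity_text.lower() else "neutral"


@dataclass
class MoodBoardUnit:
    """Correlates detected elements with emotional states (203c)."""
    correlations: Dict[str, str] = field(default_factory=dict)

    def correlate(self, element: str, mood: str) -> None:
        self.correlations[element] = mood


@dataclass
class UserDevice:
    """Memory-resident units of device 200 wired together."""
    detection: DetectionUnit = field(default_factory=DetectionUnit)
    analyzer: AnalyzingEngine = field(default_factory=AnalyzingEngine)
    mood_board: MoodBoardUnit = field(default_factory=MoodBoardUnit)
    multimedia: MultimediaUnit = field(default_factory=MultimediaUnit)


if __name__ == "__main__":
    device = UserDevice()
    status = "playing with Bruno"
    mood = device.analyzer.detect_mood(status)
    for element in device.detection.detect_elements(status, known_entities=["Bruno"]):
        device.mood_board.correlate(element, mood)
        device.multimedia.add(Emoji(subject=element, mood=mood))
    print(device.mood_board.correlations)   # {'Bruno': 'positive'}
```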
[0039] The detection unit 203a may detect one or more activities performed by the user.
The user activities may include changing status messages, changing the profile bio, the
usage pattern of the application (time of day, frequency), checking the status messages
or bios of contacts in the contact book, etc. In an embodiment, these user activities may
be detected or analyzed randomly or periodically, for example every minute or every
hour, or at any specific time provided by the user or by the application on its own. The
detection unit 203a may then detect one or more elements specific to a person or object
from the one or more user activities. In an exemplary embodiment, if a user has updated
his status message to “playing with Bruno”, then the detection unit 203a may consider
the change of status message as the user’s activity and his pet dog “Bruno” as the
element detected from the user activity. Along with detection of the user’s activities,
an analysing engine 203b may analyze the emotional state or mood of the user from the
user’s activity, particularly when the user performs any activity such as sending
messages, changing the profile picture, or updating the status message. At first, the
language of the messages is analyzed. To analyze the mood of the user from the
language of the message, the analysing engine 203b may run a language model and
identify the mood or emotional state of the user. The language model is a deep learning
model trained on a large corpus of data so that it can process the text in a meaningful
statistical way. In an embodiment, the present application may use a deep learning
model that also runs a keyword system in parallel to fulfill any specific need, based on
the analyzed activity, that may override the primary detection system. Based on the
activities performed by the user and the emotional state detected by the analysing
engine 203b, the mood board unit 203c is updated.
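As a non-limiting illustration of the dual-path mood detection described above (a language model running alongside a keyword system that can override the primary detection), the following Python sketch uses a trivial word-list stand-in for the trained language model; the keyword list and function names are illustrative assumptions only:

```python
# Keyword system running in parallel with the model; a hit overrides the model result.
OVERRIDE_KEYWORDS = {
    "furious": "angry",
    "heartbroken": "sad",
    "thrilled": "happy",
}


def model_mood(text: str) -> str:
    """Stand-in for the trained language model's mood prediction."""
    positive = {"love", "fun", "great", "playing"}
    negative = {"hate", "angry", "bad", "annoyed"}
    words = set(text.lower().split())
    if words & negative:
        return "negative"
    if words & positive:
        return "positive"
    return "neutral"


def detect_mood(text: str) -> str:
    keyword_hits = [mood for kw, mood in OVERRIDE_KEYWORDS.items() if kw in text.lower()]
    if keyword_hits:
        return keyword_hits[0]       # keyword system overrides the primary detection
    return model_mood(text)


print(detect_mood("I am thrilled about the match"))   # 'happy' via keyword override
print(detect_mood("playing with Bruno"))              # 'positive' via the model path
```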
[0040] The mood board unit 203c uses machine learning models to derive sentiment
and extract specific entities of interest from the message text. Additionally, the mood
board unit 203c uses deep learning techniques to extract text and emotional state from
voice, video, etc. The machine learning models may include a language detection
model and language-specific models; these models may help in classifying a piece of
text to a sentiment. The detected and analyzed data extracted from emotions, time,
keywords, location, etc., along with an external world knowledge graph (such as
Wikipedia) and curated data, is used to correlate the one or more elements extracted
from the user’s activity with the emotional state of the user. The mood board unit 203c
may also use image analysis techniques to analyze the user’s emotional state by
checking the profile picture or status information. For example, if the user is wandering
in a zoo and shares a profile picture with a tiger, the image analysis tool may identify
the profile picture and may relate the information to the current context to detect the
user’s mood as adventurous.
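A minimal, non-limiting sketch of the correlation step described above is given below, assuming a stubbed knowledge-graph lookup in place of an external source such as Wikipedia; the data structure and field names are illustrative only:

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-in for the external world knowledge graph lookup (e.g. a Wikipedia query).
KNOWLEDGE = {"tiger": "animal", "manchester united": "football club", "bruno": "pet dog"}


@dataclass
class Correlation:
    element: str
    mood: str
    activity: str
    category: Optional[str]   # enrichment from the knowledge graph, if available


def correlate(elements: List[str], mood: str, activity: str) -> List[Correlation]:
    """Link each detected element to the user's emotional state for the given activity."""
    return [
        Correlation(element=e, mood=mood, activity=activity,
                    category=KNOWLEDGE.get(e.lower()))
        for e in elements
    ]


records = correlate(["Tiger"], mood="adventurous", activity="profile picture update")
print(records[0])   # Correlation(element='Tiger', mood='adventurous', ..., category='animal')
```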
[0041] The attributes used for the generation of emojis are also present in the
multimedia unit 203d. In an embodiment, the appearance of a selected emoji can be
modified, such as by modifying a feature of the selected emoji, adding a new attribute
to the selected emoji, modifying a feature of the emoji’s attribute, or any combination
thereof. In some embodiments, the appearance of the selected emoji may be modified
to mimic an avatar associated with the user. The attributes may include themes, text
font, clothes, background, color combination, etc. The features of an emoji may include
the shape, size, color, orientation, etc. of portions of the emoji. Applications such as the
messenger are used to share the emojis generated by the device 200 based on the user
activities and the emotional state for a particular user or object. The browser may help
in the extraction of the external world knowledge graph required by the learning
models.
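The attribute and feature modification described above may be sketched, in a purely illustrative and non-limiting way, as a copy-and-override operation on an emoji template; the attribute keys and example values below are assumptions introduced for explanation:

```python
from dataclasses import dataclass, field, replace
from typing import Dict


@dataclass(frozen=True)
class EmojiTemplate:
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)   # theme, clothes, background, ...
    features: Dict[str, str] = field(default_factory=dict)     # shape, size, color, orientation


def personalize(base: EmojiTemplate, new_attrs: Dict[str, str],
                new_features: Dict[str, str]) -> EmojiTemplate:
    """Return a modified copy of the base emoji, e.g. to mimic the user's avatar style."""
    return replace(base,
                   attributes={**base.attributes, **new_attrs},
                   features={**base.features, **new_features})


base = EmojiTemplate("dancing", attributes={"theme": "default"}, features={"color": "yellow"})
custom = personalize(base,
                     new_attrs={"clothes": "party outfit", "background": "stage"},
                     new_features={"color": "blue", "size": "large"})
print(custom.attributes, custom.features)
```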
[0042] In an exemplary embodiment, if a user ‘X’ is talking about a dance competition
in his messages, then the mood board unit 203c, based on the language models, may
correlate the user’s mood for participation in “dance” with the past dance activity
performed by user ‘X’. In an exemplary embodiment, when user ‘X’ is interacting with
user ‘Y’, the multimedia unit 203d may dynamically generate emojis based on this
correlation, such as projecting user ‘X’ dancing with user ‘Y’ in various dance forms.
In this way, user ‘X’ may indicate his emotional connection towards dancing with user
‘Y’. Further, in another exemplary embodiment, if there is a bad dance experience in
the history between user ‘X’ and user ‘Z’, then the mood board unit 203c may
dynamically generate emojis of user ‘X’ with user ‘Z’ in bad/funny dance forms. In this
way, the device 200 may present the current state of the user, i.e. declining to dance
with user ‘Z’. However, if the mood board unit 203c detects that the user is again
willing to dance with user ‘Z’ to give him another chance to improve the performance,
then the device may dynamically generate emojis projecting good dance forms with
user ‘Z’. Accordingly, based on the person and object, the device 200 may generate the
emoji or avatar on the fly.
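A non-limiting sketch of how the interaction history between two users might drive the choice of dance-themed emoji, as in the example above, is shown below; the history store and the textual emoji descriptions are illustrative stand-ins for the actual generation step:

```python
from typing import Dict, Tuple

# Hypothetical history store: (user, other_user) -> sentiment of their shared "dance" context.
history: Dict[Tuple[str, str], str] = {
    ("X", "Y"): "positive",
    ("X", "Z"): "negative",
}


def dance_emoji(user: str, other: str) -> str:
    """Pick a dance-themed emoji variant from the pair's past experience; default to neutral."""
    sentiment = history.get((user, other), "neutral")
    variants = {
        "positive": f"{user} dancing gracefully with {other}",
        "negative": f"{user} in a funny/awkward dance with {other}",
        "neutral": f"{user} dancing near {other}",
    }
    return variants[sentiment]


print(dance_emoji("X", "Y"))   # graceful variant, positive history
print(dance_emoji("X", "Z"))   # funny variant, negative history
```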
[0043] The processing unit 201 may use the emojis generated in the memory 203 and
display them to the user on the display 205 based on the contextual information. The
contextual information may be considered the information retrieved and analyzed from
the memory 203 based on the user’s activity and emotional state. The processing unit
201 also communicates with the input/output interface 207 to send and receive the
emojis selected by the various users for sharing with other users on similar or different
platforms. In an embodiment, it is to be appreciated that, in order to perform the
functionalities described with reference to Figure 2 in the above paragraphs, the user
device 200 may remain connected to the server (106) through the network (104).
[0044] In an alternative embodiment, the device 200 may be implemented as a server
(106). In such an embodiment, the user devices (102a, 102b, 102c, 102d, 102e) may
interact with the server 106, 200 to extract the dynamically generated emojis for the
user. Further, in an embodiment, the user devices (102a, 102b, 102c, 102d, 102e) may
also operate with other servers (not shown) to extract content. Such content may
include, but is not limited to, textual, graphical, audio, or video content, or any
combination thereof. Devices that may operate as the server may be any network
devices, including but not limited to personal computers, desktop computers,
multiprocessor systems, microprocessor-based or programmable consumer electronics,
network PCs, server devices, network appliances, and the like.
[0045] In such an embodiment, the server 200 may detect the activities of users on the
user devices (102a, 102b, 102c, 102d, 102e) and the users’ emotional states by using
learning models and language-specific models, in a similar way as described above. In
particular, the detection unit 203a of the server 200 may be configured to detect one or
more activities performed by the user. In an example, the user activities may include,
but are not limited to, changing status messages, changing the profile bio, the usage
pattern of the application (time of day, frequency), checking the status messages or bios
of contacts in the contact book, etc. Further, in an exemplary embodiment, these user
activities may be detected or analyzed randomly or periodically, for example every
minute or every hour, or at any specific time provided by the user or by the application
on its own. In an exemplary embodiment, if user ‘A’ has changed his profile picture to
one with his fiancée ‘B’, then the detection unit 203a of the server 200 may consider
the change of profile picture as one of the user’s activities and the “fiancée of the user”
as one of the elements detected from the user’s activity. Along with the detection of the
user’s activities, an analysing engine 203b of the server 200 may be configured to
analyze/detect the emotional state or mood of the user from the user’s activity. Further,
based on the detection of the emotional state or mood of the user, the mood board unit
203c can learn and correlate the elements present in the user activities with the
emotional state and convey the same to the multimedia unit 203d. The multimedia unit
203d, in response to the correlation provided by the mood board unit 203c, may
generate new emojis or modify the emojis already existing in the multimedia unit 203d
based on the contextual information for one or more users. The mood board unit 203c
may communicate with the multimedia unit 203d to automatically convert the emojis
based on the learning models. The multimedia unit 203d may create new emojis at any
time based on the learning developed from the user’s activities and mood information.
In an exemplary embodiment, the processing unit 201 may automatically share the
real-time generated emojis with the user devices (102a, 102b, 102c, 102d, 102e) as soon
as the server device 200 generates them. In another embodiment, the server 200 may
send a request to the user device (102a, 102b, 102c, 102d, 102e) to accept the
dynamically generated emojis specific to another person or object.
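The two delivery modes described above (automatic sharing of a freshly generated emoji and an acceptance request sent to the user device) may be sketched, purely for illustration, as follows; the payload fields and function names are assumptions and not part of the specification:

```python
from dataclasses import dataclass


@dataclass
class GeneratedEmoji:
    subject: str
    mood: str


def deliver(emoji: GeneratedEmoji, device_id: str, auto_push: bool) -> dict:
    """Either push the freshly generated emoji to the device immediately, or
    send a request asking the device to accept it (both payloads are illustrative)."""
    if auto_push:
        return {"type": "push", "device": device_id, "emoji": vars(emoji)}
    return {"type": "accept_request", "device": device_id, "emoji": vars(emoji),
            "message": f"New emoji for '{emoji.subject}' is available. Accept?"}


print(deliver(GeneratedEmoji("fiancée", "affectionate"), device_id="102a", auto_push=True))
print(deliver(GeneratedEmoji("fiancée", "affectionate"), device_id="102b", auto_push=False))
```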
[0046] In an embodiment, the processing unit 201 of the server 200 may collect the
user’s activity information on a regular basis or based on event triggering. The user
activities may include text messages, status messages, the user profile, the usage pattern
of the application, status messages or profiles of contacts, and voice and video of the
contacts. The collected data may be stored in the memory 203 and can be used by the
learning models for analysing the context information. Particularly, the learning models
may extract one or more elements specific to any other person or object from the user’s
activities. At the same time, the server 200 may also monitor the emotional state of the
user towards any other person or object. Based on the activity data available with the
server 200, the server 200 may generate the emoji or avatar of the person in real time
to express the user’s emotion towards another person or object. The other person may
be selected from the user’s contact book or may be any celebrity such as an actor,
actress, sportsperson, etc. Similarly, the object may be anything that correctly presents
the contextual information, such as a house, a bicycle, a car, etc.
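A minimal, non-limiting sketch of collecting activity information both on a regular basis and on event triggering is shown below; the collector class, interval handling, and record fields are illustrative assumptions only:

```python
import time
from typing import Callable, Dict, List


class ActivityCollector:
    """Collects user activity records either when an event fires or on a fixed interval."""

    def __init__(self, interval_seconds: float):
        self.interval = interval_seconds
        self._last_poll = float("-inf")   # ensures the first periodic poll always collects
        self.records: List[Dict] = []

    def on_event(self, activity_type: str, payload: str) -> None:
        # Event-triggered collection (e.g. profile picture change, new message).
        self.records.append({"type": activity_type, "payload": payload, "trigger": "event"})

    def poll(self, fetch_snapshot: Callable[[], str]) -> None:
        # Periodic collection: only fetch if the configured interval has elapsed.
        now = time.monotonic()
        if now - self._last_poll >= self.interval:
            self.records.append({"type": "snapshot", "payload": fetch_snapshot(),
                                 "trigger": "periodic"})
            self._last_poll = now


collector = ActivityCollector(interval_seconds=3600)
collector.on_event("status_message", "At the stadium tonight!")
collector.poll(lambda: "usage: 42 minutes today")
print(collector.records)
```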
[0047] The processing unit 201 of the server 200 may extract the emoji/avatar
generated in the memory 203 and share the same with the user devices (102a, 102b,
102c, 102d, 102e) through the input/output interface 207. The user devices (102a, 102b,
102c, 102d, 102e) may be provided with an option to accept or reject the emojis/avatars
generated by the server 200. The user devices (102a, 102b, 102c, 102d, 102e) may also
be provided with an option to provide feedback for the dynamically generated
emojis/avatars. Based on the feedback received from the user devices (102a, 102b,
102c, 102d, 102e), the learning models may manipulate/update the emojis/avatars and
share the modified emoji/avatar with the user device, if requested. In an exemplary
embodiment, a voice message of user ‘A’ is analyzed in which user ‘A’ is praising his
favorite artist ‘B’. Based on the learning models’ analysis, the server 200 checks for the
presence of emojis related to artist ‘B’ in the multimedia unit 203d. In the absence of
an artist ‘B’ emoji in the unit, the multimedia unit 203d may generate an entirely new
emoji/avatar on the fly for expressing the emotional connection between user ‘A’ and
artist ‘B’. Such on-the-fly generation of new emojis, or updating of existing emojis
based on the current context information/experience about a person or object, may lead
to more enjoyment and enrich the user’s experience.
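The feedback-driven update of the emoji store described above may be sketched, purely for illustration, as follows; generating an entry on the fly when none exists and revising it on rejection are shown with stub logic, and all names are assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EmojiEntry:
    subject: str
    mood: str
    accepted: bool = False
    feedback: List[str] = field(default_factory=list)


class EmojiLibrary:
    """Server-side emoji store that reacts to device feedback."""

    def __init__(self) -> None:
        self.entries: Dict[str, EmojiEntry] = {}

    def ensure(self, subject: str, mood: str) -> EmojiEntry:
        # Generate a new entry on the fly if nothing exists for this subject yet.
        return self.entries.setdefault(subject, EmojiEntry(subject, mood))

    def apply_feedback(self, subject: str, accepted: bool, comment: str = "") -> None:
        entry = self.entries[subject]
        entry.accepted = accepted
        if comment:
            entry.feedback.append(comment)
        if not accepted:
            # A rejected emoji could be regenerated with a different mood/style later.
            entry.mood = "pending_revision"


library = EmojiLibrary()
library.ensure("artist B", mood="admiration")      # no emoji for artist B existed yet
library.apply_feedback("artist B", accepted=True, comment="love it")
print(library.entries["artist B"])
```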
[0048] In an alternative embodiment, the server 200 may implement neural networks
to generate other virtual content, such as audio and video stickers, along with emojis or
avatars, for sharing with other users.
[0049] The processing unit 201 may include one or more processors to perform the
functions stated in the invention. The processor may be a general-purpose processor, a
special-purpose processor, a conventional processor, a digital signal processor (DSP), a
plurality of microprocessors, one or more microprocessors in association with a DSP
core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs),
Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit
(IC), a state machine, and the like.
[0050] The memory 203 is a computer-readable storage medium. The memory may
include any suitable volatile or non-volatile computer-readable storage media. Software
is stored in the memory for execution and/or access by one or more of the respective
processors.
[0051] The display 205 may include but is not limited to Cathode ray tube display
(CRT), Light-emitting diode display (LED), Electroluminescent display (ELD),
Electronic paper, E Ink, Plasma display panel (PDP), Liquid crystal display (LCD),
High-Performance Addressing display (HPA), Thin-film transistor display (TFT),
Organic light-emitting diode display (OLED), or any other display as may be obvious
to a person skilled in the art.
[0052] When a single device or article is described herein, it will be readily apparent
that more than one device/article (whether or not they cooperate) may be used in place
of a single device/article. Similarly, where more than one device or article is described
herein (whether or not they cooperate), it will be readily apparent that a single
device/article may be used in place of the more than one device or article or a different
number of devices/articles may be used instead of the shown number of devices or
programs. The functionality and/or the features of a device may be alternatively
embodied by one or more other devices which are not explicitly described as having
such functionality/features. Thus, other embodiments of the invention need not include
the device itself.
[0053] Figure 3 illustrates a method 300 for dynamically generating personalized
virtual content for a user in accordance with an embodiment of the present disclosure.
[0054] At block 302, the method 300 may include analysing one or more user activities.
The plurality of user activities performed at the user devices (102a, 102b, 102c, 102d,
102e) are stored in the memory 203. The one or more processors may analyze the one
or more user activities such as messages, status messages, the user profile, the usage
pattern of the application, status messages or profiles of contacts, voice, and video. In
an exemplary embodiment, the one or more processors may monitor the activities
performed on the user’s device. The one or more processors may analyse the activities
performed by the user either at a specific interval of time or on the triggering of an
event, such as a change in profile picture or status message, or the sending of a message
in the form of text, audio, or video to any other person.
[0055] At block 304, the method 300 may include detecting one or more elements from
the analysed one or more user activities. The one or more elements may be the person
or object related to the one or more user activities. In an exemplary embodiment, the
person may be a celebrity, an idol, etc. whom the user has referred to in his past or
recent activities. The learning models, language tools, image analysis tools, etc. may
detect the one or more elements from the one or more user activities and consider them
as keywords or data points for generating the contextual information.
[0056] At block 306, the method 300 may include detecting one or more emotional
states of the user from the user activities. As stated in block 304, the one or more
processors, by using the learning models, may also detect the emotional state of the user
from the activities performed by the user. The emotional state may be considered the
current state of mind of the person, such as feeling sad, happy, angry, disgusted, or
surprised.
[0057] At block 308, the method 300 may include correlating the emotional state, with
respect to the detected one or more elements, with the corresponding user activity. The
one or more processors may establish a correlation between the activity performed by
the user and the emotional state or mood of the user. In an exemplary embodiment,
when an activity is performed on the user device, the language/image analysis models
perform an analysis to detect those elements which may be responsible for the current
state of mind of the user. In other words, a correlation is established between the
elements and the emotional state of the user, and the same may be stored in the memory
203 for future use.
[0058] At block 310, the method 300 may include dynamically generating one or more
virtual content depicting varied contextual emotional states. The virtual content may be
one or more emojis/avatars or audio or video stickers.
[0059] At block 312, the method 300 may include updating the virtual content in the
memory 203. Accordingly, the one or more processors may update the virtual content,
such as emojis/avatars/stickers, in the memory for the user’s selection. The user may
provide feedback (implicit or explicit) to the multimedia unit 203d. The implicit
feedback may be obtained by analyzing the usage of the dynamically generated
emoji/avatar. In another embodiment, the user may provide explicit feedback for the
sticker through messages or any other sticker/emoji.
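Tying blocks 302 to 312 together, the following non-limiting Python sketch traces one pass through the method 300 with stub implementations of each step; the function names and the trivial detection logic are illustrative assumptions only:

```python
from typing import Dict, List


def analyse_activities(activities: List[str]) -> List[str]:
    # Block 302: here the raw activity texts are simply passed through.
    return activities


def detect_elements(activity: str, known: List[str]) -> List[str]:
    # Block 304: keyword-style element (person/object) detection.
    return [k for k in known if k.lower() in activity.lower()]


def detect_emotion(activity: str) -> str:
    # Block 306: trivial stand-in for the learning-model based mood detection.
    return "happy" if "playing" in activity.lower() else "neutral"


def run_pipeline(activities: List[str], known_entities: List[str]) -> Dict[str, Dict]:
    library: Dict[str, Dict] = {}
    for activity in analyse_activities(activities):
        mood = detect_emotion(activity)
        for element in detect_elements(activity, known_entities):
            # Blocks 308-312: correlate, generate the virtual content, update memory.
            library[element] = {"mood": mood, "activity": activity,
                                "sticker": f"{element} ({mood})"}
    return library


print(run_pipeline(["playing with Bruno"], known_entities=["Bruno"]))
```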
[0060] The flowchart and block diagrams in the Figures illustrate the architecture,
functionality, and operation of possible implementations of devices, methods, and
computer program products according to various embodiments of the present invention.
In this regard, each block in the flowchart or block diagrams may represent a module,
unit, segment, or portion of instructions, which comprises one or more executable
instructions for implementing the specified logical function(s). In some alternative
implementations, the functions noted in the block may occur out of the order noted in
the figures. For example, two blocks shown in succession may, in fact, be executed
substantially concurrently, or the blocks may sometimes be executed in the reverse
order, depending upon the functionality involved. It will also be noted that each block
of the block diagrams and/or flowchart illustration, and combinations of blocks in the
block diagrams and/or flowchart illustration, can be implemented by special purpose
hardware-based systems that perform the specified functions or acts or carry out
combinations of special purpose hardware and computer instructions.
[0061] The descriptions of the various embodiments of the present invention have been
presented for purposes of illustration but are not intended to be exhaustive or limited to
the embodiments disclosed. Many modifications and variations will be apparent to
those of ordinary skill in the art without departing from the scope and spirit of the
invention. The terminology used herein was chosen to best explain the principles of the
embodiment, the practical application or technical improvement over technologies
found in the marketplace, or to enable others of ordinary skill in the art to understand
the embodiments disclosed herein.

WE CLAIM:

1. A method for dynamically generating one or more virtual content, the method
comprising:
analysing one or more user activities;
detecting one or more elements from the analysed one or more user activities;
detecting one or more emotional states of the user from the user activities;
correlating the emotional states with respect to the detected one or more elements
with the corresponding user activity; and
dynamically generating one or more virtual content depicting varied contextual
emotional states.
2. The method as claimed in claim 1, wherein the one or more user activities comprise
messages, status messages, a user profile, a usage pattern of the application, status
messages or profiles of contacts, voice, and video.
3. The method as claimed in claim 1, wherein the one or more elements correspond to at
least one person or object.
4. The method as claimed in claim 1, wherein the one or more elements are detected based
on one or more machine learning techniques.
5. The method as claimed in claim 1, wherein the one or more emotional states of the user
comprise a mood of the user.
6. The method as claimed in claim 1, wherein the one or more virtual content comprises
one or more emojis/avatars.
7. The method as claimed in claim 6, wherein the one or more emojis/avatars are created
based on a selfie image of at least one user.
8. A computing device for dynamically generating one or more virtual content, the device
comprising:
a memory configured to store one or more user activities and one or more
emotional states of the user; and
one or more processors coupled to the memory, the one or more processors
configured to:
analyse the one or more user activities;
detect one or more elements from the analysed one or more user activities;
detect the one or more emotional states of the user from the user activities;
correlate the emotional states with respect to the detected one or more elements
with the corresponding user activity; and
dynamically generate one or more virtual content depicting varied contextual
emotional states.
9. The device as claimed in claim 8, wherein the one or more user activities comprise
messages, status messages, a user profile, a usage pattern of the application, status
messages or profiles of contacts, voice, and video.
10. The device as claimed in claim 8, wherein the one or more elements correspond to one
or more persons or objects.
11. The device as claimed in claim 8, wherein the one or more elements are detected based
on one or more machine learning techniques.
12. The device as claimed in claim 8, wherein the one or more emotional states of the user
comprise a mood of the user.
13. The device as claimed in claim 8, wherein the one or more virtual content comprises
one or more emojis/avatars.
14. The device as claimed in claim 13, wherein the one or more emojis/avatars are created
based on a selfie image of the at least one user.

Documents

Application Documents

# Name Date
1 202011027699-FORM 18 [15-05-2024(online)].pdf 2024-05-15
2 202011027699-STATEMENT OF UNDERTAKING (FORM 3) [30-06-2020(online)].pdf 2020-06-30
3 202011027699-POWER OF AUTHORITY [30-06-2020(online)].pdf 2020-06-30
4 202011027699-Proof of Right [11-12-2020(online)].pdf 2020-12-11
5 202011027699-COMPLETE SPECIFICATION [30-06-2020(online)].pdf 2020-06-30
6 202011027699-FORM 1 [30-06-2020(online)].pdf 2020-06-30
7 202011027699-DECLARATION OF INVENTORSHIP (FORM 5) [30-06-2020(online)].pdf 2020-06-30
8 202011027699-DRAWINGS [30-06-2020(online)].pdf 2020-06-30
9 202011027699-FER.pdf 2025-07-04
10 202011027699-FORM 3 [21-08-2025(online)].pdf 2025-08-21

Search Strategy

1 202011027699_SearchStrategyNew_E_202011027699E_19-03-2025.pdf