
“A Method And System For Generating Multiple Expressive Emojis”

Abstract: Disclosed herein is a method and a system (102) for generating emojis expressing user’s emotions based on a single image of a user. To allow the system (102) to transform a single user image into emojis representing various emotions, the system is first trained by providing a number of facial images, each expressing a distinct emotion. The system (102) then converts each facial image into an emoji such that the emoji perfectly represents the facial image it is based upon. Through this process a modulation factor is trained for each emotion that describes the modulations an emoji needs to go through in order to depict a particular emotion perfectly. When a user provides his/her image (104) to the system (102), the system (102) first converts the image to an emoji and then applies the trained modulation factors to transform the emoji into various emojis depicting different emotions.


Patent Information

Application #
202011000571
Filing Date
06 January 2020
Publication Number
34/2021
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ipo@knspartners.com
Parent Application

Applicants

Hike Private Limited
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Inventors

1. Dipankar Sarkar
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
2. Ankur Narang
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India
3. Kavin Bharti Mittal
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Specification

TECHNICAL FIELD
The present invention generally relates to the field of social networking and more
particularly to a method and system for creating emojis expressing user’s emotions.
BACKGROUND OF THE INVENTION
With the rapid growth in technology pertaining to digital communications, a large number
of social networking applications have become widespread. These social networking
applications provide users with various means such as text, audio and video to
communicate with each other. Amongst the various means, text messages are the most widely
used form of communication amongst users on the social networking applications as they are
the easiest and most convenient form of communication. Considering this, many social
networking applications have provided various features to the users in order to make the
text messaging more interactive and thereby enhance user’s experience. Amongst the
various features provided by the social networking applications, means for expressing
emotions by using emojis, emoticons, stickers, GIFs and other such media is very common.
Further, there are many social networking applications that allow a user to create a user-based
emoji, i.e., to convert the user's image depicting an emotion into an emoji/emoticon.
However, such applications utilize a lot of custom artwork to generate the expression of
the user. Moreover, since such applications are based on custom artwork which is pre-stored
by a digital artist, these applications are unable to generate different realistic variants of an
emotion of a user. Also, some applications require a user to take multiple images
expressing different emotions to generate emojis corresponding to those emotions.
Therefore, such techniques are complex and inefficient.
Thus, there exists a need for a technology that automatically generates multiple
emojis/stickers/emoticons expressing different emotions based on a single image of a user.
OBJECTS OF THE INVENTION:
An object of the present invention is to automatically generate one or more emojis from a
single image of a user.
Another object of the present invention is to automatically generate emojis which represent
realistic emotions of a user.
Yet another object of the present invention is to automatically generate/customize
emojis/emoticons with distinctive facial expressions.
SUMMARY OF THE INVENTION
The present disclosure overcomes one or more shortcomings of the prior art and provides
additional advantages discussed throughout the present disclosure. Additional features and
advantages are realized through the techniques of the present disclosure. Other
embodiments and aspects of the disclosure are described in detail herein and are considered
a part of the claimed disclosure.
In one non-limiting embodiment of the present disclosure, a method of generating emojis
expressing user’s emotions is disclosed. The method comprises receiving a facial image of
the user in real-time. The method further comprises determining a set of user facial
parameter values for the facial image of the user. The method further comprises generating
a user emoji corresponding to the facial image of the user based on the set of user facial
parameter values. The method further comprises determining a set of user emoji-facial
parameter values for the user emoji. Further, the method comprises applying a trained
modulation factor upon the set of user emoji-facial parameter values in order to generate a
new user emoji expressing the user’s emotion.
In another non-limiting embodiment of the present invention, a method for generating the
trained modulation factor for each emoji is disclosed. The method comprises receiving a
plurality of human facial images corresponding to a plurality of emotions over a period of
time-interval. The method further comprises determining a set of human-facial parameter
values for each human facial image. The method further comprises generating a plurality
of emojis corresponding to the plurality of human facial images based on the set of human-facial
parameter values. The method further comprises determining a set of emoji-facial
parameter values for each emoji. The method further comprises determining a variation
between the set of emoji-facial parameter values and the set of human-facial parameter
values for each human facial image. Further, the method comprises generating the trained
modulation factor for each emoji based on the variation, such that the trained modulation
factor comprises a set of modulated emoji-facial parameter values indicating the values
required for matching the plurality of emojis with the corresponding plurality of human
facial images, and the trained modulation factor is generated for each emotion of the
plurality of emotions.
In yet another non-limiting embodiment of the present disclosure, a system for generating
emojis expressing user’s emotions is disclosed. The system comprises a receiving unit
configured to receive a facial image of the user in real-time. The system further comprises
a facial parameter determination unit configured to determine a set of user facial parameter
values for the facial image of the user. The system further comprises an emoji generation
unit configured to generate a user emoji corresponding to the facial image of the user based
on the set of user facial parameter values. The system further comprises an emoji facial
parameter determination unit configured to determine a set of user emoji-facial parameter
values for the user emoji. Further, the system comprises an execution unit configured to
apply a trained modulation factor upon the set of user emoji-facial parameter values in
order to generate a new user emoji expressing the user’s emotion.
In yet another embodiment of the present disclosure, a generation unit configured to
generate the trained modulation factor for each emoji is disclosed. The generation unit is
configured to receive a plurality of human facial images corresponding to a plurality of
emotions over a period of time-interval. The generation unit is further configured to
determine a set of human-facial parameter values for each human facial image. The
generation unit is further configured to generate a plurality of emojis corresponding to the
plurality of human facial images based on the set of human-facial parameter values. The
generation unit is further configured to determine a set of emoji-facial parameter values
for each emoji. The generation unit is further configured to determine a variation between
the set of emoji-facial parameter values and the set of human-facial parameter values for
each human facial image. Further, the generation unit is configured to generate the trained
modulation factor for each emoji based on the variation, such that the trained modulation
factor comprises a set of modulated emoji-facial parameter values indicating the values
required for matching the plurality of emojis with the corresponding plurality of human
facial images, and the trained modulation factor is generated for each emotion of the
plurality of emotions.
The foregoing summary is illustrative only and is not intended to be in any way limiting.
In addition to the illustrative aspects, embodiments, and features described above, further
aspects, embodiments, and features will become apparent by reference to the drawings and
the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments of the disclosure, as well as a preferred mode of use, further
objectives and advantages thereof, will best be understood by reference to the following
detailed description of an illustrative embodiment when read in conjunction with the
accompanying drawings. One or more embodiments are now described, by way of example
only, with reference to the accompanying drawings in which:
Figure 1A shows an exemplary environment 100A of a system for generating trained
modulation factors in accordance with an embodiment of the present disclosure;
Figure 1B shows an exemplary environment 100B for application of the trained
modulation factors in real-time to generate emojis expressing user’s emotions in
accordance with an embodiment of the present disclosure;
Figure 2 shows a block diagram 200 illustrating a system for generating emojis expressing
user’s emotions in accordance with an embodiment of the present disclosure;
Figure 3 shows a method 300 for generating a trained modulation factor for each emoji in
accordance with an embodiment of the present disclosure; and
Figure 4 shows a method 400 for generating emojis expressing user's emotions in real-time in accordance with an embodiment of the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One
skilled in the art will readily recognize from the following description that alternative
embodiments of the structures and methods illustrated herein may be employed without
departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
The foregoing has broadly outlined the features and technical advantages of the present
disclosure in order that the detailed description of the disclosure that follows may be better
understood. It should be appreciated by those skilled in the art that the conception and
specific embodiment disclosed may be readily utilized as a basis for modifying or
designing other structures for carrying out the same purposes of the present disclosure.
The novel features which are believed to be characteristic of the disclosure, both as to its
organization and method of operation, together with further objects and advantages will be
better understood from the following description when considered in connection with the
accompanying figures. It is to be expressly understood, however, that each of the figures
is provided for the purpose of illustration and description only and is not intended as a
definition of the limits of the present disclosure.
In the present document, the word "exemplary" is used herein to mean "serving as an
example, instance, or illustration". Any embodiment or implementation of the present
subject matter described herein as "exemplary" is not necessarily to be construed as
preferred or advantageous over other embodiments.
Further, the terms like “comprises”, “comprising”, or any other variations thereof, are
intended to cover non-exclusive inclusions, such that a setup or device that comprises a list
of components does not include only those components but may include other
components not expressly listed or inherent to such setup or device. In other words, one or
more elements in a system or apparatus preceded by “comprises… a” does not, without
more constraints, preclude the existence of other elements or additional elements in the
system or apparatus or device.
Furthermore, the terms like “emoji”, “emoticons” and/or “sticker” may be used
interchangeably or in combination throughout the description.
Disclosed herein is a method and a system for generating emojis expressing a user’s
emotions. Growth in the technologies pertaining to digital communications have provided
users with various means to interact with each other on various social networking platforms
by using various means such as audio chats/calls, video chats/calls, and text messages.
Amongst the various means, communication by text messages has been the most common and
convenient means of communication and therefore, the social networking platforms are
constantly working to improve user's experience while communicating via text messages
by introducing various emojis, emoticons, stickers, GIFs etc. that allow a user to convey
their emotions. Further, a few social networking platforms allow a user to create an
emoji/emoticon based on the user's image in order to depict a particular emotion. However,
such platforms utilize a lot of custom artwork to generate the expression of the user.
Moreover, since such platforms are based on custom artwork which is pre-stored by a digital
artist, these applications are unable to generate different realistic variants of an emotion of
a user. Also, some platforms require a user to take multiple images expressing different
emotions to generate emojis corresponding to those emotions. Therefore, such techniques
are complex and inefficient.
The present disclosure understands this need and provides a system that uses a single image
of a user and transforms it into various emojis representing different emotions. To allow
the system to transform a single user image into emojis representing various emotions, the
system is first trained by providing a number of facial images, where each image
corresponds to a distinct emotion. The system then tries to convert each facial image into
an emoji such that the emoji perfectly represents the facial image it is based upon. Through
this process a modulation factor is trained for each emotion that describes the modulations
an emoji needs to go through in order to depict a particular emotion perfectly. Now, during
a real-time implementation, when a user provides his/her image to the system, the system
would first convert the image to an emoji and then based on the trained modulation factors
transforms the emoji into various emojis depicting different emotions. This technique
therefore allows a more realistic depiction of user's emotions and offers a simpler approach
for the users as the users do not have to take multiple images for different emotions.
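Purely as an illustration of the two-phase flow described above, the short Python sketch below models parameter sets as plain dictionaries of measurements; the helper names, keys and numbers are assumptions made for exposition and are not prescribed by the present disclosure.

```python
# Illustrative sketch only: the two-phase flow (training, then real-time use).
# Parameter sets are modelled as dicts of measurements; all names/values are assumed.

def variation(human_params, emoji_params):
    """Per-parameter correction needed to make the emoji match the human image."""
    return {k: human_params[k] / emoji_params[k] for k in human_params}

def apply_factor(emoji_params, factor):
    """Modulate an emoji's parameter values with a trained modulation factor."""
    return {k: emoji_params[k] * factor.get(k, 1.0) for k in emoji_params}

# Training phase (Figure 1A): one modulation factor per emotion.
training_pairs = {
    "happy":   ({"eye_area": 4.83, "mouth_width": 5.1},    # human-facial values
                {"eye_area": 5.50, "mouth_width": 4.4}),    # emoji-facial values
    "shocked": ({"eye_area": 6.20, "mouth_width": 3.0},
                {"eye_area": 5.50, "mouth_width": 4.4}),
}
modulation_factors = {emotion: variation(h, e)
                      for emotion, (h, e) in training_pairs.items()}

# Real-time phase (Figure 1B): a single user image yields one emoji per emotion.
user_emoji = {"eye_area": 5.0, "mouth_width": 4.2}          # from the user's image
expressive_emojis = {emotion: apply_factor(user_emoji, mf)
                     for emotion, mf in modulation_factors.items()}
print(expressive_emojis)
```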
Figures 1A and 1B shows exemplary environments 100A and 100B of a system for
generating emojis expressing user’s emotions in accordance with an embodiment of the
present disclosure. Further, figure 1A illustrates the training aspect of the system for
generation of trained modulation factors and figure 1B illustrates the generation of emojis
expressing user's emotions in real-time by applying the trained modulation factors. It must be
understood by a person skilled in the art that the system may also be implemented in various
environments, other than as shown in Figs. 1A and 1B.
The detailed explanation of the exemplary environments 100A and 100B is explained in
conjunction with Figure 2 that shows a block diagram 200 of a system 102 for generating
emojis expressing user’s emotions in accordance with an embodiment of the present
disclosure. Although the present disclosure is explained considering that the system 102 is
implemented on a server, it may be understood that the system 102 may be implemented
in a variety of computing systems, such as a laptop computer, a desktop computer, a
notebook, a workstation, a mainframe computer, a server, a network server, or a cloud-based
computing environment. It may be understood that the system 102 may be accessed by
multiple users through one or more user devices or applications residing on the user
devices.
In one implementation, the system 102 may comprise an I/O interface 202, a processor
204, a memory 206 and the units 212. The memory 206 may be communicatively coupled
to the processor 204 and the units 212. Further, the memory 206 may store user facial
images 208 and modulation factors 210. The significance and use of each of the stored
quantities is explained in the upcoming paragraphs of the specification. The processor 204
may be implemented as one or more microprocessors, microcomputers, microcontrollers,
digital signal processors, central processing units, state machines, logic circuitries, and/or
any devices that manipulate signals based on operational instructions. Among other
capabilities, the processor 204 is configured to fetch and execute computer-readable
instructions stored in the memory 206. The I/O interface 202 may include a variety of
software and hardware interfaces, for example, a web interface, a graphical user interface,
15 and the like. The I/O interface 202 may allow the system 102 to interact with the user
directly or through the user devices. Further, the I/O interface 202 may enable the system
102 to communicate with other computing devices, such as web servers and external data
servers (not shown). The I/O interface 202 can facilitate multiple communications within
a wide variety of networks and protocol types, including wired networks, for example,
LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O
interface 202 may include one or more ports for connecting many devices to one another
or to another server.
In one implementation, the units 212 may comprise a receiving unit 214, a facial parameter
determination unit 216, an emoji generation unit 218, an emoji facial parameter
determination unit 220, an execution unit 222, and a generation unit 224. According to
embodiments of the present disclosure, these units 214-224 may comprise hardware
components like processors, microprocessors, microcontrollers, or application-specific
integrated circuits for performing various operations of the system 102. It must be
understood by a person skilled in the art that the processor 204 may also perform all the
functions of the units 214-224 according to various embodiments of the present disclosure.
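For orientation only, the composition of the units 212 within the system 102 might be modelled along the following lines; the class names and stubbed methods are assumptions for exposition, not an actual implementation.

```python
# Illustrative structure only: composing the units 212 inside the system 102.
# Class names and method stubs are assumed for exposition.
from dataclasses import dataclass, field

class ReceivingUnit:                          # unit 214
    def receive(self, image): return image

class FacialParameterDeterminationUnit:       # unit 216
    def determine(self, image): ...

class EmojiGenerationUnit:                    # unit 218
    def generate(self, facial_params): ...

class EmojiFacialParameterDeterminationUnit:  # unit 220
    def determine(self, emoji): ...

class ExecutionUnit:                          # unit 222
    def apply(self, emoji_params, factor): ...

class GenerationUnit:                         # unit 224
    def train(self, labelled_images): ...

@dataclass
class System102:
    memory: dict = field(default_factory=dict)   # facial images 208, factors 210
    receiving_unit: ReceivingUnit = field(default_factory=ReceivingUnit)
    facial_param_unit: FacialParameterDeterminationUnit = field(
        default_factory=FacialParameterDeterminationUnit)
    emoji_generation_unit: EmojiGenerationUnit = field(default_factory=EmojiGenerationUnit)
    emoji_param_unit: EmojiFacialParameterDeterminationUnit = field(
        default_factory=EmojiFacialParameterDeterminationUnit)
    execution_unit: ExecutionUnit = field(default_factory=ExecutionUnit)
    generation_unit: GenerationUnit = field(default_factory=GenerationUnit)
```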
The detailed explanation of the environments 100A and 100B has been divided into two
sections, where section 1 describes the “training” of the system 102 to generate modulation
factors and section 2 describes the “application” of the trained modulation factors in real-time to generate emojis expressing user's emotions.
Section 1 - Training of the System 102
As described above, figure 1A depicts an exemplary environment 100A illustrating the
training of the system 102 to generate modulation factors (MF) corresponding to various
emotions. To train the system 102, a plurality of human facial images are received by the
system 102 via the receiving unit 214, where each image represents an emotion. In the
exemplary environment 100A, human facial images 1-6 are received by the system where
human facial images 1 and 2 represent “happy emotion”, human facial images 3 and 4
represent “shocked emotion” and human facial images 5 and 6 represent “angry emotion”.
It may also be noted that in the exemplary environment 100A only six images are received
by the system 102 for the sake of simplicity. However, the number of images received by
the system 102 for training can be higher or lower. Further, it must also be noted that for
efficient training of the system 102, human facial images may correspond to humans
belonging to different genders, age groups, ethnicities etc.
Upon receiving the human facial images, in the next step, the generation unit 224 determines
a set of human-facial parameter values for each human facial image 1-6. The set of human-facial
parameter values may include, but is not limited to, human eye area, human mouth
dimension, human ear dimension, human eyebrow area, human cheek area, and human
nose area. Further, based on the determined human-facial parameter values, the generation
unit 224 generates an emoji for each of the human facial images 1-6.
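The disclosure does not specify how these parameter values are measured. One plausible approach, sketched below purely for illustration, is to derive them from facial landmark coordinates obtained from any face landmark detector; the landmark points and the choice of parameters are invented for this example.

```python
# Illustrative only: deriving facial parameter values from landmark coordinates.
# The disclosure does not prescribe how the values are measured; this is one
# plausible way, with invented landmark points (coordinates in cm).

def polygon_area(points):
    """Area of a polygon given its (x, y) vertices, via the shoelace formula."""
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        x2, y2 = points[(i + 1) % len(points)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def facial_parameter_values(landmarks):
    """Build a human-facial parameter set (a dict) from named landmark groups."""
    return {
        "eye_area": polygon_area(landmarks["left_eye"]),
        "mouth_width": abs(landmarks["mouth"][1][0] - landmarks["mouth"][0][0]),
        "eyebrow_area": polygon_area(landmarks["left_eyebrow"]),
    }

landmarks = {
    "left_eye": [(0.0, 0.0), (2.3, 0.0), (2.3, 2.1), (0.0, 2.1)],
    "mouth": [(0.0, -3.0), (3.7, -3.0)],
    "left_eyebrow": [(0.0, 2.6), (2.5, 2.6), (2.5, 3.0), (0.0, 3.0)],
}
print(facial_parameter_values(landmarks))
# approximately {'eye_area': 4.83, 'mouth_width': 3.7, 'eyebrow_area': 1.0}
```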
Now, the basic objective of the training is to alter the generated emojis in such a manner
that they perfectly depict the emotions represented by the corresponding human facial
images. In order to do so, the generation unit 224 determines a set of emoji facial parameter
values for each generated emoji. The set of emoji facial parameter values may include, but
is not limited to, emoji eye area, emoji mouth dimension, emoji ear dimension, emoji
eyebrow area, emoji cheek area, and emoji nose area. The generation unit 224 then
determines a variation between the human-facial parameter values and the emoji facial
parameter values for each generated emoji. For instance, the generation unit 224 may
determine that for human facial image 1, the human eye area is 2.3 cm × 2.1 cm. However,
the emoji eye area corresponding to the emoji generated for human facial image 1 is 2.5
cm × 2.2 cm. The generation unit 224 would then determine a modulation factor (MF) that
would alter the emoji eye area in order to perfectly match with the human facial image 1.
The determined modulation factor (MFHappy) would also describe the alterations required
in other emoji facial parameter values such as emoji mouth dimension, emoji ear
dimension, emoji eyebrow area, emoji cheek area, and emoji nose area in order for the
generated emoji to completely match with the corresponding human facial image.
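Continuing the eye-area example above, one possible (assumed) representation of the modulation factor is a per-parameter scale correction derived from the variation; the mouth values below are invented to round out the example.

```python
# Illustrative: a modulation factor expressed as per-parameter scale corrections,
# derived from the variation in the eye-area example above. The mouth values are
# invented; the exact form of the modulation factor is not fixed by the disclosure.
human_params = {"eye_area": 2.3 * 2.1, "mouth_width": 3.7}   # human facial image 1
emoji_params = {"eye_area": 2.5 * 2.2, "mouth_width": 4.1}   # its generated emoji

mf_happy = {key: human_params[key] / emoji_params[key] for key in human_params}
print(mf_happy)          # e.g. {'eye_area': 0.878..., 'mouth_width': 0.902...}

# Applying MF_Happy to the emoji brings it back in line with the human image.
corrected = {key: emoji_params[key] * mf_happy[key] for key in emoji_params}
assert abs(corrected["eye_area"] - human_params["eye_area"]) < 1e-9
```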
Now, as seen in figure 1A, multiple human facial images are received by the system 102
for a particular emotion. For instance, human facial images 1 and 2 represent happy
emotion. Therefore, the generation unit 224 generates multiple modulation factors
corresponding to the happy emotion. Similarly, multiple modulation factors are generated
corresponding to shocked emotion (MFShocked) and angry emotion (MFAngry). It must be
noted by a skilled person that the system 102 can be trained for multiple emotions including
but not limited to the ones illustrated in exemplary environment 100A depicted in figure
1A.
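Because several training images map to each emotion, several modulation factors may be retained per emotion. A simple, assumed way to keep them grouped is shown below; the numbers are invented.

```python
# Illustrative grouping only: one list of trained modulation factors per emotion,
# one entry per training image of that emotion. All values are invented.
modulation_factors = {
    "happy":   [{"eye_area": 0.878, "mouth_width": 0.902},   # from human image 1
                {"eye_area": 0.910, "mouth_width": 0.955}],   # from human image 2
    "shocked": [{"eye_area": 1.240, "mouth_width": 1.380},
                {"eye_area": 1.190, "mouth_width": 1.410}],
    "angry":   [{"eye_area": 0.940, "mouth_width": 0.760},
                {"eye_area": 0.965, "mouth_width": 0.720}],
}
```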
Section 2 - Application of the trained modulation factors in real-time to generate
emojis expressing user’s emotions
As described above, figure 1B depicts an exemplary environment 100B for application of
the trained modulation factors in real-time to generate emojis expressing user's emotions.
Once the system 102 has been trained or in particular, trained modulation factors have been
generated, the system 102 can operate in real-time to generate emojis for a user expressing
different emotions. The implementation of the system 102 in real-time can occur in one or
more embodiments. In one embodiment, the system 102 may generate various emojis
expressing different emotions for a user as and when the user registers him/herself on the
system 102. In another embodiment, the system 102 may generate a user emoji expressing
a certain emotion on request by the user.
Now referring back to figure 1B in conjunction with figure 2, the receiving unit 214 first
receives a facial image 104 of the user in real-time. The user facial image 104 may be
received by the receiving unit 214 from an external server or can be captured by a media
capturing unit (not shown). The facial parameter determination unit 216 then determines a
set of user facial parameter values for the user facial image 104 received by the receiving
unit 214. The set of user facial parameter values may include, but is not limited to, user eye
area, user mouth dimension, user ear dimension, user eyebrow area, user cheek area, and
user nose area. Based on the set of user facial parameter values, the emoji generation unit
218 generates a user emoji corresponding to the user facial image 104 received by the
receiving unit 214. Once the user emoji corresponding to user facial image 104 has been
created, the emoji facial parameter determination unit 220 determines a set of user emoji
facial parameter values for the user emoji generated by the emoji generation unit 218. The
emoji facial parameter values may include, but is not limited to, user emoji eye area, user
emoji mouth dimension, user emoji ear dimension, user emoji eyebrow area, user emoji
cheek area, and user emoji nose area.
Now, once the emoji facial parameter determination unit 220 has determined the set of user
emoji facial parameter values for the user emoji generated by the emoji generation unit
218, the system 102 is tasked with transforming the generated user emoji into multiple
emojis, each expressing a different user emotion. For this purpose, based on the set of user
emoji facial parameter values, the execution unit 222 selects and applies a pre-trained
modulation factor stored in the memory 206. For instance, if the execution unit 222 has
to transform the generated user emoji into a new user emoji expressing happy emotion, it
would refer to pre-trained modulation factors 210 stored in the memory 206 and select a
modulation factor (MFHappy) that can transform the user emoji into the new user emoji
expressing happy emotion with minimum modulation. Similarly, the same generated user
emoji may be transformed into new user emojis, each expressing distinct emotion like
angry, shocked etc. by applying pre-trained modulation factors – MFAngry and MFShocked
respectively. The new user emojis expressing different emotions may be provided to a user
in his/her profile as a go to option to choose from while text messaging with another user
or the new user emojis expressing different emotions can be created on the request of the
user for an emoji depicting a certain emotion.
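The “minimum modulation” selection could be sketched as follows; the dictionary representation and the distance-from-identity criterion are assumptions for exposition, not features fixed by the disclosure.

```python
# Illustrative: for a given emotion, select the trained modulation factor that
# requires the least modulation of the user emoji, then apply it. The selection
# criterion (distance of each ratio from 1.0) is assumed for this sketch.

def modulation_cost(factor):
    """How far a factor is from leaving the user emoji unchanged."""
    return sum(abs(value - 1.0) for value in factor.values())

def make_expressive_emoji(user_emoji_params, factors_for_emotion):
    best = min(factors_for_emotion, key=modulation_cost)
    return {k: user_emoji_params[k] * best.get(k, 1.0) for k in user_emoji_params}

factors_for_happy = [{"eye_area": 0.878, "mouth_width": 0.902},
                     {"eye_area": 0.910, "mouth_width": 0.955}]
user_emoji_params = {"eye_area": 4.9, "mouth_width": 3.9}    # from the user's image
print(make_expressive_emoji(user_emoji_params, factors_for_happy))
```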
The system 102 thus provides an efficient way of creating multiple user emojis, each
expressing different user emotions from a single image of the user.
Figure 3 shows a method 300 for generating a trained modulation factor for each emoji in
accordance with an embodiment of the present disclosure.
As illustrated in figure 3, the method 300 includes one or more blocks illustrating a method
for generating a trained modulation factor for each emoji. The method 300 may be
described in the general context of computer executable instructions. Generally, computer
executable instructions can include routines, programs, objects, components, data
structures, procedures, modules, and functions, which perform specific functions or
implement specific abstract data types.
The order in which the method 300 is described is not intended to be construed as a
limitation, and any number of the described method blocks can be combined in any order
to implement the method. Additionally, individual blocks may be deleted from the methods
without departing from the spirit and scope of the subject matter described.
At block 302, the method 300 may include receiving a plurality of human facial images
corresponding to a plurality of emotions over a period of time-interval.
At block 304, the method 300 may include determining a set of human-facial parameter
values for each human facial image.
At block 306, the method 300 may include generating a plurality of emojis corresponding
to the plurality of human facial images based on the set of human-facial parameter values.
At block 308, the method 300 may include determining a set of emoji-facial parameter
values for each emoji.
At block 310, the method 300 may include determining a variation between the set of
emoji-facial parameter values and the set of human-facial parameter values for each human
facial image.
At block 312, the method 300 may include generating the trained modulation factor for
each emoji based on the variation. The trained modulation factor comprises a set of
modulated emoji-facial parameter values indicating the values required for matching the
plurality of emojis with the corresponding plurality of human facial images. Further, the
trained modulation factor is generated for each emotion of the plurality of emotions.
Now, figure 4 shows a method 400 for generating emojis expressing user’s emotions in
real-time using the pre-trained modulation factor. The method 400 may be described in the
general context of computer executable instructions. Generally, computer executable
instructions can include routines, programs, objects, components, data structures,
procedures, modules, and functions, which perform specific functions or implement
specific abstract data types.
The order in which the method 400 is described is not intended to be construed as a
limitation, and any number of the described method blocks can be combined in any order
to implement the method. Additionally, individual blocks may be deleted from the methods
without departing from the spirit and scope of the subject matter described.
At block 402, the method 400 may include receiving a facial image of a user in real-time.
At block 404, the method 400 may include determining a set of user facial parameter values
for the facial image of the user.
At block 406, the method 400 may include generating a user emoji corresponding to the
facial image of the user based on the set of user facial parameter values.
At block 408, the method 400 may include determining a set of user emoji-facial parameter
values for the user emoji.
At block 410, the method 400 may include applying a trained modulation factor upon the
set of user emoji-facial parameter values in order to generate a new user emoji expressing
the user’s emotion.
A description of an embodiment with several components in communication with each
other does not imply that all such components are required. On the contrary, a variety of
optional components are described to illustrate the wide variety of possible embodiments
of the invention.
When a single device or article is described herein, it will be clear that more than one
device/article (whether or not they cooperate) may be used in place of a single device/article.
Similarly, where more than one device or article is described herein (whether or not they
cooperate), it will be clear that a single device/article may be used in place of the more
than one device or article or a different number of devices/articles may be used instead of
the shown number of devices or programs. The functionality and/or the features of a device
may be alternatively embodied by one or more other devices which are not explicitly
described as having such functionality/features. Thus, other embodiments of the invention
need not include the device itself.
Finally, the language used in the specification has been principally selected for readability
and instructional purposes, and it may not have been selected to delineate or circumscribe
the inventive subject matter. It is therefore intended that the scope of the invention be
limited not by this detailed description, but rather by any claims that issue on an application
based hereon. Accordingly, the embodiments of the present invention are intended to be
illustrative, but not limiting, of the scope of the invention, which is set forth in the
following claims.
While various aspects and embodiments have been disclosed herein, other aspects and
embodiments will be apparent to those skilled in the art. The various aspects and
embodiments disclosed herein are for purposes of illustration and are not intended to be
limiting, with the true scope and spirit being indicated by the following claims.
Advantages of the embodiments of the present disclosure are illustrated herein.
1. Enhancing user experience by providing a set of emojis derived from the user's image
alone while chatting;
2. Providing a personalized experience to the users while using chatting platforms.

We Claim:
1. A method of generating emojis expressing user’s emotions, the method comprising:
receiving (402) a facial image (104) of the user in real-time;
determining (404) a set of user facial parameter values for the facial image (104)
of the user;
generating (406) a user emoji corresponding to the facial image (104) of the user
based on the set of user facial parameter values;
determining (408) a set of user emoji-facial parameter values for the user emoji;
and
applying (410) a trained modulation factor upon the set of user emoji-facial
parameter values in order to generate a new user emoji expressing the user’s emotion.
2. The method as claimed in claim 1, wherein the trained modulation factor for each
emoji is generated by:
receiving (302) a plurality of human facial images corresponding to a plurality of
emotions over a period of time-interval;
determining (304) a set of human-facial parameter values for each human facial
image;
generating (306) a plurality of emojis corresponding to the plurality of human facial
images based on the set of human-facial parameter values;
determining (308) a set of emoji-facial parameter values for each emoji;
determining (310) a variation between the set of emoji-facial parameter values and
the set of human-facial parameter values for each human facial image; and
generating (312) the trained modulation factor for each emoji based on the
variation, wherein the trained modulation factor comprises a set of modulated emoji-facial
parameter values indicating the values required for matching the plurality of emojis with
the corresponding plurality of human facial images, and wherein the trained modulation
factor is generated for each emotion of the plurality of emotions.
3. The method as claimed in claim 1, wherein:
the set of user facial parameter values comprises at least one of user eye area, user
mouth dimension, user ear dimension, user eyebrow area, user cheek area, and user nose
area;
the set of user emoji-facial parameter values comprises at least one of user emoji
eye area, user emoji mouth dimension, user emoji ear dimension, user emoji eyebrow area,
user emoji cheek area, and user emoji nose area;
the set of human-facial parameter values comprises at least one of human eye area,
human mouth dimension, human ear dimension, human eyebrow area, human cheek area,
and human nose area; and
the set of emoji-facial parameter values comprises at least one of emoji eye area,
emoji mouth dimension, emoji ear dimension, emoji eyebrow area, emoji cheek area, and
emoji nose area.
4. The method as claimed in claim 1, wherein the user emoji is generated in at
least one of a 2-dimensional or a 3-dimensional form.
5. The method as claimed in claim 1, wherein the user's emotions comprise at least
one of anger, contempt, disgust, fear, happiness, sadness and surprise.
6. A system (102) for generating emojis expressing user’s emotions, the system (102)
comprising:
a receiving unit (214) configured to receive a facial image (104) of the user in
real-time;
a facial parameter determination unit (216) configured to determine a set of user
facial parameter values for the facial image (104) of the user;
an emoji generation unit (218) configured to generate a user emoji corresponding
to the facial image (104) of the user based on the set of user facial parameter values;
an emoji facial parameter determination unit (220) configured to determine a set of
user emoji-facial parameter values for the user emoji; and
an execution unit (222) configured to apply a trained modulation factor upon the
set of user emoji-facial parameter values in order to generate a new user emoji expressing
the user’s emotion.
7. The system (102) as claimed in claim 6, further comprising a generation unit (224)
configured to generate the trained modulation factor for each emoji by:
receiving a plurality of human facial images corresponding to a plurality of
emotions over a period of time-interval;
determining a set of human-facial parameter values for each human facial image;
generating a plurality of emojis corresponding to the plurality of human facial
images based on the set of human-facial parameter values;
determining a set of emoji-facial parameter values for each emoji;
determining a variation between the set of emoji-facial parameter values and the
set of human-facial parameter values for each human facial image; and
generating the trained modulation factor for each emoji based on the variation,
wherein the trained modulation factor comprises a set of modulated emoji-facial parameter
values indicating the values required for matching the plurality of emojis with the
corresponding plurality of human facial images, and wherein the trained modulation factor
is generated for each emotion of the plurality of emotions.
8. The system (102) as claimed in claim 6, wherein:
the set of user facial parameter values comprises at least one of user eye area, user
mouth dimension, user ear dimension, user eyebrow area, user cheek area, and user nose
area;
the set of user emoji-facial parameter values comprises at least one of user emoji
eye area, user emoji mouth dimension, user emoji ear dimension, user emoji eyebrow area,
user emoji cheek area, and user emoji nose area;
the set of human-facial parameter values comprises at least one of human eye area,
human mouth dimension, human ear dimension, human eyebrow area, human cheek area,
and human nose area; and
the set of emoji-facial parameter values comprises at least one of emoji eye area,
emoji mouth dimension, emoji ear dimension, emoji eyebrow area, emoji cheek area, and
emoji nose area.
9. The system (102) as claimed in claim 6, wherein the user emoji is generated
in at least one of a 2-dimensional or a 3-dimensional form.
10. The system (102) as claimed in claim 6, wherein the user's emotions comprise at
least one of anger, contempt, disgust, fear, happiness, sadness and surprise.

Documents

Application Documents

# Name Date
1 202011000571-STATEMENT OF UNDERTAKING (FORM 3) [06-01-2020(online)].pdf 2020-01-06
2 202011000571-PROVISIONAL SPECIFICATION [06-01-2020(online)].pdf 2020-01-06
3 202011000571-POWER OF AUTHORITY [06-01-2020(online)].pdf 2020-01-06
4 202011000571-FORM 1 [06-01-2020(online)].pdf 2020-01-06
5 202011000571-DRAWINGS [06-01-2020(online)].pdf 2020-01-06
6 202011000571-DECLARATION OF INVENTORSHIP (FORM 5) [06-01-2020(online)].pdf 2020-01-06
7 abstract.jpg 2020-01-17
8 202011000571-Proof of Right [10-02-2020(online)].pdf 2020-02-10
9 202011000571-COMPLETE SPECIFICATION [04-01-2021(online)].pdf 2021-01-04
10 202011000571-CORRESPONDENCE-OTHERS [04-01-2021(online)].pdf 2021-01-04
11 202011000571-DRAWING [04-01-2021(online)].pdf 2021-01-04
12 202011000571-CERTIFIED COPIES TRANSMISSION TO IB [23-02-2021(online)].pdf 2021-02-23
13 202011000571-Covering Letter [23-02-2021(online)].pdf 2021-02-23
14 202011000571-Request Letter-Correspondence [23-02-2021(online)].pdf 2021-02-23
15 202011000571-FORM 18 [14-11-2023(online)].pdf 2023-11-14
16 202011000571-FER.pdf 2025-04-03
17 202011000571-FORM 3 [12-05-2025(online)].pdf 2025-05-12
18 202011000571-FER_SER_REPLY [03-10-2025(online)].pdf 2025-10-03
19 202011000571-DRAWING [03-10-2025(online)].pdf 2025-10-03

Search Strategy

1 202011000571E_27-03-2024.pdf