Abstract: The present disclosure relates to a technique for providing one or more avatars from an encrypted image of a user. The technique comprises receiving at least one encrypted image of the user from a user device and classifying the encrypted image using a neural network based on a tagged dataset enclosed with the encrypted image. The technique further comprises identifying one or more avatars associated with the encrypted image based on said classification, and providing the one or more avatars based on said identification for user selection.
FIELD OF THE INVENTION:
[001] The present disclosure relates to a technique for securely generating and providing
emoji/avatars/animoji from a user's image. More specifically, the disclosure relates to
generating and providing one or more emoji/avatars/animoji from an encrypted image of a
user while maintaining the privacy of the user.
BACKGROUND OF THE INVENTION:
[002] The usage of avatars and emojis in instant messaging services is well known. Avatars/emoji
help express the real emotional reaction of a user while communicating through instant
messaging services. With the advancement of technology, a user is now able to generate
avatars/emoji from his or her selfie image.
[003] For the generation of avatars/emoji from a selfie or image, a user captures an image and
sends it to a server. The server uses several techniques to generate avatars/emoji from the
received user image.
[004] The problem, however, is that the received image of the user is stored in unencrypted
form on the server. This has serious privacy implications: a hacker might compromise the
server and retrieve or misappropriate the user image and related user information
without the consent of the user. Further, this may also lead to dissemination of the user's
personal information on the web, which is not at all desirable.
[005] Thus, there is a need in the art to protect user data and ensure privacy such that a user image
can be securely transmitted to the server and processed to generate the desired avatar.
[006] It may be noted that this Background is provided to introduce a brief context for the
Summary and Detailed Description that follow. This Background is not intended to be an
aid in determining the scope of the claimed subject matter, nor to be viewed as limiting the
described subject matter to implementations that solve any or all of the disadvantages or
problems presented by existing art.
SUMMARY OF THE INVENTION:
[007] The present disclosure overcomes one or more shortcomings of the prior art and provides
additional advantages discussed throughout the present disclosure. Additional features and
advantages are realized through the techniques of the present disclosure. Other
embodiments and aspects of the disclosure are described in detail herein and are considered
a part of the claimed disclosure.
[008] It is to be understood that the aspects and embodiments of the disclosure described below
may be used in any combination with each other. Several of the aspects and embodiments
may be combined together to form a further embodiment of the disclosure.
[009] In one non-limiting embodiment of the present disclosure, a method for providing one or
more avatars from an encrypted image of a user is disclosed. The method comprises
receiving at least one encrypted image of the user from a user device. The method further
comprises classifying the encrypted image using a neural network based on a tagged dataset
enclosed with the encrypted image. The method further comprises identifying one or more
avatars associated with the encrypted image based on said classification. Finally, the
method comprises providing the one or more avatars based on said identification for user
selection.
[010] In still another non-limiting embodiment of the present disclosure, the method discloses
that the encrypted image is generated using a web platform available on the user device, in
communication with at least one server.
[011] In yet another non-limiting embodiment of the present disclosure, the method discloses
that the encrypted image is generated at the user device from an image or selfie of the user,
wherein the encrypted image further includes a tagged dataset attached within.
[012] In still another non-limiting embodiment of the present disclosure, the method discloses
that classifying the encrypted image using the neural network based on the tagged dataset
associated with the encrypted image, further comprises analyzing the tagged dataset of the
encrypted image, wherein the tagged dataset represents a selfie or image mapped to a set
of avatar or emoji components, and wherein the emoji components represent a user’s
appearance in a unique fashion. Further, the method comprises training the neural network
to generate the set of avatar or emoji components from the tagged dataset to provide the
one or more avatars or emoji.
[013] In yet another non-limiting embodiment of the present disclosure, the method further
comprises transmitting the at least one or more avatars to the user device to be used in the
web platform.
[014] In another non-limiting embodiment of the present disclosure, a system for providing one
or more avatars from an encrypted image of a user is provided. The system comprises one
or more user devices and a server. The server is in communication with the one or more user
devices. The server comprises a transceiver which is configured to receive at least one
encrypted image of the user from a user device among the one or more user devices.
Further, the server comprises a processing unit which is electrically coupled to the
transceiver. The processing unit is configured to classify the encrypted image using a
neural network based on a tagged dataset enclosed with the encrypted image. The processing
unit is further configured to identify one or more avatars associated with the encrypted
image based on said classification. Finally, the processing unit is configured to provide the
one or more avatars based on said identification for user selection.
[015] In yet another non-limiting embodiment of the present disclosure, the encrypted image is
generated using a web platform available on the user device, in communication with at
least one server.
[016] In still another non-limiting embodiment of the present disclosure, the encrypted image is
generated at the user device from an image or selfie of the user, wherein the encrypted
image further includes a tagged dataset attached within.
[017] In yet another non-limiting embodiment of the present disclosure, the processing unit when
classifying the encrypted image using the neural network based on the tagged dataset
associated with the encrypted image, is configured to analyze the tagged dataset of the
encrypted image, wherein the tagged dataset represents a selfie or image mapped to a set
of avatar or emoji components, and wherein the emoji components represent a user’s
appearance in a unique fashion. Further, the processing unit is configured to train the neural
network to generate the set of avatar or emoji components from the tagged dataset to provide
the one or more avatars or emoji.
[018] In still another non-limiting embodiment of the present disclosure, the transceiver is further
configured to transmit the at least one or more avatars to the user device to be used in the
web platform.
[019] In another non-limiting embodiment of the present disclosure, a system for providing one
or more avatars from an encrypted image of a user is provided. The system comprises one
or more user devices, and a server comprising a first module and a second module. The
first module is configured to receive at least one encrypted image of the user from a user
device among the one or more user devices. Further, the second module is configured to
classify the encrypted image using a neural network based on a tagged dataset enclosed
with the encrypted image. The second module is further configured to identify one or more
avatars associated with the encrypted image based on said classification. Finally, the second
module is configured to provide the one or more avatars based on said identification
for user selection.
[020] In yet another non-limiting embodiment of the present disclosure, the second module when
classifying the encrypted image using the neural network based on the tagged dataset
associated with the encrypted image, is configured to analyze the tagged dataset of the
encrypted image, wherein the tagged dataset represents a selfie or image mapped to a set
of avatar or emoji components, and wherein the emoji components represent a user’s
appearance in a unique fashion. Further, the second module is configured to train the neural
network to generate the set of avatar or emoji components from the tagged dataset to provide
the one or more avatars or emoji.
[021] In still another non-limiting embodiment of the present disclosure, the first module is
further configured to transmit the at least one or more avatars to the user device to be used
in the web platform.
[022] The foregoing summary is illustrative only and is not intended to be in any way limiting. In
addition to the illustrative aspects, embodiments, and features described above, further
aspects, embodiments, and features will become apparent by reference to the drawings and
the following detailed description.
OBJECTS OF THE INVENTION:
[023] The main object of the present invention is to securely generate one or more avatars from
an encrypted image of a user.
[024] A further object of the present invention is to provide one or more avatars of a user to be
used on a web platform on a user device without compromising the privacy of the user.
BRIEF DESCRIPTION OF DRAWINGS:
[025] The accompanying drawings, which are incorporated in and constitute a part of this
disclosure, illustrate exemplary embodiments and, together with the description, serve to
explain the disclosed embodiments. In the figures, the left-most digit(s) of a reference
number identifies the figure in which the reference number first appears. The same
numbers are used throughout the figures to reference like features and components. Some
embodiments of system and/or methods in accordance with embodiments of the present
subject matter are now described, by way of example only, and with reference to the
accompanying figures, in which:
[026] Figure 1 illustrates a system facilitating the present invention according to an embodiment
of the present disclosure.
[027] Figure 2a illustrates a block diagram of a server for providing one or more avatars from
an encrypted image of a user according to an embodiment of the present disclosure.
[028] Figure 2b illustrates a block diagram of the server for providing one or more avatars from
an encrypted image of a user according to an embodiment of the present disclosure.
[029] Figure 3 discloses a flowchart of a method for providing one or more avatars from an
encrypted image of a user according to an embodiment of present disclosure.
[030] It should be appreciated by those skilled in the art that any block diagrams herein represent
conceptual views of illustrative systems embodying the principles of the present subject
matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition
diagrams, pseudo code, and the like represent various processes which may be substantially
represented in computer readable medium and executed by a computer or processor,
whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF DRAWINGS:
[031] Referring now to the drawings, there is shown an illustrative embodiment of the disclosure
“A system for providing one or more avatars from an encrypted image of a user and a
method thereof”. It is understood that the disclosure is susceptible to various modifications
and alternative forms; specific embodiments thereof have been shown by way of example
in the drawings and will be described in detail below. It will be appreciated as the
description proceeds that the disclosure may be realized in different embodiments.
[032] In the present document, the word “exemplary” is used herein to mean “serving as an
example, instance, or illustration”. Any embodiment or implementation of the present
subject matter described herein as “exemplary” is not necessarily to be construed as
preferred or advantageous over other embodiments.
[033] While the disclosure is susceptible to various modifications and alternative forms, specific
embodiments thereof have been shown by way of example in the drawings and will be
described in detail below. It should be understood, however, that it is not intended to limit
the disclosure to the particular forms disclosed; on the contrary, the disclosure is to
cover all modifications, equivalents, and alternatives falling within the scope of the
disclosure.
[034] The terms “comprises”, “comprising”, “include(s)”, or any other variations thereof, are
intended to cover a non-exclusive inclusion, such that a setup, system or method that
comprises a list of components or steps does not include only those components or steps
but may include other components or steps not expressly listed or inherent to such setup or
system or method. In other words, one or more elements in a system or apparatus proceeded
by “comprises… a” does not, without more constraints, preclude the existence of other
elements or additional elements in the system or apparatus.
[035] The term "virtual world" in the context of the present application may refer to an environment,
wherein said environment may represent a real or fictitious world governed by rules of
interaction. In other words, virtual world may refer to simulated environment where a user
may be able to make changes in the virtual environment as per his/her choice and is allowed
to interact within such environment via his/her avatar. In particular, users in the virtual
world may appear on a platform in the form of representations referred to as avatars. The
degree of interaction between the avatars and the simulated environment may be
implemented by one or more applications that govern such interactions as simulated
physics, exchange of information between users, and the like. In an exemplary
embodiment, the terms virtual world, virtual environment and virtual reality may be used
interchangeably without departing from the scope of the present application.
[036] An avatar, in the context of the present application, relates to a graphical representation of a
user, the user's image/selfie or the user's character. Thus, it may be said that an avatar may be
configured to represent an emotion/expression/feeling of the user by means of an image
converted into an avatar capturing such emotion/expression/feeling through various facial
expressions or added objects such as hearts, kisses, etc. Further, it is to be appreciated that
an avatar may take either a two-dimensional form, as an icon on a virtual platform such as
messaging/chat platforms, or a three-dimensional form, such as in a virtual environment.
Further, in an exemplary embodiment, an animoji/emoji/avatar may be generated
from a user's selfie/image in the context of the present invention, and thus said terms may be
used interchangeably.
[037] According to an aspect, the present disclosure provides a technique that ensures user’s
privacy by providing one or more avatars from an encrypted image of a user. In the
proposed technique, a server receives at least one or more encrypted images of a user from
a user device, the user device being in communication with the server. The encrypted image
is generated using a web platform available on the user device. The encrypted image is
generated at the user device from an image or selfie of the user, wherein the encrypted
image further includes a tagged dataset attached within. In one of the exemplary
embodiments, the tagged dataset represents a selfie or image mapped to a set of avatar or
emoji components, and wherein the emoji components represent a user’s appearance in a
unique fashion. Further, in the proposed technique, the server classifies the encrypted
image using a neural network based on the tagged dataset enclosed with the encrypted
image. For classification, the server analyzes the tagged dataset of the encrypted image
and trains the neural network to generate the set of avatar or emoji components from the
tagged dataset to provide the one or more avatars or emoji. Based on said classification,
the server identifies one or more avatars associated with the encrypted image and provides
the one or more avatars based on said identification, which are transmitted to the user
device for user selection.
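By way of a non-limiting illustration, the receive–classify–identify–provide sequence described above may be sketched as follows. Every name in the sketch (classify_encrypted_image, identify_avatars, the avatar store and the message layout) is a hypothetical stand-in for the trained neural network and avatar repository of the disclosure, not part of it.

```python
# Illustrative server-side flow: receive an encrypted image carrying a
# tagged dataset, classify it, look up associated avatars, provide them.

def classify_encrypted_image(encrypted_image: dict) -> str:
    """Classify using only the tagged dataset enclosed with the ciphertext."""
    # The tagged dataset travels alongside the ciphertext, so the server
    # never needs to decrypt the image itself.
    tagged = encrypted_image["tagged_dataset"]
    return max(tagged, key=tagged.get)  # e.g. dominant component score

def identify_avatars(label: str) -> list[str]:
    """Look up avatars associated with the classification result."""
    avatar_store = {"smile": ["smile_v1", "smile_v2"], "grin": ["grin_v1"]}
    return avatar_store.get(label, [])

def provide_avatars(encrypted_image: dict) -> list[str]:
    """End-to-end: receive -> classify -> identify -> provide for selection."""
    label = classify_encrypted_image(encrypted_image)
    return identify_avatars(label)

# A received message: ciphertext bytes plus the enclosed tagged dataset.
message = {"ciphertext": b"...", "tagged_dataset": {"smile": 0.9, "grin": 0.1}}
print(provide_avatars(message))  # -> ['smile_v1', 'smile_v2']
```

The sketch keeps the privacy property central to the disclosure: only the tags, never the image content, drive the server-side decision.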
[038] In the following detailed description of the embodiments of the disclosure, reference is
made to the accompanying drawings that form a part hereof, and in which are shown by
way of illustration specific embodiments in which the disclosure may be practiced. These
embodiments are described in sufficient detail to enable those skilled in the art to practice
the disclosure, and it is to be understood that other embodiments may be utilized and that
changes may be made without departing from the scope of the present disclosure. The
following description is, therefore, not to be taken in a limiting sense.
[039] Fig. 1 shows an exemplary system 100 to provide one or more avatars from an encrypted
image of a user. The system 100 comprises one or more user terminals 102a … 102n and
a server 108 which are communicably coupled to each other via a network 106. The server
108 further comprises a processor 110 and a memory 112, collectively referred to as a
processing unit.
[040] In the illustrated embodiment, the system 100 includes two user terminals 102a and 102b
connected to a server 108 through a communication network 106. In an alternative
embodiment, the system 100 may include a plurality of user terminals 102a, 102b, …
102n. Each of the plurality of said user terminals 102a … 102n may include one or more
processing units, encryption units, a transceiver, and one or more memories communicably
coupled to each other (not shown), to implement one or more functionalities of the user
terminals respectively. Examples of the user terminals 102a … 102n may include, but are not
limited to, a personal computer, a mobile phone, a laptop, a tablet and so forth. Further,
each of the user terminals 102a … 102n may include any number of other components as
required for their operation. However, description of such components has been omitted
for the sake of brevity.
[041] In a non-limiting example, user terminals 102a and 102b are in a chat session or
conversation with each other on a virtual communication platform or a web platform. The
user terminals 102a and 102b may use one or more avatars while communicating with each
other. In a non-limiting example, the web platform or the virtual communication platform
may refer to HikeLand or any other instant messaging service platform.
[042] The present disclosure is particularly applicable when the user terminal 102a is in
communication with at least one other user terminal 102b through the instant messaging
service as indicated above. The server 108 may be configured to provide the instant
messaging service to the user terminals 102a and 102b. When a user is in communication
with the at least one another user, the user may wish to communicate with the at least one
another user through one or more avatars/emojis. To further enjoy the communication
experience with the one or more another users, the user may wish to use avatars/emoji
which resemble the user's face, to better convey the emotional reaction of the user. In such
a case, a selfie/image of the user may be transmitted to the server 108 by the user terminal
102a and/or 102b, and an emoji may be retrieved from the server 108. The transmitting of the
selfie/image of the user to the server 108 and the retrieval of the emoji/avatar from the
server 108 is now explained below:
[043] In an exemplary embodiment, a user terminal 102a may be configured to capture a user image
or selfie and generate an encrypted image using the web platform before sharing it with the
server 108. To encrypt the selfie or user image, the encryption unit of the user terminal
102a may use any available encryption algorithm which serves the purpose of the present
disclosure. In one of the non-limiting examples, the encryption algorithm is Cheon-Kim-Kim-Song
(CKKS). This algorithm provides various functions that allow vector operations to be
performed directly on the ciphertext without having to decrypt the image. However, it may
be noted from above, the user terminals 102 may encrypt the captured images and/or selfie
using any other suitable encryption technique. With the use of such scheme of encryption,
the encrypted image remains encrypted even when transmitted to the server 108 and the
server 108 does not have direct access to the content of the encrypted image. This ensures
the privacy of the user while communicating with server.
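The defining property of such homomorphic schemes is that arithmetic can be carried out on ciphertexts without decrypting them. By way of a non-limiting illustration of that property (not of CKKS itself, which operates on approximate vectors of complex numbers), the toy Paillier cryptosystem below shows a computation on two ciphertexts that the computing party never decrypts; the key sizes are deliberately tiny and insecure.

```python
# Toy Paillier cryptosystem (pure Python, insecure key sizes) illustrating
# the homomorphic property the disclosure relies on: a server can combine
# ciphertexts without ever seeing the plaintexts.
import math
import random

p, q = 1789, 1867          # toy primes; real deployments use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)                   # Carmichael function λ(n)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # μ = L(g^λ mod n²)⁻¹ mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m · r^n mod n² for a random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^λ mod n²) · μ mod n, with L(x) = (x − 1) / n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(12), encrypt(30)
# Multiplying ciphertexts adds the underlying plaintexts -- no decryption
# is needed by whoever performs the multiplication.
assert decrypt((c1 * c2) % n2) == 42
```

This is the same shape of guarantee stated above for the server 108: it can process the encrypted image while having no direct access to its content.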
[044] Before encrypting the user selfie or image, the user terminal 102a may be configured to
create a tagged data set by mapping the user’s selfie or image to a set of emoji or avatar
components. In a non-limiting example, the emoji or avatar components represent a user’s
appearance in a unique fashion. For example, the emoji or avatar components may refer to
characteristic features of the user’s face. Thereafter, the encryption unit of the user terminal
102a may be configured to encrypt the user’s selfie and generate a corresponding dataset
with the same set of emoji or avatar components as mapped earlier. After generating the
encrypted image including the tagged data set, the user terminal 102a … 102n may be
configured to transmit the encrypted image of the user including the tagged dataset to the
server 108. In a non-limiting example, the user terminal 102a may store the encrypted
image along with the tagged dataset in the memory (not shown) of the user terminal 102a.
It may be worth noting that each of the user terminals 102a … 102n may perform similar
operations/functions as described in paragraphs [042]-[043], to achieve the technical
effects/advantages of the present disclosure.
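By way of a non-limiting illustration, the tagged dataset attached to the encrypted image may be pictured as below. The component names ("hair", "forehead", "chin"), the values, and the message layout are all hypothetical; the disclosure only requires that the selfie be mapped to a set of avatar/emoji components characterising the user's appearance.

```python
# Illustrative sketch of the tagged dataset "attached within" the
# encrypted image before it leaves the user terminal.
from dataclasses import dataclass, field

@dataclass
class TaggedDataset:
    """Selfie mapped to avatar/emoji components (face shape, hair, etc.)."""
    components: dict[str, str] = field(default_factory=dict)

def tag_selfie(selfie_features: dict[str, str]) -> TaggedDataset:
    # In the described technique this mapping runs on the user terminal
    # before encryption, so only the tags -- not the raw image -- are
    # readable at the server.
    return TaggedDataset(components=dict(selfie_features))

def package(ciphertext: bytes, tags: TaggedDataset) -> dict:
    """Encrypted image with the tagged dataset attached within."""
    return {"ciphertext": ciphertext, "tagged_dataset": tags.components}

msg = package(b"<encrypted selfie>", tag_selfie(
    {"hair": "short curly", "forehead": "broad", "chin": "cleft"}))
print(sorted(msg["tagged_dataset"]))  # -> ['chin', 'forehead', 'hair']
```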
[045] Upon receiving at least one encrypted image from the user terminals 102a……102n, the
server 108 may classify the at least one encrypted image including the tagged dataset to
identify one or more avatar/emoji associated with the encrypted image. In an exemplary
embodiment, for classification, the server 108 may use a neural network model, wherein
the model is trained to predict the expected avatar/emoji. For example, to classify
the at least one encrypted image, the server 108 may train the neural network to analyze
the tagged dataset included in the at least one encrypted image, wherein the tagged dataset
represents a selfie or image mapped to a set of avatar or emoji components. Each of these
emoji components represent a user’s appearance in a unique fashion. Further, the server
108 may train the neural network to generate the set of avatar or emoji components from
the tagged dataset to provide or generate the one or more avatars or emoji. Several datasets
may be trained in this way using neural network techniques. After generating the one or
more avatars or emoji from the encrypted image, the server 108 may be configured to
transmit the one or more avatars to the respective user terminals 102a … 102n to be
selected and used in the respective web platforms.
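By way of a non-limiting illustration of the classification step, the sketch below maps a numeric component vector to the nearest known avatar class. In the disclosure this role is played by a trained neural network; a nearest-prototype classifier is used here only to keep the example self-contained, and the prototype vectors and class names are hypothetical.

```python
# Minimal stand-in for the trained model: classify a tagged component
# vector by its nearest prototype in L2 distance.
import math

# Prototype component vectors, e.g. [eye_openness, mouth_curve].
prototypes = {
    "smile":   [0.4, 0.9],
    "grin":    [0.8, 0.7],
    "neutral": [0.5, 0.0],
}

def classify(tagged_vector: list[float]) -> str:
    """Return the avatar class whose prototype is nearest to the tags."""
    return min(prototypes,
               key=lambda name: math.dist(prototypes[name], tagged_vector))

print(classify([0.45, 0.85]))  # -> 'smile'
```

A real deployment would replace the prototype table with network weights learned from many tagged datasets, as described above.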
[046] By way of an exemplary embodiment, user A may be communicating with another user B
using an instant messaging service and may wish to include an emoji of a “smile” in the
conversation with user B. Further, the user A wishes that the emoji of “smile” should
resemble the face of the user A. Thus, user A clicks a selfie image through the camera of
the user terminal 102a available with user A. The encryption unit of the user terminal 102a
encrypts the selfie image clicked by the user A using homomorphic encryption techniques.
Once encrypted, the user terminal 102a transmits the encrypted image to the server 108,
which provides the instant messaging services to user A and user B.
[047] Once the encrypted image is received by the server 108, the server 108 generates a happy
emoji/avatar from the user image using the neural network model which is trained to
generate an emoji from an encrypted selfie image of the user. Once an emoji/avatar is
generated, the server transmits the generated avatar/emoji to the user terminal 102a. User
A may then use the received avatar/emoji in the instant messaging service.
[048] The detailed explanation of the server 108 for providing one or more avatars from an
encrypted image of a user is provided below in Figs. 2a and 2b.
[049] FIG. 2a shows a detailed block diagram 200a illustrating the server 108 of system 100 in
FIG. 1, in accordance with an embodiment of the present disclosure. According to an
embodiment of the present disclosure, the server 108 comprises an I/O interface 202, a
processing unit 204 and a transceiver 206. In one exemplary embodiment, the processing unit
204 may comprise a processor 210 and a memory 208. In another exemplary embodiment,
the processing unit 204 may comprise at least one processor 210, and the memory 208 may
reside outside the processing unit 204, in electronic communication with the
processing unit 204. Further, the memory 208 may store information including, but not limited to,
encrypted images of various users, etc. The I/O interface 202 and the transceiver 206 may
be electronically coupled to the memory 208 and/or the processing unit 204. The I/O
interface 202 may include a variety of software and hardware interfaces, for example, a
web interface, a graphical user interface, input device, output device and the like. The I/O
interface 202 may allow the server 108 to interact with the user terminal or device directly
or through other devices. The memory 208 is communicatively coupled to the processing
unit 204.
[050] In an embodiment, the memory 208 may be a computer-readable medium known in the art
including, for example, volatile memory, such as static random access memory (SRAM)
and dynamic random access memory (DRAM), and/or non-volatile memory, such as read
only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical
disks, and magnetic tapes.
[051] In an embodiment, the information may be stored within the memory 208 in the form of
various data structures. Additionally, the information stored in memory may be organized
using data models, such as relational or hierarchical data models or lookup tables. The
memory may also store other data such as temporary data and temporary files, generated
by the various units 204, 206 for performing the various functions of the server 108.
[052] In an embodiment, the information may be processed by one or more processing units 204
of the server 108. As used herein, the term ‘unit’ refers to an application specific integrated
circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), a
combinational logic circuit, and/or other suitable components that provide the described
functionality. In an embodiment, the other units may be used to perform various
miscellaneous functionalities of the server 108. It will be appreciated that such units may
be represented as a single unit or a combination of different units.
[053] In an embodiment, the transceiver 206 may be configured to receive at least one encrypted
image from at least one of the user terminals 102a … 102n. Said encrypted image or selfie of
a user includes a tagged dataset. After receiving the at least one encrypted image or selfie,
the processing unit 204 may be configured to classify said encrypted image or selfie using
a neural network based on tagged dataset enclosed with said encrypted image. To classify
the at least one encrypted image, the processing unit 204 may be configured to use a neural
network to analyze the tagged dataset of the at least one encrypted image and to train the
neural network to generate the set of avatar or emoji components from the tagged dataset to
provide the one or more avatars or emoji. The tagged dataset represents a selfie or
image mapped to a set of avatar or emoji components, and wherein the emoji components
represent a user’s appearance in a unique fashion. Based on said classification, the
processing unit 204 may be configured to identify one or more avatars associated with the
encrypted image and to provide or generate the one or more avatars based on said
identification. After generating the one or more avatars or emoji from the
encrypted image, the transceiver 206 may be configured to transmit the one or more
avatars to the respective user terminals 102a … 102n to be used in the respective web
platforms.
[054] Upon receiving the avatar/emoji from the server 108, the user terminals 102a … 102n may
use the emoji/avatar in the conversation in the web platform.
[055] Figure 2b illustrates a block diagram 200b of the server 108 for providing one or more
avatars from an encrypted image of a user according to an embodiment of the present
disclosure. In an exemplary embodiment, the server 108 may comprise modules 211. In
some embodiments, the data stored in the memory 208 (not shown) may be processed by
the modules 211 of the server 108. The modules 211 may be stored within the memory 208
in the form of software instructions. In another example, the modules 211 may be communicatively
coupled to the processing unit 204, may be present outside the memory 208, and may be
implemented as hardware.
[056] In some embodiments, the modules 211 may include, for example, a first module 211a, a
second module 211b and other modules 211c. The other modules 211c may be used to
perform various miscellaneous functionalities of the server 108. It will be appreciated that
such aforementioned modules 211 may be represented as a single module or a combination
of different modules.
[057] The first module 211a may receive at least one encrypted image of the user from a user
device among the one or more user devices/terminals. Further, the second module 211b
may classify the encrypted image using a neural network based on a tagged dataset
enclosed with the encrypted image. The second module 211b is further configured to identify
one or more avatars associated with the encrypted image based on said classification. Finally,
the second module 211b is configured to provide the one or more avatars based on
said identification for user selection. The second module 211b, when classifying the
encrypted image using the neural network based on the tagged dataset associated with the
encrypted image, may analyze the tagged dataset of the encrypted image, wherein the
tagged dataset represents a selfie or image mapped to a set of avatar or emoji components,
and wherein the emoji components represent a user’s appearance in a unique fashion.
Further, the second module 211b is configured to train the neural network to generate the
set of avatar or emoji components from the tagged dataset to provide the one or more
avatars or emoji. Thereafter, the first module 211a is further configured to transmit the
one or more avatars to the user device to be used in the web platform.
[058] In another exemplary embodiment, the server 108 may comprise various “means” 211 for
providing one or more avatars from an encrypted image of a user. For example, the server
may comprise means for receiving 211a at least one encrypted image of the user from a
user device among the one or more user devices/terminals. Further, the server may
comprise means for classifying 211b the encrypted image using a neural network based on
a tagged dataset enclosed with the encrypted image. The means for classifying 211b further
comprises means for identifying one or more avatars associated with the encrypted image
based on said classification. Finally, the means for classifying 211b further comprises means
for providing the one or more avatars based on said identification for user selection.
The means for classifying 211b further comprises means for analyzing the tagged dataset
of the encrypted image, wherein the tagged dataset represents a selfie or image mapped to
a set of avatar or emoji components, and wherein the emoji components represent a user's
appearance in a unique fashion. The means for classifying 211b further comprises means
for training the neural network to generate the set of avatar or emoji components from the
tagged dataset to provide the one or more avatars or emoji. Thereafter, the means for
receiving 211a further comprises means for transmitting the one or more avatars
to the user device to be used in the web platform.
[059] It will be appreciated that such aforementioned means 211 may be implemented in
hardware, software or a combination of both.
[060] To understand the embodiments defined in Figs. 2a and 2b, let us consider an example in
which a user of the terminal 102a is in a conversation with a user of the other terminal 102b on
a virtual communication platform. Further, each of the user terminals 102a and 102b is
connected to the server 108 through a communication network 106. The conversation may
be by way of exchanging text, stickers, emoji, avatars, video, audio, etc., but is not limited
thereto. During the conversation, the user of the terminal 102a may wish to include an
emoji/avatar representing “grinning” in the conversation with the user of the other terminal
102b. Further, the user of the terminal 102a wishes that the “grinning” emoji should
resemble her facial appearance or facial characteristics, e.g., “short curly hair”, a “broad
forehead” and a “cleft chin”. Thus, the user of the terminal 102a clicks a selfie image
through the camera of the user terminal 102a. The user terminal 102a encrypts the selfie
image clicked by the user of the terminal 102a using any available homomorphic
encryption technique. Once encrypted, the user terminal 102a transmits the encrypted
image to the server 108, which provides messaging services to the users of terminals
102a and 102b.
[061] To ensure that the image or selfie of the user is transmitted to the server 108 in a secure
manner so as to keep the user’s privacy intact, the user terminal 102a may be configured to
encrypt the selfie of the user of the terminal 102a before transmitting the same to the server
108. Before encrypting, the terminal 102a generates a tagged dataset by mapping the selfie
of the user of the terminal 102a to a set of emoji or avatar components such as the “short
curly hair”, “broad forehead” and “cleft chin” desired by the user of the terminal 102a,
or to the features found in the selfie of the user. Thereafter, the user terminal 102a
encrypts the selfie as mapped with the unique facial characteristics of the user of the terminal
102a and generates a corresponding dataset tagged with the same set of emoji or avatar
components as mapped earlier. After generating the encrypted image including the tagged
dataset, the user terminal 102a transmits the encrypted image of the user, along with the
tagged dataset, to the server 108.
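The disclosure requires only “any available homomorphic encryption technique”. As one concrete (and deliberately toy) illustration, the client-side step can be sketched with a textbook Paillier cryptosystem, which is additively homomorphic; the primes below are far too small for real use, and the payload layout is an assumption of this sketch, not of the specification:

```python
# Toy Paillier encryption of selfie pixel values (insecure demo parameters);
# the specification permits any available homomorphic encryption technique.
import math, random

p, q = 293, 433                 # demo primes; real deployments need 1024+ bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):                        # Paillier's L function
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Terminal 102a encrypts each pixel and encloses the plaintext component tags.
pixels = [37, 128, 255]          # stand-in for selfie pixel data
payload = {
    "ciphertext": [encrypt(px) for px in pixels],
    "tagged_dataset": ["short curly hair", "broad forehead", "cleft chin"],
}
round_trip = [decrypt(c) for c in payload["ciphertext"]]
```

The additive homomorphism (the product of two ciphertexts decrypts to the sum of the plaintexts) is what lets a server compute on the image without ever seeing it.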
[062] Once the encrypted image of the user of user terminal 102a is received by the server 108,
the server 108 generates a “grinning” emoji/avatar from the encrypted image using the
neural network. After receiving the at least one encrypted image or selfie, the server 108
classifies said encrypted image or selfie using a neural network based on the tagged dataset
enclosed with said encrypted image. Since the tagged dataset represents the selfie or image
of the user of user terminal 102a mapped to a set of avatar or emoji components such as
the “short curly hair”, “broad forehead” and “cleft chin”, the server 108 uses the neural
network to analyze the tagged dataset and generate said set of avatar or emoji components
from the tagged data. It may be worth noting that the neural network is trained on
different datasets to generate the set of avatar or emoji components from the tagged data
to provide the at least one or more avatars or emoji. Once said set of avatar or emoji
components is generated, the server 108 provides or generates the at least one or more
“grinning” avatars or emoji, which resemble the unique facial appearance of the user of user
terminal 102a. After generating the at least one or more “grinning” avatars or emoji from
the encrypted image, the server 108 transmits the at least one or more “grinning” avatars
to the respective user terminal 102a to be selected and used in the conversation with the user
of the other terminal 102b on a virtual communication platform. Since the server 108 has no
direct access to the content of the image or selfie itself, and the neural network is
trained on an image equivalent (the encrypted image including the tagged dataset) to generate at
least one or more avatars/emoji, the user’s privacy is preserved.
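The server-side handling described above can be sketched as follows. The ciphertext stays opaque throughout; only the enclosed tagged dataset is consulted. The component model, naming scheme, and the assumption of two styling variants per avatar are all hypothetical choices of this sketch:

```python
# Hypothetical server-side assembly: the server works only with the tagged
# dataset enclosed with the encrypted image; the ciphertext is never decrypted.

# Toy stand-in for the trained neural network: a lookup from tags to component ids.
COMPONENT_MODEL = {
    "short curly hair": "hair_short_curly",
    "broad forehead": "forehead_broad",
    "cleft chin": "chin_cleft",
}

def generate_avatars(payload: dict, expression: str) -> list:
    # Map each enclosed tag to an avatar/emoji component; unknown tags are skipped.
    components = [COMPONENT_MODEL[t] for t in payload["tagged_dataset"]
                  if t in COMPONENT_MODEL]
    # One candidate per styling variant (two variants assumed here), so the
    # user of terminal 102a gets "at least one or more" avatars to pick from.
    return [f"{expression}:{'+'.join(components)}:v{i}" for i in (1, 2)]

payload = {
    "ciphertext": [b"..."],   # opaque to the server
    "tagged_dataset": ["short curly hair", "broad forehead", "cleft chin"],
}
avatars = generate_avatars(payload, "grinning")
```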
[063] It may be worth noting that the aforementioned paragraphs provide various technical effects or
advantages, such as providing enhanced security to the user’s images while communicating
on a web platform, thereby keeping user privacy intact. This prevents unauthorized use of
user-related data at the server while such data/information is communicated to the server.
Conclusively, the present disclosure provides an improved method and system for
providing one or more avatars from an encrypted image of a user without compromising the
privacy of the user. The system also provides the technical effect of receiving an emoji or an
avatar at the user device from the server, while the server has no direct access to the data/
information of the user.
[064] FIG. 3 shows a flowchart of an exemplary method 300 for providing one or more avatars
from an encrypted image of a user, in accordance with another embodiment of the present
disclosure.
[065] At block 302, the method may describe receiving at least one encrypted image of the user
at the server 108 from a user device 102a. The user device could be any other user device
among the one or more user devices or terminals 102a … 102n. In an exemplary
embodiment, the encrypted image is generated using a web platform available on the user
device, the user device being in communication with the server 108. In an embodiment of
the present disclosure, the method may describe that the encrypted image is generated at
the user device from an image or selfie of the user, wherein the encrypted image further
includes a tagged dataset attached within. In an exemplary embodiment, the encrypted
image may be generated by capturing multiple images or selfies of the user. Further, the
tagged dataset includes a dataset of a user’s selfie mapped to the various components that
are used by the server to predict the emoji or avatar of the user.
[066] At block 304, the method may describe classifying the encrypted image using a neural
network based on a tagged dataset enclosed with the encrypted image. In an embodiment,
the method describes that classifying the encrypted image using the neural network based
on the tagged dataset associated with the encrypted image further comprises analyzing the
tagged dataset of the encrypted image, wherein the tagged dataset represents a selfie or
image mapped to a set of avatar or emoji components, and wherein the emoji components
represent a user’s appearance in a unique fashion.
[067] In another embodiment, the method describes that classifying the encrypted image
using the neural network based on the tagged dataset associated with the encrypted image
further comprises training the neural network to generate the set of avatar or emoji
components from the tagged data to provide the at least one or more avatars or emoji. In an
exemplary embodiment, the method describes that the neural network is trained with the
tagged dataset. Further, the method describes that the trained neural network may be used
for inference to classify the tagged components of the encrypted image.
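To make the train-then-infer step concrete, here is a deliberately minimal sketch: a single-layer softmax classifier, trained in pure Python on a three-example tagged dataset, then used at inference time to map each tag to a component. The tags, component names, training set, and hyperparameters are all assumptions of this sketch; the disclosure does not specify a network architecture:

```python
# Minimal sketch of training a neural network on tagged data (single linear
# layer + softmax; a real system would use a deep network and a large corpus).
import math, random

TAGS = ["short curly hair", "broad forehead", "cleft chin"]
COMPONENTS = ["hair_short_curly", "forehead_broad", "chin_cleft"]

# Tiny training set: the one-hot vector for tag i is labeled with component i.
data = [([1.0 if j == i else 0.0 for j in range(3)], i) for i in range(3)]

random.seed(0)
W = [[random.uniform(-0.1, 0.1) for _ in range(3)] for _ in range(3)]

def forward(x):
    # single linear layer followed by softmax (numerically stabilized)
    scores = [sum(W[k][j] * x[j] for j in range(3)) for k in range(3)]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(200):                      # gradient descent on cross-entropy loss
    for x, y in data:
        probs = forward(x)
        for k in range(3):
            grad = probs[k] - (1.0 if k == y else 0.0)
            for j in range(3):
                W[k][j] -= 0.5 * grad * x[j]

def infer(tag):
    # inference: classify one tagged component enclosed with the encrypted image
    x = [1.0 if t == tag else 0.0 for t in TAGS]
    probs = forward(x)
    return COMPONENTS[probs.index(max(probs))]

predicted = [infer(t) for t in TAGS]
```

Because the model sees only tag vectors, never image content, this step is compatible with the privacy property discussed in paragraph [062].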
[068] At block 306, the method may describe identifying one or more avatars associated with the
encrypted image based on said classification. Further, at block 308, the method describes
providing the at least one or more avatars based on said identification for user selection. In
an exemplary embodiment, the user may select or choose at least one or more avatars for
communicating with other user devices. In another embodiment, the method further
comprises transmitting the at least one or more avatars to the user device to be used in the
web platform.
[069] Furthermore, the above-mentioned steps provide various technical effects, such as
providing enhanced security to the user’s images while communicating on a web platform.
The present disclosure provides an improved method and system for providing one or more
avatars from an encrypted image of a user without compromising the privacy of the user. The
method also provides the technical effect of receiving an emoji or an avatar from the server
without the server accessing the content of the user.
[070] The illustrated steps are set out to explain the exemplary embodiments shown, and it should
be anticipated that ongoing technological development will change the manner in which
particular functions are performed. These examples are presented herein for purposes of
illustration, and not limitation. Further, the boundaries of the functional building blocks
have been arbitrarily defined herein for the convenience of the description. Alternative
boundaries can be defined so long as the specified functions and relationships thereof are
appropriately performed.
[071] Those of skill would further appreciate that the various illustrative logical blocks, units,
means, modules, circuits, and algorithm steps described in connection with the aspects
disclosed herein may be implemented as electronic hardware, computer software, or
combinations of both. To clearly illustrate this interchangeability of hardware and software,
various illustrative components, blocks, modules, circuits, and steps have been described
above generally in terms of their functionality. Whether such functionality is implemented
as hardware or software depends upon the particular application and design constraints
imposed on the overall system. Skilled artisans may implement the described functionality
in varying ways for each particular application, but such implementation decisions should
not be interpreted as causing a departure from the scope of the present disclosure.
[072] As used in this disclosure, the terms “means”, “module”, “unit”, and the like are intended
to refer to a computer-related entity, either hardware, a combination of hardware and
software, software, or software in execution. For example, “means” may be, but is not
limited to being, a process running on a processor, a processor, an object, an executable, a
thread of execution, a program, and/or a computer. By way of illustration, both an
application running on a server and the server can be a system. One or more components
may reside within a process and/or thread of execution and a component may be localized
on one computer and/or distributed between two or more computers.
[073] The word “exemplary” is used herein to mean serving as an example, instance, or
illustration. Any aspect or design described herein as “exemplary” is not necessarily to be
construed as preferred or advantageous over other aspects or designs.
[074] Various aspects will be presented in terms of systems that may include a number of
units/components, means, modules, and the like. It is to be understood and appreciated that
the various systems may include additional components/units, modules, etc. and/or may
not include all of the components, modules, means etc. discussed in connection with the
figures. A combination of these approaches may also be used.
[075] In addition, the various illustrative logical blocks, units, modules, and means described in
connection with the aspects disclosed herein may be implemented or performed with a
general purpose processor, a digital signal processor (DSP), an application specific
integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable
logic device, discrete gate or transistor logic, discrete hardware components, or any
combination thereof designed to perform the functions described herein. A general purpose
processor may be a microprocessor, but in the alternative, the processor may be any
conventional processor, controller, microcontroller, or state machine. A processor may also
be implemented as a combination of computing devices, e.g., a combination of a DSP and
a microprocessor, a plurality of microprocessors, one or more microprocessors in
conjunction with a DSP core, or any other such configuration.
[076] While the present disclosure has been described by way of example and in terms of the
preferred embodiments, it is to be understood that the invention is not limited to the
disclosed embodiments. To the contrary, it is intended to cover various modifications and
similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope
of the appended claims should be accorded the broadest interpretation so as to encompass
all such modifications and similar arrangements.
1. A method for providing one or more avatars from an encrypted image of a user, the method
comprising:
receiving at least one encrypted image of the user from a user device;
classifying the encrypted image using a neural network based on a tagged dataset
enclosed with the encrypted image;
identifying one or more avatars associated with the encrypted image based on said
classification; and
providing the at least one or more avatars based on said identification for user
selection.
2. The method of claim 1, wherein the encrypted image is generated using a web platform
available on the user device, in communication with at least one server.
3. The method of claim 1, wherein the encrypted image is generated at the user device from
an image or selfie of the user, wherein the encrypted image further includes a tagged dataset
attached within.
4. The method of claim 1, wherein classifying the encrypted image using the neural network
based on the tagged dataset associated with the encrypted image comprises:
analyzing the tagged dataset of the encrypted image, wherein the tagged dataset
represents a selfie or image mapped to a set of avatar or emoji components, and wherein
the emoji components represent a user’s appearance in a unique fashion; and
training the neural network to generate the set of avatar or emoji components from
the tagged data to provide the at least one or more avatars or emoji.
5. The method of claim 1, further comprising transmitting the at least one or more avatars to
the user device to be used in the web platform.
6. A system for providing an avatar from an encrypted image of a user, the system comprising:
one or more user devices; and
a server in communication with the one or more user devices, the server
comprising:
a transceiver configured to:
receive at least one encrypted image of the user from a user device
among the one or more user devices; and
a processing unit electronically coupled to the transceiver, and configured
to:
classify the encrypted image using a neural network based on a
tagged dataset enclosed with the encrypted image;
identify one or more avatars associated with the encrypted image
based on said classification; and
provide the at least one or more avatars based on said identification
for user selection.
7. The system of claim 6, wherein the encrypted image is generated using a web platform
available on the user device, in communication with at least one server.
8. The system of claim 6, wherein the encrypted image is generated at the user device from
an image or selfie of the user, wherein the encrypted image further includes a tagged dataset
attached within.
9. The system of claim 6, wherein the processing unit, when classifying the encrypted image
using the neural network based on the tagged dataset associated with the encrypted image,
is configured to:
analyze the tagged dataset of the encrypted image, wherein the tagged dataset
represents a selfie or image mapped to a set of avatar or emoji components, and wherein
the emoji components represent a user’s appearance in a unique fashion; and
train the neural network to generate the set of avatar or emoji components from the
tagged data to provide the at least one or more avatars or emoji.
10. The system of claim 6, wherein the transceiver is further configured to transmit the at least
one or more avatars to the user device to be used in the web platform.
| # | Name | Date |
|---|---|---|
| 1 | 202011000448-FER.pdf | 2025-04-01 |
| 2 | 202011000448-FORM 18 [26-10-2023(online)].pdf | 2023-10-26 |
| 3 | 202011000448-STATEMENT OF UNDERTAKING (FORM 3) [06-01-2020(online)].pdf | 2020-01-06 |
| 4 | 202011000448-PROVISIONAL SPECIFICATION [06-01-2020(online)].pdf | 2020-01-06 |
| 5 | 202011000448-CERTIFIED COPIES TRANSMISSION TO IB [23-02-2021(online)].pdf | 2021-02-23 |
| 6 | 202011000448-Covering Letter [23-02-2021(online)].pdf | 2021-02-23 |
| 7 | 202011000448-FORM 1 [06-01-2020(online)].pdf | 2020-01-06 |
| 8 | 202011000448-DRAWINGS [06-01-2020(online)].pdf | 2020-01-06 |
| 9 | 202011000448-Request Letter-Correspondence [23-02-2021(online)].pdf | 2021-02-23 |
| 10 | 202011000448-DECLARATION OF INVENTORSHIP (FORM 5) [06-01-2020(online)].pdf | 2020-01-06 |
| 11 | 202011000448-COMPLETE SPECIFICATION [04-01-2021(online)].pdf | 2021-01-04 |
| 12 | 202011000448-FORM-26 [08-01-2020(online)].pdf | 2020-01-08 |
| 13 | 202011000448-CORRESPONDENCE-OTHERS [04-01-2021(online)].pdf | 2021-01-04 |
| 14 | 202011000448-DRAWING [04-01-2021(online)].pdf | 2021-01-04 |
| 15 | abstract.jpg | 2020-01-17 |
| 16 | 202011000448-Proof of Right [07-02-2020(online)].pdf | 2020-02-07 |
| 17 | 202011000448-FORM 3 [15-05-2025(online)].pdf | 2025-05-15 |
| 18 | 202011000448-OTHERS [01-10-2025(online)].pdf | 2025-10-01 |
| 19 | 202011000448-FER_SER_REPLY [01-10-2025(online)].pdf | 2025-10-01 |
| 20 | 202011000448-COMPLETE SPECIFICATION [01-10-2025(online)].pdf | 2025-10-01 |
| 21 | 202011000448-CLAIMS [01-10-2025(online)].pdf | 2025-10-01 |
| 22 | 202011000448-ABSTRACT [01-10-2025(online)].pdf | 2025-10-01 |
| 23 | 00448E_14-03-2024.pdf | |