
“Systems And Methods For Generating User Specific Stickers”

Abstract: The present disclosure describes a method for dynamically generating a graphical image. The method includes receiving a user input from a user via a first user interface and processing the user input to identify a context associated with the information included in the input. The method also includes suggesting, based on the identified context, one or more graphical images for user selection via a second user interface, which is displayed subsequent to the first user interface. Further, the method includes suggesting, via a third user interface displayed subsequent to the second, one or more modifications to at least one selected graphical image, for user selection. The method then includes dynamically modifying the at least one graphical image presented via the second user interface based on the at least one user-selected modification.


Patent Information

Application #
Filing Date
13 February 2020
Publication Number
36/2021
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
ipo@knspartners.com
Parent Application

Applicants

Hike Private Limited
4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi, Delhi 110037, India

Inventors

1. Kavin Bharti Mittal
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
2. Aditya Gupta
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
3. Gaurav Arora
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
4. Siddharth Gupta
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
5. Deepanshu
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
6. Mudit Khandelwal
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
7. Dinesh Gaur
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
8. Rohit Garg
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
9. Dhairya Gandhi
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
10. Anshuman Misra
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037
11. Vineet Kala
Hike Private Limited, 4th Floor, Indira Gandhi International Airport, Worldmark 1, Northern Access Rd, Aerocity, New Delhi - 110037

Specification

[0001] The present disclosure relates in general to computer user interfaces and, more
particularly but not exclusively, to a method and system for generating and modifying
graphical images using one or more user interfaces displayed simultaneously.
BACKGROUND
[0002] Rapid growth in the field of digital portable devices has provided significant
motivation for development in digital messaging. Generally, users communicate by
publishing a post, making a comment, or sending a message, and use various systems that
enable such communication. Some messaging services/systems also provide users with
various graphical images (such as emojis, emoticons, and stickers) for instant messaging to
express their emotions or mood.
[0003] Such applications present in messaging systems enable users to express their
current emotions, or the emotions associated with the message that a user wants to deliver.
The advantage of such messaging applications is that users can express their emotions
through the expression illustrated in a generated graphical image rather than typing lengthy
message content. In existing messaging applications, a user can generate and select such a
graphical image based on their requirements. However, existing messaging applications do
not provide a facility to edit a graphical image that is generated instantly upon receiving
user inputs. Nor do current messaging applications provide an overlay feature on the output
interface from which the user can select and edit the graphical image without opening a
separate window, which may lead to user inconvenience and a more time-consuming
selection process.
[0004] Thus, there exists a need for a technology that mitigates the inconvenience of
selecting and modifying a graphical image during messaging and saves the user's time.
SUMMARY OF THE INVENTION
[0005] One or more shortcomings discussed above are overcome, and additional advantages
are provided by the present disclosure. Additional features and advantages are realized
through the techniques of the present disclosure. Other embodiments and aspects of the
disclosure are described in detail herein and are considered a part of the disclosure.
[0006] According to an aspect of the present disclosure, a system and a method are provided
to enable a user to generate and modify a graphical image in real time.
[0007] In one non-limiting embodiment of the present disclosure, the present application
discloses a method for dynamically generating a graphical image. The method comprises
receiving a user input from a user via a first user interface, the user input comprising at
least one of textual information, audio information or video information. The method further
comprises processing the user input to identify a context associated with the information
included in the user input. Further, the method comprises suggesting, based on the identified
context, one or more graphical images for user selection via a second user interface, which
is displayed subsequent to the first user interface. The method also comprises suggesting,
via a third user interface, one or more modifications to at least one selected graphical image,
for user selection. The third user interface is displayed subsequent to the second user
interface. The method further comprises dynamically modifying the at least one graphical
image presented via the second user interface based on the at least one user-selected
modification.
[0008] According to another embodiment of the disclosed method, the second and the third
user interfaces cover at least a portion of the first user interface, and the first, second
and third user interfaces are presented to the user simultaneously on the same display.
[0009] According to yet another embodiment of the disclosed method, the one or more
graphical images include at least one of emoticons, stickers, and emojis.
[0010] According to yet another embodiment of the disclosed method, the one or more
graphical images include one or more graphical components, and the one or more graphical
components include at least one of a background image, a graphical representation of the
user, and textual data.
[0011] According to yet another embodiment of the disclosed method, the one or more
modifications are based on one or more characteristics of the one or more graphical
components of the one or more graphical images.
[0012] In another non-limiting embodiment of the present disclosure, the present
application discloses a device for dynamically generating a graphical image. The device
comprises a memory and a processor communicably coupled to the memory. The processor
is configured to receive a user input from a user via a first user interface, the user input
comprising at least one of textual information, audio information or video information. The
processor is also configured to process the user input to identify a context associated with
the information included in the user input. The processor is further configured to suggest,
based on the identified context, one or more graphical images for user selection via a second
user interface, which is displayed subsequent to the first user interface. The processor is
further configured to suggest, via a third user interface, one or more modifications to at
least one selected graphical image, for user selection. The third user interface is displayed
subsequent to the second user interface. Further, the processor is configured to dynamically
modify the at least one graphical image presented via the second user interface based on the
at least one user-selected modification.
[0013] According to another embodiment of the disclosed device, the second and the third
user interfaces cover at least a portion of the first user interface, and the first, second
and third user interfaces are presented to the user simultaneously on the same display.
[0014] According to yet another embodiment of the disclosed device, the one or more
graphical images include at least one of emoticons, stickers, and emojis.
[0015] According to yet another embodiment of the disclosed device, the one or more
graphical images include one or more graphical components, and the one or more graphical
components include at least one of a background image, a graphical representation of the
user, and textual data.
[0016] According to yet another embodiment of the disclosed device, the one or more
modifications are based on one or more characteristics of the one or more graphical
components of the one or more graphical images.
[0017] The foregoing summary is illustrative only and is not intended to be in any way
limiting. In addition to the illustrative aspects, embodiments, and features described above,
further aspects, embodiments, and features will become apparent by reference to the
drawings and the following detailed description.
OBJECTS OF THE INVENTION:
[0018] The main object of the present invention is to enable users to generate and modify
a graphical image in real time in a messaging environment.
[0019] Another main object of the present invention is to save the user's time and provide
convenience by dynamically modifying the graphical image in real time based on user
selection.
[0020] Yet another object of the present invention is to reduce user inconvenience by
overlaying a user interface presenting modification recommendations for the graphical
image on the same output screen that presents the messaging environment.
BRIEF DESCRIPTION OF DRAWINGS
[0021] Further aspects and advantages of the present disclosure will be readily understood
from the following detailed description with reference to the accompanying drawings.
Reference numerals have been used to refer to identical or functionally similar elements.
The figures, together with the detailed description below, are incorporated in and form part
of the specification, and serve to further illustrate the embodiments and explain various
principles and advantages in accordance with the present disclosure, wherein:
[0022] Fig. 1 illustrates an exemplary diagram of a system for implementing message
communication via graphical images, in accordance with an embodiment of present
disclosure.
[0023] Fig. 2 illustrates a block diagram of a user device, in accordance with an embodiment
of present disclosure.
[0024] Figs. 3A-3C illustrate various user interfaces presented to a user, in accordance with
an embodiment of present disclosure.
[0025] Fig. 4 is a flow chart representing an exemplary method for generating and modifying
a graphical image according to an embodiment of the present disclosure.
[0026] It should be appreciated by those skilled in the art that any block diagrams herein
represent conceptual views of illustrative systems embodying the principles of the present
subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state
transition diagrams, pseudo code, and the like represent various processes which may be
substantially represented in computer readable medium and executed by a computer or
processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Referring now to the drawings, there is shown an illustrative embodiment of the
disclosure, “systems and methods for generating and modifying a graphical image”. It is
understood that the disclosure is susceptible to various modifications and alternative forms;
specific embodiments thereof have been shown by way of example in the drawings and will
be described in detail below. It will be appreciated as the description proceeds that the
disclosure may be realized in different embodiments.
[0028] In the present document, the word "exemplary" is used herein to mean "serving as
an example, instance, or illustration." Any embodiment or implementation of the present
subject matter described herein as "exemplary" is not necessarily to be construed as
preferred or advantageous over other embodiments.
[0029] While the disclosure is susceptible to various modifications and alternative forms,
specific embodiments thereof have been shown by way of example in the drawings and will
be described in detail below. It should be understood, however, that it is not intended to
limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to
cover all modifications, equivalents, and alternatives falling within the scope of the
disclosure.
[0030] The terms “comprises”, “comprising”, or any other variations thereof are intended
to cover non-exclusive inclusion, such that a setup or device that comprises a list of
components does not include only those components but may include other components not
expressly listed or inherent to such setup or device. In other words, one or more elements
in a system or apparatus preceded by “comprises… a” does not, without more constraints,
preclude the existence of other or additional elements in the system, apparatus or device. It
may be noted that, with respect to the present disclosure, terms like “a system for generating
multiple expressive emojis” and “the system” refer to the same system used in the present
disclosure.
[0031] Terms like “graphical image”, “sticker”, “emoji” and “avatar” may be used
interchangeably or in combination throughout the description.
[0032] In the following detailed description of the embodiments of the disclosure, reference
is made to the accompanying drawings that form a part hereof, and in which are shown by
way of illustration specific embodiments in which the disclosure may be practiced. These
embodiments are described in sufficient detail to enable those skilled in the art to practice
the disclosure, and it is to be understood that other embodiments may be utilized and that
changes may be made without departing from the scope of the present disclosure. The
following description is, therefore, not to be taken in a limiting sense.
[0033] An aspect of the present disclosure provides a technique to dynamically generate
and modify a graphical image. The technique provides a first user interface to receive a
user input and a second user interface to suggest one or more graphical images based on
the received user input. A selection of at least one of the graphical images is received via
the second user interface. The technique further provides a third user interface to suggest
one or more modifications to the at least one user-selected graphical image; a selection of
at least one of the modifications is received via the third user interface. The technique then
dynamically modifies the at least one graphical image and presents the modified graphical
image via the second user interface. Further, the first, second and third user interfaces are
provided simultaneously, for the user's convenience while using graphical images to
communicate with other users efficiently and effectively.
[0034] Figure 1 illustrates an exemplary environment 100 of a system for implementing
message communication via graphical images. The environment 100 includes a plurality of
user devices 102a-102n (interchangeably referred to as “the user device 102”), a server 106,
and a network 104 connecting the user device 102 and the server 106.
[0035] The user devices 102a-102n may be communicably coupled to each other via the
network 104. The user device 102 may enable a user to communicate with other users via
any suitable communication means such as, but not limited to, calling, messaging and so
forth. Examples of the user device 102 may include any suitable communication device such
as, but not limited to, a smartphone, mobile phone, laptop, tablet, portable communication
device and so forth. In an exemplary embodiment, the user device 102 may include a
memory and a processor, communicably coupled to each other and configured to perform
the desired functionality of the user device 102. In alternative embodiments, the user device
102 may include any additional component required to perform the desired functionality of
the user device 102, in accordance with the embodiments of the present disclosure.
[0036] The server 106 may be configured to enable a messaging platform resident on the
user device 102 to allow communication among the user devices 102. The messaging
platform may implement one or more user interfaces to allow message communication
among users via graphical images. The server 106 may include a memory unit (not shown)
and a processing unit (not shown) configured to implement the desired functionality of the
server 106. Additionally, the server 106 may include any suitable component required to
perform the desired functionality of the server 106, in accordance with the embodiments of
present disclosure.
[0037] The network 104 may include a data network such as, but not restricted to, the
Internet, Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area
Network (MAN), etc. In certain embodiments, the network 104 may include a wireless
network, such as, but not restricted to, a cellular network and may employ various
technologies including Enhanced Data rates for Global Evolution (EDGE), General Packet
Radio Service (GPRS), Global System for Mobile Communications (GSM), Internet
Protocol Multimedia Subsystem (IMS), Universal Mobile Telecommunications System
(UMTS) etc. In other embodiments, the network 104 may include or otherwise cover
networks or subnetworks, each of which may include, for example, a wired or wireless data
pathway.
[0038] Fig. 2 illustrates a block diagram of the user device 102 (hereinafter referred to as
“the device 102”), in accordance with an embodiment of the present disclosure. The device
102 includes a transceiver 202, an I/O interface 204, a memory 206, a processor 208 and one
or more units. Each of said components of the device 102 may be communicably coupled to
the others.
[0039] The transceiver 202 may be configured to enable communication between the device
102 and the server 106 (shown in Fig. 1), as well as between the device 102 and the other
user devices 102. The transceiver 202 may be configured to enable transmission and
reception of data from and at the device 102. In some embodiments, the transceiver 202 may
include communication devices such as, but not limited to, antennas, modulators,
demodulators and so forth. In an exemplary embodiment, the transceiver 202 may be
configured to receive data from the server 106 to implement the messaging platform at the
device 102.
[0040] The I/O interface 204 may enable a user to interact with the device 102. The I/O
interface 204 may include, but is not limited to, a mouse, a pointer, a keyboard, a touch
screen, a display, a graphical user interface and/or any other combination of input and
output devices of a computing system. In some embodiments, the user may provide user
input via the I/O interface 204. The I/O interface 204 may be configured to present one or
more user interfaces required to implement the functionality of the messaging platform
provided by the server 106.
[0041] The device 102 may include the memory 206. In an embodiment, the memory 206
may be configured to store data relating to the messaging platform. In an exemplary
embodiment, the memory 206 may store one or more graphical images and corresponding
modifications. The graphical images may correspond to at least one of, but not limited to, a
sticker, an emoji, an emoticon and so forth. Each of the graphical images may comprise one
or more graphical components, which may include at least one of a background image, a
graphical representation of a user and textual data. The modifications stored in the memory
206 may include different variations in one or more characteristics of the graphical
components associated with each graphical image. In addition, the memory 206 may
include, but is not restricted to, a Random Access Memory (RAM) unit and/or a non-volatile
memory unit such as a Read Only Memory (ROM), optical disc drive, magnetic disc drive,
flash memory, Electrically Erasable Programmable Read-Only Memory (EEPROM), and
so forth.
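As one illustrative way to picture how graphical images and their modification variants might be laid out in the memory 206, consider the sketch below. The class names, field names and sample values are assumptions chosen for illustration, not structures taken from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class GraphicalComponents:
    """Component layers of a stored graphical image."""
    background: str   # e.g. a solid colour or a themed scene
    avatar: str       # graphical representation of the user
    text: str = ""    # optional textual data overlaid on the image

@dataclass
class GraphicalImage:
    """A stored graphical image together with its modification variants."""
    image_id: str
    components: GraphicalComponents
    # Each variant alters one characteristic of one component.
    variants: list = field(default_factory=list)

# A tiny in-memory store standing in for the memory 206.
sticker_store = {
    "party-01": GraphicalImage(
        image_id="party-01",
        components=GraphicalComponents(
            background="balloons", avatar="user-avatar", text="Let's Party!"),
        variants=[{"background": "stars"}, {"avatar_expression": "excited"}],
    ),
}
```

Keeping the variants alongside each stored image is one way the modification suggestions of the third user interface could be served without recomputation.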
[0042] The device 102 may include the processor 208, which may be communicably coupled
to the memory 206 to perform one or more desired functions. The processor 208 may
present a first user interface to the user via the I/O interface 204. In an exemplary
embodiment, the processor 208 may be configured to receive a user input from the user via
the first user interface. The user input may include at least one of textual information, audio
information or video information. The textual information may indicate a text message the
user intends to send to a recipient; the audio information may indicate a voice message the
user intends to send to the recipient; and the video information may indicate a video
recording message the user intends to send to the recipient. The processor 208 may also be
configured to process the user input to identify a context associated with the information
included in the user input. For instance, the processor 208 may analyze the text contained
in a message typed by the user and identify an emotion of the user, the topic of conversation
between the user and the recipient, and/or the relation between the user and the recipient. In
the case of audio or video information, the processor 208 may be configured to perform one
or more conversions of the information into any suitable form required to identify the
context. In an example, the processor 208 may perform speech-to-text conversion of audio
information to identify the context associated with the audio message. A similar conversion
may be performed for video information included in the user input. In alternative
embodiments, the processor 208 may be configured to analyze other parameters associated
with the audio or video message to identify the context. For example, the processor 208 may
consider the tone of the user in audio information, or the expression of the user in video
information, to identify the context of the user input.
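A minimal sketch of the text branch of this context-identification step is shown below. A real system would likely use a trained language model; the keyword table and context labels here are purely assumed illustrations of the input-to-context mapping described above.

```python
# Illustrative keyword lookup standing in for context identification.
CONTEXT_KEYWORDS = {
    "celebration": {"party", "congrats", "congratulations", "cheers"},
    "greeting": {"hello", "hi", "hey"},
    "promotion": {"sale", "offer", "discount"},
}

def identify_context(text: str) -> str:
    """Return the first context label whose keywords appear in the text."""
    words = {w.strip("!?.,").lower() for w in text.split()}
    for context, keywords in CONTEXT_KEYWORDS.items():
        if words & keywords:
            return context
    return "neutral"
```

The same function could serve audio or video input once a speech-to-text pass, as described above, has reduced it to text.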
[0043] The processor 208 may be configured to present a second user interface to the user
via the I/O interface 204. In an exemplary embodiment, the processor 208 may display the
second user interface based on the user input received via the first user interface. The
processor 208 may also be configured to suggest one or more graphical images to the user
via the second user interface based on the identified context. The second user interface may
be displayed subsequent to the first user interface and may cover only a portion of the first
user interface. The processor 208 may also allow the user to select one or more suggested
graphical images. The graphical images may be generated based on the context of the user
input; the processor 208 may generate graphical images that represent the intent the user
wants to communicate via the user input. Examples of the graphical images include stickers,
emoticons, emojis and so forth. A graphical image may include one or more graphical
components such as, but not limited to, a background image, a graphical representation of
the user, and textual data. In some embodiments, the background image may be
automatically selected based on the context of the user input. For example, if a user types
“Let's Party!”, the background may include stars, balloons or any other suitable element
that represents the context of the text. In other embodiments, the background image may be
a solid color background. The background may be a static image or a dynamic image. In
alternative embodiments, the background may be selected based on at least one of user
preference, user chat history, user relationship with the recipient, an environment of the
messaging platform and so forth. Further, a graphical representation of the user may be
pre-stored in the memory 206. In an embodiment, the graphical representation of the user
may be an artistic representation of the user's image, generated by the user or automatically
generated based on image information stored in the memory 206. For example, the processor
208 may consider the user's profile image and generate the graphical representation of the
user. Moreover, the textual data may include text typed by the user or extracted from the
audio or video information.
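The suggestion step above can be pictured as a lookup from the identified context into a catalog of stickers tagged by context. The catalog contents and the function name below are assumptions made for illustration only.

```python
# Hypothetical catalog mapping sticker ids to the contexts they match.
CATALOG = {
    "sticker-balloons": {"celebration"},
    "sticker-wave": {"greeting"},
    "sticker-megaphone": {"promotion", "celebration"},
}

def suggest_images(context: str) -> list:
    """Return ids of catalog stickers tagged with the identified context."""
    return [image_id for image_id, tags in CATALOG.items() if context in tags]
```

The returned ids would populate the second user interface; an empty list would simply leave the suggestion strip blank.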
[0044] The processor 208 may also be configured to present a third user interface via the
I/O interface 204. In an exemplary embodiment, the processor 208 may be configured to
display the third user interface in response to a user selection of an edit option presented on
the second user interface. The processor 208 may be configured to suggest one or more
modifications to the at least one selected graphical image. The third user interface may be
displayed subsequent to the second user interface. In an exemplary embodiment, the third
user interface may be displayed above the second user interface and cover only a portion of
the first user interface; therefore, in an embodiment, each of the first, second and third user
interfaces may be displayed simultaneously to the user. The processor 208 may also enable
the user to select one or more of the suggested modifications. In an exemplary embodiment,
the one or more modifications may be based on one or more characteristics of the one or
more graphical components of the graphical images. The characteristics of a background
image may include, but are not limited to, color, shape, design and so forth. The
characteristics of the graphical representation of the user may include, but are not limited
to, the expression of the user, clothes, accessories and so forth. The characteristics of the
textual data may include, but are not limited to, color, size and font. Accordingly, the one
or more modifications suggested to the user may include differently colored background
images, differently shaped background images, various graphical representations of the user
with different expressions, text in different fonts and so forth. The processor 208 may allow
a user to select any modification in any manner as per the requirements. For instance, the
processor 208 may allow the user to change both the background and the graphical
representation of the user.
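One way to sketch how the modification suggestions could be enumerated from per-component characteristics is shown below. The characteristic tables are assumed values mirroring the examples above (background color/shape, avatar expression, text font/size), not data from the specification.

```python
# Assumed characteristic options per graphical component.
CHARACTERISTICS = {
    "background": {"color": ["red", "blue", "gold"], "shape": ["circle", "square"]},
    "avatar": {"expression": ["happy", "excited", "surprised"]},
    "text": {"font": ["sans", "script"], "size": ["small", "large"]},
}

def suggest_modifications(component: str) -> list:
    """Enumerate (characteristic, value) options for one component."""
    options = CHARACTERISTICS.get(component, {})
    return [(name, value)
            for name, values in options.items()
            for value in values]
```

Each returned pair corresponds to one selectable tile on the third user interface, and nothing prevents the user from picking pairs from several components at once, as the paragraph above notes.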
[0045] The processor 208 may also be configured to dynamically modify the at least one
graphical image based on the user-selected modification. The processor 208 may
dynamically modify the at least one graphical image based on the selection of a modification
received via the third user interface, and present the modified graphical image to the user
via the second user interface. The modification to the at least one graphical image may be
reflected in real time. The processor 208 may be configured to present each of the first,
second and third user interfaces simultaneously, thereby allowing the user to easily and
effectively use the graphical image to communicate with other users.
[0046] The processor 208 allows the user to select an appropriate emotion for the graphical
representation of the user, thereby enabling the user to effectively express the emotions the
user wishes to communicate.
[0047] In alternative embodiments, the processor 208 may be configured to automatically
modify the graphical image using one or more neural networks (not shown) and/or any
suitable technique such as machine learning, artificial intelligence and so forth.
[0048] The device 102 may also include one or more units configured to perform one or
more operations of the processor 208. In an embodiment, the processor 208 may be
operatively coupled to the units; the operations and/or functions of the processor 208 and
the units may be performed interchangeably and/or in combination with each other. In an
exemplary embodiment, the device 102 may include a user interface generating unit 210, a
graphical image suggesting unit 212, a modification suggesting unit 214 and a modifying
unit 216. The user interface generating unit 210 may be configured to generate the first,
second and third user interfaces. The graphical image suggesting unit 212 may be configured
to process the user input, identify the context of the user input and suggest one or more
graphical images based on the identified context. The modification suggesting unit 214 may
be configured to suggest one or more modifications to the one or more graphical images.
The modifying unit 216 may be configured to dynamically modify the at least one graphical
image based on the one or more modifications selected by the user. Each of these units may
be implemented by any suitable combination of hardware and software components.
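The division of labour among the units above can be sketched end to end as follows. Every function body is a stand-in assumption; the sketch shows only how the outputs of one unit might feed the next, not how any unit is actually implemented.

```python
def identify(text):                       # graphical image suggesting unit 212: context step
    return "celebration" if "party" in text.lower() else "neutral"

def suggest(context):                     # graphical image suggesting unit 212: suggestion step
    return {"celebration": ["sticker-balloons"], "neutral": ["sticker-plain"]}[context]

def modify(sticker_id, modification):     # modifying unit 216
    return {"image": sticker_id, **modification}

def generate_sticker(user_input, chosen_modification):
    """End-to-end flow: identify context, suggest, apply the selected modification."""
    context = identify(user_input)
    suggestions = suggest(context)
    selected = suggestions[0]             # stand-in for the user's selection on the second UI
    return modify(selected, chosen_modification)
```

In this decomposition the user interface generating unit 210 would own the three interfaces themselves, while the pipeline above carries the data between them.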
[0049] Figs. 3A-3C illustrate various user interfaces presented to a user, in accordance with
an embodiment of present disclosure.
[0050] Figure 3A illustrates a display 300 of a messaging platform. The display 300
includes a first user interface 302 configured to enable a user to communicate with another
user. The first user interface 302 displays the name of the recipient, a virtual keyboard, and
various other components that enable the user to chat with another user. Figure 3B
illustrates the first user interface 302 and a second user interface 304 presented on the
display 300. The second user interface 304 may be presented based on the user input
received via the first user interface 302. The second user interface 304 suggests one or more
graphical images based on the user input. In the illustrated embodiment, the user input is the
text “Sale!” and the graphical images are presented based on that textual information via
the second user interface 304. Figure 3C illustrates the first user interface 302, the second
user interface 304 and a third user interface 306 presented on the display 300. The third user
interface 306 may be displayed based on the user's selection of a modifying option presented
via the second user interface 304. The third user interface 306 may display various
modifications to at least one of the graphical images. In the illustrated embodiment, the
third user interface 306 shows various expressions of a graphical representation of the
user, which forms part of the graphical image. Each of the first, second and third user interfaces
302, 304, 306 may include one or more user-selectable components which enable the
implementation of the present disclosure.
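The layering behaviour of Figs. 3A-3C can be sketched as a simple stack of interfaces on one display. This is a minimal illustrative sketch: the strings are placeholders standing in for actual UI objects, not part of the disclosed implementation:

```python
class Display:
    """Minimal sketch: later interfaces overlay earlier ones, which remain visible."""
    def __init__(self):
        self.interfaces = []        # bottom-to-top order on the display

    def show(self, interface: str):
        self.interfaces.append(interface)

display = Display()
display.show("first UI 302 (chat)")                   # Fig. 3A
display.show("second UI 304 (sticker suggestions)")   # Fig. 3B
display.show("third UI 306 (modifications)")          # Fig. 3C

# The first interface is still on the display underneath the overlays.
assert display.interfaces[0] == "first UI 302 (chat)"
```

This matches the claimed arrangement in which the second and third interfaces cover at least a portion of the first while all three are presented simultaneously on the same display.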
[0051] Fig. 4 shows a flow chart illustrating a method 400 for generating and modifying a
graphical image in accordance with some embodiments of the present disclosure.
[0052] As illustrated in figure 4, the method 400 includes one or more blocks illustrating a
method to facilitate the messaging service. The method 400 may be described in the general
context of computer executable instructions. Generally, computer executable instructions
can include routines, programs, objects, components, data structures, procedures, modules,
and functions, which perform specific functions or implement specific abstract data types.
[0053] The order in which the method 400 is described is not intended to be construed as a
limitation, and any number of the described method blocks can be combined in any order to
implement the method. Additionally, individual blocks may be deleted from the methods
without departing from the spirit and scope of the subject matter described herein.
Furthermore, the method can be implemented in any suitable hardware, software, firmware,
or combination thereof.
[0054] Figure 4 is described in conjunction with Figures 1-3.
[0055] At step 402, the method 400 includes receiving a user input from a user via the first
user interface 302. The user input comprises at least one of textual information, audio
information, or video information.
[0056] At step 404, the method 400 includes processing the user input to identify the context
associated with the information included in the user input.
[0057] At step 406, the method 400 includes suggesting one or more graphical images for
user selection via the second user interface 304, based on the identified context. The second user
interface 304 is displayed subsequent to the first user interface 302.
[0058] At step 408, the method 400 includes suggesting, via the third user interface 306,
one or more modifications to at least one selected graphical image, for user
selection. The third user interface 306 is displayed subsequent to the second user interface 304.
[0059] At step 410, the method 400 includes dynamically modifying the at least one graphical
image presented via the second user interface 304 based on the at least one user-selected modification.
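Steps 402-410 above can be sketched end to end. Every function body here is a placeholder assumption (a trivial keyword normaliser, a fixed catalogue, a fixed modification list) standing in for the actual context-identification and rendering logic, which the disclosure does not specify:

```python
def identify_context(user_input: str) -> str:
    # Step 404 (sketch): trivial normalisation standing in for real context analysis.
    return user_input.strip("!?. ").lower()

def suggest_images(context: str) -> list:
    # Step 406 (sketch): look up candidate stickers for the identified context.
    catalogue = {"sale": ["sale_banner", "sale_avatar"]}
    return catalogue.get(context, ["generic_sticker"])

def suggest_modifications(image: str) -> list:
    # Step 408 (sketch): modifications offered for the selected image,
    # e.g. expressions of the user's graphical representation.
    return ["smile", "wink", "surprise"]

def apply_modification(image: str, modification: str) -> str:
    # Step 410 (sketch): compose the chosen modification onto the image.
    return f"{image}({modification})"

def method_400(user_input: str, pick_image: int = 0, pick_mod: int = 0) -> str:
    context = identify_context(user_input)      # step 404
    images = suggest_images(context)            # step 406
    chosen = images[pick_image]                 # user selection via UI 304
    mods = suggest_modifications(chosen)        # step 408
    return apply_modification(chosen, mods[pick_mod])  # step 410

print(method_400("Sale!"))  # the "Sale!" input of Fig. 3B; prints sale_banner(smile)
```

The `pick_image` and `pick_mod` parameters stand in for the user's selections on the second and third interfaces.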
[0060] Accordingly, from the above disclosure, it is worth noting that the present
disclosure provides an easy, convenient and efficient technique to generate and modify
graphical images based on user requirements.
[0061] The foregoing description of the various embodiments is provided to enable any
person skilled in the art to make or use the present disclosure. Various modifications to these
embodiments will be readily apparent to those skilled in the art, and the generic principles
defined herein may be applied to other embodiments without departing from the spirit or
scope of the disclosure. Thus, the present disclosure is not intended to be limited to the
embodiments shown herein, and instead the embodiments should be accorded the widest
scope consistent with the principles and novel features disclosed herein.

We claim:

1. A method for dynamically generating a graphical image, comprising:
receiving a user input from a user via a first user interface, wherein the user input
comprises at least one of textual information, audio information or video information;
processing the user input to identify context associated with the information
included in the user input;
suggesting one or more graphical images for user selection via a second user
interface, based on the identified context, wherein the second user interface is displayed
subsequent to the first user interface;
suggesting, via a third user interface, one or more modifications to at least one
selected graphical image, for user selection, wherein the third user interface is
displayed subsequent to the second user interface; and
dynamically modifying the at least one graphical image presented via the second
user interface based on the at least one user-selected modification.
2. The method as claimed in claim 1, wherein the second and the third user interfaces cover at
least a portion of the first user interface, and wherein the first, second and third user interfaces
are presented to the user simultaneously on the same display.
3. The method as claimed in claim 1, wherein the one or more graphical images include at least
one of emoticons, stickers, and emojis.
4. The method as claimed in claim 1, wherein the one or more graphical images include one
or more graphical components, the one or more graphical components including at least one
of a background image, a graphical representation of the user, and textual data.
5. The method as claimed in claim 4, wherein the one or more modifications are based on one or
more characteristics of the one or more graphical components of the one or more graphical
images.
6. A device for dynamically generating a graphical image, comprising:
a memory; and
a processor communicably coupled to the memory, the processor configured to:
receive a user input from a user via a first user interface, wherein the user input
comprises at least one of textual information, audio information or video information;
process the user input to identify context associated with the information included
in the user input;
suggest one or more graphical images for user selection via a second user
interface, based on the identified context, wherein the second user interface is displayed
subsequent to the first user interface;
suggest, via a third user interface, one or more modifications to at least one
selected graphical image, for user selection, wherein the third user interface is displayed
subsequent to the second user interface; and
dynamically modify the at least one graphical image presented via the second user
interface based on the at least one user-selected modification.
7. The device as claimed in claim 6, wherein the second and the third user interfaces cover at
least a portion of the first user interface, and wherein the first, second and third user interfaces
are presented to the user simultaneously on the same display.
8. The device as claimed in claim 6, wherein the one or more graphical images include at least
one of emoticons, stickers, and emojis.
9. The device as claimed in claim 6, wherein the one or more graphical images include one
or more graphical components, the one or more graphical components including at least one
of a background image, a graphical representation of the user, and textual data.
10. The device as claimed in claim 9, wherein the one or more modifications are based on one or
more characteristics of the one or more graphical components of the one or more graphical
images.

Documents

Application Documents

# Name Date
1 202011006236-FORM 18 [03-01-2024(online)].pdf 2024-01-03
2 202011006236-STATEMENT OF UNDERTAKING (FORM 3) [13-02-2020(online)].pdf 2020-02-13
3 abstract.jpg 2021-10-18
4 202011006236-PROVISIONAL SPECIFICATION [13-02-2020(online)].pdf 2020-02-13
5 202011006236-POWER OF AUTHORITY [13-02-2020(online)].pdf 2020-02-13
6 202011006236-COMPLETE SPECIFICATION [13-02-2021(online)].pdf 2021-02-13
7 202011006236-CORRESPONDENCE-OTHERS [13-02-2021(online)].pdf 2021-02-13
8 202011006236-FORM 1 [13-02-2020(online)].pdf 2020-02-13
9 202011006236-DRAWING [13-02-2021(online)].pdf 2021-02-13
10 202011006236-DRAWINGS [13-02-2020(online)].pdf 2020-02-13
11 202011006236-DECLARATION OF INVENTORSHIP (FORM 5) [13-02-2020(online)].pdf 2020-02-13
12 202011006236-FER.pdf 2025-06-05
13 202011006236-FORM 3 [02-07-2025(online)].pdf 2025-07-02

Search Strategy

1 202011006236_SearchStrategyNew_E_1001E_04-06-2025.pdf