
A System And Method For Generating Unified Image On A Messaging Platform

Abstract: Disclosed herein is a system and method for generating a unified image on a messaging platform. The present disclosure provides a technique that uses the emotional state of the user and the context of the current conversation to identify keywords. Based on the keywords, the system determines the associated items and selects one of them. Further, the system extracts the user image, the recipient image and the selected item image to generate the unified image.


Patent Information

Application #
Filing Date
18 February 2020
Publication Number
36/2021
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ipo@knspartners.com
Parent Application

Applicants

HIKE PRIVATE LIMITED
Hike Private Limited, 4th Floor, Worldmark 1, Northern Access Road, Aerocity, New Delhi – 110037, India

Inventors

1. Dipankar Sarkar
Hike Private Limited, 4th Floor, Worldmark 1, Northern Access Road, Aerocity, New Delhi – 110037, India
2. Ankur Narang
Hike Private Limited, 4th Floor, Worldmark 1, Northern Access Road, Aerocity, New Delhi – 110037, India
3. Kavin Bharti Mittal
Hike Private Limited, 4th Floor, Worldmark 1, Northern Access Road, Aerocity, New Delhi – 110037, India

Specification

[0001] The present disclosure relates to a messaging platform. More specifically, the present disclosure relates to a system and method for generating unified content to enhance user experience on the messaging platform.
BACKGROUND
[0002] Emojis, stickers and emoticons are among the most common means of conversing these days. If users need to express their feelings or mood, or to present their views quickly, they may simply look for an appropriate emoji/sticker/avatar. In order to retrieve an emoji/sticker based on the emotion or conversation, the user may have to search through various options manually. Further, existing messaging platforms do not propose creating unified content based on the emojis/stickers/avatars of two or more users in a single message. Further, prior art systems also do not provide options for proposing actions based on the conversation along with an aggregation of the emojis/stickers/avatars present in the conversation.
[0003] Moreover, the technical challenge here is to appropriately understand the intention/interest of the users during the conversation and to generate corresponding emojis at the same time. The time needed to process and analyze the chat data while simultaneously generating the corresponding emojis instantly is a challenge in existing systems.
[0004] Thus, it would be beneficial to create a system that may generate unified emojis of users and/or a juxtaposition of an emoji with other digital data that is tailored to the conversation.
OBJECT OF THE INVENTION
[0005] An object of the present disclosure is to dynamically create content representing the interest of two or more users based on detected conversation or emotions.
[0006] Another object of the present disclosure is the customization of various stickers for the user based on the conversation between the users.
[0007] Another object of the present invention is to automatically add emojis in the likeness of celebrities or publicly known individuals, based on the conversation between users, to enrich the user experience.
[0008] Yet another object of the present invention is to automatically generate event-based stickers and transmit the same depending on the user’s or group’s conversation.
SUMMARY
[0009] The present disclosure overcomes one or more shortcomings of the prior art and provides additional advantages discussed throughout the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
[0010] In an embodiment of the present disclosure, a method of generating a unified image on a messaging platform is disclosed. The method comprises capturing current conversation data to determine the user’s emotional state, based on the current conversation between a user and a recipient. The current conversation data comprises user messages and recipient messages in at least one of a textual format, an audio format, an image format and a video format. The method further comprises identifying one or more relevant keywords, from the current conversation data, indicating the interest of the user and the recipient during the conversation. The method further comprises determining one or more items available in the vicinity of the user and the recipient based on the interest indicated while identifying the one or more relevant keywords. Further, the method comprises selecting at least one item amongst the one or more items. The method also comprises extracting a user image, a recipient image and a selected item image from a memory. The method further comprises generating the unified image by combining the user image, the recipient image and the selected item image.
[0011] In another embodiment of the present disclosure, a system for generating a unified image on a messaging platform is disclosed. The system comprises a capturing unit configured to capture current conversation data to determine the user’s emotional state, based on the current conversation between a user and a recipient. The conversation data comprises user messages and recipient messages in at least one of a textual format, an audio format, an image format and a video format. The system further comprises an identification unit configured to identify one or more relevant keywords, from the current conversation data, indicating the interest of the user and the recipient during the conversation. The system further comprises a determination unit configured to determine one or more items available in the vicinity of the user and the recipient based on the interest indicated while identifying the one or more relevant keywords. The system further comprises a selection unit configured to select at least one item amongst the one or more items. The system also comprises an extraction unit configured to extract a user image, a recipient image and a selected item image from a memory. The system further comprises a generation unit configured to generate the unified image by combining the user image, the recipient image and the selected item image.
[0012] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed embodiments. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0014] Figure 1A shows an environment 100 for generating unified content based on a conversation between users, in accordance with an embodiment of the present disclosure;
[0015] Figure 1B shows an exemplary embodiment that represents unified content generation for users on their respective devices, in accordance with an embodiment of the present disclosure;
[0016] Figure 1C shows an exemplary embodiment that represents an interest graph of a user derived from historical conversation, in accordance with an embodiment of the present disclosure;
[0017] Figure 2 shows a block diagram 200 illustrating a system for generating a unified image based on the conversation between users on a messaging platform, in accordance with an embodiment of the present disclosure;
[0018] Figure 3 shows a method 300 of generating a unified image based on the conversation between users on a messaging platform, in accordance with an embodiment of the present disclosure.
[0019] The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
[0020] The foregoing has broadly outlined the features and technical advantages of the present disclosure in order that the detailed description of the disclosure that follows may be better understood. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure.
[0021] The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.
[0022] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0023] Further, terms like “comprises”, “comprising”, or any other variations thereof, are intended to cover non-exclusive inclusions, such that a setup or device that comprises a list of components does not include only those components but may include other components not expressly listed or inherent to such setup or device. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus or device.
[0024] Furthermore, the terms “recipient” and “another user” may be used interchangeably or in combination throughout the description.
[0025] Furthermore, the terms “stickers”, “avatar” and “emoji” may be used interchangeably or in combination throughout the description.
[0026] Disclosed herein is a method and a system for generating unified content on the messaging platform based on the current conversation between users. The demand for real-time experiences is increasing in every field of technology, including instant messaging. The user may use text, image, audio or video in messages to connect with the recipient and to enhance the user experience. The present disclosure detects a shared or mutually dependent action of the user based on a keyword-based search or through a self-learning neural network. Once the mutual action is detected from the conversation, the system may determine one or more items that are present in the memory based on the interest of the user and the recipient. The system may further extract the emoji/sticker/avatar of various users (i.e. user and recipient) based on the content of the conversation and aggregate those along with the extracted item(s) to propose suggestions for the conversation. Furthermore, the present disclosure unveils allowing a user to share the customized sticker with another user during the conversation to enhance the user experience of messaging. Most importantly, said disclosure allows aggregation of multiple stickers into a single sticker based on the content and context of the conversation to generate the unified image.
[0027] The present disclosure determines the availability of items in the memory based on the conversation of the users. In case an item is not available in the memory, the disclosure provides automatic updating of the memory from the web to include the item in the memory and to enhance the user experience. In this way, items are updated dynamically in the memory and aggregated with the stickers available for users present in the chat conversation to generate the unified image.
[001] Figure 1A presents an exemplary environment of a system for generating unified content based on the conversation between users on a messaging platform, in accordance with an embodiment of the present disclosure. Figure 1B shows an application or example of the exemplary environment presented in figure 1A. Figure 1C shows an example of an interest graph generated for the user based on the conversation, in accordance with an embodiment of the present disclosure. Further, figure 1A illustrates the information capturing aspect of the system for the keyword identification, to capture the interest of the user and the recipient to generate the unified image. Figure 1B illustrates the conversation between users (i.e. user and recipient) for generation of unified content based on user interest in real-time. A person skilled in the art, by referring to these figures (1A and 1B), may understand that figure 1B is a real-time implementation of figure 1A. Figure 1C illustrates the interest graph of a user based on the conversation performed by him with another user on the messaging platform. It must also be appreciated that the system may also keep track of activities performed by the user on the web for generating the interest graph for the user. It may also be appreciated that the system presented in figures 1A and 1B and the interest graph generated in figure 1C are exemplary and may also be implemented in various environments and for other users, other than as shown in Figs. 1A, 1B and 1C. The system 100 presents various user devices (102_1……102_n) on one side of the system 100 and various recipient devices (104_1…..104_n) on the other side. It may be understood that the user communicates with the recipient(s) with their devices. These devices may be communicatively coupled to the servers via a communication network.
[0028] The detailed explanation of the exemplary environment 100 is provided in conjunction with Figure 2, which shows a block diagram 200 of a system 202 for generating the unified image, in accordance with an embodiment of the present disclosure. Although the present disclosure is explained considering that the system 202 is implemented on a server, it may be understood that the system 202 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server or a cloud-based computing environment. It may be understood that the system 202 may be accessed by multiple users through one or more user devices or applications residing on the user devices. In one implementation, the system 202 may be implemented in the cloud-based computing environment in which a user may operate individual computing systems configured to execute remotely located applications. Examples of the user devices may include, but are not limited to, an IoT device, an IoT gateway, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices are communicatively coupled to the system 202 through a network.
[0029] In one implementation, the network may be a wireless network, a wired network or a combination thereof. The network may be implemented as one of the different types of networks, such as an intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0030] In one implementation, the system 202 may comprise an I/O interface 204, a processor 206, a GPS receiver 208, a timing circuitry 210, a memory 212, and the units 214. The memory 212 may be communicatively coupled to the processor 206 and the units 214. The processor 206 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 206 is configured to fetch and execute computer-readable instructions stored in the memory 212. The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 202 to interact with the user directly or through the user devices. Further, the I/O interface 204 may enable the system 202 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting many devices to one another or to another server.
[0031] In one implementation, the units 214 may comprise a capturing unit 216, an identification unit 218, a selection unit 220, a determination unit 222, an extraction unit 224 and a generation unit 226. According to embodiments of the present disclosure, these units 216-226 may comprise hardware components like a processor, microprocessor, microcontroller or application-specific integrated circuit for performing various operations of the system 202. It must be understood by a person skilled in the art that the processor 206 may also perform all the functions of the units 216-226 according to various embodiments of the present disclosure.
[0032] As explained above, figure 1A describes the operation of the system 202 to generate a unified image based on the conversation of different users on a messaging platform. The system 202, firstly, captures the current conversation between a user and a recipient via the capturing unit 216. Every conversation is captured and stored in the memory 212 of the system 202. In an exemplary embodiment, the user may converse with the recipient through text messages. In another embodiment, the user may share images while conversing with the recipient. Similarly, the user may converse with the recipient through audio or video messages. It may be understood that the user and the recipient may converse with each other in a single format or may use a combination of formats so that they may express their emotions effectively.
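The multi-format capture described above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are assumptions for the sketch, not the claimed capturing unit 216:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical message record: each entry keeps the sender, the payload (or a
# reference to it) and its format, mirroring the text/audio/image/video
# formats described in paragraph [0032].
@dataclass
class Message:
    sender: str
    content: str
    fmt: str = "text"     # "text" | "audio" | "image" | "video"

@dataclass
class Conversation:
    participants: List[str]
    messages: List[Message] = field(default_factory=list)

    def capture(self, sender: str, content: str, fmt: str = "text") -> None:
        # The capturing unit stores every message of the current conversation.
        self.messages.append(Message(sender, content, fmt))

    def in_format(self, fmt: str) -> List[Message]:
        # Retrieve the subset of the conversation in one format.
        return [m for m in self.messages if m.fmt == fmt]

convo = Conversation(["SAM", "JOHANA"])
convo.capture("SAM", "Hi Johana, how are you?")
convo.capture("JOHANA", "voice-note-001", fmt="audio")
```

A later processing stage could then pull, say, only the audio messages via `convo.in_format("audio")` for speech analysis.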
[0033] The memory 212 may store all the conversation data of the current conversation or past conversations between the user and a plurality of recipients. The memory 212 may also store the queries provided by the user over the web. Based on the searches/queries performed by the user and the time spent by the user on a particular thing/item, the system 202 may generate an interest graph for the user, which may be further stored in the memory 212. An example of the interest graph is shown in figure 1C. For example, if the user “SAM” converses on the messaging platform and keeps searching for items over the web like movies, latest gadgets, new places to roam around, best places for food etc., then the capturing unit 216 of the system 202 captures all these details of the user and provides the information to the memory 212. Based on the information captured by the capturing unit 216, an interest graph is plotted for a particular user. From figure 1C, it may be understood that the user “SAM” is more interested in travel and less interested in fashion. In the same way, the system 202 may create interest graphs for each of the individuals present on the messaging platform. Such an interest graph is created based on the historical information present about the user in the memory.
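A minimal sketch of how such an interest graph might be derived from captured activity, assuming for illustration that each interest category is weighted simply by time spent; the real system may use richer signals such as query frequency and chat history:

```python
from collections import Counter

def build_interest_graph(events):
    """events: iterable of (category, seconds_spent) pairs captured for a user."""
    weights = Counter()
    for category, seconds in events:
        weights[category] += seconds
    total = sum(weights.values())
    # Normalise to a 0..1 interest score per category.
    return {cat: w / total for cat, w in weights.items()}

# SAM's hypothetical activity: plenty of travel searches, little fashion browsing.
graph = build_interest_graph([
    ("travel", 300), ("travel", 200), ("food", 150),
    ("gadgets", 100), ("fashion", 50),
])
top = max(graph, key=graph.get)   # the dominant interest category
```

With this data the dominant category is "travel", matching the figure 1C example where SAM is more interested in travel and less in fashion.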
[0034] Once the conversation data is captured, in the next step, the identification unit 218 may identify one or more relevant keywords from the current conversation data. In an exemplary embodiment, the identification unit 218 may search the chat history of the user for the mutually dependent actions or the interest presented by the user in the chat history. Mainly, the identification unit 218, in communication with the capturing unit 216, checks the recent conversation of the user and the recipient and identifies the emotional state of the user, such as happy, sad, angry, surprised etc. During the current conversation, the identification unit 218 may identify the relevant keywords based on the user interest and recipient interest. The keywords are usually associated with the one or more items that are frequently searched by various users (102_1…..102_n) over the web. For example, if the user “SAM” is conversing with the recipient “Johana” in the below manner:
SAM - Hi Johana, How are you?
Where are you working these days?
JOHANA - Hey Sam, I am gud, working with “XYZ” company
You say….
SAM - I am looking for a job, do you have any reference in “glacia”.
[0035] Here, based on the conversation and the queries searched over the web, the identification unit 218 identifies the relevant keywords such as “work”, “XYZ company”, “glacia” etc. The keywords identified by the system present the interest of the user “SAM” and the recipient “Johana”.
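A minimal keyword-identification sketch over the conversation above, assuming a simple stop-word filter and frequency ranking; the identification unit 218 may instead use chat history and a trained model:

```python
import re

# Illustrative stop-word list; a production system would use a fuller one.
STOP_WORDS = {"hi", "hey", "how", "are", "you", "i", "am", "a", "do", "have",
              "any", "in", "for", "with", "these", "days", "where", "say",
              "looking", "gud", "is", "the"}

def identify_keywords(messages):
    # Count non-stop-word tokens and return them most frequent first.
    counts = {}
    for msg in messages:
        for token in re.findall(r"[a-z]+", msg.lower()):
            if token not in STOP_WORDS:
                counts[token] = counts.get(token, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

chat = [
    "Hi Johana, How are you? Where are you working these days?",
    "Hey Sam, I am gud, working with XYZ company",
    "I am looking for a job, do you have any reference in glacia",
]
keywords = identify_keywords(chat)
```

On this chat the top token is "working" (it appears in two messages), with "xyz", "company", "job" and "glacia" also surfacing, roughly matching the keywords the paragraph above names.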
[0036] Once the emotional state of the user, the user-recipient mutual interest and the relevant keywords are identified, the determination unit 222 may determine the items available in the vicinity of the user and the recipient. The availability of the items in the vicinity of the user or recipient may be determined by the GPS receiver 208. The GPS receiver 208 may easily determine the location of the user and the recipient, and based on the current location of the user or recipient, the determination unit 222 may determine the availability of those items in which the user or recipient or both may be interested. In an exemplary embodiment, the vicinity of the user from the item may be considered as a distance of 5-10 km from the current location of the user. In another exemplary embodiment, the vicinity may be decided from different sets of distance ranges from the current location of the user where the item may be available, or where there are vendors available who may help the user/recipient in making the item available within a certain distance.
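A vicinity check of this kind can be sketched with the haversine great-circle distance, assuming the GPS receiver 208 supplies latitude/longitude coordinates; the 10 km radius and the item names below are illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))   # 6371 km: mean Earth radius

def items_in_vicinity(user_pos, items, radius_km=10.0):
    # Keep only the items within the configured vicinity radius.
    lat, lon = user_pos
    return [name for name, (ilat, ilon) in items.items()
            if haversine_km(lat, lon, ilat, ilon) <= radius_km]

# Hypothetical coordinates around New Delhi.
user = (28.61, 77.21)
items = {"roof plaza": (28.63, 77.22), "far resort": (29.50, 78.00)}
nearby = items_in_vicinity(user, items)   # only "roof plaza" is within 10 km
```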
[0037] Similarly, a timing circuitry 210 may also impact the vicinity of the item for the user and the recipient. In an exemplary embodiment, the determination unit 222 may connect with the timing circuitry 210 and the GPS receiver 208 to check the vicinity of the item. For example, the item may be available in the store from 10am to 8pm only. Then, the determination unit 222 may not be able to gather the data for the vicinity or availability of the item if not connected with the timing circuitry 210 and GPS receiver 208. However, if timing and location are not constraints, then the determination unit 222 may determine on its own the vicinity of the item associated with the relevant keyword based on user and recipient interest.
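The timing constraint can be sketched as an opening-hours filter, following the 10am-8pm store example above; the store names and hours here are illustrative assumptions for what the timing circuitry 210 would report:

```python
def available_items(hour, stores):
    """stores maps item name -> (open_hour, close_hour) in 24h time.

    An item counts as available only while the current hour falls
    inside the vendor's opening window.
    """
    return [name for name, (open_h, close_h) in stores.items()
            if open_h <= hour < close_h]

stores = {"roof plaza": (10, 20), "night cafe": (18, 23)}
open_now = available_items(12, stores)    # at noon only "roof plaza" is open
```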
[0038] The selection unit 220 may help in selecting the item amongst the one or more items available in the vicinity of the user. In other words, it may be understood that the role of the selection unit 220 is to arrange the items based on the interest of the individual. For selecting the item, the selection unit 220 may consider different parameters. In an exemplary embodiment, different items are available for a relevant keyword which is identified from the current conversation of the user and the recipient. In some cases, one or more item providers or vendors may participate in bidding to provide their items to the users of the messaging platform. Then, the selection unit 220 may select a set of bids amongst a plurality of bids based on the interest of the user and the recipient during the current conversation. Referring back to the example shown in figure 1B, the capturing unit 216 may capture the emotional state of the user “SAM” during the conversation with “Johana”. For example, the capturing unit 216 may capture the emotional state of the user as “waiting” (the emotional state is captured through the current conversation data). Then the identification unit 218 may determine that the user is interested in “meeting”. Thereafter, the determination unit 222 may determine the places that are within the vicinity of the user and recipient and provide a good atmosphere for meeting. The selection unit 220 may analyze the bids based on bidding criteria which may include, but are not limited to, user web history, geographic location, and timing of the conversation between the user and the recipient. The selection unit 220 then selects at least one bid, amongst the set of bids, satisfying the bidding criteria. In an exemplary embodiment, the selection unit 220 identifies that, as per the web history of the user, “SAM likes roof plaza”, as he may have searched for “roof plaza”, and the location is near to both “SAM” and “Johana”. Accordingly, the selection unit 220 may select the bid related to “roof plaza”. Further, the selection unit 220 may also search for other location options based on availability of seats, interest of the recipient “Johana”, timing availability and other factors. As shown in figure 1B, the selection unit 220 has provided the first option as “roof plaza”, the second option as “bio diversity park” and the third option as “solicit comfort”. A person skilled in the art must appreciate that figure 1B has captured only three options; however, the selection unit 220 of the system 202 may provide fewer or more options depending on the factors explained in the above paragraph.
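The bid selection can be sketched as a scoring function over the criteria listed above (web-history match, distance, timing fit). The weights and bid fields below are illustrative assumptions, not the claimed bidding algorithm:

```python
def score_bid(bid, user_profile):
    # Higher score means a better fit to the conversation context.
    score = 0.0
    if bid["item"] in user_profile["web_history"]:
        score += 2.0                                  # prior interest weighs most
    score += max(0.0, 1.0 - bid["distance_km"] / 10.0)  # closer is better
    if bid["open_during_chat"]:
        score += 1.0                                  # timing fit
    return score

def select_bids(bids, user_profile, top_n=3):
    # Rank all vendor bids and keep the best top_n options.
    ranked = sorted(bids, key=lambda b: score_bid(b, user_profile), reverse=True)
    return [b["item"] for b in ranked[:top_n]]

profile = {"web_history": {"roof plaza"}}
bids = [
    {"item": "roof plaza", "distance_km": 3.0, "open_during_chat": True},
    {"item": "bio diversity park", "distance_km": 5.0, "open_during_chat": True},
    {"item": "solicit comfort", "distance_km": 8.0, "open_during_chat": False},
]
options = select_bids(bids, profile)
```

With these inputs the ranking reproduces the figure 1B ordering: "roof plaza" first (it matches SAM's web history), then "bio diversity park", then "solicit comfort".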
[0039] In an embodiment, the selection unit 220 may provide an image of the selected item. For example, in the above case, the selection unit 220 may provide an image of “roof plaza”, “bio diversity park” and “solicit comfort” for generating the unified image. In an exemplary embodiment, the selection unit 220 may also provide a link to the selected item for the convenience of the user and the recipient. By clicking on the link, the user or the recipient may check the specification of the item and may also book or purchase the item for their personal use or as a gifting option. In an exemplary embodiment, the link may also direct the user to the website of the vendor, to a voucher, to a map/directions etc. It should be apparent to a person skilled in the art that the item may be represented in other forms also and the above are just a few examples.
[0040] Once the selection unit 220 selects the item, the extraction unit 224 may extract the image of the user and the recipient according to their emotional states for that selected item. The generation unit 226 may then combine the image of the user, the image of the recipient and the image of the selected item to generate a unified image for presentation on the user’s screen instantly during the conversation. In this way, the system enhances the user experience and may also help with better decision making. In a similar way, the generation unit 226 may also provide similar unified image options to the recipient on their messaging platform. However, the unified images of the recipient may depend on their interest and keywords and be based on their interest graph.
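The combining step can be sketched as a side-by-side composition. Images are modelled here as 2-D pixel grids purely for illustration; a production generation unit 226 would composite real bitmaps:

```python
def combine_side_by_side(*images):
    # Place each image's pixels next to the previous one, row by row,
    # e.g. user image | item image | recipient image.
    height = max(len(img) for img in images)
    unified = []
    for row in range(height):
        line = []
        for img in images:
            # Pad shorter images with blank pixels (0) so rows stay aligned.
            line.extend(img[row] if row < len(img) else [0] * len(img[0]))
        unified.append(line)
    return unified

user_img      = [[1, 1], [1, 1]]   # toy 2x2 "images", one value per pixel
item_img      = [[2, 2], [2, 2]]
recipient_img = [[3, 3], [3, 3]]
unified = combine_side_by_side(user_img, item_img, recipient_img)
# Each row now holds user pixels, then item pixels, then recipient pixels.
```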
[0041] In an exemplary embodiment, the memory 212 may store the user image, the recipient image and the selected item image in the form of emoticons, stickers, avatars and emojis. The system 202 may provide various options to the user depending on user preference. In an exemplary embodiment, the system 202 may provide the options to the user based on user preference; however, the same may be converted to a different form based on recipient preference. For example, the user “SAM” may like avatars for messaging whereas the recipient “Johana” likes stickers for messaging. In this case, the system 202 may provide the unified image in the form of an avatar at “Sam’s” end and may convert the same into sticker format to serve “Johana”. In another exemplary embodiment, the system 202 looks at the current conversation and infers the presence of certain "mutual" action keywords. In an embodiment, the user may express his/her emotional state by an expression such as "I X you" or "Let's do Y together" and so on. The system 202 may convert these actions into a sticker that contains the user's emoji, the recipient's emoji and the selected item representing the context (text, other information). For example, the system 202 may convert the expression “I hate you” into a unified content comprising an angry self/user emoji + a dazed emoji of the recipient.
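Detecting such mutual-action expressions can be sketched with two simple patterns for the "I X you" and "Let's do Y together" templates named above; a real system may instead use a learned model:

```python
import re

# Patterns for the two illustrative templates; each captures the action word.
PATTERNS = [
    re.compile(r"\bI\s+(\w+)\s+you\b", re.IGNORECASE),
    re.compile(r"\blet'?s\s+do\s+(\w+)\s+together\b", re.IGNORECASE),
]

def mutual_action(text):
    # Return the detected action word, or None if no template matches.
    for pattern in PATTERNS:
        match = pattern.search(text)
        if match:
            return match.group(1).lower()
    return None

action = mutual_action("I hate you")                 # detects "hate"
plan   = mutual_action("Let's do dinner together")   # detects "dinner"
```

The detected action word could then select the emotional styling of the user and recipient emojis in the unified sticker.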
[0042] Referring back to figure 1B in conjunction with figure 2, figure 1B describes the real-time application of the unified image generation for the user on the messaging platform to cater to the user’s interest and emotional state. Once the system 202 has been trained with the selection and generation of the unified images, the system 202 may operate in real-time to provide different unified image options to the user or recipient based on their mutual interest during the conversation.
[0043] In an exemplary embodiment, the system 202 may automatically search the calendar of the user for important dates such as birthdays, anniversaries, friendship day, new year, Valentine's Day, the day of addition as friends, etc., which involve a social aspect, and present a unified image of the user based on the event with the corresponding recipient on the messaging platform. Based on the chat history or the context of the conversation, the system 202 may suggest such emojis/stickers/emoticons to the users. In one example, if the user creates an emoji/sticker/emoticon expressing his feelings for his girlfriend, the system 202 may automatically suggest combined stickers of the user's emotion for his girlfriend on her birthday or Valentine's Day etc. In another exemplary embodiment, the user may want to use an emoji in the context of asking the recipient to a dinner or movie. In that aspect, a unified image of the user and recipient along with the theme of dinner or movie may also be generated on the messaging platform. Such visualization helps the sender to express his emotions naturally.
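The calendar lookup can be sketched as a date-keyed table of social events; the event names and dates below are hypothetical stand-ins for entries read from the user's calendar:

```python
from datetime import date

# Hypothetical social-event calendar: (day, month) -> sticker suggestion.
EVENTS = {
    (14, 2): "valentines_day_sticker",
    (1, 1):  "new_year_sticker",
    (9, 6):  "girlfriend_birthday_sticker",
}

def suggest_event_sticker(today: date):
    # Return an event-based sticker suggestion when today matches, else None.
    return EVENTS.get((today.day, today.month))

suggestion = suggest_event_sticker(date(2021, 2, 14))   # Valentine's Day
```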
[0044] Figure 3 depicts a method 300 for generating a unified image based on the conversation between users on a messaging platform, in accordance with an embodiment of the present disclosure. As illustrated in figure 3, the method 300 includes one or more blocks illustrating a method for generating a unified image. The method 300 may be described in the general context of computer-executable instructions. Generally, computer-executable instructions may include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform specific functions or implement specific abstract data types.
10 [0045] The order in which the method 300 is described is not intended to be construed as a
limitation, and any number of the described method blocks may be combined in any order
to implement the method. Additionally, individual blocks may be deleted from the methods
without departing from the spirit and scope of the subject matter described.
[0046] At block 302, the method 300 may include capturing current conversation data to
determine the user's emotional state, based on the current conversation between a user and a
recipient. The emotional state of the user may be identified as "sad", "happy", "surprised",
"waiting", "idle", "enjoying", "laughing", etc. Further, the current conversation data
comprises user messages and recipient messages in at least one of a textual format, an audio
format, an image format and a video format. The method also includes extracting an interest
graph of the user available in the memory to identify the interest of the user for the one or
more items. The interest graph is generated based on historical conversation data available
for the user in the memory.
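The emotion-detection step at block 302 could be sketched, under simplifying assumptions, as a cue-word classifier over recent messages. The function name `classify_emotion` and the cue-word lists are illustrative inventions, not part of the disclosure; a deployed system would more plausibly use a trained sentiment model.

```python
# Illustrative sketch of block 302: classifying the user's emotional state
# from recent conversation text. Cue-word lists are hypothetical examples.

EMOTION_CUES = {
    "happy": {"great", "awesome", "yay", "love"},
    "sad": {"miss", "sorry", "unfortunately", "sad"},
    "surprised": {"wow", "really", "unbelievable"},
}

def classify_emotion(messages):
    """Return the emotional state whose cue words occur most often."""
    scores = {state: 0 for state in EMOTION_CUES}
    for text in messages:
        for word in text.lower().split():
            word = word.strip(".,!?")
            for state, cues in EMOTION_CUES.items():
                if word in cues:
                    scores[state] += 1
    best = max(scores, key=scores.get)
    # Fall back to a neutral state when no cue word matched.
    return best if scores[best] > 0 else "idle"
```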
[0047] At block 304, the method 300 may include identifying one or more relevant
keywords, from the current conversation data, indicating the interest of the user and the
recipient during the conversation. The one or more relevant keywords, amongst the plurality
of keywords, are associated with the one or more items frequently searched on the web.
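The keyword-identification step at block 304 might be sketched as matching conversation words against a vocabulary of keywords that are known to map to items. The `KEYWORD_ITEMS` mapping below is an invented stand-in for the item associations the text describes.

```python
# Hypothetical sketch of block 304: a keyword is "relevant" here when it
# appears in a prebuilt keyword-to-items mapping (invented for the demo).

KEYWORD_ITEMS = {
    "dinner": ["restaurant"],
    "movie": ["cinema"],
    "coffee": ["cafe"],
}

def identify_keywords(conversation_data):
    """Return keywords from the conversation that map to known items."""
    found = []
    for message in conversation_data:
        for word in message.lower().split():
            word = word.strip(".,!?")
            if word in KEYWORD_ITEMS and word not in found:
                found.append(word)
    return found
```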
[0048] At block 306, the method 300 may include determining one or more items available
in the vicinity of the user and the recipient based on the interest indicated while identifying the
one or more relevant keywords. The vicinity may be determined by the location of the user
and the recipient. In an exemplary embodiment, the item may be considered in the vicinity of
the user when it is located within a distance of 20 km from the current location of the user.
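The 20 km vicinity test in block 306 can be sketched with a great-circle distance check. The item records and coordinates are illustrative; only the 20 km radius comes from the text.

```python
import math

# Sketch of block 306: an item counts as "in the vicinity" when it lies
# within 20 km of the user's current location, per the example above.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def items_in_vicinity(user_loc, items, radius_km=20.0):
    """Filter items to those within radius_km of the user's location."""
    return [
        item for item in items
        if haversine_km(user_loc[0], user_loc[1], item["lat"], item["lon"]) <= radius_km
    ]
```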
[0049] At block 308, the method 300 may include selecting at least one item amongst the
one or more items. The selection of at least one item may include selecting a set of bids,
amongst a plurality of bids, based on the one or more items associated with the relevant keyword
identified in the current conversation; analyzing the set of bids based on bidding
criteria comprising at least one of the user's web history, the geographic location of the user and
the current time; and selecting at least one bid, amongst the set of bids, satisfying the bidding
criteria, wherein the at least one item is selected corresponding to the selected at least one
bid.
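The bid-selection logic of block 308 might be sketched as filtering bids by keyword, then by criteria such as location and time of day, and awarding the highest-value eligible bid. The bid record fields (`keyword`, `amount`, `location`, hour windows) are assumptions made for illustration.

```python
# Sketch of block 308: choosing an item via the bidding step described
# above. Bid record fields and the tie-break rule are hypothetical.

def select_item(bids, keyword, user_location, current_hour):
    """Pick the highest eligible bid for the keyword; return its item."""
    candidates = [b for b in bids if b["keyword"] == keyword]
    eligible = [
        b for b in candidates
        if b.get("location", user_location) == user_location
        and b.get("start_hour", 0) <= current_hour < b.get("end_hour", 24)
    ]
    if not eligible:
        return None
    winner = max(eligible, key=lambda b: b["amount"])
    return winner["item"]
```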
[0050] At block 310, the method 300 may include extracting the user image, recipient image
and selected item image from a memory. The images of the user, the recipient and the selected
item may be selected from at least one of emoticons, stickers, and emojis, prestored in the
memory.
[0051] At block 312, the method 300 may include generating the unified image by
combining the user image, the recipient image and the selected item image.
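The composition step at block 312 could be sketched as laying the three images out on a shared canvas. Images here are simple stand-in records (name plus width), since the disclosure does not fix a raster format; a real implementation might composite bitmaps with a library such as Pillow.

```python
# Sketch of block 312: composing the unified image from the user image,
# the selected item image, and the recipient image. The left-to-right
# layout and the gap size are illustrative choices, not from the text.

def generate_unified_image(user_img, recipient_img, item_img, gap=10):
    """Lay the three images out left to right and return placement data."""
    layout, x = [], 0
    for img in (user_img, item_img, recipient_img):  # item in the middle
        layout.append({"name": img["name"], "x": x})
        x += img["width"] + gap
    # Total width excludes the trailing gap after the last image.
    return {"width": x - gap, "parts": layout}
```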
[0052] A description of an embodiment with several components in communication with
each other does not imply that all such components are required. On the contrary, a variety
of optional components are described to illustrate the wide variety of possible embodiments
of the invention.
[0053] When a single device or article is described herein, it will be clear that more than
one device/article (whether or not they cooperate) may be used in place of a single device/article.
Similarly, where more than one device or article is described herein (whether or not they
cooperate), it will be clear that a single device/article may be used in place of the more than
one device or article, or a different number of devices/articles may be used instead of the
shown number of devices or programs. The functionality and/or the features of a device
may be alternatively embodied by one or more other devices which are not explicitly
described as having such functionality/features. Thus, other embodiments of the invention
need not include the device itself.
[0054] Finally, the language used in the specification has been principally selected for
readability and instructional purposes, and it may not have been selected to delineate or
circumscribe the inventive subject matter. It is therefore intended that the scope of the
invention be limited not by this detailed description, but rather by any claims that issue on
an application based hereon. Accordingly, the embodiments of the present invention are
intended to be illustrative, but not limiting, of the scope of the invention, which is set forth
in the following claims.
[0055] While various aspects and embodiments have been disclosed herein, other aspects
and embodiments will be apparent to those skilled in the art. The various aspects and
embodiments disclosed herein are for purposes of illustration and are not intended to be
limiting, with the true scope and spirit being indicated by the following claims.
[0056] The illustrated steps are set out to explain the exemplary embodiments shown, and
it should be anticipated that ongoing technological development will change the manner in
which particular functions are performed. These examples are presented herein for purposes
of illustration, and not limitation. Further, the boundaries of the functional building blocks
have been arbitrarily defined herein for the convenience of the description. Alternative
boundaries may be defined so long as the specified functions and relationships thereof are
appropriately performed. Alternatives (including equivalents, extensions, variations,
deviations, etc., of those described herein) will be apparent to persons skilled in the relevant
art(s) based on the teachings contained herein. Such alternatives fall within the scope and
spirit of the disclosed embodiments. It must also be noted that as used herein and in the
appended claims, the singular forms "a," "an," and "the" include plural references unless
the context clearly dictates otherwise.
[0057] Advantages of the embodiments of the present disclosure are illustrated herein:
1. Correctly presenting the emotional state of the user and generating a unified image
with the recipient to express the emotions of the user for the recipient.
2. Providing a personalized experience to the recipients while conversing with the
user on messaging platforms.
Reference Numerals:

Reference Numeral   Description
100A                Exemplary environment for generating unified image on messaging platform
100B                Exemplary environment for generation of unified image in real-time
100C                Exemplary interest graph of the user
200                 Block diagram of the system
202                 System
204                 I/O Interface
206                 Processor
208                 GPS Receiver
210                 Timing circuitry
212                 Memory
212a                User Image
212b                Web/chat history
212c                Interest Graph
214                 Units
216                 Capturing unit
218                 Identification unit
220                 Selection unit
222                 Determination unit
224                 Extraction Unit
226                 Generation Unit
300                 Method for generating unified image on a messaging platform

We Claim:

1. A method of generating unified image on a messaging platform, the method comprising:
capturing (302) current conversation data to determine user’s emotional state, based
on current conversation between a user and a recipient, wherein current conversation data
comprises user messages and recipient messages in at least one of a textual format, an audio
format, an image format and a video format;
identifying (304) one or more relevant keywords, from the current conversation
data, indicating interest of the user and the recipient during the conversation;
determining (306) one or more items available in vicinity of the user and the
recipient based on the interest indicated while identifying the one or more relevant
keywords;
selecting (308) at least one item amongst the one or more items;
extracting (310) user image, recipient image and selected item image from a
memory; and
generating (312) the unified image by combining the user image, the recipient image
and the selected item image.
2. The method as claimed in claim 1, wherein the one or more relevant keywords, amongst
the plurality of keywords, are associated with the one or more items frequently being
searched on web.
3. The method as claimed in claim 1, wherein the selecting at least one item amongst the one
or more items further comprising:
selecting a set of bids amongst a plurality of bids based on the one or more items associated
with relevant keyword identified in the current conversation;
analyzing the set of bids based on a bidding criteria comprising at least one of user web
history, geographic location of the user and current time; and
selecting at least one bid, amongst the set of bids, satisfying the bidding criteria, wherein
the at least one item is selected corresponding to the selected at least one bid.
4. The method as claimed in claim 1, wherein the user image, the recipient image and the
selected item image comprises at least one of emoticons, stickers, and emojis, prestored in
the memory.
5. The method as claimed in claim 1, further comprising:
extracting an interest graph of the user available in the memory to identify the interest of
the user for the one or more items, wherein the interest graph is generated based on historical
conversation data available for the user in the memory.
6. A system for generating unified image on a messaging platform, the system comprising:
a capturing unit (216) configured to capture current conversation data to determine
user’s emotional state, based on current conversation between a user and a recipient,
wherein current conversation data comprises user messages and recipient messages in at
least one of a textual format, an audio format, an image format and a video format;
an identification unit (218) configured to identify one or more relevant keywords,
from the current conversation data, indicating interest of the user and the recipient during
the conversation;
a determination unit (222) configured to determine one or more items available in
vicinity of the user and the recipient based on the interest indicated while identifying the
one or more relevant keywords;
a selection unit (220) configured to select at least one item amongst the one or more
items;
an extraction unit (224) configured to extract user image, recipient image and
selected item image from a memory; and
a generation unit (226) configured to generate the unified image by combining the
user image, the recipient image and the selected item image.
7. The system as claimed in claim 6, wherein the one or more relevant keywords, amongst the
plurality of keywords, are associated with the one or more items frequently being searched
on web.
8. The system as claimed in claim 6, wherein the selection unit (220) is further configured to
select at least one item amongst the one or more items by:
selecting a set of bids amongst a plurality of bids based on the one or more items associated
with the relevant keyword identified in the current conversation;
analyzing the set of bids based on a bidding criteria comprising at least one of user web
history, geographic location of the user and current time; and
selecting at least one bid, amongst the set of bids, satisfying the bidding criteria, wherein
the at least one item is selected corresponding to the selected at least one bid.
9. The system as claimed in claim 6, wherein the user image, the recipient image and the
selected item image comprises at least one of emoticons, stickers, and emojis, prestored in
the memory.
10. The system as claimed in claim 6, wherein the extraction unit (224) is further configured
to:
extract an interest graph of the user available in the memory to identify the interest of the
user for the one or more items, wherein the interest graph is generated based on historical
conversation data available for the user in the memory.

Documents

Application Documents

# Name Date
1 202011006990-FORM 18 [28-12-2023(online)].pdf 2023-12-28
2 202011006990-STATEMENT OF UNDERTAKING (FORM 3) [18-02-2020(online)].pdf 2020-02-18
3 abstract.jpg 2021-10-18
4 202011006990-PROVISIONAL SPECIFICATION [18-02-2020(online)].pdf 2020-02-18
5 202011006990-POWER OF AUTHORITY [18-02-2020(online)].pdf 2020-02-18
6 202011006990-COMPLETE SPECIFICATION [18-02-2021(online)].pdf 2021-02-18
7 202011006990-CORRESPONDENCE-OTHERS [18-02-2021(online)].pdf 2021-02-18
8 202011006990-FORM 1 [18-02-2020(online)].pdf 2020-02-18
9 202011006990-DRAWINGS [18-02-2020(online)].pdf 2020-02-18
10 202011006990-DRAWING [18-02-2021(online)].pdf 2021-02-18
11 202011006990-DECLARATION OF INVENTORSHIP (FORM 5) [18-02-2020(online)].pdf 2020-02-18
12 202011006990-Covering Letter [01-10-2020(online)].pdf 2020-10-01
13 202011006990-Proof of Right [28-09-2020(online)].pdf 2020-09-28
14 202011006990-PETITION u-r 6(6) [01-10-2020(online)].pdf 2020-10-01