
A System And Method For An Automated Sticker Assembly

Abstract: The present disclosure relates to a system and method for automated assembly of one or more stickers at a social networking application on a user device. In a preferred example, the method comprises: deriving at least one of a phrase, language and a theme based on real-time input data received at the user device, using a machine learning module; determining, from a corpus of one or more components of pre-stored stickers, a layout and a background template of the one or more stickers based on at least the derived theme; and determining, from said corpus, at least one of a character, a relevant phrase and a combination thereof, of the one or more stickers based on the received input data. Further, the method comprises assembling, in real time, the one or more stickers using the determined layout, said background template and said at least one of the character, the relevant phrase and a combination thereof.


Patent Information

Application #: 201911001913
Filing Date: 16 January 2019
Publication Number: 35/2020
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: patent@saikrishnaassociates.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-01-13
Renewal Date:

Applicants

HIKE PRIVATE LIMITED
World Mark 1, 4th Floor, Tower-A, Asset Area No. 11, Hospitality District, Indira Gandhi International Airport, New Delhi- 110037, India

Inventors

1. DEBDOOT MUKHERJEE
I-1715, First Floor, Chittaranjan Park, New Delhi - 110019
2. KAVIN BHARTI MITTAL
World Mark 1, 4th Floor, Tower-A, Asset Area No. 11, Hospitality District, Indira Gandhi International Airport, New Delhi- 110037, India
3. MURTUZA ALI HUSSAINY
C-69, 3rd Floor, Malviya Nagar, C Block, New Delhi
4. DHANANJAY DINKARNATH GARG
M8, Yeshwant Nagar North Ambazari Road Nagpur - 440033
5. JASMINE RAMCHANDANI
23/63 Old Rajendra Nagar, New Delhi - 110060

Specification

FIELD OF INVENTION
The present invention generally relates to the field of social networking applications, and more particularly, to systems and methods for automated sticker assembly.
BACKGROUND
This section is intended to provide information relating to the field of the invention, and thus any approach or functionality described below should not be assumed to qualify as prior art merely by its inclusion in this section.
Millions of people use portable electronic devices for daily communication. Chat is where users spend much, perhaps most, of their time on these devices. During a chat, users may express their emotional state using visual expressions such as stickers. To input a sticker during a chat, a user opens a sticker selection window or palette, and then selects and inputs an appropriate sticker. Typically, a pool of stickers is locally stored on the user's electronic device and can be accessed through the sticker selection window or palette.

Since there are so many ways of expressing the same thing in a chat, it is difficult for a user to find an appropriate sticker in the stored pool. Moreover, the user or the social networking application has to regularly update the stored pool in order to obtain stickers that were previously absent from it. Even then, the user may not find a sticker appropriate for the emotion the user wants to convey in the chat. Furthermore, the user has to purchase new stickers in order to update the locally stored pool, which is both time-consuming and costly. Considering the limited storage available on portable electronic devices, it is inefficient to store a large pool of stickers locally on the device. Even if the pool is stored remotely on a server, the user will still occasionally face the problem of a required sticker being unavailable. Therefore, there is a need to alleviate the problems existing in the prior art and to develop a more efficient solution for providing stickers to users.
SUMMARY
This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In order to overcome at least a few problems associated with the known solutions described in the previous section, an object of the present disclosure is to provide a method and system for automated sticker assembly.

One aspect of the present disclosure relates to a method for automated assembly of one or more stickers at a social networking application on a user device. The method comprises receiving a user input at the social networking application on the user device and deriving, in real time, input data at the user device based on the user input. Next, the method includes deriving at least one of a phrase, language and a theme based on the received input data using a machine learning module; determining, from a corpus of one or more components of pre-stored stickers, a layout and a background template of the one or more stickers based on at least the derived theme; determining, from said corpus, at least one of a character, a relevant phrase and a combination thereof, of the one or more stickers based on the received input data; and assembling, in real time, the one or more stickers using the determined layout, said background template and said at least one of the character, the relevant phrase and a combination thereof.
Further, another aspect of the present disclosure encompasses a system for automated assembly of one or more stickers at a social networking application on a user device. The system comprises a corpus of one or more components of pre-stored stickers, wherein each component is associated with one or more anchor points and one or more tags, and said one or more components comprise one or more of pre-defined layouts, pre-defined background templates and pre-defined characters. The system also comprises an input unit configured to receive, in real time, a user input at the social networking application on the user device. Further, the system comprises a processing unit, coupled to said corpus and said input unit, wherein the processing unit is configured to derive, in real time, input data at the user device based on the user input; derive at least one of a phrase, language and a theme based on the received input data using a machine learning module; determine, from said corpus, a layout and a background template of the one or more stickers based on at least the derived theme; determine, from said corpus, at least one of a character, a relevant phrase and a combination thereof, of the one or more stickers based on the received input data; and assemble, in real time, the one or more stickers using the determined layout, said background template and said at least one of the character, the relevant phrase and a combination thereof.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of the electrical components, electronic components or circuitry commonly used to implement such components.
FIG. 1 illustrates a block diagram of the system unit [100], in accordance with an exemplary embodiment of the present disclosure.

FIG. 2 illustrates an exemplary method flow diagram [200] depicting a method for automated sticker assembly.

FIG. 3 illustrates an exemplary method flow diagram [300] depicting a method for generating a text sticker in real time.

FIG. 4 illustrates an exemplary method flow diagram [400] depicting a method for generating a character sticker in real time.

FIG. 5 illustrates an exemplary method flow diagram [500] depicting a method for generating, in real time, a sticker comprising a character and text.

The foregoing shall be more apparent from the following more detailed description of the disclosure.
DESCRIPTION OF THE INVENTION
In the following description, for the purposes of explanation, numerous examples have been set forth in order to provide a brief description of the invention. It will be apparent, however, that the invention may be practiced without these specific details, features and examples, and the scope of the present invention is not limited to the examples provided herein below.
The present invention facilitates recommendation of a sticker in a social networking application, wherein the social networking application is typically installed on or resides on a personal electronic device. The personal electronic device may include, but is not limited to, a smart phone, a mobile phone, a tablet, a personal computer, etc.; and the social networking application may include a messaging application or any other social networking application.

As used herein, a “sticker” denotes a graphic image or illustration which can be placed on or added to a chat on a social networking application to express an emotion, sentiment, thought or action through, among other means, cartoons, animations, elaborate emoticons and emojis.
As used herein, a “processing unit” or “processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, an Application Specific Integrated Circuit, a Field Programmable Gate Array circuit, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.

As used herein, a “memory unit” refers to a storage area to store data. The memory unit may include volatile or non-volatile memory, and may be a random access memory, a read only memory, an EROM, an EPROM, an NVRAM, or an external memory device such as a pen drive, hard disk drive, flash memory, magnetic tape, etc.
The system of the present invention efficiently creates a sticker by assembling it on the fly as the message is typed by a user during a chat. The system assembles the sticker by putting together suitable sets of sticker parts from a corpus created for different chat phrases. The system includes a corpus of components for a defined set of sticker parts, wherein each component is annotated with anchor points to hook other parts onto, and each component is appropriately tagged to facilitate automatic assembly. The system further utilizes machine learning modules which extract a list of commonly used chat phrases across different languages and link them to themes, for example based on emotions or types of expressions. The system is configured to assemble the sticker at run time by putting together suitable sets of sticker parts from the available corpus for different chat phrases. The system of the present invention may be used to scale sticker creation for common chat phrases. Therefore, the system of the present invention not only makes texting easier and faster, but also helps users communicate their emotions in a way that text alone never can. Further, the system can dynamically assemble a sticker for anything a user would like to communicate in a chat. A minimal data-model sketch of such a tagged, anchored component corpus is given below.
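By way of illustration only, the following Python sketch shows one plausible shape for such a corpus; the names (StickerComponent, ComponentCorpus, find) and the tag/anchor representations are assumptions made for this sketch, not details taken from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class StickerComponent:
    """One reusable sticker part (layout, background, character, prop, ...)."""
    component_id: str
    kind: str  # e.g. "layout", "background", "character", "phrase", "prop"
    # Named anchor points (normalized 0..1 coordinates) used to hook
    # other parts onto this component during assembly.
    anchor_points: dict = field(default_factory=dict)
    # Tags that drive automatic selection, e.g. {"theme": "sports"}.
    tags: dict = field(default_factory=dict)

class ComponentCorpus:
    """In-memory corpus of pre-stored sticker components, searchable by tags."""

    def __init__(self, components):
        self._components = list(components)

    def find(self, kind, **required_tags):
        """Return all components of the given kind whose tags match the query."""
        return [
            c for c in self._components
            if c.kind == kind
            and all(c.tags.get(k) == v for k, v in required_tags.items())
        ]

# Example query: all backgrounds tagged with a cricket theme.
# cricket_backgrounds = corpus.find("background", theme="cricket")
```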
Referring to FIG. 1, an architecture of the system unit [100] is shown in accordance with exemplary embodiments of the present invention. The system [100] comprises at least one input unit [102], at least one processing unit [104] and at least one memory unit [106]. The memory unit [106] comprises a corpus of one or more components of pre-stored stickers, wherein each component is associated with one or more anchor points and one or more tags. Further, the said components comprise one or more of pre-defined layouts, pre-defined background templates and pre-defined characters. The system [100] may be installed on, or reside within, at least one of the user device, a server unit, or partly in the user device and partly in the server unit.

The input unit [102] is configured to receive user input at the social networking application of the user device, based on which input data is derived in real time at the user device. The user input may be a text input and/or a voice input. In case a voice-based input is received at the user device, the invention encompasses converting the voice to text and using the text to derive further information.
Further, the system [100] comprises at least one processing unit [104], coupled to said memory unit [106] and said input unit [102]. The processing unit [104] is configured to derive input data based on the user input and other information, wherein the input data may include, but is not limited to, a phrase, a language and/or a theme. Further, in an example, the input data may include realism, a treatment to be performed on a sticker, and/or a font style to be used on the sticker. The processing unit [104] is configured to fetch/derive at least one of a phrase, language and a theme based on the received input data automatically from the at least one input unit [102] using machine learning modules. Further, in an example, the processing unit [104] is configured to determine a theme to be used for a sticker, for example based at least on emotions, types of expressions or other input data. The processing unit [104] is further configured to determine a layout and a background template to be used for a particular theme, using the machine learning modules. Using neural-network-based multi-class classification models, given a theme and a summary, the layout and background template are selected with high probability. This deep neural network also takes user personalization parameters as input, which help in selecting the layout and background template that best matches the preferences of the user; these parameters include time of day, day of week, gender, relationship, etc. In an example, once the theme is selected by the system [100], the system [100] determines a suitable layout for the determined theme; in another example, where the system [100] is not able to determine a specific layout in accordance with the theme, the system [100] uses a generic layout which is universal and is applicable to all stickers. A sketch of such a classifier is given below.
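The specification names no particular network or framework; as a hedged illustration, a neural multi-class selector over (theme, summary, personalization) features could look roughly like the following PyTorch sketch, where the embedding size, hidden width and feature encoding are all assumptions:

```python
import torch
import torch.nn as nn

class LayoutSelector(nn.Module):
    """Multi-class classifier: (theme, summary, personalization) -> layout.

    Illustrative sketch only: the dimensions and architecture are assumed,
    not specified in the disclosure.
    """

    def __init__(self, n_themes, summary_dim, n_personal_feats, n_layouts):
        super().__init__()
        self.theme_emb = nn.Embedding(n_themes, 32)
        self.mlp = nn.Sequential(
            nn.Linear(32 + summary_dim + n_personal_feats, 128),
            nn.ReLU(),
            nn.Linear(128, n_layouts),  # one logit per candidate layout
        )

    def forward(self, theme_id, summary_vec, personal_vec):
        # personal_vec encodes time of day, day of week, gender,
        # relationship, etc., as described above.
        x = torch.cat([self.theme_emb(theme_id), summary_vec, personal_vec], dim=-1)
        return self.mlp(x)

# At inference, the layout with the highest softmax probability is used; if
# no layout scores confidently, the generic universal layout is the fallback.
```

An analogous head could score the set of candidate background templates.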
Further, the processing unit [104] is configured to determine at least one set of background templates based on at least one theme. At least one suitable background template for at least one sticker is then extracted from the said set of background templates. The said extracted background template specifies anchor points to which props for the theme are added. Further, in an example, at least one of a colour or a texture, chosen using a machine learning module, is added to the background template. The said background template and the layout are determined on the basis of at least one derived theme, from a corpus of one or more components of pre-stored stickers.
Further, the processing unit [104] is also configured to determine at least one of a character, at least one relevant phrase, and a combination thereof from the said corpus of one or more components of pre-stored stickers based on the received input data. The processing unit [104] further assembles at least one of the said determined components, such as said layout, background template, character and phrase, and a combination thereof, to generate a sticker in real time, wherein the said sticker is assembled by hooking the anchor points of each said determined component to each other. Each said determined component is appropriately tagged to facilitate the sticker assembly. In an example, the one or more assembled stickers are at least one of text stickers, character stickers and a combination thereof. The selection of components for sticker generation is done on the basis of the theme and summary extracted from the user input. For instance, if the theme is sports, then the props, background, phrase font and colour are all chosen to match, along with considering the type of sport. The sticker can also be generated by ML models trained to produce relevant stickers suited to personal taste; here, generative models such as GANs (Generative Adversarial Networks) can be used. A sketch of this anchor-based assembly is given below.
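To make the anchor-hooking step concrete, here is a minimal, hypothetical sketch that places each selected part so that its hook point coincides with the matching named anchor on the layout; it reuses the StickerComponent sketch above, and the normalized-coordinate convention is an assumption:

```python
def assemble_sticker(layout, background, parts, canvas_w=512, canvas_h=512):
    """Return (component, x, y) placements for a sticker canvas.

    Each part is hooked onto the layout by aligning an anchor name the two
    components share; anchors are normalized (0..1) and scaled to pixels.
    """
    placements = [(background, 0, 0)]  # background fills the whole canvas
    for part in parts:  # e.g. character, phrase, props
        shared = set(layout.anchor_points) & set(part.anchor_points)
        if not shared:
            continue  # no common anchor name: this part cannot be hooked on
        name = shared.pop()
        lx, ly = layout.anchor_points[name]  # where the layout expects the part
        px, py = part.anchor_points[name]    # the part's own hook point
        # Offset the part so its hook point lands on the layout's anchor.
        x = int((lx - px) * canvas_w)
        y = int((ly - py) * canvas_h)
        placements.append((part, x, y))
    return placements
```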
The system [100] also comprises a graphical user interface configured to display the assembled one or more stickers. Further, in an example, the system [100] may present the generated sticker in a separate view in a chat window, as a pop-up message on the social networking application, or as any other input icon in the input area of the graphical user interface of the social networking application. The generated sticker may be inserted into any chat by the user, or used in any other form of input in the social networking system.
The present disclosure comprises at least a processor, at least one memory and one or more modules configured to perform one or more functions in accordance with the invention. In an embodiment, the system may reside in a server, while in another embodiment, the system may reside in the personal electronic device. In yet another embodiment, the system may reside partly in the user's personal electronic device and partly at the server. The system of the present invention is configured to communicate with the personal electronic device via one or more wired or wireless networks.
FIG. 2 illustrates an exemplary method flow diagram [200] depicting a method for automated sticker assembly. The method begins at step [202], when the user inputs data to communicate with another user via the social networking application. In an example, the system [100] may be configured to fetch and process real-time input data to provide the user with at least one automatically generated sticker.

At step [204], user input is received at the social networking application, based on which the input data is derived in real time by the system [100] at the user device. The user input may be a text input and/or a voice input. In case a voice-based input is received at the user device, the invention encompasses converting the voice to text and using the text to derive further information. In an example, the received input data may comprise at least one of information related to hyper-local geographic data, emotion, sentiment, mood, discussion, thought or an action, among other actions. In another example, said input data may include, but is not limited to, a phrase, a language and/or a theme. For example, during the IPL season a good-morning sticker will have IPL-themed background/foreground components. Further, the said input data is received/extracted by the system [100] automatically using the machine learning modules. In an example, the input data further comprises at least one of information related to realism, a treatment to be performed on a sticker, a font style to be used on the sticker and other similar parameters.
Next, the method proceeds to step [206], which includes deriving at least one of a phrase, language and a theme, based on the received input data such as emotions or types of expressions, using a machine learning module. The method further leads to step [208], wherein the method comprises determining at least one layout and at least one background template of the one or more stickers based on at least the derived theme. The said layout and background template are determined from a corpus of one or more components of pre-stored stickers. In an example, a colour and texture are also added to the said background template on the basis of the said derived theme. In an example, the layout is selected on the basis of the derived theme; in another example, if the system [100] is not able to determine the theme, the system [100] uses a generic layout which is universal and is applicable to all stickers.

The method further leads to step [210], which includes determining at least one of a character, a relevant phrase and a combination thereof, of the one or more stickers, based on the received input data, from a corpus of one or more components of pre-stored stickers. The invention also encompasses determining a text layout, a font style, a font colour, a shadow strength and a sticker border for the one or more stickers based on the derived theme.

The method further, at step [212], comprises assembling, in real time, the one or more stickers using the determined components, such as said layout, said background template and said at least one of the character, the relevant phrase and a combination thereof, wherein the said sticker is assembled by hooking the anchor points of each said determined component to each other. Each said determined component is appropriately tagged to facilitate the sticker assembly. The determined stickers are further displayed by the system [100] by creating a separate view in a chat window, as a pop-up message on the social networking application, or as any other input icon in the input area of the graphical user interface of the social networking application. The generated sticker may be inserted into any chat by the user, or used in any other form of input in the social networking system.

The method then terminates at step [214]. A compact, illustrative sketch of the control flow of steps [204] to [212] is given below.
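Purely as an illustrative sketch of that control flow (the ml object, its speech_to_text and derive_input_data methods, and the user_input fields are hypothetical stand-ins for the machine learning modules described above; find and assemble_sticker are the sketches shown earlier):

```python
def generate_sticker(user_input, corpus, ml):
    """End-to-end sketch of method [200]: derive, determine, assemble."""
    # Step [204]: derive input data in real time; voice is converted to text.
    text = ml.speech_to_text(user_input) if user_input.is_voice else user_input.text

    # Step [206]: derive phrase / language / theme via the ML module.
    data = ml.derive_input_data(text)  # e.g. {"phrase": ..., "theme": ...}
    theme = data.get("theme")

    # Step [208]: layout and background template from the corpus, by theme,
    # falling back to the generic universal layout when nothing matches.
    layouts = corpus.find("layout", theme=theme) or corpus.find("layout", theme="generic")
    layout = layouts[0]
    background = corpus.find("background", theme=theme)[0]

    # Step [210]: character and/or relevant phrase components, by input data.
    parts = corpus.find("character", theme=theme) + corpus.find("phrase", theme=theme)

    # Step [212]: hook the anchor points together into the final sticker.
    return assemble_sticker(layout, background, parts)
```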
In an example with respect to the present invention, say two users are communicating with each other via a social messaging application about a century scored by a specific cricket player. The present system [100] extracts the input data of the users using machine learning modules and, on the basis of said input data, assembles in real time at least one sticker comprising one of: text related to the said century by the said cricket player; a character of the said cricket player indicating a century; and a character of the said cricket player along with the score in text format indicating the century made by the said cricket player. Further, the system [100], in accordance with the received input data, determines the other parameters related to the stickers, including but not limited to the background template, theme, format of the text, colour of the sticker, features of the generated characters, etc. In the current example, the said other parameters may comprise a background indicating a cricket stadium, a batting pitch, a tournament, etc.

Further, in another example in accordance with the current invention, say two users are communicating with each other regarding shopping and the said user is female. The present system [100] retrieves the input data using machine learning modules and, on the basis of said retrieved input data, generates at least one sticker comprising one of: at least one text related to shopping; at least one character of a female indicating shopping; and at least one character of a female along with text indicating shopping. The system [100], using machine learning techniques, assembles a sticker in real time by determining, in real time, at least one of a head and a body of at least one female character, along with at least one shopping bag, at least one text related to shopping and at least one background template indicating a shopping complex.
Referring to FIG. 3, a flowchart for a method performed by the system of the present invention for assembling a text sticker is disclosed. The said method begins at step [302]. Further, at step [304], the system receives a user input at the social networking application of the user device and derives input data based on said user input, wherein the input data may include, but is not limited to, a phrase, a language and/or a theme. The user input may be a text input and/or a voice input. In case a voice-based input is received at the user device, the invention encompasses converting the voice to text and using the text to derive further information. In one example, the input data may be extracted by the system automatically, using machine learning modules, from the text entered by a user. In one example, by using the machine learning modules, the system determines a theme to be used, for example based on emotions or types of expressions.

At step [306], a layout to be used for the particular theme is determined using the machine learning modules. Once the theme is selected by the system [100], the system [100] determines a suitable layout for the determined theme. In an example, where the system [100] is not able to determine a specific layout in accordance with the theme, the system uses a generic layout which is universal and is applicable to all stickers.
At step [308], the system [100] determines a background template to be used in the sticker. Firstly, a set of background templates applicable to the theme determined by the machine learning modules is extracted. From the extracted set, a suitable background template is selected. The selected background template specifies anchor points to which props for the theme are added. Thereafter, the system [100] adds a colour scheme to the background template based on the theme determined by the machine learning modules. In an example, a texture is added to the background template.

At step [310], a phrase is added to the sticker which is to be created. The system [100] automatically selects a text layout based on the length of the input data. Further, the system [100] automatically selects a font style based on the theme determined by the machine learning modules. The system [100] anchors the phrase to the selected text layout. In an example, the system [100] uses the selected font style and text colours in line with the determined theme, and adds a shadow or a sticker border. A sketch of such length- and theme-driven text styling is given below.
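A minimal sketch of that styling step, assuming hypothetical layout names, length thresholds and a theme-to-font table (none of which appear in the specification):

```python
def style_phrase(phrase, theme):
    """Pick a text layout by phrase length and font styling by theme."""
    # Text layout chosen from the phrase length (thresholds are assumed).
    if len(phrase) <= 12:
        text_layout = "single-line-large"
    elif len(phrase) <= 40:
        text_layout = "two-line-medium"
    else:
        text_layout = "multi-line-small"

    # Font style and colour in line with the determined theme.
    theme_fonts = {
        "sports":   {"font": "Impact",   "color": "#1B5E20"},
        "birthday": {"font": "Pacifico", "color": "#D81B60"},
    }
    style = theme_fonts.get(theme, {"font": "Roboto", "color": "#212121"})

    return {
        "text_layout": text_layout,
        "font": style["font"],
        "color": style["color"],
        "shadow": True,               # step [310]: add a shadow, or
        "border": len(phrase) > 40,   # a sticker border for long phrases
    }
```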
At step [312], the system [100] assembles the sticker based on the above components and outputs the generated text sticker. The system [100] may present the generated stickers by creating a separate view in a chat window, as a pop-up message on the social networking application, or as any other input icon in the input area of the graphical user interface of the social networking application. The generated sticker may be inserted into any chat by the user, or used in any other form of input in the social networking system. The method then terminates at step [314].
Referring to FIG. 4, a flowchart for a method performed by the system [100] of the present invention for assembling a character sticker is disclosed. The method begins at step [402], and at step [404] the system [100] receives a user input at the social networking application of the user device and derives input data based on said user input, similar to what is described above for step [304]. The user input may be a text input and/or a voice input. In case a voice-based input is received at the user device, the invention encompasses converting the voice to text and using the text to derive further information. Additionally, the input data received at step [404] includes gender information provided by the user. Further, the input data includes an indication of the type of sticker required by the user; in this case, the indication field points to a character sticker. At step [406], the system [100] selects a layout similar to what is described above for step [306]. Additionally, the system [100] selects a layout which can accommodate a character.
At step [408], the system [100] determines a character which is to be included in the character sticker. The determination of the at least one character comprises selecting a body of said at least one character based on said determined layout and the input data, and also comprises selecting a face and at least one face part based on the received input data, wherein the said input data includes at least one of gender information, an expression tag and current hyper-local geographic data. The system [100] selects a body for the character in accordance with the determined layout and based on the realism or treatment received in the input data. The system selects the body based on the gender and expression tags specified in the input data. The system [100] anchors the determined body to the selected layout. In an example, the system [100] may further apply a skin colour to the determined body.

At step [410], the system [100] selects a face for the body determined at step [408]. The system [100] selects the face to match the orientation, gender and style of the selected body. In a preferred example, the system [100] selects the face which matches the expression tags up to a predetermined level. The system [100] anchors the face to the body and may further apply a skin colour to the face. In an example, the system [100] may further apply colour to the hair portion of the selected body.

At step [412], the system [100] selects face parts for the face selected at step [410]. The system [100] selects the eyes, nose and mouth portions of the face to match the orientation, gender and style of the selected face. The eyes, nose and mouth are selected so that they match the expression tags. The system [100] further anchors the aforementioned selected parts to the face selected at step [410]. At step [414], the system [100] determines background templates and props similar to what is described for step [308]. Further, at step [416], the system [100] outputs the character sticker similar to what is described above for step [312]. The method then terminates at step [418]. A sketch of this tag-matched character composition is given below.
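For illustration only, steps [408] to [412] reduce to repeatedly filtering the corpus by tags and hooking the winning part onto the one chosen before it; the tag keys and the find helper are the assumptions introduced in the earlier sketches:

```python
def compose_character(corpus, layout, gender, expression):
    """Sketch of steps [408]-[412]: body, then face, then face parts."""
    # Step [408]: a body matching the layout plus gender/expression tags.
    body = corpus.find("body", gender=gender, expression=expression)[0]

    # Step [410]: a face matching the body's orientation, gender and style.
    face = corpus.find(
        "face",
        gender=gender,
        orientation=body.tags["orientation"],
        style=body.tags["style"],
    )[0]

    # Step [412]: eyes, nose and mouth matching the face and expression tags.
    face_parts = [
        corpus.find(kind,
                    expression=expression,
                    orientation=face.tags["orientation"])[0]
        for kind in ("eyes", "nose", "mouth")
    ]

    # The chosen parts are then hooked together via their anchor points,
    # e.g. with the assemble_sticker() sketch shown earlier.
    return [body, face, *face_parts]
```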
Referring to FIG. 5, a flowchart for a method performed by the system [100] of the present invention for assembling a sticker comprising both a character and text is disclosed. In steps [502-520], the system [100] performs steps similar to those performed in the flowcharts illustrated in FIG. 3 and FIG. 4. For the sake of brevity, all the steps are not repeated herein. However, it will be obvious to a person skilled in the art that the steps which are common to the flowcharts of FIG. 3 and FIG. 4 will be executed by the system only once when performing steps [502-520].
The invention also encompasses multi-lingual sticker generation, identifying the languages used in the voice and text user input received at the user device, and using hyper-local, culture-based personalization to make the sticker more relevant to the community.

The invention also encompasses adding background components that reflect the current hyper-local geographic theme; for example, during the IPL season a good-morning sticker will have IPL-themed background/foreground components added. The invention enables generation of multiple stickers by considering the current hyper-local geographic and cultural themes to provide enough variety to the user. Further, the invention encompasses personalization of the generated sticker on the basis of context, including weather, time of the day, day of the week, chat context and the relationship between the people involved in the chat, in addition to hyper-local cultural and hyper-local geographic embellishment of the generated sticker.

In case a Hikemoji is to be included in the sticker, the decision to include the Hikemoji, and whether to also include the Hikemoji of friend(s) in the sticker, is determined by ML and heuristics for maximal sticker performance.
Further, generation of animated stickers is also encompassed by the present disclosure, with the animation assignment to the components determined by an ML module whose objective is to highlight the dominant items in the text/audio input, as well as dominant hyper-local cultural/geographic themes. The style of animation is intended to be concise and soft, while ensuring that it is pleasing to the user. The style and duration are learnt using ML on the basis of user feedback; these can change over time, and again ML helps to adapt the sticker generation accordingly.
Although the present invention has been discussed with reference to the assembly of a single sticker, it shall be appreciated by those skilled in the art that the invention encompasses the assembling of multiple stickers in accordance with the invention. While the present invention has been described with reference to certain preferred embodiments and examples thereof, other embodiments, equivalents and modifications are possible and are also encompassed by the scope of the present disclosure.

We Claim:
1. A method for automated assembly of one or more stickers at a social networking application on a user device, the method comprising:
- Receiving a user input at the social networking application on the user device;
- Deriving, in real-time, an input data at the user device based on the user input;
- Deriving at least one of a phrase, language and a theme based on the received input data using a machine learning module;
- Determining, from a corpus of one or more components of pre-stored stickers, a layout and a background template of the one or more stickers based on at least the derived theme;
- Determining, from said corpus, at least one of a character, a relevant phrase and a combination thereof, of the one or more stickers based on the received input data; and
- Assembling, in real-time, the one or more stickers using the determined layout, said background template and said at least one of the character, the relevant phrase and a combination thereof.
2. The method as claimed in claim 1 wherein determining the background template of the one or more stickers further includes determining a background colour and a texture of the one or more stickers.

3. The method as claimed in claim 1 further comprising displaying at the user device the assembled one or more stickers at a graphical user interface of the social networking application.

4. The method as claimed in claim 1 further comprising determining a text layout, a font style, a font color, a shadow strength and a sticker border for the one or more stickers based on the derived theme.

5. The method as claimed in claim 1 wherein determining the at least one character comprises:
- selecting a body of said at least one character based on said determined layout and input data; and
- selecting a face and at least one face part based on the received input data.
6. The method as claimed in claim 5 wherein the input data includes at least one of a gender information, an expression tag and current hyper-local geographic data.

7. The method as claimed in claim 1 wherein the user input is one of a text input and a voice input.

8. The method as claimed in claim 7 wherein, in an event the user input is the voice input, the method further comprises converting said voice input into text.

9. The method as claimed in claim 1 further comprising personalising sticker assembly based on at least one of a weather, a time of the day, a day of the week, a chat context and a relationship between users.

10. The method as claimed in claim 1 wherein the assembled one or more stickers is one of an animated sticker and a non-animated sticker.
11. A system [100] for automated assembly of one or more stickers at a social networking application on a user device, the system [100] comprising:
- a corpus of one or more components of pre-stored stickers, wherein each component is associated with one or more anchor points and one or more tags, and said one or more components comprises one or more of pre-defined layouts, pre-defined background templates and pre-defined characters;
- an input unit [102], configured to receive, in real-time, user input at the social networking application on the user device; and
- a processing unit [104], coupled to said corpus and said input unit [102], wherein the processing unit [104] is configured to
derive at least one input data based on the user input,
derive at least one of a phrase, language and a theme based on the received input data using a machine learning module,
determine, from said corpus, a layout and a background template of the one or more stickers based on at least the derived theme,
determine, from said corpus, at least one of a character, a relevant phrase and a combination thereof, of the one or more stickers based on the received input data, and
assemble, in real-time, the one or more stickers using the determined layout, said background template and said at least one of the character, the relevant phrase and a combination thereof.
12. The system [100] as claimed in claim 11 wherein the assembled one or more stickers are at least one of a text stickers, character stickers and a combination thereof.

13. The system [100] as claimed in claim 11 further comprising a graphical user interface configured to display the assembled one or more stickers.

14. The system as claimed in claim 11 wherein the user input is one of a text input and a voice input.

15. The system as claimed in claim 11 wherein the processing unit [104] is further configured to personalise sticker assembly based on at least one of a weather, a time of the day, a day of the week, a chat context and a relationship between users.

16. The system as claimed in claim 11 wherein the assembled one or more stickers is one of an animated sticker and a non-animated sticker.

17. The system as claimed in claim 11 wherein the input data includes at least one of a gender information, an expression tag and current hyper-local geographic data.

Documents

Application Documents

# Name Date
1 201911001913-IntimationOfGrant13-01-2025.pdf 2025-01-13
2 201911001913-PatentCertificate13-01-2025.pdf 2025-01-13
3 201911001913-Written submissions and relevant documents [31-10-2024(online)].pdf 2024-10-31
4 201911001913-FORM-26 [14-10-2024(online)].pdf 2024-10-14
5 201911001913-Correspondence to notify the Controller [10-10-2024(online)].pdf 2024-10-10
6 201911001913-US(14)-HearingNotice-(HearingDate-17-10-2024).pdf 2024-09-23
7 201911001913-FER_SER_REPLY [18-01-2024(online)].pdf 2024-01-18
8 201911001913-FER.pdf 2023-08-04
9 201911001913-FORM 3 [19-07-2023(online)].pdf 2023-07-19
10 201911001913-FORM 18 [20-12-2022(online)].pdf 2022-12-20
11 201911001913-FORM 3 [19-01-2021(online)].pdf 2021-01-19
12 201911001913-FORM 3 [07-02-2020(online)].pdf 2020-02-07
13 201911001913-REQUEST FOR CERTIFIED COPY [24-01-2020(online)].pdf 2020-01-24
14 201911001913-COMPLETE SPECIFICATION [16-01-2020(online)].pdf 2020-01-16
15 201911001913-DRAWING [16-01-2020(online)].pdf 2020-01-16
16 201911001913-ENDORSEMENT BY INVENTORS [16-01-2020(online)].pdf 2020-01-16
17 201911001913-Correspondence-180719.pdf 2019-07-26
18 201911001913-OTHERS-180719.pdf 2019-07-26
19 201911001913-Proof of Right (MANDATORY) [15-07-2019(online)].pdf 2019-07-15
20 201911001913-Correspondence-120419.pdf 2019-04-23
21 201911001913-Power of Attorney-120419.pdf 2019-04-23
22 201911001913-FORM-26 [09-04-2019(online)].pdf 2019-04-09
23 abstract.jpg 2019-02-28
24 201911001913-DRAWINGS [16-01-2019(online)].pdf 2019-01-16
25 201911001913-FORM 1 [16-01-2019(online)].pdf 2019-01-16
26 201911001913-PROVISIONAL SPECIFICATION [16-01-2019(online)].pdf 2019-01-16
27 201911001913-STATEMENT OF UNDERTAKING (FORM 3) [16-01-2019(online)].pdf 2019-01-16

Search Strategy

1 201911001913E_03-08-2023.pdf

ERegister / Renewals

3rd: 17 Mar 2025 (From 16/01/2021 - To 16/01/2022)
4th: 17 Mar 2025 (From 16/01/2022 - To 16/01/2023)
5th: 17 Mar 2025 (From 16/01/2023 - To 16/01/2024)
6th: 17 Mar 2025 (From 16/01/2024 - To 16/01/2025)
7th: 17 Mar 2025 (From 16/01/2025 - To 16/01/2026)