
Methods And Systems For Generating Emojis For Presenting A Conversation In A Narrative Format

Abstract: Methods and systems for generating emojis for presenting a story in a narrative format. The story includes emojis depicting the context of a conversation. Metadata of the conversation, comprising texts and emojis, is extracted to determine emotions, actions, things, and intents expressed through the texts and emojis. Emojis in an emoji library are mapped to emotions. Emojis expressing positive, neutral, or negative emotions, with respect to the emotions expressed by the texts and emojis, are fetched from the emoji library. The fetched emojis are displayed for selection. A displayed emoji is selected as the response emoji, and a resultant emoji is generated based on the emojis in the conversation and the response emoji. The response emoji and the resultant emoji are appended to the emojis in the conversation to generate the story, which is sent as a reply. FIG. 5a


Patent Information

Application #
202041025971
Filing Date
19 June 2020
Publication Number
52/2021
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Granted
Parent Application
Patent Number
Legal Status
Grant Date
2025-04-30
Renewal Date

Applicants

1. SAMSUNG ELECTRONICS CO., LTD
129, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-742, Republic of Korea

Inventors

1. Monika
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037, Karnataka, India
2. Poshith Udayashankar
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037, Karnataka, India
3. Deepak Nathan
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037, Karnataka, India

Specification

Claims:
STATEMENT OF CLAIMS
I/We claim:
1. A method for presenting a story, the method comprising:
extracting, by a device (300), metadata associated with a conversation;
displaying, by the device (300), a plurality of emojis, based on the extracted metadata, for enabling a user of the device (300) to select a response emoji from amongst the displayed plurality of emojis;
determining, by the device (300), a resultant emoji based on at least one of the response emoji and at least one emoji present in the conversation; and
displaying, by the device (300), a filmstrip comprising at least one of the response emoji and the resultant emoji.
2. The method, as claimed in claim 1, wherein the metadata associated with the conversation comprises at least one of emotion expressed in the conversation, action performed by at least one emoji present in the conversation, intent expressed in the conversation, things indicated in at least one text present in the conversation, things indicated in at least one media present in the conversation, and things depicted in the at least one emoji present in the conversation.
3. The method, as claimed in claim 1, wherein each of the displayed plurality of emojis is mapped to one of a positive emotion, a neutral emotion, and a negative emotion, with respect to at least one emotion expressed in the conversation.
4. The method, as claimed in claim 3, wherein the plurality of displayed emojis is fetched from one of an emoji library in the device (300) and an emoji library in an external entity.
5. The method, as claimed in claim 1, wherein the filmstrip depicts the at least one emoji received from the user, the response emoji, and the resultant emoji, in a sequential format.
6. The method, as claimed in claim 1, wherein the method further comprises:
displaying, by the device (300), a first text message and an emoji based on the extracted metadata, wherein the first text message is selected from amongst a plurality of text messages in the conversation and the emoji is generated based on a plurality of emojis in the conversation, wherein the extraction of metadata involves selection of the first text message, wherein the first text message causes the plurality of users to send the plurality of text messages and the plurality of emojis; and
displaying, by the device (300), a second text message in place of the first text message at a current time instant, wherein the second text message causes the plurality of users to send the plurality of text messages and the plurality of emojis at the current time instant.
7. The method, as claimed in claim 6, wherein the extraction of metadata involves generating the displayed emoji based on one of an amount of elements contributing to an ambience effect and an intensity of the elements contributing to the ambience effect, wherein the amount of elements and the intensity of the elements is based on a number of users that have sent the plurality of text messages and the plurality of emojis.
8. A device (300) for presenting a story, the device (300) configured to:
extract metadata associated with a conversation;
display a plurality of emojis, based on the extracted metadata, for enabling a user of the device (300) to select a response emoji from amongst the displayed plurality of emojis;
determine a resultant emoji based on at least one of the response emoji and at least one emoji present in the conversation; and
display a filmstrip comprising at least one of the response emoji and the resultant emoji.
9. The device (300), as claimed in claim 8, wherein the metadata associated with the conversation comprises at least one of emotion expressed in the conversation, action performed by at least one emoji present in the conversation, intent expressed in the conversation, things indicated in at least one text present in the conversation, things indicated in at least one media present in the conversation, and things depicted in the at least one emoji present in the conversation.
10. The device (300), as claimed in claim 8, wherein each of the displayed plurality of emojis is mapped to one of a positive emotion, a neutral emotion, and a negative emotion, with respect to at least one emotion expressed in the conversation.
11. The device (300), as claimed in claim 10, wherein the plurality of displayed emojis is fetched from one of an emoji library in the device (300) and an emoji library in an external entity.
12. The device (300), as claimed in claim 8, wherein the filmstrip depicts the at least one emoji received from the user, the response emoji, and the resultant emoji, in a sequential format.
13. The device (300), as claimed in claim 8, wherein the device (300) is further configured to:
display a first text message and an emoji based on the extracted metadata, wherein the first text message is selected from amongst a plurality of text messages in the conversation and the emoji is generated based on a plurality of emojis in the conversation, wherein the extraction of metadata involves selection of the first text message, wherein the first text message causes the plurality of users to send the plurality of text messages and the plurality of emojis; and
display a second text message in place of the first text message at a current time instant, wherein the second text message causes the plurality of users to send the plurality of text messages and the plurality of emojis at the current time instant.
14. The device (300), as claimed in claim 13, wherein the extraction of metadata involves generating the displayed emoji based on one of an amount of elements contributing to an ambience effect and an intensity of the elements contributing to the ambience effect, wherein the amount of elements and the intensity of the elements is based on a number of users that have sent the plurality of text messages and the plurality of emojis.

Description:
TECHNICAL FIELD
[001] Embodiments herein relate to generation of emojis, and more particularly to methods and systems for generating emojis for enabling stories to be presented in a narrative format.
BACKGROUND
[002] With the advent of 5th Generation (5G) communication and Rich Communication Services (RCS), native messaging applications and services, along with messenger applications, are introducing new emojis, stickers, animojis (animated emojis), face emojis, and so on, for gaining user attention and increasing overall usage. With the increasing number of emojis introduced in native and messenger applications, it may not be easy for all users to familiarize themselves with all of them and utilize a particular emoji aptly, based on a context in a conversation or story. The user may experience a cognitive load if the user is required to figure out the meaning of a particular emoji in a given context and manually fetch emojis that are apt to be included in replies, in order to ensure that there is no cognitive disconnection.
[003] According to current User Interface (UI) trends, users are likely to expect contents (such as stories, emojis, GIFs, audio, video) in a conversation to automatically scroll/play, particularly when it comes to consuming crisp visual media, as it requires less effort to consume the contents and establishes a better context. However, in messaging applications, users who wish to view or play such contents need to manually scroll through them and individually select them. Currently, users are likely to share screenshots of conversations or tweets/mails, in order to facilitate viewers to easily and quickly familiarize themselves with a topic that has been shared. This also allows the viewers to comprehend the context of the topic without extensive scrolling of the contents.
[004] Currently, a user can send emoji(s), meme(s), image(s), GIF(s), and so on, in a reply, in response to a received emoji or a received message. Once the exchange is completed (the reply is sent), the comprehension of the emoji(s), image(s), GIF(s), meme(s), and so on, is left to the imagination of the individual user. FIG. 1 depicts an example scenario, wherein users engaged in a conversation experience cognitive disconnection due to the inability of a user to comprehend the intent of another user using a received emoji. As depicted in FIG. 1, Sam receives a text from Kim: “Are you ready yet? I am waiting!” On receiving the text, Sam can send an emoji, for expressing his emotions, and a text for informing her of the actual situation. On receiving the emoji and the text, if Kim decides to reply with an emoji, she needs to manually search through the emoji tray to select an apt emoji, which can convey her emotions to Sam. The emoji selected by Kim may not satisfactorily convey her intention. However, she does not have a choice, as she needs to select an emoji from within the emoji tray. When Sam receives the emoji, Sam may not be able to comprehend Kim’s intent from the emoji she sent.
[005] Currently, if a user who is part of a social media group joins a conversation at a later point of time, the user needs to scroll and read all the previous messages that have been exchanged, if the user intends to understand the context of the group conversation in order to join it. FIG. 2 depicts an example scenario, wherein a user struggles to understand the context of a group conversation due to joining the conversation late. Consider that the group comprises four users, viz., Dave, Jessica, Noah, and Jenny. As depicted in FIG. 2, Dave had posted a text “Guys! I am getting married”. The text is received by the other users, and they can respond by posting emojis and other congratulatory messages as replies. Consider that Jenny joins the conversation late and is overwhelmed by the messages and emojis that have been exchanged earlier. She may not be able to trace the initial message that had triggered the responses without extensive, attentive scrolling. Thus, Jenny is not able to figure out the context (the cause of celebration) of the group conversation.
OBJECTS
[006] The principal object of the embodiments herein is to disclose methods and systems for generating a filmstrip presenting the context of a conversation in a narrative format, wherein the filmstrip comprises at least one emoji, which is generated based on a context derived from text, emojis, graphical contents, media, and so on, in the conversation, and metadata associated with the text, emojis, graphical contents, media, and so on.
[007] Another object of the embodiments herein is to extract metadata associated with the contents in the conversation, wherein the metadata comprises at least one of emotions expressed through the text, emojis, graphical contents, media, and so on; characters in the emojis, graphical contents and media; actions performed by the emojis and graphical contents, intents expressed through the text, emojis, graphical contents, media, and so on; and things indicated or depicted in the text, emojis, graphical contents, media, and so on.
[008] Another object of the embodiments herein is to map emojis, in an emoji library in a user device or a cloud, to particular emotions; wherein the emotions mapped to the emojis in the emoji library can be a positive emotion, a neutral emotion, or a negative emotion, with respect to the emotions expressed through the text, emojis, graphical contents, media, and so on.
[009] Another object of the embodiments herein is to fetch at least one emoji from the emoji library or create at least one emoji in the emoji library, wherein the emotions associated with the fetched emojis and the created emojis can be categorized as at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to the emotions expressed through the text, emojis, graphical contents, media, and so on.
[0010] Another object of the embodiments herein is to suggest one or more fetched/generated emojis to allow the user to select a response emoji.
[0011] Another object of the embodiments herein is to generate a resultant emoji based on the received emoji and the selected response emoji, wherein the resultant emoji depicts either a reaction on the received emoji, an action performed on the received emoji, or an action performed on an emoji created based on a context derived from the text.
[0012] Another object of the embodiments herein is to generate the filmstrip of the received emoji, the response emoji, and the resultant emoji, and send the filmstrip as a reply in the conversation.
[0013] Another object of the embodiments herein is to allow a user to become familiar with the context of a group conversation, if the user joins the group conversation at a later point in time; wherein the context is derived from a message, which had triggered participants of the group to send messages and react by sending emojis, and from a plurality of emojis sent by the participants of the group.
SUMMARY
[0014] Accordingly, the embodiments provide methods and systems for generating a filmstrip, which comprises emojis, for presenting the context of a conversation in a narrative format. The filmstrip comprises at least one received emoji, a response emoji, and a resultant emoji. Users can append further emojis to the filmstrip, which leads to expansion of the filmstrip. The embodiments include receiving a conversation, which comprises texts, emojis, graphical contents, media, and so on. The embodiments include determining whether metadata associated with the texts, emojis, graphical contents, media, and so on, is available. If the metadata is not available, the embodiments include extracting the associated metadata. The embodiments include deriving a context of the conversation. The extracted metadata can include at least one of: emotion expressed through the texts, emojis, graphical contents, media, and so on; action performed by the emojis and graphical contents; intents expressed through the texts, emojis, graphical contents, media, and so on; things indicated/depicted in the text, emoji, graphical contents, media, and so on; characters in the emojis; and so on. The embodiments include sending the contents of the conversation and the extracted metadata to a cloud.
[0015] The embodiments include mapping emojis, in an emoji library in a user device and an emoji library in the cloud, to different emotions. The embodiments include fetching emojis from the emoji library in the user device or the emoji library in the cloud. The emojis, fetched from the emoji library in the user device and the emoji library in the cloud, are mapped with emotions, which can be categorized as a positive emotion, a neutral emotion, or a negative emotion, with respect to at least one emotion expressed through the text, emoji, graphical contents, media, and so on. In case emojis mapped to emotions categorized as a positive emotion, a neutral emotion, or a negative emotion, with respect to the emotion associated with the text, emoji, graphical contents, media, and so on, are not available in the user device and the cloud, the embodiments include creating emojis. The created emojis can be mapped to emotions, which can be categorized as a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed through the text, emoji, graphical contents, media, and so on. The created emojis are stored in the emoji library in the user device and sent to the cloud, wherein the cloud can store the created emojis in the emoji library in the cloud.
[0016] The embodiments include displaying a User Interface (UI), comprising a plurality of fetched/created emojis. The UI provides an option to the user to select an emoji amongst the plurality of displayed emojis. The emoji selected by the user is the response emoji. When the user selects the response emoji, a corresponding resultant emoji is generated or retrieved from the user device or the cloud. The resultant emoji is displayed on the UI. Each of the resultant emojis depicts a reaction on the received emoji, or an action performed on the received emoji or a created emoji. The embodiments include generating the filmstrip of the received emoji/graphical content, the response emoji, and the resultant emoji. The embodiments include sending the filmstrip as a reply to the sender.
[0017] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0018] Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0019] FIG. 1 depicts an example scenario, wherein users engaged in a conversation experience cognitive disconnection due to inability of a user to comprehend the intent of another user using a received emoji;
[0020] FIG. 2 depicts an example scenario, wherein a user struggles to understand the context of a group conversation due to late joining;
[0021] FIG. 3 depicts various units of a device configured to generate a story for presenting the context of a conversation in a narrative format, according to embodiments as disclosed herein;
[0022] FIG. 4 is a flowchart depicting a method for generating a story for presenting the context of a conversation in a narrative format, according to embodiments as disclosed herein;
[0023] FIGS. 5a-5c depict example scenarios, wherein emojis and stories are generated to present the context of a conversation in a narrative format, according to embodiments as disclosed herein; and
[0024] FIG. 6 is an example scenario depicting the generation of a summary of a group conversation to facilitate comprehension of the context of the group conversation, according to embodiments as disclosed herein.

DETAILED DESCRIPTION
[0025] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0026] Embodiments herein disclose methods and systems for generating a filmstrip comprising at least one emoji, for presenting the context of an emoji or a conversation in a narrative format. The embodiments include analyzing a received conversation for extracting metadata from the received conversation. The conversation can include text, emojis, graphical contents, media, and so on. The extracted metadata comprises emotion(s) expressed through the conversation, action(s) performed by the emojis and/or graphical contents, intent(s) expressed through the emojis and/or graphical contents, things indicated in the text, emojis, graphical contents, media, and so on. The embodiments include deriving a context of the conversation based on the extracted metadata.
[0027] The embodiments include mapping emojis, from an emoji library in a user device or a cloud, to at least one emotion. The emotion mapped to the emojis in the emoji library can be a positive emotion, a neutral emotion, or a negative emotion, with respect to at least one emotion expressed through the text, emojis, graphical contents, media, and so on. The positivity, negativity, or neutrality is ascribed based on at least one emotion expressed through the text, emojis, graphical contents, media, and so on. The at least one emotion is a part of the metadata associated with the conversation. The embodiments include fetching a plurality of emojis from the emoji library in the user device, if the user device includes emojis that have been mapped to emotions categorized as positive, neutral, or negative, with respect to the emotion(s) expressed through the text, emojis, graphical contents, media, and so on. The embodiments include creating a plurality of emojis if the user device does not include emojis mapped to emotions categorized as positive, neutral, or negative, with respect to the at least one emotion expressed through the text, emojis, graphical contents, media, and so on. Thereafter, the embodiments fetch the created plurality of emojis.
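As a concrete illustration of this polarity-based mapping and fetching, consider the minimal sketch below. The emoji identifiers, the emotion labels, and the POLARITY table are hypothetical placeholders invented for illustration; the disclosure does not prescribe any particular data structures.

```python
# Minimal sketch, assuming a toy emoji library and polarity table; the
# disclosed system would derive these from its libraries and metadata.

# Hypothetical emoji library: emoji identifier -> emotion it conveys.
EMOJI_LIBRARY = {
    "smiling_with_balloons": "joy",
    "pouring_water": "annoyance",
    "thumbs_up": "approval",
    "sleeping": "laziness",
}

# Hypothetical polarity table: (candidate emotion, conversation emotion)
# -> positive / neutral / negative.
POLARITY = {
    ("joy", "joy"): "positive",
    ("approval", "joy"): "positive",
    ("laziness", "joy"): "neutral",
    ("annoyance", "joy"): "negative",
}

def categorize(candidate: str, reference: str) -> str:
    """Polarity of a candidate emotion relative to the conversation's emotion."""
    return POLARITY.get((candidate, reference), "neutral")

def fetch_by_polarity(reference_emotion: str, wanted: str) -> list[str]:
    """Fetch library emojis whose mapped emotion has the wanted polarity."""
    return [e for e, emo in EMOJI_LIBRARY.items()
            if categorize(emo, reference_emotion) == wanted]

print(fetch_by_polarity("joy", "negative"))  # ['pouring_water']
```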
[0028] The embodiments include displaying the plurality of fetched/created emojis to allow the user to select a response emoji. When a response emoji is selected, a resultant emoji is generated or fetched. The resultant emoji depicts a reaction on the emoji or graphical content or an action being performed on the emoji or graphical content. The resultant emoji is generated based on the response emoji and the at least one received emoji. A filmstrip of the at least one received emoji, response emoji and the resultant emoji is generated, which can be sent as a reply.
[0029] Referring now to the drawings, and more particularly to FIGS. 3 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0030] FIG. 3 depicts various units of a device 300 configured to generate a story for presenting the context of a conversation in a narrative format, according to embodiments as disclosed herein. As depicted in FIG. 3, the device 300 comprises a processor 301, a memory 302, a display 303, and a communication interface 304. The device 300 is capable of availing Rich Communication Services (RCS) or any other messaging services. The device 300 can communicate with remote devices and servers using cloud-based services. Examples of the device 300 are, but not limited to, a smartphone, a laptop, a tablet, an Internet of Things (IoT) device, a wearable computing device, and any other device capable of availing RCS. The device 300 can avail at least one messaging application/service.
[0031] The story generated by the device 300 is a filmstrip comprising at least one previously received emoji, at least one previous response emoji, a current response emoji, and a resultant emoji. If the device 300 initiates a conversation using a messaging application/service, the story comprises one or more emojis. The filmstrip can be appended, by the user of the device 300 or by other users, with further emojis. Consider that the communication interface 304 receives at least one of a text, media, graphical content (facemoji, animoji, GIF, memes, and so on), and an emoji from a sender, through the application/service, which triggers a conversation.
[0032] The processor 301 can recognize a context of the conversation based on at least one of the text, media, graphical content, and the emoji. For example, if the text is “Can’t wait to meet” and the emoji depicts a person smiling, the derived context is that a person is going to meet somebody and is anticipating the meeting. The processor 301 can recognize the context based on texts and emojis previously exchanged during the course of the conversation, if available. The processor 301 can generate an emoji which aptly represents the recognized context.
[0033] The processor 301 can check the memory 302 to determine whether metadata associated with the conversation is available in the memory 302. If the processor 301 determines that metadata associated with the conversation is not available in the memory 302, then metadata associated with the conversation can be extracted. In an embodiment, at least one of text, image, emojis, stickers, animojis, face emojis, and so on, in the conversation can be analyzed using natural language processing models and/or image processing techniques to understand possible meanings and senses associated with the conversation. Thereafter, the metadata can be extracted. Examples of metadata extracted by the processor 301 from the conversation include, but are not limited to, at least one emotion expressed through the conversation, at least one action performed by the emoji/graphical content, intent of the sender expressed through the conversation, at least one thing indicated in the text, at least one thing depicted in the emoji/graphical content, characters in the emojis/graphical content, and so on. Considering the example, wherein the text is “Can’t wait to meet” and the emoji depicts a person smiling and carrying balloons, the metadata extracted by the processor 301 includes anticipation, eagerness, joy, carrying gifts, desire to meet, and so on.
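A toy stand-in for this extraction step is sketched below. The disclosure relies on natural language processing and image processing models; this sketch substitutes a hypothetical keyword table and precomputed image tags purely for illustration.

```python
# Toy stand-in for the metadata extraction described above; the keyword
# table and tag list are illustrative assumptions, not the real models.

EMOTION_KEYWORDS = {          # hypothetical keyword -> emotion table
    "can't wait": "anticipation",
    "waiting": "impatience",
    "married": "joy",
}

def extract_metadata(text: str, emoji_tags: list[str]) -> dict:
    """Return emotions, depicted things, and intent for one message.
    `emoji_tags` stands in for labels an image model would produce,
    e.g. ['person', 'smiling', 'balloons']."""
    lowered = text.lower()
    emotions = [emo for kw, emo in EMOTION_KEYWORDS.items() if kw in lowered]
    return {
        "emotions": emotions or ["neutral"],
        "things": emoji_tags,
        "intent": "meet" if "meet" in lowered else "unknown",
    }

print(extract_metadata("Can't wait to meet", ["person", "smiling", "balloons"]))
# {'emotions': ['anticipation'], 'things': [...], 'intent': 'meet'}
```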
[0034] Once the metadata has been extracted, the processor 301 can store the conversation and the metadata associated with the conversation in the memory 302. The emoji can be stored in an emoji library of the device 300 (in the memory 302). The emoji library is updated when an emoji is borrowed from a cloud/external entity or when an emoji is received from another device/external entity. This allows the device 300 to expedite the process of extraction of metadata, if a similar text or emoji is received by the device 300 again. The processor 301 can send the text, emoji, graphical content, media, and so on, and the associated metadata, to the cloud/external entity. This allows other devices to fetch the emoji/graphical content if those devices are looking for similar emojis or similar graphical content.
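The caching behaviour described here can be reduced to a minimal memoization sketch, assuming a hypothetical string cache key; the real device would key on the stored conversation contents.

```python
# Minimal sketch: metadata is extracted once per message and reused when a
# similar message is seen again, so re-extraction is skipped on a cache hit.

from typing import Callable

_metadata_cache: dict[str, dict] = {}

def get_metadata(message_key: str, extractor: Callable[[], dict]) -> dict:
    """Run the (expensive) extractor only on a cache miss."""
    if message_key not in _metadata_cache:
        _metadata_cache[message_key] = extractor()
    return _metadata_cache[message_key]

# Usage: wrap whatever NLP/image pipeline is in use in a closure.
meta = get_metadata("cant-wait-to-meet", lambda: {"emotions": ["anticipation"]})
print(meta)
```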
[0035] The processor 301 can map the emojis, in the emoji library of the device 300, to at least one emotion. The processor 301 can fetch emojis from the emoji library of the device 300 which are mapped to emotions categorized as a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed in the conversation (derived from the metadata). Considering the example, the processor 301 can fetch emojis which are positive, neutral, or negative with respect to anticipation, eagerness, and joy.
[0036] In an embodiment, if the processor 301 determines that the emoji library of the device 300 does not include emojis mapped to at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed in the conversation, the processor 301 can fetch emojis from an emoji library in the cloud/external entity. The emojis in the emoji library of the cloud/external entity are mapped to different emotions. The processor 301 can fetch those emojis from the emoji library of the cloud which are mapped to positive, neutral, or negative emotions with respect to the at least one emotion expressed in the conversation. If the processor 301 determines that the emoji library of the cloud does not include emojis mapped to emotions which are positive, neutral, or negative with respect to the at least one emotion expressed in the conversation, the processor 301 can create emojis. The processor 301 can map the created emojis with different types of emotions, which can be categorized as positive, neutral, or negative, with respect to the at least one emotion expressed in the conversation. The processor 301 can store the created emojis in the emoji library of the device 300. The processor 301 can send the created emojis to the cloud. The cloud can store the created emojis in the emoji library in the cloud.
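The three-tier fallback in this paragraph (device library, then cloud library, then on-the-fly creation with write-back to both) can be sketched as follows; the dict-based libraries and the generated-emoji placeholder are illustrative assumptions standing in for real device/cloud storage and an emoji generator.

```python
# Sketch of the fallback chain: device library first, then cloud library,
# then create a new emoji and store it in both.

def fetch_or_create(emotion: str, device_lib: dict, cloud_lib: dict) -> str:
    """Return an emoji mapped to `emotion`, creating one if neither the
    device library nor the cloud library has a match."""
    for library in (device_lib, cloud_lib):
        for emoji, mapped in library.items():
            if mapped == emotion:
                return emoji
    created = f"generated_{emotion}_emoji"   # placeholder for actual creation
    device_lib[created] = emotion            # store locally ...
    cloud_lib[created] = emotion             # ... and push to the cloud
    return created

device = {"pouring_water": "annoyance"}
cloud = {"confetti": "celebration"}
print(fetch_or_create("celebration", device, cloud))  # confetti (from cloud)
print(fetch_or_create("boredom", device, cloud))      # newly created emoji
```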
[0037] The processor 301 can display a User Interface (UI), on the display 303, comprising a plurality of fetched/created emojis. The emojis are fetched from the emoji libraries of either the device 300 or the cloud. The UI provides an option to the user to select at least one emoji amongst the plurality of emojis. The emoji selected by the user is the response emoji. When the user selects the response emoji, the processor 301 can fetch/generate a corresponding resultant emoji.
[0038] In an embodiment, each of the resultant emojis depicts a reaction or an action performed on the received emoji or a created emoji. The resultant emoji can be fetched from the emoji library of the device 300. The processor 301 can search for a suitable emoji which contextually fits (as closely as possible) with a received emoji. If such a suitable emoji is not found, the search can be further generalized until a match is found. If a suitable emoji capable of acting as a resultant emoji is not available in the emoji library of the device 300, then the embodiments include searching the emoji library in the cloud to fetch a resultant emoji. If the search does not yield a resultant emoji, then a resultant emoji can be generated. The processor 301 can generate the resultant emoji based on the corresponding fetched emoji and the received emoji. Considering the example, if the user selects the “pouring water” emoji, which is mapped to a negative emotion with respect to the received emoji, i.e., a person smiling and carrying balloons, the resultant emoji can depict water being poured on the person depicted in the received emoji. In a case where the conversation comprises only text and there are no emojis in the conversation, the processor 301 can create an emoji depicting a person, wherein the resultant emoji can depict water being poured on the person depicted in the created emoji.
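The progressively generalized search described here is sketched below. The tag vocabulary and library layout are illustrative; "generalizing" is modelled as dropping the most specific tag and retrying, which is one plausible reading of the broadening search.

```python
# Sketch of the resultant-emoji search that broadens until a match is found.

RESULTANT_LIBRARY = {   # hypothetical: tag set -> resultant emoji
    frozenset({"pour_water", "person"}): "water_poured_on_person",
    frozenset({"punch", "person"}): "person_punched",
}

def find_resultant(tags: list[str]) -> str | None:
    """Match all tags; drop the last (most specific) tag and retry until a
    match is found or the query is exhausted."""
    while tags:
        hit = RESULTANT_LIBRARY.get(frozenset(tags))
        if hit is not None:
            return hit
        tags = tags[:-1]        # generalize the query
    return None                 # caller falls back to generating an emoji

print(find_resultant(["pour_water", "person", "balloons"]))
# 'water_poured_on_person', found after dropping 'balloons'
```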
[0039] Once the response emoji has been selected and the resultant emoji has been fetched or generated, the processor 301 can generate a filmstrip comprising one or more of the received emoji, the response emoji, and the resultant emoji. In an embodiment, if the conversation includes a filmstrip of emojis which have been previously exchanged, then a story can be presented in a narrative format by the processor 301 appending the response emoji and the resultant emoji to the filmstrip. If there are no emojis in the conversation, the processor 301 can generate the filmstrip comprising the selected response emoji and the resultant emoji. The communication interface 304 can send the filmstrip as a reply to the other participant(s) in the conversation. When the other participants in the conversation click on the filmstrip, the at least one previously sent emoji, the response emoji, and the resultant emoji are played sequentially.
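A minimal sketch of the filmstrip as an ordered, appendable sequence of emoji frames is given below; the class name and frame identifiers are illustrative, and rendering/playback is reduced to printing.

```python
# Sketch: the filmstrip stores frames in exchange order and "plays" them
# sequentially, as when a recipient taps the strip.

from dataclasses import dataclass, field

@dataclass
class Filmstrip:
    frames: list = field(default_factory=list)

    def append_exchange(self, response_emoji: str, resultant_emoji: str) -> None:
        """Append the latest response and resultant emojis to the story."""
        self.frames += [response_emoji, resultant_emoji]

    def play(self) -> None:
        """Play frames sequentially."""
        for frame in self.frames:
            print(frame)

strip = Filmstrip(["smiling_with_balloons"])        # previously received emoji
strip.append_exchange("pouring_water", "water_poured_on_person")
strip.play()
```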
[0040] In an embodiment, for group conversations, wherein a plurality of users are involved in a conversation (wherein the conversation comprises a plurality of emojis and text messages), the processor 301 can extract metadata associated with the group conversation. The processor 301 can extract the metadata by determining a text/emoji to which a majority of users of the group have reacted/responded by sending emojis/texts.
[0041] If a user of a group has missed one or more messages, the processor 301 can extract metadata associated with the missed messages of the group conversation, which have been marked as unread.
[0042] If a user of a group has newly joined the conversation and has not viewed any of the texts/emojis, the processor 301 can extract metadata associated with all messages that have been exchanged during the group conversation.
[0043] Once the metadata has been extracted, the text/emoji message which had triggered the majority of users of the group to react/respond can serve as a conversation trigger point. The processor 301 can highlight the text/emoji amongst all the unread messages. The conversation trigger point is temporary and can be updated based on real-time data (another text/emoji which has triggered the majority of the users to react/respond at the current time instant).
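Trigger-point detection can be sketched as picking the message that drew reactions or replies from the most distinct users; the reaction records below are hypothetical.

```python
# Sketch of conversation-trigger-point detection over illustrative records.

def trigger_point(reactions: list[tuple[str, str]]) -> str:
    """`reactions` holds (reacting_user, replied_to_message_id) pairs;
    return the message id with reactions from the most distinct users."""
    users_per_message: dict[str, set[str]] = {}
    for user, message_id in reactions:
        users_per_message.setdefault(message_id, set()).add(user)
    return max(users_per_message, key=lambda m: len(users_per_message[m]))

reactions = [("Jessica", "msg_marriage"), ("Noah", "msg_marriage"),
             ("Noah", "msg_venue")]
print(trigger_point(reactions))  # 'msg_marriage'
```

Re-running this over the live reaction log naturally yields the "updated at the current time instant" behaviour described above.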
[0044] In an embodiment, the processor 301 can fetch/create an emoji corresponding to the emotions conveyed through the texts/emojis which were considered to extract the metadata. For example, if the conversation trigger point text/emoji mentions or depicts a celebration theme, the processor 301 can fetch/create a floating-confetti emoji. The emoji can be displayed along with the conversation trigger point text/emoji. The amount or intensity of elements contributing to an ambience effect can be based on the number of users in the group or the time instant at which the messages had been delivered. This allows the users to familiarize themselves with the context of the group conversation, even if the users join the group conversation at a later point in time.
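One way to scale the ambience effect with group activity is sketched below; the per-user factor and the cap are arbitrary illustrative choices, not values from the disclosure.

```python
# Sketch: more reacting users -> denser ambience effect (e.g. more floating
# confetti elements), up to a cap.

def ambience_intensity(reacting_users: int, per_user: int = 5,
                       cap: int = 50) -> int:
    """Number of ambience elements to render for the trigger-point message."""
    return min(reacting_users * per_user, cap)

print(ambience_intensity(2))   # 10 elements for two reacting users
print(ambience_intensity(40))  # capped at 50
```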
[0045] FIG. 3 shows exemplary units of the device 300, but it is to be understood that other embodiments are not limited thereto. In other embodiments, the device 300 may include fewer or more units. Further, the labels or names of the units are used only for illustrative purposes and do not limit the scope of the invention. One or more units can be combined to perform the same or a substantially similar function in the device 300.
[0046] FIG. 4 is a flowchart 400 depicting a method for generating a story for presenting the context of a conversation in a narrative format, according to embodiments as disclosed herein. At step 401, the method includes extracting metadata associated with the contents of a conversation (which can comprise at least one of text, emojis, graphical content, media, and so on). The embodiments can determine whether metadata associated with the conversation is available in the device 300. If it is determined that metadata associated with the conversation is not available in the device 300, then the embodiments include extracting metadata associated with the conversation. The metadata comprises at least one emotion expressed using the at least one of text, emoji, graphical content, and media; actions performed by the at least one of emojis and graphical content; intents expressed through the at least one text, emoji, graphical content, and media; at least one thing indicated/depicted in the at least one text/emoji/graphical content/media; and so on.
[0047] The embodiments include storing the contents of the conversation, and the metadata associated with the conversation in the emoji library of the device 300. The embodiments include sending the at least one text, the emoji, and the metadata associated with the conversation to the cloud, wherein other devices can fetch the emojis from the cloud (if required).
[0048] At step 402, the method includes fetching emojis from an emoji library, wherein emotions associated with the fetched emojis are categorized as at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to emotions expressed in the conversation. The embodiments include mapping emojis in the emoji library to particular emotions. The emotions mapped to the emojis in the emoji library can be a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed in the conversation.
[0049] If the embodiments determine that the emoji library of the device 300 does not include emojis mapped to emotions which are at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed in the conversation, the embodiments include fetching emojis from the emoji library in the cloud, which are mapped to different emotions. The emojis that are fetched from the emoji library in the cloud are mapped to at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed in the conversation.
[0050] If the embodiments determine that the emoji library of the cloud does not include emojis mapped to emotions which are at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed using the at least one text and the at least one emoji in the conversation, the embodiments include creating emojis. The embodiments include mapping the created emojis with different types of emotions, which can be categorized as at least one of a positive emotion, a neutral emotion, or a negative emotion, with respect to the at least one emotion expressed through the conversation.
[0051] The embodiments include storing the created emojis in the emoji library of the device 300. The embodiments include sending the created emojis to the cloud. The cloud can store the created emojis in the emoji library in the cloud.
[0052] At step 403, the method includes displaying the fetched emojis to enable the user of the device 300 to select an emoji as a response emoji. The emoji which is selected by the user is referred to as the response emoji. The emojis can be fetched from the emoji library of the device 300 or the emoji library of the cloud. If the emojis to be fetched are not available in the emoji libraries of the device 300 and the cloud, the embodiments include creating the emojis and storing them in the emoji library of the device 300, from which the emojis can be fetched. In an embodiment, the fetched emojis can be displayed in a UI. The UI can provide an option to the user to select an emoji amongst the fetched emojis.
[0053] At step 404, the method includes fetching/generating a resultant emoji corresponding to the response emoji. The resultant emoji can be fetched from the emoji library of the device 300. If a suitable emoji capable of acting as a resultant emoji is not available in the emoji library of the device 300, then the embodiments include fetching a resultant emoji from the emoji library in the cloud. If such a suitable emoji, capable of acting as a resultant emoji, is not available in the emoji library of the cloud either, then a resultant emoji can be generated based on the response emoji and/or the emojis in the conversation. The resultant emoji is displayed in the UI along with the response emoji. The resultant emoji depicts a reaction or an action performed on the at least one emoji in the conversation or a created emoji.
[0054] In case there are no emojis in the conversation, the embodiments include creating at least one emoji. The embodiments include generating the corresponding resultant emojis based on the corresponding fetched emojis and/or the at least one created emoji. The resultant emojis depict a reaction or an action being performed on the at least one created emoji.
[0055] At step 405, the method includes generating a filmstrip comprising the at least one emoji in the conversation, the response emoji, and the resultant emoji. In an embodiment herein, the filmstrip can comprise text (as provided by the user) and/or media (as inserted by the user). The filmstrip can be referred to as the story, wherein the previous response and resultant emojis, the currently selected response emoji, and the corresponding resultant emoji are played in a sequential manner. The embodiments include sending the filmstrip to other users.
[0056] In an embodiment, if the conversation already includes a filmstrip of emojis, then the filmstrip can be appended with the response emoji and the corresponding resultant emoji. If there are no emojis in the conversation, the generated filmstrip includes the selected response emoji and the resultant emoji.
[0057] The various actions in the flowchart 400 may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
[0058] FIGS. 5a-5c depict example scenarios, wherein emojis and stories are generated to present the context of a conversation in a narrative format, according to embodiments as disclosed herein. As depicted in FIG. 5a, Sam receives a text from Kim “Are you ready yet? I am waiting!” On receiving the text, Sam can send an emoji, expressing his emotions of anticipation, joy, and celebration, using balloons, and a text “10 more min…can’t wait to meet!”
[0059] The embodiments include extracting metadata from the text and the emoji sent by Sam. Based on the metadata, the emotions expressed by Sam are found to be anticipation, joy, and celebration. The embodiments include fetching emojis mapped with emotions categorized as positive, neutral, or negative, with respect to the emotions expressed through the emoji sent by Sam. Consider that Kim is annoyed because she has to wait another 10 minutes. Thus, the emotion felt by Kim is negative with respect to the emotions of Sam, as expressed by Sam through the emoji. Therefore, she selects an emoji mapped to annoyance, which is negative with respect to anticipation, joy, and celebration. The emoji selected by Kim depicts water being poured.
[0060] Once Kim selects the emoji mapped to the negative emotion of annoyance, a resultant emoji is concurrently generated. An action of pouring water is depicted on the person depicted in the resultant emoji. It can be noted that the person depicted in the resultant emoji is the person depicted in the emoji sent by Sam. Kim can also send a text “Get here right now!!!” along with the emoji. The embodiments include generating a filmstrip comprising of the emoji sent by Sam, the emoji selected by Kim and the resultant emoji, generated based on the emoji sent by Sam and the emoji selected by Kim. Thereafter, Kim can send the filmstrip to Sam. On receiving the filmstrip, Sam can select the filmstrip. When the filmstrip is selected, the emoji sent by Sam, the emoji selected by Kim, and the resultant emoji are played sequentially.
[0061] As depicted in FIG. 5b, John has sent a text to Jessica “Hey! Let’s start studying now. We really need to catch up”. On receiving the text, Jessica can send an emoji, expressing her emotions of laziness and relaxation, along with a text “I am watching a movie”. The embodiments include extracting metadata from the text and the emoji sent by Jessica. Based on the metadata, the emotions expressed by Jessica are found to be laziness and relaxation. When John intends to send an emoji to Jessica as a reply, the embodiments include fetching emojis mapped with emotions categorized as positive, neutral, or negative, with respect to the emotions expressed through the emoji sent by Jessica.
[0062] Consider that John is angry because Jessica is not serious about studying and is watching a movie. Thus, the emotion felt by John is negative with respect to the emotions expressed by Jessica through the emoji. Therefore, John selects an emoji mapped to anger, which is negative with respect to laziness and relaxation. The emoji depicts a closed fist with the intent to punch Jessica. Once John selects the emoji mapped to the negative emotion of anger, a resultant emoji is fetched or generated. An action of punching is depicted on the person depicted in the emoji sent by Jessica. This is the resultant emoji. John can also send a text “What? Seriously?” along with the emoji.
[0063] The embodiments include generating a filmstrip comprising of the emoji sent by Jessica, the emoji selected by John and the resultant emoji, generated based on the emoji sent by Jessica and the emoji selected by John. Thereafter, John can send the filmstrip to Jessica. On receiving the filmstrip, Jessica can select the filmstrip. When the filmstrip is selected, the emoji sent by Jessica, the emoji selected by John, and the resultant emoji are played sequentially.
[0064] As depicted in FIG. 5c, Dave had initiated a group conversation by announcing the news of his marriage among his group of friends. Dave has sent a text “Guys! I am getting married!”. The embodiments include extracting metadata from the text sent by Dave. Based on the metadata, the emotions derived from the text are found to be joy and happiness. Consider that Jessica is the first person in the group to respond to the message. Jessica intends to express her emotions by sending an emoji. The embodiments can fetch emojis mapped with emotions categorized as positive, neutral, or negative, with respect to the emotions expressed in the text sent by Dave. Consider that Jessica is happy because Dave is going to get married. Thus, the emotion felt by Jessica is positive with respect to the emotion expressed by Dave through the text. Therefore, Jessica selects an emoji mapped to celebration, which is positive with respect to joy and happiness.
[0065] As Dave has not sent any emoji, the embodiments include creating an emoji. In an example, the created emoji can be a smiley. Once Jessica selects the emoji mapped to the positive emotion of celebration, a resultant emoji is concurrently generated. The resultant emoji is a celebration-themed emoji. Jessica can also send a text “Awesome! Celebrations time...Let’s party” along with the emoji. In this scenario, the filmstrip comprises only the resultant emoji sent by Jessica.
[0066] FIG. 6 is an example scenario depicting the generation of a summary of a group conversation to facilitate comprehension of the context of the group conversation, according to embodiments as disclosed herein. Consider that the group comprises four users, viz., Dave, Jessica, Noah, and Jenny. As depicted in FIG. 6, Dave had posted a text “Guys! I am getting married”. The text is received by the other users, and they respond by posting emojis and other congratulatory messages.
[0067] Jessica is the first person in the group to respond to the text sent by Dave, followed by Noah. Both Jessica and Noah express their emotions by sending text messages and emojis. The embodiments can fetch emojis mapped with emotions categorized as positive, neutral, or negative, with respect to the emotions expressed in the text sent by Dave. Consider that both Jessica and Noah are happy because Dave is going to get married. The emotions felt by Jessica and Noah are positive with respect to the emotion expressed by Dave through the text. Therefore, Jessica and Noah respectively select emojis which are mapped to celebration.
[0068] Consider that Jenny joins the conversation late and is overwhelmed by the number of messages and emojis that have been exchanged earlier. In this scenario, the embodiments include extracting metadata associated with the group conversation. As Jenny has joined the conversation late and has not viewed any of the texts and emojis exchanged previously, the embodiments include extracting metadata associated with all messages that have been exchanged during the group conversation. Extracting the metadata involves determining that the text “Guys! I am getting married” has caused the majority of members of the group to react/respond by sending emojis/texts. The text “Guys! I am getting married” is considered the conversation trigger point and is, consequently, highlighted. The embodiments can generate an emoji based on all the emojis that have been exchanged during the conversation. The emoji is displayed along with the text “Guys! I am getting married”. The emoji is generated based on the amount/intensity of elements contributing to an ambience effect, which in turn is based on the number of members in the group. The conversation trigger point and the generated emoji allow Jenny to familiarize herself with the context of the group conversation.
[0069] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in FIG. 3 include blocks which can be at least one of a hardware device, or a combination of a hardware device and a software module.
[0070] The embodiments disclosed herein describe methods and systems for generating a filmstrip comprising emojis, wherein the emojis are generated based on a context derived from the text and emojis in a conversation, and metadata associated with the text and emojis in the conversation, and wherein the filmstrip presents the context of the conversation in a narrative format. Therefore, it is understood that the scope of the protection is extended to such a program, and in addition to a computer readable means having a message therein, such computer readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment through or together with a software program written in, for example, Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL modules or several software modules being executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may also include means, which could be, for example, a hardware means, for example, an Application-specific Integrated Circuit (ASIC), or a combination of hardware and software means, for example, an ASIC and a Field Programmable Gate Array (FPGA), or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, for example, using a plurality of Central Processing Units (CPUs).
[0071] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the embodiments as described herein.

Documents

Application Documents

# Name Date
1 202041025971-FORM 1 [19-06-2020(online)].pdf 2020-06-19
2 202041025971-COMPLETE SPECIFICATION [19-06-2020(online)].pdf 2020-06-19
3 202041025971-DRAWINGS [19-06-2020(online)].pdf 2020-06-19
4 202041025971-DECLARATION OF INVENTORSHIP (FORM 5) [19-06-2020(online)].pdf 2020-06-19
5 202041025971-STATEMENT OF UNDERTAKING (FORM 3) [19-06-2020(online)].pdf 2020-06-19
6 202041025971-POWER OF AUTHORITY [19-06-2020(online)].pdf 2020-06-19
7 202041025971-FORM 18 [19-06-2020(online)].pdf 2020-06-19
8 202041025971-REQUEST FOR EXAMINATION (FORM-18) [19-06-2020(online)].pdf 2020-06-19
9 202041025971-Abstract_19-06-2020.jpg 2020-06-19
10 202041025971-FER.pdf 2022-01-04
11 202041025971-FER_SER_REPLY [04-07-2022(online)].pdf 2022-07-04
12 202041025971-CLAIMS [04-07-2022(online)].pdf 2022-07-04
13 202041025971-COMPLETE SPECIFICATION [04-07-2022(online)].pdf 2022-07-04
14 202041025971-DRAWING [04-07-2022(online)].pdf 2022-07-04
15 202041025971-CORRESPONDENCE [04-07-2022(online)].pdf 2022-07-04
16 202041025971-OTHERS [04-07-2022(online)].pdf 2022-07-04
17 202041025971-US(14)-HearingNotice-(HearingDate-07-11-2024).pdf 2024-10-10
18 202041025971-Annexure [30-10-2024(online)].pdf 2024-10-30
19 202041025971-Correspondence to notify the Controller [30-10-2024(online)].pdf 2024-10-30
20 202041025971-FORM-26 [30-10-2024(online)].pdf 2024-10-30
21 202041025971-US(14)-ExtendedHearingNotice-(HearingDate-19-11-2024)-1230.pdf 2024-11-06
22 202041025971-Correspondence to notify the Controller [14-11-2024(online)].pdf 2024-11-14
23 202041025971-FORM-26 [14-11-2024(online)].pdf 2024-11-14
24 202041025971-Annexure [04-12-2024(online)].pdf 2024-12-04
25 202041025971-PETITION UNDER RULE 137 [04-12-2024(online)].pdf 2024-12-04
26 202041025971-RELEVANT DOCUMENTS [04-12-2024(online)].pdf 2024-12-04
27 202041025971-Written submissions and relevant documents [04-12-2024(online)].pdf 2024-12-04
28 202041025971-IntimationOfGrant30-04-2025.pdf 2025-04-30
29 202041025971-PatentCertificate30-04-2025.pdf 2025-04-30

Search Strategy

1 SearchHistory(43)E_27-12-2021.pdf

ERegister / Renewals

3rd: 29 Jul 2025

From 19/06/2022 - To 19/06/2023

4th: 29 Jul 2025

From 19/06/2023 - To 19/06/2024

5th: 29 Jul 2025

From 19/06/2024 - To 19/06/2025

6th: 29 Jul 2025

From 19/06/2025 - To 19/06/2026