
Systems And Methods For Dynamic Learning And Prediction Of Emoticons

Abstract: The present invention provides a method and systems for dynamic learning and prediction of emoticons. In an offline mode, when a user enters text in an application that supports emoticons, provided the application is integrated with the component of the invention, the component receives the text entered by the user and identifies the emoticon-word/phrase/context index association in order to allow the application to predict emoticons, wherein the emoticon index is mapped with a word/phrase/context index. Further, in an online mode, the user may acquire/download a new set of emoticons from the application store. The new set of emoticons is acquired/downloaded along with emoticon-word index association information, such that when the user is entering text in the application, the component receives the text and identifies the emoticon-word/phrase/context index association in order to allow the application to predict the newly installed emoticons first, followed by the old ones.


Patent Information

Application #
482/CHE/2014
Filing Date
03 February 2014
Publication Number
36/2016
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

KEYPOINT TECHNOLOGIES INDIA PVT. LTD.
RAJAPRAASADAMU, RAJA PRAASADAMU JUNCTION, LEVEL 2, WING 1B & 2, BOTANICAL GARDENS ROAD, KONDAPUR, HYDERABAD - 500 084

Inventors

1. MEDAPURAM SUBRAMANYA VARUN
FLAT NO: 301, VENKATA RAMANA RESIDENCY, VENKATA RAMANA COLONY, BANJARA HILLS, HYDERABAD - 500 004
2. GANGAPURI KRISHNA
FLAT NO: 203, HNO 14-1-90/489, GAYATHRI NAGAR, BORABANDA, HYDERABAD - 500 018
3. SUMIT GOSWAMI
FLAT NO: 52, DAFFODIL, L&T SERENE COUNTRY, TELECOM NAGAR, GACHIBOWLI, HYDERABAD - 500 032
4. SUNIL MOTAPARTI
PLOT NO: 918, ROAD NO 47, JUBILEE HILLS, HYDERABAD - 500 033

Specification

[0001] Priority Claim:
[0002] This application claims priority from Provisional Application Number 482/CHE/2014 dated February 3, 2014, filed with the Indian Patent Office, Chennai, entitled "System and method for dynamic learning and prediction of emoticons", the entirety of which is expressly incorporated herein by reference.
[0003] Technical field of the invention
[0004] The present invention relates generally to communications, and in particular to systems and methods for facilitating dynamic learning of emoticon indices and predicting the emoticons (animated and/or non-animated)/images/pictures associated with an application while a user input is being provided.
[0005] Background of the invention
[0006] Generally, users exchange text type character messages using a short message service (SMS) or a multimedia message service (MMS) in order to transfer photos, moving pictures or music with text. A user of a mobile communication terminal occasionally includes emoticons in a message. 'Emoticon' is a combination of emotion and an icon. A user inputs emoticons while texting, or in any other related field, in order to express her/his emotion efficiently and concisely. Recently, graphic emoticons generated in graphic or image format have come into global use, and the number and variety of emoticons from which a user can choose have grown vastly. Due to this expansion in the number, usage, availability, and variety of emoticons, it takes users a lot of time to browse through and select a newly installed/updated emoticon.
[0007] The existing chat/text transfer applications have a lot of emoticons, both Unicode and non-Unicode, and the chat applications do not have any specific offering where the emoticons are predicted based on the content being typed by the user in the application. Current keyboard based emoticon prediction methods can predict only Unicode images/pictures, but there are other limitations too. If users of specific applications want to acquire/download new emoticons (images/stickers, emojis) on the fly, they do so directly from within that application. This new emoticon library is not available to the keyboards installed into the input framework that is part of the underlying OS (operating system). Hence, if these independently installed keyboards have to predict the new emoticons, they have to get all this information offline from the third party applications and agree on some protocol for entering these images from the keyboard into the editor. The keyboard software also has to be updated on the device to reflect this protocol change. This is a crude, time-consuming, and resource-intensive process if one wants to ship a keyboard that works across all applications with dynamic emoticon prediction.
[0008] Hence, looking at the problems in the prior art, there is a need for a system and method for dynamic learning and prediction of emoticons which are acquired/downloaded inside an application that supports emoticons, and for offering the acquired/downloaded emoticons as predictions with relevance and context.
[0009] Summary of the invention
[0010] The present invention overcomes the drawbacks in the prior art and provides a system and method for dynamic learning and prediction of emoticons. In various example embodiments, the dynamic learning of new emoticons may be weaved into the invention through a plug-in, Android application package (APK), keyboard application, or source code interaction technique, as the newly available images/emoticons are acquired/downloaded from an application store. Upon such acquisition/download of the new emoticons, the application may receive mapping information, for example an emoticon-word index map that may provide a mapping between the words and the emoticons. The application may then push the mapping information onto the component, wherein the component may be a plug-in, Android application package (APK), keyboard application, or source code. The component may then update its emoticon add-on with this mapping information. The next time the user types a word for which there is a mapping to a new emoticon, based on contextual relevance or a weighted metric, the component is able to identify one or more suitable emoticon indices to suggest to the application. The component sends this information to the underlying application, and the application makes a decision on displaying the emoticon predictions.
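By way of a non-limiting sketch only, the emoticon-word index map described above may be pictured as a simple associative structure that the component merges into at runtime. The Java class below is an illustration of this idea, not the actual component; the names EmoticonWordIndexMap, update and emoticonsFor are invented for this example.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative shape of the emoticon-word index map the application pushes
    // onto the component; all names here are hypothetical.
    public class EmoticonWordIndexMap {
        // word index -> emoticon indices mapped to that word
        private final Map<Integer, List<Integer>> map = new HashMap<>();

        // Dynamic update: merge the mapping information received along with a
        // newly acquired/downloaded emoticon set, without rebuilding the keyboard.
        public void update(Map<Integer, List<Integer>> newMappings) {
            for (Map.Entry<Integer, List<Integer>> entry : newMappings.entrySet()) {
                map.computeIfAbsent(entry.getKey(), k -> new ArrayList<>())
                   .addAll(entry.getValue());
            }
        }

        // Prediction-time lookup: which emoticon indices suit this word index?
        public List<Integer> emoticonsFor(int wordIndex) {
            return map.getOrDefault(wordIndex, List.of());
        }
    }

In this sketch, update corresponds to the application pushing mapping information onto the component after a download, and emoticonsFor corresponds to the lookup the component performs at prediction time.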
[0011] In an embodiment of the present invention, a method is provided for dynamic learning and prediction of emoticons. The method includes receiving a user input in an editor of an application, wherein the user input is a word, phrase, or text. The input is identified by filtering the input from a memory based on the initial character of the input, wherein the memory is defined by one or more dictionaries, each holding quantitative information relating to the input, wherein the quantitative information comprises probability information relating to words/phrases historical usage. The filtered input is prioritized based on prioritization parameters, wherein the prioritization parameters include the input associated with the top-ranked word in the memory, and a word index associated with the top-ranked word is identified. Further, one or more emoticon indices associated with the user input are identified using a component, wherein the emoticon indices are identified based on a contextual relevance and a weighted metric, and the information is sent to the application. The predicted emoticons are displayed to the user in the application, wherein the predicted emoticons are the emoticons that are mapped with the relevance and context of the input. Then, the emoticons selected by the user are sent to the component to dynamically update the usage statistics of the user, wherein said usage statistics include the combined weighted metrics of the input.
[0012] The words/phrases historical usage may include the occurrence and/or association of two or more words within a phrase, and the likelihood of a phrase being grouped with one or more other words, which together determine the context of a phrase or a sentence.
[0013] The words of one or more dictionaries may be prioritized by considering factors like system behavioral parameters, historical usage, and contextual relevance.
[0014] In another configuration, a method is provided to acquire/download emoticons from a store for dynamic learning and prediction of emoticons. The method may include the steps of acquiring the user-desired images/emoticons from an application store and receiving mapping information along with the acquired images/emoticons, wherein the mapping information includes word index association information obtained along with the acquired emoticons. The mapping information is transferred to the component, wherein the component dynamically updates one or more word indices with one or more acquired emoticon indices. A user input is received in the editor of an application, wherein the user input is a word, phrase, or text. Further, one or more acquired/downloaded emoticon indices associated with the user input are identified using a component, wherein the emoticon indices are identified based on a contextual relevance and a weighted metric, and the information is sent to the application. The predicted emoticons are displayed to the user in the application, wherein the predicted emoticons are the acquired emoticons that are mapped with the relevance and context of the input. The emoticons selected by the user are sent to the component to dynamically update the usage statistics of the user, wherein said usage statistics include the combined weighted metrics of the input. The step of displaying displays the acquired emoticons to the user in the application, wherein the acquired emoticon is displayed first, followed by an existing emoticon integrated with the application.
[0015] In a preferred embodiment of the present invention, said component may be a plug-in, an Android Application Package (APK), a keyboard application, or source code.
[0016] In a preferred embodiment of the present invention, the component is configured to predict one or more emoticons while a text or a character is being entered in the editor of the application, wherein the component is integrated with a third party application that supports emoticons.
[0017] In a preferred embodiment of the present invention, the user input is not limited to manually entered text but includes one or more of a user gesture, swiping on a keyboard, tapping on a keyboard, an audio input, etc.
[0018] In another embodiment of the present invention, the method is further configured to identify an error in the user's input and provide the nearest possible word for the received error, wherein the error in the input includes a misspelling or typographical error, and wherein the method further provides the predicted emoticon for the nearest possible word.
[0019] In another embodiment of the present invention, a system is provided for dynamic learning and prediction of emoticons. The system comprises a component integrated with an application module, wherein the component dynamically updates the usage statistics of a user. The application module further comprises an input module to receive a user input in an editor of an application; a memory unit, wherein the input is identified by filtering the input from the memory unit based on the initial character of the input; a prioritizing module to prioritize the filtered input based on prioritization parameters, wherein the prioritization parameters include the user input associated with the top-ranked word in the memory unit; and an emoticon module to predict one or more emoticon indices for the user input using the component.
[0020] In the preferred embodiment of the present invention, the system is further configured to acquire the user-desired images/emoticons from an application store and integrate the acquired images/emoticons with the component.
[0021] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
[0022] Brief description of the drawings:
[0023] The foregoing and other features of embodiments will become more apparent from the following detailed description of embodiments when read in conjunction with the accompanying drawings. In the drawings, like reference numerals refer to like elements.
[0024] Figure 1 illustrates a process flow for dynamic learning and prediction of emoticons in accordance with an embodiment of the invention.
[0025] Figure 2 illustrates a method to acquire/download emoticons from store for dynamic learning and prediction of emoticons in accordance with an embodiment of the invention.
[0026] Figure 3 illustrates a screen shot of an exemplary third party application integrated with the component to predict emoticons in accordance with an embodiment of the invention.
[0027] Figure 4, Figure 5 and Figure 6 illustrate screen shots of an exemplary third party application that allows a user to acquire/download emoticons from a store and to assign a priority to the newly acquired/downloaded emoticons in accordance with an embodiment of the invention.
[0028] Figure 7 illustrates a screen shot of an exemplary third party application to suggest multiple images for a single word in accordance with an embodiment of the invention.
[0029] Figure 8 illustrates a screen shot of an exemplary third party application to suggest emoticons of the error-corrected word in accordance with an embodiment of the invention.
[0030] Figure 9 illustrates tables showing multiple emoticons based on a one-word input.
[0031] Figure 10 illustrates the block diagram for dynamic learning and prediction of emoticons in accordance with an embodiment of the invention.
[0032] Detailed description of the invention:
[0033] Reference will now be made in detail to the description of the present subject matter, one or more examples of which are shown in the figures. Each embodiment is provided to explain the subject matter and not as a limitation. These embodiments are described in sufficient detail to enable a person skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, physical, and other changes may be made within the scope of the embodiments. The following detailed description is, therefore, not to be taken as limiting the scope of the invention; instead, the invention is to be defined by the appended claims.
[0034] In the various embodiments of the present invention, the term "component" refers to a plug-in, an Android application package (APK), a keyboard application, or source code.
[0035] The term 'new emoticons' may refer to one or more sets of emoticons acquired/downloaded from the application store provided by the application.
[0036] The term ‘old emoticons’ may refer to the emoticons which are already associated with the application or integrated within the application.
[0037] The present invention provides a method and systems for dynamically learning the emoticons and new emoticons acquired/downloaded inside an application that supports emoticons, and for offering the acquired/downloaded emoticons as predictions with relevance and context. In various example embodiments, the dynamic learning of new emoticons may be weaved into the invention through a component interaction technique, as the newly available images/emoticons are acquired/downloaded from an application store. On such acquisition/download of the new emoticons, the application may receive mapping information, for example an emoticon-word index map that may provide a mapping between the words and the emoticons. The application may then push the mapping information onto the component. The component may then update its emoticon add-on with this mapping information. The next time the user types a word for which there is a mapping to a new emoticon, based on contextual relevance or a weighted metric, the component is able to identify one or more suitable emoticon indices to suggest to the application. The component sends this information to the underlying application, and the application makes a decision on displaying the emoticon predictions. Various methods and systems for facilitating dynamic inclusion of emoticons are disclosed herein and are explained below in further detail.
[0038] In an embodiment, the systems for facilitating dynamic inclusion of emoticons may be embodied in a component (which is a form of source code) that works in tandem with any editor. In an embodiment, the component may be configured to predict one or more emoticons while a text or a character is being entered, after integrating the component with a third party application that supports emoticons. Examples of such applications may include, but are not limited to, text applications such as SMS, multimedia applications and the like. As used herein, the term 'Emoticons' may refer to animated and non-animated emoticons, images and/or pictures represented based on expression, text, activity or emotions.
[0039] Figure 1 illustrates a process flow for dynamic learning and prediction of emoticons in accordance with an embodiment of the invention. The method (100) initiates at step 101, when a user input is received in an editor of an application, wherein the input may be a word, phrase, or text. At step 102, the input is identified by filtering the input from a memory based on the initial character of the input, and at step 103, the filtered input is prioritized based on the prioritization parameters, wherein the prioritization parameters include the input associated with the top-ranked word in the memory. At step 104, a word index associated with the top-ranked word is identified. Further, at step 105, one or more emoticon indices are identified using a component, based on a contextual relevance and a weighted metric, and this information is sent to the application. Then, at step 106, the predicted emoticons are displayed to the user in the application, wherein the predicted emoticons are the emoticons that are mapped with the relevance and context of the input. Finally, at step 107, the emoticons selected by the user are sent to the component, wherein the component dynamically updates the usage statistics of the user.
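Purely as an illustrative sketch of steps 101 to 107, the flow may be expressed in code as follows; the dictionary of usage counts and the word-to-emoticon map used here are stand-ins assumed for this example and are not mandated by the figure.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    // Non-limiting sketch of the Figure 1 flow; all data structures assumed.
    public class PredictionFlow {
        private final Map<String, Integer> usageCount;            // historical usage per word
        private final Map<String, List<String>> wordToEmoticons;  // word -> mapped emoticons

        public PredictionFlow(Map<String, Integer> usageCount,
                              Map<String, List<String>> wordToEmoticons) {
            this.usageCount = usageCount;
            this.wordToEmoticons = wordToEmoticons;
        }

        // Steps 101-105: receive input, filter by initial character(s),
        // prioritize by historical usage, take the top-ranked word, and look
        // up its emoticon mapping. Display (step 106) is left to the application.
        public List<String> predict(String input) {
            List<String> candidates = new ArrayList<>();
            for (String word : usageCount.keySet()) {
                if (word.startsWith(input)) candidates.add(word);     // step 102
            }
            if (candidates.isEmpty()) return List.of();
            candidates.sort(Comparator.comparingInt(
                    (String w) -> usageCount.get(w)).reversed());     // step 103
            String topWord = candidates.get(0);                       // step 104
            return wordToEmoticons.getOrDefault(topWord, List.of()); // step 105
        }

        // Step 107: a selection feeds back into the usage statistics.
        public void onSelected(String word) {
            usageCount.merge(word, 1, Integer::sum);
        }
    }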
[0040] Figure 2 illustrates a method to acquire/download emoticons from a store for dynamic learning and prediction of emoticons in accordance with an embodiment of the invention. The method (200) initiates with step 201, where user-desired images/emoticons are acquired/downloaded from an application store and mapping information is received along with the acquired images/emoticons. The mapping information includes word index association information obtained along with the acquired emoticons. At step 202, the mapping information is sent to the component, wherein the component dynamically updates one or more word indices with one or more acquired emoticon indices. At step 203, the user input is received in an editor of an application. At step 204, one or more emoticon indices are identified using a component, based on a contextual relevance and a weighted metric. The identified emoticon indices are sent to the application. Further, at step 205, the predicted emoticons are displayed to the user in the application, wherein the predicted emoticons are the acquired emoticons that are mapped with the relevance and context of the input. Finally, at step 206, the emoticons selected by the user are sent to the component, wherein the component dynamically updates the usage statistics of the user.
[0041] In another embodiment of the present invention, the user enters a text in an application that supports emoticons, the application being integrated with the component of the invention. In this case the user device is connected to the internet. The user may acquire/download a new set of emoticons from the application store through the internet connection. The new set of emoticons may be acquired/downloaded along with emoticon-word index association information; however, the word index that is mapped with the emoticon index need not be the same word index as that of the word the user has entered. For example, the user typed 'happy' but the happy emoticon index is mapped to the index of the word 'glad'. In this case, when the user types glad, the component identifies the map between the index of the word glad and the index of the emoticon happy; hence the application starts predicting when the user enters the first character 'g' of the word glad and shows the emoticon happy. The component maps the word index that is associated with the newly acquired/downloaded emoticon index to the word index associated with the word entered by the user.
[0042] Figure 3 illustrates a screen shot of an exemplary third party application that is integrated with the component to predict emoticons in accordance with an embodiment of the invention. In the present invention, as the user enters the input in an editor (301) of an application, the system receives the user input, for example, a character pressed or the text entered in the application, as shown in Figure 3. In the preferred embodiment, the user input is not limited to manually entered text but may include one or more of a user gesture, swiping on a keyboard, tapping on a keyboard, an audio input, etc. The system further identifies one or more emoticon indices that are associated with the word/phrase/context index. The step of identifying the emoticon indices may take into consideration the mapping information, which may include the contextual relevance of the possible emoticons with the words/phrases that precede or succeed them; other parameters included in the mapping information may involve frequencies, sentiments, semantic/ontological information, etc., which may be used to derive a combined weighted metric to select the one or more best emoticon indices that may be sent to the application. The mapping information is then further sent to the application to help suggest one or more emoticons (302) in a user interface. An exemplary user interface is illustrated in Figure 3. The emoticons (302) selected by the user may be passed as mapping information back to the component for it to dynamically update the usage statistics, which influence the combined weighted metrics the next time a similar usage occurs.
[0043] Figure 4, Figure 5 and Figure 6 illustrate screen shots of an exemplary third party application that allows a user to download/acquire emoticons from a store and to assign a priority to the newly downloaded/acquired emoticons in accordance with an embodiment of the invention. In an embodiment of the present invention, consider an example wherein the user may intend to acquire/download a set of emoticons on-line using the store (401) provided in the application, wherein the user may be using a third party application which is configured to support emoticons and is integrated off-line with the system (embodied, for example, in a component) of the present invention. Each set of emoticons that may be available on the store of the application is equipped with a file that has emoticon-to-word associations to identify the relevant one or more emoticon indices, which can be Unicode or non-Unicode, such that one or more emoticons of the application can be suggested while entering text.
[0044] In an embodiment, the system (embodied, for example, in a component) may have pre-stored information on a pre-defined set of words and standard emoticon mappings, integrated offline. Figure 4 illustrates an example screen shot describing a scenario wherein a user may intend to acquire/download an emoticon from an application store (401). In an embodiment, the user may access the application store (401) of the application and identify the emoticon. For example, as illustrated in Figure 5, the user may access an application store (401) to select an emoticon, for example a krish-3 (506) emoticon, and acquire/download the same from the application store. The application may then inform the system (for example, the component) about the newly acquired/downloaded emoticons by sending an emoticon-word map. The system (or component) of the present invention then dynamically learns the mapping information associated with the newly acquired/downloaded emoticons. The mapping information may include word index association information that may be obtained along with the newly acquired/downloaded emoticons, wherein the new emoticons may include one or more emoticons acquired/downloaded from the application store provided by the application. In an example embodiment, the emoticon-word index association information may include a file comprising a mapping between one or more emoticon indices and one or more word indices. At least one index can be associated with one or more emoticons/images, and similarly at least one index can be associated with one or more words of a standard dictionary. The mapping information stored in a file can be obtained while acquiring/downloading the emoticons from the application store.
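The specification leaves the on-disk format of this association file open. Assuming, for illustration only, a simple line-oriented layout of the form "wordIndex:emoticonIndex,emoticonIndex,...", a parser sketch could look like this:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Parser for an assumed association-file layout. The text only says the
    // mapping lives in a file shipped with the emoticon set; the line format
    // used here is invented for this sketch.
    public class AssociationFileParser {
        // e.g. the line "17:204,311" maps word index 17 to emoticon indices 204 and 311
        public static Map<Integer, List<Integer>> parse(List<String> lines) {
            Map<Integer, List<Integer>> mapping = new HashMap<>();
            for (String line : lines) {
                String[] parts = line.split(":");
                int wordIndex = Integer.parseInt(parts[0].trim());
                List<Integer> emoticonIndices = new ArrayList<>();
                for (String index : parts[1].split(",")) {
                    emoticonIndices.add(Integer.parseInt(index.trim()));
                }
                mapping.put(wordIndex, emoticonIndices);
            }
            return mapping;
        }
    }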
[0045] Further, when the user types a word whose index is present in the emoticon-word index association, the system (or the component) may identify the new emoticon indices that match the word/phrase index first, followed by the old ones.
[0046] In an embodiment of the present invention, 'new emoticons' may be converted to 'old emoticons' when a further set of emoticons or one or more emoticons are acquired/downloaded from the application store. For example, referring to Figure 6, a 'new' emoticon, for example a krish-3 emoticon (601), may be acquired/downloaded from the application store and may be displayed first, and thereafter the 'old emoticons' (602), which were already integrated with the application or acquired/downloaded prior to the downloading/acquiring of the new emoticon, i.e., the krish-3 emoticon (601), may be displayed when the user inputs the word 'lol'. Additionally or alternatively, a priority order of display of the emoticons may be based on other metrics. In various embodiments, the metrics may include a combined weight of context, error correction, type of keyboard, third party chat application type, frequency and user preferences, sentiments, semantic/ontological information, and the like.
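As a hedged sketch of this priority rule, a ranking routine might first separate newly acquired emoticons from old ones and then fall back to the combined weight metric; the Candidate record and its fields are hypothetical names introduced for this example.

    import java.util.Comparator;
    import java.util.List;

    // Illustrative ranking only; candidate shape and tie-breaking are assumed.
    public class EmoticonRanker {
        public record Candidate(String emoticon, boolean newlyAcquired, double combinedWeight) {}

        // Newly acquired emoticons rank ahead of old ones; ties fall back to
        // the combined weight described in the text (higher weight first).
        public static void rank(List<Candidate> candidates) {
            candidates.sort(Comparator
                    .comparing((Candidate c) -> !c.newlyAcquired())
                    .thenComparing(Comparator.comparingDouble(Candidate::combinedWeight)
                            .reversed()));
        }
    }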
[0047] In another embodiment, when new words/phrases are identified by the application, emoticons may be created to match these new words/phrases. These emoticons may be uploaded to the store of the respective application. These emoticon-word associations are originally not present within the component's emoticon add-on. Once the user acquires/downloads the new emoticons, the application pushes the new emoticon-word index association information to the component. The component then dynamically updates its emoticon-word association map and the emoticon indices. The next time the user types the new trend/word/phrase, the component is able to predict the newly added emoticon set based on the emoticon indices and related metrics.
[0048] In various embodiments, the system (embodied, for example, in a component) may detect the right image/emoticon indices by considering a combined weight of context, error correction, type of keyboard, third party chat application type, frequency and user preferences, sentiments, semantic/ontological information, and the like.
[0049] In various embodiments, for different languages there may exist an add-on which may consist of word indices and also a link file that may contain information on emoticon index and word/phrase/context index associations. The said associations are aligned based on a combined weight metric of the word/phrase-to-emoticon association, the context of the word-to-emoticon association, the frequency of using the emoticon, and the like.
[0050] In various example scenarios, the system (for example, the component) can be integrated by a third party chat application (for example, Whatsapp™, Hike™) in order to leverage the emoticon prediction capability of the component. The third party application, in general, consists of Unicode emoticons and custom emoticons. The custom emoticons are the ones which do not have Unicode code points. So, if the application integrates the component, the application may provide one or more emoticons as the user provides an input. For example, if a user types 'Hi' in said application, the said application can provide an emoticon relevant to the word 'Hi' to the user using the component's emoticon prediction Application Programming Interface (API).
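The paragraph above names an emoticon prediction API but does not publish its signature, so the interface below is only an assumed shape for how a chat application might consume the component.

    import java.util.List;

    // Assumed shape of the component's emoticon prediction API; the text names
    // the API but not its signature, so this interface is illustrative only.
    public interface EmoticonPredictionApi {
        // Ranked emoticon indices (Unicode or custom) for the text typed so far.
        List<Integer> predictEmoticons(String currentText);

        // Feedback: the application reports the emoticon the user actually chose
        // so the component can update its usage statistics.
        void notifySelected(int emoticonIndex);
    }

A chat application would call predictEmoticons on each keystroke (for example, on the input 'Hi') and report the eventual pick through notifySelected, closing the learning loop described earlier.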
[0051] Figure 7 illustrates a screen shot of an exemplary third party application to suggest multiple images for a single word in accordance with an embodiment of the invention. Considering an example, if a user inputs the term 'fruit' (701) in the editor provided in the application, then 'fruit' is mapped to the emoticons (702) apple, banana and orange. The weights given to the emoticons may be 4, 6 and 3 respectively; hence banana is given as the first prediction. In this example, a method for predicting the emoticons, i.e., apple, banana and orange, may include the following steps: the user first enters the character 'F'; the editor receives 'F'; and the application sends the input 'F' to the component. The component uses the obtained input 'F' to obtain the matching words for 'F' from the language dictionary based on the said combined weight metric. Each identified word/phrase/context index from the text entered by the user in the application is used to look for at least one matching emoticon index and provide a list of emoticons to the user. Some word/phrase/context indices are associated with one or more emoticon indices and some are not; the component detects only the emoticon indices which are associated with the word/context/phrase indices and omits the rest. Then, once the emoticon indices are obtained, they are passed onto the application to render in the designated space using the rendering APIs. Also, if the user repeatedly selects the 'banana' emoticon for a particular context, the combined weight for the 'banana' emoticon index is identified and increased further, so that when the user enters a similar context again, the same emoticon index is identified by the component and the emoticon 'banana' is selected by the application.
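The 'fruit' example can be worked through in a few lines. The weights 4, 6 and 3 come from the paragraph above; the +1 reinforcement step on selection is an assumed increment, since the text does not quantify how much a selection raises the combined weight.

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Worked sketch of the 'fruit' example: banana (weight 6) is offered first.
    public class WeightedPrediction {
        private final Map<String, Integer> weights = new LinkedHashMap<>(
                Map.of("apple", 4, "banana", 6, "orange", 3));

        public List<String> predictForFruit() {
            return weights.entrySet().stream()
                    .sorted((a, b) -> b.getValue() - a.getValue())  // heaviest first
                    .map(Map.Entry::getKey)
                    .toList();                     // [banana, apple, orange]
        }

        public void onSelected(String emoticon) {
            weights.merge(emoticon, 1, Integer::sum);  // repeated picks raise the weight
        }
    }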
[0052] In addition, in one embodiment of the present invention, the user can be given an option to provide a custom word/phrase tag for one or more emoticons, so that the next time the user types the custom word, the user is supplied with at least one emoticon for which the word/phrase is tagged.
[0053] Figure 8 illustrates a screen shot of an exemplary third party application to suggest emoticons for the error-corrected word in accordance with an embodiment of the invention. The present invention detects an emoticon index even if there is an error in the user's typing, wherein the error may be a misspelling, a typographical error, etc. Considering an example, if the user types "xool" (802) instead of "cool" (803), the system will still be able to suggest one or more emoticons (801) for the word "cool" (803), as shown in Figure 8.
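One plausible way to realize this correction is by edit distance; the patent does not name the algorithm, so Levenshtein distance and the class name NearestWordFinder are assumptions for this sketch.

    import java.util.List;

    // Sketch of error correction by edit distance; the algorithm is an
    // assumed choice, not prescribed by the specification.
    public class NearestWordFinder {
        // nearest("xool", List.of("cool", "tool", "fruit")) returns "cool"
        // (the first word found at the minimum edit distance).
        public static String nearest(String typed, List<String> dictionary) {
            String best = typed;
            int bestDistance = Integer.MAX_VALUE;
            for (String word : dictionary) {
                int d = levenshtein(typed.toLowerCase(), word.toLowerCase());
                if (d < bestDistance) {
                    bestDistance = d;
                    best = word;
                }
            }
            return best;
        }

        private static int levenshtein(String a, String b) {
            int[][] dp = new int[a.length() + 1][b.length() + 1];
            for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
            for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
            for (int i = 1; i <= a.length(); i++) {
                for (int j = 1; j <= b.length(); j++) {
                    int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    dp[i][j] = Math.min(Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                            dp[i - 1][j - 1] + cost);
                }
            }
            return dp[a.length()][b.length()];
        }
    }

Once the nearest word is found, the normal emoticon lookup runs on the corrected word, which is how "xool" can still yield the emoticons tagged to "cool".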
[0054] Further, in one embodiment, the present invention suggests an emoticon with an associated prefix/suffix, wherein the prefix/suffix may be "ed", "ing", etc. Considering an example, if a user intends to type "I'm shopping" and the word "shop" is tagged to an emoticon but "shopping" is not tagged to any emoticon, the system may analyze and look for partial words which have a matching emoticon and prepare a combination of the emoticon and the suffix 'ing' that best suits the typed-in string. Hence, the suggestion would be emoticon+ing.
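A minimal sketch of this suffix handling, under the assumptions that tagged words are kept in a simple map and that consonant doubling (shop, shopping) must be undone, might read:

    import java.util.List;
    import java.util.Map;

    // Illustrative suffix combination; the suffix list, the un-doubling rule
    // and all names are assumptions for this sketch.
    public class SuffixCombiner {
        private static final List<String> SUFFIXES = List.of("ing", "ed", "s");

        // suggest("shopping", Map.of("shop", "[shop-emoticon]"))
        //     -> "[shop-emoticon]ing"
        public static String suggest(String typed, Map<String, String> tagged) {
            if (tagged.containsKey(typed)) return tagged.get(typed);
            for (String suffix : SUFFIXES) {
                if (!typed.endsWith(suffix)) continue;
                String stem = typed.substring(0, typed.length() - suffix.length());
                // undo consonant doubling, e.g. "shopping" -> "shopp" -> "shop"
                if (!tagged.containsKey(stem) && stem.length() > 1
                        && stem.charAt(stem.length() - 1) == stem.charAt(stem.length() - 2)) {
                    stem = stem.substring(0, stem.length() - 1);
                }
                if (tagged.containsKey(stem)) return tagged.get(stem) + suffix;
            }
            return null;  // no partial match; fall back to plain word prediction
        }
    }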
[0055] Figure 9(a) and Figure 9(b) illustrate tables showing multiple emoticons based on a one-word input. In another embodiment of the present invention, the input word is truncated and the indices of the truncated parts are mapped with the indices of emoticons, wherein the mapping may be done in two ways. Firstly, direct mapping of the index of a word part is done with the index of an emoticon. Secondly, a phonetic sound of the truncated part is created, the sound of the truncated part is mapped with the sound of an emoticon, and finally the predicted emoticons are joined and suggested to the user, wherein the prediction of emoticons is obtained using the above-mentioned process.
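As an illustrative sketch of the direct (non-phonetic) variant, a greedy longest-part decomposition could split the input into mapped parts and substitute each part's emoticon; the greedy strategy and all names are assumptions, and the phonetic branch is not shown.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Sketch of the direct truncation mapping: greedily match the longest
    // known part at each position and substitute its emoticon.
    public class CompoundMapper {
        // mapParts("Facebook", Map.of("face", "[face]", "book", "[book]"))
        //     -> ["[face]", "[book]"]
        public static List<String> mapParts(String input, Map<String, String> partToEmoticon) {
            List<String> result = new ArrayList<>();
            int pos = 0;
            while (pos < input.length()) {
                String match = null;
                for (int end = input.length(); end > pos; end--) {  // longest part first
                    String part = input.substring(pos, end).toLowerCase();
                    if (partToEmoticon.containsKey(part)) {
                        match = part;
                        break;
                    }
                }
                if (match == null) return List.of();  // an unmapped part: no prediction
                result.add(partToEmoticon.get(match));
                pos += match.length();
            }
            return result;
        }
    }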
[0056] Considering the table shown in Figure 9(a), for the user input "Facebook" (901), the predicted emoticon (902) is as shown in the table. Similarly, for the user input "5star", the predicted emoticon is shown in the table.
[0057] Further, this embodiment is not limited to predicting a union of emoticons; the predicted set of emoticons may also contain one or more characters placed along with the emoticons, as shown in Figure 9(b). Considering the table shown in Figure 9(b), for the input words "Behrain", "iceland", "Iran", "Portugal", "Singapore", etc., the corresponding predicted set of emoticons having one or more characters placed along with the emoticons is displayed to the user.
[0058] Figure 10 illustrates the block diagram for dynamic learning and prediction of emoticons in accordance with an embodiment of the invention. In another embodiment of the present invention, a system is provided for dynamic learning and prediction of emoticons. The system comprises a component (911) integrated with an application module (920), wherein the component (911) dynamically updates the usage statistics of a user, and wherein the component may be a plug-in, an Android application package (APK), a keyboard application, or source code. The application module (920) further comprises an input module (912) to receive a user input in an editor of an application; a memory unit (913), wherein the input is identified by filtering the input from the memory unit (913) based on the initial character of the input; a prioritizing module (915) to prioritize the filtered input based on the prioritization parameters, wherein the prioritization parameters include the user input associated with the top-ranked word in the memory unit (913); and an emoticon module (917) to predict one or more emoticon indices for the user input using the component. The emoticon module identifies one or more emoticon indices associated with the user input based on a contextual relevance and a weighted metric using the component, and sends the information to the application. Then, a displaying module (916) displays the predicted emoticons to the user.
[0059] The system of the present invention is further configured to acquire/download the user desired images/emoticons from an application store and integrates the acquired images/emoticons with the component.
[0060] It is to be understood, however, that even though several characteristics and advantages of the present invention have been set forth in the foregoing description, together with details of the structure and function of the invention, the disclosure is illustrative only. Changes may be made in the details, especially in matters of shape, size, and arrangement of parts within the principles of the invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
CLAIMS:
1) A method for dynamic learning and prediction of emoticons, the method comprising the steps of:
a) receiving a user input in an editor of an application, wherein the user input is a word or phrase or text;
b) identifying the input by filtering the input from a memory based on initial character of the input;
c) prioritizing the filtered input based on the prioritization parameters, wherein the prioritization parameters include the input associated with the top ranked word in the memory;
d) identifying a word index associated with top ranked word;
e) identifying one or more emoticon indices associated with the user input based on a contextual relevance and a weighted metric using a component, and sending the information to the application;
f) displaying the predicted emoticons to the user in the application, wherein the predicted emoticons are the emoticons that are mapped with the relevance and context of the input; and
g) sending the emoticons selected by the user to the component to dynamically update the usage statistics of the user.
2) The method as claimed in claim 1, wherein said component may be a plug-in or Android application package (APK) or keyboard application or source code.
3) The method as claimed in claim 1, wherein said memory is defined by one or more dictionaries each holding quantitative information relating to the input.
4) The method as claimed in claim 3, wherein said quantitative information comprises probability information relating to the words/phrases historical usage of the user.
5) The method as claimed in claim 1, wherein said usage statistics includes the combined weighted metrics of the input.
6) The method as claimed in claim 1, wherein said component is configured to predict one or more emoticons while a text or a character is entered in the editor of the application, wherein the component is integrated with a third party application that supports emoticons.
7) The method as claimed in claim 1, wherein said input is any text, including but not limited to a word, a phrase, etc.
8) The method as claimed in claim 1, wherein the method further predicts one or more emoticons based on one word input, wherein the input word is truncated and the index of the truncated input is mapped with index of the emoticons.
9) The method as claimed in claim 1, wherein said user input is not limited to manually entered text but includes one or more of a user gesture, swiping on a keyboard, tapping on a keyboard, an audio input, and the like.
10) The method as claimed in claim 1, wherein the method further identifies an error in the user's input and provides the nearest possible word for the received error, wherein the error in the input includes a misspelling or typographical error.
11) The method as claimed in claim 10, wherein the method further provides the predicted emoticon for the nearest possible word.
12) A method for dynamic learning and prediction of emoticons, the method comprising the steps of:
a) acquiring user desired images/emoticons from an application store and receiving mapping information along with the acquired images/emoticons, wherein the mapping information includes word index association information obtained along with the acquired emoticons;
b) sending the mapping information to the component, wherein the component dynamically updates one or more word indices with one or more acquired emoticon indices;
c) receiving a user input in an editor of an application, wherein the user input is a word or phrase or text;
d) identifying one or more acquired emoticon indices associated with the user input based on a contextual relevance and a weighted metric using a component, and sending the information to the application;
e) displaying the predicted emoticons to the user in the application, wherein the predicted emoticons are the acquired emoticons that are mapped with the relevance and context of the input; and
f) sending the emoticons selected by the user to the component to dynamically update the usage statistics of the user.
13) The method as claimed in claim 12, wherein the step of displaying displays the acquired emoticons to the user in the application, wherein the acquired emoticon is displayed first followed by an existing emoticon integrated with the application.
14) The method as claimed in claim 12, wherein said component is integrated by a third party chat application in order to leverage the emoticon prediction capability of the component, wherein the third party application comprises Unicode emoticons and custom emoticons.
15) A system for dynamic learning and prediction of emoticons, wherein the system comprises:
a) a component (911) integrated with an application module, wherein the component dynamically updates the usage statistics of a user;
b) the application module further comprising:
an input module (912) to receive a user input in an editor of an application;
a memory unit (913), wherein the input is identified by filtering the input from the memory unit (913) based on the initial character of the input;
a prioritizing module (915) to prioritize the filtered input based on the prioritization parameters, wherein the prioritization parameters include the input associated with the top ranked word in the memory unit (913);
an emoticon module (917) to predict one or more emoticon indices for the user input using the component (911); and
a displaying module (916) to display the predicted emoticons.
16) The system as claimed in claim 15, wherein said component may be a Plug-in or Android application package (APK) or keyboard application or source code.
17) The system as claimed in claim 15, wherein the system further acquires the user desired images/emoticons from an application store (914) and integrates the acquired images/emoticons with the component (911).

Documents

Application Documents

# Name Date
1 482-CHE-2014 FORM -5 03-02-2014.pdf 2014-02-03
2 482-CHE-2014 FORM -2 03-02-2014.pdf 2014-02-03
3 482-CHE-2014 FORM -1 03-02-2014.pdf 2014-02-03
4 482-CHE-2014 FORM - 3 03-02-2014.pdf 2014-02-03
5 482-CHE-2014 DRAWINGS 03-02-2014.pdf 2014-02-03
6 482-CHE-2014 DESCRIPTION (PROVISIONAL) 03-02-2014.pdf 2014-02-03
7 482-CHE-2014 CORRESPONDENCE OTHERS 03-02-2014.pdf 2014-02-03
8 482-CHE-2014 POWER OF ATTORNEY 11-03-2014.pdf 2014-03-11
9 482-CHE-2014 FORM-1 11-03-2014.pdf 2014-03-11
10 482-CHE-2014 CORRESPONDENCE OTHERS 11-03-2014.pdf 2014-03-11
11 Complete Specification-(2015-02-02).pdf 2015-02-02
12 Abstract Image.jpg 2015-03-12
13 Other Document [03-12-2015(online)].pdf 2015-12-03
14 Form 13 [03-12-2015(online)].pdf 2015-12-03
15 Description(Complete) [03-12-2015(online)].pdf 2015-12-03
16 482-CHE-2014-Power of Attorney-151215.pdf 2016-06-09
17 482-CHE-2014-Correspondence-PA-151215.pdf 2016-06-09
18 482-CHE-2014-FORM 18 [02-02-2018(online)].pdf 2018-02-02
19 482-CHE-2014-FER.pdf 2021-10-17

Search Strategy

1 2020-09-0114-25-57E_01-09-2020.pdf