Abstract: METHOD OF PERFORMING ACTIONS FROM AN ON-GOING CONVERSATION WINDOW AND A USER INTERFACE THEREOF ABSTRACT Disclosed herein are a method and an action generator for performing one or more actions from an on-going conversation window. In an embodiment, a context of the on-going conversation is determined by analysing the on-going conversation using one or more predetermined analysis techniques. Further, the context of the on-going conversation and a context of a previous conversation are analysed using predetermined Artificial Intelligence (AI) techniques for generating one or more actionable items to be suggested to the user. The one or more actionable items relate to the one or more actions. Subsequently, the actionable items are suggested to the user during the on-going conversation. Finally, an overlay screen, corresponding to the one or more actionable items selected by the user, is generated over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window, thereby enhancing the overall user experience with chat applications. FIG. 1
WE CLAIM:
1. A method of performing one or more actions from an on-going conversation window, the method comprising:
determining, by an action generator, a context of the on-going conversation by analysing the on-going conversation using one or more predetermined analysis techniques;
analysing, by the action generator, the context of the on-going conversation and a context of a previous conversation using predetermined Artificial Intelligence (AI) techniques for generating one or more actionable items to be suggested to a user, wherein the one or more actionable items relate to the one or more actions;
suggesting, by the action generator, the one or more actionable items to the user during the on-going conversation; and
generating, by the action generator, an overlay screen, corresponding to the one or more actionable items selected by the user, over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window.
2. The method as claimed in claim 1, wherein the one or more predetermined analysis techniques comprises one or more Natural Language Processing (NLP) based analysis techniques including at least one of automatic text summarization and sentiment analysis.
3. The method as claimed in claim 1, wherein the one or more actionable items are suggested to the user in a selected one of one or more predetermined actionable forms comprising at least one of Uniform Resource Locator (URL) links, customized texts, pictures and videos.
4. The method as claimed in claim 1, wherein analysing the context of the on-going conversation comprises interfacing the action generator with one or more predetermined browsing tools for dynamically identifying the one or more actionable items relevant for the context of the on-going conversation and the context of the previous conversation.
5. The method as claimed in claim 1, wherein the predetermined AI techniques used for generating the one or more actionable items comprise at least one of probabilistic classifiers, neural networks, Artificial Neural Networks (ANN), auto text summarization and contextual text identification and classification.
6. The method as claimed in claim 1, wherein the overlay screen comprises a Picture-in-Picture (PIP) display.
7. An action generator for performing one or more actions from an on-going conversation window, the action generator comprising:
a processor; and
a memory, communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which on execution, cause the processor to:
determine a context of the on-going conversation by analysing the on-going conversation using one or more predetermined analysis techniques;
analyse the context of the on-going conversation and a context of a previous conversation using predetermined Artificial Intelligence (AI) techniques for generating one or more actionable items to be suggested to a user, wherein the one or more actionable items relate to the one or more actions;
suggest the one or more actionable items to the user during the on-going conversation; and
generate an overlay screen, corresponding to the one or more actionable items selected by the user, over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window.
8. The action generator as claimed in claim 7, wherein the one or more predetermined analysis techniques comprise one or more Natural Language Processing (NLP) based analysis techniques including at least one of automatic text summarization and sentiment analysis.
9. The action generator as claimed in claim 7, wherein the processor suggests the one or more actionable items to the user in a selected one of one or more predetermined actionable forms comprising at least one of Uniform Resource Locator (URL) links, customized texts, pictures and videos.
10. The action generator as claimed in claim 7, wherein the processor interfaces the action generator with one or more predetermined browsing tools for dynamically identifying the one or more actionable items relevant for the context of the on-going conversation and the context of the previous conversation.
11. The action generator as claimed in claim 7, wherein the predetermined AI techniques used for generating the one or more actionable items comprise at least one of probabilistic classifiers, neural networks, Artificial Neural Networks (ANN), auto text summarization and contextual text identification and classification.
12. The action generator as claimed in claim 7, wherein the overlay screen comprises a Picture-in-Picture (PIP) display.
Dated this 1st day of April, 2021
SANDEEP N P
IN/PA-2851
OF K & S PARTNERS
AGENT FOR THE APPLICANT
FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10; Rule 13]
TITLE: “METHOD OF PERFORMING ACTIONS FROM AN ON-GOING CONVERSATION WINDOW AND A USER INTERFACE THEREOF”
Name and Address of the Applicant:
SYMMETRICS TECH MATRIX PVT LTD., No 5c-501, 2nd Block, HRBR Layout, Banaswadi, Bangalore – 560045
Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The present subject matter is related, in general, to interactive User Interfaces (UIs) and, more particularly but not exclusively, to a method and a UI that facilitate performing one or more actions from an on-going conversation window.
BACKGROUND
Presently, numerous chat applications are used by millions of users across the world. Being among the most useful utilities, chat applications undergo continuous improvement and are updated with new features very frequently. Generally, most of the features available within a chat application are uniquely built for that application.
There may be scenarios in which users must leave an on-going chat window to search for required information or to perform a specific action. For example, if a user wants to tell the other users in the chat about a movie, the user has to first leave the on-going chat window, open a browsing tool, browse for the details of the movie and then come back to the chat window to include those details in the conversation. If the user then needs to perform further actions (such as listening to a song from the same movie or booking tickets), the user has to leave the chat window again and open one or more relevant applications to perform the required action.
However, this causes inconvenience to the users since the users have to switch between multiple application windows during the on-going chat. Therefore, it would be advantageous to have an interface that allows users to perform various required actions from the same chat window.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY
Disclosed herein is a method for performing one or more actions from an on-going conversation window. The method comprises determining, by an action generator, a context of the on-going conversation by analyzing the on-going conversation using one or more predetermined analysis techniques. Upon determining the context, the method comprises analyzing the context of the on-going conversation and a context of a previous conversation using predetermined Artificial Intelligence (AI) techniques for generating one or more actionable items to be suggested to the user. The one or more actionable items relate to the one or more actions. Subsequently, the method comprises suggesting the one or more actionable items to the user during the on-going conversation. Finally, the method comprises generating an overlay screen, corresponding to the one or more actionable items selected by the user, over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window.
Further, the present disclosure relates to an action generator for performing one or more actions from an on-going conversation window. The action generator comprises a processor and a memory. The memory is communicatively coupled to the processor and stores processor-executable instructions, which on execution, cause the processor to determine a context of the on-going conversation by analyzing the on-going conversation using one or more predetermined analysis techniques. Further, the instructions cause the processor to analyze the context of the on-going conversation and a context of a previous conversation using predetermined Artificial Intelligence (AI) techniques for generating one or more actionable items to be suggested to the user. The one or more actionable items relate to the one or more actions. Subsequently, the instructions cause the processor to suggest the one or more actionable items to the user during the on-going conversation. Finally, the instructions cause the processor to generate an overlay screen, corresponding to the one or more actionable items selected by the user, over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
FIG. 1 shows an exemplary flowchart of a method of performing one or more actions from an on-going conversation window, in accordance with some embodiments of the present disclosure.
FIG. 2 shows a detailed block diagram of an action generator in accordance with some embodiments of the present disclosure.
FIGS. 3A – 3I illustrate exemplary embodiments in accordance with various aspects of the present disclosure.
FIG. 4 shows a flowchart illustrating a method of performing one or more actions from an on-going conversation window in accordance with some embodiments of the present disclosure.
FIG. 5 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in a computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the specific forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprises”, “comprising”, “includes”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
The present disclosure relates to a method of performing one or more actions from an on-going conversation window and a User Interface (UI) thereof. In some embodiments, the present disclosure provides an intelligent search, auto text summarization/contextual text identification based auto-suggest and actioning facility within a chat window of any existing chat applications, such that the end user can stay within the chat window itself, even while performing other actions and looking for information from multiple external search engines. Thus, the present disclosure aims to enhance the overall user experience associated with the chat applications.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
FIG. 1 shows an exemplary flowchart of a method of performing one or more actions from an on-going conversation window, in accordance with some embodiments of the present disclosure.
In an embodiment, the method and the action generator of the proposed disclosure may be used as a plugin or an add-on tool to any of the existing instant messaging/chat applications. In an embodiment, the action generator may be provided as an optional feature on the chat applications. The user may activate or deactivate the action generator using a simple user interface such as, without limitation, a toggle button, integrated with a digital keypad of the chat application.
In an embodiment, suppose the user has launched and/or initialized the action generator on a conversation window 101 while the user is engaged in a conversation. After initialization, the action generator may sense the on-going conversation by tracking the keywords and key phrases being exchanged during the on-going conversation, as shown in step 103. Further, the action generator may understand the context of the conversation by analysing the keywords and key phrases sensed in the previous step using various predetermined Artificial Intelligence (AI) techniques 107, as shown in step 105. In an embodiment, the AI techniques may be installed on the user device on which the chat application is installed.
In an embodiment, once the context of the conversation has been determined, the action generator may automatically perform an Internet search for finding information and content related to the context of the on-going conversation, as shown in step 109. The auto-search functionality may be performed with the help of one or more predetermined browser tools/applications 111 installed on the user device. In an embodiment, a default browser tool 111 to be used for conducting the search may be selected by the user. Further, the information and content may be listed in the sequence determined by the prioritization policies of the selected browser tool 111.
In an embodiment, after performing the auto-search, the action generator may dynamically identify and suggest the most relevant information to the user, as shown in step 113. Here, the most relevant information may be identified based on the context of the on-going conversation. In an embodiment, the relevant information may be suggested by way of customized fonts, Uniform Resource Locators (URLs), images and the like displayed on the on-going conversation window. After the relevant information has been provided to the user, the user may select one of the provided options while continuing the conversation, as shown in step 115. Subsequently, the action generator may suggest one or more further actionable items to the user, as shown in step 117. Finally, at step 119, the one or more actionable items selected by the user may be performed using an overlay screen generated on the conversation window, thereby facilitating the user to perform one or more required actions from the same conversation window.
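The flow described above (steps 103 – 119) may be sketched, purely for illustration, as the following Python outline. The function names, the stop-word list and the simple keyword-matching logic are assumptions made for the example and do not form part of the claimed subject matter; an actual implementation would use the predetermined AI techniques 107 and browser tools 111.

```python
# Illustrative sketch of the FIG. 1 flow: sense keywords (step 103),
# derive a context (step 105), and look up actionable items to suggest
# (steps 109-117). All names and tables here are hypothetical.

def sense_keywords(messages):
    """Step 103: collect candidate keywords from the on-going conversation."""
    stopwords = {"a", "the", "for", "you", "can", "me", "i", "is", "it", "good"}
    words = " ".join(messages).lower().replace("?", "").replace(".", "").split()
    return [w for w in words if w not in stopwords]

def determine_context(keywords, context_map):
    """Step 105: map the sensed keywords onto a known conversation context."""
    for context, triggers in context_map.items():
        if triggers & set(keywords):
            return context
    return None

def suggest_actionable_items(context, catalogue):
    """Steps 109-117: fetch items (e.g. suggestions, URLs) for the context."""
    return catalogue.get(context, [])

# Hypothetical context map and content catalogue standing in for the
# Internet search performed via the browser tools 111.
context_map = {"movies": {"movie", "weekend", "tickets"}}
catalogue = {"movies": ["Movie xyz is popular this weekend",
                        "https://example.com/book-tickets"]}

keywords = sense_keywords(["Can you suggest a good movie for the weekend?"])
context = determine_context(keywords, context_map)
items = suggest_actionable_items(context, catalogue)
```

In this sketch the conversation about a weekend movie resolves to the "movies" context, for which the catalogue yields both a textual suggestion and a booking URL, mirroring the two kinds of actionable items described above.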
In an embodiment, the proposed action generator provides an intelligent search, auto-suggest and actioning facility within a conversation window of any existing chat application such as, without limiting to, WhatsApp®, Facebook Messenger® and the like. That is, the proposed action generator allows the user to stay within the chat window itself while looking for information from multiple external sources/browser tools.
In an embodiment, the user may enable the action generator on the fly during a casual conversation, and the action generator immediately senses the context of the conversation and automatically suggests one or more actionable items for the user. The one or more actionable items allow the user to look up and select related content from the Internet. In an embodiment, the one or more actionable items provided to the user may be customized with various font styles, such as ‘bold’, ‘italics’ or ‘underlined’, or presented as a URL, which is selectable by a touch or a click. Additionally, the one or more actionable items may be provided as links, images, videos, buying options and the like.
In an embodiment, when the user clicks on the actionable item, the user may be taken to a full screen overlay display, wherein the user will be given multiple choices from different search engines and/or browsing windows. In addition to the browsing, the user may also perform actions like booking tickets, buying books etc., within the same conversation window instead of switching to other applications. In an embodiment, the browsing windows to be associated with the action generator may be selected by the user, as per the personal preferences of the user.
In an embodiment, the number of actionable items to be suggested to the user may be decided based on various aspects like content and performance of the application or the user device. Alternatively, the number of actions to be displayed may be set by the user according to preference of the user.
In an embodiment, the action generator may look for the context of the conversation not only from the current and/or on-going chat, but also from the previous chats of the user. Further, the user may be given options to enable/disable the action generator feature using a button such as “suggest” and/or “on the go” option provided on the digital keypad of the chat application. Alternatively, the option to enable/disable the action generator may be provided under ‘application settings’ and/or ‘device settings’ of the user device.
FIG. 2 shows a detailed block diagram of an action generator 200 in accordance with some embodiments of the present disclosure.
In some implementations, the action generator 200 may include an I/O interface 201, a processor 203 and a memory 205. The I/O interface 201 may be communicatively interfaced with one or more Input/Output devices of a user device. In an embodiment, the I/O interface may be configured for receiving one or more user inputs such as, conversation texts, queries etc., entered by the user. The memory 205 may be communicatively coupled to the processor 203 and may store data 207 and one or more modules 209. The processor 203 may be configured to perform one or more functions of the action generator 200 for performing one or more actions from the on-going conversation window, using the data 207 and the one or more modules 209.
In an embodiment, the data 207 stored in the memory 205 may include, without limitation, context information 211, actionable items 213 and other data 215. In some implementations, the data 207 may be stored within the memory 205 in the form of various data structures. Additionally, the data 207 may be organized using data models, such as relational or hierarchical data models. The other data 215 may include various temporary data and files generated by the one or more modules 209 while performing various functions of the action generator 200. As an example, the other data 215 may include, without limitation, temporarily stored conversations of the user, one or more keywords derived from the conversations, user preferences and the like.
In an embodiment, the context information 211 indicates the context of the on-going conversation. The context information 211 may be generated by analyzing the on-going conversation using AI based techniques such as automatic text summarization, Natural Language Processing (NLP) and the like. The context information 211 may be used for selecting one or more actionable items 213 to be suggested to the user during the on-going conversation. As an example, if the context of the on-going conversation indicates that the user is having a conversation about travelling to a place ABC, then the actionable items 213 generated may include, without limiting to, a URL to access a brief overview of the place, an option to book flight tickets to the place and one or more photographs of the place. Once the above actionable items 213 are identified, they may be suggested to the user on the conversation window. The user may then select one or more of the required actionable items 213 during the conversation. As an example, the user may select and add one or more photographs of the place to the on-going conversation. Further, the user may even book tickets to the place by simply clicking on the booking URL suggested in the conversation window, which opens a booking site as an overlay screen over the on-going conversation window. Likewise, the user may be allowed to perform multiple actions from the same conversation window.
In an embodiment, the context information 211 may also include context of the historical conversations of the user. The context of the historical conversations may be used to determine user preferences and behavioral patterns of the user. Subsequently, this context information 211 may be used for identifying most relevant actionable items 213 for suggesting to the user.
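The use of historical context described above may be sketched, for illustration only, as a simple ranking step in which candidate actionable items are ordered by how well they match preference keywords derived from the user's previous conversations. The scoring scheme, function name and keyword set below are assumptions for the example and are not prescribed by the specification.

```python
# Hypothetical sketch: rank candidate actionable items 213 using the context
# of historical conversations, here reduced to a set of preference keywords.

def rank_by_history(candidates, history_keywords):
    """Score each candidate by how many historical keywords it mentions,
    and return the candidates ordered from most to least relevant."""
    def score(item):
        return sum(1 for kw in history_keywords if kw in item.lower())
    return sorted(candidates, key=score, reverse=True)

# Assumed preferences derived from previous chats of the user.
history = {"flight", "photos"}

candidates = ["Overview of place ABC",
              "Book flight tickets to ABC",
              "Photos of ABC"]

ranked = rank_by_history(candidates, history)
```

Under these assumed preferences, the flight-booking and photo items are ranked ahead of the generic overview, which is how the historical context can surface the most relevant items first.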
In an embodiment, the actionable items 213 comprise the one or more items being suggested to the user. As an example, the one or more actionable items 213 may be provided in various forms including, without limiting to, text information relevant to the context of the on-going conversation, URLs related to one or more further actions, and keywords and key phrases in customized font colors and sizes. In an embodiment, the one or more actionable items 213 facilitate the user in performing the required actions during the on-going conversation.
In an embodiment, the data 207 may be processed by the one or more modules 209 of the action generator 200. In some implementations, the one or more modules 209 may be distinct hardware entities, which may be communicatively coupled to the processor 203 for performing one or more functions of the action generator 200. In an implementation, the one or more modules 209 may include, without limiting to, a context determination module 217, an Artificial Intelligence (AI) analysis module 219, a suggestion module 221, a screen generation module 223 and other modules 225.
As used herein, the term module may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a hardware processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. In an implementation, each of the one or more modules 209 may be configured as stand-alone hardware computing units. In an embodiment, the other modules 225 may be used to perform various miscellaneous functionalities of the action generator 200. It will be appreciated that such one or more modules 209 may be represented as a single module or a combination of different modules.
In an embodiment, the context determination module 217 may be configured for determining a context of the on-going conversation. The context of the on-going conversation may be determined by analyzing the conversation and identifying one or more keywords from the conversation. As an example, suppose the user sends a text message saying - “I like bananas, but I do not know if it is good for my diet”. Here, the context determination module 217 may analyze the message and identify keywords such as “banana”, “good” and “diet”. Further, based on analysis of the identified keywords, the context determination module 217 may understand that the user may be conversing about ‘banana’ fruit and the ‘diet’ plans.
In an embodiment, the AI analysis module 219 may be configured for determining one or more actionable items 213 to be suggested to the user based on the context of the on-going conversation. From the above example, since the context of the conversation is determined to be related to the ‘banana’ fruit and the ‘diet’ plan, the AI analysis module 219 may generate one or more actionable items 213 corresponding to the above context. As an example, the AI analysis module 219 may generate a URL, which takes the user to a webpage that describes the nutrient facts of ‘banana’ and its dietary advantages. Alternatively, the AI analysis module 219 may provide details/location information of nearby stores where the user can buy the ‘banana’ fruit. That is, the AI analysis module 219 identifies one or more actionable items 213 that can be suggested to the user based on the context of the on-going conversation.
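For the ‘banana’/‘diet’ example above, the behaviour of the AI analysis module 219 may be sketched as a mapping from a determined context to typed actionable items. The lookup table, URLs and helper name below are purely illustrative assumptions; a real implementation would generate the items with the AI techniques named in the claims.

```python
# Illustrative sketch of actionable-item generation for a determined context.
# Each item is a (form, payload) pair, mirroring the forms described above
# (a URL, a customized text, etc.). The table and URL are hypothetical.

def generate_actionable_items(context_keywords):
    """Return (form, payload) pairs relevant to the conversation context."""
    table = {
        ("banana", "diet"): [
            ("url", "https://example.com/banana-nutrition"),
            ("text", "Nearby stores selling bananas"),
        ],
    }
    for topics, items in table.items():
        # Suggest the items only if every topic appears in the context.
        if set(topics) <= set(context_keywords):
            return items
    return []

# Context keywords as identified by the context determination module 217.
items = generate_actionable_items(["banana", "good", "diet"])
```

Here the context containing both ‘banana’ and ‘diet’ yields a nutrition URL and a store suggestion, i.e. two actionable items in different predetermined actionable forms.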
In an embodiment, the suggestion module 221 may be configured for suggesting and providing the one or more actionable items 213 identified by the AI analysis module 219 to the user. In an embodiment, the suggestion module 221 may be configured for identifying the relevant and appropriate actionable items 213 for the user based on the historical conversations and user preferences.
In an embodiment, the screen generation module 223 may be configured for generating an overlay screen on the conversation window. In an embodiment, the overlay screen may be provided as a Picture-in-Picture (PIP) frame over the on-going conversation, such that the overlay screen covers the entire conversation and allows the user to perform the required action. In other words, the overlay screen allows the user to perform the required action from the conversation window without leaving and/or coming out of the on-going conversation window.
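The key property of the overlay screen, namely that the user performs the action over the conversation and is back in the same conversation when the overlay is dismissed, may be sketched with the following minimal state model. The class and method names are hypothetical and stand in for the screen generation module 223; no UI toolkit behaviour is implied.

```python
# Minimal sketch of the overlay behaviour: an overlay (e.g. a PIP booking
# screen) is shown over the conversation window, and closing it returns
# the user to the same on-going conversation. Names are hypothetical.

class ConversationWindow:
    def __init__(self):
        self.overlay = None          # no overlay shown initially

    def open_overlay(self, content):
        """Show an overlay screen over the on-going conversation window."""
        self.overlay = content

    def close_overlay(self):
        """Dismiss the overlay; the conversation window is never left."""
        self.overlay = None

window = ConversationWindow()
window.open_overlay("booking-site")
in_overlay = window.overlay is not None   # action performed over the chat
window.close_overlay()                    # user is back in the conversation
```

The point of the sketch is that the conversation window object persists throughout: opening and closing the overlay only changes its `overlay` state, which models performing an action without leaving the on-going conversation.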
FIGS. 3A – 3I illustrate various functionalities of the action generator 200 with the help of exemplary embodiments.
In an embodiment, FIGS. 3A – 3F depict a user interface of a user device 300 used by the user for carrying out the conversation. As shown in FIG. 3A, suppose two users, Joe and Kate, are having a conversation with each other. Suppose, during the conversation, the user Kate asks the user Joe – “Can you suggest a good movie for the weekend?”. By reviewing the on-going conversation, the action generator 200 may identify keywords such as “movie” and “weekend”. Accordingly, the action generator 200 may determine that the context of the conversation relates to watching a ‘movie’ over the ‘weekend’.
In an embodiment, as soon as the user Joe replies positively to the user Kate, the action generator 200 may automatically provide a suggestion to the user Joe based on the context of the on-going conversation. As an example, the action generator 200 may display a suggestion, such as ‘Movie xyz is popular this weekend’, at the bottom of the on-going conversation, as shown in FIG. 3B. In other words, the action generator 200 may automatically perform an Internet search in the background of the conversation, so that it can provide a relevant suggestion to the user, based on the context of the on-going conversation, while the user is involved in the on-going conversation.
In an embodiment, as soon as the user clicks on the suggestion provided, the suggestion may be included in the subsequent conversation, as shown in FIG. 3C. That is, the action generator 200 allows the user Joe to perform the required action of finding the movie information from the on-going conversation window, without having to switch to and/or open a browsing window separately.
Suppose the user Kate continues the conversation and requests the user Joe to book tickets for the movie, as indicated in FIG. 3D. At this point, the action generator 200 may understand the context that the user Kate is looking to book tickets for the movie and, accordingly, may provide a URL that allows the user Joe to book the movie tickets directly from the conversation window. As an example, the URL may be provided at the bottom of the on-going conversation, as shown in FIG. 3D.
Subsequently, the user Joe may click on the URL provided and book the tickets for the movie. In an embodiment, the booking application/site may be provided as an overlay screen on the on-going conversation window, as indicated in FIG. 3E. Finally, once the booking has been completed by the user Joe, the user Joe may dynamically insert the booking details into the on-going conversation, as shown in FIG. 3F. That is, once again, the action generator 200 allows the user to perform the required action without leaving the on-going conversation window.
FIG. 3G illustrates another exemplary scenario/use case of the action generator 200. Suppose, during the conversation, the user mentions a ‘cake’. Here, the action generator 200 may sense that the context of the conversation relates to ‘sweets’ and may dynamically suggest that the user include a ‘chocolate box’. For example, as shown in FIG. 3G, the action generator 200 may automatically display a suggestion such as – “How about a Chocolate box?”. Additionally, the action generator 200 may customize the suggestion to enable the user to search for additional information about the ‘chocolate box’ without leaving the application window. As an example, the action generator 200 may embed a URL in the phrase “Chocolate box” and make it bold and underlined, to indicate to the user the possibility of in-app browsing. Subsequently, the user may click on the suggestion provided. At this point, the action generator 200 may pop up a PIP screen on the chat window for displaying the search results relating to “Chocolate box” to the user. The user may then select the required information from the displayed search results and dynamically include it in the conversation. As an example, the user may select a picture of a chocolate box from the displayed search results and include it in the conversation, thus making the conversation more interesting and relevant to the other user.
FIG. 3H shows yet another exemplary scenario in which the action generator 200 allows the user to book a movie ticket while continuing the conversation. For example, when the user responds that he is okay to watch the movie either online or offline, the action generator 200 recognizes the user preference and provides a relevant suggestion to the user. As an example, the action generator 200 may highlight the word ‘OK’ in the phrase “Both are OK”, thereby allowing the user to instantly search for information related to the movies. Once the user selects the suggestion provided, the action generator 200 may open a PIP screen and display various information including, without limiting to, links, images and videos related to the movies on the display screen. The user may then be allowed to browse through the required information on the display screen. If the user clicks on one of the links to book the movie ticket, the screen may navigate to a booking site, where the user can book the movie tickets, get the booking details and even include the booking details in the conversation. Thus, the action generator 200 allows the user to search for various movies, book tickets and even share the booking details without actually leaving the on-going conversation window.
FIG. 3I illustrates yet another exemplary use case/scenario of the action generator 200. This scenario shows how the action generator 200 can intelligently recognize the context of the conversation and make smart recommendations to the user. For example, as shown in FIG. 3I, suppose the user is having a conversation with the other user about his choice of running shoes. Here, the action generator 200 may recognize that the user is talking about a specific brand/model of running shoes and may provide a customized suggestion to the user, using which the user can readily review the information related to the shoe. Additionally, since the user has expressed interest in a specific brand/model of the shoes, the action generator 200 may also provide a link for checking the price of the shoes. As soon as the user clicks one of the suggestions provided, the action generator 200 opens a PIP screen, which displays various shoes and their pricing details to the user. Subsequently, the user can browse, review and select a particular shoe on the same PIP screen. Additionally, the user may even purchase the shoes on the same page and get in-app payment processing information and order confirmation. Thus, once again, the action generator 200 allows the user to perform a required action without leaving the conversation window.
FIG. 4 shows a flowchart illustrating a method of performing one or more actions from an on-going conversation window in accordance with some embodiments of the present disclosure.
As illustrated in FIG. 4, the method 400 may include one or more blocks illustrating a method for performing one or more actions from an on-going conversation window using an action generator 200 illustrated in FIG. 2. The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform specific functions or implement specific abstract data types.
The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
At block 401, the method 400 includes determining, by the action generator 200, a context of the on-going conversation by analyzing the on-going conversation using one or more predetermined analysis techniques. As an example, the one or more predetermined analysis techniques may comprise, without limiting to, one or more Natural Language Processing (NLP) based analysis techniques including at least one of automatic text summarization and sentiment analysis techniques.
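The disclosure does not fix a particular NLP algorithm for this step, so the sketch below stands in for it with a deliberately simple keyword-and-lexicon heuristic: the topic map, the sentiment word lists and the function name are all illustrative assumptions, not part of the claimed method.

```python
# Illustrative sketch only: a trivial keyword/sentiment heuristic standing
# in for the "predetermined analysis techniques" of block 401.

TOPIC_KEYWORDS = {
    "movies": {"movie", "tickets", "watch", "cinema"},
    "sweets": {"cake", "chocolate", "dessert"},
    "shopping": {"shoes", "brand", "price", "buy"},
}

POSITIVE = {"great", "good", "love", "ok", "okay"}
NEGATIVE = {"bad", "hate", "boring"}

def determine_context(messages):
    """Return a (topic, sentiment) pair for a list of chat messages."""
    # Normalize every word: strip punctuation, lowercase.
    words = {w.strip(".,!?").lower() for m in messages for w in m.split()}
    # Topic = the keyword set with the most overlap with the conversation.
    topic = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    # Naive lexicon-based sentiment score.
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return topic, sentiment
```

In a real system this step would be carried out by trained summarization and sentiment models rather than word lists; the sketch only fixes the interface (messages in, context out) that the later blocks consume.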
At block 403, the method 400 includes analyzing, by the action generator 200, the context of the on-going conversation and a context of a previous conversation using predetermined Artificial Intelligence (AI) techniques for generating one or more actionable items 213 to be suggested for the user. In an embodiment, the one or more actionable items 213 relate to the one or more actions. In an embodiment, analysing the context of the on-going conversation may comprise interfacing the action generator 200 with one or more predetermined browsing tools for dynamically identifying the one or more actionable items 213 relevant for the context of the on-going conversation and the context of the previous conversation. As an example, the predetermined browsing tools may be any of the existing web browser applications such as Google Chrome®, Mozilla Firefox® and the like.
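As a rough illustration of this step, the following sketch merges actionable items for the current and previous contexts, ranking current-context items first. The `ACTION_CATALOGUE` mapping and the ranking rule are hypothetical stand-ins for the predetermined AI techniques, which the disclosure does not detail.

```python
# Hypothetical context-to-items catalogue; a real system would query
# browsing tools or a trained model instead of a static dictionary.
ACTION_CATALOGUE = {
    "movies": ["Book tickets", "Search showtimes"],
    "sweets": ["How about a Chocolate box?"],
    "shopping": ["Check price", "Read reviews"],
}

def generate_actionable_items(current_context, previous_context):
    """Combine items for the current and previous contexts, with
    current-context items ranked first (a stand-in for AI ranking)."""
    items = list(ACTION_CATALOGUE.get(current_context, []))
    for item in ACTION_CATALOGUE.get(previous_context, []):
        if item not in items:       # avoid duplicates when contexts overlap
            items.append(item)
    return items
```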
At block 405, the method 400 includes suggesting, by the action generator 200, the one or more actionable items 213 to the user during the on-going conversation. As an example, the one or more actionable items 213 may be suggested to the user in a selected one of one or more predetermined actionable forms including, without limiting to, Uniform Resource Locator (URL) links, customized texts, pictures and videos. As an example, the URL may be a hyperlink or a web address that allows the user to directly access the required content/information from the Internet. The customized texts may include words and phrases provided in distinct font type, size and colours, that help the user to easily differentiate the suggestions from the rest of the conversation, as shown in FIG. 3C. The pictures and videos may be retrieved from one or more external resources and online databases based on the keywords identified in the on-going conversation. For example, if the user is involved in a conversation about ‘Yoga’, the action generator may dynamically browse one or more images and videos that depict various ‘Asanas’, during the on-going conversation.
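The customized-text form described above (bold, underlined text with an embedded URL, as in the “Chocolate box” example) could be rendered along the lines of the following sketch. The HTML-style markup is an assumption made for illustration, since the disclosure leaves the rendering layer of the chat application unspecified.

```python
def render_suggestion(item, url=None):
    """Format an actionable item as customized text, optionally embedding
    a URL, using HTML-style markup (assumed here for illustration)."""
    text = f"<b><u>{item}</u></b>"             # bold + underline so the
                                               # suggestion stands out
    if url is not None:
        text = f'<a href="{url}">{text}</a>'   # embedded hyperlink for
                                               # in-app browsing
    return text
```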
At block 407, the method 400 includes generating, by the action generator 200, an overlay screen, corresponding to the one or more actionable items 213 selected by the user, over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window. In an embodiment, the overlay screen may include a Picture-in-Picture (PIP) or similar display.
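A minimal model of the overlay behaviour at this block is sketched below, assuming a simple open/close lifecycle in which closing the overlay can insert a selected detail (such as a booking confirmation) back into the conversation. The class and method names are illustrative only; they are not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class OverlayScreen:
    """Toy model of a PIP overlay: the chat window stays underneath while
    the overlay hosts the content for the selected actionable item."""
    content_url: str
    visible: bool = False
    results: list = field(default_factory=list)

    def open(self):
        """Show the overlay over the on-going conversation window."""
        self.visible = True

    def close_and_insert(self, conversation, detail):
        """Close the overlay and insert a selected detail (e.g. booking
        details) back into the on-going conversation."""
        self.visible = False
        conversation.append(detail)
        return conversation
```

The key design point the sketch captures is that the conversation object is never discarded: the overlay reads from and writes back into the same on-going conversation, which is what lets the user act without leaving the chat window.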
Computer System
FIG. 5 illustrates a block diagram of an exemplary computer system 500 for implementing embodiments consistent with the present disclosure. In an embodiment, the computer system 500 may be the action generator 200 illustrated in FIG. 2, which may be used for facilitating performing one or more actions from an on-going conversation window. The computer system 500 may include a central processing unit (“CPU” or “processor”) 502. The processor 502 may comprise at least one data processor for executing program components for executing user- or system-generated business processes. A user may include a common user of the computer system 500 or any system/sub-system being operated parallelly to the computer system 500. The processor 502 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processor 502 may be disposed in communication with one or more Input/Output (I/O) devices (511 and 512) via I/O interface 501. The I/O interface 501 may employ communication protocols/methods such as, without limitation, audio, analog, digital, stereo, IEEE®-1394, serial bus, Universal Serial Bus (USB), infrared, PS/2, BNC, coaxial, component, composite, Digital Visual Interface (DVI), high-definition multimedia interface (HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE® 802.11a/b/g/n/x, Bluetooth, cellular (e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System For Mobile Communications (GSM), Long-Term Evolution (LTE) or the like), etc. Using the I/O interface 501, the computer system 500 may communicate with one or more I/O devices 511 and 512.
In some embodiments, the processor 502 may be disposed in communication with a communication network 509 via a network interface 503. The network interface 503 may communicate with the communication network 509. The network interface 503 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token ring, IEEE® 802.11a/b/g/n/x, etc. Using the network interface 503 and the communication network 509, the computer system 500 may connect to one of the predetermined browsing tools for searching and retrieving information related to the one or more actionable items 213 that need to be suggested for the user.
In an implementation, the communication network 509 may be implemented as one of the several types of networks, such as intranet or Local Area Network (LAN) and such within the organization. The communication network 509 may either be a dedicated network or a shared network, which represents an association of several types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 509 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
In some embodiments, the processor 502 may be disposed in communication with a memory 505 (e.g., RAM 513, ROM 514, etc. as shown in FIG. 5) via a storage interface 504. The storage interface 504 may connect to memory 505 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as Serial Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The memory 505 may store a collection of program or database components, including, without limitation, a user/application interface 506, an operating system 507, a web browser 508, and the like. In some embodiments, the computer system 500 may store user/application data, such as the data, variables, records, etc. as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
The operating system 507 may facilitate resource management and operation of the computer system 500. Examples of operating systems include, without limitation, APPLE® MACINTOSH® OS X®, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION® (BSD), FREEBSD®, NETBSD®, OPENBSD®, etc.), LINUX® distributions (e.g., RED HAT®, UBUNTU®, KUBUNTU®, etc.), IBM® OS/2®, MICROSOFT® WINDOWS® (XP®, VISTA®/7/8, 10 etc.), APPLE® IOS®, GOOGLE™ ANDROID™, BLACKBERRY® OS, or the like.
The user interface 506 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, the user interface 506 may provide computer interaction interface elements on a display system operatively connected to the computer system 500, such as cursors, icons, check boxes, menus, scrollers, windows, widgets, and the like. Further, Graphical User Interfaces (GUIs) may be employed, including, without limitation, APPLE® MACINTOSH® operating systems’ Aqua®, IBM® OS/2®, MICROSOFT® WINDOWS® (e.g., Aero, Metro, etc.), web interface libraries (e.g., ActiveX®, JAVA®, JAVASCRIPT®, AJAX, HTML, ADOBE® FLASH®, etc.), or the like.
The web browser 508 may be a hypertext viewing application. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), and the like. The web browsers 508 may utilize facilities such as AJAX, DHTML, ADOBE® FLASH®, JAVASCRIPT®, JAVA®, Application Programming Interfaces (APIs), and the like. Further, the computer system 500 may implement a mail server stored program component. The mail server may utilize facilities such as ASP, ACTIVEX®, ANSI® C++/C#, MICROSOFT®, .NET, CGI SCRIPTS, JAVA®, JAVASCRIPT®, PERL®, PHP, PYTHON®, WEBOBJECTS®, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), MICROSOFT® exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 500 may implement a mail client stored program component. The mail client may be a mail viewing application, such as APPLE® MAIL, MICROSOFT® ENTOURAGE®, MICROSOFT® OUTLOOK®, MOZILLA® THUNDERBIRD®, and the like.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present invention. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Disc (DVDs), flash drives, disks, and any other known physical storage media.
Advantages of the embodiments of the present disclosure are illustrated herein.
In an embodiment, the proposed action generator uses Artificial Intelligence (AI) techniques to understand context of an on-going conversation and provides auto-generated suggestions and auto-searched information to the user while the user is involved in the conversation, thereby making the conversations more relevant and interesting.
In an embodiment, the present disclosure facilitates a user to perform one or more actions during an on-going conversation, without leaving the actual conversation window. As a result, the present disclosure enhances user experience associated with the chat applications.
In an embodiment, according to the present disclosure, the user can perform multiple actions from the same window. Consequently, the present disclosure ensures optimal resource utilization (i.e., processing speed and battery power) of the user device used for the conversation, since the user need not open and/or initialize multiple applications during the conversation.
The aforesaid technical advancements and practical applications of the proposed method may be attributed to the aspects of a) generating one or more actionable items by analyzing the context of an on-going conversation and a context of previous conversation using predetermined Artificial Intelligence techniques and b) generating an overlay screen over the on-going conversation window for facilitating the user to perform the one or more actions from the on-going conversation window, as disclosed in steps 2 and 4 of the independent claims 1 and 7 of the present disclosure.
In light of the technical advancements provided by the disclosed method and action generator, the claimed steps, as discussed above, are not routine, conventional, or well-known aspects in the art, as the claimed steps provide the aforesaid solutions to the technical problems existing in the conventional technologies. Further, the claimed steps clearly bring an improvement in the functioning of the system itself, as the claimed steps provide a technical solution to a technical problem.
The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.
The terms "including", "comprising", “having” and variations thereof mean "including but not limited to", unless expressly specified otherwise.
The enumerated listing of items does not imply that any or all the items are mutually exclusive, unless expressly specified otherwise. The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be clear that more than one device/article (whether they cooperate) may be used in place of a single device/article. Similarly, where more than one device/article is described herein (whether they cooperate), it will be clear that a single device/article may be used in place of the more than one device/article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Referral Numerals:
| Reference Number | Description |
|---|---|
| 200 | Action generator |
| 201 | I/O Interface |
| 203 | Processor |
| 205 | Memory |
| 207 | Data |
| 209 | Modules |
| 211 | Context information |
| 213 | Actionable items |
| 215 | Other data |
| 217 | Context determination module |
| 219 | AI analysis module |
| 221 | Suggestion module |
| 223 | Screen generation module |
| 225 | Other modules |
| 300 | User device |
| 500 | Exemplary computer system |
| 501 | I/O Interface of the exemplary computer system |
| 502 | Processor of the exemplary computer system |
| 503 | Network interface |
| 504 | Storage interface |
| 505 | Memory of the exemplary computer system |
| 506 | User/Application interface |
| 507 | Operating system |
| 508 | Web browser |
| 509 | Communication network |
| 511 | Input devices |
| 512 | Output devices |
| 513 | RAM |
| 514 | ROM |
| # | Name | Date |
|---|---|---|
| 1 | 202141015696-STATEMENT OF UNDERTAKING (FORM 3) [01-04-2021(online)].pdf | 2021-04-01 |
| 2 | 202141015696-FORM 1 [01-04-2021(online)].pdf | 2021-04-01 |
| 3 | 202141015696-DRAWINGS [01-04-2021(online)].pdf | 2021-04-01 |
| 4 | 202141015696-DECLARATION OF INVENTORSHIP (FORM 5) [01-04-2021(online)].pdf | 2021-04-01 |
| 5 | 202141015696-COMPLETE SPECIFICATION [01-04-2021(online)].pdf | 2021-04-01 |
| 6 | 202141015696-Proof of Right [14-04-2021(online)].pdf | 2021-04-14 |
| 7 | 202141015696-FORM-26 [25-05-2021(online)].pdf | 2021-05-25 |
| 8 | 202141015696-Request Letter-Correspondence [14-04-2022(online)].pdf | 2022-04-14 |
| 9 | 202141015696-Power of Attorney [14-04-2022(online)].pdf | 2022-04-14 |
| 10 | 202141015696-Form 1 (Submitted on date of filing) [14-04-2022(online)].pdf | 2022-04-14 |
| 11 | 202141015696-Covering Letter [14-04-2022(online)].pdf | 2022-04-14 |
| 12 | 202141015696-FORM-9 [26-04-2022(online)].pdf | 2022-04-26 |
| 13 | 202141015696-FORM 3 [09-05-2022(online)].pdf | 2022-05-09 |
| 14 | 202141015696-FORM 18A [09-05-2022(online)].pdf | 2022-05-09 |
| 15 | 202141015696-FER.pdf | 2022-07-20 |
| 16 | 202141015696-OTHERS [17-01-2023(online)].pdf | 2023-01-17 |
| 17 | 202141015696-FER_SER_REPLY [17-01-2023(online)].pdf | 2023-01-17 |
| 18 | 202141015696-DRAWING [17-01-2023(online)].pdf | 2023-01-17 |
| 19 | 202141015696-CORRESPONDENCE [17-01-2023(online)].pdf | 2023-01-17 |
| 20 | 202141015696-COMPLETE SPECIFICATION [17-01-2023(online)].pdf | 2023-01-17 |
| 21 | 202141015696-CLAIMS [17-01-2023(online)].pdf | 2023-01-17 |
| 22 | 202141015696-ABSTRACT [17-01-2023(online)].pdf | 2023-01-17 |
| 23 | 202141015696-US(14)-HearingNotice-(HearingDate-17-07-2023).pdf | 2023-07-05 |
| 24 | 202141015696-US(14)-ExtendedHearingNotice-(HearingDate-09-08-2023).pdf | 2023-07-14 |
| 25 | 202141015696-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [14-07-2023(online)].pdf | 2023-07-14 |
| 26 | 202141015696-Correspondence to notify the Controller [06-08-2023(online)].pdf | 2023-08-06 |
| 27 | 202141015696-FORM-26 [09-08-2023(online)].pdf | 2023-08-09 |
| 28 | 202141015696-Written submissions and relevant documents [24-08-2023(online)].pdf | 2023-08-24 |
| 1 | SEARCHSTRATEGY_202141015696E_20-07-2022.pdf | 2022-07-20 |