Abstract: Disclosed herein is a method and response generation system for providing contextual responses to user interaction. In an embodiment, input data related to user interaction, which may be received from a plurality of input channels in real-time, may be processed using processing models corresponding to each of the input channels for extracting interaction parameters. Thereafter, the interaction parameters may be combined for computing a contextual variable, which in turn may be analyzed to determine a context of the user interaction. Finally, responses corresponding to the context of the user interaction may be generated and provided to the user for completing the user interaction. In some embodiments, the method of the present disclosure accurately detects the context of the user interaction and provides meaningful contextual responses to the user interaction. FIG. 1
Claims:
WE CLAIM:
1. A method of providing contextual responses (109) to user interaction, the method comprising:
receiving, by a response generation system (105), input data (210), related to the user interaction, from each of a plurality of input channels (103) in real-time;
processing, by the response generation system (105), the input data (210) using one or more processing models (107) corresponding to each of the plurality of input channels (103) for extracting a plurality of interaction parameters (211) from the input data (210);
combining, by the response generation system (105), each of the plurality of interaction parameters (211) for computing a contextual variable (213) corresponding to the user interaction;
determining, by the response generation system (105), a context of the user interaction based on analysis of the contextual variable (213); and
generating, by the response generation system (105), one or more responses corresponding to the context of the user interaction for providing the contextual responses (109) to the user interaction.
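To make the claimed steps concrete, below is a minimal Python sketch of the method of claim 1. All names here (`provide_contextual_response`, `channel_models`, the element-wise average used to combine parameters, and the dot-product context score) are illustrative assumptions, not terms from the specification, which leaves the combining and scoring techniques open.

```python
from typing import Callable, Dict, List

def provide_contextual_response(
    channel_inputs: Dict[str, object],           # raw real-time input per channel
    channel_models: Dict[str, Callable[[object], List[float]]],  # one model per channel
    emotion_variables: Dict[str, List[float]],   # predetermined emotion variables (see claim 4)
) -> str:
    # Step 1: process each channel's input with its corresponding model
    # to extract the interaction parameters.
    parameters = [channel_models[ch](data) for ch, data in channel_inputs.items()]

    # Step 2: combine the per-channel parameters into a single contextual
    # variable; an element-wise average is assumed here.
    dim = len(parameters[0])
    contextual_variable = [sum(p[i] for p in parameters) / len(parameters) for i in range(dim)]

    # Step 3: determine the context by scoring the contextual variable
    # against each predetermined emotion variable and keeping the best match.
    def score(a: List[float], b: List[float]) -> float:
        return sum(x * y for x, y in zip(a, b))  # dot product as a stand-in score

    context = max(emotion_variables, key=lambda e: score(contextual_variable, emotion_variables[e]))

    # Step 4: generate a response corresponding to the determined context.
    return f"Generated a {context}-appropriate response."
```

With, say, voice and text models that each emit a vector of the same dimension, the function returns a response labelled with whichever predetermined emotion variable scores highest.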
2. The method as claimed in claim 1, wherein the plurality of input channels (103) comprises a voice input channel, a textual input channel, a visual input channel and a sensory input channel, and wherein the plurality of interaction parameters (211) comprises an emotion of a user (101), gestures and facial expressions of the user (101), and physiological factors associated with the user (101).
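Claim 2 enumerates the input channels and the categories of interaction parameters. One hedged way to type them in Python is sketched below; the field names (e.g., `heart_rate_bpm` as a physiological factor) are hypothetical examples, since the claim does not fix any particular representation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class InputChannel(Enum):
    VOICE = "voice"      # speech captured in real time
    TEXTUAL = "text"     # typed or transcribed text
    VISUAL = "visual"    # camera frames for gestures and facial expressions
    SENSORY = "sensory"  # wearable/ambient sensors for physiological factors

@dataclass
class InteractionParameters:
    emotion: Optional[str] = None            # e.g. derived from voice or text
    gesture: Optional[str] = None            # from the visual channel
    facial_expression: Optional[str] = None  # from the visual channel
    heart_rate_bpm: Optional[float] = None   # a physiological factor (hypothetical field)
```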
3. The method as claimed in claim 1, wherein each of the one or more processing models (107) is configured with predetermined techniques for processing the input data (210) received from a corresponding one of the plurality of input channels (103), and wherein each of the one or more processing models (107) is trained with historical input data (210) for identifying the plurality of interaction parameters (211) in the input data (210).
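A hedged sketch of the training step in claim 3 follows: one classifier per input channel, fitted on historical input data labelled with the interaction parameters it should yield. The use of scikit-learn's `LogisticRegression` is an assumption; the claim only requires that each model be configured and trained per channel.

```python
from sklearn.linear_model import LogisticRegression

def train_channel_models(historical_data):
    """historical_data maps each channel name to (feature_rows, parameter_labels)."""
    models = {}
    for channel, (X, y) in historical_data.items():
        model = LogisticRegression(max_iter=1000)  # assumed "predetermined technique"
        model.fit(X, y)          # learn to identify interaction parameters in input data
        models[channel] = model  # one trained processing model per input channel
    return models
```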
4. The method as claimed in claim 1, wherein determining the context of the user interaction comprises:
assigning a context score to the contextual variable (213) based on comparison of the contextual variable (213) with each of a plurality of predetermined emotion variables;
identifying an emotion variable corresponding to the contextual variable (213) based on the context score; and
determining the context of the user interaction based on the identified emotion variable.
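A sketch of claim 4's three steps, assuming the contextual variable and the predetermined emotion variables are numeric vectors and that cosine similarity serves as the context score (the claim itself does not fix a comparison metric):

```python
import math

def determine_context(contextual_variable, emotion_variables):
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Assign a context score to the contextual variable for each
    # predetermined emotion variable ...
    scores = {emotion: cosine(contextual_variable, vector)
              for emotion, vector in emotion_variables.items()}
    # ... then identify the best-matching emotion variable and treat it
    # as the context of the user interaction.
    return max(scores, key=scores.get)
```

For example, `determine_context([0.9, 0.1], {"frustrated": [1.0, 0.0], "calm": [0.0, 1.0]})` returns `"frustrated"`.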
5. The method as claimed in claim 1, further comprising training a goal-oriented prediction model with historical contextual variables and outcomes of corresponding user interactions for predicting an outcome of the user interaction, and wherein the outcome of the user interaction is at least one of user acceptance or user rejection of the contextual responses (109) provided to the user (101).
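Claim 5's goal-oriented prediction model can be read as a binary classifier over historical contextual variables and their recorded outcomes. The sketch below assumes scikit-learn and toy two-dimensional data purely for illustration; acceptance is encoded as 1, rejection as 0.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: contextual variables and whether the user
# accepted (1) or rejected (0) the contextual response that was served.
historical_contextual_variables = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
historical_outcomes = [1, 0, 1, 0]

outcome_model = LogisticRegression()
outcome_model.fit(historical_contextual_variables, historical_outcomes)

# Predict the likely outcome for a new interaction's contextual variable
# before the response is served.
likely_accept = outcome_model.predict([[0.8, 0.2]])[0] == 1
```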
6. A response generation system (105) for providing contextual responses (109) to user interaction, the response generation system (105) comprising:
a processor (203); and
a memory (205), communicatively coupled to the processor (203), wherein the memory (205) stores processor-executable instructions, which on execution, cause the processor (203) to:
receive input data (210), related to the user interaction, from each of a plurality of input channels (103) in real-time;
process the input data (210) using one or more processing models (107) corresponding to each of the plurality of input channels (103) to extract a plurality of interaction parameters (211) from the input data (210);
combine each of the plurality of interaction parameters (211) for computing a contextual variable (213) corresponding to the user interaction;
determine a context of the user interaction based on analysis of the contextual variable (213); and
generate one or more responses corresponding to the context of the user interaction to provide the contextual responses (109) to the user interaction.
7. The response generation system (105) as claimed in claim 6, wherein the plurality of input channels (103) comprises a voice input channel, a textual input channel, a visual input channel and a sensory input channel, and wherein the plurality of interaction parameters (211) comprises an emotion of a user (101), gestures and facial expressions of the user (101), and physiological factors associated with the user (101).
8. The response generation system (105) as claimed in claim 6, wherein the processor (203) configures each of the one or more processing models (107) with predetermined techniques for processing the input data (210) received from a corresponding one of the plurality of input channels (103), and wherein the processor (203) trains each of the one or more processing models (107) with historical input data (210) for identifying the plurality of interaction parameters (211) in the input data (210).
9. The response generation system (105) as claimed in claim 6, wherein to determine the context of the user interaction, the processor (203) is configured to:
assign a context score to the contextual variable (213) based on comparison of the contextual variable (213) with each of a plurality of predetermined emotion variables;
identify an emotion variable corresponding to the contextual variable (213) based on the context score; and
determine the context of the user interaction based on the identified emotion variable.
10. The response generation system (105) as claimed in claim 6, wherein the processor (203) trains a goal-oriented prediction model with historical contextual variables and outcomes of corresponding user interactions for predicting an outcome of the user interaction, and wherein the outcome of the user interaction is at least one of user acceptance or user rejection of the contextual responses (109) provided to the user (101).
Dated this 27th day of December 2018
R. RAMYA RAO
OF K&S PARTNERS
ATTORNEY FOR THE APPLICANT
IN/PA-1607
Description:
TECHNICAL FIELD
The present subject matter is, in general, related to generating automated responses and, more particularly, but not exclusively, to a method and system for providing contextual responses to user interaction.
| # | Name | Date |
|---|---|---|
| 1 | 201841049483-STATEMENT OF UNDERTAKING (FORM 3) [27-12-2018(online)].pdf | 2018-12-27 |
| 2 | 201841049483-REQUEST FOR EXAMINATION (FORM-18) [27-12-2018(online)].pdf | 2018-12-27 |
| 3 | 201841049483-POWER OF AUTHORITY [27-12-2018(online)].pdf | 2018-12-27 |
| 4 | 201841049483-FORM 18 [27-12-2018(online)].pdf | 2018-12-27 |
| 5 | 201841049483-FORM 1 [27-12-2018(online)].pdf | 2018-12-27 |
| 6 | 201841049483-DRAWINGS [27-12-2018(online)].pdf | 2018-12-27 |
| 7 | 201841049483-DECLARATION OF INVENTORSHIP (FORM 5) [27-12-2018(online)].pdf | 2018-12-27 |
| 8 | 201841049483-COMPLETE SPECIFICATION [27-12-2018(online)].pdf | 2018-12-27 |
| 9 | 201841049483-Request Letter-Correspondence [03-01-2019(online)].pdf | 2019-01-03 |
| 10 | 201841049483-Power of Attorney [03-01-2019(online)].pdf | 2019-01-03 |
| 11 | 201841049483-Form 1 (Submitted on date of filing) [03-01-2019(online)].pdf | 2019-01-03 |
| 12 | 201841049483-Proof of Right (MANDATORY) [06-05-2019(online)].pdf | 2019-05-06 |
| 13 | Correspondence by Agent_Proof of Right_10-05-2019.pdf | 2019-05-10 |
| 14 | 201841049483-FORM 3 [24-08-2021(online)].pdf | 2021-08-24 |
| 15 | 201841049483-PETITION UNDER RULE 137 [25-08-2021(online)].pdf | 2021-08-25 |
| 16 | 201841049483-OTHERS [25-08-2021(online)].pdf | 2021-08-25 |
| 17 | 201841049483-FER_SER_REPLY [25-08-2021(online)].pdf | 2021-08-25 |
| 18 | 201841049483-DRAWING [25-08-2021(online)].pdf | 2021-08-25 |
| 19 | 201841049483-COMPLETE SPECIFICATION [25-08-2021(online)].pdf | 2021-08-25 |
| 20 | 201841049483-CLAIMS [25-08-2021(online)].pdf | 2021-08-25 |
| 21 | 201841049483-FER.pdf | 2021-10-17 |
| 22 | 201841049483-PatentCertificate05-09-2023.pdf | 2023-09-05 |
| 23 | 201841049483-IntimationOfGrant05-09-2023.pdf | 2023-09-05 |
| 24 | 201841049483-PROOF OF ALTERATION [05-12-2023(online)].pdf | 2023-12-05 |