Abstract: Disclosed herein is a method and system for recommending a response for a voice-based user input. The method includes detecting the voice-based user input based on a query provided by a user. The method includes extracting one or more voice parameters from pronunciation of the input. Thereafter, a disease type associated with the user is identified based on the one or more voice parameters. Further, the system verifies correctness of each word in the input based on comparison of each of the one or more voice parameters with a first set of predetermined corresponding one or more voice parameters associated with the disease type. Finally, the response is recommended for the input based on verification of correctness of each word in the input. The present disclosure recommends an accurate response for the input, since one or more words detected incorrectly are auto-corrected, thereby providing a better user experience. FIG. 2
Claims:
We Claim:
1. A method of recommending a response for a voice-based user input, the method comprising:
detecting, by a response recommendation system 107, the voice-based user input, wherein the voice-based user input corresponds to a query provided by a user 103;
extracting, by the response recommendation system 107, one or more voice parameters from pronunciation of the voice-based user input;
identifying, by the response recommendation system 107, a disease type associated with the user 103 based on the one or more voice parameters;
verifying, by the response recommendation system 107, correctness of each word in the voice-based user input based on comparison of each of the one or more voice parameters with a first set of predetermined corresponding one or more voice parameters associated with the disease type; and
recommending, by the response recommendation system 107, the response for the voice-based user input based on verification of the correctness of each word in the voice-based user input.
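By way of illustration only, the claimed method steps could be sketched as follows; all function names, data shapes, profile values, and the deviation threshold are assumptions for exposition, not the granted implementation.

```python
def extract_voice_parameters(utterance):
    # Illustrative stand-in: the voice parameters are assumed to arrive
    # pre-computed alongside the recognised text of the utterance.
    return utterance["parameters"]

def identify_disease_type(params, disease_profiles):
    # Nearest recorded profile by L1 distance over the shared parameter keys.
    def distance(ref):
        return sum(abs(params[k] - ref[k]) for k in ref)
    return min(disease_profiles, key=lambda d: distance(disease_profiles[d]))

def verify_words(words, params, normal_params, tolerance=0.2):
    # Flag every word as suspect when the utterance deviates from the
    # first set (normal-condition reference) by more than the tolerance.
    deviation = max(abs(params[k] - normal_params[k]) / max(abs(normal_params[k]), 1e-9)
                    for k in normal_params)
    return [(word, deviation <= tolerance) for word in words]

def recommend_response(utterance, disease_profiles, normal_params):
    params = extract_voice_parameters(utterance)
    disease = identify_disease_type(params, disease_profiles)
    verified = verify_words(utterance["text"].split(), params, normal_params)
    return {"disease_type": disease,
            "all_words_correct": all(ok for _, ok in verified)}
```

Here the comparison against the disease-type profiles (the second set) and against the normal-condition reference (the first set) is reduced to simple distance checks, purely to make the control flow of the claim concrete.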
2. The method as claimed in claim 1, wherein the one or more voice parameters comprise at least one of pitch, word spacing, frequency, speed at which each word is uttered, and time taken to utter the voice-based user input.
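A minimal sketch of how the timing-based parameters of claim 2 could be derived from per-word timestamps; the input shape, field names, and units are assumptions. Pitch and frequency would require signal-level analysis (e.g. an autocorrelation pitch tracker) and are omitted here.

```python
def voice_parameters(word_timings):
    """Derive timing-based voice parameters from (word, start_s, end_s) tuples."""
    starts = [start for _, start, _ in word_timings]
    ends = [end for _, _, end in word_timings]
    total_time = ends[-1] - starts[0]          # time taken to utter the input
    # Word spacing: mean silence between consecutive words.
    gaps = [starts[i + 1] - ends[i] for i in range(len(word_timings) - 1)]
    word_spacing = sum(gaps) / len(gaps) if gaps else 0.0
    speed = len(word_timings) / total_time     # words per second
    return {"total_time_s": total_time,
            "word_spacing_s": word_spacing,
            "speed_wps": speed}
```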
3. The method as claimed in claim 1, wherein identifying the disease type comprises:
comparing the one or more voice parameters with a second set of predetermined corresponding one or more voice parameters, wherein the second set of predetermined corresponding one or more voice parameters correspond to one or more disease types; and
identifying the disease type among the one or more disease types, corresponding to the second set of predetermined corresponding one or more voice parameters, that matches the one or more voice parameters during the comparison.
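The comparison of claims 3 and 4 could be sketched as a tolerance match against profiles recorded from users affected by each disease type; the relative tolerance and profile contents are illustrative assumptions.

```python
def match_disease_type(params, recorded_profiles, rel_tol=0.15):
    """Return the first disease type whose recorded parameters (the second
    set) all lie within rel_tol of the observed ones, or None if no
    disease type matches during the comparison."""
    for disease, ref in recorded_profiles.items():
        if all(abs(params[k] - ref[k]) <= rel_tol * abs(ref[k]) for k in ref):
            return disease
    return None
```

A None result would correspond to a user whose parameters match no recorded disease-type profile, i.e. speech under normal condition.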
4. The method as claimed in claim 3, wherein the second set of predetermined corresponding one or more voice parameters are recorded for voice-based user input uttered by one or more users affected by the one or more disease types.
5. The method as claimed in claim 1, wherein the first set of predetermined corresponding one or more voice parameters are recorded for a voice-based user input uttered by one or more users during normal condition.
6. The method as claimed in claim 1, wherein verifying the correctness of each word in the voice-based user input is performed by comparing each word in the voice-based user input with each word in a voice-based user input uttered by one or more users during normal condition.
7. The method as claimed in claim 6 further comprising identifying one or more words detected incorrectly in the voice-based user input.
8. The method as claimed in claim 7 further comprising performing auto-correction of the one or more words detected incorrectly to form a corrected voice-based user input.
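One plausible realisation of the word comparison and auto-correction of claims 6 to 8, using closest-match string similarity against a vocabulary of words uttered under normal condition; the similarity cutoff and vocabulary are assumptions, not from the specification.

```python
import difflib

def auto_correct(words, reference_vocabulary, cutoff=0.6):
    """Identify words absent from the normal-condition reference vocabulary
    and replace each with its closest match, forming the corrected input.
    Words with no sufficiently close match are kept as detected."""
    corrected = []
    for word in words:
        if word in reference_vocabulary:
            corrected.append(word)
            continue
        close = difflib.get_close_matches(word, reference_vocabulary,
                                          n=1, cutoff=cutoff)
        corrected.append(close[0] if close else word)
    return corrected
```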
9. The method as claimed in claim 1 further comprising:
determining, by the response recommendation system 107, a relationship among each word in a corrected voice-based user input with one or more previous voice-based user input for identifying a context of the user 103, wherein the one or more previous voice-based user input is related to the query provided by the user 103;
generating, by the response recommendation system 107, a knowledge graph comprising one or more combination of nodes based on the voice-based user input, the corrected voice-based user input and each of the one or more previous voice-based user input;
identifying, by the response recommendation system 107, a node matching the context of the user 103; and
validating, by the response recommendation system 107, the corrected voice-based user input when the identified node matches with the corrected voice-based user input.
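The knowledge-graph validation of claim 9 could be sketched as follows; treating utterances as nodes, word overlap as edges, and a set of context words as the identified context are all simplifying assumptions for illustration.

```python
def build_knowledge_graph(corrected, previous_inputs):
    """One node per utterance; an undirected edge links utterances that
    share at least one word (a loose proxy for relatedness)."""
    nodes = [corrected] + list(previous_inputs)
    edges = {node: set() for node in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if set(a.split()) & set(b.split()):
                edges[a].add(b)
                edges[b].add(a)
    return edges

def validate_corrected_input(corrected, previous_inputs, context_words):
    graph = build_knowledge_graph(corrected, previous_inputs)
    # Identify the node best matching the user's context.
    best = max(graph, key=lambda node: len(context_words & set(node.split())))
    # Validate when that node is the corrected input or adjacent to it.
    return best == corrected or best in graph[corrected]
```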
10. The method as claimed in claim 1 further comprising storing one or more medical profiles in a medical profile database 111 associated with the response recommendation system 107, wherein each of the one or more medical profiles comprises information of a disease type being suffered by a user, a first set of predetermined corresponding one or more voice parameters and a second set of predetermined corresponding one or more voice parameters.
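An illustrative in-memory schema for the medical profile database 111 of claim 10; the class and field names are hypothetical, chosen only to mirror the three items each profile is claimed to hold.

```python
from dataclasses import dataclass

@dataclass
class MedicalProfile:
    user_id: str
    disease_type: str          # disease type being suffered by the user
    normal_parameters: dict    # first set: normal-condition reference
    disease_parameters: dict   # second set: recorded under the disease type

class MedicalProfileDatabase:
    """In-memory stand-in for the medical profile database 111."""
    def __init__(self):
        self._profiles = {}

    def store(self, profile: MedicalProfile) -> None:
        self._profiles[profile.user_id] = profile

    def lookup(self, user_id: str):
        return self._profiles.get(user_id)
```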
11. A response recommendation system 107 for recommending a response for a voice-based user input, the response recommendation system 107 comprising:
a processor 203; and
a memory 205 communicatively coupled to the processor 203, wherein the memory 205 stores processor-executable instructions, which, on execution, cause the processor 203 to:
detect the voice-based user input, wherein the voice-based user input corresponds to a query provided by a user 103;
extract one or more voice parameters from pronunciation of the voice-based user input;
identify a disease type associated with the user 103 based on the one or more voice parameters;
verify correctness of each word in the voice-based user input based on comparison of each of the one or more voice parameters with a first set of predetermined corresponding one or more voice parameters associated with the disease type; and
recommend the response for the voice-based user input based on verification of the correctness of each word in the voice-based user input.
12. The response recommendation system 107 as claimed in claim 11, wherein the one or more voice parameters comprise at least one of pitch, word spacing, frequency, speed at which each word is uttered, and time taken to utter the voice-based user input.
13. The response recommendation system 107 as claimed in claim 11, wherein to identify the disease type, the processor 203 is configured to:
compare the one or more voice parameters with a second set of predetermined corresponding one or more voice parameters, wherein the second set of predetermined corresponding one or more voice parameters correspond to one or more disease types; and
identify the disease type among the one or more disease types corresponding to the second set of predetermined corresponding one or more voice parameters that matches the one or more voice parameters during the comparison.
14. The response recommendation system 107 as claimed in claim 13, wherein the processor 203 records the second set of predetermined corresponding one or more voice parameters for a voice-based user input uttered by one or more users affected by the one or more disease types.
15. The response recommendation system 107 as claimed in claim 11, wherein the processor 203 records the first set of predetermined corresponding one or more voice parameters for a voice-based user input uttered by one or more users during normal condition.
16. The response recommendation system 107 as claimed in claim 11, wherein the processor 203 verifies the correctness of each word in the voice-based user input by comparing each word in the voice-based user input with each word in a voice-based user input uttered by one or more users during normal condition.
17. The response recommendation system 107 as claimed in claim 16, wherein the processor 203 is further configured to identify one or more words detected incorrectly in the voice-based user input based on the comparison.
18. The response recommendation system 107 as claimed in claim 17, wherein the processor 203 is further configured to perform auto-correction of the one or more words detected incorrectly to form a corrected voice-based user input.
19. The response recommendation system 107 as claimed in claim 11, wherein the processor 203 is further configured to:
determine a relationship among each word in a corrected voice-based user input with one or more previous voice-based user input for identifying a context of the user 103, wherein the one or more previous voice-based user input is related to the query provided by the user 103;
generate a knowledge graph comprising one or more combination of nodes based on the voice-based user input, the corrected voice-based user input and each of the one or more previous voice-based user input;
identify a node matching the context of the user 103; and
validate the corrected voice-based user input when the identified node matches with the corrected voice-based user input.
20. The response recommendation system 107 as claimed in claim 11 is associated with a medical profile database 111 comprising one or more medical profiles, wherein each of the one or more medical profiles comprises information of a disease type being suffered by a user, a first set of predetermined corresponding one or more voice parameters and a second set of predetermined corresponding one or more voice parameters.
Dated this 18th day of January 2018
SWETHA S. N
OF K&S PARTNERS
ATTORNEY FOR THE APPLICANT
Description:
TECHNICAL FIELD
The present subject matter is generally related to artificial intelligence based human-machine interaction systems and more particularly, but not exclusively, to a method and system for recommending a response for a voice-based user input.
| # | Name | Date |
|---|---|---|
| 1 | 201841002124-IntimationOfGrant03-11-2022.pdf | 2022-11-03 |
| 2 | 201841002124-STATEMENT OF UNDERTAKING (FORM 3) [18-01-2018(online)].pdf | 2018-01-18 |
| 3 | 201841002124-PatentCertificate03-11-2022.pdf | 2022-11-03 |
| 4 | 201841002124-REQUEST FOR EXAMINATION (FORM-18) [18-01-2018(online)].pdf | 2018-01-18 |
| 5 | 201841002124-Written submissions and relevant documents [09-09-2022(online)].pdf | 2022-09-09 |
| 6 | 201841002124-REQUEST FOR CERTIFIED COPY [18-01-2018(online)].pdf | 2018-01-18 |
| 7 | 201841002124-POWER OF AUTHORITY [18-01-2018(online)].pdf | 2018-01-18 |
| 8 | 201841002124-AMENDED DOCUMENTS [23-08-2022(online)].pdf | 2022-08-23 |
| 9 | 201841002124-FORM 18 [18-01-2018(online)].pdf | 2018-01-18 |
| 10 | 201841002124-Correspondence to notify the Controller [23-08-2022(online)].pdf | 2022-08-23 |
| 11 | 201841002124-FORM 13 [23-08-2022(online)].pdf | 2022-08-23 |
| 12 | 201841002124-FORM 1 [18-01-2018(online)].pdf | 2018-01-18 |
| 13 | 201841002124-POA [23-08-2022(online)].pdf | 2022-08-23 |
| 14 | 201841002124-DRAWINGS [18-01-2018(online)].pdf | 2018-01-18 |
| 15 | 201841002124-US(14)-HearingNotice-(HearingDate-30-08-2022).pdf | 2022-08-19 |
| 16 | 201841002124-DECLARATION OF INVENTORSHIP (FORM 5) [18-01-2018(online)].pdf | 2018-01-18 |
| 17 | 201841002124-COMPLETE SPECIFICATION [18-01-2018(online)].pdf | 2018-01-18 |
| 18 | 201841002124-FER.pdf | 2021-10-17 |
| 19 | 201841002124-FER_SER_REPLY [24-02-2021(online)].pdf | 2021-02-24 |
| 20 | abstract 201841002124.jpg | 2018-01-22 |
| 21 | 201841002124-FORM 3 [24-02-2021(online)].pdf | 2021-02-24 |
| 22 | 201841002124-Proof of Right (MANDATORY) [20-04-2018(online)].pdf | 2018-04-20 |
| 23 | 201841002124-Information under section 8(2) [24-02-2021(online)].pdf | 2021-02-24 |
| 24 | Correspondence by Agent_Form 1_26-04-2018.pdf | 2018-04-26 |
| 25 | 201841002124-PETITION UNDER RULE 137 [24-02-2021(online)].pdf | 2021-02-24 |
| 26 | search6E_15-05-2020.pdf | |