Abstract: A method for real time order matching based on intent from speech input, the method comprising:
- receiving, by a microphone unit [101], at least one speech input from a user;
- processing, by a processing unit [102], to generate a first speech vector for the at least one speech input, wherein the processing further comprises:
o generating one or more segments of pre-defined lengths based on the speech input,
- processing, by a speech to intent unit [103], the speech input to detect an intent, wherein the processing further comprises:
o generating a second speech vector based on the first speech vector,
o computing at least one first probability score for the second speech vector based on at least one pre-stored intent, and
o determining an intent of the user based on the first probability score;
- processing, by an order matching unit [104], the speech input based on at least one pre-stored order, wherein the processing further comprises:
o generating a third speech vector, based on the first speech vector, for each segment of the one or more segments, wherein the third speech vector consists of at least one dimension representing at least one character of a language script,
o generating a fourth vector for each character of at least one pre-stored order,
o comparing the third speech vector for each segment of the one or more segments with the fourth vector of each character to compute at least one matching score, and
o determining a target order from the at least one pre-stored order based on the determined matching score; and
- automatically executing, by the processing unit [102], at least one operation based on the detected intent and the determined target order.
WE CLAIM:
1. A method for real time order matching based on intent from speech input, the method comprising:
- receiving, by a microphone unit [101], at least one speech input from a user;
- processing, by a processing unit [102], to generate a first speech vector for the at least one speech input, wherein the processing further comprises:
o generating one or more segments of pre-defined lengths based on the speech input,
- processing, by a speech to intent unit [103], the speech input to detect an intent, wherein the processing further comprises:
o generating a second speech vector based on the first speech vector,
o computing at least one first probability score for the second speech vector based on at least one pre-stored intent, and
o determining an intent of the user based on the first probability score;
- processing, by an order matching unit [104], the speech input based on at least one pre-stored order, wherein the processing further comprises:
o generating a third speech vector, based on the first speech vector, for each segment of the one or more segments, wherein the third speech vector consists of at least one dimension representing at least one character of a language script,
o generating a fourth vector for each character of at least one pre-stored order,
o comparing the third speech vector for each segment of the one or more segments with the fourth vector of each character to compute at least one matching score, and
o determining a target order from the at least one pre-stored order based on the determined matching score; and
- automatically executing, by the processing unit [102], at least one operation based on the detected intent and the determined target order.
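As a non-limiting illustration of the intent-detection steps of claim 1 (segments of pre-defined length, first and second speech vectors, and a first probability score per pre-stored intent), the following Python sketch shows one possible realisation. The `embed_segment` encoder, the segment length, and the cosine-plus-softmax scoring are assumptions made for illustration only, not the disclosed implementation.

```python
# Illustrative sketch only: the encoder and scoring below are assumptions,
# not the method disclosed in the specification.
import numpy as np

def segment(signal: np.ndarray, seg_len: int) -> list[np.ndarray]:
    """Split the speech input into segments of a pre-defined length."""
    return [signal[i:i + seg_len] for i in range(0, len(signal), seg_len)]

def embed_segment(seg: np.ndarray, dim: int = 128) -> np.ndarray:
    """Hypothetical stand-in for the first-speech-vector encoder."""
    rng = np.random.default_rng(abs(hash(seg.tobytes())) % (2 ** 32))
    return rng.standard_normal(dim)

def detect_intent(signal: np.ndarray,
                  intent_prototypes: dict[str, np.ndarray],
                  seg_len: int = 1600) -> tuple[str, float]:
    # First speech vectors: one per segment of pre-defined length.
    first_vectors = [embed_segment(s) for s in segment(signal, seg_len)]
    # Second speech vector: a pooled representation of the whole utterance.
    second_vector = np.mean(first_vectors, axis=0)
    # First probability score per pre-stored intent (cosine similarity + softmax).
    names = list(intent_prototypes)
    sims = np.array([
        np.dot(second_vector, intent_prototypes[n])
        / (np.linalg.norm(second_vector) * np.linalg.norm(intent_prototypes[n]) + 1e-9)
        for n in names
    ])
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    best = int(np.argmax(probs))
    return names[best], float(probs[best])
```

A caller would populate `intent_prototypes` with one pre-stored embedding per supported intent (for example, "place_order" or "cancel_order") and pass the raw waveform samples as `signal`; the intent with the highest first probability score is returned as the detected intent.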
2. The method as claimed in claim 1, wherein the speech to intent unit [103] is pretrained based on audio speech recognition data.
3. The method as claimed in claim 1, wherein the speech input may comprise one or more speech inputs from one or more sources, wherein the one or more sources include one or more speech inputs from one or more mediums.
4. The method as claimed in claim 1, wherein the language script can comprise one or more languages in the input speech.
5. The method as claimed in claim 1, wherein executing at least one operation comprises executing a pre-defined operation from a list of operations stored in a database.
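Claim 5 recites executing a pre-defined operation selected from a list of operations stored in a database. A minimal sketch of that lookup, assuming a SQLite table keyed by the detected intent (the schema, the intent names, and the operation names are illustrative assumptions, not part of the claims):

```python
# Illustrative sketch only: schema and names are assumptions.
import sqlite3

def build_demo_db() -> sqlite3.Connection:
    """Create an in-memory stand-in for the database of pre-defined operations."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE operations (intent TEXT PRIMARY KEY, operation TEXT)")
    conn.executemany(
        "INSERT INTO operations VALUES (?, ?)",
        [("place_order", "CREATE_ORDER"),
         ("cancel_order", "CANCEL_ORDER"),
         ("modify_order", "UPDATE_ORDER")],
    )
    return conn

def execute_operation(conn: sqlite3.Connection, intent: str, target_order: str) -> str:
    """Look up the pre-defined operation mapped to the detected intent and
    apply it to the determined target order (here, simply report it)."""
    row = conn.execute(
        "SELECT operation FROM operations WHERE intent = ?", (intent,)
    ).fetchone()
    if row is None:
        raise ValueError(f"no pre-defined operation stored for intent {intent!r}")
    return f"{row[0]} -> {target_order}"

if __name__ == "__main__":
    conn = build_demo_db()
    print(execute_operation(conn, "cancel_order", "ORDER-1042"))  # CANCEL_ORDER -> ORDER-1042
```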
6. A system for real time order matching based on intent from speech input, the system comprising:
- a microphone unit [101] configured to receive at least one speech input from a user;
- a processing unit [102] connected at least to the microphone unit [101], said processing unit [102] configured:
o to generate a first speech vector for the at least one speech input, and
o to generate one or more segments of pre-defined lengths based on the speech input;
- a speech to intent unit [103] connected at least to the microphone unit [101] and the processing unit [102], said speech to intent unit [103] configured to process the speech input to detect an intent, wherein the speech to intent unit [103] is further configured to:
o generate a second speech vector based on the first speech vector,
o compute at least one first probability score for the second speech vector based on at least one pre-stored intent, and
o determine an intent of the user based on the first probability score; and
- an order matching unit [104] connected at least to the microphone unit [101], the processing unit [102] and the speech to intent unit [103], said order matching unit [104] configured to:
o process the speech input based on at least one pre-stored order;
o generate a third speech vector for each segment of the one or more segments, wherein the third speech vector consists of at least one dimension representing at least one character of a language script,
o generate a fourth vector for each character of at least one pre-stored order,
o compare the third speech vector for each segment of the one or more segments with the fourth vector of each character to compute at least one second matching score, and
o determine a target order from the at least one pre-stored order based on the determined second matching score.
wherein the processing unit [102] is further configured to execute at least one operation based on the detected intent and the determined target order.
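Claim 6 restates the character-level matching steps (a third speech vector per segment whose dimensions represent script characters, a fourth one-hot vector per character of each pre-stored order, a matching score, and a target order). The sketch below shows one possible reading; the character set, the `char_posteriors` head, and the position-wise alignment between segments and characters are illustrative simplifications, not the disclosed implementation.

```python
# Illustrative sketch only: character set, posteriors head and alignment are assumptions.
import numpy as np

SCRIPT = "abcdefghijklmnopqrstuvwxyz0123456789 "  # assumed character set of the language script
CHAR_INDEX = {c: i for i, c in enumerate(SCRIPT)}

def char_posteriors(first_vector: np.ndarray) -> np.ndarray:
    """Hypothetical third-speech-vector head: maps a segment's first speech vector
    to a probability distribution whose dimensions represent script characters."""
    logits = np.resize(first_vector, len(SCRIPT))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def one_hot(char: str) -> np.ndarray:
    """Fourth vector: one dimension per character of the language script."""
    v = np.zeros(len(SCRIPT))
    v[CHAR_INDEX[char]] = 1.0
    return v

def matching_score(third_vectors: list[np.ndarray], order_text: str) -> float:
    """Compare per-segment character distributions with the stored order's
    one-hot character vectors (position-wise, a deliberate simplification)."""
    fourth_vectors = [one_hot(c) for c in order_text.lower() if c in CHAR_INDEX]
    n = min(len(third_vectors), len(fourth_vectors))
    if n == 0:
        return 0.0
    return float(np.mean([third_vectors[i] @ fourth_vectors[i] for i in range(n)]))

def match_order(third_vectors: list[np.ndarray],
                pre_stored_orders: list[str]) -> tuple[str, float]:
    """Return the pre-stored order with the highest matching score as the target order."""
    return max(((o, matching_score(third_vectors, o)) for o in pre_stored_orders),
               key=lambda pair: pair[1])
```

A production system would need a proper alignment between speech segments and characters (for example, a CTC-style decoding); the claims do not specify one, so the sketch keeps the comparison position-wise for brevity.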
7. The system as claimed in claim 6, wherein the speech to intent unit [103] is pretrained based on audio speech recognition data.
8. The system as claimed in claim 6, wherein the speech input may comprise one or more speech inputs from one or more sources, wherein the one or more sources include one or more speech inputs from one or more mediums.
9. The system as claimed in claim 6, wherein the language script can comprise one or more languages in the input speech.
10. The system as claimed in claim 6, wherein executing at least one operation comprises executing a pre-defined operation from a list of operations stored in a database.
| # | Name | Date |
|---|---|---|
| 1 | 202341010509-STATEMENT OF UNDERTAKING (FORM 3) [16-02-2023(online)].pdf | 2023-02-16 |
| 2 | 202341010509-REQUEST FOR EXAMINATION (FORM-18) [16-02-2023(online)].pdf | 2023-02-16 |
| 3 | 202341010509-PROOF OF RIGHT [16-02-2023(online)].pdf | 2023-02-16 |
| 4 | 202341010509-POWER OF AUTHORITY [16-02-2023(online)].pdf | 2023-02-16 |
| 5 | 202341010509-FORM 18 [16-02-2023(online)].pdf | 2023-02-16 |
| 6 | 202341010509-FORM 1 [16-02-2023(online)].pdf | 2023-02-16 |
| 7 | 202341010509-FIGURE OF ABSTRACT [16-02-2023(online)].pdf | 2023-02-16 |
| 8 | 202341010509-DRAWINGS [16-02-2023(online)].pdf | 2023-02-16 |
| 9 | 202341010509-DECLARATION OF INVENTORSHIP (FORM 5) [16-02-2023(online)].pdf | 2023-02-16 |
| 10 | 202341010509-COMPLETE SPECIFICATION [16-02-2023(online)].pdf | 2023-02-16 |
| 11 | 202341010509-Correspondence_Power Of Attorney_05-06-2023.pdf | 2023-06-05 |
| 12 | 202341010509-FORM-9 [09-08-2023(online)].pdf | 2023-08-09 |
| 13 | 202341010509-FER.pdf | 2025-03-03 |
| 14 | 202341010509-FORM 3 [30-05-2025(online)].pdf | 2025-05-30 |
| 15 | 202341010509-FER_SER_REPLY [27-08-2025(online)].pdf | 2025-08-27 |