Abstract: A method and system are described for controlling an Internet of Things (IoT) device using multi-modal gesture commands. The method includes receiving one or more multi-modal gesture commands comprising at least one of one or more personalized gesture commands and one or more personalized voice commands of a user. The method includes detecting the one or more multi-modal gesture commands using at least one of a gesture grammar database and a voice grammar database. The method includes determining one or more control parameters and IoT device status information associated with a plurality of IoT devices in response to the detection. The method includes identifying the IoT device that the user intends to control from the plurality of IoT devices based on a user requirement, the IoT device status information, and line-of-sight information associated with the user. The method includes controlling the identified IoT device based on the one or more control parameters and the IoT device status information.
Claims:
WE CLAIM:
1. A method for controlling an Internet of Things (IoT) device using multi-modal gesture commands, the method comprising:
receiving, by an application server, one or more multi-modal gesture commands comprising at least one of one or more personalized gesture commands and one or more personalized voice commands of a user;
detecting, by the application server, the one or more multi-modal gesture commands using at least one of a gesture grammar database and a voice grammar database;
determining, by the application server, one or more control parameters and IoT device status information associated with a plurality of IoT devices in response to the detection;
identifying, by the application server, the IoT device that the user intends to control from the plurality of IoT devices based on user requirement, the IoT device status information, and line of sight information associated with the user; and
controlling, by the application server, the identified IoT device based on the one or more control parameters and the IoT device status information.
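The five steps of claim 1 (receive, detect, determine, identify, control) might be sketched as the following minimal pipeline. All names (`IoTDevice`, `detect_command`, the substring-based device matching, and the toggle used as the control action) are illustrative assumptions for clarity, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class IoTDevice:
    name: str
    status: str                  # device status information, e.g. "on" / "off"
    in_line_of_sight: bool = False

def detect_command(command, gesture_grammar, voice_grammar):
    """Detect a multi-modal command against the gesture and voice grammars."""
    if command in gesture_grammar:
        return "gesture"
    if command in voice_grammar:
        return "voice"
    return None

def identify_device(devices, requirement):
    """Pick the device the user intends to control: prefer a matching device
    that is in the user's line of sight (hypothetical tie-breaking rule)."""
    candidates = [d for d in devices if requirement in d.name]
    in_sight = [d for d in candidates if d.in_line_of_sight]
    return (in_sight or candidates or [None])[0]

def control(devices, command, requirement, gesture_grammar, voice_grammar):
    modality = detect_command(command, gesture_grammar, voice_grammar)
    if modality is None:
        return "unrecognized command"          # could trigger a clarifying dialog
    device = identify_device(devices, requirement)
    if device is None:
        return "ambiguous: ask user for more information"
    # Toggling power stands in for applying the determined control parameters.
    device.status = "on" if device.status == "off" else "off"
    return f"{device.name} -> {device.status} (via {modality})"
```

For example, a user in sight of a living-room lamp who performs a registered "swipe-up" gesture while two lamps match the requirement would have the in-sight lamp selected and toggled.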
2. The method of claim 1, further comprising initiating a conversation with the user to receive additional information if at least one of the user requirement, the determined one or more control parameters, and the IoT device status information is insufficient for identifying the IoT device that the user intends to control.
3. The method of claim 2, further comprising determining a mode of controlling the identified IoT device based on at least one of the additional information, the line of sight information, the user requirement, and the IoT device status information, wherein the mode of controlling the identified IoT device comprises a gesture command mode, a voice command mode, and a hybrid mode.
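The mode selection of claim 3 could be illustrated by a small decision rule. The inputs and thresholds below (line-of-sight flag, an ambient-noise level) are assumptions chosen only to make the three modes concrete; the patent does not specify them.

```python
def select_control_mode(has_line_of_sight, ambient_noise_db, noise_threshold_db=65.0):
    """Illustrative mode selection: prefer gestures when the user is visible
    but the room is too noisy for reliable speech recognition, voice when the
    user is out of sight, and the hybrid mode otherwise."""
    if has_line_of_sight and ambient_noise_db >= noise_threshold_db:
        return "gesture"
    if not has_line_of_sight:
        return "voice"
    return "hybrid"
```

A quiet room with the user in view would fall through to the hybrid mode, accepting either input.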
4. The method of claim 1, wherein the one or more multi-modal gesture commands are captured by each of the plurality of IoT devices using one or more sensors, wherein the one or more sensors comprise an image sensor, an audio sensor, and a haptic sensor.
5. The method of claim 4, further comprising controlling the identified IoT device from a remote location based on the one or more multi-modal gesture commands captured by each of the plurality of IoT devices using the one or more sensors.
6. The method of claim 1, wherein detection comprises performing at least one of: one or more image processing techniques and speech processing techniques on the received one or more multi-modal gesture commands.
7. The method of claim 1, wherein detection further comprises comparing the one or more personalized gesture commands with a gesture grammar database; and comparing one or more personalized voice commands with a voice grammar database to determine a match.
8. The method of claim 7, wherein each of the plurality of IoT devices is pre-configured by:
receiving a voice input from the user, wherein the voice input may be processed using speech processing techniques to identify the user;
assigning a unique name to each of the plurality of IoT devices based on an output of one or more natural language processing techniques implemented on the received voice input;
defining one or more multi-modal gesture commands to control each of the plurality of IoT devices, wherein the one or more multi-modal gesture commands comprise at least one of one or more personalized gesture commands and one or more personalized voice commands, wherein the gesture grammar database is created based on the defined one or more personalized gesture commands using one or more deep learning techniques; and wherein the voice grammar database is created based on the defined one or more personalized voice commands using the one or more deep learning techniques.
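The pre-configuration of claim 8 might be sketched as a small registry that assigns each device a unique name and records per-user personalized commands. Simple dictionaries stand in here for the claimed deep-learning-trained gesture and voice grammar databases, and the lowercase normalization stands in for the natural language processing step; both substitutions are assumptions for illustration.

```python
class GrammarStore:
    """Hypothetical per-user grammar databases built during pre-configuration."""

    def __init__(self):
        self.devices = {}           # unique spoken name -> device id
        self.gesture_grammar = {}   # (user, gesture token) -> action
        self.voice_grammar = {}     # (user, voice phrase) -> action

    def assign_name(self, device_id, spoken_name):
        # Normalization stands in for NLP on the recognized voice input.
        name = spoken_name.strip().lower()
        if name in self.devices:
            raise ValueError(f"name '{name}' already taken")
        self.devices[name] = device_id
        return name

    def define_command(self, user, modality, token, action):
        grammar = self.gesture_grammar if modality == "gesture" else self.voice_grammar
        grammar[(user, token)] = action

    def lookup(self, user, token):
        # Check the gesture grammar first, then the voice grammar.
        return (self.gesture_grammar.get((user, token))
                or self.voice_grammar.get((user, token)))
```

Enforcing uniqueness at registration time is what later lets a spoken device name resolve unambiguously during identification.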
9. An application server to control an Internet of Things (IoT) device using multi-modal gesture commands, the application server comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor instructions which, on execution, cause the processor to:
receive one or more multi-modal gesture commands comprising at least one of one or more personalized gesture commands and one or more personalized voice commands of a user;
detect the one or more multi-modal gesture commands using at least one of a gesture grammar database and a voice grammar database;
determine one or more control parameters and IoT device status information associated with a plurality of IoT devices in response to the detection;
identify the IoT device that the user intends to control from the plurality of IoT devices based on user requirement, the IoT device status information, and line of sight information associated with the user; and
control the identified IoT device based on the one or more control parameters and the IoT device status information.
10. The application server of claim 9, wherein the processor is further configured to initiate a conversation with the user to receive additional information if at least one of the user requirement, the determined one or more control parameters, and the IoT device status information is insufficient for identifying the IoT device that the user intends to control.
11. The application server of claim 10, wherein the processor is further configured to determine a mode of controlling the identified IoT device based on at least one of the additional information, the line of sight information, the user requirement, and the IoT device status information, wherein the mode of controlling the identified IoT device comprises a gesture command mode, a voice command mode, and a hybrid mode.
12. The application server of claim 9, wherein the one or more multi-modal gesture commands are captured by each of the plurality of IoT devices using one or more sensors, wherein the one or more sensors comprise an image sensor, an audio sensor, and a haptic sensor.
13. The application server of claim 12, wherein the processor is further configured to control the identified IoT device from a remote location based on the one or more multi-modal gesture commands captured by each of the plurality of IoT devices using the one or more sensors.
14. The application server of claim 9, wherein detection comprises performing at least one of: one or more image processing techniques and speech processing techniques on the received one or more multi-modal gesture commands.
15. The application server of claim 9, wherein detection further comprises comparing the one or more personalized gesture commands with a gesture grammar database; and comparing one or more personalized voice commands with a voice grammar database to determine a match.
16. The application server of claim 15, wherein each of the plurality of IoT devices is pre-configured by:
receiving a voice input from the user, wherein the voice input may be processed using speech processing techniques to identify the user;
assigning a unique name to each of the plurality of IoT devices based on an output of one or more natural language processing techniques implemented on the received voice input;
defining one or more multi-modal gesture commands to control each of the plurality of IoT devices, wherein the one or more multi-modal gesture commands comprise at least one of one or more personalized gesture commands and one or more personalized voice commands, wherein the gesture grammar database is created based on the defined one or more personalized gesture commands using one or more deep learning techniques; and wherein the voice grammar database is created based on the defined one or more personalized voice commands using the one or more deep learning techniques.
Dated this 27th day of March, 2017
Swetha S. N
Of K&S Partners
Agent for the Applicant
Description:
TECHNICAL FIELD
The present subject matter relates, in general, to controlling IoT (Internet of Things) devices and, more particularly but not exclusively, to a method and a system for controlling an IoT device using multi-modal gesture commands.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [27-03-2017(online)].pdf | 2017-03-27 |
| 2 | Form 5 [27-03-2017(online)].pdf | 2017-03-27 |
| 3 | Form 3 [27-03-2017(online)].pdf | 2017-03-27 |
| 4 | Form 18 [27-03-2017(online)].pdf | 2017-03-27 |
| 5 | Form 18 [27-03-2017(online)].pdf_409.pdf | 2017-03-27 |
| 6 | Form 1 [27-03-2017(online)].pdf | 2017-03-27 |
| 7 | Drawing [27-03-2017(online)].pdf | 2017-03-27 |
| 8 | Description(Complete) [27-03-2017(online)].pdf | 2017-03-27 |
| 9 | Description(Complete) [27-03-2017(online)].pdf_410.pdf | 2017-03-27 |
| 10 | PROOF OF RIGHT [22-06-2017(online)].pdf | 2017-06-22 |
| 11 | Correspondence by Agent_Form30 And Form1_27-06-2017.pdf | 2017-06-27 |
| 12 | 201741010818-FER.pdf | 2020-05-12 |
| 13 | 201741010818-FER_SER_REPLY [11-08-2020(online)].pdf | 2020-08-11 |
| 14 | 201741010818-FORM 3 [11-08-2020(online)].pdf | 2020-08-11 |
| 15 | 201741010818-Information under section 8(2) [11-08-2020(online)].pdf | 2020-08-11 |
| 16 | 201741010818-PETITION UNDER RULE 137 [11-08-2020(online)].pdf | 2020-08-11 |
| 17 | 201741010818-IntimationOfGrant17-09-2021.pdf | 2021-09-17 |
| 18 | 201741010818-PatentCertificate17-09-2021.pdf | 2021-09-17 |
| 19 | 201741010818-PROOF OF ALTERATION [27-10-2021(online)].pdf | 2021-10-27 |
| 20 | 201741010818-RELEVANT DOCUMENTS [20-09-2023(online)].pdf | 2023-09-20 |

| # | Name |
|---|---|
| 1 | 127thtposearchstrategyE_12-05-2020.pdf |
| 2 | 127thfileinpassE_12-05-2020.pdf |