Abstract: “AN ARTIFICIALLY INTELLIGENT VIRTUAL ASSISTANT FOR CONTEXTUAL TASK EXECUTION, RESPONSE GENERATION AND COMMUNICATION” A method and system for contextual task execution and response generation through a virtual assistant (VA) are described. The method includes receiving a task to be executed, extracting at least one context input from at least one source, inputting the at least one context input to a neural network, determining, by the neural network, a context/situation, determining a relevance level of the task based on the context/situation, determining whether to perform, delay, or abort the task based on the relevance level of the task, determining a response based on one of the performed task, the delayed task, or the aborted task, determining a privacy level of the response based on the context/situation and classifying the response based on the privacy level, mapping the response to a type of response based on the classification, and presenting the response to the user. [FIG. 2(a)]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See section 10, rule 13)
“AN ARTIFICIALLY INTELLIGENT VIRTUAL ASSISTANT FOR CONTEXTUAL TASK EXECUTION, RESPONSE GENERATION AND COMMUNICATION”
ZENSAR TECHNOLOGIES LIMITED of Plot#4 Zensar Knowledge Park, MIDC, Kharadi, Off Nagar Road, Pune, Maharashtra – 411014, India
The following specification particularly describes the invention and the manner in which it is to
be performed.
TECHNICAL FIELD
[0001] The present disclosure relates to the field of virtual assistant systems. More
particularly, but not exclusively, the present disclosure describes a method and system for contextual task execution, response generation, and communication.
BACKGROUND
[0002] Nowadays, a virtual assistant or a personal assistant is capable of providing
assistance to a user through a voice-based query or command and through a natural-language user interface. The virtual assistant may either provide a response to the query or perform a task based on a command from the user. Examples of existing virtual assistants are Alexa, Google Assistant, Siri, and Cortana.
[0003] Presently, virtual assistants are not intelligent enough to understand the risk
involved in providing information to the user or executing a task for the user. In other words, the virtual assistants have no decision-making capability on whether a task should be performed or discarded, or whether the information being asked for should be provided or withheld, based on an assessment of the risk involved. The decision to perform or not perform a task must depend on the context/situation of the user and the risk involved in receiving the response from the virtual assistant. Further, the existing virtual assistants, while responding to a user, do not take the privacy level of the response into consideration and do not provide the response in a specific format comprehensible only to the user who asked the query.
[0004] Therefore, there exists a need in the art for an artificially intelligent virtual assistant
that is able to decide whether to execute, delay, or discard a task and send a response to the user regarding the task executed or the information being asked for, depending on the context/situation of the user and the relevance and privacy of the task or the query being asked.
SUMMARY
[0005] The present disclosure overcomes one or more shortcomings of the prior art and
provides additional advantages discussed throughout the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments
and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
[0006] In one non-limiting embodiment of the present disclosure, a method of contextual
task execution and response generation through a virtual assistant (VA) is disclosed. The method comprises a step of receiving a task to be executed and extracting at least one context input from at least one source, the at least one context input comprising user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar, and the at least one source comprising a user device or other devices present in the proximity of the user device. The method further comprises a step of inputting the at least one context input to a neural network and determining, by the neural network, a context/situation, the determined context/situation comprising one of a public space and a private space.
[0007] In still another non-limiting embodiment of the present disclosure, the method further
comprises a step of determining a relevance level of the task based on the context/situation, determining whether to perform or delay or abort the task based on the relevance level of the task, determining a response based on one of the performed task, the delayed task, or the aborted task, determining a privacy level of the response based on the context/situation and classifying the response based on the privacy level, mapping the response to a type of response based on the classification, the type of response comprising at least one of: audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger, and presenting the response to the user.
[0008] In yet another non-limiting embodiment of the present disclosure, the method
further comprises a step of providing training data to the neural network, the training data
comprising a plurality of context inputs, each context input being mapped to a corresponding
context/situation, and training the neural network based on the training data.
[0009] In yet another non-limiting embodiment of the present disclosure, the determination
of whether to perform or delay or abort the task based on the relevance level of the task comprises performing the task if the task is of high relevance level, delaying the task if the task is of medium relevance level, and aborting the task if the task is of low relevance level.
[0010] In yet another non-limiting embodiment of the present disclosure, the presentation
of the response to the user comprises receiving feedback from the user and presenting the response based on one of the feedback from the user or the type of response.
[0011] In yet another non-limiting embodiment of the present disclosure, the user device
and the other devices comprise a plurality of sensors, and the plurality of sensors comprises an image sensor, gyroscope, accelerometer, proximity sensor, light sensor, barometer, fingerprint sensor, pedometer, hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
[0012] In yet another non-limiting embodiment of the present disclosure, a method of
contextual response generation through a virtual assistant (VA) is disclosed. The method comprises steps of receiving a query to be executed and extracting at least one context input from at least one source. The at least one context input comprises user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar, and the at least one source comprises a user device or other devices present in the proximity of the user device. The method further comprises the step of inputting the at least one context input to a neural network and determining, by the neural network, a context/situation, the determined context/situation comprises one of a public space and a private space.
[0013] In yet another non-limiting embodiment of the present disclosure, the method
further comprises steps of determining a relevance level of the query based on the context/situation, determining whether to immediately determine a response or delay a determination of response or abort a determination of response based on the relevance level of the query, if the response is determined, determining a privacy level of the response based on the context/situation and classifying the response based on the privacy level, mapping the response to a type of response based on the classification, the type of response comprising at least one of audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger, and presenting the response to the user.
[0014] In yet another non-limiting embodiment of the present disclosure, the method
further comprises steps of providing training data to the neural network, the training data
comprising a plurality of context inputs, each context input being mapped to a corresponding
context/situation, and training the neural network based on the training data.
[0015] In yet another non-limiting embodiment of the present disclosure, the determination
of whether to immediately determine a response or delay a determination of response or abort a determination of response based on the relevance level of the query comprises: determining the response if the query is of high relevance level, delaying the determination of the response if the query is of medium relevance level, and aborting the determination of the response if the query is of low relevance level.
[0016] In yet another non-limiting embodiment of the present disclosure, the presentation
of the response to the user comprises receiving feedback from the user and presenting the response based on one of the feedback from the user or the type of response.
[0017] In yet another non-limiting embodiment of the present disclosure, a virtual assistant
(VA) system for contextual task execution and response generation is disclosed. The VA system comprises a neural network, a user interface configured to receive a task to be executed, and a processing system in communication with the neural network and the user interface and configured to extract at least one context input from at least one source, the at least one context input comprising user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and user’s emails and calendar, and the at least one source comprising a user device or other devices present in the proximity of the user device, and provide the at least one context input to the neural network.
[0018] In yet another non-limiting embodiment of the present disclosure, the neural
network is configured to receive the at least one context input, determine a context/situation based on at least one context input, the determined context/situation comprising one of a public space and a private space. The processing system is configured to determine a relevance level of the task based on the context/situation, determine whether to perform or delay or abort the task based on the relevance level of the task, determine a response based on one of the performed task, the delayed task, or the aborted task, determine a privacy level of the response based on the context/situation and classify the response based on the privacy level, and map the response to a type of response based on the classification, the type of response comprises at least one of audio
from VA device, video screen on VA device, phone call, SMS, email, or internet messenger. The VA system further comprises an output device in communication with the processing system and configured to present the response to the user.
[0019] In yet another non-limiting embodiment of the present disclosure, the processing
system is configured to provide training data to the neural network, the training data comprising a plurality of context inputs, each context input being mapped to a corresponding context/situation, and to train the neural network based on the training data.
[0020] In yet another non-limiting embodiment of the present disclosure, to determine
whether to perform or delay or abort the task based on the relevance level of the task, the processing
system is configured to perform the task if the task is of high relevance level, delay the task if the
task is of medium relevance level, and abort the task if the task is of low relevance level.
[0021] In yet another non-limiting embodiment of the present disclosure, to present the
response to the user, the user interface is configured to receive feedback from the user, and the output device is configured to present the response based on one of the feedback from the user or the type of response.
[0022] In yet another non-limiting embodiment of the present disclosure, a virtual assistant
(VA) system for contextual response generation is disclosed. The VA system comprises a neural network, a user interface configured to receive a query to be executed, and a processing system in communication with the neural network and the user interface and configured to extract at least one context input from at least one source, the at least one context input comprising user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar, and the at least one source comprising a user device or other devices present in the proximity of the user device, and provide the at least one context input to the neural network.
[0023] In yet another non-limiting embodiment of the present disclosure, the neural
network is configured to receive the at least one context input, and determine a context/situation based on the at least one context input, the determined context/situation comprising one of a public space and a private space. The processing system is configured to determine a relevance level of the query based on the context/situation, determine whether to immediately determine a response
or delay a determination of the response or abort a determination of the response based on the relevance level of the query and, if the response is determined, determine a privacy level of the response based on the context/situation and classify the response based on the privacy level, and map the response to a type of response based on the classification. The VA system further comprises an output device in communication with the processing system and configured to present the response to the user.
[0024] In yet another non-limiting embodiment of the present disclosure, the processing
system is configured to provide training data to the neural network, the training data comprising a plurality of context inputs, each context input being mapped to a corresponding context/situation, and to train the neural network based on the training data.
[0025] In yet another non-limiting embodiment of the present disclosure, to determine
whether to immediately determine a response or delay a determination of the response or abort a determination of the response based on the relevance level of the query, the processing system is configured to determine the response if the query is of high relevance level, delay the determination of the response if the query is of medium relevance level, and abort the determination of the response if the query is of low relevance level.
[0026] In yet another non-limiting embodiment of the present disclosure, to present the
response to the user, the user interface is configured to receive a feedback from the user, the output device configured to present the response based on one of the feedback from the user, or the type of response.
[0027] The foregoing summary is illustrative only and is not intended to be in any way
limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[0028] The features, nature, and advantages of the present disclosure will become more
apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout. Some embodiments of
system and/or methods in accordance with embodiments of the present subject matter are now
described, by way of example only, and with reference to the accompanying figures, in which:
[0029] Fig. 1 shows a flowchart of an exemplary method for contextual task execution and
response generation through a virtual assistant (VA), in accordance with an embodiment of the
present disclosure;
[0030] Fig. 2(a) shows a block diagram illustrating a virtual assistant (VA) system for
contextual task execution and response generation and Fig. 2(b) shows a block diagram illustrating
an AI module, in accordance with another embodiment of the present disclosure;
[0031] Fig. 3 shows a flowchart of an exemplary method for contextual response
generation through a virtual assistant (VA), in accordance with another embodiment of the present
disclosure;
[0032] Fig. 4(a) shows a block diagram illustrating a virtual assistant (VA) system for
contextual response generation and Fig. 4(b) shows a block diagram illustrating an AI module, in
accordance with another embodiment of the present disclosure;
[0033] Fig. 5 illustrates a block diagram of a system for query execution and response
generation, in accordance with another embodiment of the present disclosure;
[0034] Fig. 6 illustrates a block diagram of a system for task execution and response
generation, in accordance with another embodiment of the present disclosure.
[0035] It should be appreciated by those skilled in the art that any block diagram herein
represents conceptual views of illustrative systems embodying the principles of the present subject
matter. Similarly, it will be appreciated that any flow charts, flow diagrams and the like represent
various processes which may be substantially represented in computer readable medium and
executed by a computer or processor, whether or not such computer or processor is explicitly
shown.
DETAILED DESCRIPTION
[0038] The terms “comprises”, “comprising”, “include(s)”, or any other variations thereof,
are intended to cover a non-exclusive inclusion, such that a setup, system or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or system or method. In
other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
[0039] In the following detailed description of the embodiments of the disclosure,
reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0040] Fig. 1 shows a flowchart of an exemplary method 100 for contextual task execution
and response generation through a virtual assistant (VA), in accordance with an embodiment of the present disclosure.
[0041] At block 101, a task for execution is received by the VA. The task may comprise
social media tasks, data entry tasks, ecommerce tasks, administrative tasks, and email and chat support. The task may also comprise personal assistance, such as booking accommodation and flights, managing the user's calendar and booking appointments, responding to customer queries through email or phone, handling personal accounts and segregating bills, sending over details requested by a client, noting down minutes of a meeting and documenting them, and extracting data from various data sources. It is to be noted that the present disclosure is not limited to the above-mentioned tasks. Any other task which may be executed or performed by the VA is well within the scope of the present disclosure.
[0042] At block 103, at least one context input from at least one source is extracted for
determining a context/situation of the user. The at least one context input may comprise user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar. The at least one context input may be extracted from a user device or other devices present in the proximity of the user device.
[0043] In an embodiment of the present disclosure, the user device or other devices may
comprise a plurality of sensors. The plurality of sensors may comprise image sensor, gyroscope,
accelerometer, proximity sensor, light sensor, barometer, fingerprint sensor, pedometer, hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
[0044] At block 105, the at least one context input is provided to an Artificial Intelligence
(AI) module. The AI module may comprise a neural network. The neural network may comprise a plurality of layers. At block 107, the at least one context input is processed by the neural network to determine a context/situation. The context/situation may comprise one of a public space and a private space. The public space may further comprise a public space inside home and a public space outside home.
[0043] In an embodiment of the present disclosure, the neural network of the AI module is
trained using training data. The training data may comprise a plurality of context inputs. Each of
the context inputs is mapped to a corresponding context/situation. For example, formal attire may
be mapped to a public space, a user's location may be mapped to an inside-home or outside-home
location, and the sound at the place may be mapped to a public or private space based on the
decibel level of the sound. Similarly, other types of context inputs may be mapped to a respective
context/situation. The neural network may also be trained with different combinations of context
inputs, where each combination of context inputs is associated with a respective context/situation.
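The training described above can be illustrated as a minimal single-neuron network trained by gradient descent to map context inputs to a public/private context label. The binary feature encoding, the training examples, and all names below are illustrative assumptions of this sketch, not part of the specification:

```python
import math
import random

# Hypothetical binary encoding of three context inputs:
# [formal_attire, at_home, high_ambient_noise] -> 1 (public space) / 0 (private space)
TRAINING_DATA = [
    ([1, 0, 1], 1),  # formal attire, outside home, noisy  -> public space
    ([1, 0, 0], 1),  # formal attire, outside home, quiet  -> public space
    ([0, 1, 0], 0),  # casual attire, at home, quiet       -> private space
    ([0, 1, 1], 0),  # casual attire, at home, noisy       -> private space
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5, seed=0):
    """Fit a single-neuron 'network' by gradient descent on log-loss,
    mapping each context-input vector to its labeled context/situation."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(data[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
            b -= lr * err
    return w, b

def predict_context(w, b, x):
    """Return 'public space' or 'private space' for a context-input vector."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return "public space" if p >= 0.5 else "private space"

w, b = train(TRAINING_DATA)
print(predict_context(w, b, [1, 0, 1]))  # classify a new context-input vector
```

In practice the specification's neural network would take many more context inputs (attire, location, sound level, video feed, and so on) and may comprise a plurality of layers; the single neuron above only demonstrates the input-to-context/situation mapping.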
[0044] At block 109, a relevance level of the task is determined based on the context/situation. The
relevance level is determined as the appropriateness of executing the task in the present context/situation of the user. Different types of tasks may be assigned a relevance level based on the context/situation of the user. For example, a user may assign a task of playing a song to a VA when the user is present in a meeting room of the office. In such a scenario, the task may be assigned a low relevance level. In another scenario, if a user assigns a task of starting a presentation to a VA in a meeting room, then the task may be assigned a high relevance level. Similarly, different types of tasks may be assigned a relevance level, i.e., high, medium, or low, based on the appropriateness of the task in the context/situation of the user.
[0045] In an embodiment of the present disclosure, a task may be assigned a relevance
level based on urgency and importance of the task. If the task is urgent and important, then the task may be assigned a high relevance level. If the task is urgent and not important, then the task
may be assigned a high relevance level. If the task is not urgent but important, then the task is
assigned a medium relevance level. If the task is neither urgent nor important, then it is assigned a
low relevance level.
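The urgency/importance rule above amounts to a small decision table, which may be sketched as follows (the string labels are illustrative):

```python
def relevance_level(urgent, important):
    """Map a task's urgency and importance to a relevance level, per the
    rule above: urgent tasks are always of high relevance (whether or not
    they are important), non-urgent but important tasks are of medium
    relevance, and tasks that are neither are of low relevance."""
    if urgent:
        return "high"    # urgent tasks get high relevance regardless of importance
    if important:
        return "medium"  # important but not urgent
    return "low"         # neither urgent nor important
```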
[0046] At block 111, the task is performed, delayed, or aborted based on the relevance
level. If the task is of high relevance level, then the task is performed immediately. If the task is
of medium relevance level, then the task may be delayed by a particular time duration or a
predetermined time period. The predetermined time period may be determined based on a user
preference. If the task is of low relevance level, then the task may be discarded.
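The perform/delay/abort decision of block 111 can be sketched as a dispatch on the relevance level. The 600-second default below stands in for the user-preferred delay period, whose actual value the description leaves open:

```python
def decide_action(relevance_level, preferred_delay_seconds=600):
    """Return the action taken at block 111 for a given relevance level.
    The default delay of 600 seconds is an illustrative stand-in for the
    user-preferred (or predetermined) delay period."""
    if relevance_level == "high":
        return ("perform", 0)                      # execute immediately
    if relevance_level == "medium":
        return ("delay", preferred_delay_seconds)  # postpone by the preferred period
    return ("abort", None)                         # low relevance: discard the task
```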
[0047] At block 113, a response is determined based on one of the performed task, the
delayed task, or the aborted task. The response may comprise information related to the performed
task, the delayed task, or the aborted task. The response of the delayed task or the aborted task may
also comprise a reason for delaying or aborting the task.
[0048] At block 115, a privacy level of the response is determined. The information present
in the response is checked for its privacy level. If the response comprises information that is private
with respect to the present context/situation of the user, then the response is assigned a high privacy
level. If the response comprises information that is not private with respect to the present
context/situation of the user, then the response is assigned a low privacy level. Based on the privacy
level of the response, the response is classified as private or non-private.
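The privacy-level assignment and classification of block 115 reduce to a simple rule. How "private with respect to the present context/situation" is computed is not specified here, so it is modeled as a hypothetical boolean input:

```python
def classify_response(is_private_in_context):
    """Assign a privacy level to a response and classify it, per block 115.
    `is_private_in_context` is a hypothetical predicate: whether the
    response's information is private with respect to the user's present
    context/situation (its computation is not modeled in this sketch)."""
    privacy_level = "high" if is_private_in_context else "low"
    classification = "private" if privacy_level == "high" else "non-private"
    return privacy_level, classification
```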
[0049] At block 117, the response is mapped to a type of response based on the
classification of the response. The type of response may comprise audio from the VA device, video
screen on the VA device, phone call, SMS, email, or internet messenger. In an exemplary embodiment,
if the response is classified as private, then the response is mapped to a phone call, an SMS, an email,
or an internet messenger. If the response is classified as non-private, then the response is mapped to
audio from the VA device or the video screen on the VA device. In an exemplary embodiment, the response
may be modified based on the context/situation of the user. For example, in a non-limiting
embodiment, the response may be modified to a hint with implicit content rather than the complete
response.
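The mapping of block 117 can be sketched as a lookup from the classification to the candidate delivery channels named above. Taking the first candidate as the default, and letting a user-preferred channel override it, are assumptions of this sketch:

```python
# Candidate channels for each classification, as listed in the description.
PRIVATE_CHANNELS = ["phone call", "SMS", "email", "internet messenger"]
NON_PRIVATE_CHANNELS = ["audio from VA device", "video screen on VA device"]

def map_response(classification, user_preference=None):
    """Map a classified response to a delivery channel. A user-preferred
    channel wins if it is among the candidates for that classification;
    otherwise the first candidate is used as an illustrative default."""
    candidates = PRIVATE_CHANNELS if classification == "private" else NON_PRIVATE_CHANNELS
    if user_preference in candidates:
        return user_preference
    return candidates[0]
```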
[0050] At block 119, the response is presented to the user based on the mapping. In an
exemplary embodiment, the response may be presented to the user based on a user preference. The
user preference may be taken in the form of user feedback. In another exemplary embodiment,
the user preference is given higher priority than the mapped type of response while presenting the
response. The steps of method 100 may be performed in an order different from the order described
above. The above method 100 may be understood by an exemplary embodiment mentioned below.
[0051] In an exemplary embodiment of the present disclosure, a user has set a task in the
virtual assistant system to immediately alert him whenever a major bug is found in the product his company is selling. The user's present context/situation is determined based on the at least one context input disclosed above. The user's determined context is a meeting with a client who uses that product. If a bug is detected, a relevance level of generating an alert is determined based on the context/situation of the user. Since the user wants an immediate alert of the bug, the task of generating an alert is urgent and important. Based on the urgency and importance of the task, the task is assigned a high relevance level. Since the task is of high relevance level, it is executed or performed, and a response is determined.
[0052] The determined response is now checked for privacy level based on the user's
context/situation. Since the user is in a meeting with the client who uses that product, and a major bug has been found in the product that could deter the client from buying it, the response is classified as private. Based on the classification, the VA identifies that in the context of this meeting, audio output of any message may be a disturbance for the user and others attending the meeting, and sending an email with all the content may be dangerous as the laptop of the user may be connected to a projector. In such a situation, the VA may decide to present the response by giving a hint to the user through an email and hiding the information relevant to the context. The content may read as follows: “There is something important which needs your attention. You may have a look at it after your meeting and not now. Click here to have a look.” In a non-limiting embodiment, the VA may also notify the user with a text message, thereby securing the private information.
[0053] Fig. 2(a) shows a block diagram illustrating a virtual assistant (VA) system 200 for
contextual task execution and response generation and Fig. 2(b) shows a block diagram illustrating
an AI module 209, in accordance with another embodiment of the present disclosure.
[0054] In an embodiment of the present disclosure, a virtual assistant (VA) system 200 for
contextual task execution and response generation is disclosed. The VA system comprises a user interface 201, a processing system 203, data sources 207, an AI module 209, and an output device 211 in
communication with each other. The AI module 209 may comprise a neural network 217, a memory 213, and a processor 215 in communication with each other.
[0055] The user interface 201 may be configured to receive a task to be executed from a
user. The task may comprise any task performed or executed by the VA as discussed above. The processing system 203 may be configured to extract at least one context input from at least one source. The at least one context input may comprise user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and the user's emails and calendar. The at least one source may comprise a user device or other devices present in the proximity of the user device. The processing system 203 may then be configured to provide the at least one context input to the neural network 217.
[0056] The neural network 217 is trained by the processing system 203 based on training
data as discussed above. The neural network 217 may be configured to receive the at least one context input and determine a context/situation based on at least one context input. The determined context/situation may comprise one of a public space and a private space. The public space may comprise a public space inside home and a public space outside home.
[0057] The processing system 203 may be configured to determine a relevance level of the
task based on the context/situation. The relevance level of the task may be determined based on the context/situation as discussed above. The processing system 203 may then be configured to determine whether to perform, delay, or abort the task based on the relevance level of the task. If the task is of high relevance level, then the task is performed immediately. If the task is of medium relevance level, then the task may be delayed by a particular time duration or a predetermined time period. The predetermined time period may be determined based on a user preference. If the task is of low relevance level, then the task is discarded.
[0058] The processing system 203 may then be configured to determine a response based
on one of the performed task, the delayed task, or the aborted task. The response may comprise
information related to the performed task, the delayed task, or the aborted task. The response of
the delayed task or the aborted task may also comprise a reason for delaying or aborting the task.
[0059] The processing system 203 may then be configured to determine a privacy level of
the response based on the context/situation and classify the response based on the privacy level. If
the response comprises information that is private with respect to present context/situation of the user, then the response is assigned a high privacy level. If the response comprises information that is not private with respect to present context/situation of the user, then the response is assigned a low privacy level. Based on the privacy level of the response, the response is classified as private or non-private. In an exemplary embodiment, the response may be modified based on the context/situation of the user. For example, in a non-limiting embodiment, the response may be modified to a hint of an implicit content rather than the complete response.
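The privacy classification above can be sketched as follows. This is a hypothetical illustration only: the topic list, the context labels, and the function name are assumptions, and the specification's actual classifier may weigh context inputs quite differently.

```python
# Hypothetical sketch of the privacy classification of paragraph [0059].
# The topic set and context labels are assumed example data.

PRIVATE_TOPICS = {"bank_balance", "medical_record", "password"}

def classify_response(topic, context):
    """Classify a response as 'private' or 'non-private' for a context."""
    # Information is treated as private when it is sensitive AND the user
    # is currently in a public space (inside or outside home).
    in_public = context in ("public_inside_home", "public_outside_home")
    privacy_level = "high" if (topic in PRIVATE_TOPICS and in_public) else "low"
    return "private" if privacy_level == "high" else "non-private"
```

Note that privacy here is relative to the context/situation: the same topic can be non-private when the user is in a private space.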
[0060] The processing system 203 may then be configured to map the response to a type of response based on the classification. The type of response comprises at least one of: audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger. The output device 211 may be configured to present the response to the user. The response is presented to the user based on the mapping. In an exemplary embodiment, the response may be presented to the user based on a user preference. The user preference may be taken in the form of user feedback. In another exemplary embodiment, the user preference is given higher priority than the mapped type of response while presenting the response, thereby securing the private information.
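The mapping of classified responses to output channels, with the user preference taking priority, can be sketched as below. The channel lists mirror the specification; the default choices (SMS for private, spoken audio for non-private) and the function name are assumptions.

```python
# Sketch of the response-type mapping of paragraph [0060]; channel lists
# follow the specification, default picks are assumed.

PRIVATE_CHANNELS = ["phone call", "SMS", "email", "internet messenger"]
PUBLIC_CHANNELS = ["audio from VA device", "video screen on VA device"]

def select_channel(classification, user_preference=None):
    """Pick an output channel; an explicit user preference wins."""
    if user_preference is not None:
        return user_preference        # user preference has higher priority
    if classification == "private":
        return PRIVATE_CHANNELS[1]    # assumed default: SMS
    return PUBLIC_CHANNELS[0]         # assumed default: spoken audio
```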
[0061] Fig. 3 shows a flowchart of an exemplary method 300 of contextual response
generation through a virtual assistant (VA), in accordance with another embodiment of the present disclosure.
[0062] At block 301, a query for execution is received by the VA. The query may comprise
an internet search or a data source search. The internet search may comprise keyword search, image and chart search, product research, or geographical search. It is to be noted that the present disclosure is not limited to the above-mentioned query. Any other query which may be executed by the VA is well within the scope of the present disclosure.
[0063] At block 303, at least one context input from at least one source is extracted for
determining a context/situation of the user. The at least one context input may comprise user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar. The at least one context input may be extracted from a user device or other devices present in the proximity of the user.
[0064] In an embodiment of the present disclosure, the user device or other devices may
comprise a plurality of sensors. The plurality of sensors may comprise image sensor, gyroscope, accelerometer, proximity sensor, light-sensor, barometer, fingerprint sensor, pedometer, hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
[0065] At block 305, the at least one context input is provided to an Artificial Intelligence (AI) module. The AI module may comprise a neural network. The neural network comprises a plurality of layers. At block 307, the at least one context input is processed by the neural network to determine a context/situation. The context/situation may comprise one of a public space and a private space. The public space may comprise a public space inside home and a public space outside home.
[0066] In an embodiment of the present disclosure, the neural network of the AI module is
trained by training data. The training data may comprise a plurality of context inputs. Each of the context inputs is mapped to a corresponding context/situation. For example, formal attire may be mapped to a public space, the user's location may be mapped to an inside-home or outside-home location, and the sound at the place may be mapped to a public or private space based on its decibel level. Similarly, other types of context inputs may be mapped to a respective context/situation. The neural network may also be trained with different combinations of context inputs, and each combination of context inputs is associated with a respective context/situation.
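As a toy illustration of how individually-labelled context inputs can be combined, the stand-in below resolves a combination of inputs by majority vote over a small training map. This is not the claimed neural network: the training data, the vote rule, and the tie-break toward the safer "public" label are all assumptions.

```python
# Minimal stand-in for the trained network of paragraph [0066]: each
# known context input votes for a context/situation. All data is assumed.

from collections import Counter

TRAINING_MAP = {
    "formal_attire": "public",
    "casual_attire": "private",
    "loud_sound": "public",
    "quiet_sound": "private",
    "location_office": "public",
    "location_bedroom": "private",
}

def infer_context(context_inputs):
    """Resolve a combination of context inputs by majority vote."""
    votes = Counter(TRAINING_MAP[c] for c in context_inputs if c in TRAINING_MAP)
    # Ties and unknown inputs default to the safer "public" label.
    if not votes or votes["public"] >= votes["private"]:
        return "public"
    return "private"
```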
[0067] At block 309, a relevance level of the query is determined based on the context/situation. The relevance level is determined as the appropriateness of executing the query in the present context/situation of the user. Different types of queries may be assigned a relevance level based on the context/situation of the user. For example, a user may ask a query of “latest movies” to a VA when the user is present in a meeting room of the office. In such a scenario, the query may be assigned a low relevance level. In another scenario, a user may ask a query of “list of clients” to a VA in a meeting room; then the query may be assigned a high relevance level. Similarly, different types of queries may be assigned a relevance level, i.e., high, medium, or low, based on the appropriateness of the query in the context/situation of the user.
[0068] In an embodiment of the present disclosure, a query may be assigned a relevance
level based on urgency and importance of the query. If the query is urgent and important, then the query may be assigned a high relevance level. If the query is urgent and not important, then the query may be assigned a high relevance level. If the query is not urgent but important, then the query is assigned a medium relevance level. If the query is neither urgent nor important, then it is assigned a low relevance level.
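The urgency/importance rule of the paragraph above reduces to a two-input decision; note that an urgent query is high relevance whether or not it is important. A minimal sketch (function name is illustrative):

```python
# Sketch of the urgency/importance rule of paragraph [0068].

def relevance_from_urgency(urgent, important):
    """Urgent -> high (regardless of importance); important only -> medium."""
    if urgent:
        return "high"
    if important:
        return "medium"
    return "low"
```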
[0069] At block 311, the VA determines whether to immediately determine a response or
delay a determination of the response or abort a determination of the response based on the relevance level of the query. If the query is of high relevance level, then the response may be determined immediately. If the query is of medium relevance level, then the determination of response may be delayed by a particular time duration or a predetermined time period. The predetermined time period may be determined based on a user preference. If the query is of low relevance level, then the determination of response is aborted.
[0070] At block 313, it is checked whether a response has been determined. If the response is
determined, at block 315 a privacy level of the response is determined based on the context/situation. An information present in the response is checked for privacy level. If the response comprises information that is private with respect to present context/situation of the user, then the response is assigned a high privacy level. If the response comprises information that is not private with respect to present context/situation of the user, then the response is assigned a low privacy level. Based on the privacy level of the response, the response is classified as private or non-private.
[0071] At block 317, the response is mapped to a type of response based on the
classification of the response. The type of response may comprise audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger. In an exemplary embodiment, if the response is classified as private, then the response is mapped to a phone call, an SMS, an email, or an internet messenger. If the response is classified as non-private, then the response is mapped to audio from the VA device or video screen on the VA device. In an exemplary embodiment, the response may be modified based on the context/situation of the user. For example, in a non-limiting embodiment, the response may be modified to provide a hint of the content rather than the complete response, thereby securing the private information.
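The modification of a private response into a hint can be sketched as below. The wording of the hint is an assumption for illustration and is not taken from the specification.

```python
# Illustrative sketch of the modification of paragraph [0071]: a private
# response is reduced to a hint instead of the full content.

def maybe_redact(response, classification):
    """Return the full response, or a hint when it is private."""
    if classification == "private":
        return "You have a new private response; please check your phone."
    return response
```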
[0072] At block 319, the response is presented to the user based on the mapping. In an
exemplary embodiment, the response may be presented to the user based on a user preference. The
user preference may be taken in the form of user feedback. In another exemplary embodiment, the user preference is given higher priority than the mapped type of response while presenting the
response. The steps of method 300 may be performed in an order different from the order described
above. The above method 300 may be understood by an exemplary embodiment mentioned below.
[0073] In an exemplary embodiment of the present disclosure, a user is in a discussion with
a real estate agent and the user needs to hand over a cheque to the agent. The context/situation is determined as public space outside home. The user does not remember his account balances across his multiple bank accounts. To decide which bank's cheque to issue to the real estate agent, the user asks a query to the VA to get the account balances across all his accounts. The VA identifies the relevance level of the query to determine whether the response should be immediately determined or not.
[0074] Based on the urgency and importance of the query, the response to the query is
immediately determined. When the response is determined, the information present in the response is checked for privacy level. The account balance details are determined to be private information of the user, as the user is in a public space outside home and the risk level associated with sharing such information in public is high. The response is assigned a high privacy level and the response is classified as private. The VA maps the response to the type of response “SMS” and sends an SMS with the account balance details to the user.
[0075] Fig. 4(a) shows a block diagram illustrating a virtual assistant (VA) system 400 for
contextual response generation, and Fig. 4(b) shows a block diagram illustrating an AI module, in accordance with another embodiment of the present disclosure.
[0076] In an embodiment of the present disclosure, a virtual assistant (VA) system 400 for
contextual task execution and response generation is disclosed. The VA system comprises a user interface 401, a processing system 403, data sources 407, AI module 409, output device 411 in communication with each other. The AI module 409 may comprise a neural network 417, a memory 413, and processor 415 in communication with each other.
[0077] The user interface 401 may be configured to receive a query to be executed from a
user. The query may comprise any query executed by the VA as discussed above. The processing
system 403 may be configured to extract at least one context input from at least one source. The at least one context input may comprise user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and the user's emails and calendar. The at least one source may comprise a user device or other devices present in the proximity of the user device. The processing system 403 may then be configured to provide the at least one context input to the neural network 417.
[0078] The neural network 417 is trained by the processing system 403 based on training
data as discussed above. The neural network 417 may be configured to receive the at least one context input and determine a context/situation based on at least one context input. The determined context/situation may comprise one of a public space and a private space. The public space may comprise a public space inside home and a public space outside home.
[0079] The processing system 403 may be configured to determine a relevance level of the
query based on the context/situation. The relevance level of the query may be determined based on the context/situation as discussed above. The processing system 403 may then be configured to determine whether to immediately determine a response, delay the determination of the response, or abort the determination of the response based on the relevance level of the query. If the query is of high relevance level, then the response may be determined immediately. If the query is of medium relevance level, then the determination of the response may be delayed by a particular time duration or a predetermined time period. The predetermined time period may be determined based on a user preference. If the query is of low relevance level, then the determination of the response is aborted.
[0080] If the response is determined, the processing system 403 may then be configured to
determine a privacy level of the response based on the context/situation and classify the response based on the privacy level. If the response comprises information that is private with respect to present context/situation of the user, then the response is assigned a high privacy level. If the response comprises information that is not private with respect to present context/situation of the user, then the response is assigned a low privacy level. Based on the privacy level of the response, the response is classified as private or non-private. In an exemplary embodiment, the response may
be modified based on the context/situation of the user. For example, in a non-limiting embodiment,
the response may be modified to provide a hint or an alert rather than a complete response.
[0081] The processing system 403 may then be configured to map the response to a type of response based on the classification. The type of response comprises at least one of: audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger. The output device 411 may be configured to present the response to the user. The response is presented to the user based on the mapping. In an exemplary embodiment, the response may be presented to the user based on a user preference. The user preference may be taken in the form of user feedback. In another exemplary embodiment, the user preference is given a higher priority than the mapped type of response while presenting the response.
[0082] Fig. 5 illustrates a block diagram of a system 500 of query execution and response
generation, in accordance with another embodiment of the present disclosure.
[0083] In an embodiment of the present disclosure, the system 500 comprises a user
interface 501, AI module 503, neural network 505, context inputs 507, data sources 509, information provider 511, immediate information provider 513, delayed information provider 515, information discarder 517, output device selector 519, output modifier 521, and output device 523 in communication with each other.
[0084] The user interface 501 may be configured to receive a query to be executed. The
query may comprise any query executed by the VA as discussed above. The AI module 503 may be configured to extract context inputs 507. The context inputs 507 may comprise any context input as discussed above. The AI module 503 may then be configured to provide the context inputs to the neural network 505.
[0085] The neural network 505 is trained based on training data as discussed above. The
neural network 505 may be configured to receive the context inputs 507 and determine a context/situation based on the context inputs 507. The determined context/situation may comprise one of a public space and a private space. The public space may comprise a public space inside home and a public space outside home.
[0086] The AI module 503 may be configured to determine a relevance level of the query
based on the context/situation. The relevance level of the query may be determined based on the context/situation as discussed above. The information provider 511 may be configured to
determine whether to immediately determine a response, delay the determination of the response, or abort the determination of the response based on the relevance level of the query. If the query is of high relevance level, then the information provider 511 provides the query to the immediate information provider 513. If the query is of medium relevance level, then the information provider 511 provides the query to the delayed information provider 515. The delayed information provider 515 delays the determination of the response by a predetermined time period as discussed above. If the query is of low relevance level, then the information provider 511 provides the query to the information discarder 517.
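The routing performed by the information provider can be sketched as a dispatch on the relevance level. The handler functions below are hypothetical stand-ins for the immediate information provider (513), delayed information provider (515), and information discarder (517) of Fig. 5.

```python
# Assumed sketch of the information-provider routing of paragraph [0086].

def route_query(query, relevance_level, handlers):
    """Dispatch by relevance: high -> immediate, medium -> delayed, low -> discard."""
    key = {"high": "immediate", "medium": "delayed", "low": "discard"}[relevance_level]
    return handlers[key](query)

# Hypothetical handlers standing in for blocks 513, 515, and 517.
handlers = {
    "immediate": lambda q: ("answer now", q),
    "delayed":   lambda q: ("answer later", q),
    "discard":   lambda q: ("discarded", q),
}
```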
[0087] If the response is determined, the AI module 503 may then be configured to
determine a privacy level of the response based on the context/situation and classify the response based on the privacy level. If the response comprises information that is private with respect to present context/situation of the user, then the response is assigned a high privacy level. If the response comprises information that is not private with respect to present context/situation of the user, then the response is assigned a low privacy level. Based on the privacy level of the response, the response is classified as private or non-private.
[0088] The output device selector 519 may then be configured to select an output device 523 from a plurality of output devices based on the classification. The output device may comprise an audio device, a video device, or a mobile phone. In an exemplary embodiment, the response may be modified based on the context/situation by the output modifier 521. For example, in a non-limiting embodiment, the response may be modified to provide a hint or an alert rather than a complete response, thereby securing the private information.
[0089] The output device 523 may be configured to present the response or the modified
response to the user. In an exemplary embodiment, the response may be presented to the user based on a user preference. The user preference may be taken in the form of user feedback. In another exemplary embodiment, the user preference is given higher priority than the mapped type of response while presenting the response.
[0090] In this embodiment of the present disclosure, the AI module 503, neural network
505, context inputs 507, data sources 509, information provider 511, immediate information provider 513, delayed information provider 515, information discarder 517, output device selector
519, output modifier 521 may comprise one or more processors and other hardware components
for query execution and response generation.
[0091] Fig. 6 illustrates a block diagram of a system 600 for task execution and response
generation, in accordance with another embodiment of the present disclosure.
[0092] In an embodiment of the present disclosure, the system 600 comprises a user
interface 601, AI module 603, neural network 605, context inputs 611, data sources 609, task
executor 607, immediate task executor 613, delayed task scheduler 615, task discarder 617, task
output generator 619, output device selector 621, output modifier 623, and output device 625 in
communication with each other.
[0093] The user interface 601 may be configured to receive a task to be executed. The task
may comprise any task executed by the VA as discussed above. The AI module 603 may be
configured to extract context inputs 611. The context input 611 may comprise any context input
as discussed above. The AI module 603 may be then configured to provide the context inputs to
the neural network 605.
[0094] The neural network 605 is trained based on training data as discussed above. The
neural network 605 may be configured to receive the context inputs and determine a
context/situation based on the context inputs. The determined context/situation may comprise one
of a public space and a private space. The public space may comprise a public space inside home
and a public space outside home.
[0095] The AI module 603 may be configured to determine a relevance level of the task
based on the context/situation. The relevance level of the task may be determined based on the
context/situation as discussed above. The task executor 607 may be configured to determine whether to perform, delay, or abort the task based on the relevance level of the task. If the task is of high relevance level, then the task executor 607 provides the task to the immediate task executor 613. If the task is of medium relevance level, then the task executor 607 provides the task to the delayed task scheduler 615. The delayed task scheduler 615 delays the task execution by a predetermined time period as discussed above. If the task is of low relevance level, then the task executor 607 provides the task to the task discarder 617.
[0096] The task output generator 619 may be configured to determine a response based on
one of the performed task, the delayed task, or the aborted task. The response may comprise
information related to the performed task, the delayed task, or the aborted task. The response of
the delayed task or the aborted task may also comprise reason for delaying or aborting the task.
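The task output generator's response structure can be sketched as below. The field names and the default reason string are assumptions; the specification only requires that a delayed or aborted task's response carry a reason.

```python
# Hedged sketch of the task output generator (619) of paragraph [0096].

def build_response(task, outcome, reason=None):
    """Build a response for a performed, delayed, or aborted task."""
    response = {"task": task, "outcome": outcome}
    # Only delayed and aborted tasks carry a reason in their response.
    if outcome in ("delayed", "aborted"):
        response["reason"] = reason or "low relevance in the current context"
    return response
```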
[0097] The AI module 603 may be configured to determine a privacy level of the response
based on the context/situation and classify the response based on the privacy level. If the response comprises information that is private with respect to present context/situation of the user, then the response is assigned a high privacy level. If the response comprises information that is not private with respect to present context/situation of the user, then the response is assigned a low privacy level. Based on the privacy level of the response, the response is classified as private or non-private.
[0098] The output device selector 621 may then be configured to select an output device 625 from a plurality of output devices based on the classification. The output device 625 may comprise an audio device, a video device, or a mobile phone. In an exemplary embodiment, the response may be modified based on the context/situation by the output modifier 623. For example, in a non-limiting embodiment, the response may be modified to provide a hint or an alert rather than a complete response.
[0099] The output device 625 may be configured to present the response or the modified
response to the user. In an exemplary embodiment, the response may be presented to the user based on a user preference. The user preference may be taken in the form of user feedback. In another exemplary embodiment, the user preference is given higher priority than the mapped type of response while presenting the response.
[0100] In this embodiment of the present disclosure, the AI module 603, neural network
605, context inputs 611, data sources 609, task executor 607, immediate task executor 613, delayed task scheduler 615, task discarder 617, task output generator 619, output device selector 621, and output modifier 623 may comprise one or more processors and other hardware components for task execution and response generation.
[0101] The user interface may include at least one of a key input means, such as a keyboard
or keypad, a touch input means, such as a touch sensor or touchpad, a sound source input means, a camera, or various sensors, and the user interface may include a gesture input means. Further, the user interface may include all types of input means that are currently in development or are to be developed in the future. The user interface may receive information from the user through the
touch panel of the display and transfer the inputted information to the processing system, the processor, or the AI module.
[0102] The processing system may comprise one or more processors, memory, and
communication interface. The memory may store software maintained and/or organized in loadable
code segments, modules, applications, programs, etc., which may be referred to herein as software
modules. Each of the software modules may include instructions and data that, when installed or
loaded on a processor and executed by the processor, contribute to a run-time image that controls
the operation of the processors. When executed, certain instructions may cause the processor to
perform functions in accordance with certain methods, algorithms and processes described herein.
[0103] The illustrated steps are set out to explain the exemplary embodiments shown, and
it should be anticipated that ongoing technological development will change the manner in which
particular functions are performed. These examples are presented herein for purposes of
illustration, and not limitation. Further, the boundaries of the functional building blocks have been
arbitrarily defined herein for the convenience of the description. Alternative boundaries can be
defined so long as the specified functions and relationships thereof are appropriately performed.
Alternatives (including equivalents, extensions, variations, deviations, etc., of those described
herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained
herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0104] Furthermore, one or more computer-readable storage media may be utilized in
implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., it is non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0105] Suitable processors include, by way of example, a general purpose processor, a
special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality
of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine.
Advantages of the embodiment of the present disclosure are illustrated herein.
[0106] In an embodiment, the present disclosure provides an artificially intelligent virtual assistant that decides whether to execute, delay, or discard the task and send a response to the user, depending on the context/situation of the user and the privacy level of the response.
[0107] In an embodiment, the present disclosure provides an artificially intelligent virtual assistant that may be able to decide whether to execute the query and send a response to the user, depending on the context/situation of the user and the privacy of the response.
Reference Number:
Reference Number Description Reference Number Description
200 VIRTUAL ASSISTANT SYSTEM 507 CONTEXT INPUTS
201 USER INTERFACE 509 DATA SOURCES
203 PROCESSING SYSTEM 511 INFORMATION PROVIDER
205 SOURCES 513 IMMEDIATE INFORMATION PROVIDER
207 DATA SOURCES 515 DELAYED INFORMATION PROVIDER
209 AI MODULE 517 INFORMATION DISCARDER
211 OUTPUT DEVICE 519 OUTPUT DEVICE SELECTOR
213 MEMORY 521 OUTPUT MODIFIER
215 PROCESSOR 523 OUTPUT DEVICE
217 NEURAL NETWORK 600 SYSTEM
400 VIRTUAL ASSISTANT SYSTEM 601 USER INTERFACE
401 USER INTERFACE 603 AI MODULE
403 PROCESSING SYSTEM 605 NEURAL NETWORK
405 SOURCES 607 TASK EXECUTOR
407 DATA SOURCES 609 DATA SOURCES
409 AI MODULE 611 CONTEXT INPUTS
411 OUTPUT DEVICE 613 IMMEDIATE TASK EXECUTOR
413 MEMORY 615 DELAYED TASK SCHEDULER
415 PROCESSOR 617 TASK DISCARDER
417 NEURAL NETWORK 619 TASK OUTPUT GENERATOR
500 SYSTEM 621 OUTPUT DEVICE SELECTOR
501 USER INTERFACE 623 OUTPUT MODIFIER
503 AI MODULE 625 OUTPUT DEVICE
505 NEURAL NETWORK
We Claim:
1. A method (100) of contextual task execution and response generation through a virtual
assistant (VA), the method comprising:
receiving (101) a task to be executed;
extracting (103) at least one context input from at least one source, wherein the at least one context input comprises user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar, and wherein the at least one source comprises a user device or other devices present in the proximity of the user device;
inputting (105) the at least one context input to a neural network;
determining (107), by the neural network, a context/situation, wherein the determined context/situation comprises one of a public space and a private space;
determining (109) a relevance level of the task based on the context/situation;
determining (111) whether to perform or delay or abort the task based on the relevance level of the task;
determining (113) a response based on one of the performed task, the delayed task, or the aborted task;
determining (115) a privacy level of the response based on the context/situation and classifying the response based on the privacy level;
mapping (117) the response to a type of response based on the classification, wherein the type of response comprises at least one of: audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger; and
presenting (119) the response to the user.
2. The method (100) as claimed in claim 1, further comprising:
providing training data to the neural network, wherein the training data comprises a plurality of context inputs, and wherein each context input is mapped to a corresponding context/situation; and
training the neural network based on the training data.
3. The method (100) as claimed in claim 1, wherein determining (111) whether to perform or
delay or abort the task based on the relevance level of the task comprises:
performing the task if the task is of high relevance level;
delaying the task if the task is of medium relevance level; and
aborting the task if the task is of low relevance level.
4. The method (100) as claimed in claim 1, wherein presenting (119) the response to the user
comprises:
receiving a feedback from the user;
presenting the response based on one of:
the feedback from the user, or
the type of response.
5. The method (100) as claimed in claim 1, wherein the user device and the other devices comprise a plurality of sensors, and wherein the plurality of sensors comprises image sensor, gyroscope, accelerometer, proximity sensor, light-sensor, barometer, fingerprint sensor, pedometer, hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
6. A method (300) of contextual response generation through a virtual assistant (VA), the method comprising:
receiving (301) a query to be executed;
extracting (303) at least one context input from at least one source, wherein the at least one context input comprises user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar, and wherein the at least one source comprises a user device or other devices present in the proximity of the user device;
inputting (305) the at least one context input to a neural network;
determining (307), by the neural network, a context/situation, wherein the determined context/situation comprises one of a public space and a private space;
determining (309) a relevance level of the query based on the context/situation;
determining (311) whether to immediately determine a response or delay a determination of response or abort a determination of response based on the relevance level of the query;
if the response is determined (313), determining (315) a privacy level of the response based on the context/situation and classifying the response based on the privacy level;
mapping (317) the response to a type of response based on the classification, wherein the type of response comprises at least one of: audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger; and
presenting (319) the response to the user.
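As an illustration of the privacy-level classification and response-type mapping recited in claim 6 (not part of the claims), the choice of delivery channel could be sketched as a lookup keyed on the privacy classification and the determined context/situation; the function `choose_channel` and the channel labels are hypothetical stand-ins for the response types listed above:

```python
def choose_channel(privacy_level: str, context: str) -> str:
    """Hypothetical mapping from (privacy level, context/situation) to a
    response type: private responses avoid audible delivery in a public space."""
    channels = {
        ("private", "public_space"): "sms",                      # discreet, text-only
        ("private", "private_space"): "audio_from_va_device",    # safe to speak aloud
        ("public", "public_space"): "audio_from_va_device",      # nothing to hide
        ("public", "private_space"): "video_screen_on_va_device",
    }
    return channels[(privacy_level, context)]
```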
7. The method (300) as claimed in claim 6, further comprising:
providing training data to the neural network, wherein the training data comprises a plurality of context inputs, and wherein each context input is mapped to a corresponding context/situation; and
training the neural network based on the training data.
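Claims 2 and 7 describe training data in which each context input is mapped to a corresponding context/situation. A minimal sketch of that data shape is shown below; the feature layout and the nearest-neighbour `classify` function are illustrative assumptions standing in for the trained neural network, not the claimed implementation:

```python
# Hypothetical training pairs: each context-input feature vector
# (e.g. [lighting, sound level, people nearby]) maps to a
# context/situation label, as in claims 2 and 7.
TRAINING_DATA = [
    ([0.9, 0.8, 12.0], "public_space"),
    ([0.7, 0.9, 30.0], "public_space"),
    ([0.2, 0.1, 1.0], "private_space"),
    ([0.3, 0.2, 0.0], "private_space"),
]

def classify(context_input, training_data=TRAINING_DATA):
    """Stand-in for the trained network: nearest-neighbour lookup
    over the labelled context inputs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda pair: dist(pair[0], context_input))
    return label
```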
8. The method (300) as claimed in claim 6, wherein determining (311) whether to
immediately determine a response or delay a determination of response or abort a determination
of response based on the relevance level of the query comprises:
determining the response if the query is of high relevance level;
delaying the determination of the response if the query is of medium relevance level; and
aborting the determination of the response if the query is of low relevance level.
9. The method (300) as claimed in claim 6, wherein presenting (319) the response to the user
comprises:
receiving feedback from the user; and
presenting the response based on one of:
the feedback from the user, or
the type of response.
10. The method (300) as claimed in claim 6, wherein the user device and the other devices comprise a plurality of sensors, and wherein the plurality of sensors comprises an image sensor, gyroscope, accelerometer, proximity sensor, light sensor, barometer, fingerprint sensor, pedometer, Hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
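The sensors enumerated above supply the raw readings from which context inputs are extracted. Purely as an illustrative sketch (the dictionary keys and normalisation constants below are assumptions, not a real device API), the extraction step feeding the neural network could look like:

```python
def extract_context_inputs(sensor_readings: dict) -> list:
    """Hypothetical extraction step: normalise a few sensor readings
    from the user device into a feature vector for the neural network."""
    return [
        sensor_readings.get("light_sensor", 0.0) / 1000.0,  # lux, roughly scaled to [0, 1]
        sensor_readings.get("microphone_db", 0.0) / 100.0,  # sound level in dB, scaled
        float(sensor_readings.get("people_detected", 0)),   # count from the image sensor
    ]
```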
11. A virtual assistant (VA) system (200) for contextual task execution and response generation, the VA system comprising:
a neural network (217);
a user interface (201) configured to receive a task to be executed;
a processing system (203) in communication with the neural network (217) and the user interface (201) and configured to:
extract at least one context input from at least one source, wherein the at least one context input comprises user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and user’s emails and calendar, and wherein the at least one source comprises a user device or other devices present in the proximity of the user device; and
provide the at least one context input to the neural network; wherein the neural network is configured to:
receive the at least one context input;
determine a context/situation based on the at least one context input, wherein the determined context/situation comprises one of a public space and a private space; wherein the processing system is configured to:
determine a relevance level of the task based on the context/situation;
determine whether to perform or delay or abort the task based on the relevance level of the task;
determine a response based on one of the performed task, the delayed task, or the aborted task;
determine a privacy level of the response based on the context/situation and classify the response based on the privacy level; and
map the response to a type of response based on the classification, wherein the type of response comprises at least one of: audio from VA device, video screen on VA device, phone call, SMS, email, or internet messenger; and
an output device (211) in communication with the processing system and configured to present the response to the user.
12. The VA system (200) as claimed in claim 11, wherein the processing system (203) is
configured to:
provide training data to the neural network (217), wherein the training data comprises a plurality of context inputs, and wherein each context input is mapped to a corresponding context/situation; and
train the neural network (217) based on the training data.
13. The VA system (200) as claimed in claim 11, wherein to determine whether to perform or
delay or abort the task based on the relevance level of the task, the processing system (203) is
configured to:
perform the task if the task is of high relevance level;
delay the task if the task is of medium relevance level; and
abort the task if the task is of low relevance level.
14. The VA system (200) as claimed in claim 11, wherein to present the response to the user,
the user interface (201) is configured to:
receive feedback from the user, wherein the output device (211) is configured to:
present the response based on one of:
the feedback from the user, or
the type of response.
15. The VA system (200) as claimed in claim 11, wherein the user device and the other devices comprise a plurality of sensors, and wherein the plurality of sensors comprises an image sensor, gyroscope, accelerometer, proximity sensor, light sensor, barometer, fingerprint sensor, pedometer, Hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
16. A virtual assistant (VA) system (400) for contextual response generation, the VA system comprising:
a neural network (417);
a user interface (401) configured to receive a query to be executed;
a processing system (403) in communication with the neural network and the user interface and configured to:
extract at least one context input from at least one source, wherein the at least one context input comprises user attire, user posture, user location, lighting of the place, sound at the place, movement in the place around the user, people around the user, size of an area, video feed of the place, user device activities, and user emails and calendar, and wherein the at least one source comprises a user device or other devices present in the proximity of the user device; and
provide the at least one context input to the neural network; wherein the neural network is configured to:
receive the at least one context input; and
determine a context/situation based on the at least one context input, wherein the determined context/situation comprises one of a public space and a private space; wherein the processing system is configured to:
determine a relevance level of the query based on the context/situation;
determine whether to immediately determine a response or delay a determination of the response or abort a determination of the response based on the relevance level of the query;
if the response is determined, determine a privacy level of the response based on the context/situation and classify the response based on the privacy level; and
map the response to a type of response based on the classification; and
an output device (411) in communication with the processing system and configured to present the response to the user.
17. The VA system (400) as claimed in claim 16, wherein the processing system is configured
to:
provide training data to the neural network (417), wherein the training data comprises a plurality of context inputs, and wherein each context input is mapped to a corresponding context/situation; and
train the neural network (417) based on the training data.
18. The VA system (400) as claimed in claim 16, wherein to determine whether to immediately
determine a response or delay a determination of response or abort a determination of response
based on the relevance level of the query, the processing system is configured to:
determine the response if the query is of high relevance level;
delay the determination of the response if the query is of medium relevance level; and
abort the determination of the response if the query is of low relevance level.
19. The VA system (400) as claimed in claim 16, wherein to present the response to the user,
the user interface (401) is configured to:
receive feedback from the user, wherein the output device (411) is configured to:
present the response based on one of:
the feedback from the user, or
the type of response.
20. The VA system (400) as claimed in claim 16, wherein the user device and the other devices comprise a plurality of sensors, and wherein the plurality of sensors comprises an image sensor, gyroscope, accelerometer, proximity sensor, light sensor, barometer, fingerprint sensor, pedometer, Hall sensor, digital compass, augmented and virtual reality, infrared sensor, pressure sensor, temperature sensor, iris scanner, air humidity sensor, pulse oximeter, Geiger counter, near field communication (NFC) sensor, laser, and air gesture sensor.
| # | Name | Date |
|---|---|---|
| 1 | 201821048373-STATEMENT OF UNDERTAKING (FORM 3) [20-12-2018(online)].pdf | 2018-12-20 |
| 2 | 201821048373-PROVISIONAL SPECIFICATION [20-12-2018(online)].pdf | 2018-12-20 |
| 3 | 201821048373-PROOF OF RIGHT [20-12-2018(online)].pdf | 2018-12-20 |
| 4 | 201821048373-POWER OF AUTHORITY [20-12-2018(online)].pdf | 2018-12-20 |
| 5 | 201821048373-FORM 1 [20-12-2018(online)].pdf | 2018-12-20 |
| 6 | 201821048373-DRAWINGS [20-12-2018(online)].pdf | 2018-12-20 |
| 7 | 201821048373-DECLARATION OF INVENTORSHIP (FORM 5) [20-12-2018(online)].pdf | 2018-12-20 |
| 8 | 201821048373-Proof of Right (MANDATORY) [07-05-2019(online)].pdf | 2019-05-07 |
| 9 | 201821048373-RELEVANT DOCUMENTS [19-11-2019(online)].pdf | 2019-11-19 |
| 10 | 201821048373-FORM 13 [19-11-2019(online)].pdf | 2019-11-19 |
| 11 | 201821048373-FORM 18 [20-12-2019(online)].pdf | 2019-12-20 |
| 12 | 201821048373-DRAWING [20-12-2019(online)].pdf | 2019-12-20 |
| 13 | 201821048373-CORRESPONDENCE-OTHERS [20-12-2019(online)].pdf | 2019-12-20 |
| 14 | 201821048373-COMPLETE SPECIFICATION [20-12-2019(online)].pdf | 2019-12-20 |
| 15 | 201821048373-ORIGINAL UR 6(1A) FORM 1-080519.pdf | 2019-12-31 |
| 16 | Abstract1.jpg | 2020-01-07 |
| 17 | 201821048373-OTHERS [07-09-2021(online)].pdf | 2021-09-07 |
| 18 | 201821048373-FER_SER_REPLY [07-09-2021(online)].pdf | 2021-09-07 |
| 19 | 201821048373-DRAWING [07-09-2021(online)].pdf | 2021-09-07 |
| 20 | 201821048373-CLAIMS [07-09-2021(online)].pdf | 2021-09-07 |
| 21 | 201821048373-ABSTRACT [07-09-2021(online)].pdf | 2021-09-07 |
| 22 | 201821048373-FER.pdf | 2021-10-18 |
| 23 | 201821048373-PatentCertificate22-12-2023.pdf | 2023-12-22 |
| 24 | 201821048373-IntimationOfGrant22-12-2023.pdf | 2023-12-22 |
| 1 | SEARCHSTRATEGY-E_16-03-2021.pdf | 2021-03-16 |