Artificial Intelligence Enabled Mixed Reality System And Method

Abstract: The present invention relates to an artificial-intelligence-based system and method for moderating interactions between users. The aim is to improve the emotional intelligence of users so that measured responses and reactions are maintained during an interaction, even when conflict arises. The disclosure accordingly provides a mixed-reality-glass-powered assistant that displays a customer's moderated expressions to the service provider: when the analytical engine detects negative emotions in the customer, it transforms the customer's image, adding a smile to the face, and presents the result to the service provider via a mixed reality glass so that the provider responds to the customer in a positive manner.
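As a rough illustration of the interaction flow described above, the following Python sketch shows one way the moderation decision could be wired together: score the customer's emotional negativity and, if it crosses a threshold, present a normalized (smiling) rendering to the service provider's mixed reality glass. The Frame type, the heart-rate heuristic and the threshold value are illustrative assumptions, not details taken from the specification.

from dataclasses import dataclass

@dataclass
class Frame:
    image: bytes           # raw camera frame of the customer (placeholder)
    speech: bytes          # audio captured alongside the frame (placeholder)
    heart_rate_bpm: float  # one example physiological signal

NEGATIVITY_THRESHOLD = 0.6  # assumed tuning constant, not from the patent

def assess_negativity(frame: Frame) -> float:
    """Stand-in for the analytical engine's emotional assessment.
    Returns a score in [0, 1]; higher means a more negative emotion."""
    # A real system would combine image, speech and physiological models here.
    return min(1.0, max(0.0, (frame.heart_rate_bpm - 60.0) / 60.0))

def normalize_expression(frame: Frame) -> Frame:
    """Stand-in for the image transformation step: replace the negative
    expression with a positive one, e.g. add a smile to the customer's face."""
    return Frame(image=b"<smiling render>", speech=frame.speech,
                 heart_rate_bpm=frame.heart_rate_bpm)

def frame_for_mixed_reality_glass(frame: Frame) -> Frame:
    """What the service provider's mixed reality glass would display."""
    if assess_negativity(frame) > NEGATIVITY_THRESHOLD:
        return normalize_expression(frame)
    return frame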

Patent Information

Application #: 201941044684
Filing Date: 04 November 2019
Publication Number: 19/2021
Publication Type: INA
Invention Field: COMMUNICATION
Status: Granted
Email: dev.robinson@amsshardul.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-01-08
Renewal Date:

Applicants

Cognizant Technology Solutions India Pvt. Ltd.
Techno Complex, No. 5/535 Old Mahabalipuram Road Okkiyam Thoraipakkam Chennai 600 097, Tamil Nadu India

Inventors

1. Rajkumar Joseph
KG Good Fortune Apt, D Block 203, Nookampalayam Road Off ECR Link Road, Cheran Nagar Perumbakkam Chennai – 600 100, Tamil Nadu India
2. Safuvan Puthan Peedika
Pallikuthi Chalil House Kavanur Post, Kerala – 673639 India
3. Arun Muthuraj Vedamanickam
W3, Akash Ganga Apartment Thangam Avenue, Saibaba Nagar Pallikaranai Chennai – 507123, Tamil Nadu India
4. Rajgopal Appakutty
1180, 19th st, G Block, Annanagar Chennai – 600040, Tamil Nadu India
5. Purwa Rathi
Flat 403, Tower 11 CHD Avenue 71 Gurgaon – 122018, Haryana India

Specification

We claim:
1) A system for moderating interaction amongst plurality of users, comprising:
an input module configured to receive image data, speech data and physiological signals of a first user from amongst the plurality of users;
an extraction module configured to extract dynamically varying facial expressions, bodily expressions, aural and other symptomatic characteristics from the received image data, speech data and the physiological signals;
an artificial intelligence engine configured to:
perform context driven emotional assessment on the dynamically varying facial expressions, bodily expressions, aural and other symptomatic characteristics;
apply adjustment to a degree of obtaining normalized emotions in an event the context driven emotional assessment is higher than a predetermined threshold; and
a virtual assistant configured to display normalized emotions of the first user to one or more other users such that a moderated response is generated from the one or more of the plurality of users to the adjusted emotions of the first user.
2) The system, as claimed in accordance with claim 1, wherein the input module is further configured to receive historical activity of temperament of the first user.

3) The system, as claimed in accordance with claim 1, further comprising one or more cameras or other image capturing devices that are configured to capture the image of the first user from different angles or different portions of frequency spectrum for transmitting to the input module.
4) The system, as claimed in accordance with claim 1, further comprising one or more audio sensors or haptic sensors that are configured to capture speech data of the first user along with prosodic features such as volume, pitch, speed, strength and tone of speech for transmitting to the input module.
5) The system, as claimed in accordance with claim 1, further comprising a pulse sensor, a heartbeat sensor, a blood pressure sensor, a respiratory rate sensor, or one or more pace frequency sensor to capture physiological signals of the first user for transmitting to the input module.
6) The system, as claimed in accordance with claim 1, wherein the extraction module is operable to provide image data and speech data to a 3-dimensional convolutional neural network (CNN) to extract the facial expressions, bodily expressions, aural and other symptomatic characteristics from the image and speech data.
7) The system, as claimed in accordance with claim 5, wherein the extraction module is further operable to track progress of extraction of the facial expressions, bodily expressions, aural and other symptomatic characteristics based on long short-term memory (LSTM) units.
8) The system, as claimed in accordance with claim 1, wherein the context driven sentiment is assessed based on:
determination of context of interaction amongst the plurality of users from the dynamically varying facial expressions, bodily expressions, aural and other symptomatic characteristics;
association of a representation of emotional state corresponding to the assessed context based on occurrence of speech elements and emotional tone associated therewith using a feed forward neural network model; and
comparison of the represented emotional state with a predetermined threshold for obtaining an emotional assessment score.
9) The system, as claimed in accordance with claim 1, wherein the normalized emotions refer to emotions other than negative emotions such as anger, disgust, anxiety, fatigue or the like.
10) The system, as claimed in accordance with claim 1, wherein the artificial intelligence engine is configured to apply adjustment to the degree of obtaining the normalized emotions by morphing negative emotions of the first user with those of positive emotions using a variational autoencoder-generative adversarial network.
11) The system, as claimed in accordance with claim 1, wherein the virtual assistant comprises a wearable device configured to be worn by the plurality of users other than the first user to receive and display the normalized emotions of the first user to the plurality of users.
12) The system, as claimed in accordance with claim 11, wherein the wearable device is a wearable eyewear or a glass partition device, and is configured to select a visualization mode, virtual reality mode, augmented reality mode or mixed reality mode or the combination thereof for display of the normalized emotions.
13) A method for moderating interaction amongst plurality of users, comprising:
receiving image data, speech data and physiological signals of a first user from amongst the plurality of users;
extracting dynamically varying facial expressions, bodily expressions, aural and other symptomatic characteristics from the received image data, speech data and the physiological signals;
performing context driven emotional assessment on the dynamically varying facial expressions, bodily expressions, aural and other symptomatic characteristics;
applying adjustment to a degree of obtaining normalized emotions in an event the context driven emotional assessment is higher than a predetermined threshold; and
displaying normalized emotions of the first user to one or more other users equipped with a virtual assistant such that a moderated response is generated from the one or more of the plurality of users to the adjusted emotions of the first user.
14) The method, as claimed in accordance with claim 13, further comprising receiving historical activity of temperament of the first user.
15) The method, as claimed in accordance with claim 13, wherein the image data is captured from different angles or different portions of frequency spectrum by an image capturing device.
16) The method, as claimed in accordance with claim 13, wherein the speech data is captured along with prosodic features such as volume, pitch, speed, strength and tone of speech.
17) The method, as claimed in accordance with claim 13, wherein the physiological signals comprise measurements of pulse rate, heartbeat, blood pressure, respiratory rate, pace frequency and the like.
18) The method, as claimed in accordance with claim 13, wherein the facial expressions, bodily expressions, aural and the other symptomatic characteristics are extracted using a 3-dimensional convolutional neural network (CNN) model.
19) The method, as claimed in accordance with claim 13, further comprising tracking of progress of extraction of the facial expressions, bodily expressions, aural and other symptomatic characteristics based on long short-term memory (LSTM) units.
20) The method, as claimed in accordance with claim 13, wherein the context driven sentiment assessment is performed by:
determining context of interaction amongst the plurality of users from the dynamically varying facial expressions, bodily expressions, aural and other symptomatic characteristics;
associating a representation of emotional state corresponding to the assessed context based on occurrence of speech elements and emotional tone associated therewith using a feed forward neural network model; and
comparing the represented emotional state with a predetermined threshold for obtaining an emotional assessment score.
21) The method, as claimed in accordance with claim 13, wherein the normalized emotions refer to emotions other than negative emotions such as anger, disgust, anxiety, fatigue or the like.
22) The method, as claimed in accordance with claim 13, wherein the normalized emotions are obtained by morphing negative emotions of the first user with those of positive emotions using a variational autoencoder-generative adversarial network.
23) The method, as claimed in accordance with claim 13, wherein the virtual assistant comprises a wearable device configured to be worn by the plurality of users other than the first user to receive and display the normalized emotions of the first user to the plurality of users.
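Claims 6, 7, 18 and 19 recite extracting expression features with a 3-dimensional convolutional neural network and tracking the extraction with long short-term memory units. The sketch below, assuming PyTorch with arbitrary layer sizes and tensor shapes, shows one plausible arrangement of such a pipeline; it is illustrative only and not the claimed implementation.

import torch
import torch.nn as nn

class ExpressionExtractor(nn.Module):
    """3-D CNN over short video clips followed by an LSTM that tracks the
    extracted expression features across successive clips."""

    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # 3-D convolution over channels x frames x height x width per clip.
        self.cnn = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # collapse to one vector per clip
            nn.Flatten(),
            nn.Linear(32, feature_dim),
        )
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, num_clips, 3, frames, height, width)
        b, n = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, n, -1)
        tracked, _ = self.lstm(feats)  # expression trajectory over time
        return tracked

# Usage on dummy data: 2 samples, 4 clips of 8 RGB frames at 32x32 pixels.
extractor = ExpressionExtractor()
out = extractor(torch.randn(2, 4, 3, 8, 32, 32))
print(out.shape)  # torch.Size([2, 4, 64])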
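Claims 8 and 20 describe representing the emotional state with a feed forward neural network and comparing the resulting score against a predetermined threshold. A minimal sketch of that comparison, with assumed feature dimensions and an assumed threshold value, could look like this:

import torch
import torch.nn as nn

class EmotionAssessor(nn.Module):
    """Feed forward model: context features plus speech/tone features in,
    an emotional assessment score in [0, 1] out, compared with a threshold."""

    def __init__(self, context_dim: int = 64, speech_dim: int = 32,
                 threshold: float = 0.6):
        super().__init__()
        self.threshold = threshold
        self.net = nn.Sequential(
            nn.Linear(context_dim + speech_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, context: torch.Tensor, speech: torch.Tensor):
        score = self.net(torch.cat([context, speech], dim=-1)).squeeze(-1)
        needs_adjustment = score > self.threshold  # trigger emotion normalization
        return score, needs_adjustment

# Usage on dummy feature vectors for a single interaction.
assessor = EmotionAssessor()
score, adjust = assessor(torch.randn(1, 64), torch.randn(1, 32))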
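Claims 10 and 22 recite morphing the first user's negative emotions with positive ones using a variational autoencoder-generative adversarial network. The following sketch, assuming flattened 64x64 face images and a learned "positive" latent direction, outlines only the inference-time morphing path; the adversarial training loop and loss terms are omitted, and every dimension and name here is an assumption rather than the claimed design.

import torch
import torch.nn as nn

class ExpressionVAEGAN(nn.Module):
    """Minimal VAE-GAN-style sketch: encode the customer's face, shift the
    latent code toward an assumed 'positive emotion' direction, and decode
    the morphed (smiling) face. The discriminator is only a stub."""

    def __init__(self, latent_dim: int = 32, img_dim: int = 64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid(),
        )
        self.discriminator = nn.Sequential(   # adversarial critic (unused stub)
            nn.Linear(img_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        # Latent direction assumed to encode the 'negative -> positive' change.
        self.positive_direction = nn.Parameter(torch.randn(latent_dim))

    def morph_to_positive(self, face: torch.Tensor, strength: float = 1.0):
        h = self.encoder(face)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z = z + strength * self.positive_direction            # add the smile
        return self.decoder(z)

# Usage on a dummy flattened face image.
model = ExpressionVAEGAN()
morphed = model.morph_to_positive(torch.rand(1, 64 * 64))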

Documents

Application Documents

# Name Date
1 201941044684-FORM 1 [04-11-2019(online)].pdf 2019-11-04
2 201941044684-COMPLETE SPECIFICATION [04-11-2019(online)].pdf 2019-11-04
3 201941044684-DRAWINGS [04-11-2019(online)].pdf 2019-11-04
4 201941044684-STATEMENT OF UNDERTAKING (FORM 3) [04-11-2019(online)].pdf 2019-11-04
5 201941044684-PROOF OF RIGHT [04-11-2019(online)].pdf 2019-11-04
6 201941044684-POWER OF AUTHORITY [04-11-2019(online)].pdf 2019-11-04
7 201941044684-FORM 18 [06-11-2019(online)].pdf 2019-11-06
8 201941044684-Form 1 (Submitted on date of filing) [08-11-2019(online)].pdf 2019-11-08
9 201941044684-Request Letter-Correspondence [08-11-2019(online)].pdf 2019-11-08
10 Correspondence by Agent_Form1,Form26_08-11-2019.pdf 2019-11-08
11 201941044684-FORM 3 [18-02-2020(online)].pdf 2020-02-18
12 201941044684-FER.pdf 2021-10-17
13 201941044684-FORM 3 [21-01-2022(online)].pdf 2022-01-21
14 201941044684-PETITION UNDER RULE 137 [21-01-2022(online)].pdf 2022-01-21
15 201941044684-FER_SER_REPLY [25-01-2022(online)].pdf 2022-01-25
16 201941044684-CLAIMS [25-01-2022(online)].pdf 2022-01-25
17 201941044684-PatentCertificate08-01-2024.pdf 2024-01-08
18 201941044684-IntimationOfGrant08-01-2024.pdf 2024-01-08

Search Strategy

1 2021-06-1509-37-07E_13-07-2021.pdf

ERegister / Renewals

3rd: 01 Apr 2024 (From 04/11/2021 to 04/11/2022)
4th: 01 Apr 2024 (From 04/11/2022 to 04/11/2023)
5th: 01 Apr 2024 (From 04/11/2023 to 04/11/2024)
6th: 01 Apr 2024 (From 04/11/2024 to 04/11/2025)
7th: 01 Apr 2024 (From 04/11/2025 to 04/11/2026)
8th: 01 Apr 2024 (From 04/11/2026 to 04/11/2027)
9th: 01 Apr 2024 (From 04/11/2027 to 04/11/2028)
10th: 01 Apr 2024 (From 04/11/2028 to 04/11/2029)
11th: 01 Apr 2024 (From 04/11/2029 to 04/11/2030)
12th: 01 Apr 2024 (From 04/11/2030 to 04/11/2031)
13th: 01 Apr 2024 (From 04/11/2031 to 04/11/2032)
14th: 01 Apr 2024 (From 04/11/2032 to 04/11/2033)
15th: 01 Apr 2024 (From 04/11/2033 to 04/11/2034)