Abstract: HEADSET AND METHOD FOR DETECTING EMOTIONS
A headset (100) for detecting emotions is disclosed. The headset (100) is adapted to detect brain signals of a user. The headset (100) is further adapted to process the brain signals using a machine learning technique. The headset (100) divides the received brain signals into segments using scaling and extracts features of the segmented brain signals. The headset (100) further classifies the extracted features based on predefined features stored in a training database (108) using a specified classifier to detect the emotions. The headset (100) generates a notification regarding the detected emotions on a user device (110). Claims: 10, Figures: 3
Description: BACKGROUND
Field of Invention
[001] Embodiments disclosed herein relate, in general, to a field of machine learning, and more particularly, to a headset and method for accurate detection of emotions using a machine learning technique.
Description of Related Art
[002] Emotion recognition is the process of identifying and understanding human emotions, typically through the analysis of various cues such as facial expressions, body language, and vocal tone. This technology has numerous potential applications, including but not limited to human-computer interfaces, market research, mental health monitoring, and human-robot interaction.
[003] An ability to detect and understand human emotions is crucial for improving the quality and effectiveness of various human-computer interactions. For instance, in the realm of customer service, emotion recognition can enhance the user experience by allowing computers and robots to respond to a user's emotional state. In the field of healthcare, it can assist in the early detection of mental health disorders by analyzing patients' facial expressions and vocal cues, thereby enabling timely intervention and treatment. In educational settings, emotion recognition can be employed to tailor instructional materials and techniques to students' emotional states, optimizing the learning process. Moreover, it can be used in market research to gauge consumer reactions to products, advertisements, or services.
[004] Despite its potential, the current state of image-based emotion recognition systems has several significant disadvantages, limiting their effectiveness and accuracy. Current image-based emotion recognition systems primarily rely on analyzing isolated facial expressions, ignoring valuable contextual information. Human emotions are often complex and influenced by a wide range of factors, including body language, voice intonation, and the surrounding environment. Focusing solely on facial expressions can result in misinterpretations and incomplete emotional assessments.
[005] Image-based systems are sensitive to variations in lighting, background, and image quality. These factors can lead to inaccuracies in emotion detection, as they can alter the appearance of facial features and expressions, resulting in false positives or negatives. The use of facial recognition technologies in public and private spaces has raised significant privacy concerns. Individuals are often uncomfortable with their emotions being continually monitored and recorded without their consent, leading to ethical and legal issues.
[006] Many current emotion recognition systems are trained on datasets that cannot be representative of diverse cultural and demographic groups. This can result in biases and inaccuracies when applied to individuals from different backgrounds, as emotional expressions can vary significantly across cultures.
[007] Achieving real-time emotion recognition with image-based systems can be challenging due to the need for extensive computational resources. This limitation hinders the adoption of emotion recognition in time-sensitive applications, such as human-robot interaction or autonomous vehicles.
[008] Emotions are often expressed through multiple modalities, including facial expressions, body language, and vocal cues. Current systems typically focus on only one modality at a time, missing out on the valuable information provided by others.
[009] There is thus a need for an improved and advanced headset that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[0010] Embodiments in accordance with the present invention provide a headset to detect emotions. The headset comprises a sensor adapted to detect brain signals of a user, and a processor connected to the sensor. The processor is configured to receive the brain signals from the sensor. The processor is further configured to divide the received brain signals into segments using scaling. The processor is further configured to extract features of the segmented brain signals, wherein the extracted features are selected based on a spectral power, a coherence between brain regions, event-related potentials (ERPs), or a combination thereof. The processor is further configured to classify the extracted features based on predefined features stored in a training database using a specified classifier to detect the emotions. The processor is further configured to generate a notification regarding the detected emotions on a user device.
[0011] Embodiments in accordance with the present invention further provide a method for detecting emotions using a headset. The method comprises the steps of: detecting brain signals of a user's brain using a sensor; receiving the brain signals from the sensor; dividing the received brain signals into segments using scaling; extracting features of the segmented brain signals; classifying the extracted features based on predefined features stored in a training database using a specified classifier to detect the emotions; and generating a notification regarding the detected emotions on a user device.
[0012] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a headset and a method for detecting emotions.
[0013] Next, embodiments of the present application may provide a headset and a method for detecting emotions that utilize a machine learning technique.
[0014] Next, embodiments of the present application may provide a headset and a method for detecting emotions that are helpful for early detection of emotions such as happiness, sadness, anger, depression, excitement, and so forth.
[0015] These and other advantages will be apparent from the present application of the embodiments described herein.
[0016] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0018] FIG. 1 illustrates a block diagram of a headset to detect emotions, according to an embodiment of the present invention;
[0019] FIG. 2 illustrates a block diagram of a processor, according to an embodiment of the present invention; and
[0020] FIG. 3 depicts a flowchart of a method for detecting emotions, according to an embodiment of the present invention.
[0021] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0022] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description that the invention is not limited to these illustrated embodiments but also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0023] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0024] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0025] FIG. 1 illustrates a headset 100 to detect emotions (hereinafter referred to as the headset 100), according to an embodiment of the present invention. The emotions may be, but are not limited to, happiness, sadness, anger, depression, excitement, fear, surprise, calmness, anxiety, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of emotion, including known, related art, and/or later developed technologies. The headset 100 may comprise a sensor 102, a processor 104, an application server 106, a training database 108, a user device 110, and a computer application 112, in an embodiment of the present invention.
[0026] The sensor 102 may be adapted to detect brain signals of a user upon wearing the headset 100. The brain signals may be electroencephalogram (EEG) signals that may be detected by the sensor 102 in real time.
[0027] In an embodiment of the present invention, the processor 104 may be located on the application server 106 and may be connected to the sensor 102. In another embodiment of the present invention, the processor 104 may be located in the headset 100 and connected to the sensor 102 to receive the detected brain signals. The processor 104 may be configured to execute programming instructions associated with the headset 100, in an embodiment of the present invention. In a preferred embodiment of the present invention, the processor 104 may be an ESP-32. According to embodiments of the present invention, the processor 104 may be, but not limited to, a Programmable Logic Control unit (PLC), a microcontroller, a microprocessor, a computing device, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processor 104, including known, related art, and/or later developed technologies.
[0028] In an embodiment of the present invention, the processor 104 may receive the detected brain signals from the sensor 102. The processor 104 may divide the received brain signals into segments using scaling. In an embodiment of the present invention, the scaling may be, but not limited to, a horizontal scaling, a vertical scaling, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the scaling, including known, related art, and/or later developed technologies. The processor 104 may further extract features of the segmented brain signals. The extracted features may be selected based on, but not limited to, a spectral power, a coherence between brain regions, event-related potentials (ERPs), and so forth. Embodiments of the present invention are intended to include or otherwise cover any extracted features, including known, related art, and/or later developed technologies.
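The specification leaves segmentation and feature extraction at this level of generality. Purely for illustration, the following Python sketch shows one plausible reading: fixed-length windowing of a single EEG channel followed by band-power features estimated with Welch's method. The sampling rate, window length, band edges, and every function name here are assumptions introduced for the sketch, not details from the disclosure.

```python
# Illustrative sketch only: segmenting EEG into fixed windows and computing
# spectral band-power features. All constants and names are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256          # assumed EEG sampling rate in Hz
WINDOW_SEC = 2.0  # assumed segment length in seconds

BANDS = {         # conventional EEG frequency bands (assumed edges)
    "delta": (1, 4), "theta": (4, 8),
    "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45),
}

def segment(signal: np.ndarray, fs: int = FS, window_sec: float = WINDOW_SEC):
    """Divide a 1-D EEG channel into non-overlapping fixed-length segments."""
    step = int(fs * window_sec)
    n_segments = len(signal) // step
    return signal[: n_segments * step].reshape(n_segments, step)

def band_powers(seg: np.ndarray, fs: int = FS) -> np.ndarray:
    """Spectral power per band, estimated with Welch's method."""
    freqs, psd = welch(seg, fs=fs, nperseg=min(len(seg), fs))
    return np.array([
        np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                 freqs[(freqs >= lo) & (freqs < hi)])
        for lo, hi in BANDS.values()
    ])

# Example: 10 seconds of synthetic single-channel EEG.
eeg = np.random.randn(10 * FS)
features = np.vstack([band_powers(s) for s in segment(eeg)])
print(features.shape)  # (n_segments, n_bands)
```

This sketch covers only the spectral-power feature family; coherence between brain regions and ERP features, also named in the paragraph above, would require multi-channel and event-locked data respectively.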
[0029] In an embodiment of the present invention, the processor 104 may further classify the extracted features based on predefined features stored in the training database 108 using a specified classifier to detect the emotions. In an embodiment of the present invention, the classifier may be, but not limited to, a K-nearest neighbor (KNN), a support vector machine (SVM), a decision tree, a random forest (RF), a naive Bayes (NB), a logistic regression (LR), a rule generation, artificial neural networks (ANNs), a Deep Convolutional Neural Network (CNN) classifier, and so forth. The processor 104 may further generate a notification regarding the detected emotions on the user device 110.
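As a minimal sketch of this classification step under stated assumptions, the snippet below trains one of the listed classifiers (an SVM, via scikit-learn) on a placeholder stand-in for the training database 108 and predicts one emotion per segment. The label set, the synthetic data, and the library choice are illustrative assumptions, not part of the specification.

```python
# Illustrative sketch only: classifying extracted features against a labelled
# training set with an SVM, one of the classifiers named in paragraph [0029].
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["happy", "sad", "angry", "calm"]  # assumed label set

# Placeholder for the "predefined features stored in a training database":
# each row is a feature vector; y_train holds its known emotion label.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = rng.choice(EMOTIONS, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# `features` would be the per-segment matrix from the previous sketch.
features = rng.normal(size=(5, 5))
print(clf.predict(features))  # one detected emotion per segment
```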
[0030] In an embodiment of the present invention, the application server 106 may be, but not limited to, a laptop, a desktop, and the like. The application server 106 may be a cloud server, in an embodiment of the present invention. Embodiments of the present invention are intended to include or otherwise cover any type of the application server 106, including known, related art, and/or later developed technologies.
[0031] The user device 110 may be a device used by the user, in an embodiment of the present invention. The user device 110 may receive the notification regarding the detected emotions. The user device 110 may be, but not limited to, a personal computer, a consumer device, and the like. Embodiments of the present invention are intended to include or otherwise cover any type of the user device 110, including known, related art, and/or later developed technologies. In an embodiment of the present invention, the personal computer may be, but not limited to, a desktop, a server, a laptop, and the like. Embodiments of the present invention are intended to include or otherwise cover any type of the personal computer, including known, related art, and/or later developed technologies.
[0032] Further, in an embodiment of the present invention, the consumer device may be, but not limited to, a tablet, a mobile phone, a notebook, a netbook, a smartphone, a wearable device, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the consumer device, including known, related art, and/or later developed technologies.
[0033] In a preferred embodiment of the present invention, the user device 110 may comprise the computer application 112, which may be a computer-readable program installed on the user device 110 for executing functions associated with the headset 100. Further, in an embodiment of the present invention, the computer application 112 may enable the user to log in to the headset 100 by providing login details such as, but not limited to, a user identifier, a password, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the login details that may be associated with the user. Upon logging in to the headset 100, the user may provide user inputs to the headset 100 by using the computer application 112. In another embodiment of the present invention, the computer application 112 may enable the user to provide the user inputs to the headset 100 without providing the login details.
[0034] In an embodiment of the present invention, the user device 110 may communicate to the application server 106 using a communication network (not shown). According to an embodiment of the present invention, the communication network may be a data network such as, but not limited to, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the data network, including known, related art, and/or later developed technologies. In another embodiment of the present invention, the communication network may be a wireless network, such as, but not limited to, a cellular network and may employ various technologies including an Enhanced Data Rates for Global Evolution (EDGE), a General Packet Radio Service (GPRS), and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the wireless network, including known, related art, and/or later developed technologies.
[0035] FIG. 2 illustrates a block diagram of the processor 104 of the headset 100, according to an embodiment of the present invention. The processor 104 may comprise programming instructions in form of programming modules such as a signal receiving module 200, a signal processing module 202, a prediction module 204, and a notification module 206, in an embodiment of the present invention.
[0036] In an embodiment of the present invention, the signal receiving module 200 may receive the brain signals from the sensor 102. The signal receiving module 200 may be configured to pre-process the received brain signals using a pre-processing technique such as, but not limited to, a signal filtration, a noise elimination, an artifact removal, and so forth. Embodiments of the present disclosure are intended to include or otherwise cover any type of the pre-processing technique, including known, related art, and/or later developed technologies.
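The disclosure does not fix a concrete pre-processing technique. Purely as an assumption-laden sketch, the snippet below applies two steps commonly used on EEG: a band-pass filter and a mains-notch filter. The cut-off frequencies, filter order, sampling rate, and function name are invented here for illustration.

```python
# Illustrative sketch only: one common EEG pre-processing chain.
# Band-pass 1-45 Hz, then remove 50 Hz mains interference (all assumed values).
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 256  # assumed sampling rate in Hz

def preprocess(raw: np.ndarray, fs: int = FS) -> np.ndarray:
    """Zero-phase band-pass plus notch filtering of a raw EEG channel."""
    b, a = butter(4, [1, 45], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw)
    b_n, a_n = iirnotch(w0=50, Q=30, fs=fs)
    return filtfilt(b_n, a_n, filtered)

clean = preprocess(np.random.randn(10 * FS))
```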
[0037] In an embodiment of the present invention, the signal receiving module 200 may be coupled to the signal processing module 202. The signal processing module 202 may receive the pre-processed brain signals from the signal receiving module 200. The signal processing module 202 may divide the received pre-processed brain signals into the segments using the scaling. Further, the signal processing module 202 may extract features of the segmented brain signals.
[0038] In an embodiment of the present invention, the signal processing module 202 may be coupled to the prediction module 204. The prediction module 204 may utilize the training database 108 for classifying the brain signals based on extracted features. The extracted features of the brain signals may be matched with prestored brain signals of the training database 108 and may be classified to detect the emotions. In an embodiment of the present invention, the prediction module 204 may be coupled to the notification module 206. The notification module 206 may be adapted to generate a notification based on an output of the prediction module 204. The notification module 206 may transmit the notification to the user device 110.
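Tying the four modules of FIG. 2 together, here is a minimal end-to-end sketch of the flow, assuming the hypothetical preprocess(), segment(), band_powers(), and clf helpers from the earlier sketches are in scope; none of those names come from the specification itself.

```python
# Illustrative sketch only: modules 200 -> 202 -> 204 -> 206 as one chained flow,
# reusing the hypothetical helpers defined in the previous sketches.
import numpy as np

def detect_emotions(raw_eeg: np.ndarray) -> list[str]:
    """End-to-end flow mirroring the module pipeline of FIG. 2."""
    cleaned = preprocess(raw_eeg)    # signal receiving module 200: pre-processing
    segments = segment(cleaned)      # signal processing module 202: segmentation
    feats = np.vstack([band_powers(s) for s in segments])  # feature extraction
    return clf.predict(feats).tolist()  # prediction module 204: classification

def notify_user(labels: list[str]) -> None:
    """Notification module 206 stand-in: report detections to the user device 110."""
    print(f"Detected emotions: {labels}")

notify_user(detect_emotions(np.random.randn(10 * 256)))
```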
[0039] FIG. 3 depicts a flowchart of a method 300 for detecting emotions using the headset 100, according to an embodiment of the present invention. The method 300 comprises the steps of:
[0040] At step 302, the headset 100 may enable detection of the brain signals using the sensor 102.
[0041] At step 304, the headset 100 may receive the brain signals from the sensor 102 by using the processor 104.
[0042] At step 306, the headset 100 may divide the received brain signals into the segments using the scaling.
[0043] At step 308, the headset 100 may extract the features of the segmented brain signals.
[0044] At step 310, the headset 100 may classify the extracted features based on the predefined features stored in the training database 108 using the specified classifier to detect the emotions.
[0045] At step 312, the headset 100 may generate the notification regarding the detected emotions on the user device 110.
[0046] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0047] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or headsets and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims: CLAIMS
I/We Claim:
1. A headset (100) to detect emotions, the headset (100) comprising:
a sensor (102) adapted to detect brain signals of a user; and
a processor (104) connected to the sensor (102), characterized in that the processor (104) is configured to:
receive the brain signals from the sensor (102);
divide the received brain signals into segments using scaling;
extract features of the segmented brain signals, wherein the extracted features are selected based on a spectral power, a coherence between brain regions, event-related potentials (ERPs), or a combination thereof;
classify the extracted features based on predefined features stored in a training database (108) using a specified classifier to detect the emotions; and
generate a notification regarding the detected emotions on a user device (110).
2. The headset (100) as claimed in claim 1, wherein the scaling used by the processor (104) to divide the brain signals is selected from a horizontal scaling, a vertical scaling, or a combination thereof.
3. The headset (100) as claimed in claim 1, wherein the classifier is selected from a K-nearest neighbor (KNN), a support vector machine (SVM), a decision tree, a random forest (RF), a naive Bayes (NB), a logistic regression (LR), a rule generation, artificial neural networks (ANNs), a Deep Convolutional Neural Network (CNN) classifier, or a combination thereof.
4. The headset (100) as claimed in claim 1, wherein the brain signals are selected from magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, or a combination thereof.
5. The headset (100) as claimed in claim 1, wherein the processor (104) is located on an application server (106).
6. A method for detecting emotions using a headset (100), the method comprising steps of:
detecting brain signals of a user's brain using a sensor (102);
receiving the brain signals from the sensor (102);
dividing the received brain signals into segments using scaling;
extracting features of the segmented brain signals;
classifying the extracted features based on predefined features stored in a training database (108) using a specified classifier to detect the emotions; and
generating a notification regarding the detected emotions on a user device (110).
7. The method as claimed in claim 6, wherein the scaling used by the processor (104) to divide the brain signals is selected from a horizontal scaling, a vertical scaling, or a combination thereof.
8. The method as claimed in claim 6, wherein the classifier is selected from a K-nearest neighbor (KNN), a support vector machine (SVM), a decision tree, a random forest (RF), a naive Bayes (NB), a logistic regression (LR), a rule generation, artificial neural networks (ANNs), a Deep Convolutional Neural Network (CNN) classifier, or a combination thereof.
9. The method as claimed in claim 6, wherein the brain signals are selected from magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, or a combination thereof.
10. The method as claimed in claim 6, wherein the processor (104) is located on an application server (106).
Date: December 05, 2023
Place: Noida
Dr. Keerti Gupta
Agent for the Applicant
(IN/PA-1529)
| # | Name | Date |
|---|---|---|
| 1 | 202341083327-STATEMENT OF UNDERTAKING (FORM 3) [06-12-2023(online)].pdf | 2023-12-06 |
| 2 | 202341083327-REQUEST FOR EARLY PUBLICATION(FORM-9) [06-12-2023(online)].pdf | 2023-12-06 |
| 3 | 202341083327-POWER OF AUTHORITY [06-12-2023(online)].pdf | 2023-12-06 |
| 4 | 202341083327-OTHERS [06-12-2023(online)].pdf | 2023-12-06 |
| 5 | 202341083327-FORM-9 [06-12-2023(online)].pdf | 2023-12-06 |
| 6 | 202341083327-FORM FOR SMALL ENTITY(FORM-28) [06-12-2023(online)].pdf | 2023-12-06 |
| 7 | 202341083327-FORM 1 [06-12-2023(online)].pdf | 2023-12-06 |
| 8 | 202341083327-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-12-2023(online)].pdf | 2023-12-06 |
| 9 | 202341083327-EDUCATIONAL INSTITUTION(S) [06-12-2023(online)].pdf | 2023-12-06 |
| 10 | 202341083327-DRAWINGS [06-12-2023(online)].pdf | 2023-12-06 |
| 11 | 202341083327-DECLARATION OF INVENTORSHIP (FORM 5) [06-12-2023(online)].pdf | 2023-12-06 |
| 12 | 202341083327-COMPLETE SPECIFICATION [06-12-2023(online)].pdf | 2023-12-06 |
| 13 | 202341083327-Proof of Right [15-02-2024(online)].pdf | 2024-02-15 |