
An EEG-Based Emotion Recognition System with Feature-Level Explainability Using Deep Learning

Abstract: The invention discloses an EEG-based system and method for emotion recognition with feature-level explainability using deep learning. EEG signals are acquired from multiple electrodes and preprocessed using band-pass filtering and Independent Component Analysis to remove artifacts. Features including power spectral density and wavelet coefficients are extracted to represent spatial and temporal characteristics of brain activity. A deep learning model comprising a hybrid convolutional neural network and long short-term memory network classifies emotional states such as happiness, sadness, stress, or calmness. To enhance transparency, an explainability module quantifies and displays the contribution of each EEG channel and feature to the prediction, enabling interpretable decision-making. The system operates in real time or offline and is adaptable to different EEG devices. Applications include mental health monitoring, education, and brain-computer interfaces. The invention ensures accurate, interpretable, and user-trusted emotion recognition suitable for sensitive clinical and research environments.


Patent Information

Application #
202541090195
Filing Date
22 September 2025
Publication Number
43/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. T. MADHAVI
PHD SCHOLAR, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. SRIDHAR CHINTALA
ASST.PROFESSOR (CS&AI), SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:
FIELD OF THE INVENTION
The present invention relates to the field of artificial intelligence, biomedical signal processing, and brain-computer interfaces. More specifically, it pertains to an electroencephalogram (EEG)-based system and method for emotion recognition using deep learning models integrated with feature-level explainability. The invention combines preprocessing techniques, hybrid neural network architectures, and explainable artificial intelligence to provide accurate and interpretable emotion detection from EEG signals.
BACKGROUND OF THE INVENTION
Recognizing human emotions accurately from brain signals is a complex task due to the noisy and nonlinear nature of electroencephalogram (EEG) data. Conventional machine learning models often lack the ability to explain why a certain emotion was predicted, reducing trust and usability in sensitive applications such as mental health monitoring. There is a need for a deep learning-based system that can not only classify emotions from EEG signals with high accuracy but also provide clear feature-level explanations for its predictions.
US20190347476: Disclosed are a method and system for estimating human emotions using a deep psychological affect network for human emotion recognition. According to an embodiment of that disclosure, the method includes obtaining a physiological signal of a user, training a network that receives the obtained physiological signal using a temporal margin-based classification loss function as learning progresses along the time axis, and estimating the user's emotion through the network trained with that loss function.
US20220319536: That invention relates to an emotion recognition method implemented by a processor. Provided are an emotion recognition method and a device using the same, the method comprising: providing content to a user; receiving the user's biosignal data while the content is being provided; and recognizing the user's emotion with respect to the content using an emotion classification model trained to classify emotions on the basis of a plurality of emotion-labeled biosignal data.
Emotion recognition from EEG signals remains challenging due to the noisy and nonlinear nature of brainwave data. Conventional machine learning systems often fail to classify emotions accurately and lack transparency, functioning as “black boxes” that do not reveal which features contributed to their predictions. This absence of explainability reduces trust in clinical, educational, and mental health applications where interpretability is essential.
The present invention solves these problems by developing a deep learning-based EEG emotion recognition system that incorporates robust preprocessing, hybrid feature extraction, and feature-level explainability. By highlighting the role of specific brainwave features and electrode channels in classification, the system ensures both high accuracy and interpretability, making it suitable for trust-sensitive environments.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The invention discloses an EEG-based system and method for emotion recognition that integrates deep learning with feature-level explainability. EEG signals are first preprocessed using band-pass filtering and Independent Component Analysis to remove artifacts. Features such as power spectral density and wavelet coefficients are then extracted to capture spatial and temporal information from the EEG.
The features are fed into a deep learning model based on a hybrid architecture combining Convolutional Neural Networks for spatial feature extraction and Long Short-Term Memory networks for temporal pattern recognition. The architecture is optimized for the nonlinear and sequential properties of EEG signals, ensuring robust classification of emotional states such as happiness, sadness, stress, and calmness.
To overcome the lack of interpretability in existing systems, the invention integrates explainability mechanisms that identify and highlight the relative importance of EEG features and electrode channels in each classification decision. This ensures transparency, improves user trust, and enables meaningful insights for clinicians, researchers, and educators.
The system operates in real time or offline, is modular for integration with healthcare and neurofeedback platforms, and is scalable for use across different EEG datasets and hardware configurations. By combining accuracy with interpretability, the invention sets a new standard for EEG-based emotion recognition.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the use of the accompanying drawings.
The proposed invention is a deep learning-based system for emotion recognition using EEG signals, enhanced with feature-level explainability. The EEG signals recorded from the brain are preprocessed to remove artifacts using techniques such as band-pass filtering and Independent Component Analysis (ICA). The cleaned signals are then converted into features such as power spectral density and wavelet coefficients. These features are fed into a custom-designed deep learning architecture, such as a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network, to accurately classify emotional states (e.g., happy, sad, stressed). To enhance transparency, the system integrates explainability tools such as SHAP (SHapley Additive exPlanations), which highlight the contribution of each EEG channel and feature to the final emotion prediction. This allows end users, such as clinicians or researchers, to understand the reasoning behind each prediction, making the system both accurate and interpretable.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of “first”, “second”, “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining “first” and “second” may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The proposed invention is a deep learning-based system for emotion recognition using EEG signals, enhanced with feature-level explainability. The EEG signals recorded from the brain are preprocessed to remove artifacts using techniques such as band-pass filtering and Independent Component Analysis (ICA). The cleaned signals are then converted into features such as power spectral density and wavelet coefficients. These features are fed into a custom-designed deep learning architecture, such as a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network, to accurately classify emotional states (e.g., happy, sad, stressed). To enhance transparency, the system integrates explainability tools such as SHAP (SHapley Additive exPlanations), which highlight the contribution of each EEG channel and feature to the final emotion prediction. This allows end users, such as clinicians or researchers, to understand the reasoning behind each prediction, making the system both accurate and interpretable.
The proposed invention uniquely integrates a deep learning-based emotion recognition system with feature-level explainability using EEG signals. The system enables not only accurate classification of emotional states but also real-time interpretation of which specific brainwave features and electrode channels influenced the prediction. This system prioritizes interpretability by utilizing SHAP or comparable explainable AI techniques to visualize feature importance, in contrast to previous work that primarily seeks to enhance accuracy. The model uses a hybrid architecture (e.g., CNN-LSTM) optimized for the temporal and spatial characteristics of EEG data. Preprocessing steps such as artifact removal and signal transformation further enhance model reliability. The system is designed for real-time or offline use and supports modular integration with mental health monitoring platforms. This dual focus on performance and explainability makes the invention suitable for clinical, educational, and neurofeedback applications where trust and transparency are critical.
The invention provides a system for emotion recognition that uses EEG signals as primary input. Electrodes positioned on the scalp record brain activity, generating raw signals that are often contaminated with noise from muscle activity, eye blinks, and environmental artifacts. Preprocessing methods including band-pass filtering and Independent Component Analysis are applied to remove noise and retain relevant neurological information.
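By way of illustration only, a minimal Python sketch of this preprocessing stage follows. It assumes a 32-channel recording sampled at 256 Hz held in a NumPy array, and uses SciPy filtering with scikit-learn's FastICA as one possible realization; the artifact components to discard are supplied by the caller, since the specification does not mandate a particular identification criterion.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

FS = 256          # sampling rate in Hz (assumed)
N_CHANNELS = 32   # electrode count (assumed)

def bandpass(raw, low=0.5, high=50.0, fs=FS, order=4):
    # Zero-phase band-pass filter retaining the 0.5-50 Hz EEG band
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, raw, axis=-1)

def remove_artifacts(raw, bad_components=()):
    # Decompose into independent components, zero out those marked as
    # artifacts (eye blinks, muscle noise), and reconstruct the channels.
    ica = FastICA(n_components=raw.shape[0], random_state=0)
    sources = ica.fit_transform(raw.T)          # (n_samples, n_components)
    sources[:, list(bad_components)] = 0.0
    return ica.inverse_transform(sources).T     # (n_channels, n_samples)

clean = remove_artifacts(bandpass(np.random.randn(N_CHANNELS, FS * 10)),
                         bad_components=[0])    # component 0 assumed artifactual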
After preprocessing, features are derived to represent both frequency-domain and time-domain characteristics. Power spectral density captures the distribution of energy across different EEG frequency bands such as alpha, beta, theta, and gamma, each associated with distinct cognitive and emotional states. Wavelet transformations decompose the signals into time-frequency representations, preserving transient emotional cues.
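A corresponding feature-extraction sketch is given below, continuing from the `clean` array of the previous sketch. The band edges and the Daubechies-4 wavelet are common choices assumed for illustration; Welch's method estimates the power spectral density and the PyWavelets library provides the wavelet decomposition.

import numpy as np
import pywt                      # PyWavelets
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 50)}

def band_powers(signal, fs=256):
    # Integrate the Welch PSD over each EEG frequency band
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return {name: float(np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                                 freqs[(freqs >= lo) & (freqs < hi)]))
            for name, (lo, hi) in BANDS.items()}

def wavelet_features(signal, wavelet="db4", level=4):
    # Energy of each decomposition level as a compact time-frequency feature
    return [float(np.sum(c ** 2)) for c in pywt.wavedec(signal, wavelet, level=level)]

# One feature vector per channel: four band powers plus five wavelet energies
features = [list(band_powers(ch).values()) + wavelet_features(ch) for ch in clean]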
The extracted features are fed into a deep learning architecture designed to model both spatial and temporal properties. Convolutional Neural Networks identify patterns across electrode channels, learning localized features relevant to emotional states. Long Short-Term Memory networks process temporal sequences, capturing evolving brain dynamics that indicate emotions such as stress or relaxation.
The hybrid CNN-LSTM model is trained on annotated EEG datasets, with labels corresponding to emotional states determined through validated experimental protocols. The training process optimizes the network to minimize misclassification, achieving high accuracy in emotion recognition tasks.
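The hybrid architecture may be sketched in Keras as follows; the window length (two seconds at 256 Hz), channel count, and layer sizes are illustrative assumptions rather than claimed parameters. Convolutional layers learn patterns across the channel dimension of each time step, and the LSTM consumes the resulting sequence.

import tensorflow as tf

N_CHANNELS, WINDOW, N_CLASSES = 32, 512, 4   # assumed dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_CHANNELS)),
    # Conv1D layers learn localized patterns across electrode channels
    tf.keras.layers.Conv1D(64, kernel_size=7, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    # The LSTM captures how those spatial patterns evolve over time
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=50)
# X_train: (n_windows, WINDOW, N_CHANNELS); y_train: integer emotion labels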
To address the issue of explainability, the system integrates explainable AI tools. Techniques such as Shapley Additive Explanations quantify the contribution of each input feature to the final prediction. The system generates visual or numerical outputs showing which EEG channels or frequency bands were most influential in classifying a particular emotion. This provides feature-level transparency, enabling practitioners to understand how the model arrived at its decision.
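A feature-attribution sketch using the SHAP library follows, applied to the Keras model above. The background batch `X_bg` and test windows `X_test` are assumed inputs, and GradientExplainer is one of several SHAP explainers that could realize the claimed explainability module.

import numpy as np
import shap

explainer = shap.GradientExplainer(model, X_bg)   # X_bg: (n, WINDOW, N_CHANNELS)
pred = int(np.argmax(model.predict(X_test[:1])[0]))
sv = explainer.shap_values(X_test[:1])
# Older SHAP versions return one array per class; newer ones a stacked array
attributions = sv[pred][0] if isinstance(sv, list) else sv[0, ..., pred]

# Average absolute attributions over time to rank electrode channels
channel_importance = np.abs(attributions).mean(axis=0)    # (N_CHANNELS,)
print("Most influential channels:", np.argsort(channel_importance)[::-1][:5])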
The system further allows real-time operation, where EEG signals are streamed continuously, preprocessed, and classified on the fly. The explainability module functions in real time as well, allowing users to visualize which brainwave patterns influence the current emotional state detection.
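A minimal sketch of the streaming mode is shown below. The `block` argument represents samples delivered by a hypothetical acquisition callback; the sketch reuses the `bandpass` helper and `model` from the earlier sketches, and the buffer size is illustrative.

import numpy as np
from collections import deque

WINDOW = 512                           # 2 s analysis window (assumed)
buffer = deque(maxlen=WINDOW)

def on_samples(block, model):
    # block: (N_CHANNELS, n_new_samples) from a hypothetical acquisition callback
    for sample in block.T:
        buffer.append(sample)
    if len(buffer) < WINDOW:
        return None                    # not enough data for a full window yet
    window = bandpass(np.asarray(buffer).T)          # filter (channels, WINDOW)
    x = window.T[None, ...]                          # (1, WINDOW, N_CHANNELS)
    probs = model.predict(x, verbose=0)[0]
    return int(np.argmax(probs))                     # predicted emotion index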
The invention is suitable for applications in mental health monitoring, where clinicians can track stress, anxiety, or mood changes while understanding the underlying EEG correlates. It is equally applicable in brain-computer interfaces, where emotion recognition is essential for adaptive feedback systems.
Educational environments may use the system to gauge student engagement and stress levels, while neurofeedback platforms can integrate it to provide personalized feedback based on emotional states.
The modular design of the invention allows integration with different EEG acquisition hardware, from high-density laboratory systems to portable wearable headsets. The preprocessing, feature extraction, and deep learning modules are adaptable to varying data quality and electrode configurations.
The system is scalable to different languages and cultural contexts, as emotional EEG correlates are largely universal. However, fine-tuning of the model can be performed using population-specific data to enhance performance.
Compared to prior solutions, the invention prioritizes not only accuracy but also interpretability. Most existing deep learning systems for EEG emotion recognition lack the ability to explain feature contributions, reducing their utility in clinical practice. This invention directly addresses that limitation, creating a balance between performance and trust.
Best Method of Working
The best method of working involves acquiring EEG signals through standard non-invasive electrodes placed according to the international 10–20 system. Signals are preprocessed using band-pass filters between 0.5–50 Hz and Independent Component Analysis to remove artifacts. Features are extracted in the form of power spectral density and wavelet coefficients, representing both spatial and temporal information. These features are input into a hybrid CNN-LSTM architecture trained on labeled EEG datasets. The trained model is deployed with integrated explainability tools, enabling real-time classification and visualization of feature-level contributions. The system is implemented on GPU-enabled platforms for efficiency and supports integration with clinical or wearable EEG devices.
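Under the same assumptions, the best-method pipeline can be wired together as follows, reusing the preprocessing helpers and trained model from the sketches above to produce one emotion label per analysis window.

import numpy as np

def classify_recording(raw, fs=256, window=512, hop=256):
    # raw: (n_channels, n_samples) recorded under the 10-20 montage
    clean = remove_artifacts(bandpass(raw, fs=fs))
    starts = range(0, clean.shape[1] - window + 1, hop)
    windows = np.stack([clean[:, s:s + window].T for s in starts])
    probs = model.predict(windows, verbose=0)        # (n_windows, N_CLASSES)
    return np.argmax(probs, axis=-1)                 # one emotion label per window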


Claims:
1. An EEG-based emotion recognition system comprising:
an EEG acquisition module configured to capture brain signals through multiple electrodes;
a preprocessing module configured to remove artifacts using filtering and independent component analysis;
a feature extraction module configured to compute power spectral density and wavelet-based features;
a deep learning module comprising a hybrid convolutional neural network and long short-term memory network configured to classify emotional states; and
an explainability module configured to identify and display feature-level contributions of EEG signals to classification outcomes,
wherein the system provides accurate and interpretable emotion recognition.
2. The system as claimed in claim 1, wherein the preprocessing module applies a band-pass filter between 0.5 Hz and 50 Hz.
3. The system as claimed in claim 1, wherein the feature extraction module computes frequency-domain features corresponding to alpha, beta, theta, and gamma bands.
4. The system as claimed in claim 1, wherein the explainability module employs feature attribution techniques to highlight important EEG channels or frequency bands.
5. The system as claimed in claim 1, wherein the system operates in real time or offline mode for emotion recognition.
6. A method for EEG-based emotion recognition, the method comprising:
acquiring EEG signals through electrodes;
preprocessing the signals to remove artifacts;
extracting power spectral density and wavelet-based features from the signals;
classifying emotional states using a deep learning model comprising convolutional and long short-term memory networks; and
explaining the classification results by identifying feature-level contributions,
wherein the method provides both accurate and interpretable predictions.
7. The method as claimed in claim 6, wherein preprocessing comprises applying band-pass filtering and independent component analysis.
8. The method as claimed in claim 6, wherein classification includes distinguishing emotional states selected from happiness, sadness, stress, and calmness.
9. The method as claimed in claim 6, wherein the explainability module provides real-time visualization of influential features.
10. The method as claimed in claim 6, wherein the method is implemented on clinical, educational, or neurofeedback platforms for emotion monitoring.

Documents

Application Documents

# Name Date
1 202541090195-STATEMENT OF UNDERTAKING (FORM 3) [22-09-2025(online)].pdf 2025-09-22
2 202541090195-REQUEST FOR EARLY PUBLICATION(FORM-9) [22-09-2025(online)].pdf 2025-09-22
3 202541090195-POWER OF AUTHORITY [22-09-2025(online)].pdf 2025-09-22
4 202541090195-FORM-9 [22-09-2025(online)].pdf 2025-09-22
5 202541090195-FORM FOR SMALL ENTITY(FORM-28) [22-09-2025(online)].pdf 2025-09-22
6 202541090195-FORM 1 [22-09-2025(online)].pdf 2025-09-22
7 202541090195-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-09-2025(online)].pdf 2025-09-22
8 202541090195-EVIDENCE FOR REGISTRATION UNDER SSI [22-09-2025(online)].pdf 2025-09-22
9 202541090195-EDUCATIONAL INSTITUTION(S) [22-09-2025(online)].pdf 2025-09-22
10 202541090195-DRAWINGS [22-09-2025(online)].pdf 2025-09-22
11 202541090195-DECLARATION OF INVENTORSHIP (FORM 5) [22-09-2025(online)].pdf 2025-09-22
12 202541090195-COMPLETE SPECIFICATION [22-09-2025(online)].pdf 2025-09-22