
A Deep ICA-Based Hybrid System for Real-Time Elimination of Ocular Artifacts in EEG Signals

Abstract: Disclosed herein is a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals (100), comprising an independent component analysis (ICA) module (102) configured to decompose incoming EEG signals into statistically independent components. The system also includes a convolutional neural network (CNN) module (104) configured to classify the decomposed components into ocular artifact-related components and brain activity components. The system also includes an artifact removal module (106) configured to selectively remove or correct the ocular artifact-related components. The system also includes a feedback module (108) configured to continuously refine the classification and artifact elimination process. The system also includes a transfer learning unit (110) configured to enable adaptation of the system across different EEG devices and subject conditions with minimal calibration. The system also includes a validation module (112) configured to assess the quality of reconstructed EEG signals using event-related potential recovery and signal quality indices.


Patent Information

Application #
Filing Date
07 October 2025
Publication Number
46/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. PUSHYAMI
PHD SCHOLAR, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. SRIDHAR CHINTALA
ASST. PROFESSOR (CS&AI), SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:FIELD OF DISCLOSURE
[0001] The present disclosure relates generally to the field of biomedical signal processing. More specifically, it pertains to a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals.
BACKGROUND OF THE DISCLOSURE
[0002] Electroencephalography (EEG) has long been recognized as one of the most significant non-invasive techniques for recording the electrical activity of the human brain. Since its introduction in the early twentieth century, EEG has evolved into a central tool in neuroscience, clinical diagnostics, and brain–computer interface (BCI) applications. The method relies on placing electrodes over the scalp to capture minute voltage fluctuations generated by neuronal firing within the cerebral cortex. Because of its millisecond-level temporal resolution, EEG offers insights into the dynamic processes of brain activity, distinguishing it from other neuroimaging modalities such as fMRI or PET, which provide higher spatial but lower temporal resolution. This fine-grained temporal precision makes EEG indispensable for monitoring fast cognitive events, neurological disorders, sleep studies, and real-time interfaces where rapid feedback is essential.
[0003] Despite its strengths, EEG suffers from a fundamental vulnerability to contamination by non-neural physiological signals. These signals, commonly referred to as artifacts, originate from muscular activity, environmental interference, and, most notably, ocular sources such as eye blinks, saccades, and other movements of the eyes and surrounding muscles. Among all the contaminating factors, ocular artifacts pose the greatest challenge because of their high amplitude relative to neural EEG signals and their frequency overlap with cognitive-related brain rhythms. In fact, the potential difference generated by the corneo-retinal dipole during eye blinks or gaze shifts can be ten times greater than the cortical signals of interest. This overwhelming presence not only distorts the recorded data but can also lead to severe misinterpretation of results, making the removal or attenuation of ocular artifacts an indispensable step in EEG preprocessing.
[0004] Ocular artifacts are particularly problematic because of their ubiquity and unpredictability. Unlike muscular artifacts from jaw clenching or head movements, which can often be minimized through participant instructions, eye blinks and saccades are involuntary and unavoidable. Blinking is essential for ocular health, while gaze shifts are intrinsic to visual exploration. As a result, complete prevention of ocular artifacts during EEG recording is virtually impossible. Their impact is especially pronounced in frontal electrodes, which are located closest to the eyes, but their influence can spread across all channels through volume conduction. This widespread contamination presents significant challenges for both clinical and research contexts. For example, in epilepsy monitoring, ocular artifacts can mimic pathological spikes, complicating diagnosis. In BCI applications, where rapid and reliable signal decoding is necessary, the intrusion of blinks can drastically reduce classification accuracy, undermining system usability.
[0005] Historically, several strategies have been explored to mitigate ocular artifacts in EEG recordings. The simplest among them are filtering-based approaches, which employ high-pass, low-pass, or band-pass filters to attenuate undesired frequency components. While these techniques are computationally efficient and suitable for online use, they lack specificity. Eye blinks typically generate broadband signals, spanning frequencies that overlap with genuine neural oscillations. Consequently, filtering methods often fail to completely remove ocular interference without distorting brain activity. This trade-off renders them inadequate for contexts requiring high precision, such as cognitive neuroscience experiments or clinical diagnostics.
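The trade-off described above can be illustrated with a deliberately minimal frequency-domain high-pass filter (a Python sketch with invented signal values; not part of the claimed system). Removing low-frequency bins suppresses a slow, blink-like drift, but any genuine neural activity occupying the same bins would be removed along with it, which is precisely the lack of specificity noted above.

```python
import numpy as np

def highpass_fft(signal, fs, cutoff_hz):
    """Crude high-pass filter: zero out all frequency bins below cutoff_hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

fs = 250                       # assumed sampling rate in Hz
t = np.arange(fs) / fs         # one second of data
# 10 Hz "neural" oscillation plus a slow 1 Hz drift mimicking a blink envelope
eeg = np.sin(2 * np.pi * 10 * t) + 3.0 * np.sin(2 * np.pi * 1 * t)
cleaned = highpass_fft(eeg, fs, cutoff_hz=4.0)
```

The 1 Hz drift is eliminated and the 10 Hz component survives, but a real blink is broadband, so this sharp cutoff would not suffice in practice.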
[0006] Regression-based techniques marked an early advancement in ocular artifact correction. These approaches exploit the fact that eye movements can be directly measured using electrooculography (EOG), which involves placing electrodes around the eyes. By modeling the linear relationship between EOG activity and EEG channels, regression methods subtract the predicted ocular contribution from the recorded data. Although conceptually simple and relatively effective, regression suffers from several limitations. It assumes a linear and stationary relationship between ocular and EEG signals, an assumption that is rarely satisfied in real-world data. Additionally, the requirement for separate EOG electrodes increases preparation time and participant discomfort, while not fully eliminating residual artifacts. Overcorrection, where genuine brain activity correlated with eye movements is inadvertently removed, also poses a risk.
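The regression correction described above amounts to a least-squares fit of the EOG reference onto each EEG channel, followed by subtraction. A minimal Python sketch follows (the synthetic signals and the assumed propagation coefficient of 0.8 are invented for illustration; real recordings violate the linearity and stationarity assumptions to some degree):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
eog = rng.standard_normal(n)                               # measured ocular reference channel
brain = 0.5 * np.sin(2 * np.pi * 10 * np.arange(n) / 250)  # underlying neural signal
eeg = brain + 0.8 * eog                                    # recording with a linear EOG leak

# Estimate the propagation coefficient by least squares, then subtract the fit
b = np.dot(eog, eeg) / np.dot(eog, eog)
eeg_corrected = eeg - b * eog
```

If the neural signal happens to correlate with the EOG channel, the estimate of `b` absorbs part of it, which is the overcorrection risk noted above.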
[0007] Independent Component Analysis (ICA) revolutionized EEG preprocessing by offering a more data-driven method for separating neural signals from artifacts. ICA decomposes multichannel EEG into statistically independent components, some of which correspond to ocular activity. By identifying and rejecting these components, researchers can reconstruct a relatively clean EEG dataset. ICA has been widely adopted because of its robustness and effectiveness, often outperforming traditional filtering or regression methods. However, ICA is not without drawbacks. It requires a substantial amount of data for accurate decomposition, making it less suitable for real-time applications where rapid updates are necessary. The identification of ocular components often depends on manual or semi-automated heuristics, introducing subjectivity and variability. Moreover, ICA assumes that the sources are stationary, an assumption that is frequently violated in non-stationary EEG recordings.
[0008] Alternative methods such as wavelet decomposition and empirical mode decomposition (EMD) have also been proposed to address the challenges of ocular artifact removal. Wavelet methods exploit the time–frequency representation of signals to isolate transient blink activity, while EMD decomposes EEG into intrinsic mode functions. These approaches have demonstrated promise, particularly in capturing non-stationary and transient artifacts. Nonetheless, they too face limitations. Wavelet techniques require careful selection of mother wavelets and thresholds, and their performance varies with different datasets. EMD, while adaptive, suffers from mode-mixing and computational inefficiency, making it less ideal for real-time or large-scale applications.
[0009] The advent of machine learning and, more recently, deep learning has opened new avenues for artifact removal in EEG. Neural networks, convolutional architectures, and recurrent models have been applied to classify and suppress ocular artifacts with growing success. Deep learning models excel at capturing complex, nonlinear relationships and can generalize across participants and recording conditions. Their potential lies not only in artifact detection but also in adaptive filtering, where models learn to distinguish and reconstruct clean neural signals. However, these systems often require vast amounts of labeled training data, which is difficult to obtain in practice. The computational demand of deep models also raises concerns for real-time applications, where low latency is critical. Furthermore, the black-box nature of deep learning introduces interpretability issues, limiting their adoption in clinical settings where transparency is paramount.
[0010] Within the broader context of EEG research, the challenge of ocular artifact removal reflects a deeper tension between accuracy and efficiency. Offline methods can afford extensive computation and manual intervention, yielding high-quality cleaned data for analysis. Yet, the growing demand for real-time applications such as adaptive neurofeedback, mobile health monitoring, and BCI control necessitates artifact removal methods that are both fast and precise. Real-time processing requires algorithms that can operate on continuous streams of data, update dynamically to account for non-stationarity, and preserve critical brain dynamics without distortion. This intersection of requirements has spurred interest in hybrid approaches that combine the strengths of traditional signal processing with the adaptability of modern learning-based systems.
[0011] The importance of solving the ocular artifact problem extends beyond academic research. In clinical neurology, the accuracy of EEG interpretation can mean the difference between correct and incorrect diagnosis of conditions such as epilepsy, sleep disorders, or encephalopathies. In psychiatry, EEG biomarkers are being investigated for their potential to guide treatment decisions in disorders like depression and schizophrenia. In these contexts, the intrusion of ocular artifacts can severely compromise diagnostic reliability. Similarly, in applied fields such as brain–computer interfaces, where EEG signals are translated into control commands for communication or motor assistance, the robustness of artifact correction determines system usability for patients with severe motor impairments. Even in consumer-grade EEG devices designed for wellness or gaming, the persistence of ocular artifacts undermines user experience and limits market adoption.
[0012] The extensive body of literature on artifact removal reflects decades of effort to strike the right balance among computational cost, accuracy, adaptability, and usability. Yet, no single method has fully addressed the multifaceted challenges posed by ocular contamination. Traditional methods fall short in adaptability and preservation of brain signals, while newer machine learning approaches face hurdles of data requirements, interpretability, and real-time performance. The field thus remains open to innovations that can leverage the strengths of existing approaches while overcoming their limitations. The pursuit of hybrid systems that integrate the interpretability of traditional decomposition methods with the adaptability of deep learning represents a promising direction for future development.
[0013] Thus, in light of the above-stated discussion, there exists a need for a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals.
SUMMARY OF THE DISCLOSURE
[0014] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0015] According to illustrative embodiments, the present disclosure focuses on a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals which overcomes the above-mentioned disadvantages and provides users with a useful or commercial choice.
[0016] An objective of the present disclosure is to contribute to advancements in neurotechnology by providing a reliable and real-time artifact elimination solution that enhances the quality of EEG-based diagnostics, monitoring, and therapeutic systems.
[0017] Another objective of the present disclosure is to design and develop a hybrid computational framework that combines deep learning with Independent Component Analysis (ICA) for accurate identification and removal of ocular artifacts in Electroencephalography (EEG) signals.
[0018] Another objective of the present disclosure is to achieve real-time artifact elimination by optimizing the proposed system for low-latency processing, ensuring it can be applied during live EEG monitoring sessions.
[0019] Another objective of the present disclosure is to improve the accuracy of EEG signal interpretation by minimizing the distortion of original neural information during ocular artifact removal.
[0020] Another objective of the present disclosure is to automate the artifact detection process and reduce the reliance on manual cleaning procedures, thereby minimizing human error and subjectivity.
[0021] Another objective of the present disclosure is to address the limitations of traditional ICA and filtering techniques by introducing a hybrid deep learning mechanism capable of handling overlapping frequency components between brain activity and ocular artifacts.
[0022] Another objective of the present disclosure is to develop a robust training model that can generalize across different subjects, EEG datasets, and experimental conditions without requiring extensive recalibration.
[0023] Another objective of the present disclosure is to preserve key neural oscillations and cognitive event-related potentials while effectively eliminating ocular artifacts, thereby maintaining the scientific and clinical utility of EEG recordings.
[0024] Another objective of the present disclosure is to evaluate system performance using multiple metrics such as signal-to-noise ratio (SNR), mean squared error (MSE), classification accuracy of cognitive tasks, and processing speed.
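By way of illustration only, the SNR and MSE indices named in this objective could be computed as follows (a Python sketch; the reference signal and noise level are invented, and the claimed system does not mandate these exact formulas):

```python
import numpy as np

def mse(clean, reference):
    """Mean squared error between the cleaned signal and an artifact-free reference."""
    return float(np.mean((clean - reference) ** 2))

def snr_db(clean, reference):
    """Signal-to-noise ratio in dB, treating the residual as noise."""
    noise = clean - reference
    return float(10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2)))

ref = np.sin(np.linspace(0, 10, 500))                            # idealized reference
est = ref + 0.01 * np.random.default_rng(1).standard_normal(500)  # cleaned estimate
```

Processing speed would be measured separately, e.g. as per-window latency, since it is independent of these fidelity indices.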
[0025] Yet another objective of the present disclosure is to create a scalable and deployable framework that can be integrated into existing EEG acquisition systems for clinical, research, and brain computer interface (BCI) applications.
[0026] In light of the above, a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals comprises an independent component analysis (ICA) module configured to decompose incoming EEG signals into statistically independent components. The system also includes a convolutional neural network (CNN) module configured to classify the decomposed components into ocular artifact-related components and brain activity components based on spatial and temporal features. The system also includes an artifact removal module configured to selectively remove or correct the ocular artifact-related components while retaining the clean brain activity components. The system also includes a feedback module configured to continuously refine the classification and artifact elimination process by utilizing the reconstructed EEG signals to improve subsequent detection and correction accuracy. The system also includes a transfer learning unit configured to enable adaptation of the system across different EEG devices and subject conditions with minimal calibration. The system also includes a validation module configured to assess the quality of reconstructed EEG signals using event-related potential recovery and signal quality indices.
[0027] In one embodiment, the independent component analysis (ICA) module is configured to utilize FastICA, Infomax ICA, or other adaptive ICA algorithms to achieve real-time decomposition of EEG signals.
[0028] In one embodiment, the convolutional neural network (CNN) module is trained on labeled EEG datasets and configured to analyze spatial distributions, temporal dynamics, and energy patterns of the components for accurate classification.
[0029] In one embodiment, the artifact removal module is configured to selectively zero out, suppress, or reconstruct ocular artifact-related components while maintaining the integrity of neural signals.
[0030] In one embodiment, the feedback module is configured to adjust CNN classifier parameters dynamically based on the quality of reconstructed EEG signals, thereby improving classification accuracy over time.
[0031] In one embodiment, the transfer learning unit is configured to enable cross-device generalization, allowing the system to operate effectively on both low-channel and high-density EEG systems without extensive retraining.
[0032] In one embodiment, the validation module is further configured to quantify the improvement in EEG quality using event-related potential (ERP) amplitude recovery, latency precision, and standard signal-to-noise indices.
[0033] In one embodiment, the system is deployable on real-time brain-computer interface (BCI) systems and embedded hardware platforms with low-latency processing requirements.
[0034] In one embodiment, the modular design enables integration with existing EEG acquisition systems, clinical diagnostic platforms, and neurofeedback applications without requiring modification of the original hardware.
[0035] In one embodiment, the CNN module and feedback module together form an adaptive hybrid learning framework that minimizes calibration time across subjects by leveraging online updates during use.
[0036] These and other advantages will be apparent from the present application of the embodiments described herein.
[0037] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0038] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0040] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0041] FIG. 1 illustrates a flowchart outlining the sequential steps involved in a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals, in accordance with an exemplary embodiment of the present disclosure;
[0042] FIG. 2 illustrates a flowchart showing the working of a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals, in accordance with an exemplary embodiment of the present disclosure.
[0043] Like reference numerals refer to like parts throughout the description of the several views of the drawings.
[0044] In the deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals, like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0045] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0046] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0047] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0048] The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0049] The terms “having”, “comprising”, “including”, and variations thereof signify the presence of a component.
[0050] Reference is now made to FIG. 1 and FIG. 2 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a flowchart outlining the sequential steps involved in a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals, in accordance with an exemplary embodiment of the present disclosure.
[0051] A deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals 100 comprises an independent component analysis (ICA) module 102 configured to decompose incoming EEG signals into statistically independent components. The independent component analysis (ICA) module 102 is configured to utilize FastICA, Infomax ICA, or other adaptive ICA algorithms to achieve real-time decomposition of EEG signals.
[0052] The system also includes a convolutional neural network (CNN) module 104 configured to classify the decomposed components into ocular artifact-related components and brain activity components based on spatial and temporal features. The convolutional neural network (CNN) module 104 is trained on labeled EEG datasets and configured to analyze spatial distributions, temporal dynamics, and energy patterns of the components for accurate classification. The CNN module 104 and feedback module together form an adaptive hybrid learning framework that minimizes calibration time across subjects by leveraging online updates during use.
[0053] The system also includes an artifact removal module 106 configured to selectively remove or correct the ocular artifact-related components while retaining the clean brain activity components. The artifact removal module 106 is configured to selectively zero out, suppress, or reconstruct ocular artifact-related components while maintaining the integrity of neural signals.
[0054] The system also includes a feedback module 108 configured to continuously refine the classification and artifact elimination process by utilizing the reconstructed EEG signals to improve subsequent detection and correction accuracy. The feedback module 108 is configured to adjust CNN classifier parameters dynamically based on the quality of reconstructed EEG signals, thereby improving classification accuracy over time.
[0055] The system also includes a transfer learning unit 110 configured to enable adaptation of the system across different EEG devices and subject conditions with minimal calibration. The transfer learning unit 110 is configured to enable cross-device generalization, allowing the system to operate effectively on both low-channel and high-density EEG systems without extensive retraining.
[0056] The system also includes a validation module 112 configured to assess the quality of reconstructed EEG signals using event-related potential recovery and signal quality indices. The validation module 112 is further configured to quantify the improvement in EEG quality using event-related potential (ERP) amplitude recovery, latency precision, and standard signal-to-noise indices.
[0057] The system is deployable on real-time brain-computer interface (BCI) systems and embedded hardware platforms with low-latency processing requirements. The modular design enables integration with existing EEG acquisition systems, clinical diagnostic platforms, and neurofeedback applications without requiring modification of the original hardware.
[0058] FIG. 1 illustrates a flowchart outlining the sequential steps involved in a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals.
[0059] At 102, the process begins with the independent component analysis (ICA) module. Raw EEG signals, typically contaminated by ocular artifacts, are decomposed into statistically independent components using ICA algorithms such as FastICA or Infomax ICA. The rationale behind this step is that EEG recordings are mixtures of neural and non-neural sources, and ICA mathematically separates them into distinct, independent signals. By doing so, the system prepares the data for further classification, allowing individual components to be assessed on their unique statistical and temporal features rather than as a noisy composite.
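A minimal sketch of this decomposition step, using scikit-learn's FastICA on two invented sources (the mixing matrix, source waveforms, and channel count are assumptions for illustration; the claimed module 102 operates on streaming multichannel EEG and may use other adaptive ICA variants):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000
t = np.arange(n_samples) / 250.0
# Two synthetic sources: a neural-like oscillation and a blink-like slow pulse train
s_neural = np.sin(2 * np.pi * 10 * t)
s_blink = (np.sin(2 * np.pi * 0.5 * t) > 0.95).astype(float) * 5.0
S_true = np.c_[s_neural, s_blink]
A = np.array([[1.0, 0.9], [0.6, 1.2]])          # hypothetical mixing into 2 channels
X = S_true @ A.T + 0.01 * rng.standard_normal((n_samples, 2))

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                    # estimated independent components
X_rec = S_est @ ica.mixing_.T + ica.mean_       # back-mixing reproduces the input
```

The estimated mixing matrix `ica.mixing_` is what later allows selective back-projection after artifact components are zeroed out.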
[0060] At 104, following decomposition, the components are processed by the convolutional neural network (CNN) module. This deep learning model has been trained to differentiate between ocular and neural components by examining their spatial distribution, energy levels, and temporal dynamics. For example, ocular components often exhibit high-amplitude, low-frequency patterns concentrated near frontal electrodes, whereas genuine neural signals display distributed spatial activity and diverse frequency content. The CNN automates what traditionally required manual inspection, drastically improving both the speed and reliability of artifact identification.
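The spatial and spectral cues the CNN exploits can be illustrated with a deliberately simplified rule-based stand-in (Python; the thresholds, the `frontal_weight` input, and the test signals are hypothetical and are not the claimed CNN, which learns such boundaries from labeled data):

```python
import numpy as np

def looks_ocular(component, fs, frontal_weight, low_hz=4.0):
    """Simplified stand-in for the CNN classifier: flag a component as ocular
    when its power concentrates below low_hz AND its scalp projection loads
    mainly on frontal channels (frontal_weight = fraction of projection
    energy on frontal electrodes; both thresholds are invented)."""
    power = np.abs(np.fft.rfft(component)) ** 2
    freqs = np.fft.rfftfreq(component.size, d=1.0 / fs)
    low_ratio = power[freqs < low_hz].sum() / power.sum()
    return bool(low_ratio > 0.6 and frontal_weight > 0.5)

fs = 250
t = np.arange(2 * fs) / fs
blink_like = 5.0 * np.sin(2 * np.pi * 1 * t)    # slow, high-amplitude component
alpha_like = np.sin(2 * np.pi * 10 * t)         # 10 Hz neural-like oscillation
```

The blink-like component trips both cues while the 10 Hz component trips neither; the CNN generalizes this idea to learned, nonlinear feature combinations.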
[0061] At 106, once classified, the signals enter the artifact removal module. Here, the components identified as ocular artifacts are selectively removed or corrected without affecting the integrity of neural components. This selectivity is a key innovation, as traditional threshold-based or filtering approaches often suppress genuine brain activity in the process of artifact removal. The clean neural components are preserved in their original form, ensuring that the reconstructed EEG maintains physiological accuracy.
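In matrix terms, the selective removal at this stage can be sketched as follows (Python; the component matrix, mixing matrix, and flagged index are invented for illustration): zeroing a column of the component matrix before back-projection removes exactly that component's contribution from every channel while leaving the others untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((1000, 3))   # components as columns (n_samples, n_components)
A = rng.standard_normal((4, 3))      # mixing matrix (n_channels, n_components)
X = S @ A.T                          # mixed multichannel EEG

ocular_idx = [2]                     # index flagged by the classifier (assumed)
S_clean = S.copy()
S_clean[:, ocular_idx] = 0.0         # zero out only the artifact component
X_clean = S_clean @ A.T              # back-project the retained components
```

The difference `X - X_clean` is precisely the outer product of the removed component with its mixing column, which is why the retained neural components are preserved in their original form.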
[0062] At 108, after artifact removal, the system incorporates a feedback module. This module represents a dynamic learning mechanism in which the reconstructed clean EEG signals are analyzed for quality. The evaluation results are then fed back into the CNN classifier, enabling iterative refinement of the artifact detection and elimination process. Over time, this feedback mechanism enhances the system’s adaptability and precision, particularly in real-time environments where variability in signal characteristics is common.
[0063] At 110, in addition to feedback learning, the system is equipped with a transfer learning unit. This feature allows the model to generalize across different EEG devices, electrode configurations, and subject-specific variations with minimal recalibration. By leveraging pre-trained knowledge and adapting it to new contexts, the system avoids the inefficiency of complete retraining and significantly broadens its usability in both laboratory and clinical settings. This cross-device adaptability makes the invention especially valuable in practical deployments where hardware diversity and subject variability are inevitable.
[0064] At 112, the final stage of operation involves the validation module. To ensure that the reconstructed EEG signals are both artifact-free and physiologically accurate, this module employs event-related potential (ERP) recovery and signal quality indices as benchmarks. By comparing the cleaned signal against established neurophysiological markers, the system provides quantitative confirmation of its effectiveness. This validation step not only guarantees reliability for research and diagnostics but also instills confidence for real-time applications like neurofeedback and BCI control.
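One way the ERP-recovery benchmark could be computed is sketched below (Python; the ERP template, trial count, and noise level are invented for illustration, and the claimed validation module may use additional latency and signal-quality indices):

```python
import numpy as np

def erp_average(trials):
    """Average time-locked trials to estimate the event-related potential."""
    return trials.mean(axis=0)

def amplitude_recovery(erp_clean, erp_reference):
    """Ratio of recovered to reference peak amplitude (1.0 = perfect recovery)."""
    return float(np.max(np.abs(erp_clean)) / np.max(np.abs(erp_reference)))

rng = np.random.default_rng(0)
template = np.exp(-((np.arange(200) - 100) ** 2) / 200.0)   # idealized ERP peak
trials = template + 0.5 * rng.standard_normal((50, 200))    # 50 noisy cleaned trials
erp = erp_average(trials)
```

A recovery ratio near 1.0 after artifact removal indicates that the cleaning preserved the neurophysiological marker rather than attenuating it.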
[0065] FIG. 2 illustrates a flowchart showing working of a deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals.
[0066] The process begins with raw EEG signal acquisition, where multichannel signals containing ocular artifacts are collected through scalp electrodes. These raw signals typically carry both brain activity and undesired noise generated by eye blinks or movements, which often obscure critical neural patterns. Once collected, the signals undergo preprocessing, which involves standard steps like notch filtering to remove powerline noise, baseline correction to stabilize drifts, and normalization to ensure consistent amplitude scales across channels. This stage prepares the signals for more sophisticated decomposition.
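The preprocessing steps listed above (baseline correction, powerline notch, amplitude normalization) might be sketched as follows (Python; a crude FFT-bin notch stands in for a proper IIR notch filter, and the 50 Hz powerline frequency is an assumption):

```python
import numpy as np

def preprocess(eeg, fs, notch_hz=50.0):
    """Baseline-correct, notch-filter (crude FFT bin removal), and z-scale
    each channel. eeg shape: (n_channels, n_samples)."""
    out = eeg - eeg.mean(axis=1, keepdims=True)           # baseline correction
    spectrum = np.fft.rfft(out, axis=1)
    freqs = np.fft.rfftfreq(out.shape[1], d=1.0 / fs)
    spectrum[:, np.abs(freqs - notch_hz) < 0.5] = 0.0     # remove powerline bin
    out = np.fft.irfft(spectrum, n=out.shape[1], axis=1)
    return out / out.std(axis=1, keepdims=True)           # amplitude normalization

fs = 250
t = np.arange(500) / fs
# One channel: 10 Hz activity, 50 Hz powerline hum, and a DC offset
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 50 * t) + 1.0
clean = preprocess(raw[None, :], fs)                      # shape (1, 500)
```

After this stage each channel is zero-mean, unit-variance, and free of the powerline bin, giving the ICA stage consistently scaled inputs.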
[0067] Following preprocessing, the signals pass through the ICA decomposition stage. Independent Component Analysis (ICA) algorithms such as FastICA or Infomax ICA are applied to decompose the EEG into statistically independent components. This step separates the mixed brain and artifact signals into individual sources, allowing for clearer identification of which components represent ocular interference and which represent genuine neural activity.
[0068] After decomposition, the deep learning classification module becomes central. Here, a trained Convolutional Neural Network (CNN) evaluates each component, analyzing their spatial and temporal features, including shape, energy distribution, and pattern dynamics. Based on these characteristics, the CNN labels each component as either ocular or neural. This automated classification removes the need for manual inspection, which is a major limitation of traditional ICA-based methods.
[0069] Once the CNN identifies ocular-related components, the system enters the artifact component removal phase. At this stage, components corresponding to ocular artifacts are selectively zeroed out or corrected while preserving the integrity of the genuine neural components. This ensures that the clean EEG signal retains as much useful brain activity as possible without distortion.
[0070] Next, the remaining components are passed into the signal reconstruction phase, where they are recombined to generate a clean EEG signal. The reconstructed signal is now largely free of ocular artifacts but still maintains the essential brainwave features required for scientific, medical, or interface-related applications.
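Taken together, the removal and reconstruction stages of paragraphs [0069] and [0070] reduce to one linear operation: zero the flagged rows of the source matrix and project the remainder back through the mixing matrix. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def remove_and_reconstruct(sources, mixing, ocular_flags):
    """Zero out ocular components and project the remainder back into
    channel space: clean = A @ S with the flagged rows of S set to zero.
    Equivalent to summing only the neural components' projections."""
    S_clean = sources.copy()
    S_clean[ocular_flags, :] = 0.0
    return mixing @ S_clean
```

Because only the flagged rows are touched, the genuine neural components pass through the reconstruction unchanged.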
[0071] A unique feature of the invention is the feedback learning loop. After reconstruction, the cleaned EEG is analyzed for quality, and the resulting assessment is fed back into the CNN classifier. This feedback enables adaptive fine-tuning of the classification process, improving performance over time and ensuring that the system becomes more accurate and robust with continued use.
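One simple way the feedback loop could be realized (the specification leaves the exact update rule open, so every constant below is an assumption) is to score each cleaned epoch with a quality index in [0, 1] and nudge the classifier's decision threshold toward stricter flagging whenever quality falls short of a target:

```python
def update_threshold(threshold, quality, target=0.9,
                     rate=0.5, lo=2.0, hi=10.0):
    """Illustrative feedback rule: 'quality' scores the cleaned epoch
    (e.g., an SNR-based index in [0, 1]); below target, lower the flagging
    threshold so classification is stricter on the next epoch, otherwise
    relax it slightly. All constants are assumptions for the sketch."""
    threshold += rate * (quality - target)   # proportional adjustment
    return min(max(threshold, lo), hi)       # keep within sane bounds
```

In the claimed system the feedback instead fine-tunes the CNN's parameters, but the control structure, quality assessment driving the next classification pass, is the same.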
[0072] Finally, the cleaned EEG enters the output stage, where the artifact-free signals are delivered to downstream applications such as BCI systems, ERP analysis, or medical diagnostic tools. By ensuring real-time, high-quality EEG without ocular distortions, the system enhances the reliability of neural data for both research and clinical purposes.
[0073] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0074] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0075] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0076] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0077] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:
I/We Claim:
1. A deep ICA-based hybrid system for real-time elimination of ocular artifacts in EEG signals (100) comprising:
an independent component analysis (ICA) module (102) configured to decompose incoming EEG signals into statistically independent components;
a convolutional neural network (CNN) module (104) configured to classify the decomposed components into ocular artifact-related components and brain activity components based on spatial and temporal features;
an artifact removal module (106) configured to selectively remove or correct the ocular artifact-related components while retaining the clean brain activity components;
a feedback module (108) configured to continuously refine the classification and artifact elimination process by utilizing the reconstructed EEG signals to improve subsequent detection and correction accuracy;
a transfer learning unit (110) configured to enable adaptation of the system across different EEG devices and subject conditions with minimal calibration; and
a validation module (112) configured to assess the quality of reconstructed EEG signals using event-related potential recovery and signal quality indices.
2. The system (100) as claimed in claim 1, wherein the independent component analysis (ICA) module (102) is configured to utilize FastICA, Infomax ICA, or other adaptive ICA algorithms to achieve real-time decomposition of EEG signals.
3. The system as claimed in claim 1, wherein the convolutional neural network (CNN) module (104) is trained on labeled EEG datasets and configured to analyze spatial distributions, temporal dynamics, and energy patterns of the components for accurate classification.
4. The system as claimed in claim 1, wherein the artifact removal module (106) is configured to selectively zero out, suppress, or reconstruct ocular artifact-related components while maintaining the integrity of neural signals.
5. The system as claimed in claim 1, wherein the feedback module (108) is configured to adjust CNN classifier parameters dynamically based on the quality of reconstructed EEG signals, thereby improving classification accuracy over time.
6. The system as claimed in claim 1, wherein the transfer learning unit (110) is configured to enable cross-device generalization, allowing the system to operate effectively on both low-channel and high-density EEG systems without extensive retraining.
7. The system as claimed in claim 1, wherein the validation module (112) is further configured to quantify the improvement in EEG quality using event-related potential (ERP) amplitude recovery, latency precision, and standard signal-to-noise indices.
8. The system as claimed in claim 1, wherein the system is deployable on real-time brain-computer interface (BCI) systems and embedded hardware platforms with low-latency processing requirements.
9. The system as claimed in claim 1, wherein the modular design enables integration with existing EEG acquisition systems, clinical diagnostic platforms, and neurofeedback applications without requiring modification of the original hardware.
10. The system as claimed in claim 1, wherein the CNN module (104) and feedback module together form an adaptive hybrid learning framework that minimizes calibration time across subjects by leveraging online updates during use.

Documents

Application Documents

# Name Date
1 202541096577-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2025(online)].pdf 2025-10-07
2 202541096577-REQUEST FOR EARLY PUBLICATION(FORM-9) [07-10-2025(online)].pdf 2025-10-07
3 202541096577-POWER OF AUTHORITY [07-10-2025(online)].pdf 2025-10-07
4 202541096577-FORM-9 [07-10-2025(online)].pdf 2025-10-07
5 202541096577-FORM FOR SMALL ENTITY(FORM-28) [07-10-2025(online)].pdf 2025-10-07
6 202541096577-FORM 1 [07-10-2025(online)].pdf 2025-10-07
7 202541096577-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [07-10-2025(online)].pdf 2025-10-07
8 202541096577-DRAWINGS [07-10-2025(online)].pdf 2025-10-07
9 202541096577-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2025(online)].pdf 2025-10-07
10 202541096577-COMPLETE SPECIFICATION [07-10-2025(online)].pdf 2025-10-07