
Hybrid Adaptive Non-Invasive Brain-Computer Interface System And Method Thereof

Abstract: A hybrid adaptive non-invasive brain-computer interface system (100) is disclosed. The system (100) comprises a wearable headset (102) adapted to be worn by a user on a head. The system (100) is configured to receive the neurophysiological signals; align the received neurophysiological signals by means of a signal fusion engine to reduce artifacts and enhance signal quality; classify the aligned signals through an adaptive artificial intelligence engine configured with transfer learning to map neural activity into control commands; execute a local real-time processing of the classified signals by means of an edge computing unit (108) to generate device control outputs; and transmit the device control outputs through an application programming interface to an external peripheral machine (110). The system (100) reduces latency and supports smooth, immediate interaction with assistive devices, AR/VR systems, and communication tools. Claims: 10, Figures: 3


Patent Information

Application #
Filing Date
07 October 2025
Publication Number
46/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR University
SR University, Ananthasagar, Warangal Telangana India 506371 patent@sru.edu.in 08702818333

Inventors

1. Vupulluri Sudha Rani
SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana, India-506371.

Specification

Description: BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to a computer system and particularly to a hybrid adaptive non-invasive brain-computer interface system.
Description of Related Art
[002] Brain-Computer Interface (BCI) technology seeks to create a direct communication pathway between the human brain and external devices. However, existing systems face persistent challenges such as low accuracy, limited adaptability, and restricted applicability in real-world conditions. These shortcomings reduce the potential of BCIs to support patients with neurological disorders or to provide reliable human–machine interaction in non-medical fields. The core problem lies in achieving high precision and reliability without resorting to highly invasive and risky procedures.
[003] Various approaches currently address these challenges. Invasive solutions such as Neuralink aim to restore motor and sensory functions by implanting chips directly into the brain, while devices like Synchron’s Stentrode provide a less invasive alternative by using a vascular implantation route. Non-invasive options, such as Emotiv Epoc X EEG headsets, allow cost-effective and accessible signal capture. Additionally, open-source platforms like OpenBCI provide tools for developers and researchers to experiment with new applications. These solutions indicate that both invasive and non-invasive technologies are active areas of development, each targeting different user needs and markets.
[004] Despite these advances, current solutions remain inadequate. Invasive devices pose serious safety concerns, require expensive neurosurgical intervention, and struggle with user acceptance. Minimally invasive systems often deliver low resolution or reduced bandwidth. Non-invasive commercial headsets provide greater safety and accessibility, but they typically suffer from noisy signals, shallow electrode depth, and low accuracy in intention recognition. Many systems further lack effective adaptability to user-specific conditions or robust data privacy safeguards.
[005] There is thus a need for an improved and advanced hybrid adaptive non-invasive brain-computer interface system that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a hybrid adaptive non-invasive brain-computer interface system. The system comprises a wearable headset adapted to be worn by a user on a head. The wearable headset is adapted to acquire neurophysiological signals of the user. The system further comprises a control unit communicatively connected to the wearable headset. The control unit is configured to receive the neurophysiological signals; align the received neurophysiological signals by means of a signal fusion engine to reduce artifacts and enhance signal quality; classify the aligned signals through an adaptive artificial intelligence engine configured with transfer learning to map neural activity into control commands; execute a local real-time processing of the classified signals by means of an edge computing unit to generate device control outputs; and transmit the device control outputs through an application programming interface to an external peripheral machine.
[007] Embodiments in accordance with the present invention further provide a method for operating a hybrid adaptive non-invasive brain-computer interface system. The method comprises the steps of receiving neurophysiological signals from a wearable headset; aligning the received neurophysiological signals by means of a signal fusion engine to reduce artifacts and enhance signal quality; classifying the aligned signals through an adaptive artificial intelligence engine configured with transfer learning to map neural activity into control commands; executing a local real-time processing of the classified signals by means of an edge computing unit to generate device control outputs; and transmitting the device control outputs through an application programming interface to an external peripheral machine.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a hybrid adaptive non-invasive brain-computer interface system.
[009] Next, embodiments of the present application may provide a brain-computer interface system that enables high-resolution signal acquisition without surgical procedures.
[0010] Next, embodiments of the present application may provide a brain-computer interface system that minimizes calibration time and personalizes system response to the cognitive and physiological state of the user.
[0011] Next, embodiments of the present application may provide a brain-computer interface system that reduces latency and supports smooth, immediate interaction with assistive devices, AR/VR systems, and communication tools.
[0012] Next, embodiments of the present application may provide a brain-computer interface system that supports multiple use-cases, including neuroprosthetics, assistive mobility, immersive learning, and mental health interventions.
[0013] These and other advantages will be apparent from the present application of the embodiments described herein.
[0014] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0016] FIG. 1A illustrates a block diagram of a hybrid adaptive non-invasive brain-computer interface system, according to an embodiment of the present invention;
[0017] FIG. 1B illustrates a hybrid adaptive non-invasive brain-computer interface system, according to an embodiment of the present invention; and
[0018] FIG. 2 depicts a flowchart of a method for operating a hybrid adaptive non-invasive brain-computer interface system, according to an embodiment of the present invention.
[0019] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0020] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0021] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0022] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0023] As used herein, the term “user” may refer to an individual whose neurophysiological, cognitive, or physiological activity may be monitored, analyzed, or utilized by the disclosed system. The user may include, but is not limited to, a patient, a research subject, an operator, or any individual who may interact with the system for medical, rehabilitative, communicative, or immersive purposes. The definition of user may further extend to individuals in controlled laboratory studies, clinical environments, or everyday operational contexts.
[0024] FIG. 1A illustrates a block diagram of a hybrid adaptive non-invasive brain-computer interface system 100 (hereinafter referred to as the system 100), according to an embodiment of the present invention. In an embodiment of the present invention, the system 100 may be configured to provide an interface between neurophysiological signals of a user and external devices by means of a structured signal acquisition, processing, and control flow. The system 100 may operate in a manner where signals may be acquired through a wearable arrangement, transmitted into a central control framework, processed for artifact suppression and alignment, classified into intention-based commands, and executed through real-time processing for generating device control outputs. These outputs may then be communicated to external peripheral machines by means of an integrated interface.
[0025] In an embodiment of the present invention, the system 100 may integrate electroencephalography, functional near-infrared spectroscopy, and electromyography into a single multimodal acquisition framework. Electroencephalography may capture fast neural oscillations, functional near-infrared spectroscopy may capture slower hemodynamic changes, and electromyography may provide muscular reference signals for artifact suppression. This complementary data fusion may enhance both accuracy and robustness compared to traditional single-modality approaches.
[0026] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware components to enhance processing speed and efficiency. The system 100 may comprise a wearable headset 102, dry electrodes 104a-104n (hereinafter referred to individually as the dry electrode 104, and plurally as the dry electrodes 104), a control unit 106, an edge computing unit 108, and an external peripheral machine 110. In an embodiment of the present invention, the hardware components of the system 100 may be integrated with computer-executable instructions for overcoming the challenges and the limitations of the existing systems.
[0027] In an embodiment of the present invention, the wearable headset 102 may be adapted to be worn by a user on a head. The wearable headset 102 may be adapted to acquire neurophysiological signals of the user. The neurophysiological signals may be acquired using sensors such as, but not limited to, electroencephalography (EEG) sensors, functional near-infrared spectroscopy (fNIRS) sensors, electromyography (EMG) sensors, and so forth. The wearable headset 102 may be ergonomically designed to maintain stable electrode contact while ensuring comfort during prolonged usage. Materials used in the headset frame may include lightweight composites or flexible polymers so that mechanical strain on the user may be minimized. The wearable headset 102 may further integrate adjustable mounting assemblies to accommodate different head sizes and shapes without loss of signal quality. The wearable headset 102 may comprise signal acquisition circuitry that may be embedded and shielded in the wearable headset 102 to reduce electromagnetic interference, thereby enhancing accuracy of the captured signals.
[0028] In an embodiment of the present invention, the wearable headset 102 may be designed for portability and comfort so that prolonged usage may be supported. The headset frame may be lightweight and fabricated from polymer composites, while the dry electrode 104 may maintain low-impedance contact without conductive gel. The wearable headset 102 may further comprise adjustable straps and flexible mounts to fit users of varying head sizes without discomfort.
[0029] In an embodiment of the present invention, the dry electrodes 104 may be integrated within the wearable headset 102 and may be configured to establish electrical contact with the scalp of the user for acquiring electroencephalography signals. The dry electrodes 104 may be fabricated from conductive polymers, coated metal alloys, or nanostructured composites so that electrical impedance at the skin–electrode interface may be minimized without the use of conductive gels. The dry electrodes 104 may be arranged in a distributed pattern across the inner surface of the wearable headset 102 so that coverage of key cortical regions may be achieved. Each dry electrode 104 may be spring-loaded or elastically mounted so that consistent pressure may be maintained despite variations in scalp curvature. The design of the dry electrodes 104 may enable long-duration usage without degradation of contact quality, and the surface of each electrode may be textured or micro-patterned to enhance signal acquisition while maintaining user comfort. In an embodiment of the present invention, the dry electrodes 104 may enable prolonged usage without conductive gel application.
[0030] In an embodiment of the present invention, the control unit 106 may be connected to the wearable headset 102. The control unit 106 may be configured to receive the neurophysiological signals from the wearable headset 102.
[0031] The control unit 106 may be configured to preprocess the received neurophysiological signals. In an embodiment of the present invention, the preprocessing of the neurophysiological signals may be performed within the control unit 106 so that raw data acquired from the wearable headset 102 may be prepared for further analysis. The preprocessing may include signal amplification, baseline correction, and filtering across specific frequency bands so that noise and drift may be minimized. Motion artifacts and muscular interferences present in the neurophysiological signals may be suppressed through adaptive filtering algorithms that may utilize reference inputs from electromyography data. The preprocessing may involve segmentation of the signals into temporal windows and normalization of amplitudes so that inter-user variability may be reduced. The control unit 106 may further implement digital conversion techniques with high-resolution analog-to-digital converters so that signal integrity may be preserved during processing. By performing such preprocessing steps, the quality and reliability of the neurophysiological signals may be enhanced before alignment and classification.
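By way of non-limiting illustration, the preprocessing steps described above (baseline correction, segmentation into temporal windows, and amplitude normalization) may be sketched as follows. The function names, window sizes, and moving-average detrending approach are illustrative assumptions, not the claimed implementation:

```python
# Illustrative preprocessing sketch (hypothetical names, not the claimed
# implementation): baseline correction via moving-average detrending,
# segmentation into fixed temporal windows, and per-window z-score
# normalization to reduce inter-user amplitude variability.
from statistics import mean, pstdev

def detrend(signal, window=5):
    """Subtract a moving-average baseline to suppress slow drift."""
    half = window // 2
    out = []
    for i, x in enumerate(signal):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(x - mean(signal[lo:hi]))
    return out

def segment(signal, width):
    """Split the signal into non-overlapping temporal windows."""
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, width)]

def normalize(window_samples):
    """Z-score normalize one window so amplitudes are comparable across users."""
    mu, sigma = mean(window_samples), pstdev(window_samples)
    return [(x - mu) / sigma if sigma else 0.0 for x in window_samples]

def preprocess_signal(signal, width=4):
    """Chain the three steps: detrend, segment, then normalize each window."""
    return [normalize(w) for w in segment(detrend(signal), width)]
```

In practice, the band-pass filtering described above would replace the simple moving-average detrend, but the data flow would remain the same.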
[0032] The control unit 106 may be configured to align the pre-processed neurophysiological signals by means of a signal fusion engine to reduce artifacts and enhance signal quality. The signal fusion engine may be configured to temporally and spatially align electroencephalography, functional near-infrared spectroscopy, and electromyography signals so that multimodal integration may be achieved. The engine may utilize synchronization protocols based on timestamping or phase-locking so that delays among modalities may be compensated. Feature extraction modules within the signal fusion engine may process each modality separately and then merge the extracted features into a unified data stream. The signal fusion engine may apply weighting algorithms so that signal components with higher reliability may contribute more strongly to the aligned output. Additionally, adaptive artifact removal methods may be implemented so that noise from eye blinks, motion, or ambient interference may be minimized before fusion. By aligning signals in this manner, the system 100 may achieve coherent multimodal representation that may improve subsequent classification accuracy.
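The timestamp-based synchronization and reliability weighting described above may be sketched, purely as a non-limiting illustration, with two modality streams. The nearest-sample matching strategy and the fixed weights are assumptions for demonstration only:

```python
# Hypothetical fusion sketch: align two modality streams by timestamp
# (nearest-sample matching) and merge them with reliability weights, as
# the signal fusion engine described above might. All names and weights
# are illustrative.
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the sample whose timestamp is closest to t."""
    i = bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    return min(candidates, key=lambda j: abs(timestamps[j] - t))

def fuse(eeg_ts, eeg, fnirs_ts, fnirs, w_eeg=0.7, w_fnirs=0.3):
    """Pair each EEG sample with the nearest fNIRS sample in time and
    emit a reliability-weighted fused feature."""
    fused = []
    for t, x in zip(eeg_ts, eeg):
        y = fnirs[nearest(fnirs_ts, t)]
        fused.append((t, w_eeg * x + w_fnirs * y))
    return fused
```

A production engine would compensate for the hemodynamic delay of fNIRS rather than matching raw timestamps, and would adapt the weights per channel.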
[0033] The control unit 106 may be configured to classify the aligned signals through an adaptive artificial intelligence engine configured with transfer learning to map neural activity into control commands. The adaptive artificial intelligence engine may be configured to personalize classification accuracy through reinforcement learning based on a physiological state of the user. The adaptive artificial intelligence engine may be configured to reduce calibration time by transfer learning across multiple users.
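The transfer-learning behaviour described above — reducing calibration time by starting from parameters learned on other users — may be illustrated with a toy classifier. This is a pure sketch: the logistic-regression model, learning rate, and step count are assumptions and do not represent the claimed adaptive artificial intelligence engine:

```python
# Toy transfer-learning sketch: start from weights pretrained on other
# users, then fine-tune with a few gradient steps on a small user-specific
# calibration set, reducing calibration time versus training from scratch.
import math

def predict(w, b, x):
    """Sigmoid probability of the positive class."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(w, b, data, lr=0.5, steps=20):
    """A few epochs of logistic-regression gradient descent starting from
    the pretrained parameters (w, b) instead of from zero."""
    w = list(w)
    for _ in range(steps):
        for x, y in data:
            err = predict(w, b, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b
```

The same pattern scales to deep networks, where the shared early layers stay frozen and only the user-specific head is fine-tuned.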
[0035] The control unit 106 may be configured to execute a local real-time processing of the classified signals by means of the edge computing unit 108 to generate device control outputs. The edge computing unit 108 may be adapted to perform latency reduction to achieve real-time intention recognition and device interaction. The edge computing unit 108 may be configured to execute computational tasks close to the source of data so that latency may be minimized and response time may be improved. The edge computing unit 108 may process the classified signals by applying decision logic and mapping them to corresponding device control outputs. Hardware within the edge computing unit 108 may include multi-core processors, graphics processing modules, or field programmable gate arrays so that parallel computation of multiple signal streams may be achieved. The edge computing unit 108 may further incorporate memory buffers and cache optimization strategies so that throughput may remain stable during continuous operation. By generating device control outputs locally, the edge computing unit 108 may reduce dependence on remote servers and may ensure that the outputs may be transmitted in real-time to the application programming interface for subsequent interaction with the external peripheral machine 110.
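The decision logic by which the edge computing unit 108 maps classified signals to device control outputs may be sketched as follows. The intent labels, command names, and confidence threshold are hypothetical; the safe no-op fallback is a common safeguard in real-time control, not a feature recited by the claims:

```python
# Illustrative edge-side decision logic (hypothetical label and command
# names): map a classified intent plus its confidence to a device control
# output, falling back to a safe no-op when confidence is too low.
COMMAND_MAP = {
    "imagine_left": "WHEELCHAIR_LEFT",
    "imagine_right": "WHEELCHAIR_RIGHT",
    "focus_select": "UI_SELECT",
}

def to_control_output(label, confidence, threshold=0.8):
    """Return a device control output, or NO_OP if the classifier is not
    confident enough or the label is unrecognized."""
    if confidence < threshold or label not in COMMAND_MAP:
        return "NO_OP"
    return COMMAND_MAP[label]
```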
[0036] The control unit 106 may be configured to transmit the device control outputs through an application programming interface to the external peripheral machine 110. The application programming interface may support integration with assistive devices, augmented reality systems, virtual reality systems, communication interfaces, and so forth. The application programming interface may be configured to convert the device control outputs generated by the edge computing unit 108 into standard communication protocols so that compatibility with multiple external platforms may be achieved.
[0037] In an embodiment of the present invention, the application programming interface connected to the control unit 106 may be configured as a modular integration platform. The application programming interface may expose standardized libraries so that third-party developers may build custom extensions for the external peripheral machine 110. The modular structure may allow new accessibility devices, AR/VR platforms, or educational tools to be integrated without reconfiguration of the system 100.
[0038] The application programming interface may further support modular integration so that third-party applications may be developed for specialized tasks. Data packets transmitted by the application programming interface may include authentication tokens and encryption layers so that secure and authorized interaction with the external peripheral machine 110 may be maintained. The transmission may be optimized through low-latency communication channels such as Bluetooth Low Energy, Wi-Fi, or wired interfaces depending on application requirements. By using the application programming interface in this manner, the device control outputs may seamlessly activate functions of the external peripheral machine 110 including assistive mobility devices, augmented reality systems, or communication interfaces.
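The packet structure described above — a device control output carried with an authentication token — may be sketched as a JSON payload. The field names and token scheme are assumptions for illustration; encryption layers are omitted here:

```python
# Hypothetical sketch of a packet the application programming interface
# might emit: a JSON payload carrying the control output, a timestamp,
# and a pre-issued authentication token (field names are illustrative).
import json
import secrets
import time

def build_packet(control_output, token):
    """Serialize one device control output with its auth token."""
    packet = {
        "output": control_output,
        "timestamp": time.time(),
        "token": token,
    }
    return json.dumps(packet)

def validate_packet(raw, expected_token):
    """Peripheral-side check: constant-time comparison rejects packets
    carrying a wrong token."""
    packet = json.loads(raw)
    return secrets.compare_digest(packet["token"], expected_token)
```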
[0039] In an embodiment of the present invention, the control unit 106 may further include a privacy-protective device (not shown) that may be configured to cryptographically shield neural data prior to any storage or synchronization. The privacy-protective device may perform real-time encryption using industry standards such as AES-256 and may generate authentication tokens so that only authorized access may be permitted. The privacy-protective device may further anonymize user identifiers before storing data locally so that privacy of the neural data may be preserved. The privacy-protective device within the control unit 106 may employ AES-256 encryption for all neural data. The encryption may be combined with access control policies requiring explicit user consent for any data transfer. This may ensure that the neural data of the user may remain secure and compliant with data ethics standards.
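Two of the privacy-protective steps described above can be illustrated with the standard library alone: anonymizing a user identifier by salted one-way hashing, and issuing an unguessable authentication token. The AES-256 payload encryption itself would require a dedicated cryptographic library (for example, the third-party `cryptography` package) and is therefore only noted, not shown; the function names below are assumptions:

```python
# Sketch of two privacy-protective steps (illustrative names): salted
# hashing for identifier anonymization and token issuance for authorized
# access. AES-256 encryption of the neural data itself is not shown here.
import hashlib
import secrets

def anonymize_user_id(user_id, salt):
    """One-way, salted SHA-256 hash so stored records cannot be traced
    back to the raw identifier."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

def issue_token(nbytes=32):
    """Cryptographically strong hex token for authorized access."""
    return secrets.token_hex(nbytes)
```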
[0040] FIG. 1B illustrates the system 100, according to an embodiment of the present invention. In an embodiment of the present invention, FIG. 1B may represent the overall architecture of the system 100 where the flow of signals may progress in a sequential manner to achieve a reliable and adaptive interaction pathway between the brain activity of the user and the connected devices.
[0041] In an embodiment of the present invention, the wearable headset 102 may be adapted to capture the neurophysiological signals of the user through integrated sensors, while the dry electrode 104 may be positioned on the inner surface of the wearable headset 102 to acquire electroencephalography signals without conductive gel. The control unit 106 may be communicatively coupled to the wearable headset 102 and may be configured to receive the signals acquired by the dry electrode 104 and other sensors, preprocess the signals for noise reduction, and prepare them for further alignment and classification.
[0042] In an embodiment of the present invention, the edge computing unit 108 may be integrated within the system 100 and may be configured to handle computationally intensive tasks locally so that reliance on external cloud infrastructure may be minimized. The edge computing unit 108 may process classified neurophysiological signals received from the control unit 106 in real-time and may generate device control outputs with minimal latency. The edge computing unit 108 may include a multi-core central processing architecture, graphics processing subsystems, or reconfigurable hardware such as field programmable gate arrays to accelerate parallel computations. Such hardware may be optimized for deep learning inference tasks so that adaptive artificial intelligence models may be executed efficiently. The edge computing unit 108 may further include memory modules with high data throughput and cache optimization techniques so that processing bottlenecks may be reduced. In some embodiments, the edge computing unit 108 may support on-device model retraining or fine-tuning so that classification accuracy may adapt to variations in user physiology. Data within the edge computing unit 108 may be encrypted and locally stored before transmission to safeguard sensitive neural information. By performing computation near the source of data acquisition, the edge computing unit 108 may ensure rapid responsiveness and enhance reliability of the system 100 in diverse operating environments.
[0043] In an embodiment of the present invention, the external peripheral machine 110 may be any device or platform that may receive the device control outputs from the application programming interface of the system 100 and may execute corresponding actions. The external peripheral machine 110 may include assistive mobility devices such as powered wheelchairs where control signals may correspond to directional movement commands including forward, backward, left, and right. Safety mechanisms may be embedded in the wheelchair such as collision detection sensors that may validate or override incoming commands for user protection. The external peripheral machine 110 may include prosthetic limbs or robotic arms where device control outputs may be mapped to joint-level actuation so that fine motor control may be achieved for grasping, lifting, or rotating tasks.
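The collision-override behaviour described above for a powered wheelchair may be sketched as a simple validation step. The sensor interface and command names are assumptions for illustration only:

```python
# Hedged sketch of the collision-override safeguard (hypothetical command
# names and sensor interface): a movement command executes only if no
# obstacle is reported in the commanded direction.
def safe_execute(command, obstacle_directions):
    """Suppress a movement command when a collision sensor reports an
    obstacle in that direction; other commands pass through unchanged."""
    direction = {
        "WHEELCHAIR_FORWARD": "front",
        "WHEELCHAIR_BACKWARD": "rear",
        "WHEELCHAIR_LEFT": "left",
        "WHEELCHAIR_RIGHT": "right",
    }.get(command)
    if direction is not None and direction in obstacle_directions:
        return "STOP"
    return command
```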
[0044] In an embodiment of the present invention, the external peripheral machine 110 may include augmented reality systems where device control outputs may trigger virtual object manipulation, menu navigation, or gesture simulation within an augmented visual field. Similarly, the external peripheral machine 110 may include virtual reality platforms where the commands may allow the user to navigate immersive environments, interact with virtual elements, or perform training exercises. The external peripheral machine 110 may further include communication interfaces such as text-to-speech converters or computer cursors where neural commands may provide direct input for communication by patients with limited motor control.
[0045] In an embodiment of the present invention, the external peripheral machine 110 may support both unidirectional control where signals flow only from the system 100 to the external device, and bi-directional interaction where feedback such as haptic vibration, auditory cues, or visual overlays may be transmitted back to the user for closed-loop operation. The technical implementation of the external peripheral machine 110 may therefore be application-dependent but may always ensure high precision, minimal delay, and safe execution of user-intended actions. By encompassing a wide range of assistive and immersive technologies, the external peripheral machine 110 may extend the usability of the system 100 across healthcare, rehabilitation, accessibility, education, and interactive entertainment sectors.
[0046] In an exemplary embodiment, a user X may operate the system 100, where the user X may wear the wearable headset 102 on the head. The wearable headset 102 may be configured to capture neurophysiological signals of the user X including electroencephalography, functional near-infrared spectroscopy, and electromyography. At least one dry electrode 104 may be in contact with the scalp of the user X so that electroencephalography signals may be acquired without the requirement of conductive gel, thereby enabling long-duration usage with comfort.
[0047] The signals acquired from the wearable headset 102 may be transmitted to the control unit 106. Within the control unit 106, a signal fusion module may preprocess the signals of the user X by applying artifact suppression, alignment, and filtering so that noise from motion or muscular activity may be minimized. Once preprocessed, the signals of the user X may be processed through an adaptive artificial intelligence engine that may apply feature extraction, transfer learning, and reinforcement learning so that the intentions of user X may be classified into device control commands.
[0048] The classified outputs may then be transferred to the edge computing unit 108, which may execute local real-time processing so that the commands of user X may be converted into device control outputs with minimal latency. These device control outputs may then be communicated through an application programming interface to the external peripheral machine 110. For example, when user X imagines movement toward the left, the system 100 may generate a corresponding control output that may cause a powered wheelchair functioning as the external peripheral machine 110 to move left. Similarly, when user X focuses on a selection command, the same output may be applied to navigate or select options within an augmented reality or virtual reality environment supported by the external peripheral machine 110.
[0049] In this manner, the exemplary embodiment may demonstrate how a real user such as user X may interact with the system 100 in daily life. The signals of user X may be processed, classified, and executed in real-time to provide seamless interaction with accessibility devices, immersive technologies, or communication platforms while preserving data security and ensuring usability.
[0050] FIG. 2 depicts a flowchart of a method 200 for operating the system 100, according to an embodiment of the present invention.
[0051] At step 202, the system 100 may receive the neurophysiological signals.
[0052] At step 204, the system 100 may preprocess the received neurophysiological signals.
[0053] At step 206, the system 100 may align the pre-processed neurophysiological signals by means of the signal fusion engine to reduce artifacts and enhance signal quality.
[0054] At step 208, the system 100 may classify the aligned signals through the adaptive artificial intelligence engine configured with the transfer learning to map the neural activity into control commands.
[0055] At step 210, the system 100 may execute the local real-time processing of the classified signals by means of the edge computing unit 108 to generate device control outputs.
[0056] At step 212, the system 100 may transmit the device control outputs through the application programming interface to the external peripheral machine 110.
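The sequence of steps 202 through 212 can be summarized as a single processing chain. Every function below is a hypothetical stand-in for the corresponding system component; the stub logic (identity transforms and a sign-threshold classifier) is purely illustrative.

```python
# Minimal end-to-end sketch of method steps 202-212; all components are stubs.
def receive(headset):        # step 202: acquire neurophysiological signals
    return headset["samples"]

def preprocess(samples):     # step 204: filtering/cleanup (identity stub)
    return samples

def align(samples):          # step 206: signal fusion engine (identity stub)
    return samples

def classify(samples):       # step 208: adaptive AI engine (threshold stub)
    return "select" if sum(samples) > 0 else "idle"

def execute_local(command):  # step 210: edge computing unit builds the output
    return {"command": command}

def transmit(output):        # step 212: API call to the peripheral (stub)
    return ("sent", output)

headset = {"samples": [0.2, 0.5, -0.1]}
result = transmit(execute_local(classify(align(preprocess(receive(headset))))))
```

Composing the steps as pure functions mirrors the claimed order of operations: each stage consumes only the output of the previous one, which is what allows steps 204 through 210 to run locally on the edge computing unit before any transmission occurs.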
[0057] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0058] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
CLAIMS
I/We Claim:
1. A hybrid adaptive non-invasive brain-computer interface system (100), the system (100) comprising:
a wearable headset (102) adapted to be worn by a user on a head, wherein the wearable headset (102) is adapted to acquire neurophysiological signals of the user; and
a control unit (106) communicatively connected to the wearable headset (102), characterized in that the control unit (106) is configured to:
receive the neurophysiological signals;
align the received neurophysiological signals by means of a signal fusion engine to reduce artifacts and enhance signal quality;
classify the aligned signals through an adaptive artificial intelligence engine configured with transfer learning to map neural activity into control commands;
execute a local real-time processing of the classified signals by means of an edge computing unit (108) to generate device control outputs; and
transmit the device control outputs through an application programming interface to an external peripheral machine (110).
2. The system (100) as claimed in claim 1, wherein the wearable headset (102) comprises electroencephalography (EEG) sensors, functional near-infrared spectroscopy (fNIRS) sensors, electromyography (EMG) sensors, or a combination thereof.
3. The system (100) as claimed in claim 1, wherein the control unit (106) is configured to preprocess the received neurophysiological signals.
4. The system (100) as claimed in claim 1, wherein the adaptive artificial intelligence engine is configured to personalize classification accuracy through reinforcement learning based on a physiological state of the user.
5. The system (100) as claimed in claim 1, wherein the edge computing unit (108) is configured to perform a latency reduction to achieve real-time intention recognition and device interaction.
6. The system (100) as claimed in claim 1, wherein the wearable headset (102) comprises dry electrodes (104a-104n) arranged to enable prolonged usage without conductive gel application.
7. The system (100) as claimed in claim 1, wherein the application programming interface is configured to support integration with assistive devices, augmented reality systems, virtual reality systems, communication interfaces, or a combination thereof.
8. The system (100) as claimed in claim 1, wherein the adaptive artificial intelligence engine is configured to reduce calibration time by transfer learning across multiple users.
9. A method (200) for operating a hybrid adaptive non-invasive brain-computer interface system (100), the method (200) being characterized by the steps of:
receiving neurophysiological signals from a wearable headset (102);
aligning the received neurophysiological signals by means of a signal fusion engine to reduce artifacts and enhance signal quality;
classifying the aligned signals through an adaptive artificial intelligence engine configured with transfer learning to map neural activity into control commands;
executing a local real-time processing of the classified signals by means of an edge computing unit (108) to generate device control outputs; and
transmitting the device control outputs through an application programming interface to an external peripheral machine (110).
10. The method (200) as claimed in claim 9, comprising a step of pre-processing the received neurophysiological signals.
Date: October 04, 2025
Place: Noida

Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant

Documents

Application Documents

# Name Date
1 202541096354-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2025(online)].pdf 2025-10-07
2 202541096354-REQUEST FOR EARLY PUBLICATION(FORM-9) [07-10-2025(online)].pdf 2025-10-07
3 202541096354-POWER OF AUTHORITY [07-10-2025(online)].pdf 2025-10-07
4 202541096354-OTHERS [07-10-2025(online)].pdf 2025-10-07
5 202541096354-FORM-9 [07-10-2025(online)].pdf 2025-10-07
6 202541096354-FORM FOR SMALL ENTITY(FORM-28) [07-10-2025(online)].pdf 2025-10-07
7 202541096354-FORM 1 [07-10-2025(online)].pdf 2025-10-07
8 202541096354-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [07-10-2025(online)].pdf 2025-10-07
9 202541096354-EDUCATIONAL INSTITUTION(S) [07-10-2025(online)].pdf 2025-10-07
10 202541096354-DRAWINGS [07-10-2025(online)].pdf 2025-10-07
11 202541096354-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2025(online)].pdf 2025-10-07
12 202541096354-COMPLETE SPECIFICATION [07-10-2025(online)].pdf 2025-10-07