Abstract: MIND-CONTROLLED VIRTUAL KEYBOARD SYSTEM FOR HANDS-FREE TYPING AND DEVICE NAVIGATION ABSTRACT A mind-controlled virtual keyboard system (100) for hands-free typing and device navigation is disclosed. The system (100) comprises a non-invasive electroencephalography (EEG) headset (102) adapted to capture brainwave signals. The captured brainwave signals are associated with an intended character input, navigation commands from a user, or a combination thereof. A processing unit (104) is configured to receive the captured brainwave signals from the EEG headset (102); extract relevant features from the received brainwave signals; decode an intended user input based on the extracted relevant features using a machine learning model (106); and render a virtual keyboard interface (112) based on the decoded input. The system (100) enables the user to type and navigate devices using brain signals alone, removing the need for any physical movement, which benefits individuals with severe motor impairments. Claims: 10, Figures: 3
Description: BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to a hands-free typing system and particularly to a mind-controlled virtual keyboard system for hands-free typing and device navigation.
Description of Related Art
[002] The increasing reliance on digital communication and computing systems has amplified the demand for more accessible human-computer interaction methods. Conventional input devices such as physical keyboards, touchscreens, and mice require precise motor function, which excludes individuals with physical disabilities from full participation in digital environments. The absence of alternative input mechanisms that function without physical effort leaves a significant portion of the population underserved and digitally disconnected.
[003] To bridge this gap, several technologies have come into existence. Voice recognition software allows verbal input, but this method remains ineffective in noisy environments or for users with speech impairments. Eye-tracking systems, while offering hands-free operation, often suffer from high calibration sensitivity, limited precision, and user fatigue. Other mechanical interfaces such as sip-and-puff systems and switch controls serve as rudimentary access tools, but their operation speed and intuitiveness remain well below optimal levels for independent usage.
[004] In parallel, brain-computer interface (BCI) research has sought to create direct neural pathways for device control. Commercially available BCI devices rely on electroencephalography (EEG) to detect brain activity and translate signals into commands. However, such systems frequently exhibit limitations in signal accuracy, response time, user adaptability, and cost-effectiveness. These factors restrict their practical deployment in real-world assistive contexts.
[005] There is thus a need for an improved and advanced mind-controlled virtual keyboard system for hands-free typing and device navigation that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a mind-controlled virtual keyboard system for hands-free typing and device navigation. The system comprises a non-invasive electroencephalography (EEG) headset adapted to capture brainwave signals. The captured brainwave signals are associated with an intended character input, navigation commands from a user, or a combination thereof. The system further comprises a processing unit in communication with the EEG headset. The processing unit is configured to receive the captured brainwave signals from the EEG headset; extract relevant features from the received brainwave signals; decode an intended user input based on the extracted relevant features using a machine learning model; and render a virtual keyboard interface based on the decoded input.
[007] Embodiments in accordance with the present invention further provide a method for enabling hands-free typing and device control using brain-computer interface technology. The method comprises the steps of capturing brainwave signals from a user via a non-invasive electroencephalography (EEG) headset; extracting relevant features from the captured brainwave signals; decoding an intended user input based on the extracted relevant features using a machine learning model; and rendering a virtual keyboard interface based on the decoded input.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a mind-controlled virtual keyboard system for hands-free typing and device navigation.
[009] Next, embodiments of the present application may provide a mind-controlled virtual keyboard system that enables users to type and navigate devices using brain signals alone, removing the need for any physical movement, which benefits individuals with severe motor impairments.
[0010] Next, embodiments of the present application may provide a mind-controlled virtual keyboard system that uses non-invasive EEG headsets to read brain signals, avoiding the risks and discomfort associated with surgically implanted interfaces.
[0011] Next, embodiments of the present application may provide a mind-controlled virtual keyboard system that works silently and does not rely on verbal commands, making it effective in noisy environments and for users with speech disabilities.
[0012] Next, embodiments of the present application may provide a mind-controlled virtual keyboard system that adapts to the user’s unique brainwave patterns over time, increasing accuracy and speed with continued use.
[0013] Next, embodiments of the present application may provide a mind-controlled virtual keyboard system that does not require complex setup, expensive hardware, or extensive training, making it more accessible and practical for widespread use.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor an exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1 illustrates a mind-controlled virtual keyboard system for hands-free typing and device navigation, according to an embodiment of the present invention;
[0018] FIG. 2 illustrates a block diagram of a processing unit, according to an embodiment of the present invention; and
[0019] FIG. 3 depicts a flowchart of a method for enabling hands-free typing and device control using brain-computer interface technology, according to an embodiment of the present invention.
[0020] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0021] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0022] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0023] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0024] FIG. 1 illustrates a mind-controlled virtual keyboard system 100 (hereinafter referred to as the system 100) for hands-free typing and device navigation, according to an embodiment of the present invention. In an embodiment of the present invention, the system 100 may intercept brainwave signals of a user, and may further convert the intercepted brainwave signals into a digital input that may be provided as input to electronic devices (not shown). The electronic devices may be, but are not limited to, a personal computer, a desktop, a laptop, a tablet, a mobile phone, a notebook, a netbook, a smartphone, a wearable device, a home appliance, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the electronic devices, including known, related art, and/or later developed technologies. The system 100 may operate in a non-invasive manner. The system 100 may provide fatigue-free interaction. Further, the system 100 may be used by users who are physically impaired and/or have severe motor impairments.
[0025] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware and software components to enhance processing speed and efficiency. The system 100 may comprise a non-invasive electroencephalography (EEG) headset 102 (hereinafter referred to as the EEG headset 102), a processing unit 104, a machine learning model 106, a convolutional neural network (CNN) 108, probabilistic language models 110, and a virtual keyboard interface 112.
[0026] In an embodiment of the present invention, the EEG headset 102 may be an electronic peripheral that may be worn on a head of the user. The EEG headset 102 may be adapted to capture brainwave signals. The captured brainwave signals may be associated with, but are not limited to, an intended character input, navigation commands from a user, and so forth. Embodiments of the present invention are intended to include or otherwise cover any association of the captured brainwave signals, including known, related art, and/or later developed technologies. The EEG headset 102 may be, but is not limited to, an Emotiv Flex headset, a mental training headset, a g.Nautilus headset, and so forth. In a preferred embodiment of the present invention, the EEG headset 102 may be a dry-electrode headset. Embodiments of the present invention are intended to include or otherwise cover any type of the EEG headset 102, including known, related art, and/or later developed technologies.
[0027] In an embodiment of the present invention, the processing unit 104 may be in communication with the EEG headset 102. The processing unit 104 may further be configured to execute computer-executable instructions to generate an output relating to the system 100. The processing unit 104 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processing unit 104, including known, related art, and/or later developed technologies. In an embodiment of the present invention, the processing unit 104 may further be explained in conjunction with FIG. 2.
[0028] In an embodiment of the present invention, the probabilistic language models 110 may be configured to enhance an accuracy and contextual relevance of the decoded inputs by predicting a most probable sequence of characters, words, or commands based on partial input data received from the machine learning model 106. Additionally, the probabilistic language models 110 may incorporate temporal smoothing across successive predictions to mitigate transient signal noise and ensure stable character or command selection. The probabilistic language models 110 may leverage statistical techniques and natural language processing algorithms to filter and correct unintended or ambiguous inputs arising from noise or signal inconsistencies in the EEG data. The probabilistic language models 110 may be, but are not limited to, n-gram models, hidden Markov models (HMMs), Bayesian networks, transformer-based language models, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of probabilistic or predictive language modeling technique, including known, related art, and/or later developed technologies.
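By way of illustration only, the following Python sketch shows one possible way to realize the temporal smoothing and language-model weighting described above. The function name, the smoothing weight ALPHA, the bigram prior, and the confidence threshold are assumptions introduced for this example and do not limit the embodiments.

```python
# Illustrative sketch only: temporally smoothed character selection that
# combines noisy per-character decoder probabilities with a simple bigram
# language-model prior. All names and constants here are assumptions.

import numpy as np

ALPHABET = list("abcdefghijklmnopqrstuvwxyz")
ALPHA = 0.6  # weight given to the newest decoder frame (assumed value)

def smooth_and_select(decoder_frames, bigram_prior, prev_char, threshold=0.4):
    """Exponentially smooth successive decoder probability vectors, weight
    them by a bigram prior given the previous character, and return the
    selected character only if its posterior clears the threshold."""
    smoothed = np.zeros(len(ALPHABET))
    for frame in decoder_frames:               # frame: P(char | EEG) per step
        smoothed = ALPHA * np.asarray(frame) + (1.0 - ALPHA) * smoothed
    prior = np.array([bigram_prior.get((prev_char, c), 1e-3) for c in ALPHABET])
    posterior = smoothed * prior
    posterior /= posterior.sum()
    best = int(np.argmax(posterior))
    return ALPHABET[best] if posterior[best] >= threshold else None  # defer if unsure
```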
[0029] In an exemplary embodiment of the present invention, the selection of a specific character, such as the letter "R", may be achieved by integrating the decoded EEG signal with probabilistic language modeling. The system estimates the likelihood that the user intended to type the letter "R" by analyzing how closely the observed brainwave pattern matches previously learned EEG patterns associated with that letter. This likelihood is further weighted by how frequently the letter "R" typically appears in a given linguistic context. If the combined probability exceeds a predefined confidence threshold, the system selects "R" as the intended input.
[0030] Likewise, for word prediction, the system may employ a trigram-based language model. For example, if the previously selected words were "the" and "color", the system evaluates the probability of "red" being the next most likely word based on historical usage patterns in natural language. If the word "red" commonly follows the phrase "the color", and the brainwave signal corresponds closely to the EEG pattern associated with that word, the system confidently selects "red" as the intended input. This combined use of neural decoding and linguistic context significantly reduces false selections and improves typing accuracy.
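The combined scoring described in the two preceding paragraphs may be sketched, purely for illustration, as a log-linear fusion of an EEG template likelihood and a trigram language-model probability. The candidate set, the helper callables, and the 0.5 confidence threshold are assumed for this example and are not part of the disclosure.

```python
# Illustrative sketch: fuse (i) how well the brainwave pattern matches the
# learned EEG template for a candidate word with (ii) a trigram probability
# for that word given the two preceding words, then apply a threshold.

import math

def select_word(candidates, eeg_likelihood, trigram_prob, context, threshold=0.5):
    """candidates: iterable of words; eeg_likelihood(word) -> P(signal | word);
    trigram_prob(w1, w2, word) -> P(word | w1, w2); context: (w1, w2)."""
    w1, w2 = context
    scores = {w: math.log(eeg_likelihood(w) + 1e-12)
                 + math.log(trigram_prob(w1, w2, w) + 1e-12)
              for w in candidates}
    m = max(scores.values())                       # normalise into a posterior
    exp = {w: math.exp(s - m) for w, s in scores.items()}
    z = sum(exp.values())
    posterior = {w: v / z for w, v in exp.items()}
    best = max(posterior, key=posterior.get)
    return best if posterior[best] >= threshold else None

# e.g. with context ("the", "color") and candidates {"red", "read", "rest"},
# a strong trigram prior for "red" plus a matching EEG template pushes its
# posterior above the threshold, so "red" is selected.
```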
[0031] FIG. 2 illustrates a block diagram of the processing unit 104, according to an embodiment of the present invention. The processing unit 104 may comprise the computer-executable instructions in the form of programming modules such as a data receiving module 200, a data extraction module 202, a data decoding module 204, and a rendering module 206.
[0032] In an embodiment of the present invention, the data receiving module 200 may be configured to receive the captured brainwave signals from the EEG headset 102. The data receiving module 200 may be configured to pre-process the received brainwave signals using pre-processing techniques. The pre-processing techniques may be, but not limited to, noise reduction, artifact removal, signal normalization, frequency band filtering, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the pre-processing techniques, including known, related art, and/or later developed technologies. The data receiving module 200 may further be configured to transmit the pre-processed brainwave signals to the data extraction module 202.
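A minimal, non-limiting sketch of such pre-processing is given below, assuming the raw EEG arrives as a channels-by-samples NumPy array sampled at a known rate. The 1-40 Hz pass band, the filter order, and the z-score normalisation are illustrative choices only.

```python
# Illustrative pre-processing sketch: band-pass filtering plus per-channel
# z-score normalisation of a (channels x samples) EEG epoch.

import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs=250.0, low=1.0, high=40.0, order=4):
    """Band-pass filter each channel (zero-phase) and z-score normalise it."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=1)          # zero-phase filtering
    mean = filtered.mean(axis=1, keepdims=True)
    std = filtered.std(axis=1, keepdims=True) + 1e-8
    return (filtered - mean) / std                  # normalised epoch
```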
[0033] The data extraction module 202 may be activated upon receipt of the pre-processed brainwave signals from the data receiving module 200. In an embodiment of the present invention, the data extraction module 202 may be configured to extract relevant features from the received brainwave signals. The relevant features may be, but not limited to, a character input, a navigational input, a gesture input, a pointer input, a vocal input, and so forth. Embodiments of the present invention are intended to include or otherwise cover any relevant features, including known, related art, and/or later developed technologies, that may be extracted from the pre-processed brainwave signals. The data extraction module 202 may be configured to transmit the extracted relevant features to the data decoding module 204.
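One non-limiting way the data extraction module 202 might derive such features is per-channel band power computed with Welch's method, as sketched below; the frequency bands, sampling rate, and window length are assumptions made for illustration.

```python
# Illustrative feature extraction: average power in standard EEG frequency
# bands per channel, flattened into a single feature vector.

import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs=250.0):
    """epoch: (channels x samples) array -> flat vector of per-channel power
    in each frequency band defined in BANDS."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(epoch.shape[1], 256), axis=1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))     # mean power per channel
    return np.concatenate(feats)
```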
[0034] The data decoding module 204 may be activated upon receipt of the extracted relevant features from the data extraction module 202. In an embodiment of the present invention, the data decoding module 204 may be configured to decode an intended user input. The intended user input may be decoded using the machine learning model 106. The machine learning model 106 may be adapted to employ the convolutional neural network (CNN) 108 that may be trained on user-specific brain signal datasets. Further, the data decoding module 204 may be configured to reduce false input detection using the probabilistic language models 110. The data decoding module 204 may be configured to transmit the decoded input to the rendering module 206.
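By way of example only, a compact CNN decoder of the kind described may resemble the following PyTorch sketch; the layer sizes, kernel widths, and input dimensions are illustrative assumptions rather than the claimed architecture, and a deployed model would be trained on user-specific EEG datasets as stated above.

```python
# Illustrative CNN decoder sketch mapping a (channels x samples) EEG epoch
# to class scores over characters or commands.

import torch
import torch.nn as nn

class EEGCharDecoder(nn.Module):
    def __init__(self, n_channels=8, n_samples=250, n_classes=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)   # characters / commands

    def forward(self, x):                  # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)   # (batch, 32)
        return self.classifier(z)          # unnormalised class scores

# probabilities for the probabilistic language-model stage:
# probs = torch.softmax(EEGCharDecoder()(torch.randn(1, 8, 250)), dim=-1)
```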
[0035] The rendering module 206 may be activated upon receipt of the decoded input from the data decoding module 204. In an embodiment of the present invention, the rendering module 206 may be configured to render the virtual keyboard interface 112 based on the decoded input. The rendering of the virtual keyboard interface 112 may further account for user preferences and a usage history. Accounting for the user preferences and the usage history may enable the rendering module 206 to customize a language and a layout of the virtual keyboard interface 112. The customization of the virtual keyboard interface 112 may further allow for increased speed, accuracy, and accessibility.
[0036] The rendering module 206 may be configured to operate a real-time feedback loop that may be adapted to improve accuracy by learning the brainwave signals of the user over time. The real-time feedback loop may thus increase accuracy with continued use, which may make the system 100 more feasible for users with severe motor disabilities, allowing smooth, effective, and independent digital communication without requiring movement.
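A non-limiting sketch of one possible feedback loop is shown below. It assumes the user confirms or corrects each selection so that labelled epochs can be buffered and used for brief fine-tuning steps; the buffer size, batch size, learning rate, and confirmation mechanism are all assumptions for illustration.

```python
# Illustrative adaptation loop: buffer confirmed (epoch, label) pairs and
# periodically fine-tune the decoder on the most recent examples.

import torch
import torch.nn as nn
from collections import deque

class FeedbackLoop:
    def __init__(self, model, lr=1e-4, buffer_size=256):
        self.model = model
        self.buffer = deque(maxlen=buffer_size)
        self.optim = torch.optim.Adam(model.parameters(), lr=lr)
        self.loss_fn = nn.CrossEntropyLoss()

    def confirm(self, epoch, label):
        """Called when the user confirms (or corrects) a selection."""
        self.buffer.append((epoch, label))
        if len(self.buffer) >= 16:                   # small adaptation batch
            xs, ys = zip(*list(self.buffer)[-16:])
            x = torch.stack(xs)                      # (16, channels, samples)
            y = torch.tensor(ys)
            self.optim.zero_grad()
            loss = self.loss_fn(self.model(x), y)
            loss.backward()
            self.optim.step()
```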
[0037] FIG. 3 depicts a flowchart of a method 300 for enabling hands-free typing and device control using the system 100, according to an embodiment of the present invention.
[0038] At step 302, the system 100 may capture the brainwave signals from the user via the EEG headset 102.
[0039] At step 304, the system 100 may pre-process the captured brainwave signals using the pre-processing techniques.
[0040] At step 306, the system 100 may extract the relevant features from the captured brainwave signals.
[0041] At step 308, the system 100 may decode the intended user input based on the extracted relevant features using the machine learning model 106.
[0042] At step 310, the system 100 may reduce the false input detection using the probabilistic language models 110.
[0043] At step 312, the system 100 may render the virtual keyboard interface 112 based on the decoded input.
[0044] At step 314, the system 100 may customize the virtual keyboard interface 112 in the layout and the language based on the user preferences and the usage history.
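Purely as an illustration, the method steps 302 through 312 may be strung together as in the following sketch, in which each stage is passed in as a callable; the parameter names are assumptions and the sketch is not the claimed implementation.

```python
# Illustrative end-to-end pipeline mirroring steps 302-312 of method 300.

def run_pipeline(acquire_epoch, preprocess, extract_features,
                 decode, language_filter, render):
    raw = acquire_epoch()                  # step 302: capture brainwave signals
    clean = preprocess(raw)                # step 304: pre-process the signals
    feats = extract_features(clean)        # step 306: extract relevant features
    probs = decode(feats)                  # step 308: decode the intended input
    selection = language_filter(probs)     # step 310: suppress false inputs
    if selection is not None:
        render(selection)                  # step 312: update the virtual keyboard
    return selection
```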
[0045] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0046] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims: CLAIMS
I/We Claim:
1. A mind-controlled virtual keyboard system (100) for hands-free typing and device navigation, the system (100) comprising:
a non-invasive electroencephalography (EEG) headset (102) adapted to capture brainwave signals, wherein the captured brainwave signals are associated with an intended character input, navigation commands from a user, or a combination thereof; and
a processing unit (104) in communication with the EEG headset (102), characterized in that the processing unit (104) is configured to:
receive the captured brainwave signals from the EEG headset (102);
extract relevant features from the received brainwave signals;
decode an intended user input based on the extracted relevant features using a machine learning model (106); and
render a virtual keyboard interface (112) based on the decoded input.
2. The system (100) as claimed in claim 1, wherein the EEG headset (102) is a dry-electrode headset.
3. The system (100) as claimed in claim 1, wherein the processing unit (104) is configured to pre-process the received brainwave signals using pre-processing techniques selected from noise reduction, artifact removal, signal normalization, frequency band filtering, or a combination thereof.
4. The system (100) as claimed in claim 1, wherein the machine learning model (106) employs a convolutional neural network (CNN) (108) trained on user-specific brain signal datasets.
5. The system (100) as claimed in claim 1, wherein the virtual keyboard interface (112) is customizable in layout and language based on user preferences and usage history.
6. The system (100) as claimed in claim 1, wherein the processing unit (104) is configured to reduce false input detection using probabilistic language models (110).
7. A method (300) for enabling hands-free typing and device control using brain-computer interface technology, the method (300) being characterized by the steps of:
capturing brainwave signals from a user via a non-invasive electroencephalography (EEG) headset (102);
extracting relevant features from the captured brainwave signals;
decoding an intended user input based on the extracted relevant features using a machine learning model (106); and
rendering a virtual keyboard interface (112) based on the decoded input.
8. The method (300) as claimed in claim 7, comprising a step of pre-processing the captured brainwave signals using pre-processing techniques selected from noise reduction, artifact removal, signal normalization, frequency band filtering, or a combination thereof.
9. The method (300) as claimed in claim 7, comprising a step of reducing a false input detection using probabilistic language models (110).
10. The method (300) as claimed in claim 7, comprising a step of customizing the virtual keyboard interface (112) in layout and language based on user preferences and usage history.
Date: May 12, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant