
Basics of Brain-Computer Interface



Brain-Computer Interface (BCI) technology is revolutionizing how humans interact with computers. At its core, BCI is a communication system that captures commands from the brain and translates them into a language a computer can understand—allowing it to perform specific tasks. While its current applications are largely medical, the potential for expansion into non-medical domains like gaming, education, and assistive technologies is rapidly growing.



In this introductory post, I’ll walk you through the key components of a BCI system to give you an overall picture. Each component will be explored in detail in subsequent blog entries.



1. Capturing Brain Signals


The first step in any BCI system is acquiring brain signals. This is done using sensors that detect the electrical activity of neurons. Based on the placement of these sensors, BCIs are categorized as:


  • Invasive BCI: Electrodes are implanted directly into brain tissue, giving the highest-fidelity recordings. Implantation requires open-skull surgery to place the electrodes within the layers of the brain.


  • Partially Invasive BCI: Implants sit inside the skull but outside brain tissue. The brain is covered by three membranes called the meninges (dura, arachnoid, and pia), which lie just beneath the skull. Electrodes are placed either below the dura mater, directly on the brain surface (subdural electrocorticography), or above the meninges but still inside the skull (extradural electrocorticography). Both invasiveness and signal quality fall between the other two categories: the recordings are noisier than those from invasive BCIs but contain fewer artifacts than non-invasive recordings.


  • Non-Invasive BCI: Sensors are placed on the scalp or body surface, as in EEG, fMRI, Near-Infrared Spectroscopy (NIRS), and EMG. No surgery is required, and the sensors can be removed and reused as needed, which makes this the most widely used category.



Signal Acquisition Methods: We use the following devices to acquire the signals: Electroencephalogram (EEG), Electrocorticogram (ECoG), Functional Magnetic Resonance Imaging (fMRI), Near-Infrared Spectroscopy (NIRS), Electromyography (EMG), among others.


The signals acquired with these devices fall into several categories: visual evoked potentials, the P300 response, steady-state visual evoked potentials (SSVEP), spontaneous signals generated without external stimulation, and hybrid combinations of these.


Once the subjects are prepared, electrodes are placed on the scalp to record brain activity. EEG signals vary significantly between individuals and even across different recording sessions for the same individual. Therefore, multiple recording sessions are typically required to collect sufficient data and train the BCI system to accurately recognize specific commands. The duration of each recording session is determined by the objectives of the study or application.
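To make the acquisition step concrete, here is a minimal sketch of what a multi-channel EEG buffer looks like in software. It assumes the open-source MNE-Python library; the channel names, sampling rate, and random data are placeholders for what a real amplifier would stream.

```python
import numpy as np
import mne

# Illustrative parameters: 8 scalp channels (10-20 system names) at 250 Hz.
sfreq = 250
ch_names = ["Fz", "Cz", "Pz", "Oz", "C3", "C4", "P3", "P4"]
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types="eeg")

# Random numbers stand in for 10 seconds of data from a real EEG amplifier.
data = np.random.randn(len(ch_names), sfreq * 10) * 1e-5  # roughly 10 µV scale
raw = mne.io.RawArray(data, info)

print(raw)  # summary: channel count, sampling rate, duration
```

The same Raw object is also what you would get after loading a recorded session from disk, which is the usual starting point for the preprocessing described next.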





2. Signal Preprocessing and Extraction


Raw brain data is complex and noisy. Along with the actual command signals, it also carries background noise (e.g., eye blinks, heartbeats, electromagnetic disturbances from the surrounding environment) and overlapping signals from multiple brain regions. We need to extract the meaningful commands from these distractions.


Techniques used: Independent Component Analysis (ICA), Common Average Reference (CAR), Adaptive Filters, Principal Component Analysis (PCA), Surface Laplacian (SL), Signal De-Noising, among others.


These methods help filter out artifacts and environmental disturbances, allowing the system to focus on the relevant neural signals.
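Two of the techniques listed above, the common average reference and band-pass filtering, are simple enough to sketch directly. The snippet below is a minimal illustration using NumPy and SciPy; the channel count, sampling rate, and the 1-40 Hz band are assumptions chosen for the example rather than fixed choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def common_average_reference(eeg):
    """Subtract the mean across channels (CAR) from every channel.
    eeg has shape (n_channels, n_samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def bandpass(eeg, low, high, sfreq, order=4):
    """Zero-phase Butterworth band-pass filter applied channel by channel."""
    b, a = butter(order, [low, high], btype="bandpass", fs=sfreq)
    return filtfilt(b, a, eeg, axis=1)

# Example: re-reference, then keep the 1-40 Hz band often used for EEG work.
sfreq = 250
eeg = np.random.randn(8, sfreq * 10)   # placeholder for a real recording
clean = bandpass(common_average_reference(eeg), 1.0, 40.0, sfreq)
```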



3. Signal Amplification and Feature Extraction


Sometimes, the extracted signals are weak and need amplification to enhance accuracy. Once refined, the signals are transformed into a feature vector, which is a compact representation of the command.

Feature vectors vary with the task: the features extracted for motor tasks differ from those for visual tasks. A recording yields many such vectors, so they must be classified for the application device to interpret the intended command.
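One common (though by no means the only) choice of feature vector is the power of the signal in task-relevant frequency bands, for example the mu and beta rhythms used in motor-imagery BCIs. The sketch below computes such band-power features with SciPy; the band limits and trial length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def bandpower_features(eeg, sfreq, bands=((8, 12), (13, 30))):
    """Average spectral power per channel in each band, flattened into one
    feature vector. eeg has shape (n_channels, n_samples)."""
    freqs, psd = welch(eeg, fs=sfreq, nperseg=sfreq, axis=1)
    feats = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs <= high)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

sfreq = 250
trial = np.random.randn(8, sfreq * 2)       # one 2-second trial
x = bandpower_features(trial, sfreq)
print(x.shape)                               # (16,): 8 channels x 2 bands
```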



4. Classification of Commands


The feature vector is classified based on the type of command it carries. BCI classifiers use machine learning algorithms to categorize the signal into actionable outputs such as moving a robotic arm in a specific direction, selecting an icon on the screen, or controlling a wheelchair.

Classification methods: Linear classifiers, neural networks, non-linear Bayesian classifiers, nearest neighbor classifiers, hybrid classifiers, among others.
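As a small illustration of the linear-classifier case, the sketch below trains linear discriminant analysis (LDA) with scikit-learn. The feature matrix is random placeholder data standing in for band-power features from labelled trials, so the cross-validated accuracy hovers around chance; with real motor-imagery data the same pipeline would learn a meaningful decision boundary.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder dataset: 100 trials x 16 features, two classes
# (e.g. "left hand" vs "right hand" motor imagery).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
y = rng.integers(0, 2, size=100)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")  # near 0.5 on random data
```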



5. Execution of the Task


Once classified, the command is sent to the computer or external device to perform the desired action. This is where the brain’s intent becomes a tangible output—whether it's cursor movement, robotic control, or communication.
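In software this last step is often just a lookup from the predicted class to a device command. The mapping below is purely hypothetical, and the print statement stands in for whatever control API the output device actually exposes.

```python
# Hypothetical mapping from classifier output to device commands.
ACTIONS = {
    0: "wheelchair: stop",
    1: "wheelchair: move forward",
    2: "wheelchair: turn left",
    3: "wheelchair: turn right",
}

def execute(predicted_class: int) -> None:
    command = ACTIONS.get(predicted_class, "no-op")
    # A real system would call the device's control API here;
    # this sketch simply logs the intended action.
    print(f"Executing command: {command}")

execute(1)  # -> Executing command: wheelchair: move forward
```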



6. Training and Adaptation


BCI systems require training to adapt to individual users, and this remains one of the current limitations of the technology. There are two main operating modes:


  • Synchronous BCI: The system cues the user at specific times, and commands are accepted only during those windows.

  • Asynchronous BCI: The system continuously monitors brain activity and detects commands whenever the user issues them (see the sketch after this list).
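To give a feel for the asynchronous case, here is a toy sketch of a self-paced loop that slides a window over an incoming signal buffer and acts only when the classifier reports something other than rest. The buffer, window lengths, and the variance-based fake classifier are all stand-ins for a real data stream and a trained model.

```python
import numpy as np

def asynchronous_loop(stream, classify, sfreq, window_s=2.0, step_s=0.5):
    """Slide a window over the signal and classify every step, ignoring
    'rest' output. stream has shape (n_channels, n_samples); classify maps
    a (n_channels, window) segment to a label."""
    window = int(window_s * sfreq)
    step = int(step_s * sfreq)
    for start in range(0, stream.shape[1] - window + 1, step):
        label = classify(stream[:, start:start + window])
        if label != "rest":                  # only act on non-rest detections
            yield start / sfreq, label       # (time in seconds, command)

# Toy usage: a fake classifier that flags high-variance segments as a command.
sfreq = 250
buffer = np.random.randn(8, sfreq * 10)
fake_classify = lambda seg: "move" if seg.var() > 1.05 else "rest"
for t, cmd in asynchronous_loop(buffer, fake_classify, sfreq):
    print(f"{t:.1f}s -> {cmd}")
```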


In the upcoming blog posts, I’ll dive deeper into each of these components.

 
 
 
