Goal: Develop a framework for Brain Computer Interfaces to control robotic devices with commercially available hardware.
Short summary: Brain Computer Interfaces (BCIs) analyze biological signals related to brain activity to extract the commands a user wants to issue, allowing users to interact with devices through their thoughts. However, most such signals, such as the electroencephalogram (EEG), have a low Signal-to-Noise Ratio (SNR), meaning that the information of interest is buried in noise from other sources, such as eye blinks or electrical appliances. BCIs therefore usually require medical-grade equipment to measure signals of sufficient quality, which is costly and rarely portable.
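As a hedged illustration of the SNR problem (not part of the proposed framework, and with purely illustrative parameter values), the sketch below simulates a 10 Hz alpha rhythm buried in broadband noise and recovers it with a simple FFT band-pass mask, the kind of preprocessing any EEG pipeline would start from:

```python
import numpy as np

# Illustrative settings: 250 Hz sampling (typical for consumer EEG), 4 s window.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

alpha = np.sin(2 * np.pi * 10 * t)        # the brain component we care about
noise = 3 * rng.standard_normal(t.size)   # broadband noise (artifacts, mains, ...)
eeg = alpha + noise                       # raw "recording": alpha drowned in noise

def bandpass(x, fs, lo, hi):
    """Zero out all FFT bins outside [lo, hi] Hz and invert the transform."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=x.size)

# Restricting to the 8-12 Hz alpha band removes most of the noise power,
# so the filtered trace tracks the true rhythm far better than the raw one.
filtered = bandpass(eeg, fs, 8, 12)
corr_raw = np.corrcoef(eeg, alpha)[0, 1]
corr_filt = np.corrcoef(filtered, alpha)[0, 1]
```

Band-pass filtering alone is of course not sufficient for command decoding; it only illustrates why low-SNR consumer recordings need further algorithmic processing.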
To extract the relevant information from signals measured with commercial-grade portable equipment, we need algorithms that can identify conscious user commands within these low-SNR signals. Typically, machine learning algorithms trained on data from subjects performing a task related to the information of interest are used for this purpose. The current state of the art in machine learning is so-called deep learning with neural networks. We therefore aim to create a deep learning method that extracts this information from commercially available sensors, with the purpose of using the BCI system as a controller for robotic devices.
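To make the learning step concrete, here is a minimal sketch of the kind of classifier such a pipeline trains: a tiny fully-connected network fit with plain gradient descent on synthetic two-dimensional "EEG features", standing in for two user commands (say, "move left" vs. "move right"). The data, architecture, and hyperparameters are all assumptions for illustration; a real system would use a deeper network on raw multi-channel EEG.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic band-power features: each class clusters around its own centre.
n = 200
X = np.vstack([
    rng.normal(loc=[1.0, -1.0], scale=0.5, size=(n, 2)),   # command 0
    rng.normal(loc=[-1.0, 1.0], scale=0.5, size=(n, 2)),   # command 1
])
y = np.concatenate([np.zeros(n), np.ones(n)])

# One tanh hidden layer, sigmoid output, binary cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # predicted P(command 1)
    g = (p.ravel() - y)[:, None] / len(y)     # gradient of BCE w.r.t. logits
    # Backpropagate through both layers and take a gradient step.
    dW2 = h.T @ g;  db2 = g.sum(0)
    dh = g @ W2.T * (1 - h ** 2)              # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Training accuracy on the separable toy data.
pred = (1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5).ravel()
accuracy = (pred == y).mean()
```

In a deployed BCI controller, the predicted class would then be mapped to a robot command; the hard part the proposal targets is making this decoding reliable on noisy consumer-grade recordings.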