April 29, 2017
 

Using a brain/computer interface for smart-home control

Christoph Guger
Just by thinking, users can select computer commands to turn on television sets, open or close doors and play music.

An electroencephalogram (EEG)-based brain/computer interface (BCI) measures and analyses electrical brain activity to control external devices.1–3 The key performance dimensions of such an interface are its communication speed, the accuracy with which it detects mental processes and the number of commands it can offer. BCI setups are currently used mainly for moving a cursor on a computer screen, controlling external objects such as wheelchairs, or for spelling tasks.1–5 Systems for smart-home control need especially high accuracy, many choices and ease of use,6 but communication speed matters less than in spelling applications.

BCIs measure electrical activity using EEG combined with sophisticated signal-analysis algorithms that focus on specific frequency ranges. Devices using motor imagery, such as imagined left- or right-hand movement, offer only a few degrees of freedom (enough, for example, to wiggle a cursor around a computer monitor).2,3 Another commonly employed method uses flickering lights: the flicker frequency appears in the EEG, revealing which light the subject is looking at.5 Such methods usually allow users to select from only 12 different choices. A smart-home environment, however, has many controllable elements and requires several hundred options. This can be achieved with an interface based on the P300 evoked potential.
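As a minimal illustration of the flicker-frequency approach, the following Python sketch scores a set of candidate stimulus frequencies by their spectral power in a single EEG channel and picks the strongest. The single-channel setup and the 0.5 Hz band width are simplifying assumptions for illustration, not the detection pipeline of the cited systems.

import numpy as np

def attended_stimulus(eeg, fs, stimulus_freqs):
    """Guess which flickering light the user is looking at.

    eeg: 1-D array of occipital EEG samples (hypothetical single channel).
    fs: sampling rate in Hz.
    stimulus_freqs: flicker frequency of each light, in Hz.
    """
    # Windowed power spectrum of the recording
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in stimulus_freqs:
        # Sum the power in a narrow band around each flicker frequency
        band = (freqs > f - 0.5) & (freqs < f + 0.5)
        powers.append(spectrum[band].sum())
    # The attended light entrains the EEG at its flicker frequency
    return int(np.argmax(powers))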

In this system, English letters (A, B, …, Z) and Arabic numerals (1, 2, …, 9) are arranged in a 6×6 matrix on a computer screen. The characters are highlighted one after another in random order while the subject concentrates on the specific letter they want to spell. Whenever the character on which the user is concentrating flashes, the EEG exhibits the ‘P300 signal’: an enhancement of the signal amplitude roughly 300 ms after the flash that can be detected automatically. To improve detection accuracy, the system initially highlights each character 15 times. Such devices serve as spelling systems for completely paralysed people.
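The detection principle can be sketched in a few lines of Python: epochs time-locked to each flash are averaged per character, and the character with the largest mean amplitude in a window around 300 ms is selected. The single-channel layout and the 250-450 ms window are assumptions for illustration; practical systems use several channels and a trained classifier.

import numpy as np

def detect_p300_target(epochs, flashed_chars, fs):
    """Pick the character whose flashes evoke the largest P300.

    epochs: array of shape (n_flashes, n_samples), one EEG segment per
            flash, time-locked to stimulus onset (single channel assumed).
    flashed_chars: list of length n_flashes naming the flashed character.
    fs: sampling rate in Hz.
    """
    # Averaging the repeated flashes (e.g. 15 per character) suppresses
    # background EEG while the stimulus-locked P300 survives.
    window = slice(int(0.25 * fs), int(0.45 * fs))  # ~250-450 ms post-flash
    scores = {}
    for ch in set(flashed_chars):
        idx = [i for i, c in enumerate(flashed_chars) if c == ch]
        avg = epochs[idx].mean(axis=0)
        scores[ch] = avg[window].mean()  # mean amplitude near 300 ms
    return max(scores, key=scores.get)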


Figure 1. Virtual representation of a smart home.

To use and test this system for smart-home control, we implemented a virtual representation of a real home and projected it onto a Cave Automatic Virtual Environment wall, which uses stereoscopic projections to immerse the person in the environment (see Figure 1). In the experiment, subjects had to be able to switch the lights on and off, open and close doors and windows, control the television (TV) set, use the phone, play music, operate a video camera at the entrance, walk around in the house and move to a specific location in the smart home. We therefore developed special control masks for the BCI system containing all the necessary commands: seven masks in total, for controlling the light, music, phone, temperature and TV (see Figure 2, left), as well as for moving objects or going to a specific location (see Figure 2, right).
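To give a flavour of how such masks might be organized in software, the sketch below maps each mask to its command icons; the identifiers are hypothetical stand-ins for the actual icons shown in Figure 2.

# Hypothetical representation of the seven control masks; the real
# icons and commands are the ones displayed in Figure 2.
CONTROL_MASKS = {
    "light":       ["light_on", "light_off"],
    "music":       ["play", "stop", "next_track", "volume_up", "volume_down"],
    "phone":       ["call", "hang_up"],
    "temperature": ["warmer", "cooler"],
    "tv":          ["tv_on", "tv_off", "channel_up", "channel_down"],
    "move_object": ["open_door", "close_door", "open_window", "close_window"],
    "go_to":       ["kitchen", "living_room", "bedroom", "entrance"],
}

def icons_for(mask_name):
    """Return the icons of one mask, laid out as a P300 flash matrix."""
    return list(CONTROL_MASKS[mask_name])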

Twelve subjects participated in an initial experiment. We first trained them to select specific commands from the P300 matrices, and this EEG data was used to calibrate the BCI system to each individual. Subjects could then select icons in real time. To measure accuracy, each user had to select a specific command sequence to perform certain tasks, such as watching TV.
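Conceptually, calibration amounts to fitting a subject-specific target-versus-non-target classifier on the labelled training epochs and then ranking icons online by their averaged classifier score. The sketch below uses linear discriminant analysis, a common choice in the P300 literature (see Reference 4); the feature layout is an assumption.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def calibrate(features, is_target):
    """Fit a subject-specific P300 classifier.

    features: array of shape (n_flashes, n_features), one feature vector
              per post-flash window (hypothetical feature extraction).
    is_target: boolean array, True where the flashed icon was the one
               the subject attended during training.
    """
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, is_target)
    return clf

def select_icon(clf, features_by_icon):
    """Online step: choose the icon with the highest mean classifier score."""
    return max(features_by_icon,
               key=lambda icon: clf.decision_function(
                   features_by_icon[icon]).mean())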

About 82% of users achieved perfect (100%) control of the system with only five minutes of training and 15 flashes per icon.7 We then tested whether we could reduce the number of flashes to increase decision speed. One of the twelve subjects could operate the system without any mistake using only four flashes per icon. Even with only two flashes, the worst classification result among the 12 subjects was 30%, far above the random-selection probability of 1/36 (about 2.8%).


Figure 2. (left) Control icons for TV, music, temperature and other functions. (right) Icons used to move a person to a specific location inside the home, shown in a bird's-eye view of the house.

The BCI system was initially trained with EEG data from a few randomly selected icons, and relied on this subject-specific information for all other commands. Participants could therefore use icons for many different tasks without time-consuming, tedious prior training on each individual icon. This reduced the training time compared with other BCI implementations, which need hours or even weeks.1–3 The improvement might be especially important for locked-in and amyotrophic-lateral-sclerosis patients, who have trouble concentrating for long periods. The concept also works better when more items are presented in the control mask, since the P300 response becomes more pronounced as the likelihood of the target character being highlighted decreases. Of course, this lowers the information-transfer rate, but it enables control of almost any device.
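This trade-off can be quantified with the standard (Wolpaw) information-transfer-rate formula, sketched below; the example values in the closing comment are illustrative only.

import math

def wolpaw_itr(n_choices, accuracy, selections_per_minute):
    """Information-transfer rate in bits per minute (Wolpaw definition)."""
    n, p = n_choices, accuracy
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if 0.0 < p < 1.0:
        # Penalty for errors, spread uniformly over the wrong choices
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_minute

# A 6x6 mask at 100% accuracy carries log2(36) ~ 5.17 bits per selection;
# a larger mask carries more bits per selection, but each selection takes
# longer to flash through, so the overall rate can still drop.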

The P300-based BCI strongly supports applications that require reliable decisions, making it an optimal tool for smart-home control. In such experiments, the virtual smart home acts as a test installation for real versions. Wheelchair control, which many designers identify as their target application, can also use this type of system in a goal-oriented way. With such an approach, people do not need to steer a robotic hand by imagining hand or foot movements and issuing right, left, up or down commands. Instead, users simply think "I want to pick up the glass" and the system initiates the corresponding real-world commands.

We will next interface the BCI system to real smart homes. The interface is currently being developed as part of the European Union project ‘Smart Homes for All’ (SM4all). This middleware package will allow control of domotic devices (robotic technologies for the home) that are already on the market, including cameras, window and door openers, TV sets and many more. To improve system accuracy and speed, we are also investigating implanted electrodes.
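The SM4all interfaces themselves are not described here, but conceptually the glue code reduces to forwarding each selected icon to the middleware as a device command. The endpoint, transport and message schema below are purely hypothetical.

import json
import urllib.request

MIDDLEWARE_URL = "http://smart-home.local/api/command"  # hypothetical endpoint

def send_command(device, action):
    """Forward a BCI selection (e.g. device='tv', action='on') to the home."""
    payload = json.dumps({"device": device, "action": action}).encode()
    request = urllib.request.Request(
        MIDDLEWARE_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status == 200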


Author

Christoph Guger
g.tec: Guger Technologies OEG


References
  1. N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub and H. Flor, A spelling device for the paralysed, Nature 398, pp. 297-298, 1999.

  2. C. Guger, A. Schlögl, C. Neuper, D. Walterspacher, T. Strein and G. Pfurtscheller, Rapid prototyping of an EEG-based brain-computer interface (BCI), IEEE Trans. Rehab. Eng. 9 (1), pp. 49-58, 2001.

  3. T. M. Vaughan, J. R. Wolpaw and E. Donchin, EEG-based communication: prospects and problems, IEEE Trans. Rehab. Eng. 4, pp. 425-430, 1996.

  4. D. Krusienski, E. Sellers, F. Cabestaing, S. Bayoudh, D. McFarland, T. Vaughan and J. Wolpaw, A comparison of classification techniques for the P300 speller, J. Neural Eng. 3, pp. 299-305, 2006.

  5. G. R. McMillan, G. L. Calhoun et al., Direct brain interface utilizing self-regulation of steady-state visual evoked response, Proc. Rehab. Eng. Assist. Technol. Soc. N. Am., pp. 693-695, June 9–14, 1995.

  6. C. Guger, C. Holzner, C. Groenegress, G. Edlinger and M. Slater, Control of a smart home with a brain-computer interface, Proc. 3rd Brain Comput. Interface Worksh. Train. Course, 2008.

  7. C. Guger, S. Daban, E. Sellers, C. Holzner, R. Carabalona, F. Gramatica and G. Edlinger, How many people are able to control a P300-based brain-computer interface?, Neurosci. Lett., in press.


 
DOI:  10.2417/2200907.1741