Electroencephalography (EEG) is the process of measuring the electrical activity of the brain. It was the first brain imaging technology to be invented, and it remains one of the most commonly used in medical settings, alongside MRI and CT scans.
The technology works as follows. Each time a neuron fires, it produces a small electrical pulse known as an action potential. When millions of neurons fire in synchrony, the combined signal becomes large enough to measure through the scalp.
Small conductive plates (electrodes) placed in contact with the top of the scalp detect this electrical activity. By comparing the readings across several electrodes at different locations on the head, we can infer certain information about what the brain is doing at a given moment.
EEG is largely responsible for our modern understanding of the brain and the central nervous system. The discovery of electrical pulses in neurons spawned the understanding of neurotransmitters, nerve firings, and other phenomena that make our bodies tick. To this day, EEG is used in the diagnosis of epilepsy and other neurological disorders.
So why do we care?
Historically, EEG has been confined to hospitals and universities.
EEG scans were prescribed by a doctor. The patient would head to the hospital, have 64 electrodes attached to their head with conductive paste, get hooked up to a $35,000 piece of equipment, and sit very still while a clinician interpreted the results. The process was long, messy, and very expensive.
That all changed very recently.
Consumer EEG devices are popping up on the market. As costs decrease and computers improve, the capabilities of at-home EEG devices are advancing. The technology is leaving the lab, and entering your home.
With novelty comes skepticism. As developers of a consumer EEG device, we’ve been surrounded by experts in the field for over a year. As such, we decided to write a quick history of EEG: how it started, what recently changed, and where it’s heading.
How it started
The first pre-EEG experiments took place in the late 1800s. Researcher Richard Caton used a galvanometer (the precursor to our modern multimeter) to detect small electrical changes that occurred in nerve cells. These experiments were rough, but clearly indicated that nerve cells used electricity as a means of transmitting information across the body. At the time, this was a major discovery.
Human EEG only took off in 1929, thanks to Hans Berger. He used a much more advanced galvanometer, which would scribble the electrical signal onto a piece of paper rather than simply displaying it. Along with better electrodes, the ability to perform post-measurement analysis opened up new avenues in EEG reading.
One of Berger’s big discoveries was that the brain emitted different electrical signatures at different locations. He named the different waves Alpha and Beta. These definitions, while very rough, remain a prominent part of modern EEG analysis.
Until this point, EEG measurements were purely research-oriented. This changed in the 60s, with the move to digital electronics.
Until the 1960s, an EEG test involved clinicians staring at the EEG signature, trying to infer brain activity from the shape of the graph. It was a very approximate science.
Adding computers to the mix meant that data could be analyzed in much more depth, using a much greater number of electrodes. One algorithm in particular – the Fast Fourier Transform – allowed researchers to see which frequencies were present in the EEG signal. This reinforced the concept of alpha and beta waves, which were found to correlate roughly with specific mental states:
Delta (0.5–4 Hz): slow-wave sleep
Theta (4–7.5 Hz): drowsiness
Alpha (7.5–14 Hz): calm, relaxed focus
Beta (14–30 Hz): executive function, alertness
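The band decomposition above is straightforward to sketch in code. This is a minimal illustration using NumPy's FFT on a synthetic one-second signal; a real analysis would window the recording and average over segments (e.g. Welch's method), and the band edges shown are the ones from the list above rather than a universal standard.

```python
import numpy as np

def band_powers(signal, fs):
    """Estimate power in the classic EEG bands from the FFT of `signal`
    sampled at `fs` Hz. A minimal sketch, not a clinical pipeline."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = {"delta": (0.5, 4), "theta": (4, 7.5),
             "alpha": (7.5, 14), "beta": (14, 30)}
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Synthetic example: one second of a 10 Hz (alpha-range) sine wave.
fs = 256
t = np.arange(fs) / fs
reading = np.sin(2 * np.pi * 10 * t)
powers = band_powers(reading, fs)
# For a 10 Hz tone, the alpha band dominates the power estimate.
```

On a real multi-channel recording, the same computation would run per electrode and per short time window, producing the band-power time series that neurofeedback systems react to.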
The 60s also brought forward the concept of neurofeedback – training users to control their brain frequencies through auditory or visual feedback. The most popular neurofeedback protocol was alpha-theta, where users would train their minds to produce more alpha waves, and fewer theta waves.
Following these protocols, neurofeedback produced measurable results across a number of domains, from treating neurological disorders like ADHD, to improving cognitive performance in neurotypical adults. The results demonstrated effects comparable in magnitude to the use of methylphenidate, a common treatment for ADHD.
From the 60s onwards, EEG research remained relatively constant, as more neurological phenomena were discovered, and assigned to their respective theta/alpha/beta discretization.
This continued until very recently.
What recently changed?
The past 10 years have brought drastic changes to EEG analysis, thanks to two developments: low-noise electronics and machine learning algorithms.
Until recently, very expensive amplifiers were required to avoid degrading the very faint brain signal. Electrodes also had to be coated in conductive paste, to yield the best possible connection to the head. A recent decrease in the cost of low-noise, high-impedance amplifiers (in particular, the TI ADS129x series) has made high-fidelity data acquisition possible without spending thousands of dollars.
At the same time, EEG practices started using active electrodes, which amplify the signal at each electrode before it can accumulate noise while traveling through the wire to the control unit.
Together, these changes meant that researchers could get a better EEG signal more cheaply and easily than ever before.
Machine learning algorithms changed the way we approach most analytics tasks.
Machine learning involves letting the computer learn what patterns are indicative of the phenomenon you are measuring, instead of trying to guess beforehand what will correlate. This represents a big step in EEG analysis – moving from the rudimentary alpha, beta, theta wave system to much more robust, generalizable algorithms. It’s the same change that allows Siri to recognize your words so accurately, compared to how terrible speech-to-text was 10 years ago.
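To make the shift concrete, here is a toy sketch of the idea: instead of hard-coding a rule like "high alpha means relaxed," a classifier learns the decision boundary from labeled examples. The features, labels, and class centers below are entirely synthetic, and the nearest-centroid rule is just the simplest possible "learned" model, not what any particular EEG product uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-epoch features: [alpha power, beta power].
relaxed = rng.normal([8.0, 2.0], 1.0, size=(50, 2))  # label 0
alert   = rng.normal([3.0, 7.0], 1.0, size=(50, 2))  # label 1
X = np.vstack([relaxed, alert])
y = np.array([0] * 50 + [1] * 50)

# "Training": compute one centroid per class from the labeled data.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign a new epoch to the class with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
```

Real systems replace the two hand-picked features with many learned ones, and the centroid rule with far more capable models, but the workflow – labeled examples in, learned decision rule out – is the same.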
Additional progress has been made in algorithms that isolate key features in the signal, from noise-reduction techniques such as ICA and CSP to feature-extraction measures such as Shannon entropy and spectral edge frequency. Together, these advancements have drastically moved the field forward, increasing the scope of what we can measure with EEG.
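Two of the feature-extraction measures just mentioned are simple enough to sketch directly. The snippet below computes the spectral edge frequency (the frequency below which a given fraction of the signal's power lies) and the Shannon entropy of the normalized power spectrum; the synthetic tone and noise signals are illustrative stand-ins, not real EEG data.

```python
import numpy as np

def spectral_features(signal, fs, edge=0.95):
    """Spectral edge frequency and Shannon spectral entropy.
    A sketch on a single raw segment, not a full analysis pipeline."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()                       # normalized spectrum
    sef = freqs[np.searchsorted(np.cumsum(p), edge)]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return sef, entropy

fs = 256
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 10 * t)                 # narrowband signal
noise = rng.standard_normal(t.size)               # broadband signal

sef_tone, ent_tone = spectral_features(tone, fs)
sef_noise, ent_noise = spectral_features(noise, fs)
# A pure tone concentrates its power, giving a low edge frequency and
# low entropy; white noise spreads power out, raising both.
```

Features like these compress a noisy waveform into a handful of numbers that downstream algorithms can actually work with.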
Where is it heading?
Roughly speaking, modern EEG has divided into two categories of products: medical, and consumer.
Medical EEG aims to investigate and quantify brain phenomena, pushing the limits of what the technology can measure.
These systems use 64-electrode caps, actively amplified electrodes, conductive paste to improve skin contact, and the most expensive chipsets that a given lab can afford.
Medical EEG devices are powerful brain-mapping tools, with incredible resolution. Whether it be diagnosis of mental disorders, detecting emotional states, or measuring the delay in neural firing in response to a stimulus, the technology has advanced drastically in the past 10 years. More recent projects involve using full-cap EEG to allow deeply paralyzed patients to communicate, and allowing paraplegic patients to control robotic arms with their minds. In medical EEG, measuring concentration is old news.
On the other side of the spectrum, the decrease in cost of EEG systems has opened the market to small-scale, subtle, accessibly-priced consumer EEG products. These devices use the improvements in the technology to lower the barriers to entry: reduced cost and increased comfort.
The movement began with what were essentially toys – cheap products with an open SDK, on which developers would create brain-training games. Very soon, devices came out that offered clinical-grade neurofeedback at an accessible cost. Now, many products are offering tracking, mental state detection, and actionable insights from deeper data analysis.
The miniaturization of EEG hasn’t been without problems. With fewer electrodes, the ability to localize areas of brain activation is reduced. Lower-cost dry electrodes lead to increased signal noise, which is difficult to filter out. Finally, taking technology out of a lab means dealing with real-world phenomena – different head shapes, movement artifacts, and poor electrode connections.
These issues are the same reasons Fitbit’s heart rate technology took so long to become accurate, and fingerprint scanning only became reliable a few years ago. As consumer EEG breaks into the mainstream, the techniques for dealing with these issues get better, and usability improves.
We’re on that threshold now.