Brain stimulated activity vs resting state
Event & stimulus, evoked response, event-related potential waveform and its components, evoked potentials
In neuroscience, and specifically in electrophysiology, an event refers to the presentation of a stimulus to the nervous system. The event can be of
sensory, cognitive or motor nature. A stimulus, such as an auditory stimulus, may elicit a stereotyped electrophysiological response characterized by a specific electric potential waveform,
referred to as an evoked response or event-related potential. It should be emphasized that the event-related potential is a waveform and that it consists of different components.
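The standard way to recover an event-related potential from ongoing EEG is to average many epochs time-locked to the stimulus onsets; activity that is not phase-locked to the event averages out, leaving the evoked waveform. A minimal sketch, assuming a single-channel recording and known stimulus-onset sample indices (the function and variable names are ours, not from any specific toolbox):

```python
import numpy as np

def erp_average(eeg, event_samples, fs, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked epochs to estimate the ERP waveform.

    eeg:           1-D array, one EEG channel (e.g. in microvolts)
    event_samples: sample indices of stimulus onsets
    fs:            sampling rate in Hz
    Returns (times_s, erp): epoch time axis (s, zero = stimulus onset)
    and the mean across epochs. Ongoing activity that is not
    phase-locked to the event cancels in the average, leaving the
    evoked response. Illustrative sketch only (no artifact rejection,
    no baseline correction).
    """
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = [eeg[e + i0:e + i1] for e in event_samples
              if e + i0 >= 0 and e + i1 <= len(eeg)]
    erp = np.mean(epochs, axis=0)
    times = np.arange(i0, i1) / fs
    return times, erp
```

In practice many trials (often dozens to hundreds) are averaged, and epochs contaminated by artifacts (eye blinks, muscle activity) are excluded first.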
Evoked potentials are divided into the categories below:
Auditory evoked response - Auditory event-related potential (waveform)
BAEPs (I-IV), mid-latency responses (P0, Pa) and late-latency responses (P50, N100, P160)
Following a sound onset, a potential is generated in the ear’s cochlea and is then transmitted sequentially to the cochlear nerve, the cochlear nucleus, the brainstem (nuclei and axonal tracts, represented by the superior olivary complex and the lateral lemniscus respectively), the inferior colliculus in the midbrain, the thalamus (medial geniculate body), and finally the cortex.
The responses associated with the brainstem, including input and output, are called brainstem auditory evoked potentials (BAEPs). These consist of (Fig. 1): wave I
for the acoustic nerve, wave II for the cochlear nucleus, wave III for the superior olivary complex, wave IV for the lateral lemniscus and wave V for the inferior colliculi. Wave VI may be derived
from the medial geniculate body but is not clinically used.
As BAEPs are highly automatic and relatively unaffected by sleep, anesthesia and similar states, they are used for auditory assessment.
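As a toy illustration of how the waves in the first ~10 ms of an averaged trace might be labeled I-V and mapped to the generators listed above (a naive sketch; real BAEP scoring uses normative latency windows and expert review, and the helper name is hypothetical):

```python
import numpy as np
from scipy.signal import find_peaks

# Generators of BAEP waves I-V, as listed in the text above.
GENERATORS = {
    "I": "acoustic (cochlear) nerve",
    "II": "cochlear nucleus",
    "III": "superior olivary complex",
    "IV": "lateral lemniscus",
    "V": "inferior colliculus",
}

def label_baep_peaks(waveform, fs, window_ms=10.0):
    """Label the first five positive peaks within `window_ms` as waves I-V.

    waveform: averaged BAEP trace (microvolts), time zero at sound onset
    fs:       sampling rate in Hz
    Returns a list of (wave_label, latency_ms, presumed_generator).
    Naive sketch: assumes a clean average with exactly the expected
    peaks; clinical scoring is far more careful.
    """
    n = int(window_ms * 1e-3 * fs)
    peaks, _ = find_peaks(waveform[:n])
    labels = ["I", "II", "III", "IV", "V"]
    return [(lab, p / fs * 1e3, GENERATORS[lab])
            for lab, p in zip(labels, peaks)]
```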
BAEPs are followed by mid-latency responses and late-latency responses, described by the letter P or N to indicate polarity of the peak (positive or negative) and a
number or letter to indicate latency in milliseconds or ordinal position in the waveform.
Mid-latency responses (10 to 50 ms) are represented by P0 and Pa which are probably derived from the medial geniculate nucleus and the primary auditory cortex.
Long-latency responses (50 - 200 ms) start with the P50 (or P1), N100 (or N1) and P160 (or P2). Some scientists include these in the mid-latency responses to distinguish them from high-level
cognitive components such as P300 and N400.
The N1 wave is one of the most prominent components elicited by an auditory stimulus. It represents the sum of different subcomponents that depend on the nature of the
stimulus, the state of the subject, and other factors. It is sensitive to attentional status; therefore, different manipulations can be used in the experimental setting, e.g. inducing the expectation that
the event will occur at a specific time point.
Brain synchronization via auditory stimulus presentation: the case of binaural beats
The beat or product of interference of 240 Hz and 256 Hz is 16 Hz.
The highest amount of synchronization in the auditory cortex due to binaural beats occurs within the beta band at 16 Hz.
In order to perform complex cognitive tasks, such as processing and storing information, it is necessary to integrate regional neuronal activity into an extensive
coordinated network. For instance, certain tasks require coupling different EEG bands, such as theta (4 Hz - 8 Hz) and gamma (25 Hz - 40 Hz). Neuronal synchronization is fundamental to this
effect; that is, neurons have to demonstrate phase synchronization. Synchronized firing of a significant number of presynaptic neurons is expected to increase the probability of firing of a
postsynaptic neuron, given that increased input is likely to reach the threshold for firing of the latter. It must be noted that synchronization in oscillatory bands such as gamma waves is confined
to certain brain regions, while theta wave synchronization is spread across long distances in the brain [refs 27–29].
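Phase synchronization between two signals is commonly quantified with the phase-locking value (PLV); a minimal sketch, assuming narrow-band input signals (the function name is ours):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two narrow-band signals, in [0, 1].

    The instantaneous phase of each signal is extracted via the
    analytic signal (Hilbert transform), and the PLV measures how
    consistent the phase difference is over time: PLV near 1 means a
    nearly constant phase lag (synchronization); PLV near 0 means no
    consistent phase relation. Sketch only: real analyses band-pass
    filter the signals first (e.g. into theta or gamma).
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```

Two sinusoids at the same frequency but different phase give a PLV near 1; sinusoids at different frequencies give a PLV near 0, since their phase difference drifts continuously.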
Synchronization can be achieved via auditory stimulus presentation using binaural beats. A beat is the product of interference between two tones of slightly different frequencies, perceived at a rate equal to their difference,
while the term "binaural" indicates that the two frequencies are presented to the two ears separately (aural = of the ear). When a different sound frequency is presented to each ear, an
electric signal is generated in the cochlea, which travels in the auditory pathway. At a certain structure of the auditory pathway, reported by this reference to be the inferior colliculus in the
midbrain, the signals will be combined to generate a firing frequency equal to the beat frequency or, in other words, the product of interference of the two frequencies. This means that if one sound
frequency is 240 Hz and the other is 256 Hz, the electric frequency that will be generated will be 16 Hz and the neurons will start firing at 16 Hz. It is noted that the highest amount of
synchronization in the auditory cortex due to binaural beats occurs within the beta band at 16 Hz [ref. 42].
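The stimulus itself is straightforward to synthesize: one pure tone per channel of a stereo signal. A minimal sketch using the frequencies from the example above (the function name and defaults are illustrative):

```python
import numpy as np

def binaural_beat(f_left=256.0, f_right=240.0, fs=44100, dur=1.0):
    """Generate a stereo binaural-beat stimulus.

    A pure tone of f_left Hz goes to the left ear and f_right Hz to
    the right ear. Note that the audio itself contains no 16 Hz
    component: the |f_left - f_right| = 16 Hz beat (beta band) arises
    in the auditory pathway, where the two signals are combined.
    Returns an (n_samples, 2) float array in [-1, 1].
    """
    t = np.arange(int(fs * dur)) / fs
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.column_stack([left, right])
```

For actual presentation the array would be scaled and written out through a sound device or to a WAV file, with each column routed to its own earphone.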
Figure 2: Presentation of a 256 Hz tone to the left ear and a 240 Hz tone to the right ear results in the generation of a binaural beat electric frequency of 16 Hz.
Binaural beats can influence cortical responses at different frequency bands. Concerning the gamma band, the largest EEG steady state
responses were accomplished with a binaural beat of 40 Hz [refs 46–49]. Concerning the beta band, use of 18.5 Hz increased EEG magnitude by 21% [ref. 50]. It has also been reported that during delta
or alpha binaural beat stimulation, there was an increase of the respective bands but also an increase of theta [ref. 55]. Additionally, delta binaural stimulation increased alpha waves as well [ref.
Visual evoked response - Visual event-related potential (waveform)
Following a visual event i.e. the presentation of a light stimulus or a pattern stimulus, a visual event-related potential (ERP)
waveform is elicited.
The major visual ERP waveform component is the P1, peaking at approximately 100 ms (latency of 40-70 ms). It is maximal at lateral
occipital electrode sites. It is sensitive to variations of stimulus parameters and to attention modulation and arousal (alertness).
The P1 component is followed by the N1 component, which is refractory, i.e. the component is significantly reduced in the response to a subsequent
similar visual stimulus. The N1 component has three subcomponents which, like the auditory N1, are influenced by attentional status (spatial attention). The lateral
occipital N1 subcomponent has been linked to the performance of discrimination tasks.
The average amplitude of VEP waves is usually between 5 and 20 microvolts. These waves have clinical importance, especially the P100, as
they can be used to evaluate the presence of damage in the visual pathway and visual acuity in general.
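A crude way to read off the P100 peak latency from an averaged trace is sketched below; the 80-140 ms search window and the function name are illustrative assumptions, as clinical labs compare latencies against their own norms:

```python
import numpy as np

def p100_latency_ms(vep, fs, search=(80.0, 140.0)):
    """Peak latency of the P100 within a search window (sketch).

    vep: averaged VEP trace in microvolts, time zero at stimulus onset
    fs:  sampling rate in Hz
    The measured latency is compared against lab-specific norms; a
    markedly delayed P100 can indicate conduction slowing in the
    visual pathway. The default window is an illustrative assumption.
    """
    i0 = int(search[0] * 1e-3 * fs)
    i1 = int(search[1] * 1e-3 * fs)
    peak = i0 + int(np.argmax(vep[i0:i1]))
    return peak / fs * 1e3
```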
Figure 3: Typical visual event-related potential (ERP) waveform components including the N100 (labeled N1) and P300 (labeled P3) (from Wikipedia.)
The event-related potential (waveform) component P300, a high-level cognitive component: triggered by an “it rings a bell”, "it appears to be familiar" reaction
The P300 event-related potential component is one of the most notable late ERP components. It is a positive deflection
occurring at approximately 300 ms following stimulus presentation, in the auditory, visual and audio-visual modality among others. It is considered a high-level cognitive component. It is induced
by an expected but unpredictable element in a stream of stimulus elements. It is interpreted as a sign that a given element has captured our attention. If attentional status is decreased or shifted,
the amplitude of the component decreases.
It can be elicited by the oddball paradigm, which consists of the presentation of two kinds of stimuli in random order: one that is presented
frequently and is thus called the “standard stimulus”, and one that is presented infrequently, constituting the "oddball" or “target stimulus”, as it is the target the subject must detect.
Compared to standard stimuli, target stimuli elicit a stronger N2 followed by a stronger P3.
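A stimulus schedule for such an experiment can be sketched in a few lines; the trial count and 20% target probability below are typical but arbitrary choices, and the function name is ours:

```python
import random

def oddball_sequence(n_trials=200, p_target=0.2, seed=0):
    """Generate a random oddball stimulus sequence (sketch).

    'standard' is the frequent stimulus, 'target' the infrequent
    oddball the subject must detect; the target trials are where a
    P300 is expected. A fixed seed makes the sequence reproducible.
    """
    rng = random.Random(seed)
    return ["target" if rng.random() < p_target else "standard"
            for _ in range(n_trials)]
```

Real experiment scripts often add constraints, e.g. forbidding two targets in a row, which this sketch omits.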
Since the mid-1980s, the P300 has been used in “lie detection” by conducting interrogation based on the oddball paradigm.
Capturing the P300: DARPA's Cognitive Technology Threat Warning System (CT2WS) (2008)
A guard or a warfighter might not notice "a branch swaying" and might therefore not issue a warning for what could be an important threat. However, most
probably their brain has reacted by generating a P300.
In 2008, DARPA announced technology that captures the P300 and brings it to the attention of users so that they can evaluate their surroundings optimally.
"By improving the sensors that capture imagery and filtering results, a human user who is wearing an EEG cap can then rapidly view the filtered image set and let the brain’s natural threat-detection
ability work. Users are shown approximately ten images per second, on average. Despite that quick sequence, brain signals indicate to the computer which images were significant."
The excerpt above is drawn from coverage of DARPA's press release:
https://scitechdaily.com/darpas-ct2ws-program-improves-target-detection/
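The filtering idea in the excerpt can be caricatured as ranking image epochs by a P300-like score. The sketch below uses a single crude feature (mean amplitude in a 250-500 ms window); the real CT2WS system uses trained classifiers, and every name and window here is an illustrative assumption:

```python
import numpy as np

def flag_significant_images(epochs, fs, window=(0.25, 0.5), top_k=5):
    """Rank RSVP image epochs by a crude P300 score (sketch).

    epochs: array (n_images, n_samples), one EEG epoch per image,
            time zero at image onset (e.g. a midline parietal channel)
    fs:     sampling rate in Hz
    Score = mean amplitude in the 250-500 ms window, where a P300
    would appear; indices of the top_k highest-scoring images are
    returned for human review. A single-feature heuristic, not how
    an operational system scores trials.
    """
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    scores = epochs[:, i0:i1].mean(axis=1)
    return np.argsort(scores)[::-1][:top_k]
```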
Figure 4: DARPA's Cognitive Technology Threat Warning System (CT2WS) (2008) (Facebook post)
Motor-evoked potentials: Vestibular Evoked Myogenic Potentials (VEMPs) as tests of the vestibular system/otolith organs (balance - orientation)
The vestibulo-ocular reflex is linked to the oVEMP, a test of the utricle function. The vestibulo-collic reflex is linked to the
cVEMP, a test of the saccule function.
When a bumpy road suddenly causes our head to tilt to the right thereby destabilizing our gaze, two reflexes will be activated by the
vestibular system (responsible for balance - inner ear):
(1) The vestibulo-ocular reflex (VOR), one of the fastest reflexes in the human body, will send commands to the eye muscles to move our eyes
to the left.
(2) The vestibulo-collic reflex (VCR) ("collic" from Latin collum, neck; cf. "collar") will send commands to activate or deactivate neck muscles
to correct the position of the head.
As a result, motor-evoked potentials termed vestibular-evoked myogenic potentials (VEMPs) are produced and can be recorded using
surface electrodes from the eye muscles, termed ocular VEMP or oVEMP, and from the cervical (neck) muscles, termed cervical VEMP or cVEMP.
These reflexes allow for gaze stabilization and head stabilization.
In the above case, the VEMPs were triggered by head movement. They can also be triggered by sound or vibration. This forms the basis of
their use in the clinical evaluation of the vestibular system and specifically the clinical evaluation of the otolith organs, the saccule and the utricle.
Sound is the most common VEMP stimulus modality. Sound triggers muscle excitation, which is recorded with surface electrodes in an
electromyography (EMG) setting.
The cVEMP is known as P13-N23
The cVEMP, representing the activation of the vestibulo-collic reflex, is a biphasic potential with a positive peak at approx. 13 ms
(P13) and a negative peak at approx. 23 ms (N23) (Fig. 5). It is produced by and recorded from the sternocleidomastoid (SCM) muscle. “As air-conducted (AC) sound preferentially activates the saccule, cVEMPs evoked by this
stimulus can be used as a test of saccular function (reference)”
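Measuring the P13 and N23 peaks from an averaged SCM trace can be sketched as follows; the search windows around the nominal 13 ms and 23 ms latencies, and the function name, are illustrative assumptions (labs use their own norms):

```python
import numpy as np

def cvemp_p13_n23(emg, fs, p_win=(10.0, 16.0), n_win=(20.0, 26.0)):
    """Measure cVEMP P13 and N23 from an averaged SCM EMG trace.

    emg: averaged trace in microvolts, time zero at stimulus onset
    fs:  sampling rate in Hz
    Returns (p13_latency_ms, n23_latency_ms, peak_to_peak_uv):
    the positive peak in the P13 window, the negative peak in the
    N23 window, and their peak-to-peak amplitude.
    """
    def idx(ms):
        return int(ms * 1e-3 * fs)
    p_seg = emg[idx(p_win[0]):idx(p_win[1])]
    n_seg = emg[idx(n_win[0]):idx(n_win[1])]
    p_i = idx(p_win[0]) + int(np.argmax(p_seg))
    n_i = idx(n_win[0]) + int(np.argmin(n_seg))
    return p_i / fs * 1e3, n_i / fs * 1e3, emg[p_i] - emg[n_i]
```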
The oVEMP is known as N10-P15
The oVEMP, representing the activation of the vestibulo-ocular reflex, is a biphasic potential with a negative peak at approx. 10 ms
(N10) and a positive peak at approx. 15 ms (P15) (Fig. 5). It is produced by and recorded from the inferior oblique muscle, with electrodes placed on the skin
just below the eyes. The oVEMP is considered a test of utricular function.
The saccule and the utricle tune, or phase-lock, to the sound stimulus
The cVEMP and oVEMP, besides their use in otolith function evaluation, can also provide information about the activation of the
vestibular system by sound frequencies/vibrational frequencies. The saccule and the utricle become phase-locked to the sound stimulus, or tune to it, with different response parameters
(cf. different resonance).
Young et al. (3) "stimulated squirrel monkeys with AC sound and found that the resting discharge of vestibular afferents became
phase-locked to the stimulus. Saccular afferents had the lowest phase-locking threshold to sound, around 106–119 dB sound pressure level (SPL), while units in the other vestibular organs responded to
higher intensity sounds."
A relevant reference:
“A utricular origin of frequency tuning to low-frequency vibration in the human vestibular system?”