Music Neurotechnology Research at the Crossroads of Music and Neural Engineering

NETT – Summer School, University of Nottingham

Prof. Eduardo R. Miranda
Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University
http://cmr.soc.plymouth.ac.uk/

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music that represents brain activity?

What would the music of our brains sound like? What happens in your brain when you listen to music? Have you ever wondered what goes through different minds listening to a symphony? How might new technologies informed by brain sciences shape the future of music? Can you imagine playing a musical instrument endowed with a living brain? Living musical instruments? What would they sound like? How would one play them?

Music Neurotechnology: an innovative field of research at the crossroads of music, neuroscience and engineering.

New technologies that are emerging from this research include systems that compose bespoke music based on biological readings from our body, brain-computer music interfaces, and musical instruments built with microchips made with real neurones cultured in vitro.

In addition to developing such new technologies, I am interested in gaining a better understanding of how music affects our brain.

It is absolutely awesome to work with music at the cutting edge of scientific research because these new technologies and understandings are paving the way to new approaches to creating music.

Examples of projects currently being developed at ICCMR:

• In vitro Music Instrument

• Brain-Computer Music Interfacing
• Composition: Raster Plot
• Composition: Symphony of Minds Listening

In vitro Music Instrument

Reference: Miranda, E. R., Bull, L., Gueguen, F. and Uroukov, I. S. (2009). “Computer Music Meets Unconventional Computing: Towards Sound Synthesis with In Vitro Neuronal Networks”, Computer Music Journal, 33(1):9-18.

• An investigation into the feasibility of synthesizing sounds with hybrid wetware-silicon devices using in vitro neuronal networks.

• The dynamics of in vitro neuronal networks are a source of very rich temporal behaviour, which we are interested in exploiting to make music.

• We report the initial results of our research into techniques to steer the behaviour of the networks.

• The aim is to achieve some degree of controllability and repeatability in the system, the next step towards rendering in vitro neuronal network technology into controllable sound synthesizers.


A typical hen embryo aggregate neuronal culture, also referred to as a spheroid. In our experiments, spheroids are grown in culture in an incubator for 21 days.

Hen embryo aggregate neuronal culture, magnified 350×.

Then, they are placed into a multi-electrode array (MEA) device in such a way that at least two electrodes make connections with the neuronal network inside the spheroid. One electrode is arbitrarily designated as the input, by which to apply electrical stimulation, and the other as the output, from which to record the effects of the stimulation on the spheroid's spiking behaviour.

A typical MEA used to stimulate and record electrical activity of cultured brain cells on the surface of an array of electrodes.

Stimulation at the input electrode consisted of a train of biphasic pulses of 300 mV each, delivered once every 300 ms.
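As a rough illustration, the sketch below generates such a stimulation signal in Python; the pulse width and sample rate are assumed for the example, not taken from the published experiments.

```python
import numpy as np

FS = 40000  # stimulator sample rate in Hz (assumed for illustration)

def biphasic_train(duration_s, period_s=0.3, amp_v=0.3, phase_s=0.0005):
    """Generate a train of biphasic pulses: +amp then -amp, once per period."""
    sig = np.zeros(int(duration_s * FS))
    phase = int(phase_s * FS)                          # samples per phase (assumed width)
    for start in range(0, len(sig), int(period_s * FS)):
        sig[start:start + phase] = amp_v               # positive phase
        sig[start + phase:start + 2 * phase] = -amp_v  # negative phase
    return sig

train = biphasic_train(3.0)  # 3 s of stimulation -> 10 biphasic pulses
```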

A 1 s excerpt of typical neuronal activity from one of the sessions. The noticeable spikes of higher amplitude indicate concerted increases in firing activity by groups of neurones, most probably in response to the input stimuli.

The synthesizer is an additive granular synthesizer with a number of sinusoidal oscillators.

We developed a sonification method to map spiking data onto the synthesizer. Sound morphology preserves spiking dynamics.
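The sketch below illustrates one plausible form such a mapping could take (it is not the actual ICCMR synthesizer): each spike triggers a short windowed sinusoidal grain, so louder bursts of firing yield louder, brighter grains and the sound morphology follows the spiking dynamics. The spike data here are synthetic.

```python
import numpy as np

SR = 44100  # audio sample rate (Hz)

def sonify_spikes(spike_times, spike_amps, duration, base_freq=220.0):
    """Render spike events as grains: louder spikes -> higher, louder grains."""
    out = np.zeros(int(duration * SR))
    grain_len = int(0.05 * SR)                  # 50 ms grains (assumed)
    window = np.hanning(grain_len)
    t = np.arange(grain_len) / SR
    for time, amp in zip(spike_times, spike_amps):
        freq = base_freq * (1.0 + 3.0 * amp)    # spike amplitude -> grain pitch
        grain = amp * window * np.sin(2 * np.pi * freq * t)
        start = int(time * SR)
        end = min(start + grain_len, len(out))
        out[start:end] += grain[:end - start]   # overlap-add the grain
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Synthetic spike train: 1 s of background firing with a burst at 0.4-0.5 s
times = np.concatenate([np.random.uniform(0, 1, 40), np.random.uniform(0.4, 0.5, 20)])
amps = np.random.uniform(0.1, 1.0, len(times))
audio = sonify_spikes(times, amps, duration=1.2)
```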

Cochleogram of an excerpt of a sonification, in which the spikes of higher amplitude can be heard.

Brain-Computer Music Interfacing (BCMI)

Pioneering references:
Miranda, E. R., Sharman, K., Kilborn, K. and Duncan, A. (2003). “On Harnessing the Electroencephalogram for the Musical Braincap”, Computer Music Journal, 27(2):80-102.
Miranda, E. R., Magee, W., Wilson, J. J., Eaton, J. and Palaniappan, R. (2011). “Brain-Computer Music Interfacing (BCMI): From Basic Research to the Real World of Special Needs”, Music and Medicine, DOI: 10.1177/1943862111399290.

• Brain-Computer Interfacing (BCI) technology has the potential to enable active participation in music-making activities for recreational and therapeutic purposes.

• Despite recent advances in BCI technology for music, this technology has seldom been trialled with the sector of the population that really needs it.

• The time is ripe to trial such technology in the real world of special needs.

What is BCI? A brain-computer interface (BCI) allows a person to control electronic devices by means of commands expressed by signals read directly from his/her brain using appropriate brain-scanning technology. Currently, the most viable and practical method of scanning brain signals for BCI purposes is to read the electroencephalogram (EEG) with electrodes placed on the scalp. Other methods include MEG (magnetoencephalography), PET (positron emission tomography), fMRI (functional magnetic resonance imaging) and fNIRS (functional near-infrared spectroscopy). These are good methods for basic research, but currently they are neither practical nor portable, and they are very expensive.

Figures source: (left side) Wired, 04.27.06 (http://www.wired.com/science/discoveries/news/2006/04/70726), (right) The DANA Foundation website (http://www.dana.org/uploadedImages/Images/Content_Images/page27_cont.jpg)

The electroencephalogram (EEG)

• The electroencephalogram, abbreviated EEG, is a recording of electrical activity in the brain.
• Neural activity generates electric fields that can be detected with electrodes.

• “Electrodes are attached to the scalp. Wires attach these electrodes to a machine which records the electrical impulses. The results are either printed out or displayed on a computer screen.” [1] (Or they can be relayed to a computer for processing, handling, storage, etc.)


[1] http://www.emedicinehealth.com/script/main/art.asp?articlekey=11199
[2] Universe Review: http://universe-review.ca/
[3] Dr Richard’s Chiropractic: http://www.richardschiropractic.com/
[4] Wikipedia Commons

• The EEG expresses the overall activity of millions of neurons in the brain in terms of charge movement. Electrodes on the scalp detect this only very superficially: the signal is filtered by the meninges, the skull and the scalp.

• The EEG is extremely faint, with amplitudes on the order of only a few microvolts.

• The signal has to be amplified significantly and scrutinized by means of signal-processing techniques before it can be handled by a BCI system.

Source: Stanford Medicine - http://www.flickr.com/photos/stanfordmedicine/3194916280/

In BCI research, it is assumed that:

(a) there is information in the EEG that corresponds to different cognitive tasks (or at least a task of some sort);
(b) this information can be detected; and
(c) people can be trained to produce EEG with such information voluntarily.

Power spectrum analysis is commonly used to extract information from the EEG: it breaks the EEG signal into different frequency bands and reveals the distribution of power between them. This is useful because it is believed that specific distributions of power in the spectrum of the EEG can encode particular states of mind (a code sketch of this analysis follows below).

Source: http://www.sciencegl.com/EEG_3D_spectrum/

For a review of various EEG analysis methods see: Sanei, S. and Chambers, J.A. (2007). EEG Signal Processing. Hoboken, NJ: Wiley & Sons.
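To make the idea concrete, here is a minimal sketch of band-power extraction with Welch's method; the band edges follow common EEG conventions, and the input is a synthetic signal, not real EEG.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (a typical EEG rate; assumed here)

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=FS):
    """Relative power per EEG band, via Welch's power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2 s analysis windows
    total = np.trapz(psd, freqs)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask]) / total
    return powers

# Synthetic "EEG": a 10 Hz alpha rhythm buried in noise
t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(band_powers(eeg))  # the alpha band should dominate
```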

SSVEP-based approach

• Repetitive visual stimuli (RVS) elicit steady-state visual evoked potentials (SSVEPs) in the EEG.
• An SSVEP-based BCI enables a subject to select commands, each of which is associated with one of a number of repetitive visual stimuli with distinctive properties; e.g., 4 icons flashing at 4 different speeds (or frequencies).

• The user selects a command by focusing his/her attention on one of the RVS.
• When the subject focuses his/her attention on an RVS, an SSVEP is elicited in his/her EEG (especially from the visual cortex) matching the frequency (or harmonics) of that RVS.
• Zhu, D. et al. suggested that SSVEPs can be elicited by RVS at frequencies ranging from 1 to 100 Hz [1], but in practice this is over-optimistic; the usable range seems to vary according to different techniques and approaches.


[1] Zhu, D. et al. (2010), “A Survey of Stimulation Methods Used in SSVEP-Based BCIs”, Computational Intelligence and Neuroscience Vol. 2010, Article ID 702357. [2] Bin, G. et al. (2009). “An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method”, J. Neural Eng. 6(2009) doi:10.1088/1741-2560/6/4/046002.


• Left: typical waveform of an EEG signal (Oz-Cz) acquired during visual stimulation with a light flashing at 15 Hz.
• Right: the spectrum of that EEG showing the SSVEP, which appears as peaks at 15 Hz and its higher harmonics. (Figure from Zhu et al. [1].)

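A minimal sketch of SSVEP detection by spectral peak picking is shown below: it compares EEG power at each candidate stimulus frequency (and its second harmonic) and picks the strongest. Practical BCMIs use more robust statistics, such as the canonical correlation analysis of Bin et al. [2]; the candidate frequencies and test signal here are illustrative.

```python
import numpy as np

def detect_ssvep(eeg, fs, candidates=(12.0, 15.0, 20.0, 30.0)):
    """Return the candidate stimulus frequency with the strongest SSVEP."""
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2

    def band_power(f):  # power in a narrow band around frequency f
        return power[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()

    scores = {f: band_power(f) + band_power(2 * f) for f in candidates}
    return max(scores, key=scores.get)

FS = 256
t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 15 * t) + np.random.randn(len(t))  # synthetic 15 Hz SSVEP
print(detect_ssvep(eeg, FS))  # -> 15.0
```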

Experimental setting: EEG amplifier, stimuli engine, and EEG processing & music engine.

The system: the EEG signal is detected and analysed; the analysis yields a control command for the music engine, whose audio output goes to a PA/Hi-Fi system; meanwhile, the stimuli engine presents the flashing stimuli.

Hypothetical example: the subject gazes at a “left green arrow” flashing at a rate of 15 Hz. Among the candidate frequencies in the EEG signal (X Hz, Y Hz, Z Hz, 15 Hz), the system detects a 15 Hz SSVEP component, which yields the control command “left green arrow”. The system then performs the musical task associated with that command, “play a melody on a flute”, and the audio is output through the PA/Hi-Fi.

Example of how the “left green arrow” can generate a melody on a flute: a sequence of notes is stored in memory for the flute.

The amplitude (power) of the SSVEP component is used to select notes from the stored sequence. In the example, the amplitude reached its maximum threshold, which selected the last note of the sequence. The system plays this note and then checks the SSVEP power again, and so on.

The more the subject gazes at a flashing icon, the higher the amplitude of the respective SSVEP component. The amplitude of the SSVEP component can be controlled by looking away from the flashing icon and gazing again, and so on.
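The following sketch captures this selection logic (the note names and power threshold are hypothetical, for illustration only): the current SSVEP power indexes into the stored note sequence, so sustained gazing walks towards the end of the sequence and looking away lets the index fall back.

```python
FLUTE_NOTES = ["C4", "D4", "E4", "G4", "A4", "C5", "D5", "E5"]  # stored sequence (hypothetical)
MAX_POWER = 1.0  # SSVEP power at which the last note is selected (assumed)

def select_note(ssvep_power):
    """Map the current SSVEP power (0..MAX_POWER) onto the note sequence."""
    level = min(max(ssvep_power / MAX_POWER, 0.0), 1.0)
    index = round(level * (len(FLUTE_NOTES) - 1))
    return FLUTE_NOTES[index]

# Sustained gazing raises the measured power, stepping through the notes:
for power in [0.1, 0.35, 0.6, 0.85, 1.0]:
    print(select_note(power))  # C4 ... E5 at the maximum threshold
```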

Raster Plot

Raster Plot is the 2nd movement of a larger choral symphony entitled “Sound to Sea”.



It alludes to the last moments of Plymouth-born explorer Robert Falcon Scott’s life.



It includes extracts from Scott’s diary on the final moments of his expedition to the South Pole, before he died in March of 1912.



The mezzo-soprano sings the extracts using sprechgesang, a type of vocalization between singing and recitation: the voice sings the beginning of each note and then falls rapidly from the notated pitch, alluding to the endurance of Scott and his companions facing the imminent fatal ending of the expedition.



A whispering choir echoes distressed thoughts amidst a plethora of jumbled mental activity represented by the sounds of the orchestra.


In order to represent the notion of mental activity musically, I devised a method inspired by the physiology of the human brain. I used a computer simulation of a network of interconnected neurones, which models the way in which information travels within the brain, to generate patterns that I subsequently turned into music.

When the network is stimulated, each neurone of the network produces sequences of bursts of activity, referred to as spikes, forming streams of rhythmic patterns.

A raster plot is a graph plotting the spikes; hence the title of the movement. The example shown was generated by a network of 50 spiking neurones stimulated by the sinusoid shown at the top of the figure. As the undulating line rises, the spiking activity intensifies; conversely, as it falls, the spiking activity becomes quieter.
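For illustration, the sketch below simulates a comparable set-up using Izhikevich's well-known spiking-neurone model, one plausible choice for such a network (the piece's actual model and parameters are not specified here): 50 neurones with random excitatory coupling, driven by a slow sinusoid plus noise.

```python
import numpy as np

N, T, dt = 50, 3000, 1.0              # neurones, duration (ms), time step (ms)
a, b, c, d = 0.02, 0.2, -65.0, 8.0    # standard regular-spiking parameters
W = 0.5 * np.random.rand(N, N)        # random excitatory coupling (illustrative)
v = np.full(N, -65.0)                 # membrane potentials (mV)
u = b * v                             # recovery variables
spikes = []                           # (time, neurone) pairs for the raster plot

for step in range(T):
    fired = v >= 30.0
    spikes.extend((step, i) for i in np.flatnonzero(fired))
    v[fired], u[fired] = c, u[fired] + d                  # post-spike reset
    drive = 10.0 * (1 + np.sin(2 * np.pi * step / 600))   # slow sinusoidal stimulus
    I = drive + 5.0 * np.random.randn(N) + W @ fired.astype(float)
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)         # Izhikevich dynamics
    u += dt * a * (b * v - u)

# Plotting `spikes` as dots (time on x, neurone index on y) gives the raster:
# the raster densifies as the sinusoid rises and thins as it falls.
```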

I associated each instrument of the orchestra, excepting the voices, with a neurone or group of neurones. Of the 50 neurones in the network, I ended up using only the first 40, counting from the bottom of the raster plots upwards. Instruments associated with a group of neurones (e.g., the organ) can play more than one note simultaneously.

I established that each cycle of the stimulating sinusoid would produce spiking data for three measures of music, with the time signatures 4/4, 3/4 and 4/4. Each run of the simulation comprised five such cycles and therefore produced spiking data for fifteen measures of music.

One cycle of stimulation = three measures (4/4, 3/4 and 4/4) = up to 44 semiquaver notes.

GRID = three measures (4/4, 3/4 and 4/4); each square = one semiquaver.


Turning the spiking transcription into musical forms: rhythmic template
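A minimal sketch of this transcription step, under the assumption that each spike is quantized to the nearest semiquaver slot of the 44-slot cycle grid:

```python
import numpy as np

SLOTS = 44  # semiquavers in one cycle: 16 (4/4) + 12 (3/4) + 16 (4/4)

def rhythmic_template(spike_times, cycle_len):
    """Quantize one neurone's spike times (seconds) onto the semiquaver grid."""
    grid = np.zeros(SLOTS, dtype=bool)
    for t in spike_times:
        slot = int(t / cycle_len * SLOTS)
        if 0 <= slot < SLOTS:
            grid[slot] = True  # a spike in this slot becomes a note onset
    return "".join("x" if hit else "." for hit in grid)  # "." = rest

print(rhythmic_template([0.02, 0.11, 0.12, 0.45], cycle_len=0.6))
```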

Assigning pitches to the rhythmic template: I defined a series of 36 chords of 12 notes each, using the harmonic series.
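One plausible way to derive such chords, sketched below under the assumption that each chord stacks the first twelve partials of a fundamental, quantized to the nearest equal-tempered note (the composer's actual chord-building procedure may differ):

```python
import math

def harmonic_chord(fundamental_hz, n_notes=12):
    """Build one 12-note chord from successive partials of the harmonic series,
    quantized to the nearest equal-tempered MIDI note."""
    chord = []
    for n in range(1, n_notes + 1):
        freq = fundamental_hz * n                        # n-th partial
        midi = round(69 + 12 * math.log2(freq / 440.0))  # nearest MIDI note
        chord.append(midi)
    return chord

# A series of 36 chords from 36 chromatic fundamentals starting at C1 (~32.7 Hz):
chords = [harmonic_chord(32.703 * 2 ** (i / 12)) for i in range(36)]
print(chords[21])  # "chord number 22" in 1-based numbering
```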

Example: (a) the assignment of pitches from the G clef portion of chord number 22 to the rhythmic figures for the violins in measures 82-84; (b) articulation of the musical material.

Symphony of Minds Listening

Symphony of Minds Listening is a symphonic piece in three movements, each lasting approximately 9 minutes:
I - Ballerina
II - Philosopher
III - Composer

It is a remix of the 2nd movement of Beethoven's 7th symphony through my own mind and those of a classical ballerina and a philosopher. Each person listened to Beethoven's music while undergoing fMRI brain scanning. I created a method to remix Beethoven's original music using the brain scans.

• Deconstructed Beethoven’s movement into its essential elements with the aid of bespoke Artificial Intelligence software.
• Stored them together with statistical information about Beethoven’s compositional decisions.
• Re-assembled these elements with a twist: the scanned fMRI information influenced the re-assembling process.
• The original Beethoven elements were modified by various musical operations affecting rhythm, harmony, and so on.

METHOD

Functional magnetic resonance imaging (fMRI) is a procedure that measures brain activity by detecting associated changes in blood flow. The measurements can be presented graphically by colour-coding the strength of activation across the brain.

This is a typical representation of an fMRI scan of a person listening to music, displaying her brain activity within a specific window of time. It shows various slices, from the top to the bottom of her brain.

• An example of an artistic 3D rendition of an fMRI scan.
• It shows different areas of the brain responding in a coordinated manner to the music.

Each scanning session generated sets of fMRI data. They were tagged to each measure of the second movement of Beethoven’s 7th symphony.

• I deconstructed the movement into its essential elements and stored them with statistical information about Beethoven’s compositional decisions.
• I re-assembled these elements with a method that uses fMRI information to influence the process of re-assembling the music.
• The information representing the deconstructed piece was stored on a measure-by-measure basis.
• During the compositional stage, I retrieved this information measure by measure and used the respective fMRI data to guide the process of re-assembling the music.

The re-assembling involved transformations of Beethoven’s elements informed by the fMRI scans.

For instance, blue activity might determine transformation of the melody for a specific number of measures, whereas yellow might determine change of rhythm, and so on.
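A hypothetical sketch of this measure-by-measure logic is given below; the transformation names, the colour-coded tags and the data layout are all illustrative, not the piece's actual software.

```python
import random

# Illustrative transformations, keyed by colour-coded fMRI activation:
def transform_melody(measure):   # e.g. transpose the melody up a tone
    return [(pitch + 2, dur) for pitch, dur in measure]

def change_rhythm(measure):      # e.g. halve the note durations
    return [(pitch, dur * 0.5) for pitch, dur in measure]

def keep(measure):               # low activation: leave the measure as is
    return measure

TRANSFORMS = {"blue": transform_melody, "yellow": change_rhythm, "low": keep}

def remix(measures, fmri_tags):
    """Apply the transformation selected by each measure's fMRI tag."""
    return [TRANSFORMS[tag](m) for m, tag in zip(measures, fmri_tags)]

# Toy score: (MIDI pitch, duration) pairs per measure, one fMRI tag each
score = [[(60, 1.0), (62, 1.0)], [(64, 0.5), (65, 0.5), (67, 1.0)]]
tags = [random.choice(list(TRANSFORMS)) for _ in score]
print(tags, remix(score, tags))
```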

The result is musical passages bearing varying degrees of resemblance to the original. Technically, the fMRI information muddles up the elements and statistics of Beethoven’s score. This procedure involved both computerized and manual processes.

• I did to the Beethoven score what our hearing system does when we listen to music: sounds are deconstructed as soon as they enter the ear and are relayed through various pathways towards cortical structures, where the data are reconstructed into what is perceived as music.
• The fMRI scans differed amongst the three of us.
• Three different minds yielded three different movements of the composition, which resemble the original in varied ways.

• The ballerina’s scans and mine bear more commonalities to each other, whereas the philosopher’s are the most distinct of the three.

Symphony of Minds Listening was composed at the Interdisciplinary Centre for Computer Music Research (ICCMR), School of Humanities and Performing Arts, Plymouth University. It was premiered on 23 February 2013 by the Ten Tors Orchestra under the baton of Simon Ible, at the Peninsula Arts Contemporary Music Festival, Plymouth, UK. I would like to thank Dan Lloyd (on the right; Trinity College, Connecticut, USA), Zoran Josipovic (in the middle; New York University, USA) and Duncan Williams (not in the photo; Research Fellow, ICCMR, UK) for their valuable contributions to this project.

http://symphony-of-minds-listening.webs.com/

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music that represents brain activity?

What would the music of our brains sound like? What happens in your brain when you listen to music? Have you ever wondered what goes through different minds listening to a symphony? How might new technologies informed by brain sciences shape the future of music? Can you imagine playing a musical instrument endowed with a living brain? Living musical instruments? What would they sound like? How would one play them?

Thank you!