Emotions of Musical Instruments

Teun Lucassen
[email protected]

ABSTRACT
This paper describes a study of the emotions of musical instruments. The goal is to find out whether the emotion communicated to a listener can be altered by using different musical instruments: the piano, marimba, alto saxophone and cello. The experiment shows clear differences between the instruments, especially for the emotions joy and sadness.

Keywords
Music, emotion, musical instruments, piano, marimba, alto saxophone, cello, characteristics, perception, performance.

1. INTRODUCTION
There is a great deal of research activity in the field of music and emotions. Two parties argue about the way music influences the emotions of a listener: the cognitivists and the emotivists. The discussion has gone on for many years and will probably continue for many more. Various aspects of music and emotions have been investigated. Much of the research is philosophical in nature, like that of Goldman [GOL95]. The more technical studies cover aspects such as musical performance, the types of emotions that can be communicated, and the influence of music on the performance of a listener. I would like to extend the scientific knowledge base with a rather different aspect: there has been very little research on the emotions evoked by musical instruments, and I try to fill a small part of this void. The goal is to find out whether the emotion communicated to a listener can be altered by using different musical instruments. I approach this topic from the perspective of human media interaction (computer science). The results of this research could serve several purposes in computer science and other fields. An example is real-time music generation in a virtual environment, where different instruments can be used to give the environment a different feel. This application is not only suitable for entertainment purposes: Robertson et al. [ROB98] describe a real-time music generation system for a virtual learning environment for children, in which the emotions in the music are used to enhance the feeling of presence. Using various instruments to reinforce the emotion could contribute to this feeling of presence.

North [NOR99] investigates another application of emotion in music in his paper "Music and driving game performance". As the title states, he tries to find out what influence active or passive music has on lap times in a racing game. Oddly enough, actively stimulating music slowed the drivers down. North concludes that this is because the music attracts all the attention, so that a smaller part of the brain is available for driving.

2. RESEARCH QUESTION
The research question is formulated as follows: "What is the impact of different musical instruments on the emotions perceived by a listener?" I will explain this question further. I will try to find a direct relationship between the various instruments used in a piece of music and the emotion that a listener attributes to that piece. Distinct instruments are likely to have a very distinct influence on emotions, both on the type of emotion and on the arousal level. The following sub-questions arise from the research question. For each instrument, which emotion does the audience perceive when listening to the music? For each instrument, is the emotion strong or weak? Which instrument communicates emotions best? After performing the experiment, I will have listeners' emotion ratings for all instruments. For each instrument, I can see which emotion is perceived most and whether there is a wide spread of emotions. Furthermore, I can see how intense the emotions are from the rating of each emotion.

3. APPROACH
Intuitively, we associate different musical instruments with completely different music. For example, a cello is often used in sad, slow music, whereas a marimba is found more in happy music. But does this mean that a marimba is a happy instrument and a cello a sad one? Can the instruments themselves evoke these emotions, or is it purely the music? To rule out the influence of the type of music, the different instruments will be used to play the same song. I will perform the experiment with a small number of instruments, picking those that I suspect to have the greatest impact, based on the music they are commonly used in. All of them have very distinct sounds. Furthermore, they are very common instruments in today's music, which means the results of the research will be of most use in practice. These instruments are the classical piano, cello, alto saxophone and marimba.

4. LITERATURE BACKGROUND 4.1 Cognitivists vs Emotivists

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission.
4th Twente Student Conference on IT, Enschede, 30 January 2006
Copyright 2006, University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science

Goldman [GOL95] describes the discussion between these two parties about the way music is able to influence the emotions of a listener.

The first party call themselves the cognitivists. They state that a listener is able to recognize emotions in music and attribute them to it. However, music is not able to communicate emotions in terms of actual experience: a sad song can be recognized as such, but it cannot make the listener feel sad.

Emotivists, on the other hand, think it is possible for music to influence the actual emotions of a listener. This means a listener can feel sad after listening to a song, or at least sadder than he felt before. The emotivists, however, have no explanation for this, whereas the cognitivists claim they do.

An important matter to keep in mind is that music can be emotionally charged by earlier experiences. If someone heard a song during a very happy time, this person is likely to become happier when listening to the song again later. The present discussion is about the primary emotions, ruling out experiences from the past.

My opinion is that the truth lies in between, as in so many other everyday discussions. A large part of emotion in music will involve recognition only; however, from personal experience I can tell that my emotions change when listening to particular songs. For this research the difference between these theories is not vital, but it is good to be aware that it exists.

4.2 Extraction of emotions from music
In this investigation I try to measure emotional differences. It has become clear that music has emotional content, but extracting and labeling this content has proven quite difficult. Li and Ogihara [LI03] tried to extract six groups of emotions from about 500 songs, using a single test person. The results were not satisfying; accuracy was very low. They blamed the borderline cases, where it is hard to decide in which group a song should be placed. Li and Ogihara tried to model emotions in music in general; their research has not yielded results suitable for use here, such as a model to rate the emotions with.

4.3 Cue utilization
Juslin [JUS00] asked three professional guitar players to play three well-known melodies, each in four ways, to express four distinct emotions: joy, sadness, anger and fear. They were not allowed to change any of the pitches or guitar sounds, but were free to experiment with aspects like tempo, sound level, articulation and timbre. Each performer was given enough time to familiarize himself with the emotion and the song. After recording the songs, listeners were asked which emotion the performer tried to express with the music. This research yielded very clear results: about 70 percent of the listeners attributed the intended emotion to the music. This means cue utilization is successful in communicating emotions in music. In the present research, all emotions evoked by cue utilization will be ruled out.

4.4 Measuring the emotions
An important matter in this kind of research is the way the emotions of the listener are measured. The main problem is finding out how the listener actually feels. Usually, the person in question says or writes down how he feels. This may generate some noise, because it is very hard to express exactly how you feel. Another way of measuring emotions is to record physiological data such as skin activity, blood pressure or heart rate; the listener then only has to listen to the music, without reporting on it. There is some knowledge about the relationship between emotions and physiological reactions, but too little to map the reactions directly onto emotions. Krumhansl [KRU02] uses a combination of psychophysiological measurements and judged emotions; he sees a relationship between them, but one that is not clear enough for such a direct mapping. Moreover, not all physiological data is caused by the emotions, so there will be plenty of noise. Self-report seems more suitable here.

Scherer [SCH04] discusses various ways of self-report. The first option is to use basic emotions in the music and let the listener pick or rate them. The problem is that not all basic emotions are suitable for expression in music: sadness, for example, is often used in music, whereas surprise or anticipation is harder to express.

The second method is to use a dimensional model of emotions. A commonly accepted model is the three-dimensional model by Wundt, adjusted by Feldman et al. [FEL99] to a two-scale model which is of better use in research fields like music. This model has, for example, been used by North et al. [NOR97]. The two scales are valence and activation. For listeners it is quite easy to rate these two dimensions, and for the researcher they are easy to work with. The disadvantage is that some fundamentally different emotions end up at almost the same place in the model: boredom and melancholy, for instance, are quite distinct emotions, yet occupy almost the same position.

Another way to label emotions is not to use basic emotions, but to pick some emotions which are likely to suit the music. The problem here is comparability with other research, which is likely to use different emotions. Scherer proposes a variant in which you start with a large set of emotions and carefully select those that are suitable; in my opinion this raises the same problem. The choice for this experiment is a combination of the first and second methods suggested by Scherer. First the listeners are asked to rate the four basic emotions also used by Juslin (joy, sadness, anger, fear); using the same basic emotions enhances comparability. After that, the listeners rate valence and activation. By comparing the two sets of results it is possible to check their validity: for example, if the listeners rated the cello sad, does the valence-activation model show the same result?

4.5 Usage of virtual instruments
This experiment uses a virtual musical instrument (a keyboard) to simulate real instruments. This choice was made for practical reasons. Using such alternative instruments does not have to be a problem, but it is judicious to keep the differences in mind. Anyone with experience in playing keyboards can confirm that not all musical instruments are equally suitable for simulation on a keyboard. The reason is that the control of a keyboard (keys) can differ fundamentally from the control of the simulated instrument (e.g. strings). The possibilities for controlling a guitar string are vaster than those for a key on a keyboard, which means that a guitar, for example, is not really suitable for such a simulation: you will always hear that the sound is not coming from a real guitar, no matter how well-made the sample is. Examples of various controls are noted by Jensen [JEN96]. Below (section 5.2), the investigated instruments are discussed, along with their suitability for being played on a keyboard. Dobrian [DOB01] considers the aesthetic aspects of the use of virtual instruments. He suggests that instead of simulating existing musical instruments, it is better to use the virtual instrument to create new ones with their own characteristics. For this research, that suggestion is turned down, because the choice of a keyboard was made with practical reasons in mind; moreover, the results of this research will be more applicable when using well-known musical instruments.

5. EXPERIMENT SETUP
5.1 Overview
Four instruments (piano, marimba, cello and alto saxophone) were recorded playing the same song in the same way. This was achieved by having a computer play the song via MIDI. The (neutral) song was written especially for this research, to avoid recognition by the listeners. 25 listeners rated each recording on a 0-5 scale for joy, sadness, anger and fear; a 7-point scale was provided for valence and activation. Listeners heard the music through an MP3 player in a quiet room.

5.2 Investigated Instruments
For the sake of the duration of the research I chose to investigate four distinct musical instruments. My considerations for choosing them are noted below.

The first instrument is the classical piano, one of the most common musical instruments in the world; it cannot be left out of this research. It is hard to intuitively attribute a certain emotion to this instrument, because its usage is very widespread: almost every music style gratefully uses the piano to enrich the music.

The marimba is in certain aspects a member of the piano family. It uses a similar pattern of pitch alteration, namely by keys; only the keys of a marimba are struck with mallets. The marimba intuitively calls up happy, pleasant emotions. It is likely that this is caused by the type of music marimbas are often used in, namely exotic (e.g. Caribbean) music.

The third instrument is the cello. This classical instrument is of a completely different type, using strings. The warm, low vibrating sound of the cello mainly seems to provoke sad feelings; the cello is seldom heard in happy, quick music. Its territory lies in slow, classical music.

Last but not least, I investigate the emotions of the alto saxophone (or alto sax). This reed instrument resembles the sound of the cello in some ways, but intuitively it is a little happier and not as depressing as a cello. Negative emotions are likely to appear here, but not as strongly as with the cello.

A few considerations apply to all the chosen instruments. They are all quite suitable for playing on a keyboard. The piano of course has exactly the same control type, and the marimba is very close. The controls of the cello and alto sax are very different from a keyboard, which means that some of the nuances of playing these instruments (like timbre alterations within a single note) will be left out. Normally this is a disadvantage, because the sound is not as real as the actual instrument. Later (see 5.4 Recording), we will see that this is no problem here; in fact it is what we want to achieve. The selection of instruments is deliberately broad. This way I hope to see the clearest results, with big differences between the instruments, and the results will be useful in a wide range of applications.

5.3 Used Music
There are two ways in which the musical instruments can be presented to the listeners. I could play a pure sample of each instrument; this would mean playing a single note of a certain duration and letting a listener rate the emotions. I do not think this is the best way, because the listener does not hear the musical instruments in their natural habitat, namely music.

Making the instruments play a certain song causes other problems. The risk of the song itself influencing the listener's emotions is very high, which makes the instrument's part in the emotions questionable.

There are several reasons why a song can influence the emotions of a listener. Some characteristics were already treated in the literature background, but not everything is covered there. Music is very strong in recalling emotions felt at a time when you heard the song before. It is therefore not wise to use an existing (well-known) song: too many participants could have an emotional preconception of the song, influencing the results in a negative way. Musical characteristics are likely to influence the emotions too. A song written in minor chords is often rated sadder than the same song in major. Other aspects, like tempo, pitch and timbre, also influence emotions; the research by Juslin [JUS00] confirms this. For example, a slower tempo and lower pitch evoke sadder emotions. Considering the arguments above, it is best to compose a song especially for this research, so that nobody can recognize it. This song has to be as emotionally uncharged as possible: a neutral tempo, neutral chords, etcetera. Every aspect needs to be as neutral as possible. After experimenting with various musical aspects, I composed the following song.

Figure 1. The composed song
This little piece of music contains descending notes (mostly rated sad) and ascending notes (mostly rated happy). It contains major and minor chords and has a tempo of 120 quarter notes per minute, which is neutral, as is the 4/4 time signature. It consists almost entirely of quarter notes, to rule out emotional differences caused by note lengths. A simple bass line was added to make the music sound a little more harmonious than the instrument playing completely solo. The bass line is played on the same instrument as the melody itself. A couple of people heard the song before its use in the actual experiment; all had a hard time attributing emotions to it. This supports the assumption that the music is emotionally neutral.
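As a quick arithmetic check (my own, assuming the 18-second recording length given in section 5.4 covers the fragment exactly), the tempo and time signature imply:

```python
# Relate the tempo (120 quarter notes per minute), the 4/4 time
# signature and the 18-second recording length (see section 5.4).
TEMPO_BPM = 120
BEATS_PER_BAR = 4
FRAGMENT_SECONDS = 18

seconds_per_quarter = 60 / TEMPO_BPM          # 0.5 s per quarter note
quarter_notes = FRAGMENT_SECONDS / seconds_per_quarter
bars = quarter_notes / BEATS_PER_BAR
print(int(quarter_notes), int(bars))  # 36 quarter notes, 9 full bars
```

So the fragment spans nine full bars of mostly quarter notes, which fits the description above.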

5.4 Recording the songs
The recording of the four songs is a delicate matter. Emotions in music are found not only in listening, but also (if not more so) in performing. Even if the notes are provided to fix a certain style of performance, slight differences will appear in aspects like timbre, articulation and volume. In fact, this is what makes music interesting.

In this research, those differences need to be ruled out. Even if the song were recorded with one musical instrument first and later mapped to the other instruments, trouble might arise: a change in volume on a cello, for example, might have a different emotional impact than on a piano. To rule all of these matters out, a composition program, NoteWorthy Composer [NOT02], was used to create the music. In this program one drags and drops notes onto a staff to create a score; the small differences in the notes that make music interesting but also emotional are left out this way, which cannot be guaranteed when playing the keyboard by hand. The program plays this score using MIDI [MID95]. The four chosen instruments are represented by MIDI program numbers 01 (grand piano), 13 (marimba), 43 (cello) and 66 (alto saxophone). Playing these songs on an average-priced sound card in a normal PC generates a sound which resembles the original instrument in some ways, but is not very accurate. Therefore the recordings were transferred to a keyboard, a General Music WK6 Power Station [GEM05], a modern keyboard with good sound samples for accurate instrument simulation. The piano version of the song was not recorded on this keyboard, but on a Roland EP7-MK2 digital piano, which is rated as having the best piano sound in its class. The keyboard and piano were connected to a PC, where Adobe Audition [ADO05] was used to record the songs to 256 kbps, 44.1 kHz stereo MP3 files. All recordings were of equal length: 18 seconds. This should be long enough to judge the emotions, but not so long that the listener gets bored, which might influence the results.
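The instrument mapping can be sketched as follows. The program numbers are those listed above and match the General MIDI instrument list (which is 1-based); `program_change_value` is a hypothetical helper of my own, applying the standard MIDI convention that program-change messages carry a 0-based value:

```python
# General MIDI program numbers for the four instruments, as listed in
# the paper (1-based, following the GM instrument list).
GM_PROGRAMS = {
    "grand piano": 1,
    "marimba": 13,
    "cello": 43,
    "alto saxophone": 66,
}

def program_change_value(instrument: str) -> int:
    """Return the 0-based value to send in a MIDI program change."""
    return GM_PROGRAMS[instrument] - 1

print(program_change_value("cello"))  # 42
```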

5.5 Used Emotions
Scherer has proposed and discussed various ways to measure the emotions in music through a listener; I discussed them earlier in the literature background. For this research, I decided to combine the method of basic emotions with the two-scale model. Plutchik distilled eight basic emotions. From these eight I chose four to be rated by the listeners: joy, sadness, anger and fear. Juslin used the same emotions in his research. It remains unclear why Juslin chose these four, but I picked them because music is plausibly able to communicate them. The four dropped emotions are surprise, anticipation, disgust and acceptance. The four emotions were presented on a 0 to 5 scale, which means the listener could not only pick the right emotion, but also rate its intensity. Rating multiple emotions at once is also possible, and if a listener feels that a certain emotion is not applicable at all, he can rate it with a 0. After rating the basic emotions, the listener was also asked to rate the scales of valence (pleasant - unpleasant) and activation (active - passive); these are 7-point scales. By combining these two methods I build in a double check: if a musical instrument is rated with a certain emotion, this result can be compared with the valence-activation model. If the valence and activation match the rated emotion, this confirmation gives a more solid foundation for the results. The emotion joy is pleasant-active, sadness is unpleasant-passive, and both anger and fear are unpleasant-active.
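The double check can be sketched as a small consistency test. The quadrant assignments follow the mapping given above; the helper functions and the tie-breaking rule for a 0 rating are my own simplifications, not part of the study:

```python
# Each basic emotion is expected to land in a particular quadrant of
# the valence-activation model (mapping taken from the text above).
EXPECTED_QUADRANT = {
    "joy":     ("pleasant", "active"),
    "sadness": ("unpleasant", "passive"),
    "anger":   ("unpleasant", "active"),
    "fear":    ("unpleasant", "active"),
}

def quadrant(valence: int, activation: int) -> tuple:
    # Ratings run from -3 to +3; for simplicity a 0 rating is counted
    # with the negative side here (an assumption of this sketch).
    v = "pleasant" if valence > 0 else "unpleasant"
    a = "active" if activation > 0 else "passive"
    return (v, a)

def consistent(emotion: str, valence: int, activation: int) -> bool:
    """Does a valence-activation rating confirm the rated emotion?"""
    return quadrant(valence, activation) == EXPECTED_QUADRANT[emotion]

print(consistent("sadness", -2, -1))  # True: unpleasant-passive
```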

5.6 Performing the experiment
Before listening to the four recordings, all listeners were introduced to the project with the same speech, so I can be sure that every listener has the same knowledge about the project. Everyone was presented with the following text (translated from Dutch):

Thank you in advance for your participation in this experiment. You will be presented with in-ear headphones, through which you will hear four short music fragments. The experimenter will notify you when a fragment starts. Try to take the music in; close your eyes while listening. When the fragment ends, you will be asked to rate the emotions you personally felt during the song on form 1. First, you can rate all four basic emotions. If you feel that one or more emotions are not applicable, you can rate them with a 0. After this, please rate the valence (pleasant - unpleasant) and activation (active - passive) of the fragment. Always rate these two factors; there is no 0 here. Do you have any questions before we start?

The listener sat at a table on an office chair. The experiment took place in an instruction room of about 4 by 6 meters. There was almost no background noise, and everything was definitely quiet once the earphones were in. The experimenter sat in front of the listener, using a PDA with earphones to play the music. The PDA was on the table, but the listener could not see which instrument was displayed on the screen; the only way to know which instrument was playing was to listen to the music. The listener was not asked to fill out which instrument it was. The four songs were played in random order, to rule out differences in emotions caused by the order. Between fragments, the listener had as much time as he or she wanted to rate the emotions. The experimenter always checked that the form was filled out correctly, to prevent surprises during the data analysis.
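The random playback order could be produced as in this sketch; the paper does not say how the order was actually generated, so `playback_order` and the seeded `random.Random` are assumptions for illustration:

```python
import random

INSTRUMENTS = ["piano", "marimba", "cello", "alto saxophone"]

def playback_order(rng: random.Random) -> list:
    """Return a shuffled presentation order for one listener."""
    order = INSTRUMENTS.copy()
    rng.shuffle(order)
    return order

# Each listener hears some permutation of the four fragments.
print(sorted(playback_order(random.Random(42))) == sorted(INSTRUMENTS))  # True
```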
After all four songs were played and rated, the listener was presented with a second form, asking for some basic information. Age and gender were noted, as well as whether the listener plays an instrument him- or herself. There was also the possibility to note another direct relationship with music, such as singing or being an avid listener. The last item was the music taste of the listener. This aspect is not very important for this research, but it takes almost no extra effort to record, and perhaps some extra results can be gathered from it. To make the results reliable, about 25 people did the experiment. The number of males should be about the same as the number of females, so that these two groups can be compared.

6. RESULTS
6.1 Data analysis
The data recorded during the experiment was processed using SPSS, a statistics program. First, I present some basic statistics to ground the actual research results. About 53% of the participants were female and 47% male; the male-female balance is as desired. The mean age of the participants is 34. The standard deviation is quite large, namely 22, as a result of a very wide spread of ages: the youngest participant was a 15-year-old female, the oldest a 75-year-old male. As a kind of bonus, participants were asked whether they play a musical instrument. Nothing was stipulated about the ratio of musicians to non-musicians, but it turned out well, with a ratio of 53%-47%. Participants also had the option to indicate another direct personal relationship with music, besides playing it. This was an open question, but only three distinct answers were given: 47% label themselves as listeners, 18% claim to be singers, 6% say they are interested in music, and 29% did not indicate a direct relationship. The favourite music of the listeners was pop in 35% of the cases; 18% chose classical music, 12% rock, and the rest picked various other genres. The order in which the songs were played was intended to be random. A quick check in SPSS, counting the number of times each instrument was played at positions 1-4, confirms that all positions were taken about equally often by the four instruments.
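A minimal sketch of these descriptive statistics, on made-up ages (the study's raw data is not reproduced in the paper, so the numbers below will not match the reported mean of 34 and standard deviation of 22):

```python
from statistics import mean, stdev

# Illustrative ages only, spanning a wide range like the real sample
# (15 to 75); not the study's actual participants.
ages = [15, 18, 22, 25, 34, 41, 55, 62, 75]
print(round(mean(ages)), round(stdev(ages)))  # 39 21
```

The same wide spread of ages produces a similarly large sample standard deviation, which is the point the paragraph above makes.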

6.2 Emotions of musical instruments
6.2.1 Interpretation of values
In the next sections I give the results per instrument. For each instrument I provide two types of results: the means of the four emotions and a scatter graph of the valence-activation model. The emotions were rated on a scale of 0 to 5, valence and activation on a scale of -3 to +3. In the valence-activation plots the size of a dot represents the number of times that combination was chosen: the bigger the dot, the more often it was chosen.
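The dot sizes can be derived by counting identical rating pairs, roughly as follows (illustrative ratings, not the study's data):

```python
from collections import Counter

# Each listener contributes one (valence, activation) pair per
# fragment; identical pairs stack into a bigger dot.
ratings = [(2, 2), (2, 2), (1, 3), (-1, -2), (2, 2)]
dot_sizes = Counter(ratings)
print(dot_sizes[(2, 2)])  # 3: this dot is drawn three times as heavy
```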

6.2.2 Piano
The piano was intuitively hard to attribute to a specific emotion, because of the wide spread of emotions appearing in piano songs. The results of the experiment seem to confirm this.

Figure 2. Piano emotions, valence and activation
Anger and fear were hardly rated at all in this case; the listeners simply could not find them in the song recorded with the piano. The rating is 0.59 for both emotions. The first two emotions, joy and sadness, were rated higher: joy was rated 2.53 and sadness 2.00. The valence and activation model confirms these results. Most participants rated the piano as pleasant and active, which matches joy. Ratings are also seen in the passive (and unpleasant) quadrant, which matches sadness. The wide spread of dots confirms the broadness of the emotions of this musical instrument.

6.2.3 Marimba
The expectation for the marimba was that it would yield positive emotions, such as joy. This expectation seems to be right, looking at the results.

Figure 3. Marimba emotions, valence and activation
Joy was by far the highest rated emotion for the marimba, with a score of 3.41. The second highest emotion is almost negligible, with a score of 1.06. As in the results of the piano, anger and fear were almost absent, with values of 0.47 and 0.41. Almost all dots in the valence and activation model lie in the pleasant, active quadrant, which confirms the emotion of joy.

6.2.4 Cello
The cello was labelled earlier as a musical instrument likely to invoke sadder emotions. The results show that this forecast is quite right.

Figure 4. Cello emotions, valence and activation
Joy, anger and fear are rated very low for the cello, with values of 1.12, 1.24 and 1.24 respectively. Sadness is the number one emotion, with a rating of 3.41. The valence-activation model mainly shows dots in the quadrant of unpleasantness and passivity, exactly where sad emotions belong.

6.2.5 Alto saxophone
The alto saxophone was predicted to be sad, but not as sad as the cello. The results show this in an unexpected way.

Figure 5. Alto saxophone emotions, valence and activation
As predicted, the rating of sadness is lower than for the cello, with a value of 2.82. Nevertheless, this is the highest rating given to the alto sax. The second rated emotion is joy, with a rating of 1.71. Again, anger and fear are rated low, with values of 0.76 and 1.12, and again the rated emotions are confirmed by the valence-activation model.

6.3 Differences per emotion
In the preceding sections, emotions were listed per instrument. To give a clear comparison of the instruments, I list them here per emotion.

Figure 7. Emotion ratings by agegroups In almost every research containing an experiment, the difference between men and women is investigated. This seems to be justified, because almost in every case differences appear. Looking at the emotion ratings, split between men and women, we see a remarkable difference. Women rate the emotions joy and sadness higher than men, but men rate anger and fear higher then women. The differences are not significant, only 0,1 points for anger, fear and sadness and 0,2 for joy. Figure 6. Ratings of 4 instruments per emotion We see that the marimba scores the highest rating in joy, followed by the piano. The alt saxophone and cello come third and fourth. The cello is the musical instrument with the saddest emotions. The alt sax is a very close second, followed by the piano and marimba as third and fourth. The rating for anger and fear are a lot lower. The cello has the most anger in it, followed by the alt sax. The piano is third and the marimba fourth. The cello is the most fearsome musical instrument, closely follows by the sax. Piano and marimba again come in third and fourth.

6.4 Other aspects During the experiment other questions besides emotions were asked to the listeners. Things like gender, age, musicality were also noted. If we compare these factors to the overall ratings of the emotions, we get some interesting results. The first factor is age. I divided the complete group of participants in two halves, the youngest and the oldest. We see that this does not show significant differences in the ratings of joy, anger and fear. Sadness however, is rated much higher overall by younger people. The difference is more then 0,5 points.

Figure 8. Emotion ratings by gender A third way to split the group into two pieces, is by musicality. If we separate the results from musicians from non-musicians, big differences appear. Anger and fear have nearly equal ratings, but there is a huge difference between the ratings of musicians and non-musicians of joy and sadness. Joy is rated 0,4 point higher by musicians and the difference at sadness is even bigger, with a gap of 0,6 points.

Figure 9. Emotion ratings by musicality

7. DISCUSSION
7.1 Anger and fear
I started off with four basic emotions, which I suspected to be the most suitable to be communicated by music. The results of this research, however, clearly show that anger and fear were rated much lower than joy and sadness. Joy and sadness are, in my opinion, more general and easier to attribute. The results do not show that it is entirely impossible to communicate anger and fear through music; it is only that with this setup, method, music and set of instruments, these emotions come out very weakly. Juslin [JUS00], for example, used the same four emotions and was successful in communicating anger and fear.

In my opinion, anger and fear are less "basic" than emotions such as joy or sadness. In fact, I believe that sadness is part of the emotion fear: fear merely adds an extra layer on top of it. Both emotions are unpleasant and inactive, which places them near each other in the valence-activation model. This also supports my opinion that sadness and fear are not fundamentally different. The reason I did use fear and anger along with joy and sadness is that Plutchik lists them as basic emotions, and basic emotions were needed in the method I chose for measuring them.

The musical instrument most successful in communicating anger and fear is the cello. I think it is no coincidence that this is also the instrument with the highest rating for sadness: the cello invokes negative emotions, and all three of these emotions are negative. Because of the low ratings, I will leave anger and fear out of the rest of the discussion; the results are not clear enough to base conclusions on.

7.2 Joy and sadness
Looking at the two basic emotions joy and sadness, one can see that the marimba in particular is strong in communicating joy. The cheerful sound of this instrument invokes almost exclusively happy feelings; playing a song on a marimba certainly cheers it up. Informal experimenting, by playing numerous songs with a marimba sound on a keyboard, confirms this.

The cello was very successful in communicating sadness, and so was the alt saxophone. The cello's only high rating was sadness, indicating a pure relationship with this emotion. The alt saxophone's sadness rating is 17% lower than the cello's, but its joy rating is much higher. At 1.71 this joy rating is still fairly low overall, but for an instrument with such a high sadness rating it is remarkable. The alt saxophone seems able to communicate both of these emotions, with sadness the stronger one. Earlier, I spoke about my intuitive feelings about the alt sax; I still think that this combination of joy and sadness can be interpreted as consolation. The bluesy music an alt sax is frequently used for has the same comforting effect.

It is not only in the alt saxophone that joy and sadness are combined. The piano shows this combination even more strongly, with a rating of 2.53 for joy and 2.00 for sadness. Most of these high ratings were given by the same participants, meaning that listeners rated joy and sadness high at the same time. Apparently the music I chose can, in combination with a piano, be interpreted as both happy and sad. This again confirms the neutrality of this instrument. In my opinion, this fact can be of use when the music itself, rather than the instrument, should carry all the emotions: the piano just transfers the music without adjusting its emotional content.
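The argument in section 7.1 that sadness and fear sit close together in the valence-activation model can be made concrete with a toy sketch. The coordinates below are my own rough assumptions about where the four emotions fall in a (valence, activation) plane, not values from this paper or from Plutchik.

```python
from math import dist  # Euclidean distance, Python 3.8+

# Illustrative coordinates in a (valence, activation) plane, both in [-1, 1].
# These placements are assumptions for the sketch: joy is pleasant and
# active; sadness and fear are unpleasant and inactive; anger is
# unpleasant but active.
emotions = {
    "joy":     ( 0.8,  0.6),
    "sadness": (-0.7, -0.5),
    "fear":    (-0.6, -0.3),
    "anger":   (-0.7,  0.7),
}

def closest_pair(points):
    """Return the two emotions lying nearest each other in the plane."""
    names = list(points)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    return min(pairs, key=lambda p: dist(points[p[0]], points[p[1]]))

print(closest_pair(emotions))  # → ('sadness', 'fear')
```

Under these assumed coordinates, sadness and fear are indeed each other's nearest neighbours, while anger, despite sharing negative valence, is pushed away by its high activation.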

7.3 Other aspects
A striking result among the secondary aspects was the age difference. When dividing the group into a younger and an older half, we see that sadness in particular is rated much higher by younger people, with a difference of about 0.5 points. This is a result I cannot explain; reasoning about general emotional differences between young and old people is hard to do without a specific literature background.

Females rated joy and sadness higher than males. The difference is not as big as with the age groups, but it does exist. In my opinion, the reason for this result is that women are often more emotionally involved. On the other hand, the number of people participating in this experiment was not large enough to draw a well-founded conclusion here.

The most interesting result, in my opinion, was the difference in emotion ratings between musicians and non-musicians. The differences were big for both joy and sadness. Apparently, when someone plays a musical instrument, he or she becomes more involved with the emotions in music, especially in an experiment that focuses on musical instruments. To be clear: the participants did not know in advance that the experiment was about the instruments.

8. CONCLUSION
The experiment has roughly produced the results that I expected. The piano turned out to be an emotionally neutral musical instrument. The marimba was very joyful, whereas the cello invoked strongly sad emotions. The alt saxophone provoked both negative and positive emotions, which I have labeled a consolation effect. Men are less emotionally involved in music than women, although the experiment group was not big enough to give a solid foundation for this conclusion. Young people rated sadness much higher than old people; I do not have an explanation for this. The last conclusion is that musicians are more emotionally involved in music than people who do not play an instrument.

9. FURTHER RESEARCH
In this research I looked only at four emotions and four distinct musical instruments. Multiple aspects could be investigated to extend or improve it. It is of course possible to use instruments other than the ones I used; adding more instruments to the knowledge of emotions in musical instruments would be useful. It is also possible to use other or additional emotions and to find out whether these too are invoked by musical instruments. One could even drop the basic emotions in favor of emotions that match music better; Scherer [SCH04] has proposed methods for such an investigation.

More interesting, in my opinion, would be an investigation of the properties of the musical instruments themselves. In this research I found that musical instruments do in fact alter the emotions communicated by music, but the results only cover four instruments. Which properties of an instrument cause these emotions? Answering this question would make the results applicable to any kind of musical instrument, simply by examining its properties.

REFERENCES

[ADO05] Adobe Audition 1.5. Adobe Systems Incorporated, 2005.

[DOB01] Dobrian, C. Aesthetic Considerations in the Use of "Virtual" Music Instruments. Journal SEAMUS, Spring 2003.

[GEM05] General Music S.p.A. http://www.generalmusic.com

[GOL95] Goldman, A. Emotions in Music (A Postscript). The Journal of Aesthetics and Art Criticism, 53, 1 (1995), p. 59-70.

[JEN96] Jensen, K. The control of musical instruments. Nordic Acoustical Meeting, 1996.

[JUS00] Juslin, P. N. Cue Utilization in Communication of Emotion in Music Performance: Relating Performance to Perception. Journal of Experimental Psychology, 26, 6 (2000), p. 1797-1813.

[KRU02] Krumhansl, C. L. Music: A Link Between Cognition and Emotion. Current Directions in Psychological Science, 11, 2 (2002), p. 45-50.

[LI03] Li, T. and Ogihara, M. Detecting Emotion in Music. Johns Hopkins University, 2003.

[MID95] MIDI Manufacturers Association. http://www.midi.org

[NOR97] North, A.C. and Hargreaves, D.J. Liking, arousal potential, and the emotions expressed by music. Scandinavian Journal of Psychology, 38, 1 (1997), p. 45-54.

[NOR99] North, A.C. and Hargreaves, D.J. Music and driving game performance. Scandinavian Journal of Psychology, 40 (1999), p. 285-292.

[NOT02] NoteWorthy Composer 1.75. NoteWorthy Software, 2002.

[ROB98] Robertson, J. et al. Real-Time Music Generation for a Virtual Environment. University of Edinburgh, 1998.

[SCH04] Scherer, K. Which Emotions Can be Induced by Music? What Are the Underlying Mechanisms? And How Can We Measure Them? Journal of New Music Research, 33, 3 (2004), p. 239-252.