Title details
Frieler, Klaus ; Akkermans, Jessica ; Schapiro, Renee ; Busch, Veronika ; Lothwesen, Kai Stefan ; Elvers, Paul ; Fischinger, Timo ; Schlemmer, Kathrin ; Shanahan, Daniel ; Jakubowski, Kelly ; Müllensiefen, Daniel:
Modelling emotional expression in monophonic melodies using audio and symbolic features.
2017
Event: Annual conference of the Deutsche Gesellschaft für Musikpsychologie (German Society for Music Psychology), 15-17 September 2017, Universität Hamburg.
(Conference contribution: congress/conference/symposium/meeting, poster)
Abstract
Gabrielsson and Juslin (1996) presented evidence suggesting that skilled musical performers can communicate their intended emotional expression in music to listeners with high accuracy. Given the importance of this result, reflected in a large number of citations, we conducted a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g., happy, sad, angry) by professional musicians were recorded using three melodies from the original study. Participants (N = 319) were presented with the recordings and rated how well each emotion matched the emotional quality expressed in the recording. The same instruments as in the original study (violin, voice, and flute) were used, with the addition of piano. As an extension to the original study, the recordings were also presented to participants on an internet-based survey platform. Results showed overall high decoding accuracy using the method of analysis from the original study.
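The abstract does not spell out how decoding accuracy was computed. As a purely illustrative sketch, one plausible operationalisation (an assumption, not necessarily the original method) counts a recording as correctly decoded when the intended emotion receives the highest mean rating of all rated emotion scales. All column names and data below are hypothetical:

```python
import pandas as pd

# Hypothetical ratings table: one row per (participant, recording, emotion) triple.
# Columns ("recording", "intended", "emotion", "rating") are assumed for illustration.
ratings = pd.DataFrame({
    "recording": ["r1"] * 6,
    "intended":  ["happy"] * 6,
    "emotion":   ["happy", "sad", "angry", "happy", "sad", "angry"],
    "rating":    [8, 2, 1, 7, 3, 2],
})

# Mean rating of each emotion scale per recording, averaged over participants.
mean_ratings = (ratings
                .groupby(["recording", "intended", "emotion"])["rating"]
                .mean()
                .reset_index())

# A recording counts as correctly decoded if the intended emotion
# received the highest mean rating among all emotion scales.
top = mean_ratings.loc[mean_ratings.groupby("recording")["rating"].idxmax()].copy()
top["correct"] = top["emotion"] == top["intended"]
decoding_accuracy = top["correct"].mean()
print(f"Decoding accuracy: {decoding_accuracy:.2f}")
```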
The present study aims to investigate through which musical features the emotional expression was actually communicated to listeners. To this end, we employed three sets of features extracted from the expressive performances through computational analysis. First, we produced fine-grained transcriptions with the Tony transcription system (Mauch et al., 2015) and obtained minute details of the individual notes, such as micro-timing, intonation, and relative dynamic intensity. Second, we extracted a large set of audio features using the MIRtoolbox (Lartillot & Toiviainen, 2007), which has proved useful for modelling perceived emotional content in previous studies (Lange & Frieler, 2017). Third, we extracted a large array of melodic features using the MeloSpyGUI (Pfleiderer et al., 2017) to describe structural characteristics of the source melodies, which are also assumed to contribute to the perception of emotional content. Using these three feature sets, we aim to model the accuracy of decoding emotional expression and to identify the most salient musical features responsible for correct emotion recognition.
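The abstract leaves the modelling approach open. The following is a minimal sketch under stated assumptions, not the authors' method: the three feature sets are concatenated per recording, a random-forest regressor (one of many plausible model choices) predicts per-recording decoding accuracy, and feature importances point to the most salient features. All data are simulated, and the count of 84 recordings (3 melodies x 7 emotions x 4 instruments) is an illustrative assumption:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical feature tables, one row per recording; real features would come
# from the Tony transcriptions, the MIRtoolbox, and the MeloSpyGUI respectively.
n = 84  # assumed: 3 melodies x 7 emotions x 4 instruments
performance = pd.DataFrame(rng.normal(size=(n, 5)),
                           columns=[f"perf_{i}" for i in range(5)])
audio = pd.DataFrame(rng.normal(size=(n, 5)),
                     columns=[f"audio_{i}" for i in range(5)])
structural = pd.DataFrame(rng.normal(size=(n, 5)),
                          columns=[f"struct_{i}" for i in range(5)])
X = pd.concat([performance, audio, structural], axis=1)

# Target: per-recording decoding accuracy in [0, 1] (simulated here).
y = rng.uniform(0, 1, size=n)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Rank features by importance to see which contribute most to
# correct emotion recognition.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

In practice one would cross-validate and also fit separate models on each feature set, which is what a test of the hypothesis below (performance features outpredicting audio and structural features) would require.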
The analysis is currently under way, and results will be available by the time of the conference. We expect the expressive performance features to have much higher predictive power than the audio and structural features.
Research projects
Further details
| Publication type: | Conference contribution (unpublished): congress/conference/symposium/meeting, poster |
|---|---|
| Keywords: | emotional expression in music, replication, musical features |
| Language of entry: | English |
| University institutions: | Philosophisch-Pädagogische Fakultät > Musik > Professur für Musikwissenschaft |
| Further URLs: | |
| Open access (full text freely available?): | No |
| Title originated at KU: | Yes |
| KU.edoc ID: | 20902 |
Last modified: 02 Mar 2022, 12:26
URL of this record: https://edoc.ku.de/id/eprint/20902/