Emotion recognition of faces and music in Asperger syndrome: A crossmodal priming study

Katharina Killy, Manuela M. Marin

Abstract


Although autistic individuals are commonly believed to have impairments in emotion perception during social interactions, recent findings do not fully support this claim (Erbas et al., 2013). A study by Charbonneau et al. (2013) showed that discrimination between two types of affective vocalizations and facial expressions, presented in isolation or in combination, was impaired in autistic individuals. However, both controls and autistic individuals benefited from a bimodal presentation of the stimuli. Research on musical emotions has indicated that autistic individuals are not impaired in decoding musical emotions (e.g., Heaton, Hermelin, & Pring, 1999; Quintin et al., 2011). Furthermore, musical emotions have been shown to influence affective face processing in healthy individuals (Logeswaran & Bhattacharya, 2009; Marin et al., in prep). Motivated by the results of Charbonneau et al. (2013), we aim to systematically investigate the possible influence of musical emotions on emotion decoding of facial expressions in a crossmodal priming paradigm. Our sample comprises 15 high-functioning autistic individuals and 15 healthy controls matched for age, gender and IQ. Primes are excerpts expressing four types of musical emotions (happy, neutral, sad and scary). Targets are pictures of facial expressions (happy, neutral, sad and scary) taken from the Radboud Faces Database (Langner et al., 2010). In a two-alternative forced-choice task, primes and targets of two types of emotions are paired in six combinations (happy-sad, sad-scary, happy-scary, neutral-scary, neutral-sad and neutral-happy), and are either congruent or incongruent in terms of valence and/or arousal. This experimental design enables us to disentangle arousal from valence effects (Marin et al., 2012). The same discrimination task, collecting reaction time and accuracy data, will be performed for all stimuli in isolation in a separate session.
We hypothesize that healthy controls will outperform autistic individuals on all tasks except the musical emotion task. We expect to demonstrate a crossmodal priming effect in healthy controls and, more importantly, a reduced priming effect in autistic individuals. Our results may motivate intervention studies in which listening to music helps train autistic individuals to discriminate between facial expressions of emotion.

References

Charbonneau, G., Bertone, A., Lepore, F., Nassim, M., Lassonde, M., Mottron, L., & Collignon, O. (2013). Multilevel alterations in the processing of audio-visual emotion expressions in autism spectrum disorders. Neuropsychologia, 51, 1002-1010.

Erbas, Y., Ceulemans, E., Boonen, J., Noens, I., & Kuppens, P. (2013). Emotion differentiation in autism spectrum disorder. Research in Autism Spectrum Disorders, 7, 1221-1227.

Heaton, P., Hermelin, B., & Pring, L. (1999). Can children with autistic spectrum disorders perceive affect in music? An experimental investigation. Psychological Medicine, 29, 1405-1410.

Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H., Hawk, S. T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition and Emotion, 24(8), 1377-1388.

Logeswaran, N., & Bhattacharya, J. (2009). Crossmodal transfer of emotion by music. Neuroscience Letters, 455(2), 129-133.

Marin, M. M., Gingras, B., & Bhattacharya, J. (2012). Crossmodal transfer of arousal, but not pleasantness, from the musical to the visual domain. Emotion, 12(3), 618.

Quintin, E.-M., Bhatara, A., Poissant, H., Fombonne, E., & Levitin, D. J. (2011). Emotion perception in music in high-functioning adolescents with autism spectrum disorders. Journal of Autism and Developmental Disorders, 41, 1240-1255.
