Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-related brain potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) also elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.

Strong arguments have been made for both of the opposing frameworks of modularity versus shared resources underlying language and music cognition (see reviews). On the one hand, double dissociations of linguistic and musical processes, documented in neuropsychological case studies, often point to domain-specific and separate neural substrates for language and music. On the other hand, results of brain imaging and behavioral studies have often demonstrated shared or similar resources underlying, for instance, syntactic and harmonic processing, auditory working memory for both linguistic and musical stimuli, and semantic or semiotic priming. These conflicting results may stem from the use of different methods, but also from other methodological problems. The main disadvantage to comparing language and music processing by testing perception of speech and musical excerpts is that the acoustic properties, context, and secondary associations (e.g., musical style or linguistic pragmatics) between even the most carefully controlled stimuli may vary greatly between the two domains. One ecological alternative is to study the perception of song. In this case, linguistic and musical information are contained in one auditory signal that is also a universal form of human vocal expression.

While most studies of music cognition have used non-vocal music stimuli, everyday music-making and listening usually involve singing. Moreover, from a developmental perspective, singing is also quite relevant for parent-infant bonding, as indicated by studies showing that babies prefer infant-directed singing to infant-directed speech. Furthermore, a better understanding of the neural basis of song is surely germane to the ongoing debate on the evolutionary origins of language and music, especially in view of propositions that the protolanguage used by early humans was characterized by singing, and that vocal learning was a key feature governing the evolution of musical and linguistic rhythm.

Early studies of song cognition used dichotic listening paradigms to reveal lateralization patterns of left-ear (right hemisphere) advantage for melody recognition and right-ear (left hemisphere) advantage for phoneme recognition in song, and in the recall of musical and linguistic content of sung digits. Despite these lateralization tendencies, melody and lyrics appear to be tightly integrated in recognition and priming experiments. Indeed, the melody of a song may facilitate learning and recall of the words, though this advantage appears to be diminished when the rate of presentation is controlled for, such that spoken lyrics are presented at the same rate as sung ones. Furthermore, the segmentation of a pseudo-language into relevant units is facilitated for sung compared to spoken pseudowords, and infants learn words more easily when sung on melodies than when spoken. The extent to which semantics and emotions are conveyed by song lyrics remains a controversial issue. One study showed that when participants were asked to listen to songs from a variety of popular music genres, they performed only at chance level when attempting to interpret the singer's intended message of each song.
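The 2 x 2 priming design and the N400 difference measure described above can be sketched in code. This is an illustrative simulation, not the authors' analysis pipeline: the four condition labels come from the study, while the `n400_mean_amplitude` helper, the 300-500 ms latency window, and the synthetic waveforms are assumptions chosen only to show how an N400 effect is typically quantified as a difference in mean amplitude.

```python
import numpy as np

# The four prime-target conditions of the study's 2 x 2 design,
# crossing word identity with melody identity.
CONDITIONS = [
    ("same word", "same melody"),
    ("same word", "different melody"),
    ("different word", "same melody"),
    ("different word", "different melody"),
]

def n400_mean_amplitude(erp_uv, times_ms, window=(300, 500)):
    """Mean amplitude (microvolts) in a typical N400 latency window (assumed)."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return float(erp_uv[mask].mean())

# Simulated grand-average ERPs (illustration only): 'different' targets
# carry an extra negativity peaking near 400 ms; 'same' targets do not.
times = np.arange(-100, 800)  # ms relative to target onset
rng = np.random.default_rng(0)
same = rng.normal(0.0, 0.05, times.size)
different = same - 2.0 * np.exp(-((times - 400) ** 2) / (2 * 60.0 ** 2))

# The N400 "effect" is the different-minus-same amplitude difference
# in the window; a negative value means a larger negativity for 'different'.
effect = n400_mean_amplitude(different, times) - n400_mean_amplitude(same, times)
print(f"N400 effect: {effect:.2f} uV")
```

In real EEG data the same subtraction would be applied to per-condition grand averages at centro-parietal electrodes, and the effect tested statistically across participants rather than read off a single difference value.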