Written by Karl Leif Bates, Duke News & Communications
A pair of studies by Duke University brain scientists shows powerful new evidence of a deep biological link between human music and speech.
The two new studies found that the musical scales most commonly used over the centuries are those that come closest to mimicking the physics of the human voice, and that we understand emotions expressed through music because the music mimics the way emotions are expressed in speech. Composers have long exploited the perception of minor-chord music as sad and major-chord music as happy; now the Duke team thinks it knows why.
In a paper appearing in the Journal of the Acoustical Society of America (JASA), the Duke team, led by Dale Purves, a professor of neurobiology and member of the Duke Institute for Brain Sciences, found that sad or happy speech can be sorted into minor and major intervals, just as music can. So your mother was right: It’s not only the words you say, but how you say them.
In a second paper, appearing Dec. 3 in the online journal PLOS ONE, Kamraan Gill, another member of the team, found that the most commonly used musical scales are also based on the physics of the vocal tones humans produce.
“There is a strong biological basis to the aesthetics of sound,” Purves said. “Humans prefer tone combinations that are similar to those found in speech.”
This evidence suggests that the main biological reason we appreciate music is that it mimics speech, which has been critical to our evolutionary success, said Purves, who is also director of Duke’s Neuroscience and Behavioral Disorders Program and executive director of the A*STAR Neuroscience Research Partnership at the Duke-NUS Graduate Medical School in Singapore.
To study the emotional content of music, the Duke team collected a database of major and minor melodies from about 1,000 classical music compositions and more than 6,000 folk songs and then analyzed their tonal qualities.
They also recorded 10 people speaking a series of single words containing 10 different vowel sounds, as well as short monologues, in either excited or subdued voices.
The team then compared the tones that distinguished the major and minor melodies with the tones of speech uttered in the different emotional states. They found that the sound spectra of the speech tones could be sorted the same way as the music: excited speech exhibited more major musical intervals, and subdued speech more minor ones.
The tones in speech consist of a series of harmonic frequencies whose relative power distinguishes the different vowels. Vowel tones are produced by air moving through the vibrating vocal cords; consonants are shaped by other parts of the vocal tract.
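To make that concrete, here is a minimal Python sketch of a vowel tone as a harmonic series. This is an illustration under assumed values, not the researchers' code or data: the fundamental frequency, amplitude profile, and sample rate are all hypothetical. Note that neighboring harmonics embed the very intervals music uses, such as the 5:4 major third and the 6:5 minor third.

```python
import numpy as np

# A voiced vowel modeled as a harmonic series: sinusoids at integer
# multiples of a fundamental frequency f0. Which harmonics carry the
# most power is what distinguishes one vowel from another; the
# amplitude profile below is hypothetical, chosen only for illustration.
f0 = 120.0                                   # fundamental frequency, Hz
freqs = f0 * np.arange(1, 11)                # harmonics: f0, 2*f0, ..., 10*f0
amps = np.array([1.0, 0.8, 0.6, 0.9, 0.7,    # relative power of each harmonic
                 0.3, 0.2, 0.1, 0.05, 0.02])

sr = 16000                                   # sample rate, Hz
t = np.arange(0, 0.5, 1 / sr)                # half a second of signal

# The vowel tone is simply the sum of sinusoids at the harmonic frequencies.
tone = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

# Adjacent harmonics stand in the same small-integer ratios music uses:
print(freqs[4] / freqs[3])   # 5/4 = 1.25, a major third
print(freqs[5] / freqs[4])   # 6/5 = 1.20, a minor third
```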
In the PLOS ONE paper, the researchers argue that the harmonic structure of vowel tones forms the basis of the musical scales we find most appealing. They show that the popularity of a musical scale can be predicted from how well its tones match the harmonic series characteristic of spoken vowels.
Although there are millions of possible ways to divide the octave into scales, most human music is based on scales of only five to seven tones. The researchers argue that the preference for these particular tone collections reflects how closely they approximate the harmonic series of tones produced by the human voice.
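As a rough sketch of that argument (the ratio list and scoring below are placeholder assumptions, not the authors' method), one can score a candidate scale by how far each of its equal-tempered tones lies from the nearest small-integer ratio drawn from the harmonic series. A widely used scale such as the major scale fits far better than, say, the whole-tone scale:

```python
import numpy as np

# Small-integer frequency ratios drawn from the harmonic series
# (e.g., 3:2 from harmonics 3 and 2, 5:4 from harmonics 5 and 4).
HARMONIC_RATIOS = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2/1]

def cents(ratio):
    """Convert a frequency ratio to cents (1200 cents per octave)."""
    return 1200 * np.log2(ratio)

def harmonic_fit(semitones):
    """Mean distance, in cents, from each equal-tempered scale tone
    to the nearest harmonic-series ratio (lower = better fit)."""
    ratios = [2 ** (s / 12) for s in semitones]
    return np.mean([min(abs(cents(r) - cents(h)) for h in HARMONIC_RATIOS)
                    for r in ratios])

major      = [0, 2, 4, 5, 7, 9, 11, 12]   # do re mi fa sol la ti do
whole_tone = [0, 2, 4, 6, 8, 10, 12]      # a much less common scale

print(harmonic_fit(major))       # ~6 cents:  major-scale tones hug harmonic ratios
print(harmonic_fit(whole_tone))  # ~42 cents: the whole-tone scale fits far worse
```

The study's actual analysis is of course more careful than this toy scoring, but the logic it describes is the same: the scales people have favored are those whose tones echo the voice's harmonic series.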
Though the studies worked mainly with Western music and spoken English, there is reason to believe the findings apply more widely. Most of the frequency ratios of the chromatic musical scale can be found in the speech of a variety of languages, and the analysis included speakers of Mandarin Chinese with similar results, said Duke neuroscience graduate student Daniel Bowling, first author on the JASA paper.
“Our appreciation of music is a happy byproduct of the biological advantages of speech and our need to understand its emotional content,” Purves said.
It would be hard to say whether singing or speech came first, but Bowling supposes that “emotional communication in both speech and music is rooted in earlier non-lingual vocalizations that expressed emotion.”
The JASA paper is not yet available online; the PLOS ONE paper can be read on the journal’s website.
The Duke Institute for Brain Sciences (DIBS) was created in 2007 as a campus-wide, interdisciplinary institute committed to building an interactive community of brain science research and scholarship.