Does your language affect how you perceive the rhythms in music?
Possibly so, according to some incredibly cool new research. A group of scientists recently got interested in longstanding assumptions about how people sort sounds into groups. Traditionally, researchers would test people by playing them a bunch of musical sounds that alternated in loudness (such as loud-soft-loud-soft-loud-soft-etc.) or duration (such as long-short-long-short-long-short-etc.). Then they’d ask people to group the sounds: Where did a grouping of sounds begin and end?
Historically, people would say that a louder sound marked the beginning of a group, and a lengthened sound indicated the end of a group. This makes sense to me. Think about pop music: A verse or chorus in a pop song will often begin with a loud, accented syllable and conclude with a drawn-out note. These findings were so regularly confirmed in lab experiments that they eventually came to be regarded as “universal” laws of human perception — the acoustic lattices that organize the way we hear language and music.
Except for one thing: The studies had only been conducted with Western subjects speaking Western languages. So this new group of scientists — who work in San Diego and Kyoto, Japan — wondered whether the findings would hold up with Eastern subjects. They decided to remount the experiment, comparing native speakers of Japanese with native speakers of American English.
Sure enough, differences emerged. While the Japanese speakers agreed that loud sounds marked the beginning of groups, they disagreed when it came to sound duration: They felt that a short sound was most likely to mark the end of a group.
This, the scientists theorize, may be a result of how language trains our minds to perceive rhythm. In English, “function” words like “the” or “a” tend to come at the beginning of phrases and combine with longer, meaningful words like nouns or verbs. That means linguistic chunks tend to start short and end long. But Japanese, as the researchers note in this press release, works differently:
Japanese, in contrast, places function words at the ends of phrases. Common function words in Japanese include “case markers,” or short sounds which can indicate whether a noun is a subject, direct object, indirect object, etc. For example, in the sentence “John-san-ga Mari-san-ni hon-wo agemashita,” (“John gave a book to Mari”) the suffixes “ga,” “ni” and “wo” are case markers indicating that John is the subject, Mari is the indirect object and “hon” (book) is the direct object. Placing function words at the ends of phrases creates frequent chunks that start with a long element and end with a short one, which is just the opposite of the rhythm of short phrases in English.
The scientists now think they could analyze the structure of a language and predict how its speakers would perceive rhythms in music.