Archive: February 2009

Cute but sad Dumbo octopus

Collision Detection reader Paul Gemperle is a big fan of the “Dumbo octopus”. As am I. And really — who isn’t? Only someone with an anvil for a heart could be unmoved by the gorgeous and thoroughly extraterrestrial spectacle of the noble Grimpoteuthis, which looks sort of like a gelatinous ghost from Pac-Man, with two floppy li’l ear-thingies on top.

So Paul and I were both excited and saddened to see this footage of a Dumbo octopus that, according to the YouTube poster, was caught during a beam trawl. “Excited” because it’s cool to see these things up close, but “sad” because, as Paul points out, the octopus doesn’t seem very happy.

Humanity, watch it. I’ve said it before and I’ll say it again: You mess with the Kraken, you’ll get the seven-mile-long Cthulhuian tentacles. I shudder to think of what sort of charges will be levied against us during the cephalopod truth-and-reconciliation commissions a few decades from now.

New mathematical technique turns Chopin into 3D shapes


A few weeks ago I had an exchange on Twitter with John Magee about music. He Tweeted “music is the least representative art. or is it? could it abstractly represent the noise of consciousness?”, which led us to talk about the relationship between music and math/geometry. I pointed out that when Howard Gardner researched his theory of “multiple intelligences,” he so frequently found music and math paired up — i.e. somebody with facility in one invariably had facility in the other — that he considered making them a single intelligence: Two sides of the same coin, as it were.

(Other researchers have argued persuasively that this isn’t the case; when I profiled the neuroscientist and record-producer Daniel Levitin, he noted that people who have Williams syndrome often have fantastic musical abilities, even though their traditional IQ never rises above that of a small child — they almost never acquire any basic math at all. So the link between math and music can’t be that linear.)

Anyway, the Twitter exchange I had with Magee came back to me when I stumbled upon this: A bunch of music professors have invented a new way to visualize music — as points in geometric spaces of various dimensions. That picture above? It’s from this video, in which they represent the chord changes in Chopin’s E Minor Prelude as a set of dots moving around the periphery of circular “pitch class space”. Even trippier is this one, which maps the chord changes through a four-dimensional space.
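To get a rough feel for the idea, here’s a toy sketch — my own illustration, not the professors’ actual model — that places the twelve pitch classes at equal angles around a circle and turns each chord in a progression into a set of points on that circle:

```python
import math

# A toy sketch of "pitch class space" (my own illustration, not the
# researchers' actual model): place the 12 pitch classes at equal
# angles around a circle, then render each chord as a set of points.

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def pc_to_point(pc):
    """Map a pitch class index (0-11) to (x, y) on the unit circle."""
    angle = 2 * math.pi * pc / 12
    return (math.cos(angle), math.sin(angle))

def chord_to_points(note_names):
    """Turn a chord (a list of note names) into circle coordinates."""
    return [pc_to_point(PITCH_CLASSES.index(n)) for n in note_names]

# Three chords with mostly stepwise voice leading (illustrative only --
# not the actual progression of Chopin's E minor prelude).
progression = [["E", "G", "B"], ["E", "F#", "B"], ["D#", "F#", "B"]]

for chord in progression:
    coords = ", ".join(
        f"({x:+.2f}, {y:+.2f})" for x, y in chord_to_points(chord)
    )
    print(" ".join(chord), "->", coords)
```

Notes that sit near each other chromatically land near each other on the circle, which is roughly why smooth voice leading traces short, watchable paths through these spaces.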

The professors argue that their new representational system could produce some cool innovations:

“You could create new kinds of musical instruments or new kinds of toys,” he said. “You could create new kinds of visualization tools — imagine going to a classical music concert where the music was being translated visually. We could change the way we educate musicians. There are lots of practical consequences that could follow from these ideas.”

“But to me,” Tymoczko added, “the most satisfying aspect of this research is that we can now see that there is a logical structure linking many, many different musical concepts. To some extent, we can represent the history of music as a long process of exploring different symmetries and different geometries.”

I’m all in favor of new ways to let people visualize — and play around with — music. A lot of electronic music software embraces this sort of visual mess-around aesthetic — like the incredibly cool Nintendo DS game Electroplankton (which I wrote about for Wired News a while back), or even Korg’s Kaoss Pad, which quite directly maps musical/audio concepts into a spatial dimension by letting you control two variables at once on an X-Y grid (like “resonance” and “frequency”, so that sliding your finger around creates a wah-pedal-like effect).

I’ve often wondered whether someone could similarly use software to create a better way to teach the harmonica. A lot of the cool dynamics of harmonica playing are about embouchure — the different shapes you make with your lips, tongue, cheeks and jaw. The overall shape inside your mouth changes the resonance of the harmonica almost the way a particular X/Y position on a Kaoss Pad (used in “filtering” mode) changes the sound of an instrument passed through it. Drop your jaw and tongue down low and you “bend” a harmonica note downwards; push your finger on the Kaoss Pad up and to the left and you’ll filter the instrument so you only hear the very lowest frequencies.
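For what it’s worth, here’s a minimal sketch of that two-variables-at-once idea — the parameter ranges and the log-scale cutoff are my guesses at a plausible mapping, not Korg’s actual one:

```python
# A guess at how an X/Y pad might map one finger position onto two
# filter parameters at once (the ranges and the log-scale cutoff are
# my assumptions, not Korg's actual mapping).

def pad_to_filter(x, y):
    """Map normalized pad coordinates (0.0-1.0) to filter settings.

    x sweeps the cutoff frequency on a log scale (20 Hz at the far
    left, 20 kHz at the far right); y sweeps the resonance. Sliding
    a finger around changes both at once -- the wah-pedal effect.
    """
    cutoff_hz = 20.0 * (1000.0 ** x)  # log sweep: 20 Hz -> 20 kHz
    resonance = y                     # 0.0 (flat) -> 1.0 (peaky)
    return cutoff_hz, resonance

# Finger at the far left: only the very lowest frequencies pass.
print(pad_to_filter(0.05, 0.9))
# Finger swept to the right: bright, open filter.
print(pad_to_filter(0.9, 0.3))
```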

The cool thing about a Kaoss Pad is that you can see the relationship between where you slide your finger and the attendant musical effect. But with harmonica, it’s really hard to visualize what’s happening inside the harmonica player’s mouth. “How to play the harmonica” guides tend to trip over themselves trying to explain precisely where the hell your tongue and cheeks and jaw and lips are supposed to be, and how relatively tense or loose they’re supposed to be, and how hard or soft you’re supposed to be blowing. I wonder if it’d be possible to create a sort of MRI-like visualization of what’s going on inside your mouth while playing the harmonica — as a visual aid? Or could you even create a virtual instrument that let you play a virtual harmonica using Kaoss-like controls?

Study: Hearing damage occurs after more than 5 minutes of full-volume listening on iPod earbuds


I’m coming late to this, but back in 2006 a couple of audiologists decided to test a bunch of earbuds and headphones to measure how much hearing damage they cause when playing MP3 tracks at a range of volumes, from whisper-quiet to full-on 100%.

That chart above summarizes the results. It’s slightly alarming to me, because I sometimes listen to my Sansa Fuze player — using earbuds — at 80% or 90% of the volume, in part because when I’m walking around Manhattan, the ambient noise is pretty high, so the music has to cut through it. If these guys are right, I can listen for no more than 1.5 hours, and possibly as little as 22 minutes, “without greatly increasing [my] risk of hearing loss”. Mind you, who really knows: As the scientists note in their writeup, people’s auditory physiology varies quite a lot.
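The study’s own dose model isn’t spelled out here, but the standard NIOSH rule of thumb — 8 hours of permissible exposure at 85 dBA, halving for every 3 dB above that — gives a feel for why the safe window collapses so fast at high volume settings. A quick sketch (the dB levels below are illustrative, not the study’s measured earbud figures):

```python
# Back-of-the-envelope exposure math using the standard NIOSH rule:
# 8 hours is permissible at 85 dBA, and the allowance halves for
# every 3 dB above that. (The levels below are illustrative guesses,
# not the study's measured earbud output.)

def safe_listening_hours(level_dba, reference_dba=85.0, exchange_db=3.0):
    """Permissible daily exposure, in hours, at a given sound level."""
    return 8.0 / (2.0 ** ((level_dba - reference_dba) / exchange_db))

for level in [85, 94, 100, 105]:
    hours = safe_listening_hours(level)
    print(f"{level} dBA -> {hours:.2f} hours (~{hours * 60:.0f} minutes)")
```

At 100 dBA that works out to about 15 minutes a day; at 105 dBA, under 3 minutes — which is roughly the territory full-volume earbuds live in.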

But there’s also some more, and newer, nifty research from this team — about the listening habits of young people. One public-health-policy concern these days is that young people are listening to music waaaaaay too loud, and are all gonna be deaf by their mid 40s. To figure out whether this was true, these same scientists studied 30 Denver-Boulder-area teenagers.

It turns out things aren’t so bad. Only a small minority of kids — between 7 and 24 per cent — are listening to their MP3 players at eardrum-shredding levels. “We don’t seem to be at an epidemic level for hearing loss from music players,” Portnuff said. What’s more, boys tend to listen to music louder than girls, teenagers tend to listen at quieter levels as they get older, and some teenagers appear to have trouble judging precisely how loud they’re listening.

But dig this …

The study also showed that teen boys listen louder than teen girls, and teens who express the most concern about the risk for and severity of hearing loss from iPods actually play their music at higher levels than their peers, said CU-Boulder audiologist and doctoral candidate Cory Portnuff, who headed up the study. Such behaviors put teens at an increased risk of music-induced hearing loss, he said. [snip]

“We really don’t have a good explanation for why teens concerned about the hearing loss risk actually play their music louder than others,” he said.

Heh. Maybe it’s because they’ve been listening so loud for so long that it’s just begun to unsettle even themselves. Not enough to stop, yet, but enough to start wondering, hmmm, isn’t this gonna make me deaf?

Is Flower the first game about global warming? My latest Wired gaming column

Spoiler warning: This blog posting contains a lot of spoilers about the video game Flower!

In recent years there’s been a heartening surge in “art” video games — games that use play mechanics to explore an idea or evoke a mood. The creators treat gameplay as a rhetorical technique, using the physics, music, action, perspective, goals and challenges of the game as metaphors. A couple of months ago I wrote my Wired News column about Passage, a fantastic little video-game meditation on life and death. This week, I wrote about Flower, an insanely beautiful game released two weeks ago for the PlayStation 3 by Jenova Chen. In the game, you control a gust of wind that blows a flower petal along, and you do …

… well, lots of things. You touch other flowers, opening them up and releasing their petals; if you do a lot of this you start to bring dead, dry land back to life. Sometimes you also cause huge rocks to shift and groan and open up like petals themselves. Other times dead trees explode with color and leaves, or winds start blowing that power wind turbines. The final “boss fight” — such as it is — consists of a crazy, massive “awakening” of an entire grey, dead, “fallen” city.

The visual metaphors and the gameplay are open-ended enough — yet evocative enough — that critics have been arguing, interestingly, about what precisely the game is supposed to be saying. So I wrote my column arguing that it’s essentially a game about the environment, or climate change.

Of course, the game isn’t solely “about” climate change, in the sense that Robert Frost’s “Stopping by Woods on a Snowy Evening” isn’t solely “about” longing for death, horses, the winter solstice, obligation and freedom, or snow. Flower is amorphous enough that you could say it’s “about” any number of things, ranging from i) spiritual renewal to ii) the vague delights of conquering obstacles to iii) the Cartesian mind/body divide (all that hard steel! all those soft flower petals borne aloft on a mere breath of air!) to iv) the sheer weird tactile fun of hurling petals into the wind. When you head into a thunderstorm and watch, close-up, as a handful of floating petals are illuminated from behind by a sudden flash of lightning, it’s clear that Chen is having enormous fun with artistic traditions ranging from chiaroscuro to, probably, Katinka Matson’s scanner-photography.

What particularly interested me was how straightforwardly Chen’s imagery in the game was rooted in super-ancient Western mythologies about a dry, broken land healed by a heroic quest. Now, plenty of video games use this as their substructure — hell, you could argue that Super Mario is sort of in that tradition — but Chen strips everything back to pure, raw metaphoric imagery. Yet because so many of those images are so peculiarly contemporary — power windmills, out-of-control electricity, brooding weather, corroded industrial towers — I couldn’t escape the idea that he was deploying all this rich tradition to rummage around in our modern unease about the environment.

Anyway, enough blathering about the column. (My intro is nearly as long as the column itself!) The piece is online free here at the Wired News web site, and a copy is archived below. Above is a bit of gameplay via YouTube!


How to get “ReTweeted”: Tweet in the morning EST, offer cool new information, and say “please”


What’s a “ReTweet”? It’s when somebody copies one of your Twitter status updates and puts it in their own stream. It has thus become the new coin of the realm in measuring online influence: If your utterances on Twitter are getting ReTweeted a lot, then you can brag lustily about your awesome Web 7.0 street cred. Tens of thousands of Twitterophiles each day stare forlornly at that empty box on the Twitter page, wondering what they can say — in 140 characters or less — that will suddenly go viral and sweep the globe.

Well, wonder no more! Over at the Mashable blog, viral-marketing expert Dan Zarrella did some fascinating research into “the science of ReTweets.” Because Twitter offers a very open and generous API into its enormous firehose of everyday Tweets, anyone can grab the data and try to parse it for patterns. Zarrella decided to look at ReTweeting amongst a sample of 20,000 users to see if he could spy any rules. So what did he find?

Firstly, he discovered that the identity of the original Twitterer isn’t the be-all and end-all. You might imagine that getting ReTweeted is simply a matter of being a huge Twitter celebrity with 15,000 followers; with so many people paying attention to your Tweets, it would stand to reason that you’d have a much greater likelihood of your utterances going viral, right?

That’s true, Zarrella found — but only to a degree. If you control for the number of followers someone has — and thus compare Tweets to Tweets on an equal basis — then the content of the Tweet is actually more important than the identity of the person who originally wrote it. (There’s a sketch of what that kind of like-for-like comparison might look like after the list below.) What specific type of content was most likely to be ReTweeted? Original stuff — bits of news and information that are exclusive to the original Twitterer. In particular …

- Calls to action (as in: “please ReTweet”), while they might sound cheesy, work very well to get ReTweets.
- Timely content gets ReTweeted a lot.
- Freebies are popular.
- Self-reference (Tweeting about Twitter) works.
- Lists are huge.
- People like to ReTweet blog posts.
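As promised above, here’s a sketch of what “controlling for follower count” might look like in practice — bucket Tweets by the author’s follower count, then compare ReTweet counts only within a bucket. The data structure is invented for illustration; Zarrella’s write-up doesn’t describe his pipeline at this level of detail:

```python
from collections import defaultdict

# Sketch of "controlling for follower count": group tweets by the
# author's follower band, then compare retweet counts like-for-like
# within each band. The sample data is made up for illustration.

tweets = [
    {"text": "please retweet: free ebook on social media", "followers": 150,   "retweets": 12},
    {"text": "eating a sandwich",                           "followers": 150,   "retweets": 0},
    {"text": "new post: 10 twitter tips",                   "followers": 15000, "retweets": 90},
    {"text": "good morning everyone",                       "followers": 15000, "retweets": 3},
]

def follower_bucket(count):
    """Coarse bands, so authors of similar reach are compared together."""
    if count < 1000:
        return "under 1k"
    elif count < 10000:
        return "1k-10k"
    return "over 10k"

buckets = defaultdict(list)
for t in tweets:
    buckets[follower_bucket(t["followers"])].append(t)

for bucket, bucket_tweets in buckets.items():
    best = max(bucket_tweets, key=lambda t: t["retweets"])
    print(f"{bucket}: most retweeted -> {best['text']!r} ({best['retweets']} RTs)")
```

Within each band, the Tweet with exclusive, actionable content wins — which is the pattern Zarrella reports.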

The most ReTweeted words and phrases were, in order, “you”, “twitter”, “please,” “retweet”, “post”, “blog”, “social,” and “free”. Indeed, as Zarrella points out, saying “please” is very powerful — “polite calls to action” have a high incidence of getting ReTweeted. We’re social beings; we like to help out!

The final intriguing trend he found is time of day. It turns out Twitter is currently governed by the circadian rhythms of Eastern Standard Time — because the amount of ReTweeting overall starts at a low level in the predawn period EST, then climbs during the workday and peaks at 3 pm. (Check out the chart after the jump.)
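That time-of-day pattern is easy to reproduce on any timestamped sample — bin the ReTweet times by hour and look for the workday climb. A sketch (the timestamps are invented stand-ins for the real firehose data):

```python
from collections import Counter
from datetime import datetime

# Bin retweet timestamps by hour (assuming they're already in Eastern
# time) and print a crude histogram. The timestamps are invented
# stand-ins for the real firehose data.

retweet_times = [
    "2009-02-10 06:12", "2009-02-10 09:05", "2009-02-10 09:40",
    "2009-02-10 11:22", "2009-02-10 15:01", "2009-02-10 15:17",
    "2009-02-10 15:48", "2009-02-10 21:30",
]

by_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts in retweet_times
)

for hour in sorted(by_hour):
    print(f"{hour:02d}:00 EST  {'#' * by_hour[hour]}")
```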

So if you want to get really well ReTweeted? Tweet something with nifty original content, ask if people will “please” pass it around, and post at 9 am EST.

One of the things that fascinated me about Zarrella’s work here is that it appears to support Duncan Watts’ debunking of the idea that viral spreading of trends, memes or utterances is dependent upon “influentials” — incredibly well-connected people who are all-important “hubs” in the social network. (I wrote about his work last year in Fast Company.) The “two-step” theory of influence, developed in the 50s and popularized in Malcolm Gladwell’s The Tipping Point, is that these superinfluential folks are key to the spread of a big trend. Watts doubted this was true, and developed some mathematical models and real-world experiments that cast a lot of doubt on the idea that “influential” people can really have that much influence. And indeed, Zarrella seems to have found that being an “influential” on Twitter — i.e. having tons of followers — isn’t as important as the quality and content of the message.


Why Gay Talese does his reporting on “laundry board”

This morning, I was reading the print version of The New York Times when a thin, one-column story in the city section caught my eye: “When Panhandlers Need a Wordsmith’s Touch”. In the story, the writer uses a first-person voice to describe how he was walking to the bank and wound up talking to panhandlers, and discovered that the economic downturn is hitting them pretty hard too.

The writer suggests that a panhandler ought to wear a sign invoking the Obama stimulus package. Indeed, the writer even offers to compose a message for the guy — at which point I hit upon this incredibly odd passage:

I stopped talking and reached into my pocket for one of the strips of laundry board on which I make notes when I’m interviewing people. On one strip of laundry board I wrote: “Please Support Pres. Obama’s Stimulus Plan, and begin right here … at the bottom … Thank you.’’ I handed it to him, and he said he’d copy the words on his sign and have it on display the following day.

Wait a minute: This guy does his reporting on strips of laundry board? My mind boggled. What in god’s name are “laundry boards”, anyway? And what sort of oddball cuts them into pieces instead of using a regular notepad? The mere phrase “laundry board” — and the packrat-like behavior here — made me immediately think, “wow, this guy must be like 124 years old or something.” So I glanced down to the bottom of the story, where the byline was printed, and saw that it was …

… Gay Talese. Bingo! Not quite as old as I’d suspected — he’s only 77. Nonetheless, he’s of that generation of writers who — as he noted in this interview here — are most comfortable writing their stories longhand. Only after he’s written longhand does he transfer his work to a typewriter, and only at the very end of the editing process does he shift over to a computer (which he uses, as he points out, “like a typewriter,” as opposed to a networked data-processing machine).

I’m not being critical here; on the contrary, I love hearing about the kooky, idiosyncratic methods people use to gather, process, and compose information. Each method — paper, typewriter, computer, smoke signals — has a built-in cognitive style, things it’s good and bad at. For example, when I’m reporting at my desk and doing interviews, I always type on my computer, because I can rip along at about 80 words a minute. My interview notes are thus extremely complete — almost a word-for-word transcription of what my interviewee said.

But when I do use paper notes — such as when I’m out in the field — I enjoy the fact that the lower bit-rate of data recording forces me to make choices, instant by instant, about what’s interesting and worth recording, and what isn’t. When I type notes, it’s a “publish, then filter” system, whereby I capture everything and only later on decide what’s crucial for the story; when I take notes on paper, it’s an older-skool “filter, then publish” system. The thing is, the latter approach can sometimes yield more penetrating insights into what I’m reporting on, because being forced to think on the fly really sharpens my focus: When I have to make split-second decisions about whether I have time to record something, I notice everything more vividly.

When I’m finally writing, the cognitive styles of computer versus paper tend to complement one another. The cut-and-paste, move-it-around-and-see-what-happens nature of digital text makes it easier to be playful with how I’m structuring a piece. But sometimes when I’m stuck I’ll pull out a pad of paper, because paper slows me down, and being forced to think more slowly can be incredibly useful. (Also, writing on paper allows me to draw swoopy arrows and talmudic sidenotes that can result in big breakthroughs about how to structure a long article.) I’ve never actually tried writing a long feature on a typewriter, but I suspect it’d be a pretty fascinating experience.

Given Gay Talese’s awesome impact on journalism, I’m assuming that whatever writing systems he’s got work incredibly well for him. Now I’m savoring the mental image of him stalking Frank Sinatra back in 1966 for his seminal Esquire profile, and carrying around slips of laundry board to scribble notes.

I’m still mystified, though. Can anyone explain precisely what the hell “laundry board” is? (Collision Detection reader Dave pointed out that Talese actually does explain in the article what laundry board is — it’s the 14 by 8 inch cardboard “that the dry cleaner sends home with my shirts.” I, being a crack investigative journalist, managed to breeze past that passage in a 543-word article. Winner.)

41% of museums don’t know how dogs actually walk

See that skeleton above? It’s a display at the Natural History Museum in Oulu, Finland, and it shows a domestic dog in mid-stride. The only problem?

That’s not how dogs walk. If you closely observe a dog — or any other quadruped — while it’s walking slowly, you’ll see that it steps with its left hind leg, followed by its left foreleg, then the right hind leg and the right foreleg. This gives it maximum “static stability”: The animal always has three paws on the ground, so it won’t trip or get easily knocked over. But the dog in the Finland museum is shoving its right front paw forward, followed by its back left paw. What gives?
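Written out as a toy simulation, the lateral-sequence walk looks like this — a deliberately discrete sketch (real gaits overlap their phases continuously), just to show the ordering and the three-feet-planted invariant:

```python
# The lateral-sequence walk, step by step. In this simplified version
# exactly one foot swings at a time, so the other three stay planted --
# the "static stability" described above. (Real gaits overlap their
# phases continuously; this discrete sketch just shows the ordering.)

FOOTFALL_ORDER = ["left hind", "left fore", "right hind", "right fore"]

def walk_cycle(steps=8):
    for i in range(steps):
        swinging = FOOTFALL_ORDER[i % 4]
        planted = [f for f in FOOTFALL_ORDER if f != swinging]
        print(f"step {i + 1}: {swinging} swings; planted: {', '.join(planted)}")

walk_cycle()
```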

It used to be that people argued to the point of fistfights over how quadruped legs moved. Then in the 1870s, the photographer Eadweard Muybridge began settling these fights by pioneering serial freeze-frame photography that revealed how horses and the like walked and galloped, and pretty soon naturalists had this stuff all figured out. But recently a team of biological physicists noticed that they were seeing errors in museum exhibits and taxidermy models, so they decided to see how commonplace these goofs were. They gathered a representative sample of 307 depictions of walking quadrupeds in museum exhibits, taxidermy catalogues, animal-anatomy books and toys. The result?

Museums screwed things up a stunning 41% of the time. Taxidermy catalogues got it wrong 43% of the time, toys 50% of the time, and animal-anatomy books were the worst, with 63.6% errors. As the scientists dryly pointed out in their paper — which you can download free here:

This high error rate in walking illustrations in natural history museums and veterinary anatomy books is particularly unexpected in a time where high-speed cameras and the internet offer ideal possibilities to obtain reliable quantitative information about tetrapod walking.

So why exactly do we get dog-walking wrong so often? Dogs, after all, are kind of all over the place, and thus pretty easy to observe. But the fact is quadruped leg-motion isn’t intuitive: When you close your eyes and visualize it, it makes more sense for the legs to alternate steps left and right, much like the screwed-up skeleton above. What we see in our mind’s eye doesn’t match what we actually see in the world around us — so we ignore the evidence in front of our eyes. It’s kind of like how Aristotle maintained that men had more teeth than women because it made more sense to him, and never bothered to check inside an actual woman’s mouth.

Of course, given that we now actually do empirical science and stuff, it’s still pretty alarming that museums mess up quadrupedal gait almost as often as toy companies do. It’s possible, the scientists suggest, that modern media is partly to blame. When museum researchers and taxidermists do a quadruped exhibit, they probably just refer to existing illustrations and models, even when those are wrong — so the errors simply compound themselves. They don’t think to re-investigate the correct gait of dogs, because they presume this was settled a hundred years ago. It was! But even in science, errors of human psychology can creep back in.

Interesting coda: The scientists found that CGI movies like Jurassic Park buck this trend — they tend to correctly represent quadruped gaits.


Interesting coda, 2: I think I’ve broken my world record for “time between blog posts.”
