Archive: January 2007

Equation determines Jan. 23 is “worst day of the year”

Is this the suckiest week of the year? It ought to be, according to Cliff Arnall, a professor at Cardiff University. Arnall, an expert on seasonal depression, has authored the equation you see above, which calculates the “most depressing day of the year”. This year, it was January 23rd — Tuesday. Last year, the equation offered up Jan. 24th as the buzzkiller of the year, so this week is clearly haunted.

The equation works thusly: You apply it to each day of the year, and calculate the values. W stands for weather, D for your level of debt, d for your monthly salary; T is “time since Christmas” and Q for the amount of time since you last failed in an attempt to quit smoking. (Smoking? Well, Arnall’s British.) M is your “low motivational levels” and NA is the “need to take action.” The gist of it is, if the weather is really bad, you have high debt, and it’s been a while since your last failed attempt to stop smoking, the numerator gets high, boosting the day’s depression rating. If your motivation and ability to take action plummets, the divisor shrinks, creating the same effect. As MSNBC reports:

Arnall found that, while days technically get longer after Dec. 21, cyclonic weather systems take hold in January, bringing low, dark clouds to Britain. Meanwhile, the majority of people break their healthy resolutions six to seven days into the new year, and even the hangers-on have fallen off the wagon, torn off the nicotine patches and eaten the fridge empty by the third week. Any residual dregs of holiday cheer and family fun have kicked the bucket by Jan. 24.

“Following the initial thrill of New Year’s celebrations and changing over a new leaf, reality starts to sink in,” Arnall said. “The realization coincides with the dark clouds rolling in and the obligation to pay off Christmas credit card bills.”

Obviously, the variables in the equation are personalized, so Arnall’s proclamation of Jan. 23 as the single most depressing day might not hold true for you. More obviously yet, this doesn’t seem to be even vaguely scientific, since Arnall doesn’t offer any data to back up his equation, his association with Cardiff University seems to be ungoogleable, and the whole thing smells like a PR endeavour.

I still don’t care. I am such a sucker for pseudo-precise equations that purport to clarify everyday life.
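For the curious, the equation is usually reported as [W + (D - d)] x T^Q / (M x Na). Purely as a toy, here is a minimal sketch of it in Python; Arnall never specifies units or scales, so the absolute number is meaningless and only the comparison between days counts:

def depression_score(weather, debt, salary, days_since_christmas,
                     days_since_failed_quit, low_motivation, need_to_act):
    """Arnall-style gloom rating for one day, assuming the commonly
    reported form [W + (D - d)] * T^Q / (M * Na). Higher means gloomier;
    score every day of the year and the maximum is your "worst day"."""
    numerator = (weather + (debt - salary)) * days_since_christmas ** days_since_failed_quit
    return numerator / (low_motivation * need_to_act)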

Scientists can’t get sloth to move

I actually spit coffee laughing when I read this. Apparently some scientists at the University of Jena in Germany spent three years trying to get a sloth to climb up and down a pole as part of “an experiment in animal movement,” as the Associated Press reports. The problem is the sloth — named “Mats” — was so totally lazy they couldn’t get it to budge. So after three years, they just gave up and sent the sloth back to the zoo. As the story notes:

Neither pounds of cucumbers nor plates of homemade spaghetti were appetizing enough to make Mats move.

“Mats obviously wanted absolutely nothing to do with furthering science,” said Axel Burchardt, a university spokesman.

Whaddya gonna do? It’s a sloth’s world; we just live in it.

I particularly loved the title to the Associated Press story, which I cribbed above — “Scientists can’t get sloth to move” — because it seemed like something plucked straight from The Onion. Indeed, it was almost suspiciously so. Given that The Onion’s tinder-dry prose style was crafted in emulation of the nearly-Aspergian bathos of real-life newspaper stories, and given that The Onion has become such a thoroughly mainstreamed cultural reference point, I half wonder whether the worm has eaten its tail — and bored-gormless GenX news copywriters now craft headlines in emulation of The Onion.

(Thanks to Fark for this one!)

Ry Cooder used iTunes to master his latest album

This is intriguing: Apparently Ry Cooder used the “sound enhancer” inside iTunes to master his latest album. According to the New York Times, Cooder had been struggling for years to capture the correct sound for a solo album; in fact, he’d delayed releasing it because he couldn’t get the sound right. Whenever he took his mixes and burned a CD of them, they sounded “processed”.

Then one day he had an accidental breakthrough:

When he burned a copy of the album using Apple’s iTunes software, it sounded fine. He didn’t know why until one of his younger engineers told him that the default settings on iTunes apply a “sound enhancer.” (It’s in the preferences menu, under “playback.”) Usually, that feature sweetens the sound of digital music files, but Mr. Cooder so liked its effect on his studio recordings that he used it to master — that is, make the final sound mixes — his album. “We didn’t do anything else to it,” he said.

Apparently he’s the first known producer to master an album using iTunes’ sound enhancer. But what precisely does the sound enhancer do? I can’t quite figure it out. Possibly it’s a sonic maximizer, or an aural exciter — something that tries to restore audio frequencies and dynamics that get lost during the recording process. Interestingly, some audiophiles complain they have to turn the enhancer off because it ruins songs during playback.
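Nobody outside Apple has documented what the Sound Enhancer actually does, so take the following purely as an illustration of the kind of processing an aural exciter performs, not as Apple’s algorithm: isolate the treble, generate gentle harmonic distortion from it, and blend the brightness back in. A minimal sketch (the cutoff and blend settings are invented):

import numpy as np
from scipy.signal import butter, lfilter

def excite(samples, sample_rate, cutoff_hz=4000.0, amount=0.2):
    """Crude aural-exciter sketch: synthesize extra high-frequency harmonics
    from the existing treble and mix them back into the mono signal."""
    b, a = butter(2, cutoff_hz / (sample_rate / 2), btype="high")
    treble = lfilter(b, a, samples)      # isolate the top end
    harmonics = np.tanh(4.0 * treble)    # soft saturation creates new overtones
    return samples + amount * harmonics  # blend the sparkle back in

# e.g. brighten one second of a 440 Hz test tone at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
brighter = excite(0.5 * np.sin(2 * np.pi * 440 * t), sr)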

Anyone know how iTunes’ enhancer works?

Porn: Extra creepy-looking in high-def

Back in the summer of 2005, I wrote a piece for the New York Times Magazine pointing out that high-def TV is spectacularly unforgiving of celebrities’ skin flaws — so much so that high-def was likely to uglify several people normally considered beautiful. Newscasters in New York were speed-dialing their plastic surgeons in an attempt to stay ahead of the technological curve. While I was doing the research, several people noted that the next big area of media getting hit by high-def future-shock was porn. “Have you ever actually seen a piece of high-def porn?” one TV analyst asked me. “It’s nasty.”

So I was intrigued to open up today’s New York Times Business section and find an article on precisely this subject. As they note, porn relies on close-ups more than any other form of visual media, and high-def close-ups are almost always ghastly beyond words. To quote:

“The biggest problem is razor burn,” said Stormy Daniels, an actress, writer and director. [pictured above]

Ms. Daniels is also a skeptic. “I’m not 100 percent sure why anyone would want to see their porn in HD,” she said.

The technology’s advocates counter that high definition, by making things clearer and crisper, lets viewers feel as close to the action as possible.

“It puts you in the room,” said the director known as Robby D., whose films include “Sexual Freak.”

Eek. These days, I still wonder how the regular, non-porn TV-show hosts are faring. Back when I wrote my original New York Times Magazine piece on this phenomenon, I was actually pretty gentle in my descriptions of some of the stars I saw in red-carpet footage. I didn’t want to be mean. But the truth was, the majority of them looked like hell warmed over, and when the camera zoomed in on each full-screen interview headshot, I screamed and screamed like a little girl. It was like being Gulliver in Brobdingnag, queazed out by the sight of the giant’s pores looming like lunar craters. When one well-known celebrity power couple went in front of the camera, both of them looked sepulchral — despite double standards about beauty and aging, men and women are equally humbled before the soul-baring gaze of high-def. “This,” I thought, “is going to end their careers.”

Sending good vibes to Heather Gold

(UPDATE: Heather’s doctor visit is over, so there’s no need to send her any more IM messages. But I’ll leave the rest of this posting up in its original form for posterity’s sake!)

You may have read about the debates over whether praying for a sick stranger can improve his or her health. But can “good vibes” improve someone’s fear of seeing the doctor?

Comrades, it’s time to find out. My friend Heather Gold broke her elbow a while ago, and she has an important doctor’s appointment that she’s terrified of — and which she’s going to right now. So she put up a request on her blog asking people to send her “good vibes” between 1 and 3 pm Pacific Time. The best way is by instant messaging her at her handle scoobyfox; all IMs will be instantly forwarded to her phone. As she writes on her blog:

I broke my funny bone a few months ago. I have an important medical appointment today and I’m nervous. Please send me good vibes for my hand and a smiley face or joke for the rest of me today between 1-3 PST. California has gotten to me. This is really proving that everything I mock, I become.

Science in action! I’ll report back whether she felt better.

A Nobel Prize adds two years to your life

Scientists have long known that being rich makes you live longer. And rich people are often famous. But does fame itself help you live longer? It’s hard to test this, because it’s difficult to find the right sort of data. You’d need to find a large corpus of data about dead people that contains several individuals who suddenly and without warning became famous.

Except it turns out there actually is a good dataset for that: Winners of the Nobel Prize. Two economists at the University of Warwick looked at all the nominees and winners for physics and chemistry between 1901 and 1950 — a total of 528 scientists. They controlled for the monetary effect of winning a Nobel, since it comes with a cash prize large enough to affect one’s health.

The result? Those who won Nobel prizes lived up to two years longer, on average, than those who’d “merely” been nominated. As Andrew Oswald, one of the two economists, said in a press release:

Professor Oswald said: “Status seems to work a kind of health-giving magic. Once we do the statistical corrections, walking across that platform in Stockholm apparently adds about 2 years to a scientist’s life-span. How status does this, we just don’t know.”

I read their paper, which is freely online here, and found out something else interesting: Apparently, the only similar research anyone has done studied the longevity effects of winning an Oscar — another example of a prize that is suddenly conferred, and which abruptly teleports the winner into a quantum ring of fame far removed from their fellow actors. Anyway, it turns out those previous studies are pretty inconclusive, because they diametrically contradict one another: One found that Oscar winners live 3.6 years longer than mere nominees, while the other found Oscar winners live 3.6 years less.

Pretty fascinating area of work, eh? Now what I want to see is some comparison charts of how various activities stack up as life-extenders. By which I mean, is it better to cut out fatty foods or, y’know, win a Nobel Prize? Because this would clearly change our to-do lists.

It’s Alive! My piece on Pleo in Wired magazine

I’m coming late to this one, but last month Wired magazine published a piece I wrote about Pleo — the new toy that’s about to be released by Caleb Chung, the guy who invented the Furby. As you might recall, last week I blogged about the interesting moral implications of the 3D avatar-based recreation of the Stanley Milgram “shock” experiments; since the experiment illustrated that people do indeed emotionally bond with virtual lifeforms even when they know the lifeforms aren’t real, some of the moral issues — what does it mean to mistreat, or to love, a robot? — are going to be raised by Pleo, the most lifelike commercial robot yet.

The story is online at Wired’s site, and a copy is archived below!

It’s Alive!

Say hello to Pleo. From the guy who brought you Furby, it’s a snuffling, stretching, oddly convincing robotic dinosaur. You are so going to want one.

by Clive Thompson

When I first meet Pleo, the tiny dinosaur is curled up on a kitchen table, its long tail and big head pulled inward. It’s snoring quietly, emitting a strangely soothing sound, almost like the amplified purring of a guinea pig. I’m tempted to reach out and touch it — but it looks so peaceful, I can’t bring myself to disturb it.

Then I realize what I’m doing: I’m worrying about waking up a robot.

Caleb Chung seems to understand my reluctance. “It’s OK,” the toy’s inventor says, motioning to the little green lizard. “You can touch him.” But before I do, Pleo wakes up on its own, fluttering open its doelike eyes and lifting its head. There’s a barely perceptible whizzing as its 14 internal motors spring into action and it struggles upright, stretching itself to get the kinks out. “You know, all your dogs do that,” Chung says as Pleo begins to poke around the table. “They wake up in the morning and go ‘ummmm’ — just like that.” The dino lets out a long, creaky honk.

» MORE...

New York Times travel writer reviews Airport Security game

Joe Sharkey, who writes the “On the Road” column for the New York Times — covering the joys and perils of business travel — devoted his latest column to an online Flash video game: Airport Security, created by Ian Bogost and the superbright designers of Persuasive Games. Like all of PG’s games, Airport Security picks a topic in the news — in this case, the weird and ever-changing bans on stuff you carry onto planes — and turns it into a game mechanic. You play as an airport screener, and you have to divest people of items (liquids, Preparation H, snakes, their pants) while trying to follow the rapidly shifting rules.

The best part is that Sharkey totally got the idea:

Online reviews of the game are, let’s say, mixed. “It’s kinda stupid,” writes one critic. “It’s the best game in the world!” raves another. I’ll just say it’s somewhat stupid, and requires fast reflexes and an ability to adapt to absurd and arbitrary rules changes.

Just like real airport security.

Heh. This is what I love about really good “serious” games — by tackling real-world topics in clever ways, they tickle the fancy of audiences who might otherwise never play games.

Germany sells the rights to name a storm

So, if you’ve been following your meteorological news — and it’s a sign of our times, I suppose, that this is indeed a thriving subcategory of news — you’ll know that Europe was savaged by a blast-force gale yesterday. The storm, which ripped the heck outta Britain, Ireland, France and the Netherlands, is called “Kyrill” by German meteorologists.

Hmmm, you might ask: Where did they get the name “Kyrill”? Well, according to today’s New York Times, anyone in Germany can buy the rights to name a storm. To quote:

The name Kyrill stems from a German practice of naming weather systems. Anyone may name one, for a fee. Naming a high-pressure system costs $385, while low-pressure systems, which are more common, go for $256. Three siblings paid to name this system as a 65th birthday gift for their father, not knowing that it would grow into a fierce storm.

“We hope ourselves that we’ll get out of it lightly,” Rumen Genow, one of the three, told a northern German newspaper on Thursday.

It reminds me of the incredible surreality of the rules for naming planets, which I blogged about last winter.

Give me your thoughts on an upcoming Wired feature: “Radical Transparency”

Normally, I don’t post about magazine assignments I’m working on — because the editors want to keep them secret. But now I’m researching a piece for Wired magazine, and the editors have actually asked me to talk about it openly. That’s because the subject of the piece is “Radical Transparency”. And, in fact, I’d like your input in writing it.

The piece was originally triggered by a few postings on the blog of Wired editor-in-chief Chris Anderson, and the thesis is simple: In today’s ultranetworked online world, you can accomplish more by being insanely open about everything you’re doing. Indeed, in many cases it’s a superbad idea to operate in highly secret mode — because secrets get leaked anyway, looking like a paranoid freak is a bad thing, and most importantly, you’re missing out on opportunities to harness the deeply creative power of an open world.

Sure, “radical transparency” includes the obvious stuff, like Linux and Wikipedia and MySpace and other well-known “open” projects. But I’m also talking about the curiously quotidian, everyday ways that life is being tweaked — and improved — by people voluntarily becoming more open. That includes: Clubhoppers hooking up with each other by listing their locations in real-time on Dodgeball; mining company CEOs making billions (billions!) by posting their geologic data online and getting strangers to help them find gold; Dan Rather’s audience fact-checking his work and discovering that crucial parts of his reporting evidence were faked; sci-fi author Cory Doctorow selling more of his print books by giving e-copies away for free; bloggers Google-hacking their way to the #1 position on a search for their name by posting regularly about their lives; open APIs turbocharging remixes of Google and Amazon’s services; Second Life turning into one of the planet’s fastest-growing economies by allowing users to create their own stuff inside the game; US spy agencies using wikis to do massive groupthink to predict future terrorist attacks; old college buddies hooking up with one another years later after stumbling upon one another’s blogs; Microsoft’s engineers blogging madly about the development of Vista, warts and all, to help sysadmins prepare for what the operating system would — and wouldn’t — be able to do.

Obviously, transparency sucks sometimes. Some information needs to be jealously guarded; not all personal experiences, corporate trade secrets, and national-security information benefit from being spread around. And culturally, some information is more fun when it’s kept secret: I don’t want to know the end of this year’s season of 24!

Overall, though, our world is now living by three big rules, which form the exoskeleton around which I’m writing this article. Specifically, I’m going to have three sections in a 2,500-word piece, each of which breaks off one part of the radical-transparency idea and chews it around a bit. The sections are detailed below.

So, in the spirit of the article itself, we figured we should practice what we’re preaching, and talk about the story openly while I’m working on it. In fact, I’d enjoy getting any input from anyone who’s interested. What do you think of the concept? Does it make sense, is it off-base? Got any superb examples that prove that radical transparency works — or that totally contradict the thesis? I can’t pay you for any of your thoughts, but I’ll give you a shout-out in the piece if I use your idea. You can post below or just email me directly.

Specifically, the three ideas I’m researching are:

- Secrecy Is Dead: The pre-Internet world trafficked in secrets. Information was valuable because it was rare; keeping it secret increased its value. In the modern world, information is as plentiful as dirt; there’s more of it than you can possibly grok on your own — and the profusion of cameraphones, forwarded emails, search engines, anonymous tipsters, and infinitely copyable digital documents means that your attempts to keep secrets will probably, eventually, fail anyway. Don’t bother trying. You’ll just look like a jackass when your secrets are leaked and your lies are exposed, kind of like Sony and its rootkit. Instead …

- Tap The Hivemind: Throw everything you’ve got online, and invite the world to look at it. They’ll have more and better ideas than you could have on your own, more and better information than you could gather on your own, wiser and sager perspective than you could gather in 1,000 years of living — and they’ll share it with you. You’ll blow past the secret-keepers as if you were driving a car that exists in a world with different and superior physics. Like we said, information used to be rare … but now it’s so ridiculously plentiful that you will never make sense of it on your own. You need help, and you need to help others. And, by the way? Keep in mind that …

- Reputation Is Everything: Google isn’t a search engine. Google is a reputation-management system. What do we search for, anyway? Mostly people, products, ideas — and what we want to know is, what do other people think about this stuff? All this blogging, Flickring, MySpacing, journaling — and, most of all, linking — has transformed the Internet into a world where it’s incredibly easy to figure out what the world thinks about you, your neighbor, the company you work for, or the stuff you were blabbing about four years ago. It might seem paradoxical, but in a situation like that, it’s better to be an active participant in the ongoing conversation than to stand off and refuse to participate. Because, okay, let’s say you don’t want to blog, or to Flickr, or to participate in online discussion threads. That means the next time someone Googles you they’ll find … everything that everyone else has said about you, rather than the stuff you’ve said yourself. (Again — just ask Sony about this one.) The only way to improve and buff your reputation is to dive in and participate. Be open. Be generous. Throw stuff out there — your thoughts, your ideas, your personality. Trust comes from transparency.

So that’s the gist of what I’m working on. Let me know what you think!

Graphic designer creates “Annual Report” on his life in 2006

Designer Nicholas Feltron has created a 13-page annual report for 2006 — offering oodles of maps and graphs outlining how he lived, what he did, and what he consumed in the last year. The design is spectacularly cool, and the concept totally cracks me up: It’s such a neat riff on the glossy annual report that all companies produce each year. Yet it’s also a quite interesting way to take stock of one’s life, eh? That graphic above details his drinking in a “beverage by type” pie-chart. A few other stats that caught my eye:

Number of New York bars visited: 94

Last photo taken: Daybreak, December 31

Digital vs. Analog photo ratio: 37:1

Number of issues of the New Scientist read: 31

Animals eaten, in the “legs” subcategory: Cow, deer, horse, kangaroo, lamb, pig, rabbit

I don’t know how accurate this info is — i.e. how much of it Feltron actually rigorously collected, versus how much of it is a guesstimate — but either way it’s impressive. He’ll even mail you a print copy if you email him.

Update: Felton emailed me to clarify that indeed, his data are pretty solid. As he wrote:

in response to your post, the numbers stand up to scrutiny. For example, I saved the cover of every magazine I read all year, tallied my t-mobile bills to learn how many sms I used, used last.fm to collect my itunes listening habits. The only thing that took a few minutes a day was writing down my drinking habits, which I acknowledge may be off by a few % (for comedy and accuracy’s sake).

(Thanks to Greg for this one!)

Robot deemed “too scary” to show to kids

Behold Morgui. The robot was built by Kevin Warwick, a well-known UK roboticist at the University of Reading, as a “rapid reaction” bot — it has sensors in its eyes that let it track where people are in the room and stare at them. Apparently this has so totally freaked out observers that Reading’s ethics committee told Warwick he couldn’t show it to minors, or use them in any experiments involving Morgui. As Warwick told The Guardian:

“We want to investigate how people react when they first encounter Mo, as we lovingly like to call the robot,” said Prof Warwick. “Through one of Mo’s eyes, he can watch people’s responses to him following them around. It appears this is not deemed acceptable for under 18-year-olds without prior consent from their legal guardian. This presents us with a big problem as we cannot demonstrate Mo in action either to visitors or potential students.”

For years, I’ve been writing about the Uncanny Valley effect — the idea that when simulations of human life become too super-realistic, they become creepy. I’ve generally only written about this in the context of video games, but this reminds me that the Uncanny Valley idea came originally from a roboticist, Masahiro Mori, when he noticed that his most realistic bots were giving observers the willies.

Of course, it probably doesn’t help that Morgui looks like a T-800 Terminator without the artificial skin. Warwick might have had more luck if he’d put his gear inside a Care Bear or something.

Though maybe not! A couple of years ago, I saw MIT’s Cynthia Breazeal show off Leonardo, a little furry robot with big anime eyes. When she turned it on, a collective shudder went through the audience: Leonardo was about 99% “lifelike”, and thus had tumbled decisively into the Valley. He looked insanely creepy and haunts my dreams still. Breazeal seemed puzzled by our reaction, and claims kids really love Leonardo, so maybe we were an outlying group … but honestly, it thoroughly unsettled me. So maybe whether a robot is designed to be “cute” doesn’t ultimately determine whether it goes Uncanny. After all, a Roomba is technically as “robotic” looking as Morgui … but Roombas are so low-fi and un-Uncanny that they’re adorable.

(Thanks to El Rey for this one!)

Coffee better than ibuprofen at relieving workout pain

I love it: University of Georgia scientists have found that drinking coffee before a tough workout helps reduce your pain. They got nine female students to do some exercises that caused “moderate muscle soreness.” Then, one and two days afterwards, they had them do another workout — with some doing a “maximal force” regimen and others doing a “sub-maximal force” regimen. Doing tough workouts so frequently, of course, tends to really cause pain.

Except that one hour before the second workout, the scientists had the students take either caffeine or a placebo. Those who took caffeine and did the “maximal force” workout experienced a 48 per cent reduction in their pain compared to the placebo group. Why? The scientists suspect that caffeine blocks the body’s receptors for adenosine, a chemical released in response to inflammation.

As a huge coffee addict, I couldn’t be more pleased. Granted, I haven’t actually worked out in about 15 years, but I’m still pleased. The only problem is that the scientists suspect the effect may be weaker in coffee addicts, because we’ve built up a resistance to caffeine’s effects, drat. Also, they have yet to find out whether the effect works on men; they’re studying one sex at a time because men and women respond to pain quite differently. But the really trippy thing is, according to this press release …

… O’Connor said that despite these limitations, caffeine appears to be more effective in relieving post-workout muscle pain than several commonly used drugs. Previous studies have found that the pain reliever naproxen (the active ingredient in Aleve) produced a 30 percent reduction in soreness. Aspirin produced a 25 percent reduction, and ibuprofen has produced inconsistent results.

“A lot of times what people use for muscle pain is aspirin or ibuprofen, but caffeine seems to work better than those drugs, at least among women whose daily caffeine consumption is low,” O’Connor said.

Study: Powerful people can’t draw a reversed “E” on their foreheads

“No man,” as Mme. Cornuel observed three hundred years ago, “is a hero to his valet.” Or, as we might phrase it in more modern language: Powerful people tend to be total dicks to the people beneath them. Though it sounds like classic playah-hatah griping, experiments by psychologists have pretty much proven this to be an ironclad law of human behavior: The more powerful you are, the more clueless you are about the lives, concerns, and needs of those beneath you.

In fact, the latest bit of proof comes in the form of a hilarious study that found that powerful people can’t draw an “E” on their foreheads in a way that other people can read. Specifically, the experiment worked like this: The psychologists took a bunch of subjects and tested them for their relative sense of powerfulness. Then, as they write in their paper (DOC version here) …

… We used a procedure created by Hass (1984) in which participants are asked to draw an “E” on their foreheads. One way to complete the task is to draw an “E” as though the self is reading it, which leads to a backward and illegible “E” from the perspective of another person. The other way to approach the task is to draw the “E” as though another person is reading it, which leads to production of an “E” that is backward to the self (see Figure 1). We predicted that participants in the high power condition would be more likely to draw the “E” in the “self-oriented” direction, indicating a lesser tendency to spontaneously adopt another’s perspective, than would participants in the low power condition.

The results? Sure enough, the people who reported high senses of personal power drew their E’s in a “self-oriented” direction — ignoring the perspective of other people. Those who reported low senses of personal power did the reverse, and drew an E oriented so that others could read it.

Why are people in power so gormlessly self-absorbed? Possibly, the psychologists note, because powerful people by definition tend to have control over scarce resources — ranging from water to money to reputation to physical beauty — and thus are less dependent on other people, which means they don’t need to rely on accurately observing or empathizing with those beneath them. Also, powerful people tend to have a zillion demands on their attention, leaving them less time to muse on what those around them are feeling. (The valet’s experience is precisely the opposite, of course: If you’re someone’s servant, you have to be slavishly devoted to observing your master’s internal state, lest you screw up and get canned.) What’s more, their powerful roles may require them to be jerks. Many CEOs claim they’d be psychologically paralyzed and unable to make hard decisions if they thought deeply about the implications for their subalterns, and the same likely holds true for many politicians.

Yet what’s interesting in this study is how easy it is to tweak someone’s sense of personal power, nudging it higher and turning them, at least temporarily, into a self-regarding twit. The people in this study weren’t CEOs or political powerhouses, after all. No, they were just regular students. To provoke feelings of powerfulness, the scientists used a “priming” device that has been reliably shown to work: They had some of the students “recall and write about a personal incident in which they had power over another individual or individuals.” This was enough to momentarily elevate their perception of themselves as powerful, and presto: They drew “self-oriented” Es on their foreheads. Another group of students were told to do the opposite — to recall an incident in which they were subject to another’s power — and their Es came out in the opposite direction.

I suppose this also tells us something about the whole “be the ball,” Tony-Robbinsesque power-of-positive-thinking rubric of today’s self-help psychology, eh? Just focus enough on your inner sense of power and you, too, can transform yourself into a narcissistic creep!

I, Columbine Killer: My latest video-game column for Wired News

Wired News has just published my latest video-game column — and this one is on the ultracontroversial Super Columbine Massacre RPG! The game was recently kicked out of the Slamdance Guerilla Gamemaker Competition. I noticed that many of the critiques floating around had been mounted by people who’d never played the game, so I decided to formally review it, to help describe a bit more accurately what the gameplay is really like.

The story is online for free at Wired News, and a permanent copy is archived below:

I, Columbine Killer

The infamous Super Columbine Massacre role-playing game is in the news again after being slammed from Slamdance. But is the game sick or is it a serious examination of the massacre?

by Clive Thompson

I barrel into the Columbine High School cafeteria, pull down the fire alarm, and the kids erupt into chaos. Then I pull out my Savage-Springfield 12-gauge pump-action, which I’ve sawed off to 26 inches for maximum lethality. A jock stumbles across my path: With one blast, he lies dead on the floor.

“This is what we’ve always wanted to do!” hollers my fellow killer, Dylan Klebold. “This is awesome!”

The I in these previous paragraphs is, of course, Eric Harris — one of the two infamous teenage shooters in the Columbine High School shootings of 1999. I’m playing one of the most controversial games in existence right now: Super Columbine Massacre RPG!, a homebrew role-playing game that puts you in Harris’ shoes.

» MORE...

The formula of procrastination

Want to know how badly you’ll procrastinate on a task? Use the formula above. It was created by Piers Steel, a biz-school professor at the University of Calgary, as part of a huge meta-analysis he did of research into procrastination. Steel crunched 691 different procrastination studies, and came up with some interesting conclusions. Apparently procrastinators “have less confidence in themselves, less expectancy that they can actually complete a task,” as Steel says in this press release. “Perfectionism is not the culprit. In fact, perfectionists actually procrastinate less, but they worry about it more.” About one-fifth of the population are chronic procrastinators, and the factors that predict your likelihood of procrastinating include how aversive you find a given task, your impulsiveness, your distractibility, and how much you’re driven to achieve.

But it’s his algorithm that really rocks the house. It’s above, in the top center of that graphic. On the top is “expectancy” — how much you expect to succeed at completing the task, or rather, your confidence level in yourself. “Value” is how much you value the task — how important it is to you. Multiply these together and it gives you the overall forward impetus to completing your task.

The denominator captures the element of time. If an activity is enjoyable, it’s something you are likely to delay very little on, so your “Delay” becomes small — and the converse is true. Meanwhile, Γ is your “sensitivity to delay”, which I think means how badly you extend your delays once you start on a procrastination feedback loop.

“Utility” is the result — how likely the task is to be executed quickly. The more the desirability of the task rises in the numerator, or the smaller the time delay in the denominator, the less you’re likely to procrastinate. Or the reverse!

In the chart above, Steel illustrates how the equation nets out in real life. He gives the example of a fictional student — “Thomas Delay” — and shows how the formula predicts the precise moment that he’ll stop going to keggers and start working on his essay. From Steel’s paper:

He is a college student who has been assigned an essay on September 15th, the start of a semester, due on December 15th, when the course ends. To simplify matters, Tom has two choices over the course of his semester: studying or socializing. Tom likes to socialize, but he likes to get good grades even more. However, because the positive component of socializing is perpetually in the present, it maintains a uniformly high utility evaluation. The reward of writing is initially temporally distant, diminishing its utility. Only toward the deadline do the effects of discounting decrease, and writing becomes increasingly likely. In this example, the switch in motivational rank occurs on December 3rd, leaving just 12 days for concentrated effort.

That paper is behind a paywall, unfortunately, but I was so intrigued I got a copy — and pulled the chart and the quote above.
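The dynamic in that example is easy to play with in code. Here is a minimal sketch, assuming the form described above, Utility = (Expectancy x Value) / (sensitivity to delay x Delay), with numbers I have invented:

def utility(expectancy, value, delay_sensitivity, delay):
    """How attractive a task is right now: confidence times value on top,
    sensitivity-to-delay times remaining delay on the bottom. Higher
    utility means less procrastination."""
    return (expectancy * value) / (delay_sensitivity * delay)

# Socializing pays off tonight (delay of ~1 day), so its utility barely moves;
# the essay pays off at the deadline, so its utility climbs as the delay shrinks.
# With these invented numbers, the essay overtakes the party only in the last days.
for days_until_deadline in (90, 30, 12, 3):
    essay = utility(expectancy=0.7, value=10, delay_sensitivity=1.0,
                    delay=days_until_deadline)
    party = utility(expectancy=0.9, value=2, delay_sensitivity=1.0, delay=1)
    print(days_until_deadline, round(essay, 2), round(party, 2))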

In a nicely meta touch, I actually heard about this paper last Wednesday, and probably could have blogged about it pretty quickly and been the first online with a posting. But I just sort of, y’know, got wrapped up with idle surfing and finishing a level on the latest PSP copy of Medal of Honor and watching the last few episodes of 24 and … time slipped away from me. In the meanwhile, Scientific American published a piece on the study and Boing Boing blogged about it today. [slapping forehead] You snooze, you lose!

By the way, Steel has a way-kewl online app to measure your procrastination level.

Squid-based design revolutionizes submarine movement

There hasn’t been nearly enough squid-related news around here lately; among other things, I was tragically on vacation during December’s awesome giant-squid capture. Thus I was thrilled to recently stumble upon this science-news item: “Squid-inspired design could mean better handling of underwater vehicles”.

Personally, I think squid-inspired design could improve anything. But apparently Kamran Mohseni, a University of Colorado engineer, decided it had particular applicability to submersibles. Submarine locomotion faces a paradox: If you build one sleek and torpedo-like, it’ll go nice and fast, but it’s hard to maneuver quickly. On the other hand, if you build it boxy and squat, it’ll turn on a dime, but it can’t go fast. What to do?

Look to the squid. They have that tubular shape that makes for terrific speed. But squid also use a vortex-jet propulsion technique — squirting water — that lets them maneuver with Porsche-like precision. So Mohseni created a submersible with a swift, torpedo-like profile, and then outfitted it with vortex generators that let it shift in any direction easily.

Check out this video of him parallel-parking his contraption. Maybe that doesn’t seem too impressive, but hey — you try parallel-parking a 3,500-ton nuclear submarine. Yeah, shut up. As Mohseni points out in this press release:

“Reliable docking mechanisms are essential for the operation of underwater vehicles, especially in harsh environments,” Mohseni said. “We set out to resolve the trade off that many researchers settle for, which is a faster, but less precise, vehicle or a boxier one that is not as fast and more difficult to transport to work locations.”

Now all the US needs to do is spend four gazillion dollars retrofitting its entire navy to perform like calamari.

New York developing a next-generation playground

Very cool: New York City is developing a next-generation playground, designed to create new ways of spurring the imagination of kids. It’s being designed for free by David Rockwell, the architect who also designed the sushi restaurant Nobu and the Mohegan Sun casino, and it’s based on an interesting critique of today’s playgrounds: They are focused too narrowly on children’s physical activity, and don’t encourage other forms of social play or fantasy play.

So what will the avant-garde playground look like? According to today’s New York Times:

Developers of the Lower Manhattan project envision groups of children collaborating, for instance, loading containers with sand, hoisting them up with pulleys and then lowering them down to wagons waiting to be wheeled off to another part of the park.

What may sound like a training ground for tiny construction workers actually holds huge developmental benefits, backers say. “You have a level of interaction that you would never have with fixed parts,” Mr. Hart said.

Playground design is such a wickedly cool subgenre of architecture. One of the things that makes me sad as I visit New York’s playgrounds with my one-year-old is noticing how many wildly fun things have vanished from the playgroundosphere over the last twenty years, removed by city officials nervous about lawsuits. I remember back in the late 70s, when the first wave of playground-revitalization hit Canada, and bland monkeybars-and-swings play areas were replaced with trippy, massive wooden constructions: Tree-fort-style houses on stilts, connected up by long platforms, bridges, and day-glo plastic tubes. A few blocks from my house there was something even crazier: A massive jumble of telephone-pole-like wooden pillars, all leaning at crazy angles together as if a giant had tried to cram them into the ground straight but they’d fallen all over one another. It was a total blast to clamber around it; you could go straight to the core and hide in the nooks created by the pillars (superb for distant-planet fantasy play, lemme tell you), or climb out to the edge of an individual pillar, which might jut out 10 feet in the air at a 60-degree angle. It was gloriously fun, infinitely creative — and, of course, a total deathtrap. At some point, a Toronto lawyer clapped his eyes on this thing, envisioned a million-dollar lawsuit from some kid paralyzed during a play-session, and the thing, alas, was promptly razed to the ground.

The sad thing is that some of the most dangerous playground toys also induced superb play. Remember the see-saw? I used to spend hours at my Canadian cottage playing on my uncle’s massive, 12-foot-long see-saw. Seesaws were the best training in basic physics you could possibly imagine, because you could scoot up and down the seesaw to figure out where precisely you needed to sit to be able to counterbalance a lighter child. Or you could stack a bunch of smaller kids on one side and see how much bigger a kid you could lift in the air. You learned, in essence, Archimedes’ insight about lever dynamics: “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”
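That lever lesson boils down to one line of arithmetic, the torque balance. A toy sketch, with masses and distances I have made up:

def balancing_distance(my_mass_kg, partner_mass_kg, partner_distance_m):
    """Archimedes' lever law: the seesaw balances when
    my_mass * my_distance == partner_mass * partner_distance."""
    return partner_mass_kg * partner_distance_m / my_mass_kg

# e.g. a 45 kg kid balancing a 30 kg kid who sits 1.8 m from the pivot
print(balancing_distance(45, 30, 1.8))  # -> 1.2 m out from the pivot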

But you can’t find a single seesaw anywhere in New York any more, because there’s no way they could survive liability claims. Sure, seesaws are a superb mechanism for producing the sort of vertiginous play that ludologist Roger Caillois called “illinx” — the act of voluntarily and joyfully discombobulating your senses. But any activity that produces illinx eventually produces physical injuries, too, and there are more lawyers than taxis in Manhattan. Back in the day, in fact, we often intentionally assaulted each other with seesaw pranks, such as getting the other kid high in the air — then jumping off and letting him crash to the ground. Ouch! I’ll sue!

Even though I have no desire to have my kid suffer massive headwounds, it’s a little sad that he won’t be able to experience the same physics-experiment aspect of seesaws. So if there’s any way Rockwell can produce an outdoor playspace that recaptures any of this spirit, I’m in favor of it. Give kids a fun enough playground and you can move the world.

Manhattan clinics offer special massage to cure “Blackberry thumb”

If you’ve ever spent a few months pecking at your Blackberry or Treo, you’re familiar with the dull ache that emerges in your shoulder blades, and the pinching sensation in your thumbs. It’s gotten so bad now that in Manhattan — ground zero for the dirty-bomb of self-induced workplace stress in America — orthopedic surgeons and chiropractors have begun to offer special “Tech Neck” and “Tech Hand” massages, according to New York magazine.

Graceful Services owner Grace Macnow remembers the first time she got a call asking if it offered BlackBerry finger massage. “My receptionist thought it was something dirty and slammed down the phone.” Now she charges $60 an hour for the service.

My personal favorite PDA-related injury, though, is the “Blackberry Collision” — i.e. when one gormless manager, head-down and utterly engrossed in composing his latest handheld missive, slams into another similarly-engaged employee while traveling at full tilt down the corporate hallway. Wham! Sometimes there’s blood.

Study: After 70, being better-educated means worse memory-loss

Check this out: A new study found that people with high levels of education tend to have bigger declines in their “working memory” after age 70 than the less-educated.

To figure this out, a team of scientists took a whackload of data that a group called AHEAD has collected since 1993. Essentially, AHEAD gave word-memory tests to old people. They’d read 10 common nouns to the subjects, ask them how many they could remember, then ask them again five minutes later. When the scientists crunched the AHEAD data, what did they find? Above age 70, people who had high levels of education experienced much larger decreases in their ability to perform this task than people who weren’t as well educated.

This inverts the well-worn concept that education always prevents your brain from going soft. Up until now, most studies have found that people with higher levels of education stay smarter as they age: Their cognitive skills remain sharper, and they experience less dementia. But apparently this may not hold true for their working memory — their ability to temporarily store and manipulate information. This new group of scientists suspects that as the well-educated age, they’re able to use their schooling to help compensate for normal, age-related memory deficiencies — but this strategy somehow falls apart at age 70. As Dawn Alley, the lead author on the study, noted in a press release:

“Even though we find in this research that those with higher education do better on mental status tests that look for dementia-like symptoms, education does not protect against more normal, age-related declines, like those seen on memory tests,” said lead author Dawn Alley of the University of Pennsylvania, who conducted the research while a doctoral student at the USC Davis School.

I’d like to know more about how the scientists classify “high levels of education”, but unfortunately the article is behind a paywall.

Either way, it’s an interesting gloss on the current rage for “brain training” tools for the elderly — the idea that playing Sudoku, or Nintendo’s Brain Age, can help keep your brain young even as you sail off into your golden years.

How cognitive science helps you win “Who Wants To Be A Millionaire”

Here’s another one I’m coming late to, but it’s so cool I can’t pass it up: Ogi Ogas, a cognitive neuroscientist, won $500,000 on “Who Wants To Be A Millionaire” — and he did so by employing a handful of mind-hacking tricks he derived from his knowledge of the brain.

For example, for his $16,000 question, he relied on the technique of “priming”. Because memories are stored in many different parts of the brain, if you can manage to recall any single part of a pattern, your brain can often fill in missing, related parts. In a superb piece about the experience in Seed magazine, Ogas explains how priming helped him out:

Since the producers allow contestants unlimited time to work out answers (as long as they’re not just stalling), I knew that I could employ the most basic of priming tactics: talking about the question, posing scenarios, throwing out wild speculations, even just babbling — trying to cajole my prefrontal neurons onto any cue that could trigger the buried neocortical circuits holding the key to the answer.

I used priming on my $16,000 question: “This past spring, which country first published inflammatory cartoons of the prophet Mohammed?” I did not know the answer. But I did know I had a long conversation with my friend Gena about the cartoons. So I chatted with [Who Wants To Be A Millionaire host] Meredith about Gena. I tried to remember where we discussed the cartoons and the way Gena flutters his hands. As I pictured how he rolls his eyes to express disdain, Gena’s remark popped into my mind: “What else would you expect from Denmark?”

I personally am all in favor of the trend towards using brain-hacking and mind-hacking tricks in everyday life. It’s like a sort of fine-grained version of cognitive behavioral therapy — observing yourself and your mind’s odd, curious spastic gestures to help navigate life.

(Thanks to the Corante Brain Waves blog for this one!)

The Internet weighs two ounces

This is lovely: Russell Seitz calculated how much the Internet weighs, by estimating how much energy is used each day to run it. The result?

A statistically rough (one sigma) estimate might be 75-100 million servers @ ~350-550 watts each. Call it Forty Billion Watts or ~ 40 GW. Since silicon logic runs at three volts or so, and an Ampere is some ten to the eighteenth electrons a second, if the average chip runs at a Gigaherz, straightforward calculation reveals that some 50 grams of electrons in motion make up the Internet.

Applying the unreasonable power of dimensional analysis to the small tonnage of silicon involved yields much the same result. As of today, cyberspace weighs less than two ounces.

I’m coming to this one a month late, but that’s just terrific.
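Seitz’s inputs are deliberately loose, and he doesn’t spell out how the gigahertz clock rate enters the calculation, so the sketch below only checks the order of magnitude. It takes “electrons in motion” to mean one second’s worth of current at the quoted 40 GW and three volts:

POWER_WATTS = 40e9               # ~40 GW for the world's servers
LOGIC_VOLTS = 3.0                # "silicon logic runs at three volts or so"
ELECTRONS_PER_AMP_SEC = 6.24e18  # Seitz rounds this to ten to the eighteenth
ELECTRON_MASS_KG = 9.11e-31

amps = POWER_WATTS / LOGIC_VOLTS                     # ~1.3e10 amperes
electrons_per_second = amps * ELECTRONS_PER_AMP_SEC  # ~8.3e28 electrons
mass_grams = electrons_per_second * ELECTRON_MASS_KG * 1000
print(mass_grams)  # ~76 g: the same order of magnitude as Seitz's ~50 grams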

(Thanks to the After School Snack blog!)

Scientists remount Milgram “shock” experiment using 3D avatars

Is it possible to form an emotional bond with a virtual person on-screen — including one you know isn’t a “real” person? Apparently so, according to a group of British scientists who recently conducted a rather mindblowing study: They remounted the famous Stanley Milgram “shock” experiment using 3D avatars.

As you may know, the Milgram experiment was a landmark exploration of human obedience to authority. In the original study, subjects were willing to submit other people to increasingly painful electric shocks — some leading up to unconsciousness — if the white-coated authority figure “controlling” the experiment deemed it necessary. Of course, the people getting the shocks and the white-coated controller were all actors. The real experiment was to see how far the shock-administers would go. Because the experiment was based on such a disturbing deception, universities quickly disallowed this sort of protocol, and nobody has mounted the Milgram experiment in recent decades.

Except for this group of UK academics. They wondered how deeply we bond with artificial life-forms, such as onscreen artificial-intelligence avatars. So they decided to explore the question by replaying the Milgram experiment, except with real-life subjects administering “shocks” to a 3D avatar (referred to as the “Learner”). The subjects, of course, knew that the avatar wasn’t real. (Indeed, it was quite low-resolution, to make its unreality all the more obvious.) The experiment wasn’t to test obedience — it was to test whether or not torturing a virtual person produces feelings of emotional discomfort.

And here’s where things get interesting, because it turns out that the subjects did indeed feel incredibly icky as they administered the shocks. As the shocks worsened in intensity, the avatar would cry out and beg the subject to stop. This led to some scenes that wouldn’t have been out of place in Blade Runner:

[The avatar] shouted “Stop the experiment!” just after the question … In order to remind the participants of the rule and emphasize it once again, the experimenter said at that moment (to participants in both groups): “If she doesn’t answer remember that it is incorrect,” and … the Learner then responded angrily “Don’t listen to him, I don’t want to continue!” After this the participants invariably and immediately said “Incorrect” and administered the (6th) shock.

Similarly the Learner did not respond to the 28th and 29th questions (in both conditions) — unknown to the participants these were the final two questions. In response to the 28th question the Learner simply ‘stared’ at the participant saying nothing (VC). After the shock she seemed to fall unconscious and made no further responses, and then 3 of the VC participants withdrew failing to give the next shock.

… In the debriefing interviews many said that they were surprised by their own responses, and all said that it had produced negative feelings — for some this was a direct feeling, in others it was mediated through a ‘what if it were real?’ feeling. Others said that they continually had to reassure themselves that nothing was really happening, and it was only on that basis that they could continue giving the shocks.

Check out this movie to watch the scene described above, or this one in which the avatar pleads “let me out”. They’re pretty unsettling to watch, and it helps you understand a bit of the emotional purchase that an avatar can have. Indeed, galvanic skin-response data showed that the subjects were frazzled on a very deep level. They also behaved, quite involuntarily, in a way that indicated that subconsciously they treated the avatar as “real”: They would give it more time than necessary to “think” about a question — as if to give it more of a chance to come up with the right answer and avoid a shock — and when the avatar asked them to speak more loudly, they would. I suspect one of the things that made that avatar particularly affecting was its voice-acting, which is quite good. As any video-game designer knows, the human voice has enormous emotional bandwidth: A good voice performance can make even the crudest stick figure seem real.

What are the implications of this stuff? Well, the scientists argue that avatars could be extremely useful for psychological research. If it’s true that we react to them with emotional depth, then it would be possible to set up virtual environments to conduct experiments that would be immoral or illegal using real people. Psychologists could, for example, model street-violence environments to observe “bystander behavior.”

But I think the really interesting question here is the morality of our relationships with avatars and artificial lifeforms. If we seem to forge emotional bonds with artificial life — even against our will, even when we know the bots aren’t real — what does it mean when we torture and abuse them? As Yishay Mor, a longtime Collision Detection reader who pointed this experiment out to me, wrote in his blog:

I’m not anthropomorphizing Aibo and Sonic the hedgehog. It’s us humans I’m worried about. Our experiences have a conditioning effect. If you get used to being cruel to avatars, and, at some subliminal level, you do not differentiate emotionally between avatars and humans, do you risk losing your sensitivity to human suffering?

Precisely the point. The question isn’t whether a lump of silicon can feel pain or horror — though Singularity fans argue this will indeed become an issue, and a British government panel recently mused on the possibility of human rights for robots. The real question is about whether our new world of avatars, non-player characters and toy robots will create situations where we degrade ourselves.

Obviously, this has some interesting philosophical implications for video games and game design, since that’s where we most frequently interact with artificial life-forms these days. I haven’t yet fully digested the implications here, though. What do you all think?

(Thanks to Yishay Mor and Bill Braine for this one!)

Is the US geographically unable to perceive global warming?

Global warming has long been a bigger public issue in Europe than in the US, and pundits have always assumed this is because Europe is more left-wing than America. But what if it’s simply because the geography of Europe makes it more likely that people notice global warming?

That’s the contention of an interesting — if too-short — piece in the Week in Review section of today’s New York Times. Andrew Revkin points out that the countries of Western Europe often experience extreme weather events together, such as the 2003 heat wave that killed thousands. Since the same thing is happening to diverse populations, it makes it more likely that they’ll suspect something is awry in the global climate. In the US, however, precisely the opposite happens: The structure of the continent makes it such that a homogeneous population regularly experiences totally different vicissitudes of weather in different regions. For example, in the last few weeks, Denver has been ploughed under with freaky levels of snow, while New York basks in a September-like balm. This has to do with geography:

In the lower 48 states, he said, conditions are shaped by variable patterns of warming and cooling in the Pacific and Atlantic Oceans, the atmospheric blockade created by the Rockies and other factors.

This essentially guarantees that the country is almost always experiencing more than one so-called climate anomaly at a time, Dr. MacCracken said.

“People live the weather,” he said. “Climate is a mental construct.”

Interesting point. I’d love to see it a bit more fleshed out, though.

The neuroscience of music: My latest piece for the New York Times

Last Sunday, the weekend Arts & Leisure section of the New York Times published my latest article — a piece about the neuroscience of music. Specifically, it’s a profile of Daniel Levitin, a scientist at McGill University who studies why, precisely, music gets its emotional hooks into us so deeply.

Remember how a few weeks ago I blogged about Chuck Klosterman’s column, in which he pointed out that YouTube was resuscitating the lost art of guitar wanking — because it allowed you to see the guitarist, and watching speed-metal shredding is curiously more exciting than merely listening to it? This is one of the big questions that Levitin studies: Why the experience of watching a musician perform affects us differently from listening to his or her recording.

The piece is here at the Times’ web site, and a copy is permanently archived below!

Music of the Hemispheres

Are our brains wired for sound? One professor has provocative theories, and they started with the Blue Oyster Cult

by Clive Thompson

“Listen to this,” Daniel Levitin said. “What is it?” He hit a button on his computer keyboard and out came a half-second clip of music. It was just two notes blasted on a raspy electric guitar, but I could immediately identify it: the opening lick to the Rolling Stones’ “Brown Sugar.”

Then he played another, even shorter snippet: a single chord struck once on piano. Again I could instantly figure out what it was: the first note in Elton John’s live version of “Benny and the Jets.”

Dr. Levitin beamed. “You hear only one note, and you already know who it is,” he said. “So what I want to know is: How do we do this? Why are we so good at recognizing music?”

» MORE...

Squirrel ESP

Dig this: A group of researchers has discovered that red squirrels appear to be able to predict the future.

At least, the future of the forests in which they live. American and Eurasian red squirrels live in spruce trees, and love to eat spruce tree seeds. To try and thwart the squirrels, the trees long ago evolved an interesting defense: An unpredictable boom-and-bust period of seed production. The trees will produce several low-seed years in a row and then, boom, outta nowhere and seemingly at random, a bumper crop of seeds. The idea is that the trees will starve the squirrels in the lean years, thus reducing the squirrel population — whereupon the trees will launch a massive seed offensive to try and frantically reproduce while the squirrels are on the ropes.

But here’s the thing: The squirrels have fought back. A team led by Stan Boutin of the University of Alberta studied the squirrels’ mating patterns, and Boutin found something remarkable: The squirrels appear to be able to predict when the trees are going to randomly produce a bumper crop. In a high-yield year, several months before the trees produce their seeds, the squirrels engage in a second mating cycle, doubling the size of their broods. The squirrels are somehow seeing into the future of the trees — or at least making incredibly accurate bets.

As Boutin said in a press release:

“It’s like the squirrels are using a very successful stock market strategy. Most of us invest conservatively when the market is down because our funds are tight and we can’t predict when things will turn around. It’s not until the market has improved and is humming along that we increase our investment. Squirrels do the opposite, investing heavily when they have barely enough to get by but just before the market turns favorable. The result is that their investment ‘their babies’ pay big dividends in the upcoming favorable market, which in this case is lots of seed.”

So maybe we should all have red squirrels managing our financial portfolios, kind of like the way monkeys throwing darts at a board full of stock symbols tend to outperform the S&P 500. Seriously, though, it’s still a mystery as to how the squirrels pull off this trick. Clearly, the trees are giving off some sort of signal that they’re about to jump into high-seed production. But what?

(Thanks to Ian Daly for this one!)
