Wired News just published my latest video-game column, and this one is about the end of the world — or at least the end of a world, anyway. Asheron’s Call 2, an online multiplayer game, never accumulated enough customers to become viable, so the game’s creators, Turbine, are going to shut it down in two weeks. My column is about what life is like in a condemned world. It’s online for free here at Wired, and a permanent archived copy is below:
“Anybody out there?” I type, but I already know it’s pointless. There’s nobody anywhere near me. For almost an hour, I’ve been wandering around a desolate plain: Gray clouds scud slowly over rough quartz mountains, while a few birds wheel in the air near mushroom-shaped trees. I never see another living soul. It feels like the end of the world.
And in fact, it is. I’m inside Asheron’s Call 2, an online game that is scheduled to die in two weeks. It never acquired enough players to make it self-sufficient, so the game’s owner — Turbine — is going to do something that only happens rarely in the world of online play: On Dec. 30, it’ll flip the power off on the remaining servers, and an entire world will blink out of existence.
Dig this: A group of students have developed photographic film composed of bacteria. They took E. coli and genetically modified it by adding a protein from blue-green algae that detects light. They also linked it to the E. coli’s digestion: In the dark, the bacteria digest sugar and produce a black pigment, but in the light they don’t. Then they coated a petri dish evenly with this modified stuff.
The result? An organic way of taking pictures. The students put the petri dish inside a pinhole camera, expose the dish to light, and presto: The bacteria produce replicas of the scene in dark patches of pigment. As Aaron Chevalier, one of the students, told the University of Texas’ web site:
“At first, we made blobby images and you had to imagine what they were.”
But over the course of the year, he and the other students refined the camera. Although it’s still made with old bookends, discarded microscope parts and a used incubator, the newest camera is much more compact and takes crisper pictures.
I love the look of the photos: They’re like ghostly old daguerreotypes somebody found in their dead great-aunt’s attic. It’s a great way to show the promise of synthetic biology — mucking with genetic material to produce new and weirdly useful forms of life.
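The dark-makes-pigment rule is simple enough to sketch as a toy model. This is just an illustration of the logic described above, not the students' actual protocol, and the 0.5 threshold is an invented value:

```python
# Toy model of the bacterial "film": bacteria sitting in the dark digest
# sugar and deposit black pigment; bacteria in the light don't. The 0.5
# threshold is an invented illustration value, not the students' protocol.
def develop(light, threshold=0.5):
    """light: 2D list of light intensities in [0, 1] hitting the dish."""
    return [[1.0 if cell < threshold else 0.0 for cell in row]  # 1.0 = black pigment
            for row in light]

# a tiny "scene" projected through the pinhole: bright corners, dark cross
scene = [[0.9, 0.1, 0.9],
         [0.1, 0.1, 0.1],
         [0.9, 0.1, 0.9]]
print(develop(scene))  # pigment appears exactly where the scene is dark
```

Run it and the dish comes back as a positive image: pigment wherever the scene was dark, clear agar wherever light hit.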
(Thanks to Joel Collier for this one!)
Snail locomotion is remarkably cool. A muscle runs along the length of the snail’s foot; waves of contraction travel along this muscle from the back of the foot to the front, while the snail’s slime adheres it to the surface upon which it travels. As each contracted section relaxes and expands, it shoves the snail forward.
The thing is, scientists did not have a really detailed sense of how this process worked, until an MIT team led by Anette Hosoi brought some snails into the lab and trained cameras on them while they slimed around. They mapped out the architecture of gastropod motion, then went one better — by creating their own robotic version! The robosnail has five movable segments that reproduce the muscular dynamic, while travelling on a 1.5-millimeter layer of Laponite slime. Apparently it can even travel upside down. But, as news@Nature reports:
So has the world been crying out for a robotic snail? “One can easily argue that snail locomotion is slow, slimy and inefficient,” admit the researchers in their paper. But they also point out that because gastropods have only one foot, it is much easier to build mechanical analogues of snails than of two-footed people or four-footed animals.
And although they are slow, snails can crawl over pretty much anything, making them extremely versatile at getting around different environments.
According to MIT’s own press release, this experiment could also offer some insights into the way blood flows in veins, since it follows the same physics: Fluid flow within a flexible boundary. The experiment could also, of course, eventually create a massive army of killer slugbots, raining death upon the battlefield, inch by inch.
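For the curious, the back-to-front contraction wave can be sketched crudely in code. The five segments match the robosnail; the step size, units and timing are invented:

```python
NUM_SEGMENTS = 5   # the robosnail's five movable segments
STEP = 0.3         # distance a freed segment slides forward (invented units)

def crawl(waves):
    """March a contraction wave from the rear segment to the front,
    `waves` times, and return each segment's final position."""
    positions = [float(i) for i in range(NUM_SEGMENTS)]  # rear to front
    for _ in range(waves):
        for i in range(NUM_SEGMENTS):
            # only the contracted segment is unstuck from the slime, so it
            # alone slides forward while the others stay anchored in place
            positions[i] += STEP
    return positions

print(crawl(3))  # every segment has inched forward about 0.9 after three waves
```

The key point the sketch captures: at any instant only one patch of foot is moving, yet after each full wave the whole body has advanced.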
(Thanks to Steve Emrich for this one!)
Conservatives don’t like personal audio players. Seventeen years ago, Allan Bloom inveighed against the Walkman, arguing that clapping on the headphones was a selfish, narcissistic manoeuvre, in which teenagers sealed themselves into a “nonstop … masturbational fantasy”. This year, in “The Age of Egocasting”, conservative writer Christine Rosen argued that iPods and MP3 players had accelerated this cultural erosion even further: iPod users had devolved into such navel-gazing twits that they didn’t even notice where they were going, and missed subway stops. Personal audio players, conservatives worry, are the ultimate statement that the individual is paramount; the world around us can go screw itself, because we’re not even paying attention.
As anyone who’s followed my ceaseless, numbing anti-iPod rants would know, I’m actually pretty sympathetic to this point of view. But in today’s New York Times, there’s a wonderful counterpoint in a profile of Andreas Pavel, the guy who invented the “stereobelt” — the device that Sony eventually released as the Walkman. During the article, Pavel tells this story:
Mr. Pavel still remembers when and where he was the first time he tested his invention and which piece of music he chose for his experiment.
It was February 1972, he was in Switzerland with his girlfriend, and the cassette they heard playing on their headphones was “Push Push,” a collaboration between the jazz flutist Herbie Mann and the blues-rock guitarist Duane Allman.
“I was in the woods in St. Moritz, in the mountains,” he recalled. “The snow was falling down. I pressed the button, and suddenly we were floating. It was an incredible feeling, to realize that I now had the means to multiply the aesthetic potential of any situation.”
That’s precisely right. The whole point behind the personal audio player is that it provides a new aesthetic dimension to an already-aesthetic experience: Looking at the world around you. Conservatives fret that the white-earbud-sporting masses are simply tuning out and ignoring everything around them. But just as often, I suspect, a soundtrack actually makes you more engaged with the world around you: You notice stuff in new ways because of the emotions the music evokes. Consider it a cognitive mashup: When I walk through Times Square listening to Bedrich Smetana’s “The Moldau” one day, and Corey Hart’s “Never Surrender” the next (yeah, shut up, I know), those are rather different aesthetic experiences.
An old folk myth claims that if you kill a bee, its hivemates will remember your face and come and get you. That’s a canard, of course, but a team of British and German scientists recently discovered something even spookier: Bees can recognize individual human faces. In an experiment, they put a bunch of pictures of faces in front of some bees, and put sugary liquid on a select few of the photos to attract the insects. When the bees were later shown the photos without any sugary liquid, they made a beeline (sorry, I couldn’t resist) for the ones that had previously been sugar-ified. From World Science:
The bees learned to distinguish the correct face from the wrong one with better than 80 percent accuracy, even when the faces were similar, and regardless of where the photos were placed, the researchers found. Also, just like humans, the bees performed worse when the faces were flipped upside-down. [snip]
Moreover, “Two bees tested two days after the initial training retained the information in long-term memory,” they wrote. One scored about 94 percent on the first day and 79 percent two days later; the second bee’s score dropped from about 87 to 76 percent during the same time frame.
Among the many mindblowing implications of the study is that it bursts apart the idea that facial recognition is rilly rilly difficult. Up until now, many neuroscientists have assumed that because facial recognition is evolutionarily crucial, and because we appear to have evolved a section of the brain — the fusiform gyrus — that deals primarily with it, the task must be hellishly hard. But if bees can do it, and educated fleas can do it (man, I just cannot stop myself today, sorry), maybe face-recognition isn’t as cognitively difficult as we assume.
(Thanks to Boing Boing for this one!)
For centuries, people have made up weird explanations for why the 1.5-ton narwhal has a long, spiralled tusk. Sailors claimed the beasts wielded them in battle; Jules Verne wrote that a narwhal tusk could slice open a ship’s hull “as easily as a drill pierces a barrel.” Later, snake-oil merchants passed them off as unicorn horns, or ground them up and sold the powder as a cure for everything from impotence to scurvy. But the actual function of the tusk remained a mystery …
… until now. A bunch of scientists from Harvard and the National Institute of Standards and Technology carefully studied a tusk in a lab and, as the New York Times today reports, got a shock:
The find came when the team turned an electron microscope on the tusk’s material and found new subtleties of dental anatomy. The close-ups showed that 10 million nerve endings tunnel from the tusk’s core toward its outer surface, communicating with the outside world. The scientists say the nerves can detect subtle changes of temperature, pressure, particle gradients and probably much else, giving the animal unique insights.
“This whale is intent on understanding its environment,” said Martin T. Nweeia, the team’s leader and a clinical instructor at the Harvard School of Dental Medicine. Contrary to common views, he said, “The tusk is not about guys duking it out with sticks and swords.”
That’s just awesome. Apparently this violates all known tooth anatomy (a sentence I did not really ever anticipate writing). Tubules in normal teeth never go to the surface. Apparently a team of Canadian scientists recently captured a live narwhal, put sensors on its head, and discovered that its brain-wave activity changes as the salinity in the water changes — which supports the idea that the tooth is a sensing device.
There really is not enough mainstream coverage of narwhal science, if you ask me.
John T. Unger — an artist and longtime Collision Detection reader — is masterful at taking castaway scrap materials and turning them into art. A while back I blogged about his gorgeous Great Bowl of Fire, a flame-cut firepit made out of a recycled tank. Recently John created another nifty project: A series of spanking paddles made out of discarded truck tires. They’ve been an enormous hit online (though personally I’d be scared senseless to have someone coming at me with one of these things, heh.)
We know TreeHugger readers have habits some might view as “weird”. Whether it is watering your plants with condensation from a dehumidifier, finding driving techniques for getting the most mpg out of a Prius, or even going so far as to turn a chest freezer into an ultra-efficient fridge to save energy, some of us tend to go the extra mile. We think it is about time you got rewarded for these “weird” habits.
The TreeHugger crew picked John’s paddles as one of the year’s most surreal environmentally-friendly projects, and he’s currently leading the race. But it’s close, and voting ends Tuesday — so if you like these paddles, hie thee to the voting page and make your voice heard!
This is insanely cool. A social-networks theorist has studied the interlinkings between rappers — and found that their social ties are quite different from other people’s. Apparently, the most famous, well-connected rappers tend to avoid one another.
This isn’t how things normally work amongst creative professionals. In creative industries, we often see a supercharged version of the “six degrees” effect, because well-known creators frequently collaborate with one another. Their linkages are thus shorter than usual: Studies show that movie actors have only 2.5 links on average between them, versus 3.6 for company board directors and 5.9 for high-energy physicists.
So Reginald Smith at MIT crunched the figures on 30,000 rap songs to see what data he could glean. Sure enough, the artists had close linkages — it only took 2.9 links on average to connect rappers together. But then he found an interesting quirk, as news@Nature reports:
Where the rap network differs from these others, however, is in a property called assortativity. This is a measure of how mixed the collaborations are between highly connected and less connected people. In assortative networks, well-connected individuals tend to prefer to make links with others similar to themselves. [snip]
There seems to be no such pattern for rappers. Smith suggests that this might be partly due to commercial competition between successful artists, who are reluctant to lend their cachet to a rival.
But he points out that the aversions of successful rap artists may go deeper than that. Feuds are common in the business, such as that which existed in the mid-1990s between artists signed to Death Row Records in Los Angeles and those with Bad Boy Records in New York. Such rivalries have sometimes led to violence and even murder.
It’s also true that hip-hop producers frequently spend their creative time nurturing new talent, which would be a much more benign reason for their lower assortativity. Either way, it’s a damn interesting piece of research.
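For the curious, the assortativity measure itself is easy to compute by hand: it's just the Pearson correlation between the degrees at the two ends of every edge. A minimal sketch, using an invented toy collaboration graph rather than Smith's 30,000-song dataset:

```python
from collections import Counter
from math import sqrt

# Degree assortativity: the Pearson correlation between the degrees at the
# two ends of every collaboration. Negative = disassortative (hubs pair with
# non-hubs), the odd pattern Smith found among rappers. The edges below are
# invented stand-ins, not his real data.
def degree_assortativity(edges):
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # each edge contributes both orderings so the measure is symmetric
    xs, ys = [], []
    for a, b in edges:
        xs += [degree[a], degree[b]]
        ys += [degree[b], degree[a]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# two "star" rappers who each record only with lesser-known artists
collabs = [
    ("StarA", "NewcomerA1"), ("StarA", "NewcomerA2"), ("StarA", "NewcomerA3"),
    ("StarB", "NewcomerB1"), ("StarB", "NewcomerB2"), ("StarB", "NewcomerB3"),
]
print(degree_assortativity(collabs))  # -1.0: perfectly disassortative toy graph
```

In this toy graph every edge joins a well-connected star to a one-collaboration newcomer, so the coefficient bottoms out at -1; Smith's real finding was merely that the rap network skews in that direction.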
(Thanks to Steve Emrich for this one!)
I am hugely looking forward to the new X-Men movie, so I was stoked to check out the trailer. And sure enough, it looks like a mutantastic film; in a violation of all known laws of sequel physics, the second X-Men movie actually demonstrated less creative entropy than the first, so I have high hopes for the third. There’s only one problem with it, and it’s this:
Kelsey Grammer is playing The Beast. And y’know, the makeup is pretty good. Grammer’s face does indeed seem sculpted vaguely like that of the comic-book Beast. But there’s something wrong here, and Tycho over at Penny Arcade nails it when he says …
… whenever I catch a glimpse of Kelsey Grammer as The Beast it kind of injures things. It’s just, like, I know you, man. Indeed, it would appear he is well known.
Precisely: It is a mistake to cast a big, well-known, brand-name star as a beloved comic-book hero, because the star’s branded personality tends to overshadow the role. Let’s call it the “fame effect”: The producer and director figure they need big names to sell the movie, but in today’s insane world of 24/7 celebrity coverage — where tabloid magazines bray with Homeric repetitiveness about the personal lives of our modern Gods — we come to “know” the stars with such pseudo-intimacy that it’s like seeing your mother up on stage there. Comics are about mythic characters. You can’t use someone in a mythic role who regularly traffics their real-life personality to the media like cheap crack and then expect us to ignore it.
This is one of the many (very many) reasons the latest Star Wars trilogy sucked so badly. In the original trilogy, virtually everyone was an unknown — except for Harrison Ford and Alec Guinness — so those actors were genuinely swallowed up by their mythic personae. But in Episode One the effect was precisely reversed. I kept on thinking, jeez, those guys from Trainspotting and Schindler’s List better watch out with those light sabers — they’re gonna hurt themselves.
The same dynamic has spoiled the otherwise-excellent X-Men movies. Wolverine “worked” because nobody knew who the hell Hugh Jackman was; Storm didn’t because Halle Berry is numbingly familiar. (Fanboy rant: She also doesn’t look even vaguely like the comic-book Storm. A far better pick would have been the insanely rocking Gina Torres, who pulled off the kick-ass warrior-chick thing with aplomb in Serenity, yet is still unfamous enough to avoid triggering the fame effect. Speaking of which, notice that Joss Whedon — director of Serenity and the various Buffyverse shows — cast totally unknown actors to craft his thoroughly-believable mythic universes (universii?), and that’s one of the reasons they worked so well.)
Of course, if you really need to use famous stars in your comic-book movie, one way to route around the fame effect is simply to hire actors who can actually, y’know, act. Alec Guinness was globally famous when he played Obi-Wan Kenobi, and so were Patrick Stewart and Ian McKellen in the X-Men movies — but those guys are actual pros. Not the runway-models-on-the-lam and hey-I’m-playing-me-again thesps who constitute today’s Hollywood elite.
(Thanks to Penny Arcade for this one!)
If you subscribe to my RSS feed, you’ve probably noticed that it doesn’t work very well — it doesn’t include links, images, or formatting, so it’s hard to read.
I’d recommend switching over to my RDF feed or my Atom feed, both of which are nicely formatted. I’ve never been able to figure out how to fix the RSS feed, and I’m inclined to not bother, because these other two good options exist.
We now return to your regular programming!
Behold the fearsome echizen kurage — the latest threat from the briny deep.
These jellyfish are six feet wide, they weigh up to 450 pounds, they’re covered in poison tentacles, and they’re totally b0rking the food supply of Asia. For reasons that no scientist can figure out, they have in recent months been massing at levels 100 times larger than normal off the coasts of China, Japan and South Korea. They’re getting caught in fishermen’s nets and ruining their hauls, such that incomes in some fishing regions are down 80 percent. In a delightfully Godzilla-class move, the three governments are convening a joint “jellyfish summit” this month to figure out how to fight this gelatinous menace.
In the meantime, the locals are making the best of it, as the British Times reports, because …
… rather than just complaining about jellyfish they are eating them. [snip]
Coastal communities are doing their best to promote jellyfish as a novelty food, sold dried and salted.
Students in Obama have managed to turn them into tofu, and jellyfish collagen is reported to be beneficial to the skin.
Some speculate that heavy rains in China have sparked the jellyfish invasion; others wonder about global warming. If it’s the latter — man, one could scarcely ask for a better argument in favor of signing Kyoto. “What, you want to get killed by a quarter-ton jellyfish?”
(Thanks to Andrew Griffin for this one!)
Sometimes, you really don’t need a clever headline, eh? No, just the straight, plain, incredibly weird facts.
Anyway, here’s the backstory: Last year I blogged about Live-shot, a Texas company that let you take control of a gun on the Internet and go hunting. Legislators, predictably, freaked out, and a few months later the Texas Parks and Wildlife Commission voted unanimously “to ban remote hunting for game animals,” as The Dallas Morning News reported. The CEO of Live-shot, John Lockwood, claimed this was discrimination because he had a paying customer who was wheelchair-bound and wanted to use the system to hunt a bit o’ buck. The new rules require that “anyone hunting a game animal or bird be physically present and in control of the firearm”, so Lockwood actually built a wheelchair-stand on his property in Texas, from which anyone in a wheelchair can take control of the Live-shot technology and use it for hunting. In essence, they’ll still be hunting via telepresence, but from only a couple of feet away — thus obeying the letter, if not the spirit, of the law.
I figured this exercise in Second Amendment surreality had run its course, until I checked into Gizmodo today to discover that Lockwood has started a new business: A remote-control paintball gun that netizens can use to shoot at bikini-clad chicks. As you might expect, he put up a teaser video, and as you might expect, it is both intentionally silly and unintentionally creepy. And then there’s the FAQ!
Q: Can I have my wife run through the field so I can paintball her?
A: It’s surprising how many requests we’ve had for this one! Including, girlfriends, husbands, significant others. Also, exes. Hey, we have a new theme here! Send them out to us, or send a blow up of their picture for target practice, and send her / him a copy of your session on DVD!
I actually thought I’d have some witty comment to make on this one, but it all seems kind of superfluous. Ken Goldberg, my brain needs watering.
(Thanks to Gizmodo for this one!)
Dig this: Some scientists have created a new form of matter using a bunch of regular sand and a falling marble. They packed the sand loosely in a container, dropped the marble in, and observed it using a super-fast, 5,000-frames-per-second x-ray camera. They found that the sand was behaving like an ultra-cold gas — because the sand grains displayed very little randomness in the way they moved.
The thing is, normally you have to cool materials down to nearly absolute zero — minus 459.7 degrees Fahrenheit — to strip the randomness out of them. But this sand was just room temperature. You could do the same thing with a coffee can and marble in your kitchen, though you wouldn’t be able to spy the coolest feature of the jet of sand that sticks up in the air: It’s hollow. As one of the scientists said in their press release:
“One of the biggest questions that we have still not solved is why this jet is so sharply delineated. Why are there these beautiful boundaries? Why isn’t this whole thing just falling apart?” Jaeger asked.
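The "randomness" the x-ray camera measured is what physicists call granular temperature: how much the grains' velocities fluctuate around the mean flow. Near-zero fluctuation is what makes the sand act like an ultra-cold gas. A toy sketch, with made-up velocity data:

```python
# Granular temperature: the mean squared deviation of grain velocities
# from the bulk flow. The velocity lists below are made-up illustration
# data, not measurements from the Chicago experiment.
def granular_temperature(velocities):
    """velocities: list of (vx, vy) grain velocities."""
    n = len(velocities)
    mean_vx = sum(v[0] for v in velocities) / n
    mean_vy = sum(v[1] for v in velocities) / n
    return sum((vx - mean_vx) ** 2 + (vy - mean_vy) ** 2
               for vx, vy in velocities) / n

falling_together = [(0.0, -1.0)] * 6                  # grains moving as one
jittery_gas = [(0.3, -1.0), (-0.3, -1.0), (0.2, -0.8),
               (-0.2, -1.2), (0.1, -0.9), (-0.1, -1.1)]

print(granular_temperature(falling_together))  # 0.0: no randomness at all
print(granular_temperature(jittery_gas))       # larger: ordinary grainy chaos
```

The surprise in the experiment was that the real sand jet looked like the first list, not the second.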
(Thanks to Steve Emrich for this one!)
Everyone understands the idea of pixels; array ‘em in a 2D grid and presto, you’ve got an LCD screen for a laptop. But what about arraying them in a 3D grid, like a cube? You could display information in remarkably weirder ways — with icons that move forward and retreat, for example, or blobs that change shape as they track data.
I’ve seen a couple of great examples of this, mostly by students and fellows at NYU’s ultracool ITP, where my friend Tom Igoe teaches. One year I showed up at their open house and saw a cube with embedded LEDs created by James Clar; you can see a video of it in action online here. Another year I saw Glowbits, a set of glowing ping-pong-balls on sticks that you could raise or lower to create patterns — which would raise or lower corresponding ping-pong-balls on a similar display in front of another user. Imagine using that for instant messaging! Digital-age smoke signals!
Anyway, the point is that today I saw one of the weirdest 3D displays ever — Electric Moons. The web site describes it thusly:
The “electric moOns” installation consists of 100 helium-filled balloons. Each balloon is attached to a thin cable. The length of the cable, and thus the floating height of every balloon, can be adjusted steplessly with a cable winch from 0-5 meters. Additionally, each balloon is lit from inside with dimmable superbright LEDs. The 100 balloon-voxels (volume pixels) are arranged in a 10x10 square (8x8 meters).
There are pictures of it here and a video of it in action. What I really want, though, is for someone to use a 3D display to create a video game. Transforming a puzzle game like Tetris or Bejeweled into a 3D format would fry my noodle.
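To make the voxel idea concrete, here's a sketch of how one might drive a grid like "electric moOns". The 10x10 size and 0-5 metre range come from the description above; the ripple pattern and everything else is invented:

```python
import math

SIZE = 10          # 10x10 balloon grid, per the installation's specs
MAX_HEIGHT = 5.0   # each winch can pay out 0-5 metres of cable

def ripple_frame(t):
    """One frame of an animated ripple: each balloon's height is the voxel value."""
    frame = []
    for x in range(SIZE):
        row = []
        for y in range(SIZE):
            r = math.hypot(x - 4.5, y - 4.5)            # distance from grid centre
            h = (math.sin(r - t) + 1) / 2 * MAX_HEIGHT  # map wave into winch range
            row.append(round(h, 2))
        frame.append(row)
    return frame

frame = ripple_frame(0.0)
print(frame[0][0])  # corner balloon's height in metres for this frame
```

Step `t` forward a little each second, feed the heights to the winches, and a wave rolls across the ceiling of balloons.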
(Thanks to Tod for this one!)
Here’s an intriguing new word for you: “Solastalgia.” It’s defined as:
… the distress caused by the lived experience of the transformation of one’s home and sense of belonging and is experienced through the feeling of desolation about its change. [snip]
The diagnosis of solastalgia is based on the recognition of the distress within an individual or a community about the loss of ‘endemic sense of place’ and the loss of a sense of control of its destiny. The positive prescription for solastalgia is personal and community involvement in the protection, restoration and rehabilitation of their place/bioregion/’country’ and the return of an endemic sense of place in both individuals and communities.
In essence, solastalgia is the sadness caused by environmental change.
The concept was created by Glenn Albrecht, a professor at the School of Environmental and Life Sciences at the University of Newcastle, after he noticed the depression amongst rural farmers in drought-stricken lands. The drought had caused increased workloads, debt, and fear about future security — and, interestingly, the small changes in their own front yards formed powerful triggers and metaphors for their despair. Albrecht’s studies showed that farm women would be enormously more upset over the loss of their gardens than their mortgage or income. (“Losing a garden is often quite dramatic,” as a colleague noted. “It’s often the only thing that’s between them and a vast landscape of dust.”)
Albrecht learned that there was no word in the English language that completely expressed this feeling, so he crafted his own: “Solastalgia” combines solacium — solace — with nostos, which means to “return home”, and algos, or “pain”.
But here’s where things get interesting: Albrecht designed the word to reflect the human pain wreaked by environmental damage. And obviously you could see this recently with Katrina — or the horrific Kashmir earthquake — where survivors’ houses and towns are destroyed. But many people also feel a less-traumatic form of solastalgia for online locales, when these go through dramatic changes. My friend Morgan, who introduced me to this word, pointed out that he’s felt sadness and displacement when online BBSes like The Well or Echo start to decay. I’ve felt it myself. You go away from an online board for a few months because you need a break, and when you come back, a once-lively zone of conversation has become abruptly depopulated. Maybe everyone got into a fight and huffed; maybe everyone got too busy to talk any more; but either way, all the folks you know aren’t there and everything’s oddly, disquietingly different.
As Albrecht put it beautifully in a comment to News in Science:
“It’s the homesickness you feel when you’re still at home.”
(Thanks to Morgan Noel for this one!)
I’m coming late to this one, but I have to say — I was totally charmed by the story of Emily the cat, who wound up in France after climbing into a shipping container in Chicago and taking an accidental three-week trip across the Atlantic. Workers there found her, read her tags, and contacted the owners. Apparently Emily was a pretty huge celebrity in France, such that Continental Airlines heard about it and offered the cat a business-class seat back to the USA. At the inevitably heartwarming press conference, the owners told the Washington Post …
“She seems a little calmer than she was before, just a little quieter, a little, maybe, wiser,” said Lesley McElhiney, 32.
Well, I’d be calmer too if I’d just pulled a Rivers Cuomo and spent 21 days locked in a box. But enough sarcasm: This whole post exists purely for the purpose of running that most excellent photo of Emily checking out the view!
Wired News just published my latest video-game column — and this one’s about one of my fave topics: The “Uncanny Valley” effect. It came out of me spending a week playing the lineup of Xbox 360 launch titles, and realizing that even though the humans are more realistic than ever, they’re also creepier than ever.
To read on, you can see the whole piece at Wired News, or via the copy archived below!
Monsters of Photorealism
by Clive Thompson
My hat is off to whoever designed the new King Kong game for the Xbox 360, because they’ve crafted a genuinely horrific monster. When it first lurched out of the mysterious tropical cave and fixed its cadaverous eyes on me, I could barely look at the monstrosity.
I’m speaking, of course, of Naomi Watts.
Not the actual Naomi Watts. She’s heart-stoppingly lovely. No, I’m talking about the version of Naomi Watts that you encounter inside the game.
In some ways, her avatar is an admirably good replica, with the requisite long blond hair and juicy voice-acting from Watts herself. But the problem begins when you look at her face — and the Corpse Bride stares back. The skin on virtual Naomi is oddly slack, as if it weren’t quite connected to the musculature beneath; when she speaks, her lips move with a Frankensteinian stiffness. And those eyes! My god, they’re like two portholes into a soulless howling electric universe. “Great,” I complained to my wife. “I finally get to hang out with a gorgeous starlet — and she’s dead.”
Most of us have, at some sodden point in our lives, learned the hazards of “beer goggles”. Indeed, research shows that 68 percent of men and women who hand out a phone number at a bar regret it the next day — after realizing the hottie they’d met was, in fact, not.
But now a neuroscientist has actually worked out a formula describing the precise interaction of booze, lighting, and distance that produces this dread syndrome. In Nathan Ephron’s equation, “Β” stands for the intensity of one’s beer goggle effect, calculated from these variables:
An = number of units of alcohol consumed
S = smokiness of the room (graded from 0-10, where 0 = clear air; 10 = extremely smoky)
L = luminance of ‘person of interest’ (candelas per square metre; typically 1 = pitch black; 150 = normal room lighting)
Vo = Snellen visual acuity (6/6 = normal; 6/12 = just meets the driving standard)
d = distance from ‘person of interest’ (metres; 0.5 to 3 metres)
The upshot is that the effect isn’t just about drinking: It’s about visual ability. A dance floor lit only by blacklights would produce an epidemiological surge of beer goggles. As Ephron notes, “someone with normal vision, who has consumed five pints of beer and views a person 1.5 metres away in a fairly smoky and poorly lit room, will score 55, which means they would suffer from a moderate beer goggle effect.”
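Plugging numbers in is easy enough. The formula, as it was widely reported, is Β = (An)² × d(S+1) / (√L × (Vo)²); the sample values below are my own guesses at the "five pints, smoky, poorly lit" scenario, not Ephron's published inputs:

```python
from math import sqrt

def beer_goggle_score(an, s, l, vo, d):
    """B = (An)^2 * d * (S + 1) / (sqrt(L) * Vo^2), per the widely reported formula."""
    return an ** 2 * d * (s + 1) / (sqrt(l) * vo ** 2)

# five pints (an=5), fairly smoky room (s=5), dim lighting (l=17 cd/m^2),
# normal vision (vo=1), person of interest 1.5 metres away -- all guesses
score = beer_goggle_score(an=5, s=5, l=17, vo=1.0, d=1.5)
print(round(score))  # lands around 55, the "moderate beer goggle effect" band
```

Note how the lighting term sits under a square root: dimming the room hurts less than drinking more, since the alcohol term is squared.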
Presumably this stuff is a bit hard to calculate while you’re smashed beyond all recognition, but as Barbie said, math class is tough.
By the way, this stuff was funded by the contact lens firm Bausch & Lomb.
(Note: In a bit of hilarious mistyping, I initially wrote this post as “beer googles”. And I didn’t mistype it that way once — I typed it wrong the whole way through! What would one call the digital age version of the Freudian slip? A Weinerian slip? Thanks to my brother-in-law Rob for pointing out the error!)
A couple of folks have emailed me asking for an Atom feed, so I’ve duly instituted one — the permanent link is below on the left-hand column!
I'm Clive Thompson, the author of Smarter Than You Think: How Technology is Changing Our Minds for the Better (Penguin Press). You can order the book now at Amazon, Barnes and Noble, Powells, Indiebound, or through your local bookstore! I'm also a contributing writer for the New York Times Magazine and a columnist for Wired magazine. Email is here or ping me via the antiquated form of AOL IM (pomeranian99).