This is beyond awesome. In its current issue, National Geographic — a magazine that sells directly to heartland America — takes on the idiocy of creationism as bluntly as possible. That first picture above? It’s the cover. The second picture? The opening page for the article.
Indeed, that layout has a quality of bait-and-switch that is practically Onion-esque. As Evan Ratliff wrote in a brilliant Wired article last month, a chief goal of modern creationism isn’t really to persuade scientists. It’s merely to be taken seriously by major publications and government figures; if creationists can manage to get invited to comment at a conference or in a magazine article, it allows them to “prove” to their flock that creationism is a serious, scientific rival theory to Darwinism. Merely being in dialogue with the scientific establishment gives them crucial street cred amongst their real audience, Christians.
So one can imagine a creationist spying the magazine and excitedly grabbing it, assuming that a magazine as prestigious as National Geographic has now been forced to take creationism seriously. But that deliciously teasing cover line is really just a set-up for the typographically brilliant “NO”. It is not merely a good article; it’s a rhetorical pie in the face to this brand of barking-mad spiritual literalism that is so badly screwing the scientific future of the country. The US states that are home to the main proselytizers of creationism are falling further and further behind in science; students trained in creationist high schools never develop crucial skills of inquiry, so those states are producing virtually no scientists or scientific discoveries of note. If it were up to these people, we wouldn’t have aspirin or light bulbs.
It is a sad comment on modern America that National Geographic even has to publish this. But it’s nonetheless wonderful that the magazine did. And by the way, to those creationists who protest that Darwinian selection is “just a theory”? As National Geographic notes:
In the same sense, relativity as described by Albert Einstein is “just” a theory. The notion that Earth orbits around the sun rather than vice versa, offered by Copernicus in 1543, is a theory. Continental drift is a theory. The existence, structure, and dynamics of atoms? Atomic theory. Even electricity is a theoretical construct, involving electrons, which are tiny units of charged mass that no one has ever seen. Each of these theories is an explanation that has been confirmed to such a degree, by observation and experiment, that knowledgeable experts accept it as fact. That’s what scientists mean when they talk about a theory: not a dreamy and unreliable speculation, but an explanatory statement that fits the evidence.
Of course, creationism is not a scientific theory because it does not have a whit of evidence that knowledgeable experts accept as fact.
I am not a morning person; indeed, my circadian rhythms have been so thoroughly addled by work lately that I am probably now a close relative of the common fruit bat. Thus my delight at finding the “Light Sleeper Duvet”, a new rise-‘n-shine technology. It was created by a company called Loop.ph, but don’t hold their annoying, intentionally unpronounceable name against them; their actual product seems to be kind of cool. It’s designed to wake us up gradually by slowly glowing brighter and brighter over a 20-minute period, mimicking the slow creep of ambient sunlight at dawn. The upshot is a duvet that helps ameliorate seasonal affective disorder and jetlag, as the designers note on their site:
It is recognised by most scientists that SAD and other sleep/mood disorders are linked to a shift in the suprachiasmatic nucleus or circadian rhythm and often referred to as the ‘body clock’. It is recommended that a bright light stimulus is needed to reset the body clock everyday recognising that this controls our daily sleep/wake cycle and hormone functions. [snip] Exposure to intense artificial light suppresses the secretion of the night time hormone melatonin, and may enhance the effectiveness of serotonin and other neurotransmitters. It is believed to be the only way of shifting the circadian rhythm. Research shows that the body’s internal clock only responds to bright light at certain times of day. This peak time in normal people occurs when the circadian rhythm is in R.E.M sleep, which is approximately 1 to 2 hours before waking. This promotes the use of Light Sleeper Bedding and proves it to be one of the most effective products for treating SAD and improving well being as it synchronises our body clock each morning. The bedding is also suitable for those who keep unusual hours and who travel in helping to prevent jet lag and regulate the body clock. Our body clock responds to an imitation sunrise by accelerating the wake-up processes.
(Thanks to Rick Spence for this one!)
You’ve undoubtedly heard about the battle of the bulge — the arguments over that weird lump on George W. Bush’s back during the debates. Critics say that it’s a radio receiver feeding him lines from an earpiece, which would explain his odd penchant for, when asked a question, stumbling around for words initially, then breaking off, staring into the distance, and then suddenly coming out with a terse, pithy proclamation. (It’s also a well-observed problem amongst neophyte newscasters that when someone is reading lines into your earpiece, it’s very hard to keep from shifting your eyes around; and when he’s answering questions at a press conference, Bush shifts his eyes around with a drama so cartoonish that it wouldn’t be out of place in a silent movie.) However, Bush and his advisors categorically deny he’s wired, and say the bulge is nothing more than bad tailoring.
The latest salvo comes literally from a rocket scientist. Robert Nelson is a senior research scientist for NASA and Caltech’s Jet Propulsion Laboratory, and a globally recognized expert in image analysis. (Currently he’s studying pictures of the Saturnian moon Titan.) Nelson got interested in Bush’s bulge and began doing image analysis of a videotape of Bush, taken by one of Nelson’s colleagues. The scientist’s conclusion? As he told Salon:
“I would think it’s very hard to avoid the conclusion that there’s something underneath his jacket,” he says. “It would certainly be consistent with some kind of radio receiver and a wire.”
Given how politicized this issue is, Nelson — a self-described Kerry supporter — will probably face enormous derision for even bothering to study this. But as he says: “If they force me into an early retirement, it’ll be worth it if the public knows about this. It’s outrageous statements that I read that the president is wearing nothing under there. There’s clearly something there.”
(Thanks to Boing Boing for this one!)
In recent weeks, there’s been a mini-boom in news about giant squid. But the most amazing revelation came last week, when, as news.com.au reports, a study in Australasian Science dropped this bombshell:
According to scientists, squid have overtaken humans in terms of total bio-mass. That means they take up more space on the planet than us.
Clearly, the seventh seal hath been opened. Actually, the real causes are nearly as apocalyptic: The scientists say squid have flourished because global warming and the commercial overfishing of whales have removed most of the giant squids’ main predators. They also apparently grow larger when the water is warmer.
(Thanks to Debbie for this one!)
For its September issue on politics, Wired asked me to research six short brainstorms on how digital tech can help fix some of the biggest problems plaguing the political system. They’re now online at Wired’s site, and for reference’s sake, I’ve posted ‘em below, too.
Problem #1: We can’t count votes correctly
Solution: Open-source the voting machines. As the 2000 electoral fiasco proved, nothing in a democracy is more important than counting votes accurately. But old technologies - punchcards, optical scanning, and lever devices - are riddled with flaws. Worse, the current trend is to replace them with electronic voting machines made by companies like Diebold, which have an even scarier record of mysteriously erasing votes. Why do these new devices malfunction? Well, we can’t tell: They run on proprietary code that only a few government auditors have been permitted to examine.
The solution is to go open source. Officials in the Australian Capital Territory, that country’s Washington, DC, used an open source project to develop their regional voting software, and it runs with 100 percent accuracy. (They checked it in 2001 against a set of hand-counted paper ballots.) “What goes in is what comes out,” says Phillip Green, the region’s electoral commissioner. The result: software as transparent as democracy ought to be.
Toast is Yummy28: Hey Clive, Ireally dig your site…
Artificial-intelligence experts tend to pooh-pooh chatbots because, they argue, they’re not “real” intelligence. Since chatbots can only use preprogrammed responses to converse, scientists think they’re fundamentally un-human-like. Chatbots tend to be incredibly dull, unable to follow a logical conversational thread, and prone to repeating the same thing over and over again.
At which point one might well ask … what precisely is so un-human-like about that? Hell, that describes about half the conversations I overhear on the subway, and easily 95% of all online chat. As I’ve argued many times in the past, artificial intelligence succeeds most when it aims low rather than high, since human intelligence is itself most often parked in neutral. I’m sure you’ve been at a party, gotten trapped in a dull conversation, and found yourself totally tuning out, chiming in with the occasional bot-like response — “oh?”; “cool”; “yeah?” — merely to keep up the appearance that you’re paying attention. The thing is, that’s all you need to do to hold up one end of a dialogue, since many people engage in conversation not to actually exchange ideas but merely to listen to the sound of their own voices; either that, or they’re simply pinging their friends to remind them of their existence.
Anyway. The point is, I’ve argued this for years, and anti-chatbot scientists have had one simple response: “You’re wrong, because you yourself would never be fooled by a chatbot if you were talking with one.” I disagreed, arguing that were a chatbot to engage me in conversation without my knowing it was a bot, I would probably never guess.
And ya know what? I was right. Because yesterday, someone sicced a chatbot on me. They went to a new site called “Chatting AIM Bot”, where you can send a chatbot to initiate a conversation with an unsuspecting friend — and then watch to see if they get fooled. And, as you can see by the chat log below, I did indeed get fooled.
WARNING: You’ll notice I didn’t link to the “Chatting AIM Bot” site, and that’s for a reason. When I visited it, I discovered the site is crammed full of drive-by spyware and adware that automatically installs itself on your machine. So, enjoy the trick below, but don’t visit the site; they’re pretty sleazy.
Toast is Yummy28: Hey Clive, Ireally dig your site…
pomeranian99: glad you like it
Toast is Yummy28: cool as ice!
(Click on “more” below to see the rest of it!)
As I’ve written before, the bleepy music of early video games has thoroughly invaded the worlds of hip-hop and techno. And for years, technoheads have been doing full-length musical remixes of early videogame tunes. But I just discovered my favorite practitioners of the craft: 8bitpeoples, a group that produces songs that are wholly original but utterly faithful to the deranged sequencing and low-fi limitations of early game tunes. You can download (legally!) a whole pile of their stuff via bittorrent; for a taste, check out this MP3 of the song “Croatian Love”.
(Thanks to Legaltorrents for this one!)
Physics Web recently asked its readers to nominate the world’s most beautiful equations. The winner? “Euler’s identity” equation, depicted above, which respondents variously described as “the most profound mathematical statement ever written”, “uncanny and sublime”, “filled with cosmic beauty”, and “mind-blowing”. What’s so cool about it? As Physics Web noted:
The equation contains nine basic concepts of mathematics — once and only once — in a single expression. These are [in order]: e (the base of natural logarithms); the exponent operation; pi; plus (or minus, depending on how you write it); multiplication; imaginary numbers; equals; one; and zero.
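For the record, since the cover image of the equation doesn’t reproduce here, the identity itself is:

```latex
e^{i\pi} + 1 = 0
```

The multiplication (between i and pi) and the exponent operation live in the left-hand term; the plus, equals, one, and zero are right there on the surface.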
As one respondent noted, “What could be more mystical than an imaginary number interacting with real numbers to produce nothing?” Back in the 19th century, the American mathematician Benjamin Peirce gave a lecture proving “Euler’s identity”, and concluded:
“Gentlemen, that is surely true, it is absolutely paradoxical; we cannot understand it, and we don’t know what it means. But we have proved it, and therefore we know it must be the truth.”
Which is, of course, one of the great charms of hard-core mathematics and physics: If you frontload an equation into your brain that is complex enough, deep enough, and elegant enough, the sensation is pretty much indistinguishable from being baked out of your mind.
(Thanks to Slashdot for this one!)
It’s fall. And you know what that means. Time for colorful leaves, crisp air — and, of course, the United Nations’ annual census of the world’s robot population.
Words fail to express my unbridled delight that global civil servants actually spend their time collating this information. According to a Wired News story on the census, there are now 607,000 “automated domestic robots” out there, mostly robot lawnmowers and Roombatic vacuum cleaners. The UN predicts that by the end of 2007, there’ll be a remarkable 4.1 million domestic robots.
I think we should herd ‘em all into one part of the US and make it an official state. If you want more info, you can check out the UN’s official robot-census site, which includes such superb dinner-conversation trivia as:
Almost 20,000 robots in Spain — the robot density in Spain is now higher than in France.
Zut alors. By the way, that picture above? It’s a painting by my friend El Rey, who frequently does robot-based artwork, and its title is “I, For One, Welcome Our Robot Overlords”. I think it’s sold, but his site is replete with other robot art.
(Thanks to Slashdot for this one!)
For historians, digital technology is a double-edged sword. On the one hand, almost everything that “important people” write — email, documents, text messages — is stored somewhere, so computers are preserving ever more essential records of history in progress. The problem is, most of this stuff is written on programs that quickly become obsolete, and thus as inscrutable as the Voynich manuscript. I myself have a couple of 5 1/4-inch floppy disks from 1989 with copies of my college papers on them, written in WordStar 5.0. I’ll probably never read them again.
The British Library is now bonking up against this conundrum. They’ve announced a project to collect the emails and digital documents of literary greats such as Ted Hughes, Stephen Hawking, and, uh, J.K. Rowling. But since much of this stuff is on ancient media, they’re also forced to somehow get access to all the ancient steam-driven computers on which these notables wrote their great works. But, as digital archivist Jeremy John told News.com.au, the library does not actually have space to store these machines. As a result …
He is appealing for help from members of the public who own obsolete machines so he can unlock archaic files. The British Library does not have room to store bulky computers, but John wants to compile a list of households that own working machines such as the Atlas, one of the earliest British computers that was widely available. [snip]
“As well as computers, I need people to keep hold of disk drives and manuals,” John says.
“Manuals are not officially published, so libraries do not hold copies of them. Without them it will be difficult to get the computers to work.”
He has compiled a list of 11 computers from the 1960s and beyond that he needs to locate, including rare British models such as the Whitechapel Workstation MG-1 and the Sinclair ZX80.
(Thanks to Bookninja for this one!)
You know the Doppler effect? It’s the way that waves emitted by a moving object appear to change in frequency as the object races past you. The most common example is the way an ambulance siren appears to rise in pitch as it approaches you, then fall as it moves away. As the ambulance approaches, the wave crests are hitting you more and more quickly; as it recedes, they hit you more and more slowly. Spacecraft and satellites, which move at enormous speeds, continually run into the Doppler effect, so astrophysicists are pretty accustomed to dealing with it.
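For the curious, the moving-source version of the effect is simple enough to compute. Here’s a quick numerical sketch; the 700 Hz siren pitch and 30 m/s ambulance speed are my own illustrative values, not anything from the article:

```python
# Classical Doppler shift for a sound source moving toward or away from a
# stationary listener. The formula is f_obs = f * c / (c -/+ v_source).
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 C

def observed_frequency(f_source, source_speed, approaching=True):
    """Frequency heard by a stationary observer (moving-source case)."""
    sign = -1.0 if approaching else 1.0
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * source_speed)

siren = 700.0  # Hz
coming = observed_frequency(siren, 30.0, approaching=True)    # higher pitch
going = observed_frequency(siren, 30.0, approaching=False)    # lower pitch
```

Plug in a 30 m/s ambulance and the 700 Hz siren jumps to roughly 767 Hz on approach and sags to about 644 Hz going away, which is exactly the swoop you hear on the street.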
But apparently the Huygens probe was nearly doomed by the Doppler effect. The Huygens probe, as you may recall, is the little planetary explorer currently riding along on the school-bus-sized Cassini probe as it orbits Saturn. In January, Huygens will detach from Cassini and descend to the surface of the Saturnian moon Titan. And that’s where the trouble begins. According to IEEE Spectrum Online, the communications link between Cassini and Huygens was badly designed — such that the Doppler effect would render unintelligible any data coming from Huygens. If the probe discovered enormous tentacled methane-breathing telepathic squid-based life-forms, we’d never know, because the probe’s signal would be indecipherable. Sucky, eh?
But the really cool thing is how the error was discovered. It was all the work of a lone Swedish engineer, who spotted the problem late one night — and had mere hours to design a set of experiments to prove it would really screw the Huygens mission. Sure, it’s a story about spectrum engineering — but it reads like a page-turner thriller by Michael Crichton. Check it out online here!
The upshot is that the error was fixed, in the nick of time. But, as the lead engineers admitted:
“We have a technical term for what went wrong here,” one of Huygens’s principal investigators, John Zarnecki of Britain’s Open University, would later explain to reporters: “It’s called a cock-up.”
Some wit has designed TV-B-Gone, a TV remote that has only one function: To turn TVs off. He got so annoyed at the omnipresence of TVs in public venues that he collected the “power off” commands for dozens of units and put them in a single keyfob device. Point it at the TV in your local bar during the Red Sox/Yankees game, hit the button, and presto: The TV will go dead. And so will you, since when people discover you’re turning their TV off in the middle of the game they’ll beat you into a bloody pulp.
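The underlying trick is conceptually simple: cycle through every known power-off code and blast them one after another until something dies. Here’s a rough sketch of that idea; the code table and the IR-transmit function are hypothetical stand-ins, not the actual TV-B-Gone firmware:

```python
# A one-button "off" remote, in miniature: iterate over every brand's
# power-toggle code and fire each one. Codes below are placeholders.
POWER_CODES = [
    ("Sony",      0xA90),
    ("Samsung",   0xE0E040BF),
    ("Panasonic", 0x100BCBD),
]

def turn_everything_off(send_ir):
    """Emit each brand's power code via the supplied IR-transmit function."""
    sent = []
    for brand, code in POWER_CODES:
        send_ir(code)          # one infrared burst per known code
        sent.append(brand)
    return sent
```

Since the device can’t know what brand of TV it’s pointed at, the brute-force loop is the whole design: spray every code and let the right one land.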
Which brings us to the real cultural meat of this subject. The TV-B-Gone is interesting, but not half so interesting as the furious debate it provokes about the role of TV in society. Wired News hung out with an anti-TV activist who used it to click off a huge bank of TVs at Euro Disney:
“It fills you with naughty laughter to know you did this and other people have no idea what happened,” Burke said. People around him noticed that the screens had turned off, but no one raised a fuss.
… TV-B-Gone has a single purpose: to power off televisions whenever the user feels like being a dick. [snip] Maybe after making his tens of dozens of dollars on the TV-B-Gone, Altman can invent a gadget that transports self-important cocks who think they’re waging a subversive culture war to a log cabin coffee shop where they can reassure each other how awesome they are for hating television.
And on and on the opinions go. Some bloggers point out that they are “easily mesmerized” by TVs in public places, and thus “philosophically love the idea”; others cackle, ironically, that they love TV-B-Gone because “I can piss people off and impose my views on others — then I’ll be a one-man Government!”
It’s kind of amazing. Even the most blood-soaked video games don’t cause this sort of cultural agon. But sixty years into its mainstreaming, TV is still a ferociously love-it-or-hate-it proposition.
(Thanks to Parker Morse for this one!)
Last month I wrote about scientists who accuse offshore oil companies of accidentally killing giant squid with blasts of incredibly powerful, 200-decibel sonar. It turns out that Lanny Sinkin, a Hawaiian environmental lawyer, was equally worried about the US military’s sonar hurting many other marine animals, such as whales and dolphins.
But Sinkin decided to do something rather remarkable: To let the whales take on the government themselves. So he initiated a lawsuit that pitted actual whales against Donald Rumsfeld and George Bush. Yesterday, Ninth Circuit judge William A. Fletcher decided that whales lack the standing necessary to sue:
We are asked to decide whether the world’s cetaceans have standing to bring suit in their own name under the Endangered Species Act, the Marine Mammal Protection Act, and the Administrative Procedure Act. We hold that cetaceans do not have standing under these statutes.
There’s a PDF of the decision online here. Man, this reads like something from a David Foster Wallace novel.
(Thanks to Boing Boing for this one!)
A couple of days ago my friend Chris Allbritton — the excellent blogger of Back To Iraq, who has been working in Time’s Baghdad bureau — wrote about John Martinkus, an Australian journalist friend of his who was kidnapped and then released. Martinkus said that during the interrogation, the captors vanished briefly to check up on the journalist’s work; Chris suspected that they Googled him.
Apparently that’s precisely what happened, according to Martinkus’ employers, SBS. As Australia’s National Nine News reports:
“They Googled him, they checked him out on a popular search engine and got onto his own website or his publisher’s website and saw he was a writer and journalist,” Mr Carey told AAP.
This is a really intriguing moment for the reputation economy. More and more, this is one of Google’s central functions: A way to quickly verify someone’s reputation. Two years ago, I blogged about what I called the “untouchables” — people who do not show up on Google at all. And increasingly I find that when I do a search for someone on Google and can’t find anything about them — not a single page — I’m quite freaked out. It’s like running into some “lost man” from a 1960s cold-war spy novel, somebody who has deliberately adopted a new identity and erased all traces of themselves. Not being visible on Google now seems kind of antisocial: In a digital age, it’s simply not polite.
Interestingly, Collision Detection has become a really powerful reputation-management device for me. Almost every time I call a company to interview a CEO, the company’s public-relations person immediately googles me, finds this blog, and reads a whole bunch of entries; then they’ll mention it to me when we talk, as if to say: see, I know something about you.
In a recent edition of New York magazine, the staff presented readers with a short guide on “how to disappear”. Among the first suggestions?
Begin your new life of anonymity by filing an action with the New York Lower Civil Court to change your name to Michael or Emily—the most common newborn names in the city—and then pick a Googleproof surname like Smith.
(Thanks to Jason Uechi for this one!)
My latest column for Slate is out — and in this one, I review four video games that let you run your own US presidential campaign. In my preamble:
As Steven Johnson wrote last year, it’s hard to believe that election sims didn’t exist sooner. Running a campaign isn’t that much different from running a football team or a battle squadron, and video games already do a superb job of modeling that stuff. In the four games I tried, strategy—moving your team around, allocating resources—was more important than the issues. Mostly, I got some practice gazing lustfully at vote-rich states like Florida, Pennsylvania, and Ohio. Playing as Kerry in one round of Political Machine, I won the popular vote but lost the election.
A college student recently turned on his new flat-screen TV, only to find — a few hours later — a squad of local police, air patrol, and search-and-rescue personnel at his door. Apparently the TV was emitting an international distress signal, which was picked up by US satellites. As the kid told CNN:
“They’d never seen signal come that strong from a home appliance,” said van Rossmann. “They were quite surprised. I think we all were.”
I had known that plasma TVs can produce unusual humming noises at certain altitudes, but I’ve never known one to actually cry for help.
(Thanks to George Murray for this one!)
A couple of geeks at the University of Louisiana have created a robot drum set — drums that drum themselves. They hooked up a bunch of actuators to the various components of a kit, then wired it to accept MIDI inputs. There’s information on how it works here, and some mind-melting video of it in action here. They call it PEART — Pneumatic and Electronic Actuated RoboT — in honor of Neil Peart, the histrionically hyperactive drummer for Rush. I’d love to see this thing pulling off some of Peart’s 128th-note high-tom rolls from one of Rush’s barely-listenable albums, like Grace Under Pressure.
Maybe for their next project they could design a virtual Geddy Lee, using some ultra-high-pitched acoustic device that emits notes seven octaves above high C and knocks sparrows dead out of the air.
(Thanks to Incoming Signals for this one!)
Here’s a cool site: Identifont, an online tool for helping you find a font when you can visualize it but don’t know its name. If you have an idea of a font you’re trying to locate, Identifont will throw a bunch of questions at you — e.g. “Does the Q tail cross the circle?”, or “Does the upper-case ‘J’ extend below the baseline?”. As you answer them, it slowly narrows down the font you’re describing, and then presents you with it.
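To make the mechanism concrete, here’s a toy version of that question-driven narrowing; the font “database” and its yes/no features are invented for illustration, not Identifont’s real data:

```python
# Identifont-style narrowing: each yes/no answer filters the candidate set.
# The feature values below are made up purely to show the mechanism.
FONTS = {
    "Copperplate Gothic": {"q_tail_crosses": False, "j_below_baseline": False, "serifs": True},
    "Carlton":            {"q_tail_crosses": True,  "j_below_baseline": True,  "serifs": True},
    "Futura":             {"q_tail_crosses": True,  "j_below_baseline": False, "serifs": False},
}

def narrow(candidates, feature, answer):
    """Keep only the fonts whose recorded feature matches the user's answer."""
    return {name: f for name, f in candidates.items() if f[feature] == answer}

remaining = narrow(FONTS, "serifs", True)           # two fonts left
remaining = narrow(remaining, "q_tail_crosses", False)  # one font left
```

Each answer halves (or better) the search space, which is why a dozen questions can pick one font out of thousands; it also explains why one wrong answer shunts you to a different font entirely.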
As it turns out, it doesn’t work that well. I tried to describe my favorite font, Copperplate Gothic, but even though I answered the questions pretty accurately, I couldn’t get it to spit out that font. Instead, I got Carlton, the font pictured above.
That’s when I realized that Identifont may be more fun if you use it simply to discover new fonts. You answer the questions basically according to your taste, and then you sit back and prepare to be surprised by what Identifont finds for you. It’s kind of like the flash-card techniques that MBA students in the 50s used to generate new ideas. They’d take a problem they were trying to solve, and then have someone pepper them with various questions, some relevant and some not; oftentimes the seemingly irrelevant questions were the ones that made them go “aha!” and arrive at a breakthrough, because they encouraged them to think in unconventional ways. (Trivia: Marshall McLuhan created a deck of idea-generation cards much like this; they were issued in a very limited edition and today are worth some insane amount of money.)
In the case of Identifont, I was trying to use the tool to drive at Copperplate Gothic, but I wound up with Carlton — a font I wasn’t looking for, but which I rather like, since it shares some similarities but is nonetheless different. Even though the machine got it “wrong”, it did something creatively cool. It’s much like how collaborative filtering and machine intelligence can lead us in interesting new directions. Our TiVos find shows for us; Amazon suggests books based on what we’re surfing. Sometimes they’re hilariously wrong, but much of the time they’re useful in a key way — because they knock us slightly off our expected path.
I’ve often argued that artificial intelligence is best when it remains slightly unhuman and slightly alien, because that’s when it most contributes to our lives. If we wanted to get truly human-like advice on a good font to use or a good TV show to watch, we’d just go talk to a like-minded friend. That’s what other humans are good for! But machines think in much simpler and thus far weirder fashions than we humans do, so even when they’re wrong, they’re wrong in interesting ways. Sometimes, that’s even better than being right.
(Thanks to Tribblescape for this one!)
You’ve probably heard about the controversies over RFID tags, which report the location and identity of whatever device they’re attached to. You may have heard of the scientists who recently developed a tiny tracking chip to implant in a patient’s skin. And you may even have heard of “location-based services” — mobile phones that know where they are and can, say, recommend the nearest Italian restaurant.
Now here’s a new buzzword: “Reality mining”. It was coined recently by the futurists at Accenture, and it stands for the type of data-surfing we’ll be able to do when everything around us — our phones, our cars, our pants, our dogs, and everything else that is and isn’t glued down — has the ability to report on its location and its current state.
Think of “reality mining” as a supercharged version of the presence-management abilities of the AOL Buddy List. The Buddy List gives you one or two simple bits of information about your posse: Who’s online? And if they’ve been idle, for how long? This lets you get a nigh-tactile sense of the current status of your friends, almost as if you were able to glance around the room and look at everyone. Now imagine your buddy list were able to track all sorts of other things: Where your spouse’s car currently is (and how fast it’s travelling), where your kids are (and who they’re with), how busy each of the local restaurants is, and which bank machine near you has the biggest lineup. It’s sort of like having ESP. The Accenture guys also call this “Reality Browsing”, which is maybe a better metaphor, and they give the following scenario:
Before they leave home, shoppers could check for local traffic conditions, parking availability, and size of checkout lines and shopping crowds. As additional sensors and Web services become available, we envision scenarios in which users can check if rental movies, dry cleaning, or theater seats are available before leaving the house.
The commercial-style applications are obvious enough. But what intrigues me are the non-commercial ones — tracking the social dynamics of our friends, being able to see and rewind data about our neighborhoods. It’s about visualizing the world in a really new way, and the thing is, this isn’t really all that far off in the future.
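A minimal sketch of what that supercharged buddy list might look like as a data structure; every field beyond the name is a hypothetical example, not anything Accenture has actually built:

```python
# A buddy list whose "presence" extends beyond online/idle to arbitrary
# tracked state: location, speed, queue lengths, whatever sensors report.
from dataclasses import dataclass, field

@dataclass
class Presence:
    name: str
    online: bool = False
    idle_minutes: int = 0
    extras: dict = field(default_factory=dict)  # e.g. location, speed, lineup

def glance(roster):
    """The 'look around the room' view: one status line per tracked thing."""
    lines = []
    for p in roster:
        state = "online" if p.online else "offline"
        detail = ", ".join(f"{k}={v}" for k, v in p.extras.items())
        lines.append(f"{p.name}: {state} ({detail})" if detail else f"{p.name}: {state}")
    return lines
```

The point of the sketch is that nothing conceptually new is needed: the buddy list is already a roster of status reports, and reality mining just widens what counts as “status”.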
As a very simple demonstration of reality mining, the Accenture guys made a 3D map of their California business-park neighborhood, where the current stock-market value of each company is represented by a shade of green hanging in the sky over their building (pictured above). As you fly through the 3D world, you can see the information made physical, sort of like health-bars in video games, floating over combatants to track how they’re doing. Indeed, video games are the past masters at displaying reality-mining-style info, so I wouldn’t be surprised if our everyday info-interfaces in the future start to look more and more like Grand Theft Auto.
(Thanks to Mindjack for this one!)
Apparently astronomers have discovered that there’s a huge cloud of frozen sugar near the center of the Milky Way. It’s glycolaldehyde, a sugar composed of two carbon atoms, two oxygen atoms and four hydrogen atoms. No word on how tasty that stuff actually is, but given that it’s barely a few degrees above absolute zero, you wouldn’t really wanna lick it anyway. But outside of the extremely cool discovery of our galaxy’s Tootsie-Pop-like qualities, the researchers also surmise that a sugary cloud like this could help seed life — if a comet passed through it, grabbed a bunch of the stuff, then deposited it on a nice warm planet. As The Scotsman reports:
Radio astronomer Dr Jan Hollis, from the American space agency NASA’s Goddard Space Flight Centre in Greenbelt, Maryland, said: “Many of the interstellar molecules discovered to date are the same kinds detected in laboratory experiments specifically designed to synthesise prebiotic molecules.
“This fact suggests a universal prebiotic chemistry.”
A giant squid was recently caught off the coast of British Columbia. Nothing unusual in that — as readers of my many previous giant-squid postings will know — except that this species of squid, the Humboldt, normally confines itself to warm-water areas such as the Gulf of California. Even weirder, last year a Humboldt was found off the shores of Alaska.
The culprit may be global warming, as some scientists told the CBC:
“It may have come up with a tonne of warm water, or it might be that they’re making their way north comfortably now,” says Kelly Sendall, senior collection manager at the Royal B.C. Museum.
Living in Manhattan as I do, which is very low to the water, I sometimes imagine what this place will look like when global warming has brought the sea level up by, oh, 15 feet. For the neighborhoods south of 14th St., that’ll put all the buildings in water up to the second floor. So when I walk down the street, I occasionally imagine what it would look like with people taking canoes up and down Lafayette. Now I’ll imagine it with people taking canoes up and down Lafayette … and being feasted upon by giant squid.
Both US presidential campaigns are banking a lot on “getting out the vote” — pumping money into door-to-door campaigns and phone banks aimed at making sure their base, and many new voters, head to the polls. But how useful will this really be?
Two Yale professors did a fascinating study to figure out how much, precisely, it costs to rustle up a vote. They analyzed data from state and local elections in an intriguing way: They compared the turnout of those who had received a get-out-the-vote appeal — a phone call, a door visit, etc. — versus those who didn’t.
The result? Door-to-door visits managed to bring out one additional vote per 14 people visited, which works out to a price of between $7 and $19 for that vote. Leaflets created only one new vote per 66 people reached, for a cost-per-vote of about $14 to $42. Telephone calling was incredibly ineffective, costing anywhere from $45 to $200 per vote.
But here’s where things get interesting. The two parties say they’re spending about $200 million on get-out-the-vote techniques this year. The professors figure that on average, each vote costs $50 to get out. That means, as the New York Times reports …
… that the tremendous mobilization efforts under way will increase turnout by about four million people, or 2 percent of eligible voters. Although unseasonable weather or other unforeseen events could throw this forecast off - and the expected close contest should arouse heightened participation - this year’s turnout is likely to fall between 2000’s rate of 54 percent of eligible voters and 1992’s rate of 61 percent. This moderate forecast stands in contrast to the image of unprecedented voting implied by reports of record numbers of people registering in many states.
Maybe we won’t see as many new voters as we’d hoped.
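If you want to check the professors’ back-of-envelope math yourself, here’s a quick sketch. The spend and per-vote figures come straight from the post; the eligible-voter total is my own assumption, back-solved from the “4 million people, or 2 percent” claim:

```python
# Back-of-envelope check of the turnout projection quoted above.
total_spend = 200_000_000   # both parties' get-out-the-vote budgets, in dollars
avg_cost_per_vote = 50      # the professors' blended estimate

extra_votes = total_spend // avg_cost_per_vote

# Assumed: roughly 200 million eligible voters, inferred from the
# article's claim that 4 million new votes equals a 2-point bump.
eligible_voters = 200_000_000
turnout_bump_pct = 100 * extra_votes / eligible_voters

print(extra_votes, turnout_bump_pct)  # 4000000 new votes, a 2.0-point bump
```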
It’s quite rare for a scientist to stumble upon a bold new insight about cognition. It’s even more rare to do so while experimenting with a bunch of ferrets that are being forced to watch The Matrix.
But Michael Weliky may indeed have won this surreal trifecta. Weliky, a brain researcher at the University of Rochester, had long assumed — as do many cognitive scientists — that the brain is somewhat inactive in the absence of stimulation. It’s kind of like that old joke that we only use 10% of our brain. Cognitive scientists don’t really believe that old saw, but they do generally assume that the brain is considerably less busy when it’s deprived of stimuli.
Weliky, however, decided to test this assumption. He took a group of adult ferrets, wired up their visual cortexes with probes, and then subjected them to three different forms of stimuli: a) A pitch-black room; b) a TV screen displaying nothing but static; and, last but not least, c) the movie The Matrix. His findings? As you might expect, viewing the movie and the TV static caused the ferrets’ visual cortexes to fire at 100%. But what was truly weird was that the pitch-black room registered 80% activity.
The first question, obviously, is — why are our brains doing so much visual work when there’s nothing to look at? Weliky suspects it’s because our ability to see and recognize things is contingent on cognitive models that are firing all the time, even when we’re not looking at stuff. That 80% activity rate is the baseline work the ferrets needed to do just to generate their mental model of reality. As he said in a press release:
“This suggests that with your eyes closed, your visual processing is already running at 80 percent, and that opening your eyes only adds the last 20 percent. The big question here is what is the brain doing when it’s idling, because it’s obviously doing something important.”
Fair enough. But there’s a much bigger question here, which is — ferrets? Ferrets WATCHING THE MATRIX?
As it turns out, Weliky has been studying ferret visual-cortical activity for some time, so his choice of that particular animal was driven merely by the fact that he’s extremely familiar with their neurophysiology. In previous experiments, he apparently used to shine lights into the eyes of unconscious ferrets to see if it produced any brain activity. (Which, when you think about it, is not really much less strange than sitting them down to watch Carrie-Anne Moss kick ass in bullet time … but whatever.) As to why he picked The Matrix, it seems pretty clear that Weliky could have used any visual stimulus he wanted, but that he simply couldn’t resist the metaphoric hilarity: What better movie to use for a study about how the brain develops mental representations of reality? Heh. While it is hard to tell whether Weliky is headed towards a Nobel prize or an IgNobel one, there’s no doubt that his findings are quite intriguing. If his data are solid, then they point to the fact that our brains are much busier than we suspect. As he concludes:
“In a way, our neural structure imposes a certain structure on the outside world, and all we know is that at least one other mammalian brain seems to impose the same structure. Either that or The Matrix freaked out the ferrets the way it did everyone else.”
(Thanks to Robin at Snarkmarket for this one!)
Best blog to make you seem smarter at cocktail parties
Which Blur song rocks the Fibonacci Sequence? How do you hack a Furby? And what kind of sound does your face make? Find your crib sheet at collisiondetection.net, the web’s go-to site for brainy technobabble-meets-pop-culture references. Tech geeks and Luddites alike will marvel at the daily tidbits from journalist (and M.I.T. alum) Clive Thompson, who writes about techno-arcania with wit and intellectual heft. Live long and prosper.
Can hurtin’ music actually hurt you? Last week, Harvard held the 14th annual IgNobel Awards — tongue-in-cheek prizes for world’s weirdest research. This year, the winner in the “medicine” category was “The Effect of Country Music on Suicide,” a paper that Steven Stack and Jim Gundlach published in a 1992 issue of the journal Social Forces. I can’t find a copy of the full paper online, but here’s an abstract that explains the argument:
In this article, we explore the link between a particular form of popular music (country music) and metropolitan suicide rates. We contend that the themes found in country music foster a suicidal mood among people already at risk of suicide and that it is thereby associated with a high suicide rate. The effect is buttressed by the country subculture and a link between this subculture and a racial status related to an increased suicide risk.
The IgNobel was obviously awarded because this research was so fluorescently surreal. But the IgNobels do not dispute the findings of the studies they honor; on the contrary, they are intended to highlight research that “first makes people laugh, then makes them think”.
So we’re still left with a serious question: Just how badly was I risking my life by listening to Dwight Yoakam last night? Gundlach began pondering this when he was teaching a class and discovered that Nashville, for some reason, has suicide rates much higher than would ordinarily be predicted by the known correlates (which include, apparently, high levels of unemployment, divorce, and the number of people who are Roman Catholic). Then, as the New Jersey Star Ledger reports …
“Everyone in the class said ‘country music!’” Gundlach said in an interview.
Further research, including analysis of country music lyrics, showed the major themes — including the travails of love, drinking alcohol as a way to deal with life’s problems, and a sense of hopelessness about work and finances — have all been linked to increased suicide risk. Country music listeners are also big gun owners.
When Gundlach and Stack tallied suicide rates in 49 large metropolitan areas, they found rates went up in sync with the proportion of radio air time devoted to country music.
Well, thankfully I don’t have a gun — or as we liked to call ‘em back in the day, an “equalizer” — kicking around my New York apartment. Even so, I somehow doubt that listening to today’s country would be particularly dangerous, because it’s much less intense than before. Old-school country from the 30s and 40s was incredibly downbeat stuff. Perhaps more dangerously yet, it was — like traditional blues of that period — so profoundly weird that no modern radio station today dares play it; in the jingoistic/faux-gritty age of Shania Twain, Gretchen Wilson and the unspeakably loathsome Toby Keith, old-school country would seem like a broadcast from the surface of Mars. In today’s soccer-mom-focus-group-honed New Country, you can be funny or cocky or angry or gosh-darn-downhome as you want, but you cannot be as truly spittle-flecked and eye-buggingly strange as the freaks who gave birth to the genre so many decades ago. I mean, go listen to Johnny Cash’s “Folsom Prison” album. God damn that’s an odd album. I think it’s that combination of desperation and woodshed weirdness that makes old-school country seem so wonderfully bleak, and today’s country, by comparison, so surgically devoid of any emotion. Even Gundlach seems to intuit this: A wire story quoted him as saying that the country music of 1992 was “gloomier than today”.
A victory for public health, and a tragedy for music. There’s a tear in my beer.
(Thanks to the Eyebeam reBlog for this one!)
Yeah, well, your cat would look pretty weird too if you took a picture of it using an infrared lens.
(Thanks to Slashdot for this one!)
Dig this. British doctors have invented a new way to treat angina: Vibrating pants. Angina, of course, occurs when the body’s circulation becomes so weak that the heart doesn’t get enough blood. That’s where the pants come in. As the BBC reports:
When the heart is resting the cuffs inflate and then deflate again just before each heart beat.
The sequential inflation and deflation of the cuffs increases blood flow to the heart and encourages tiny new blood vessels to grow around the blocked arteries to feed the heart.
Quite apart from whether this treatment works, I wanted to blog it because it’s just so purely awesome to be able to type the phrase vibrating pants.
Last week, the popular Dutch singer Andre Hazes died. (That’s him pictured above.) The news came out just before noon, and within a few minutes, the country’s mobile-phone carrier noticed something strange: The number of text messages instantly doubled. People were spreading the news themselves, instantly, via SMS.
The incident led phone-pundit Mike Masnick to wonder whether this could be a new paradigm for how news is spread. Rather than wait for everyone to tune into their newspaper or TV show or web site, how about having reporters provide news in a format that readers can easily re-broadcast — using SMS, email, or whatever? As Masnick puts it:
From a news organization’s perspective, then, the opportunity is to package the news not in a way that simply attracts more readers, but to be easily disseminated outward by those readers. As the E-Media Tidbits article notes, for the news of Hazes death, a news organization could have sent: “Here is a message to forward, a picture, and part of a Hazes song attached,” and then just let the power of social distribution take over.
It reminds me — to cite a more depressing example — of 9/11. I was at Flatbush and Atlantic that morning, with a crowd of people in the street, watching the buildings burn. When I looked around, I noticed that virtually everyone had pulled their phone out and was talking to someone else, describing the freakish, horrifying scene. Then I suddenly realized: The mobile phone was acting as a sort of de facto news agency, spreading on-the-spot information faster than CNN or Fox.
One of the biggest problems with camera phones is that they’re too small to allow for true focusing lenses. To focus a picture, two lenses have to be able to move — to slide closer together or further apart. And given how teensy most phones are, they don’t even have the leeway you find in the smallest digital cameras. Given all the circuitry that has to fit inside the phone, there’s no room left over for several lenses.
The eye, in contrast, works in a very different way. Rather than change the distance between two lenses, the eye simply uses one lens that can alter its own shape — thus changing its focus. Now a couple of scientists in Singapore claim they’ve created a cameraphone lens that does precisely this. It’s got a flexible polycarbonate lens which is hollow inside, and filled with water. The cameraphone can pump water in or out in tiny amounts, to subtly change the shape of the lens. It allows a cameraphone to do something hitherto impossible, which is to get a true close-up shot (that’s an example above). There’s a story on an Asian web site about it here.
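To get a feel for why reshaping a lens refocuses it, here’s a toy calculation using the standard thin-lens “lensmaker’s equation”. The radii of curvature and the use of water’s refractive index are my own illustrative assumptions, not the researchers’ actual figures:

```python
# Sketch: the lensmaker's equation, 1/f = (n-1)(1/R1 - 1/R2), says a
# lens's focal length depends on how sharply curved its surfaces are.
# Pumping fluid in bulges the lens (tighter curves, shorter focus);
# pumping it out flattens the lens (longer focus).

N_WATER = 1.333  # refractive index of water (assumed fill fluid)

def focal_length_mm(r1_mm, r2_mm, n=N_WATER):
    """Thin-lens focal length from the two surface radii of curvature."""
    inv_f = (n - 1) * (1.0 / r1_mm - 1.0 / r2_mm)
    return 1.0 / inv_f

# A symmetric biconvex lens (R1 = +10mm, R2 = -10mm), versus a slightly
# flatter one after a bit of water is pumped out (R = +/-12mm):
f_bulged = focal_length_mm(10, -10)   # shorter focal length: close-up focus
f_flatter = focal_length_mm(12, -12)  # longer focal length: distant focus
```

Tiny changes in fluid volume shift the curvature, and hence the focus, continuously — no sliding parts required, which is the whole trick.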
It reminds me of Adaptive Eyecare, the company that developed the world’s cheapest eyeglasses. They too have flexible plastic lenses, which are filled with clear oil. You snap an oil pump onto it, then pump oil in or out until it achieves the perfect focus for your eyes. The idea was developed by an Oxford professor and is currently being rolled out in several African countries, where many farmers suffer from poor eyesight brought on by malnutrition.
(Thanks to Ratchet Up for this one!)
Pixel Moon is brilliant: It’s a web site that displays a moonscape where you can design your own moon base and add it to the landscape. So far, 78 people have created bases, many of which are quite hilarious — including a Vodafone kiosk and a shot of four Star Trek characters staring at something lying on the ground.
But to me, what’s really interesting is how big the page is. To see all of Pixel Moon’s surface, you have to scroll four screenfuls sideways, and several screenfuls down. I’ve always wondered why more web artists don’t take advantage of the almost-limitless size of a web page by creating digital art that spreads out to the left and right (and up) as well as down. Indeed, someone could create a web page that was, say, 14 feet wide and 15 feet tall, and then anchor the “opening” part of the page at dead center. When you first loaded the page, you’d start in the center of that massive terrain, and wander around like a voyager. That’s much the way many online “immersive world” video-games work: You arrive somewhere in the world, but can wander off in any direction for quite some distance. It gives a fun sense of scope and immensity to the virtual space.
Here’s an interesting question: What’s the “biggest” web page ever created? Has anyone ever seen a web page that was, like, 14 feet wide and 15 feet tall, in real dimensions?
(Thanks to Ratchet Up for this one!)
For years, the Swiss government has paid hikers to trek through the Alps and tell them what scenery they liked and didn’t like. When hikers discovered that a good view was being obstructed by trees, the government would unleash lumberjacks — and cows — to deforest the area, imagineering the scene back into the Platonic tourist ideal.
Recently, a couple of scientists got a different idea. They created a virtual version of the Alps, then programmed thousands of virtual tourists, each equipped with roughly the same aesthetics as the average tourist. Then they turned ‘em loose in the digital Alps and had them report back what looked cool and what didn’t. As The Economist reports:
An agent can be unleashed again and again on a particular part of the landscape, and will make decisions about where to go based on its previous experiences. After a few runs, agents start to avoid paths that they find uninteresting, and at the end of each simulation they provide feedback about their routes. Dr Nagel and the team use these data to work out how pleasurable each route was.
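That learn-and-avoid loop is easy to caricature in code. This is my own toy sketch — not the researchers’ actual model — in which agents keep a running “interest” score for each path, usually take the best-known one, and occasionally explore:

```python
import random

# Hypothetical scenery scores per path; higher = more interesting.
PATH_INTEREST = {"ridge": 0.9, "forest": 0.6, "car_park": 0.1}

def run_agent(trials=200, explore=0.1, rng=random.Random(0)):
    """Simulate one tourist-agent repeatedly choosing among paths."""
    scores = {p: 0.5 for p in PATH_INTEREST}   # neutral prior for each path
    for _ in range(trials):
        if rng.random() < explore:
            path = rng.choice(list(scores))     # occasionally try anything
        else:
            path = max(scores, key=scores.get)  # otherwise take best-so-far
        reward = PATH_INTEREST[path]
        scores[path] += 0.1 * (reward - scores[path])  # running average
    return scores

final = run_agent()
```

After a few hundred runs the agent’s scores converge on the genuinely scenic routes, and the dull ones simply stop getting walked — which is, in miniature, the feedback the Swiss team harvests.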
There’s a web site devoted to the project, with some rather cool visuals of the pathways the virtual tourists trod. That picture above is one part of the Alps; the colored blocks are the agents.
(Thanks to Roland for this one!)
Over at Fast Company, Charles Leadbetter pushes an interesting idea: The increasing scientific, political, and cultural importance of “pro-ams” — amateurs who hold themselves to professional standards. One good example is in astronomy: Many astronomical discoveries these days are coming from amateurs with backyard telescopes, because technology has made those telescopes increasingly powerful. Or consider Linux, an operating system that was created by volunteers, yet which now rivals Microsoft’s top products. In the world of music, cut-and-paste apps like Apple’s Garage Band are making amateur performers increasingly polished.
The interesting thing, as Leadbetter points out, is that this completely reverses the trends of the last few hundred years:
The 20th century was marked by the rise of professionals in medicine, science, education, and politics. In one field after another, amateurs and their ramshackle organizations were driven out by people who knew what they were doing and had certificates to prove it. Now that historic shift seems to be reversing. Even as large corporations extend their reach, we’re witnessing the flowering of Pro-Am, bottom-up self-organization.
Interestingly, an example he doesn’t mention is blogging. “Amateur” authors — I hesitate to call them “amateurs” because some bloggers are more fun to read than many paid professionals — are getting so much audience these days that the pros are freaking out, as the New York Times Magazine documented last week in its excellent story on political bloggers.
Anyway, Leadbetter is set to release a book-length version of his argument in November, and I’ll be intrigued to read it.
(Thanks to Slashdot for this one!)
In video game circles, the big debate these days is about whether video games are driven by narrative or by play-style — narrativology vs. ludology. I think both approaches are useful to understanding games, though I lean towards ludology as the more germane of the two. While narrative applies to novels, plays, TV, movies, and games, play — working within a set of rules that impose an arbitrary goal and make it tricky to achieve — applies only to games. It is what makes games most game-like.
The other problem with narrative is that it is, at heart, essentially noninteractive. As Northrop Frye argued, the pleasure of a story is masochistic: The fun is in sitting there and going, “yeah? And then? And then? And then?” without ever knowing what’s next. Having any control over that situation changes this dramatically: The masochism is what makes narrative narrative. Take that away, create an interactive situation, and you’ve got something very cool: A game with open-ended, forking scenes, amazingly cinematic visuals, a powerful metaphoric and symbolic system, and other cool stuff. But it’s not a narrative anymore, so studying it as such won’t tell you very much.
My bias here is that I studied literature, so I view the word “narrative” as a precise word — and I think a lot of game theorists use it in an overly imprecise way. They look at a game and see the dramatic scenes, film-like visuals, and metaphoric consistency, and think: Hey, that’s a story! It’s not. And here we come to the point of this blog entry, which is to note an excellent book reviewed at Slashdot today: Interactive Storytelling by Andrew Glassner. As the reviewer notes:
Glassner’s strongly held opinion, which he argues quite coherently, is that a great story is the product of one (or a few) expert storytellers presenting a strong, consistent vision to you, the consumer. The fabled holy grail of gaming is letting the player do whatever they want — full interactivity. And this is to a point fundamentally incompatible with telling a great story. Conflict drives most stories — what if the player quite reasonably minimizes conflict?
Precisely. More interestingly:
… people gravitate toward the entertainment that has the highest fun-to-work ratio. Television is hugely popular since the fun is high to very low, but the work is near zero. They will do more work if it offers a lot more fun. Which means you shouldn’t force your players to do stupid, boring, unnecessary work like running through a dozen screens again and again to get between important locations.
It’s a good question: Why do game-makers so often force you to trek halfway across a virtual world, an incredibly boring task, just so you can get to an important puzzle or battle? Why not just let you teleport to the action? Because they want to show off their genuinely impressive immersive world, of course. Granted, the world really is a lot of fun to explore. But when a game is designed to force you to slog around just to “get through the game”, it is a symptom of the problem of designing games as if they were stories.
A better way of thinking about it might be to treat the immersive world as a piece of architecture. How would you encourage the player to see everything, to admire the elements of design? That’s a project that has nothing to do with story. And indeed, obsessively thinking about the game as a “story” might well get in the way of the true pleasure of the game, which is exploring the world.
Here’s another way of thinking about it: A game can have characters, dramatic sequences, a consistent symbolism, and yet still not be a story. Sure, most games have things that happen one after another. But “one thing happening after another” is not a narrative. Things happen one after another in The Sims, but those aren’t stories either: They’re simulations, cultural artifacts that are incredibly interesting and engrossing for reasons that have nothing to do with the traditional definition of narrative.
Anyway, I’ll stop now. I salute anyone who’s made it to the end of this incredibly didactic entry.
I'm Clive Thompson, a writer on science, technology, and culture. This blog collects bits of offbeat research I'm running into, and musings thereon.
Currently, I'm a contributing writer for the New York Times Magazine and a columnist for Wired magazine. I also write for Fast Company and Wired magazine's web site, among other places. Email or AOL IM me (pomeranian99) to say hi or send in something strange!
May 20, 2011 » 02:28 PM
From Christopher Kennedy’s very droll book “Nietzsche’s Horse”.
July 28, 2010 » 07:35 AM
“Wr” - S
July 06, 2010 » 10:05 AM
My Xbox broke, and I was trying to Google some possible technical solutions, when I noticed that Google appears to be encouraging me to make a typo. I suppose it’s possible that Google’s algorithms know that typing “wont” instead of “won’t” would produce better results.
June 29, 2010 » 05:00 PM
On the other hand, when I tried the test for multitasking, I was pretty abysmal. I performed worse than people who identify themselves as heavy multitaskers, and those who identify as low multitaskers.
June 29, 2010 » 04:58 PM
I finally got around to trying out the interactive “test your distractibility and multitasking” page at the New York Times, which they put up alongside their story earlier this month about how computer distractions are eroding our lives.
According to the test, I guess I have good focus — I’m not very distractable!
El Rey Del Art
Frankly, I'd Rather Not
The Shifted Librarian
Howard Sherman's Nuggets
Donut Rock City
The Antic Muse
Techdirt Wireless News
Corante Gaming blog
Corante Social Software blog
Arts and Letters Daily
Alan Reiter's Wireless Data Weblog
Viral Marketing Blog