Jason Kottke, a long-time and superb blogger, has decided to quit his regular paying work and blog full-time! He’s running a three-week fundraising drive to support it, and I’ve already offered a tip in his tip-jar, because his blog is a truly stupendous resource of stuff that keeps me thinking. I’ll be intrigued to see how things turn out. Jason’s decided to abjure advertising, since he’s concerned it would queer his focus or his relationship with his readers. He wrote a FAQ explaining why he’s trying this, which includes:
In the recent comics issue of McSweeney’s, Chris Ware notes that “in the past decade or so, comics appear to have gained some greater measure of respect, due in no small part to the number of cartoonists who have begun to take the medium seriously”. This is me taking online personal publishing seriously because I feel it deserves as much.
I’ll be fascinated to see how his blog changes when it’s not competing for attention with his other work. He has a wittily sober assessment of his move — “dumb economic decision” — and notes that he expects to make 1/3 to 1/2 of his previous income; if he can make it last a year before collapsing, he figures, he’ll be lucky. This part of the FAQ made me laugh out loud:
Are you excited?
If by that you mean “do you feel like you’re going to throw up?” then yes.
Check out his site, and if you like it, drop a buck or two in his cup!
Here’s a depressing little anecdote that illustrates the surprisingly large array of horrible side-effects caused by the war on drugs. It comes from a terrific story in the Boston Globe’s Ideas section reporting on a conference of folks from the intelligence community. One of their biggest problems in fighting radical Islamic terrorists? The US government doesn’t have enough people who speak the various dialects of Arabic.
So why don’t they just go out and, y’know, hire some sharp young linguists? Good question. Rebecca Givner-Forbes, a recent graduate of Georgetown who studied Arabic in college, explained the problems here:
Givner-Forbes did not pursue a job at the NSA or CIA, preferring instead to work for a private company specializing in intelligence. She explained that the agencies often scare away precisely the linguists they should be attracting. She mentioned a friend who wanted to work for NSA but had smoked pot in the past year, and was therefore ineligible, and pointed to the 20 Arabic linguists who have been fired by defense and intelligence agencies since Sept. 11 for being gay: ”You’re not going to find the perfect translator who fits all your lifestyle requirements.”
For years now, one strand of cognitive theory has argued that language and math are entwined in our brain’s circuitry. For example, some studies of brain activity have shown that when people were given sums to calculate, the left frontal lobe — an area generally related to verbal language — would light up. According to this theory, if you lost your ability to handle language, you’d also lose your mathematical ability.
But that may not be true, as some UK scientists have shown. They took a bunch of people with severe aphasia: They couldn’t speak or understand grammatical language. “Grammatical” is the big deal here; these people’s problems were not with understanding the meaning of individual words, but with understanding their correct order. They knew what “lion” and “man” and “hunted” meant individually, but wouldn’t be able to figure out the difference between the sentences “The lion hunted the man” and “The man hunted the lion”.
So when the scientists gave these patients mathematical sums with different structures — such as “52 minus 11” and “11 minus 52” — they had no problem understanding that the two were different expressions with different meanings. Apparently, mathematical grammar relies on brain circuitry slightly different from that which parses verbal grammar. As Rosemary Varley, one of the scientists from the University of Sheffield, told the BBC:
“Despite profound language deficits these guys showed advanced cognitive abilities, which indicates considerable autonomy between language and thinking.”
Dig this: There’s a new Dyson vacuum cleaner that makes a phone call to the manufacturer when it needs spare parts. As the London Sun reports:
The gizmo alerts the user if it has broken down or needs a replacement part.
The owner then dials the number of the Dyson call centre and holds the telephone receiver to the vacuum cleaner.
The machine transmits a message telling engineers what’s wrong and orders any new part it needs.
I am literally beside myself with joy at the vision of thousands of housecleaners holding a phone up to their vacuum so it can transmit some mysterious parrot-modem-squawk-language to the mother ship. But quite apart from the silliness of it all, it’s a usefully concrete, physical metaphor for what much of our software already does.
Ever read the fine print on any of your software? Neither do I, mostly. But when I have, I’ve usually discovered that the company — Microsoft, Apple, Adobe, etc. — has reserved the right, at whatever point in the future they choose, to have the software communicate information back to them about what sort of machine I own, what other software I run, and precisely what I’ve been doing with their software. Since this is invisible — since we don’t have to actually hold the phone up so our software can speak — we mostly ignore these intrusions. (Unless, as I do, you use a firewall like Zone Alarm that reports any attempt by a piece of software to access the Internet.)
But the time will come, and come quickly, when an increasingly large number of household products — fridges, stoves, microwaves, phones, vacuums, hot-water systems — will be networked. They’ll be able to skip the hold-the-phone-for-me step, and simply tell their manufacturers what we’ve been doing with them. And you probably won’t be able to buy a household tool that doesn’t do that.
I’m not a hard-core privacy nut, but that prospect freaks me out a bit.
(Thanks to Engadget for this one!)
Back when I was at MIT, I took an amazing class with Sherry Turkle in which she discussed her ongoing work studying how people relate to seemingly-intelligent machines. She was particularly interested in the booming trend in Japan of children giving robots to their elderly parents. The robots theoretically serve a medical role — they can continuously monitor things like blood pressure or breathing patterns, and alert a doctor if something goes awry. But what really freaked Sherry out was that the robots were also posed as an emotional support. Since the children couldn’t be bothered to hang out with their near-death parents, they’d give ‘em a robot to talk to instead.
I used to think the emotional-robots-for-the-eldery trend was just a flash in the pan, but it shows no sign of diminishing. Today, toy giant Tomy releases a new line of “Yumel” dolls that have 1,200 built-in phrases, and are billed as a “healing partner” for dear old dad or mom. As the AFP reports:
The doll can be programmed to “sleep” or “wake up” in accordance with the owner’s pattern, saying “good morning” with open eyes at due time or inviting the elderly to sleep with the doll’s eyelids drooping.
“I feel so good, g-o-o-d n-i-g-h-t,” the doll says before falling asleep if the owner pats it on the chest gently.
Or Yumel may ask, “Aren’t you pushing yourself too hard?” when it judges the owner has been going to bed too irregularly or not spending enough time playing with it.
“If you lead an orderly life, Yumel will be in a good mood, singing songs or pleading with you to do something like buying him toys,” Kiriseko said.
This latter quality — pleading for assistance — is a relatively new development in robotics, and, as Sherry has written, it’s the most powerful way for a robot to forge a strong emotional attachment with someone. We used to think that robots would be omnipotent and powerful; indeed, as last year’s I, Robot evidenced, our biggest pop-culture fear is that our icy, perfect robots will simply out-evolve us and leave us behind. That fear is partly correct, of course; just ask anyone whose job has been given to a relatively error-free robot.
But the reverse is happening, too: We’re creating a generation of robots that demand to be taken care of — because as it turns out, what we as humans most need is to be needed. This is what the toy-makers realize: If you’re an 80-year-old Japanese woman who’s been effectively abandoned by her kids, maybe what you crave more than anything else is not necessarily attention, but someone who wants you to take care of them.
(Thanks to Ian for this one!)
NNDB just launched — a sort of Internet Movie Database for real-world famous people. It pitches itself as an “intelligence aggregator” that finds links between the famous, be they other people, institutions, or memes. As it notes on its front page:
Superficially, it seems much like a “Who’s Who” where a noted person’s curriculum vitae is available (the usual information such as date of birth, a biography, and other essential facts.)
But it mostly exists to document the connections between people, many of which are not always obvious. A person’s otherwise inexplicable behavior is often understood by examining the crowd that person has been hanging out with.
The concept is pretty cool. But I can’t find anything on the site that explains how the info is collected or collated, so I have no idea how well this thing works — or how it works at all. Anyone know anything more about it? The domain is registered to “Soylent Communications” in Mountain View, California.
(Thanks to Cris for this one!)
My friend the artist El Rey — who regular readers will recognize from many previous posts, including the time when my attempt to get his squid painting turned into a stamp was busted by The Man — has just started a blog! It’s devoted to his thoughts on art, which are very cool and down to earth. In his first few posts, he tackles the question of color in paintings:
I still haven’t sat down and fully thought out my views on color, but I know I care about ‘em maybe disproportionately. If a picture has the properties of, say, subject matter, line/draftsmanship, color, and manner of execution, I’d say color gets half of my attention and the rest is divided up, with subject matter ahead of the others.
It’s kinda similar to when I was just starting out in music and I got my first distortion pedal and guitar amp; the different noises I could make interested me more, really, than what notes I played. I got a Nord Lead 2 Rack synthesizer in a gear trade and I haven’t used it much to actually play notes. More often than not, I just set a little serviceable gibberish loop going and tweak the knobs, rolling around in the delicious sounds exuberantly like a dog in something stinky. Color can be like that for me.
He also points to some way nifty outside resources that have informed his style, such as the Speedball pen technique that produced the exquisitely hand-lettered fonts on movie posters in days of yore. (That picture above is from a Speedball technique manual.) He points to this really jaw-dropping video of a current Speedball master tossing off note-perfect font letters by hand.
The current issue of New York magazine features a piece by me on the current frailties facing the city’s subway system — which is over 100 years old now and still has many of its original parts! Quite apart from the question of the crisis, the story was a blast to write, merely because the system’s famous complex engineering is really interesting. As I called it, New York’s subway is “the world’s most remarkable Rube Goldberg device — with cutting-edge fiber-optic switches sitting alongside pre-World War II Bakelite phones, custom-engineered radio cable running along areas festooned with dripping stalagmites, and passengers streaming obliviously by, barely glancing up from their BlackBerrys.”
As any engineer knows, with that many moving parts, whatever you’re not actively fixing is in the process of breaking. And for ten years now, the subway has been mercilessly screwed for maintenance-and-upgrading cash by New York governor George Pataki. So my piece opens with a hypothetical disaster:
Picture this: Sometime in the near future, you get on a subway train heading into Brooklyn, and zoom into the tunnel. Unbeknownst to you, though, something bad is happening on the other side of the river. A track fire is smoldering. Throughout the subway, there is fine-grained metal dust that comes from the constant grinding of wheels on rails. It’s combustible stuff, and tonight, as the train ahead of you leaves the station, the 600-volt current from the third rail arcs and ignites some of the dust, like a Fourth of July sparkler. The sparks torch a mess of paper wedged on the tracks, left behind because budget cuts have resulted in fewer cleanups. Normally these fires are a mere nuisance—but this one really gets going, and soon the tunnel ahead of you fills with acrid smoke. The tunnel’s nearest ventilation fan hasn’t been fully repaired, so the smoke doesn’t clear. The motorman tries to contact his command center, but his radio has hit one of the system’s “dead spots,” so he gets no signal. Chaos ensues: The car fills with smoke, nobody has any clue what’s going on, and a bunch of passengers start kicking out windows in a bid to escape.
Hypothetical, but, as I found out, not far-fetched. If you want to read the whole story, it’s online here, or if you’re local, buy a copy on the newsstands now!
(Thanks to Blogdex for this one!)
Here’s a terrific way to respond to the unscientific idiocy that is “intelligent design”. If I.D. presumes that life is so complex that it could only be designed by an intelligent being, then let’s examine life as if it were designed. At which point the question becomes: Was it well designed? How good is the engineering of this unnamed, omnipotent creator?
Rather slipshod, as Jim Holt discovered when he conducted this thought experiment in yesterday’s New York Times Magazine. Holt points out that in mammals, the recurrent laryngeal nerve doesn’t go directly from the cranium to the larynx, “the way any competent engineer would have arranged it.” Instead, it circles around the neck and chest, which means a giraffe has a 20-foot-long laryngeal nerve when it would only need 1 foot. “If this is evidence of design, it would seem to be of the unintelligent variety,” Holt notes. What’s more, 99 per cent of all species that have ever existed have died out — which again makes no sense for a “created” world. Both these effects are, however, easily explainable by evolutionary theory — which posits that species produce a constant stream of random mutations, the vast majority of which simply don’t work out, while weird-looking variations (such as that laryngeal nerve) persist so long as the overall organism is fit to survive.
Here’s the best part:
The gravest imperfections in nature, though, are moral ones. [snip]
Why should the human reproductive system be so shoddily designed? Fewer than one-third of conceptions culminate in live births. The rest end prematurely, either in early gestation or by miscarriage. Nature appears to be an avid abortionist, which ought to trouble Christians who believe in both original sin and the doctrine that a human being equipped with a soul comes into existence at conception. Souls bearing the stain of original sin, we are told, do not merit salvation. That is why, according to traditional theology, unbaptized babies have to languish in limbo for all eternity. Owing to faulty reproductive design, it would seem that the population of limbo must be at least twice that of heaven and hell combined.
This is just lovely: the Maoist Internationalist Movement website runs reviews of video games, in which it carefully picks apart the running-dog capitalist assumptions that undergird the major war and sim titles. Sim City, for example, has “completely bourgeois assumptions”; and while the reviewer kind of enjoys Microsoft’s Rise of Nations, he admits that “to sell a game modeling how to achieve peace and harmony is a lot to ask our gaming bourgeoisie at the moment”. The reviews are both a) unintentionally gut-bustingly funny, and yet b) oddly perceptive at times, though a) tends to overshadow b).
My personal favorite is the review of Star Wars: Knights of the Old Republic, in which the writer just completely loses his shit:
It is far easier to use your powers to steal, than to do charity. The game is far easier if you “go capitalistic,” which clearly points to early indoctrination conspiracy by the fascists.
Throughout the game, the player is pitted in fights against common people, where s/he is shown that to resist the powerful means certain death. The deception is perfect, since the player is the power and this pries his/her mind open to indoctrination even more.
As is well known for the fascist bastards, they take enjoyment in killing others, and the game rewards the player for killing common people. In fact, it is not possible to progress in the game without killing commoners, and the manufacturer plainly tells the player: those who have the courage to kill, are strong. Those who don’t are weak.
In one part of the game, the player fights for money—to the death. This serves well to illustrate the society, where money is everything and human life is worth nothing. The worth of a corpse is solely a function of its persynal properties, which have to be looted to advance in the game.
All in all, the Cossacks/Knights of the Old Republic is one of the most manipulative pieces of software ever devised. It leeches morality of young minds and prepares them to kill their peers to prevent a revolution. After all… the strong fascist knight shall always win.
(Thanks to Boing Boing for this one!)
Ever wonder why you overcommit yourself — promising to do more things than you’ll ever have time to accomplish? In the current issue of the Journal of Experimental Psychology, two scientists tried to figure out this puzzle, by devising a couple of experiments comparing how we commit time and how we commit money. It turns out that we’re much better at predicting how much money we’ll have in the future, probably because money is a tangible asset; time is harder to judge. As the press release for the study notes:
Can people learn to predict future time demands more in line with reality? The authors observe, “It is difficult to learn from feedback that time will not be more abundant in the future. Specific activities vary from day to day, so [people] do not learn from feedback that, in aggregate, total demands are similar.” Money’s “slack pools” are smoother, more equal and more predictable over time.
You can read the entire study itself online here as a PDF.
(Thanks to Lifehacker for this one!)
When historians look back at the history of video, I predict they’ll regard the evolution of the “pause” button as a weird, aesthetically important moment.
It used to be that freeze-framing a piece of live action was only possible for high-end sportscasters or artistic stop-action photographers. But when the VCR emerged, it gave everyone that ability. Back in the late 80s, the satirical Canadian magazine Frank used to run cartoon strips composed of paused moments from the nightly news. Invariably, they’d capture the newscaster — or a political guest — making one of the incredibly ungainly expressions that flicker across one’s face in the course of normal speech. They’d put sardonic captions beneath it, and presto: A form of political-remix comedy was born. (Well, maybe not “born”; I’m sure someone had done this before.) These days, the freeze-frame-with-witty-caption is de rigueur on comedy web sites and TV shows. TiVo has amped freeze-frame culture up into the stratosphere.
This is why I was intrigued to learn about PAUSE, a recent art exhibit by Chris Larson. Larson created a wood replica of the car from The Dukes of Hazzard — the General Lee — crashing through the cabin of the Unabomber, Ted Kaczynski. It neatly evokes the gorgeous riot a TiVo user sees every day: The kinetic insanity of pop culture striking a sudden pose. As Larson puts it in his accompanying text:
Forcing together two illogically relevant worlds, Chris Larson creates a monument to duality. Crashing the General E. Lee, the 1969 Dodge Charger from TV’s The Dukes of Hazzard, into a wooden shack, representing Ted Kaczynski’s Montana cabin, brings into the same space similar ideologies expressed with both childhood recklessness and premeditated social disregard.
Heh. Larson seems like a brilliant dude, but … man alive, that piece of prose is a good example of why artists should avoid writing their own accompanying texts; the pomo lit-crit jargon is precision-engineered to piss off the average viewing public. Nonetheless, check out the site — I’m just sad I missed this exhibit, since it closed two weeks ago here in New York. Sigh.
(Thanks to Sensory Impact for this one!)
Apropos of my column last week in Slate about communicating with the paralyzed — the locked-in or minimally conscious — here’s a wild story: A woman who has been mostly incommunicado for 20 years has suddenly started talking again. Sarah Scantlin (pictured above) was struck by a drunk driver in 1984, and for 20 years had only been able to blink her eyes once or twice for “yes” or “no”, and no-one was entirely clear how much she really understood. This January, she began to speak. Doctors have no idea how this regeneration took place, but the conversations she’s been having are both poignant and unsurpassingly surreal. As the Associated Press reports:
Family members say Scantlin’s understanding of the outside world comes mostly from news and soap operas that played on the television in her room.
On Saturday, her brother asked whether she knew what a CD was. Sarah said she did, and she knew it had music on it.
But when he asked her how old she was, Sarah guessed she was 22. When her brother gently told her she was 38 years old now, she just stared silently back at him. The nurses say she thinks it is still the 1980s.
(Thanks to Plastic for this one!)
The bleed between the online game-world and the real one continues apace. For years, people have been using the real world to buy and sell virtual items like castles, platinum pieces, and in-game characters. Now Pizza Hut has inverted the proposition — and created an app that allows you to order a pizza from inside the game. As their web site says:
You’re in luck — pizza is just a few key strokes away! While playing EverQuest II just type /pizza and a web browser will launch the online ordering section of pizzahut.com. Fill in your info and just kick back until fresh pizza is delivered straight to your door.
Actually, geeks have been mining the virtual-world -> real-world bleed for some time now. One guy I talked to was doing some unauthorized bot-farming: Setting up bots that would make raw materials he could sell to get virtual money (and then sell at PlayerAuctions.com for real money, of course). The game supervisors don’t allow this, so they had staff people who would walk around the game, finding miscreants with bot-farms and banning their accounts. So this guy wrote a basic ALICE chatbot script for the bots to use. If you walked up to them and said hello, they could keep up a reasonable conversation for a few minutes, since in-game chat — even amongst actual human players — is pretty stripped down, and not hard to auto-emulate. (Saying “lag” — i.e. complaining that you can’t easily chat right now because the Net is clogged and you’re experiencing lag-time — can buy a bot a few crucial minutes of grace.) But this guy knew that his bots couldn’t pass as human for too long under careful scrutiny, so he also wrote a script that would send a message to his pager whenever a bot got approached. That way, wherever he was — at work, in the bath — he could immediately race to his computer, log in, and successfully convince the supervisor that the bots were real people.
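The scheme is simple enough to sketch. Here’s one hypothetical “tick” of such a watcher — every name and data structure here is my own invention for illustration; the original was presumably wired into the game client itself:

```python
def bot_tick(bot, send_page, chatbot_reply):
    """One cycle of a (hypothetical) farm-bot watcher: if a player has
    approached, page the owner and stall with canned chat; otherwise
    keep grinding out raw materials."""
    if bot["someone_nearby"]:
        # Alert the owner's pager so a human can take over the account
        send_page(f"Bot {bot['name']} approached -- log in now!")
        # Meanwhile, stall: a scripted reply, or the trusty "lag" excuse
        return chatbot_reply(bot["last_message"]) or "ugh, lag -- one sec"
    bot["materials"] += 1  # nobody watching: back to farming
    return None
```

Run in a loop over all the bots, that’s the whole trick: the chatbot buys minutes, the pager buys a human.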
That basic idea — sending messages from the game world out to the real world — strikes me as having enormous potential. Various gamers have ported their email to games, so they get alerted whenever an important real-world communique arrives. Indeed, if, as game economist Edward Castronova argues, a greater and greater chunk of people are going to spend their lives inside game worlds — where they effectively generate thousands of dollars of real-world value every year, usually at tasks far more interesting than their “real” jobs — then it behooves game designers to allow them to manage and monitor their real lives more easily from inside games.
(Thanks to El Rey for this one!)
Robert Frank is among the most brilliant economists working today, with a terrific grasp of how emotion plays into decision-making; his book Luxury Fever does a great job, for example, of explaining why envy often makes a lot of sense. Today he wrote a column for the New York Times talking about some intriguing experiments testing the attitudes of economists — and how they compare to non-economists. As Frank writes:
In an experimental study of private contributions to a common project, two sociologists from the University of Wisconsin, Gerald Marwell and Ruth Ames, found that first-year graduate students in economics contributed an average of less than half the amount contributed by students from other disciplines.
Other studies have found that repeated exposure to the self-interest model makes selfish behavior more likely. In one experiment, for example, the cooperation rates of economics majors fell short of those of nonmajors, and the difference grew the longer the students had been in their respective majors.
Economists are, in short, more likely than you or I to be selfish creeps. Or to put it another way: The problem with economists is that they actually believe economic theory. For Frank, this is a problem because it’s part of why so much economic theory does such a spectacularly bad job of explaining our everyday lives, and also why governments that try to craft policy according to rational-actor thought tend to screw things up.
Of course, it’s still theoretically possible to defend rational-actor thought. You could argue that sure, maybe people on an individual level do not make careful, rational money decisions — but in the aggregate, the ur-decision of the masses is rational. In this formulation, rationality arises as a form of emergent behavior, out of the zillions of seemingly irrational decisions of everyday folks. Like a recessive gene, the rationality of buying stock in BagofRocks.com or springing for a pair of $300 Nikes or voting for George Bush’s hallucinogenic tax cuts was present, but merely latent or deeply buried. If you really believe that, though, I got a bridge here in Brooklyn you wanna look at.
Thomas Mahon, the tailor for Prince Charles, has started a blog in which he discusses the art of creating a really nice “bespoke” — i.e. handmade and crafted to fit an individual body — suit. It’s hilariously funny and really informative; Mahon has an encyclopedic flair for explaining the subtleties of his craft. My favorite posting so far is where he discusses the difference between a “fused canvas” and a “floating canvas”. The canvas, as it turns out, is part of a suit jacket: It’s the layer of cloth between the outer cloth and the inner silk lining, which is what helps the coat retain its shape. Machine-made, off-the-rack suits use a synthetic material for the canvas “which effectively turns to glue when heated”. But …
… with a proper, bespoke suit, the coat is canvassed by hand. Yes, we use a real piece of wool & mohair based canvas. And yes, it does take forever.
Why have a hand canvas? It looks better. With a fused coat, there’s no give. Where the outer cloth goes, the fused material goes, and vice-versa. They’re just machine-stuck together. There’s no synergy between the two.
But with a floating, hand canvas, there’s give. There’s synergy. The end result is the suit follows the contours of the body more naturally. There’s less surface tension. The fit looks more relaxed and elegant without compromising form.
The other great thing with a hand canvas is, if it isn’t put in absolutely, 100% correctly, it doesn’t hang properly.
As I’ve mentioned before, I’m a big fan of suits and ties, and have a stupidly large number of them. (Particularly for someone who works in a home office.) I always tend to freak out my geek friends when I show up wearing a suit at some technology or science event. Yet it always equally surprises me that geeks aren’t into suits.
After all, suits have many of the things that geeks particularly appreciate: Intense levels of engineering, an obsession with structural elegance, physics, totally wicked gear that’s used to create them, topographic geometry, and materials science that burrows right down to chemistry and — these days — nanotechnology. And when it comes to ties, my god, you’ve got the most awesomely realized application of knot theory on the planet. Perhaps most geeklike, the suit is, in essence, a role-playing device — a plus-5 armor vest eminently useful in plenty of situations. (For that matter, the corporate world is only slightly less mannered than the Society for Creative Anachronism, come to think of it.)
As for that stuff about a suit and tie not being comfortable … ah, don’t get me started. That’s an artifact of badly-fitting suits and badly-tied ties. When a suit is well fit and a tie well tied, they’re so sublimely comfortable that you can pretty much play basketball in them. Come to think of it, I have.
Anyway, that’s what struck me about Mahon’s blog: He has a hackerish attitude towards suits.
Last week, a report in Neurology outlined some rather disturbing findings: Apparently, supposedly “minimally conscious” brain-injury patients may be far more conscious than we realize. The scientists located two men who’d suffered terrible brain injuries, leaving them able to breathe on their own but otherwise unresponsive. Then they played them audiotapes of their loved ones relating cherished stories from their past. The result? The men’s brain activity was amazingly close to that of “normal”, fully-conscious people — and one of the men even had high levels of visual-cortex activity, indicating that he was perhaps visualizing the memories. If this study holds water, we may need to radically rethink how we deal with the minimally conscious — who are often abandoned and left with almost no stimulus.
Better yet, is there any way to communicate with them? This is the subject I tackled in my latest Slate column, where I looked at the state of “brain computer interfaces”. An example:
One promising technique for unlocking the thoughts of paralyzed patients is to hook them up to electroencephalograms. EEGs read the electrical impulses caused by brain activity, including the “P300 wave,” something like an involuntary “aha” response. When you’re looking at a set of items and see something you suddenly recognize, your brain automatically kicks out an electrical spike 300 milliseconds later. You don’t have to think about it; it just happens.
Psychologists Lawrence Farwell and Emanuel Donchin have turned this response into a rudimentary typing machine. The patient gets hooked up to an EEG, then looks at a computer screen that shows a six-by-six grid of the letters of the alphabet. When he focuses on a certain letter, the computer begins highlighting each column. As the column containing the chosen letter comes up, the subject’s brain spits out a P300 “aha” response. When the computer repeats the same thing with the rows and gets another “aha,” it gets the X and Y coordinates for the correct letter. Using this technique, people with ALS can “type” about four letters per minute. Best of all, because the “aha” response happens automatically, they don’t have to learn any new skills.
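The row-and-column trick is easy to picture in code. Here’s a toy sketch of the coordinate logic — the EEG and the P300 detection are replaced by a stub that simply “recognizes” the target letter, and the grid contents and function names are my own inventions for illustration:

```python
def p300_speller(target):
    """Toy sketch of the Farwell/Donchin speller: flash each column,
    then each row, of a 6x6 grid; the two P300 "aha" responses give
    the X and Y coordinates of the letter the user is staring at."""
    # 26 letters plus digits to fill out the 36 cells
    cells = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    grid = [list(cells[r * 6:(r + 1) * 6]) for r in range(6)]

    # Stand-in for the EEG: a real system flashes the letters on
    # screen and watches for a voltage spike ~300 ms after the flash
    def aha(flashed):
        return target in flashed

    col = next(c for c in range(6) if aha([grid[r][c] for r in range(6)]))
    row = next(r for r in range(6) if aha(grid[r]))
    return grid[row][col]
```

At four letters a minute, spelling even a short sentence is slow going — but since the P300 fires automatically, the user needs no training at all.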
Sorry for the paucity of postings in the last week — I’ve been on a crunched deadline, which wraps up this Friday. Volume will probably be light until then, at which time I return to your regular programming.
This is weirdly lovely. The Last Clock is a software timepiece, with three rings composed of video taken from a live feed. The outer ring shows the seconds; the middle one, the minutes; and the innermost, the hours. The clock is thus, as the creators explain, “a record of its own history.” More:
As the hands rotate around the face of the clock they leave a trace of what has been happening in front of the camera. Once Last has been running for 12 hours, you end up with an easy-to-read mandala of archived time.
That clock above was composed of images from a building in Taichung, Taiwan. Not terribly practical, but, like the Clock of the Long Now, a neat way of refiguring how we think about time and our relationship to it.
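For the curious, here is a minimal sketch of the time-to-angle mapping a piece like this would need before it can paint each video slice onto its ring; the function name and the 12-hour-face assumption are mine, not the artists'.

```python
# Sketch: where each "hand" sits on its ring at a given time. As each
# hand sweeps, the current video slice is painted at its angle, so the
# ring accumulates a visual record of the last minute/hour/12 hours.
import math

def hand_angles(h, m, s):
    """Return (hours, minutes, seconds) hand angles in radians,
    measured clockwise from 12 o'clock, on a 12-hour face."""
    sec_a = 2 * math.pi * s / 60
    min_a = 2 * math.pi * (m + s / 60) / 60
    hr_a = 2 * math.pi * ((h % 12) + m / 60) / 12
    return hr_a, min_a, sec_a
```

The fractional terms (`s / 60`, `m / 60`) make the slower hands creep continuously rather than jump, which is what lets the painted trace fill each ring without gaps.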
(Thanks to the J-Walk blog for this one!)
Okay, space cadets — want to win a ride into orbit? Then tune into the Super Bowl and watch Volvo’s ad for its XC90 V8 SUV!
Will the ad include Volvo executives sheepishly admitting they’ve completely abandoned their decades-long quest to create safe vehicles by producing yet another ridiculously high-riding, ill-balanced, quintessentially tippable death-machine SUV, in a desperate bid to cash in on idiot boomer parents’ attempts to Darwinianly remove themselves and their children from the gene pool by driving vehicles they can’t control?
Well, no, they probably won’t actually admit that. But Volvo will include a surprise — as Geek.com reports:
In the ad Volvo compares its new XC90 V8 SUV to a rocket blasting into space.
The ad marks the first time Volvo has run an ad during the Super Bowl … but that won’t be the only first for the ad. During the commercial it is revealed that the rocket ship’s pilot is none other than Sir Richard Branson, who has a unique offer. Virgin Galactic, the first commercial space tourism operator (see our coverage), is giving away a seat into space when the company debuts its commercial space service in two to three years.
That’s so cool it almost makes me respect Volvo for making that SUV. But not really.
(Thanks to Rob for this one!)
The CBC has just launched a superb new website devoted to arts coverage — and many of the writers are expatriates from the late, excellent Shift magazine. The articles include a terrific photo history of the personal audio-player and a great essay on why “comics have abandoned children”.
But my personal favorite was a short ode to Hammy Hamster, the star of the long-running Canadian TV show Once Upon a Hamster. If you’ve never seen the show, it’s difficult to describe just how enjoyable — not to mention hallucinogenically odd — it was. Hammy lived in a shoe near a riverbank, and would wander around getting into adventures with his friend G.P. the guinea pig, as well as other nearby residents. The thing is, this was a live-action show: The producers would try to lure the rodents into “performing” actions that looked vaguely like what the plot maintained, with closeups on their faces whenever the voiceover actors had them “saying” something.
These days, of course, we’re accustomed to high-tech CGI manipulations of live-action animals — or straight-out, full-CGI animals, such as in Finding Nemo — such that their mouths appear to be actually moving in synch with their speech. Still, there was something weirdly expressive about seeing a full-frame shot of a hamster staring into the camera, doing nothing particularly special — just, y’know, being a hamster — while his “voice” discussed the often deeply philosophical questions of the show: Life, death, friendship, love, food, frogs.
The lid really blew off during the action sequences. G.P. would regularly build model airplanes, hot-air balloons, or diving bells, and somehow convince Hammy — a pretty nervous guy — to go for a ride, which would inevitably end in near disaster. The sight of a hamster and guinea pig shoved into the cockpit of a model airplane was, initially, simply too bizarre for the CBC, and the executives initially turned down the show when it was first produced in 1959. (The only network that would show it was the BBC.) But eventually it made it to Canada, where my teenaged friends and I would watch it in total hysterics after school. As the CBC arts piece notes:
People still speak wistfully of Sutherland’s gentle, artless narration, which was as integral to the show’s charm as the sight of twitchy, uncomprehending rodents scurrying across a simulated nature set. … At its pinnacle, Once Upon a Hamster was seen in more than 30 countries. It didn’t reach a U.S. viewership until the ’90s, where it delighted insomniacs and stoners on late-night television. That’s where it caught the notice of Alan Ball, creator of HBO’s Six Feet Under, who ended up using a clip of Once Upon A Hamster in an episode of his lauded series.
Which episode was that? My god, I’d love to see it. There’s a whole website devoted to Hammy, if you really need to waste more time at work reading about this.
Here’s a story straight out of the Oliver-Sacks file: Neuroscientists have recently been studying Esref Armagan, an Istanbul man who is a proficient painter, even though he’s been blind since birth. He paints “houses and mountains and lakes and faces and butterflies”, even though he’s never seen any of them. (That’s one of his paintings above.) When the scientists hand him a cube to touch, he is subsequently able not only to draw an accurate picture of it — but to draw it from different perspectives, such as from across the room, or from above.
So they put his head in a brain scanner (the story doesn’t say which type, though I suspect it’s an MRI tube) and had him draw some pictures while they observed his brain activity. What they found was that his visual cortex was firing so powerfully that it was nearly on par with that of a fully sighted individual. And as the New Scientist reports, this leads to a humdinger of a question:
What is “seeing” exactly? Even without the ability to detect light, Armagan is coming incredibly close to it, admits Pascual-Leone. We can’t know what is actually being generated in his brain. “But whatever that thing in his mind is, he is able to transfer it to paper so that I unequivocally know it’s the same object he just felt,” says Pascual-Leone. [snip]
We normally think of seeing as the taking in of objective reality through our eyes. But is it? How much of what we think of as seeing really comes from without, and how much from within? The visual cortex may have a much more important role than we realise in creating expectations for what we are about to see, says Pascual-Leone. “Seeing is only possible when you know what you’re going to see,” he says. Perhaps in Armagan the expectation part is operational, but there is simply no data coming in visually.
A fascinating factoid, buried in this story, is that many or even most blind people have this ability, but aren’t given the opportunity to develop it; no one ever thinks to tell a blind kid to draw something.
This one’s been blogged heavily already — I’m coming late to it — but it’s sufficiently weird that I felt it worthy of note again.
(Thanks to Brian for this one!)
I'm Clive Thompson, the author of Smarter Than You Think: How Technology is Changing Our Minds for the Better (Penguin Press). You can order the book now at Amazon, Barnes and Noble, Powells, Indiebound, or through your local bookstore! I'm also a contributing writer for the New York Times Magazine and a columnist for Wired magazine. Email is here or ping me via the antiquated form of AOL IM (pomeranian99).