Next year, students taking their SATs will have a new task to perform — a 25-minute longhand essay. And this is apparently panicking teenagers across America. Why? Are they worried that their nanoscale attention spans are no longer up to the task?
Nope. They’re worried they’re no longer able to write by hand. Growing up in the digital age means you write solely by keyboard, or by 12-button mobile phone keypad. As one student told the Seattle Post-Intelligencer:
“People like myself, who don’t have good handwriting, are wondering if some anonymous person is going to think I spelled stuff wrong and not understand what I’m trying to say,” said Lucas Rohm, a 16-year-old Country Day alum who is now a rising junior at Greenwich High School. “I definitely feel handwriting is something I need. Country Day just kind of brushed that out.”
I can sympathise. I’m a journalist, and I crank out quite a lot of text each month, but since I spend the majority of my time at my keyboard, my muscle memory for handwriting is simply shot. I take notes pretty frequently on notepads, but I almost never write entire stories in longhand. Every once in a while, though, I’ll have to work with pen and paper — as, for example, when I’m on the road and can’t use my laptop, but am on deadline and need to start sketching out an article while on a bus. And as I work away, I wonder: Is there any difference between our cognitive styles when we write longhand versus typing on a keyboard?
Since I type about 70 words per minute, I can type practically as fast as I can compose sentences in my head. So does the much-slower pace of handwriting actually create a different way not just of writing, but of thinking? Does the buffer buildup between my brain and my arm affect things?
What I mean is this: When I’m typing, because I can generate text so fast, I’ll toss lots of stuff out on the page — and then quickly edit or change it. But when I’m writing by hand, because it’s so much slower I’ll try to compose the sentence in my head before trying to write it. With a keyboard, I sort of offload some of my mental-sorting onto the page, where I can look at the words I’ve written, meditate on them, and manipulate them. With writing, that manipulation happens before the output. Clearly this would lead to some cognitive difference between the two modes … but I can’t quite figure out what it would be.
Along these lines, it’s worth pointing out that a 23-year-old student in Singapore seems to have set a new world record for speed-typing on a phone keypad. As the Globe and Mail reports:
Student Kimberly Yeo, 23, managed to type a fiendishly complicated 26-word message on her phone in 43.66 seconds, organizer Singapore Telecommunications said in a statement Monday.
Her effort — in heats held at the weekend — could beat by a wide margin the existing text message record of 67 seconds, set last year by Briton James Trusler in Sydney, Australia, it said. [snip]
Contestants had to type: “The razor-toothed piranhas of the genera Serrasalmus and Pygocentrus are the most ferocious freshwater fish in the world. In reality they seldom attack a human.”
(Thanks to Techdirt Wireless for this one!)
Dig it: NASA has been using evolutionary algorithms to evolve superior antenna designs. As their web site puts it:
Our approach has been to encode antenna structure into a genome and use a GA to evolve an antenna that best meets the desired antenna performance as defined in a fitness function. Antenna evaluations are performed by first converting a genotype into an antenna structure, and then simulating this antenna using the Numerical Electromagnetic Code (NEC) antenna simulation software.
That tiny, weird-looking antenna above? It uses less power than previous, similar designs, yet allows for a broader array of angles at which it can receive data.
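The loop NASA describes (encode, evaluate, select, vary) is easy to sketch. Here’s a minimal Python toy; the genome length, fitness function, and mutation rates are invented stand-ins, since a real run would decode each genome into wire geometry and score it with the NEC simulator, which obviously isn’t shown here:

```python
import random

random.seed(42)

GENOME_LEN = 8          # e.g. bend angles / segment lengths of a wire antenna
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    # Toy stand-in: a real GA would decode the genome into an antenna
    # structure and simulate it with NEC. Here we just reward genomes
    # close to an arbitrary "ideal" shape (higher is better, max is 0).
    ideal = [0.5] * GENOME_LEN
    return -sum((g - i) ** 2 for g, i in zip(genome, ideal))

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
best_initial = max(fitness(g) for g in pop)

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children               # elitism: parents survive as-is

best_final = max(fitness(g) for g in pop)
print(best_initial, best_final)
```

Because the fittest parents survive each generation unchanged, the best score can only hold steady or improve — which is the whole trick of the approach.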
(Thanks to Boing Boing for this one!)
The corpse flower is about to bloom!
I’m very excited. The corpse flower, in case you don’t know, is nature’s single most revolting plant. When one of these three-foot-tall beauties opens up, it gives off the scent of rotting flesh. The University of Connecticut has managed to cultivate one, the first example in the northeast in 60 years, and any day now it’s due to open up.
If you’re lucky enough to be nearby when it opens, here’s how University officials describe the smell:
The corpse flower is specifically adapted to attract carrion flies and beetles, which ferry pollen between plants so they can produce seed, a job accomplished for more ordinary plants by bees or butterflies. The colors of the corpse flower — a sickly yellow and blackish purple — imitate a pot roast that sat out in the sun for a week. The fragrance is universally described as being powerful and revolting, with elements of old socks, dead fish and rotten vegetables. As if that isn’t weird enough, the corpse flower is actually warm-blooded, heating itself up at the height of flowering, probably to help spread its putrid odor. All of this is totally irresistible to flies, who must think they’ve chanced upon a dead elephant, and are tricked into pollinating the plant.
Kind of like Enron investors. At any rate, there’s a web cam on that page I linked to above, so you can check in periodically to watch U of Connecticut botanists retching uncontrollably.
Two years ago, when I headed off to a science-journalism fellowship in Boston, I realized that I’d spend nine months in a long-distance relationship with my New York girlfriend. Being long-distance sucks, so I was trying to figure out ways to simulate being together as much as possible. We already IMed a lot, so we had a lot of virtual “presence.” But I wanted more than that. I wanted telepresence. I wanted a robot avatar that I could command from afar, and use as my proxy in her physical space.
You might well ask why in hell this chick goes out with me, but that would clearly be a much longer post.
Anyway, the point is, I originally hoped to get my hands on a first-generation iRobot. You probably know iRobot as the company that makes the impossibly cute Roomba. But iRobot’s original product was a full-on telepresence ‘bot. You could leave it in whatever location you wanted, and when you needed to virtually visit, you’d use a web interface to remotely “robot in” and control the avatar — seeing what it saw through its webcam eyes, and speaking to people in the room using its speakers. It was just beyond righteous. Unfortunately, it was also somewhere north of $3,000 and the company had sadly stopped making them. So I gave up on my dreams of creepy stalker robotic telepresence.
Until I logged on today and got wind of the Pekee robot by Wany Robotics. It’s even more expensive — $10,000 — but seems nicely customizable for remote control. As the manufacturer’s web site points out:
The Pekee robot is designed around a completely open architecture that provides total flexibility for your robotic application testing and development. Its built-in infrared, temperature, and light sensors, odometers, shock detector, and gyrometers ensure that you can monitor critical elements in the robot’s environment at all times. The Pekee platform lets you pursue your own projects at all levels, from trajectory planning to real-time programming in consumer products such as robotic vacuum cleaners and interactive toys.
(Thanks to Sensory Impact for this one!)
A couple of days ago, one of the blog readers here criticized me for my position that computer illiteracy is a problem. This might have been a bit mysterious — the poster wrote this in my blog entry about a virtual bugle. But what the poster was responding to was a piece I wrote for Slate last week but have yet to blog.
So, herewith, a self-aggrandizing link to my article of last week, which surveys the world of malware — adware, spyware, and viruses — and proclaims the dismal truth: Some of the problem is rooted in our massive cultural ignorance of how computers work.
At the center of my argument is this point:
Is it fair to expect computer users to be knowledgeable about the innards of software? We use plenty of other complex, dangerous tools—such as cars—without needing to understand the fine points of their internal mechanics. But our computer ignorance is, even by those standards, horrific. When a computer user doesn’t know that an “.exe” file is a program (and possibly a virus), it’s like not knowing that cars are fueled by gas and that gas is explosive. It’s basic stuff.
Behold the minibike. I’ve seen one or two people riding these things around Brooklyn, but apparently they’ve really taken off in San Francisco. The New York Times had a terrific piece in the weekend Fashion & Style section about these street-illegal teensy rides, which included this observation:
According to riders, part of the thrill is the bikes’ proximity to the ground; they are only 16 to 18 inches tall. “You’re inches from disaster,” said a San Francisco rider named Dan, 22. “It’s a spectacularly precarious, awesome feeling.” Yet many street riders don’t even wear helmets. In San Francisco, a typical rider zipping at a fast clip along Page Street last weekend wore shorts, a T-shirt, a backward ball cap and flip-flops.
Ever wonder just how multilingual your state is? The Modern Language Association has an incredibly cool online application that lets you pick a state, pick a language, and map out the density of speakers in the region. Above is a map showing Spanish speakers in New York.
(Thanks to MemeFirst for this one!)
I’ve written before about Zipf’s Law — a concept invented in the early 20th century by the social scientist George Zipf. Zipf counted word occurrence in hundreds of newspaper and magazine articles and found that while English has about 26,000 common words, over 90% of everything we say or write uses merely 2,000 of them. So if you plotted the most-common to least-common words on a graph, you’d see the first few spiking way up high, then quickly dropping to an almost flat line as you get past the 2,000 most common words. A Zipf Curve looks like a ski slope.
Later on, the economist and sociologist Herbert Simon offered an explanation for this. He pointed out that words gain meaning the more they’re used — which gives the first few words in a text a “first mover” advantage, since they’re helping to define the topic under discussion. For example, I’m more likely to re-use words like “occurrence” or “meaning” in the rest of this article, while probably never using the word “lawnmower.” That’s part of how a text builds meaning: by introducing a few key words and repeating them, over and over again. (On a broader scale, that’s probably how language evolved too, and possibly why the Zipf Curve exists.)
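The ski-slope shape is easy to see in a few lines of code. Here’s a toy sketch — the sample text is mine, chosen so the counts are easy to check by hand, whereas Zipf of course counted hundreds of real articles:

```python
from collections import Counter

# A toy corpus; any real text shows the same pattern at scale.
text = ("the cat sat on the mat and the dog sat on the log "
        "and the cat saw the dog")
counts = Counter(text.split())

# Rank words from most to least common: the Zipf Curve is this
# list of frequencies plotted against rank.
ranked = counts.most_common()
print(ranked[:3])

# The few top-ranked words account for most of the tokens --
# the spike at the top of the ski slope.
total = sum(counts.values())
top3 = sum(freq for _, freq in ranked[:3])
print(f"top 3 words cover {top3 / total:.0%} of the text")
```

Even in a 19-word snippet, the top three word types cover more than half the text; in a real corpus the skew is far more dramatic.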
Anyway, a physicist recently decided to see if music behaves the same way. Damian Zanette of the Balseiro Institute studied the occurrence of notes in several pieces of music. Presto: They, too, had Zipf-Curve distributions. What’s really interesting is when he compared the distributions in “pleasant” music — like Mozart — versus atonal, “difficult” music, like Schoenberg. As Nature reports:
The pieces by Bach, Mozart and Debussy all produced a relatively steep graph, suggesting a strong relationship between rank and frequency, and therefore a high level of meaningful context. In other words, if you have heard part of the piece, it is relatively easy to predict what kind of thing will come next. Zanette adds that jazz pieces he tested showed a similar pattern.
But the Schoenberg piece, one of the first truly atonal works, had a much flatter graph. This means that the piece does not have a set vocabulary of commonly used words that keep appearing. Instead, the size of the vocabulary increases at about the same rate as the length of the piece; new “words” are constantly introduced, while earlier ones are seldom repeated.
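Zanette’s flat-curve finding can be mimicked with a toy simulation. Assume a “tonal” piece keeps drawing from a small fixed motif, while an “atonal” one keeps introducing fresh material — the note names and lengths below are invented purely for illustration:

```python
import random

random.seed(1)

motif = list("CDEFG")  # a "tonal" piece: small vocabulary, endlessly reused
tonal = [random.choice(motif) for _ in range(200)]

# An "atonal" piece in Zanette's sense: new "words" keep arriving,
# earlier ones are seldom repeated.
atonal = [f"n{i}" for i in range(200)]

def vocab_growth(piece):
    """Vocabulary size after each successive token."""
    seen, growth = set(), []
    for token in piece:
        seen.add(token)
        growth.append(len(seen))
    return growth

print("tonal final vocab:", vocab_growth(tonal)[-1])
print("atonal final vocab:", vocab_growth(atonal)[-1])
```

The tonal piece’s vocabulary flatlines almost immediately; the atonal one grows in lockstep with the length of the piece, which is exactly the flattened Zipf graph Nature describes.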
For years, the bugle call “Taps” has been the song played at soldiers’ funerals; its instantly recognizable tune is the psychological soundtrack to North American postwar grief. But apparently, as the years have gone by, military buglers have become something of a dying breed. It’s now quite hard to find one for a military funeral.
So the Pentagon developed a digital bugle — a small speaker that is placed inside a bugle horn, and which plays a digitally recorded version of “Taps”, while the faux-bugler holds it to his or her lips and fakes it. It’s been used in half of the 38,000 military funerals held so far this year. As CNN reports:
“It’s the closest and next best thing to the real thing,” said Mark Maynard, director of the Riverside National Cemetery in California, where a few of the Iraq casualties have been buried. “A bone of contention with veterans organizations and families was just the sound and tackiness of the military carrying boom boxes to play taps.”
What’s interesting here is people’s natural distaste for simulation, when it comes to something as sensually rich — and emotionally significant — as a bugle performance of Taps. A boom box seems inescapably tacky; a real bugle emitting the sound of taps doesn’t, even if the person isn’t really playing. According to the CNN piece, the families sometimes don’t know the alt.bugle isn’t the real thing.
That means there’s a sort of emotional Turing Test going on here: When the fake bugler puts the fake bugle to his lips, what precisely alerts the family to the presence of a simulation? His cheeks aren’t blowing in synch with the music? The tonality of the song isn’t quite right? Does the recording sometimes seem too “perfect,” and devoid of the tiny flaws that make the real seem real?
There’s a great story in today’s New York Times science section about the different types of malevolent bosses. It quotes several psychologists who’ve recently been studying the ways in which nasty, abusive, mean-spirited tyrants affect a workplace. Among their most surprising findings is this:
The mystifying thing about this pattern is that it does not appear to undercut productivity. Workers may loathe a bullying boss and hate going to work each morning, but they still perform. Researchers find little relationship between people’s attitudes toward their jobs and their productivity, as measured by the output and even the quality of their work. Even in the most hostile work environment, conscientious people keep doing the work they are paid for.
The funny thing is, this shouldn’t be surprising. Management studies have found — time and time again — that low morale does not correlate to low productivity. A workplace can be a total sinkhole of misery, presided over by the most tiny-minded suburban Napoleons that god ever suffered to crawl across the face of the Earth … yet the employees can still be extremely productive.
What’s even more interesting is that people frequently refuse to believe this, even when confronted by empirical evidence. I’ve gotten in huge arguments with friends of mine about it. They insist that the only way a company can be productive is if its employees are happy; it offends them, on some moral level, to suggest that happiness and morale are irrelevant in a company.
I think it’s because, culturally, it’s alarming to realize how little the marketplace cares about your well-being at work. The free market may be many excellent things — efficient, productive, and a producer of fascinatingly unpredictable bits of distributed intelligence — but it absolutely is not an instrument of justice, fairness, or even happiness. That ought to be obvious. But we’ve been lovebombed for years by political economists who equate the free market with democracy and human rights, when the two are rather different things: not necessarily opposed, but sometimes — even frequently — in conflict.
The other reason people assume unhappy workers will be unproductive is a logical mistake: inferring the converse. We see situations in which a) the employees are happy, and b) the company is productive, and conclude that one necessarily follows from the other. Sure, a productive company can be the result of happy employees. But it doesn’t have to be. The company would probably do just as well if the worker bees hated their bosses and most of their jobs.
I’m not arguing that employers should be disdainful of their workers’ happiness. On the contrary, I think we have a moral imperative to treat people decently. It’s just that the marketplace will almost never punish a boss for being a complete dick.
And hey, so long as I’m pontificating here, I’ll go even further: I think the entire social role of happiness is poorly understood. In last week’s New York Times Magazine, Jim Holt reported on an intriguing new psychological study that upends some of our most cherished beliefs about happiness:
Researchers found that angry people are more likely to make negative evaluations when judging members of other social groups. That, perhaps, will not come as a great surprise. But the same seems to be true of happy people, the researchers noted. The happier your mood, the more liable you are to make bigoted judgments — like deciding that someone is guilty of a crime simply because he’s a member of a minority group. Why? Nobody’s sure. One interesting hypothesis, though, is that happy people have an “everything is fine” attitude that reduces the motivation for analytical thought. So they fall back on stereotypes — including malicious ones.
I have no pithy observations on this one, other than to say — god in heaven that’s a big truck.
How big? It’s 224 tons, 24 feet high, and can carry 624 tons. For a sense of scale, note the size of the construction worker standing next to it. It costs $3 million and worldwide demand is only 75 a year. The New Scientist interviewed Francis Bartley, team leader for the company that makes this thing:
What’s it like to drive?
It’s like driving a house. After you’ve been around a lot, it’s not as exciting as if suddenly someone said: “Would you like to drive one?” You’d go crazy in that case! It’s not hard to drive, basically like driving an automatic shift car.
Isn’t it so big you could roll over a car without noticing? That’s basically true. The first time I was in it at a mine, the driver started to drive away and actually ran into the back of a service truck. It seems we mashed it down to the ground. I saw someone yelling, but we didn’t feel a thing.
(Thanks to the J-Walk Blog for this one!)
Ever wished your IM buddy icon looked more like you? Go to Abistation and try out the free Portrait Illustrator Maker, which lets you customize about 20 different vectors of your face and head. That icon above is the closest I could come to illustrating myself, though I’m not thrilled with the hair (a little too weirdly spiky on top) and the mouth (I look like a smug Great Ape).
Having said that, it’s an interesting gloss on my recent column about the Uncanny Valley — the roboticists’ theory that as emulations of humans become increasingly realistic, they become increasingly unsettling. Would a hyper-realistic CGI version of me look more true to life than the one above? Or would it look kind of creepy, because it’d look like an animation of my corpse? Is it possible one could get a better sense of my personality by looking at a highly stylized cartoon version of “me,” like the one above?
Assuming I didn’t look so freaking self-satisfied, of course.
(Thanks to Chris Shieh for this one!)
As the slogan goes, “skateboarding is not a crime.” But for property-owners increasingly pissed at finding their rails and benches crapped up by teenagers using them for serious grinding on the weekends, skateboarding is at least a hassle. So now there’s a company called Skatestoppers, and they sell these little metal brackets that prevent skateboarders from enjoying a nice smooth surface. (The company slogan: “Because signs are not enough!”) That’s one of the brackets pictured above, and the web site describes them thusly:
SKATESTOPPERS® skate deterrents are specially designed brackets that deter unwanted skating/biking by eliminating the long, smooth edges that skaters and bikers seek out. In effect, they are speed bumps for your walls, handrails, curbs, etc.
Skatestoppers also sells these little metal discs that can be affixed to flat concrete, making a smooth surface bumpy. They’ve got an example picture here, and it’s oddly beautiful! But then again, so is skateboarding.
(Thanks to Serial Deviant for this one!)
I’m not joking, actually. Last night I was IMing with a friend from Chicago and he mentioned that Diana Krall was about to go on The Tonight Show to perform a Tom Waits song. He told me to turn on my TV and check it out.
That’s when I realized — holy moses, I no longer know how to work a television set other than to play video games on it. I have six gaming systems hooked up to the TV in my office, and a complex system for powering them and allowing me to instantly switch from one gaming system to another. But I never actually watch television on the set. It is purely and solely an output device for games.
My TV isn’t even hooked up to cable. Mind you, it’s been so long since I’ve watched The Tonight Show that I had to concentrate for a second to recall whether it’s on cable or broadcast. Then I remembered, okay, it’s broadcast, so theoretically my cable-less set ought to be able to receive it. But I couldn’t figure out how to tune it to a channel. I know it’s theoretically possible, but it had been years since I’d done so; like a muscle that atrophies after years of living on a space station, or a limb that slowly shrinks and falls off during evolution, my ability to use a regular television had vanished.
Over in the living room, my fiancée has her own TV hooked up to a Tivo. I don’t watch a whole lot of TV, but when I do, it’s always something she’s Tivoed for us. So this is another interesting cultural barrier: Were I to want to watch a show, I’d simply ask her to Tivo it, or figure out how to use the Tivo myself. Actually, more likely, I’d simply try to find the video online and download it or watch it streaming. Probably 95% of all TV I’ve seen in the last few years — and fully 100% of all news TV — has come to me over the Internet.
But the concept of turning on a normal TV to watch something live? Utterly foreign.
In our modern culture — addled to the point of delirium with celebrity culture, wealth worship, the nanofame of reality TV and the Washingtonian/Nietzschean pursuit of power for power’s sake — there’s nothing so intoxicating as an elite club. Particularly when you get to be on the inside with the kewl kidz, sneering at the unwashed masses crushing their noses against the glass!
Nobody understands this better than Google — and they’ve proved it, with the fiendishly brilliant rollout of Gmail, their new email service. Gmail is technologically very cool, with its enormous 1-gig storage space and intelligent “conversation” threading. So I wasn’t surprised when Gmail Beta launched last month, and people began wondering: How could they score one of these rare, exclusive, first-peek accounts?
By cosying up to the cool kids, that’s how. Google set up its Beta as a sort of influence-peddling scheme: It handed out a bunch of accounts to its friends and family and admirers, and allowed each of them to have a few “activation codes” so that they could invite their own friends and admirers in. And so on and so on. The end result? By last week, my circle of high-tech friends was consumed by people frantically sucking up to those who were on the inside, in hopes of someone letting them past the velvet rope. You can’t buy buzz like that. What Google realized was that while Americans love to prattle on about the democratic flatness and meritocratic fairness of their country, what they love even more is the ability to lord social power over others like noblemen at the Elizabethan court. It’s high-school dynamics as marketing!
Anyway, this freaky little Milgram experiment that Google is conducting has produced a rather funny side effect: Gmail swap. It’s an attempt to derail the Buffy-at-the-prom dynamics of Google’s marketing scheme by setting up an open trading board. People who want a Gmail account make an offer of something they’re willing to trade with people who have invitation codes. If the two agree, then voila! The transaction occurs.
What’s most interesting about this exchange is that — much like Ebay — it fixes a price on things that you’d otherwise consider intangible or priceless. Here are some of the things people are offering today as a trade for Gmail:
nekura offers “Your name in credits of my first game.”
sweet82 offers “a frienship with a sweet girl”
redredwine offers “$25 worth of underwear and socks”
got gmail offers “FREE lunch in San Rafael, Ca”
abazoe offers “a lukewarm poem and a mix cd”
db5z offers “Swap an Apple iPod”
pacmanfan offers “Few video clips of a redneck’s habitat”
collins619 offers “9/11 powerpoint pictures”
RadicalSpaceDude offers “Jesus Action figures!!!”
coco82173 offers “my homemade spring rolls”
Feedback offers “Original pictures of Adolf Hitler-1940s”
sammyafrikan offers “video thanks in tribal language”
By the way, if anyone here thinks my grumpy little rant here is prompted by my inability to score a Gmail account myself — you can send your comments to firstname.lastname@example.org.
This is the best casemod I’ve ever seen in my life. G-nome, an Australian dude, spent seven months hand-tooling the plexiglass, chrome, and piping necessary to produce this water-cooled desktop masterpiece. It looks like what you’d get if Ridley Scott channelled the ghost of Leonardo da Vinci and the two designed a computer together. Check out his web site — it’s crammed full of lavish pictures and specs. You really have to zoom in on the zillion different details to appreciate the stunning gorgeousity of this thing.
More proof that casemodding is the signature folk art of our times.
(Thanks to Slashdot for this one!)
A bunch of Slovenian digital artists have taken the classic porn movie Deep Throat and rendered the entire thing in ASCII animation: Deep ASCII. That picture above? It’s a shot from the opening credits — if you squint you can make out a woman walking to the right along a sidewalk. On their web site, the artists explain their concept thusly:
Deep ASCII is a full length conversion of the classic porno film Deep Throat, which amounts to 55 minutes of pure mute ascii porn. This genre was selected for its dominating close-ups, very convenient for resolutions ASCII can support.
I particularly love the Matrix-like quality of that old-skool, green-on-black computer-screen text. The whole world’s an illusion, particularly when you’re shooting porn in L.A.
So this three-pound meteorite — the size of a grapefruit — smashes through the roof of a house in New Zealand, nearly killing a couple’s young son. The mother, Brenda Archer, retrieves it, and according to Reuters:
The Archers, who are following expert advice by drying the rock out in their oven, plan to sell it or give it to a museum.
Recently, I wrote a piece about an art forger whose copies of Gauguin, Monet, and other modernists were good enough to fool experts in several countries. But could they fool a neural net trained on the artists’ work?
That’s what computer scientist Eric Postma is trying to find out. He’s created an artificial-intelligence program called Authentic — which can stare at hundreds of a painter’s works, figure out regularities in his or her style, and then use that knowledge to deduce whether any particular painting is authentic. This also produces an interesting side effect: The program is able to spot patterns in an artist’s technique that no human has noticed before. As the New York Times reports:
By analyzing several hundred low-resolution images of van Gogh paintings downloaded from the Web, the Authentic team still managed to pick out, in hours, patterns that would have taken much longer to detect using manual research — for example, the fact that van Gogh’s use of complementary colors was greater when outlining human figures than it was for other objects in his paintings.
“As data, these Web images are terrible,” said Mr. Berezhnoy. “Nevertheless, our method confirmed an increase in van Gogh’s use of opponent colors. I could tell you that van Gogh started to use blue-yellow opponency in such and such year, and red-green later on, or that he used opponent color to highlight portrait silhouettes and human figures. Do you think I had time to draw those conclusions by studying all 854 paintings?”
That picture above is, by the way, Van Gogh’s “Olive Grove”.
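You can get the flavor of what a program like Authentic does with a toy sketch. Suppose each painting is boiled down to a feature vector — say, how heavily it uses blue-yellow and red-green opponent colors — and authentication becomes a question of how far a candidate sits from the artist’s typical profile. To be clear, everything below (the feature values, the threshold, the helper names) is invented for illustration; the real system analyzes hundreds of actual images:

```python
import math

# Hypothetical feature vectors: (blue-yellow opponency, red-green opponency)
# measured on known-genuine paintings. All values are invented.
genuine_works = [(0.62, 0.31), (0.58, 0.35), (0.65, 0.29), (0.60, 0.33)]

def profile(works):
    """Average feature vector: the artist's 'style centroid'."""
    n = len(works)
    return tuple(sum(w[i] for w in works) / n for i in range(2))

centroid = profile(genuine_works)

def looks_authentic(candidate, threshold=0.1):
    """Flag a painting as plausible if its style sits near the centroid."""
    return math.dist(candidate, centroid) < threshold

candidate_real = (0.61, 0.32)   # close to the learned profile
candidate_fake = (0.30, 0.70)   # a stylistic outlier
print(looks_authentic(candidate_real), looks_authentic(candidate_fake))
```

The side effect mentioned in the Times piece falls out naturally: once you’ve computed the centroid, the features themselves (e.g. an artist’s rising use of opponent colors over time) are regularities no human had to spot by hand.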
Okay, this rocks beyond description. Rollei has just released a miniature, digital-camera version of the classic Rolleiflex 2.8 — the old-school piece of precision-tooled German engineering that revolutionized personal photography in the 1920s. Just like the original version, it has a top-down viewer: You hold the camera at waist level and peer down into the viewer, which is now, naturally, an LCD screen. As the Rollei site points out, this style has a bunch of cool psychological properties:
Why has the Rollei Twin Lens Reflex always been preferred for portraits? The camera, held at waist level, never ‘stares the model in the eye.’ People go on looking and acting naturally instead of posing for the camera. This is true for grown-ups but also goes for small children and even animals. With this type of viewfinder, you can also hold the camera very low or even place it on the ground when the shot requires that. And you don’t have to lie flat on your belly yourself.
This is brilliant: Using the ergonomics of classic design, but updating it with modern technology. One of the interesting problems about today’s digital tools is that they have a lot more flexibility in their design — maybe too much flexibility. Old-school cameras had rigid limits that helped structure them: A camera had to have a certain shape and size because it contained canisters of film and lenses that needed to physically zoom in and out. Modern digital cameras don’t have these restrictions, so you’re allowed to make them tinier and tinier. Obviously, being small is good for portability — but it means that designers lose some of the ergonomic elegance that was built into old-style cameras.
It’s the same thing with phones — one of my personal bugbears. Ever wonder why people bellow into mobile phones? It’s because the devices have been shrunk so small that they no longer feel like phones. They’re so small and flat that you might as well be holding a stapler to your head, or perhaps a TV remote. They don’t seem like phones any more; they’ve lost all sense of phone-ness. No wonder we holler and yell into them! On some subconscious level, we’re worried they’re not really listening to us.
In contrast, when you pick up an old-school 1940s bakelite desk phone, the ergonomics are lovely: With the clam-curved mouthpiece close to your lips, and the earpiece snug and cupped against your head, it elegantly seals out all outside noise. The phone feels like it’s actually paying attention to you, so you realize you can whisper and still be heard. Because the old phones have such a superb aura of phone-ness, nobody feels the need to yell and holler on them, the way they do with teensy mobiles. I’ve always wanted to rip the guts out of my old 1940s phone (I collect old tech, in case you hadn’t guessed) and put GSM technology in it so I could carry it around as my mobile phone. Sure, it’d be heavy — but at least it’d feel like a phone.
So what’s interesting about the Rolleiflex MiniDigi is that it’s harvesting the useful ergonomics of the old design — with all its psychological nuances — yet mixing in digital technology. My only complaint is that I don’t think they should have shrunk the Rolleiflex down in size. I think the heft of the old design actually imparts a sense of “seriousness” about the tool that probably has its own interesting psychological impact on photographic subjects.
(Thanks to Gizmodo for this one!)
Ralph Baer is the godfather of video games. In 1972, his “brown box” prototype became the Magnavox Odyssey — the game set that plugged into a TV and let you play “video tennis” and a few light-gun games. But, having single-handedly invented the video-game age, did he rest on his laurels? No sir. He went out and pioneered the hand-held electronic game — by inventing Simon.
But there was plenty of conflict along the way, including the fact that the military — for whom Baer worked — actively discouraged him from his work in video games; Baer and Atari founder Nolan Bushnell have also spent decades arguing over who was really the first person to invent the consumer video game.
Anyway, to set the record straight, Baer is releasing a book this spring with Rolenta Press. As a prequel, High Times (!) did an interview with him, and it’s a joy to read — mostly because Baer, who is by now like 117 years old, has clearly long ago ceased to give a crap what people think of him, and thus is piercingly funny and sharp in his tale-telling. Here he is on his military employers:
The technology was almost ready, and like anything new, there was a 50-50 chance that it would go over. I didn’t think of it as particularly important. The plan was to have a gadget with a TV set on top of it. I’m working for a company that only builds military products, and I’m an engineer exercising my freedom because I’m a division manager thinking I can do whatever I damn well please. That’s how that came about. Numerous people, including my boss, an executive VP, took great pains for several years to tell me that I was screwing around with things I shouldn’t be screwing around with. Then, ten years later, money started coming in as a result of various trials and court cases that we had won, and everybody stopped asking me if I was still screwing around with that stuff. On top of that, under contract, everybody reminded me of how supportive they’d been. Supportive my ass.
Even cooler, Baer discusses the sound design of Simon. As you might recall, Simon had only four notes — one for each color. But each note had to work perfectly with the others: Since the patterns Simon displayed were random, the notes would be played in equally random patterns. So how do you pick four notes that sound good no matter what order or periodicity they’re played in?
You pick a real-life instrument that itself only has four notes — the bugle. All four notes sound harmonious when played in any order, which is precisely why the bugle is such a useful military signalling technology.
(Before anyone rushes in to correct me on this last point — yes, I know that a skilful bugle player can produce a much larger range of notes than just four. You can also produce a ton of weird in-between noises if you’ve got good embouchure control. I played the French horn for eight years myself, and my teacher forced me to play for weeks at a time without using the valves, until I too became able to produce a range of notes so high-pitched that fish would wash up dead on the shores of Lake Ontario. I still think Baer’s epiphany was cool.)
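Baer’s trick can actually be checked with a little arithmetic. A bugle has no valves, so its playable pitches are all harmonics of one fundamental, which means every interval between them is a small-integer frequency ratio — i.e. a consonance. Here’s a minimal sketch (the harmonic numbers are my choice of four common bugle pitches, not Baer’s documented Simon tones):

```python
from fractions import Fraction
from itertools import combinations

# A bugle's playable pitches are harmonics of a single fundamental. Four
# common bugle pitches correspond to harmonics 3, 4, 5, and 6
# (e.g. G4, C5, E5, G5 on a C bugle).
HARMONICS = [3, 4, 5, 6]

def interval_ratios(harmonics):
    """Frequency ratio of every pair of pitches, as a reduced fraction."""
    return {
        (lo, hi): Fraction(hi, lo)
        for lo, hi in combinations(sorted(harmonics), 2)
    }

# Every interval reduces to a simple consonance (4:3, 5:3, 2:1, 5:4, 3:2, 6:5),
# so any random ordering of the four notes still sounds harmonious.
for pair, ratio in interval_ratios(HARMONICS).items():
    print(pair, ratio)
```

No interval in that set is more complicated than 6:5 — which is exactly why random Simon sequences never sound sour.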
(Thanks to El Rey for this one!)
Pretty much anytime you want, you know. It’s called the “Roadster” chair. Just go over to the web site of Szado Design and order me one. Email me and I’ll give you my shipping address. It’s only $3,740 and I think if I sat in one of these I could die happy.
(Thanks to Sensory Impact for this one!)
Now that airlines are developing increasingly long-range airplanes, they’re running into some interesting problems. Say you’re the captain of Singapore Airlines’ 12,600-kilometer run from Singapore to Los Angeles — which, at 17 hours in the air, is the longest non-stop flight in the world. Now suppose someone has a heart attack or their appendix bursts. An on-board doctor could help the person out, and in a pinch, the flight could simply land somewhere quickly.
But what if someone dies on the flight — peacefully and in their sleep? What do you do then? Believe it or not, this actually happens with some regularity. And in situations like this, the airlines apparently just keep on flying. Because really, whaddya gonna do? Guy’s dead. It’s not like there’s any need to rush him to an E.R. Still, there’s the touchy question of what to do with the body. If someone passes away one hour into a 17-hour flight, that’s an awfully long time to leave a cadaver sitting in its seat, particularly if there are people sitting on either side. But where else would you put it?
Singapore Airlines has a solution: a new “corpse cupboard” will be installed on its new fleet of Airbus A340-500 aircraft. As the F2 Network reports:
The cupboard, near an exit door, will be fitted with special straps to prevent the body moving during turbulence.
A spokesman for Singapore Airlines, Rick Clements, said the cupboard would be used only if there was no other suitable space in the cabin.
“On the rare occasion when a passenger passes away during a flight, our crew do all that is possible to manage the situation with sensitivity and respect,” Mr Clements said.
“Unfortunately, given the space constraints in an aircraft cabin, it is not always possible to find a row of seats where the deceased passenger can be placed and covered in a dignified manner, although this is always the preferred option.”
(Thanks to Howard Sherman for this one!)
In 1964, Arno Penzias and Robert Wilson of Bell Labs noticed a strange hiss coming from their astronomical radio antenna. They eventually realized it was the cosmic microwave background radiation — the background hiss of the universe. Since then, scientists have discovered slight ripples in the background noise, which correspond to various cosmic events in the history of the universe. But the frequencies of these ripples are about 50 octaves lower than what a human can perceive.
So Mark Whittle decided to make them audible. The University of Virginia astronomer shifted the sounds up into the human range and produced a series of 5-second .wav files of the resulting noise. The New York Times wrote a story about it, whereupon the guys at Engadget turned the .wav files into MP3s — so you can download the sound of the universe being born and play it on infinite loop on your iPod.
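The pitch-shifting itself is simple arithmetic: raising a sound by one octave doubles its frequency, so a 50-octave lift multiplies the frequency by 2^50 — roughly a quadrillion. A quick sketch (the example frequency is illustrative on my part, not Whittle’s actual data):

```python
# Raising a pitch by one octave doubles its frequency, so shifting by
# n octaves multiplies the frequency by 2**n. Fifty octaves is a factor
# of 2**50, roughly a quadrillion.
def shift_octaves(freq_hz, octaves):
    """Frequency after shifting up (or, with negative octaves, down)."""
    return freq_hz * 2.0 ** octaves

# Illustrative numbers only: a cosmic ripple oscillating once every
# ~300,000 years has a frequency of roughly 1e-13 Hz. Fifty octaves up,
# it lands near 100 Hz -- squarely in the audible range.
print(shift_octaves(1e-13, 50))
```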
And what, precisely, does the music of the spheres actually sound like? A totally gnarly 70s synthesizer being coaxed into producing triptastic sound f/x for an unnamed episode of Doctor Who, that’s what! Seriously, this is some kooky stuff. One of the files — I can’t link to it directly, because they all download in a single .zip folder — sounds almost precisely like the opening “note” produced by the classic Robotron 2084 arcade game in “sell” mode. (If you want to compare it for yourself, go to Shockwave, where they’ve emulated Robotron in Flash.)
Even more fun is Whittle’s musical analysis of the birth of creation:
“For the first 400,000 years,” Dr. Whittle said, “it sounds like a descending scream falling into a dull roar.”
Over the first million years, Dr. Whittle said, the music of the cosmos also shifted from a pleasant major chord to a more somber minor one.
I can’t wait for some DJ to sample these sounds and produce “house music of the spheres.” Hell, maybe I’ll dust off my copy of Acid Pro and make it myself.
(Thanks to Slashdot for this one!)
Kandinsky’s art has never really been my cup of tea. I dig abstract art, but I need a little more prettiness in my abstractness. Kandinsky was more into the sort of deeply angular weirdness that inspired many devoted postmodern followers, and probably a few Yes album covers. (Interestingly, Kandinsky was something of a synesthete, and claimed that he saw color when he listened to music.)
Anyway, Robert Peake is both a very cool programmer and a fan of Kandinsky, so he created a virtual-artist artificial-intelligence program that produces Kandinsky-like drawings — including the one above! Just pump in a few variables — how many shapes you want, the height and width of the picture — and it’ll kick out insta-art. His FAQ is a blast to read, and includes this Q&A:
C’mon, isn’t this really just a bunch of random shapes?
Well, yes and no. I have already begun to make aesthetic decisions, like assigning a color palette to the shading and constraining how far the shapes can be flung. Also, the “randomSymphonic” class does something very common in Kandinsky’s work - arranges either 3, 5 or 7 shapes in a vertical pattern. Usually, these look like brush strokes. It’s a start.
So, already the beginnings of human aesthetics are poking their way in to the picture. I’ll need to bear these in mind as a baseline when I start applying more advanced principles. The ultimate goal of this project is to apply and then test (via human feedback) a wide variety of aesthetic principles. Because it is web-native, the potential for getting a large sample size for human data on what “is and is not art” as well as mapping this data to precise heuristics makes this project interesting.
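The FAQ’s core idea — pure randomness with aesthetic constraints layered on top — is easy to sketch. Here’s a toy version (all names and the palette are my own invention, not Peake’s actual code): shapes are random, but the count is forced to 3, 5, or 7 and they get stacked in a rough vertical column of short, wide “brush strokes,” mimicking what he describes his “randomSymphonic” class doing.

```python
import random

# Hypothetical palette -- constraining color choice is one of the "aesthetic
# decisions" the FAQ describes layering on top of randomness.
PALETTE = ["#1d3557", "#e63946", "#f1faee", "#a8dadc"]

def random_symphonic(canvas_w=400, canvas_h=400, seed=None):
    """Return specs for 3, 5, or 7 stroke-like shapes in a vertical column."""
    rng = random.Random(seed)
    count = rng.choice([3, 5, 7])                  # always an odd count
    column_x = rng.randint(canvas_w // 4, 3 * canvas_w // 4)
    step = canvas_h // (count + 1)
    return [
        {
            "x": column_x + rng.randint(-10, 10),  # slight jitter off the column
            "y": step * (i + 1),                   # evenly spaced down the canvas
            "w": rng.randint(20, 80),
            "h": rng.randint(5, 15),               # short and wide: stroke-like
            "color": rng.choice(PALETTE),
        }
        for i in range(count)
    ]

print(random_symphonic(seed=42))
```

Even this tiny amount of constraint makes the output feel less like noise and more like intent — which is exactly Peake’s point.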
(Thanks to Slashdot for this one!)
Slate has just posted my latest video-game column — and this time it’s about a paradox that’s afflicting today’s games: The more “realistic” they get, the creepier and weirder the human characters look. I cite the theories of Japanese roboticist Masahiro Mori thusly:
In 1978, the Japanese roboticist Masahiro Mori noticed something interesting: The more humanlike his robots became, the more people were attracted to them, but only up to a point. If an android became too realistic and lifelike, suddenly people were repelled and disgusted.
The problem, Mori realized, is in the nature of how we identify with robots. When an android, such as R2-D2 or C-3PO, barely looks human, we cut it a lot of slack. It seems cute. We don’t care that it’s only 50 percent humanlike. But when a robot becomes 99 percent lifelike—so close that it’s almost real—we focus on the missing 1 percent. We notice the slightly slack skin, the absence of a truly human glitter in the eyes. The once-cute robot now looks like an animated corpse. Our warm feelings, which had been rising the more vivid the robot became, abruptly plunge downward. Mori called this plunge “the Uncanny Valley,” the paradoxical point at which a simulation of life becomes so good it’s bad.
As video games have developed increasingly realistic graphics, they have begun to suffer more and more from this same conundrum. Games have unexpectedly fallen into the Uncanny Valley.
You can read the rest of the piece for free on Slate here! If you have any comments to post in my discussion boards here, you might also want to cut and paste them into the Fray, Slate’s own discussion area, which always appreciates smart contributions.
Sorry for the paucity of entries in the last few days — I’m on the road, travelling for a story, and have only sporadic Net access! But I’m collecting various weird items to post when I’m back on Friday.
Lego has been called the ultimate geek toy because it’s so deeply mathematical. Any kid who messes around with Lego quickly learns interesting aspects of division, multiplication, and even calculus-like ratio shifting and fractal concepts of internal self-replication. That’s because when you try to assemble complex 3D figures, you have to do a lot of subtle counting — particularly if you’re attempting to create a curved angle using square-edged bricks.
Of course, any kid who hacks away at this stuff soon realizes the similarity between pixels on a screen and Lego blocks. Computer icons are created from zillions of teensy blocks, and this is precisely what’s given birth to the enormous community of Lego artists who replicate famous pixellated images — from Mario to the Macintosh boot-screen design — using Lego bricks.
Now a company called Pixelblocks has gone one step further. They’ve engineered one-centimeter cubes that click together on all six sides, with different levels of gradation, to allow for ever more subtle curves. (I’m not explaining this well, but check out the animation that shows how it works.) Even cooler, the blocks are translucent, so anything you make can be backlit — precisely like an on-screen pixellated image, or perhaps a stained glass window. Indeed, the company itself refers to Pixelblock creations as “digital stained glass,” which nicely closes the historic loop: It reminds us that some of the core graphical technologies of today’s computers were borrowed directly from the techniques of ancient artists. Antialiasing, for example, was originally invented by medieval tapestry makers, and is a regular part of today’s needlepoint designs.
Pixelblocks has even created a little online application you can use to process a digital photo and create a block-plan you can print up that will let you render the picture in Pixelblocks. Ian of Water Cooler Games bought a set of the blocks and used it to render a Pacman ghost in “vulnerable” mode! Imagine taking a picture of your spouse’s head and rendering it as a three-foot-tall translucent Pixelblock image.
Actually, who am I kidding. The first thing people are gonna use this for is to generate enormous pixellated versions of Jenna Jameson.
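Joking aside, the heart of a photo-to-block-plan converter like this is pretty simple: shrink the image down to the block grid, then snap each pixel to the nearest color you can actually buy a block in. A minimal sketch of the general idea (the palette and function names are hypothetical, not Pixelblocks’ actual tool):

```python
# Hypothetical palette of available block colors, as RGB tuples.
BLOCK_COLORS = {
    "white": (255, 255, 255), "black": (0, 0, 0),
    "red": (220, 40, 40), "blue": (40, 60, 200), "yellow": (240, 210, 40),
}

def nearest_block(rgb):
    """Name of the block color closest to a pixel, by squared RGB distance."""
    return min(
        BLOCK_COLORS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, BLOCK_COLORS[name])),
    )

def block_plan(pixels):
    """Map a 2-D grid of (already downsampled) RGB pixels to block names."""
    return [[nearest_block(px) for px in row] for px_row_i, row in enumerate([*pixels]) for row in [row]][:len(pixels)] if False else [[nearest_block(px) for px in row] for row in pixels]

# A 2x2 "photo": off-white, near-black, dull red, muted blue.
print(block_plan([[(250, 250, 245), (10, 5, 8)], [(200, 50, 60), (30, 50, 180)]]))
```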
(Thanks to Ian for this one!)
I think this has been blogged about before, but there’s a big trend afoot amongst geeks to use the game Dance Dance Revolution — where you bounce around on a dance pad — as a form of exercise. Now there’s even a web site devoted to people using it thusly, called Get Up Move. Check out the testimonial from Matt Keene, a 6’5” guy who shed enormous amounts of poundage:
I lost about 140-150lbs with the help of DDR and a weight bench. But before I found DDR I tried walking and a weight bench and I only lost 20lbs, then after DDR it melted off in under a year. So the main factor of my weight loss is the intensive cardio workout that is DDR. I played about 4-5 times a week with about… 18 songs a day. I now play like everyday.
I don’t exercise, other than walking a lot in my everyday life. And one reason the gym holds no allure for me is that it’s deadly boring. Sports — since they’re games — would be a terrific motivating force for me, but it’s hard to participate in team sports in Manhattan because green space is at a premium. So the idea of having a physical, game sport that can be played in your living room is insanely brilliant.
(Thanks to Slashdot for this one!)
I’ve discussed the weirdly high intelligence of parrots before, but here’s more evidence that birds can be quite brilliant — and even have a “theory of mind”. A theory of mind is the ability to understand that other beings are sentient and have their own personal thoughts. Children generally develop this at 18 months, and you can tell because they learn to follow the gaze of another person and understand something about the gazer’s thoughts and internal state.
Pretty sophisticated, eh? Except that a couple of British scientists recently did “theory of mind” experiments with ravens and found that they, too, seemed to be able to grok complex stuff about a human gazer. More interestingly, ravens seem to know when other ravens are checking them out, and are able to dissemble and deceive. As The Economist reports, one scientist noticed a particularly weird interaction between two ravens, one dominant and one subordinate:
The task was to work out which colour-coded film containers held some bits of cheese, then prise the containers open and eat the contents. The subordinate male was far better at this task than the dominant. However, he never managed to gulp down more than a few pieces of the reward before the dominant raven, Munin, was hustling him on his way. Clearly (and not unexpectedly) ravens are able to learn about food sources from one another. They are also able to bully each other to gain access to that food.
But then something unexpected happened. Hugin, the subordinate, tried a new strategy. As soon as Munin bullied him, he headed over to a set of empty containers, prised the lids off them enthusiastically, and pretended to eat. Munin followed, whereupon Hugin returned to the loaded containers and ate his fill.
At first Dr Bugnyar could not believe what he was seeing. He was anxious about sharing his observation, for fear that no one would believe him. But Hugin, he is convinced, was clearly misleading Munin.
(Thanks to Robots.net for this one!)
The Cassini space probe has entered the “Saturn planetary system” — the zone in which Saturn’s gravitational field is so dominant that it becomes a mini solar system, with a gazillion of its own moons. Scientists are most interested in the huge moon Titan, and NASA Cassini imaging expert Carolyn Porco has written a short essay describing Titan.
I kind of cracked up when I read it, because it’s a superb example of a genre I like to call “astronomer travel literature”: Floridly metaphoric evocations of extraterrestrial destinations, which feel as though they’d been excerpted from a Fodor’s travel guide in the 23rd century:
Patchy methane clouds float several miles above the icy ground. In places, large, slow-moving droplets of methane mixed with other liquid organics fall to the surface in cold but gentle rains, cutting gullies, forming rivers and cataracts, carving canyons, and filling basins, craters and other surface depressions. Imagine Lake Michigan brimming with paint thinner.
Above the methane clouds and rain lies two hundred kilometers worth of globe-enveloping red smog, making the Titan nights starless and the days eerie dark, where high noon is as dim as deep Earth twilight. Over eons, smog particles have drifted downwards, growing as they fell, to coat the surface in a blanket of organic matter. On high, steep slopes, methane rains have washed away this sludge, revealing the bright bedrock of ice. Could Xanadu, the brightest feature on Titan, be a high, methane-washed, mountain range of ice?
Man alive, they actually call that outcropping “Xanadu”. I love NASA nerds.
(Thanks to Slashdot for this one!)
Nokia has just announced a phone that can send “airtexts” — messages that are displayed in the air by a set of LEDs. As their press release notes:
By waving the Nokia 3220 from side to side, the LED lights of the Nokia Xpress-on(TM) Fun Shell light up to “write” a message that appears to float in mid-air. Exchanging messages across a crowded room or at open-air concerts will never be the same.
Blogger Joi Ito speculated on the possibility of this being used for some hilariously aggressive heckling at conferences. Imagine some stuffed shirt giving a speech — and looking up to see the words “YOU SUCK” floating in the middle of the audience. Heh. And speakers thought the backchannel was bad.
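The underlying trick is persistence of vision: a single column of LEDs flashes through successive columns of a bitmap font as the phone sweeps sideways, and your eye merges the columns into letters. A rough simulation (the font data and names are my own, not Nokia’s firmware):

```python
# Hypothetical 5x7 column bitmaps: one byte per column, least-significant
# bit = top LED. Each column is what the LED strip shows at one instant.
FONT = {
    "H": [0x7F, 0x08, 0x08, 0x08, 0x7F],
    "I": [0x41, 0x41, 0x7F, 0x41, 0x41],
}

def column_stream(text, gap_cols=1):
    """Yield the LED pattern to flash at each instant of the sideways sweep."""
    for ch in text:
        yield from FONT[ch]
        yield from [0x00] * gap_cols  # blank column between letters

# Render the sweep to the console (tilt your head 90 degrees to read it).
for col in column_stream("HI"):
    print("".join("#" if col >> bit & 1 else "." for bit in range(7)))
```

In the real device the timing matters, of course: the columns have to flash fast enough, relative to the speed of your arm-wave, that they land side by side in the air.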
(Thanks to Boing Boing for this one!)
Here’s one for the philosophers: Is New York’s water kosher? You’d imagine so, but recent tests have discovered that copepods — tiny, half-millimeter-long creatures — are surviving the city’s filtration process. I just had a glass of water with breakfast, and for all I know I drank a couple of these things.
The problem is, copepods are crustaceans, and that means they’re not kosher. (“And they’re ugly,” as one Orthodox Jew in Brooklyn noted.) Ingesting insects, too, is against Talmudic law. The copepods have thus triggered a huge debate amongst those who keep strictly kosher in New York, because as the New York Times points out, the tiny critters raise some weird religious questions:
What defines an insect? Does seeing one through a microscope constitute seeing one for the purposes of kosher law? And, perhaps most confoundedly, can a person legitimately claim not to see a copepod with the naked eye after looking through a microscope and learning what one looks like?
I love it. Religion has always had to grapple with the corporeal world, and it’s not getting any easier, given that modern technology actually changes the nature of reality. Last fall I wrote a piece for the New York Times Magazine pointing out that mobile phones have reshaped our notions of time and space, and, as a result, the geography of religious life:
Muslims in other countries — like Britain — have begun using a service that tells them the prayer times in Mecca, which means they essentially live in two time zones at once: local time for their professional lives and Saudi time for their spiritual lives. “They’re existing in two countries simultaneously,” Bell notes.
I'm Clive Thompson, the author of Smarter Than You Think: How Technology is Changing Our Minds for the Better (Penguin Press). You can order the book now at Amazon, Barnes and Noble, Powells, Indiebound, or through your local bookstore! I'm also a contributing writer for the New York Times Magazine and a columnist for Wired magazine. Email is here or ping me via the antiquated form of AOL IM (pomeranian99).