I’ve blogged and written journalism regularly about people’s emotional relationships with robots and artificial life forms. But the Washington Post just published one of the best things I’ve ever read on the subject — a feature article by Joel Garreau on the emotional relationships between today’s soldiers and the many robots they use to keep themselves alive.
It opens with a story about an army roboticist testing a clever new design for a robot modeled on a centipede — one that explodes land mines by intentionally stepping on them:
At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully, he says. Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.
Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.
The human in command of the exercise, however — an Army colonel — blew a fuse.
The colonel ordered the test stopped.
Why? asked Tilden. What’s wrong?
The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.
This test, he charged, was inhumane.
Awesome. And the article gets better and better — and more and more surreal — from there on in. Garreau reports on soldiers who award “purple hearts” to their bomb-defusing robots that get injured; soldiers who describe in detail the personality quirks of their ’bots (“Sometimes you get a robot that comes in and it does a little dance, or a karate chop, instead of doing what it’s supposed to do”); soldiers who take their robots on furlough, to give them “rest”.
As Garreau points out, the army’s use of robots has a cyborgic element: It’s sometimes hard to tell where the robot ends and the human begins. He tells the story of a Predator drone pilot who crash-landed a damaged Predator, and in the seconds before the crash, unconsciously lunged beneath his seat: “He had bonded so tightly with the machine hundreds of miles away that he was searching for the lever that would allow him to eject.”
(Thanks to Slashdot for this one!)
Check it out: NASA has released a CGI-animated trailer promoting our eventual return to the moon. Stylistically, as sliabh notes:
Someone at NASA has been watching way too many Battlestar Galactica and Firefly reruns …
Heh. Actually, what really cracked me up was how strangely threatening the video seemed. There’s all this creepy, minor-key horror-movie music, combined with bleed-in text that ominously proclaims: “We took a giant leap … we stopped … we’re going back.” Then there’s a shot of a lunar vessel approaching and impassively snapping pix through its single HAL-like eye. Then boom! It’s all action, with a bunch of rovers thundering across the lunar surface like beetles while launch-ships swirl overhead, all set to unsettlingly thumpy action music. It feels precisely like the trailer to the upcoming Transformers movie … except in this case the invading, marauding aliens are us. Why, yes, we humans are returning to the moon — because we’re gonna dismantle it and SLAUGHTER ANYTHING IN OUR PATH.
Seriously, it’s super weird. Did NASA actually intend this to seem so, uh, apocalyptic? The whole segment appears to have been shot not from the point of view of us optimistic, yay-for-space-exploration geeks, but from the point of view of some nameless, gentle race of peaceable moon inhabitants who are about to get totally vaporized by a ruthless horde of colonizing, gibbering humans.
(Thanks to sliabh for this one!)
Yesterday I blogged about “resuscitation science” and the startling discovery that rapidly infusing a nearly-dead person with oxygen can actually hasten their cellular death. In contrast, scientists in this area are arguing that someone who’s been deprived of oxygen for a while should be kept cold, very slowly warmed up, and only then gradually introduced to oxygen.
After reading about that, Tony Comstock emailed me with this great anecdote:
I used to be a white water river guide, and many of the most exciting rivers were fed by snowmelt and ran very cold. There was a simple saying regarding resuscitation of people who had drowned in these rivers: they’re not dead till they’re warm and dead. While this could mean performing ultimately fruitless CPR for more than an hour, it also saved lives.
As I wrote back to Tony: “I’m always intrigued to see the ways that the everyday practices of people in the world — farmers, athletes, mothers, etc. — intuit scientific principles long before scientists themselves figure them out …”
There’s a fascinating piece in today’s Wall Street Journal about how tech-savvy parents are picking unusual names for themselves and their kids — so that they’ll be more googleable. The parents, as the story points out, are aware that search engines dominate modern epistemology: If you can’t be found on Google, you don’t exist. Ask.com says that 7% of all its searches are for personal names; meanwhile, 80% of executive recruiters do an online search for applicants’ names, and 40% of people say they’ve used search engines to hunt down long-lost acquaintances. Women who acquire a super-common last name when they marry find that they vanish from the googleosphere.
In the age of Google, being special increasingly requires standing out from the crowd online. Many people aspire for themselves — or their offspring — to command prominent placement in the top few links on search engines or social networking sites’ member lookup functions. But, as more people flood the Web, that’s becoming an especially tall order for those with common names. Type “John Smith” into Google’s search engine and it estimates it has 158 million results. [snip]
Some people have taken measures to boost their visibility online, including creating listings in professional directories and paying companies to help them appear more prominently in search results. Parents-to-be routinely plug baby names into search engines to scout out the online competition. Some actors and musicians weigh the impact of less unique stage names.
The big problem with the article, though, is that it never mentions the most screamingly obvious and generally bulletproof way of ensuring you have lots of Google juice: Blogging. Today’s search engines reward people who have online presences that are well-linked-to. So the simplest way to hack Google to your advantage is to blog about something you find personally interesting, at which point other people with similar interests will begin linking to you — and the upwards cascade begins.
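The core mechanism here is worth sketching. Google’s original insight, PageRank, is that a page matters if pages that matter link to it — which is exactly why accumulating inbound links from fellow bloggers works. Here’s a toy version of the algorithm (textbook power iteration, not Google’s actual production system; the page names are made up):

```python
# Minimal PageRank sketch: a page's score is fed by the scores of
# the pages linking to it, so well-linked pages rise to the top.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page gets a small baseline; the rest flows along links.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling page: nothing to pass along
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# A blog with two inbound links outranks an unlinked page.
web = {
    "blog": [],
    "fan1": ["blog"],
    "fan2": ["blog"],
    "loner": [],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # prints "blog"
```

The point of the toy example: the “blog” page does nothing except get linked to, and that alone puts it on top — which is the upwards cascade described above.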
This is precisely one of the reasons I started Collision Detection: I wanted to 0wnz0r the search string “Clive Thompson”. I was sick of the British billionaire and Rentokil CEO Lord Clive Thompson getting all the attention, and, frankly, as a freelance writer, it’s crucially important for anyone who wants to locate me — a source, an editor, old friends — to be able to do so instantly with a search engine. Before my blog, a search for “Clive Thompson” produced a blizzard of links dominated by the billionaire; I appeared only a few times in the first few pages, and those were mostly just links to old stories I’d written that didn’t have current email addresses. But after only two months of blogging, I had enough links to propel my blog onto the first page of a Google search for my name. Sometime soon afterwards I moved to the #1 spot, and these days a search for the single word “clive” — an extremely common name outside the US — produces my blog as the fifth result on the first page. Woo hoo!
Okay, I’ll stop the gratuitous boasting. But the question remains: Why didn’t the Journal piece talk about this? Possibly because the writer had unconsciously adopted the corporate/advertising view of the Internet, which is that, dammit, there’s got to be some way to throw money at this problem and automatically vault our company’s crapola product to the center of the nation’s attention, right? Corporate interests generally hate Google, because they cannot easily buy their way to prominence. Not that it stops them from trying: That’s why there’s an industry in “search engine optimization” — which the Journal duly namechecks — and, of course, splogs and spambots. But the truth is that the only way to get really good, durable google juice is to work for it. There’s no magic solution. You certainly can’t just sit around and expect the search engines to love you because you’re, like, awesome.
I particularly like the fact that whenever someone bemoans the ungoogleability of those with unduly-common names, they use the example of “John Smith.” Hey, all you John Smiths: You’re doomed! Give up! You’ll be drowned in a tsunami of hits for the historical John Smith of Pocahontas fame, right?
Yet if you actually do a Google search for John Smith, you find that indeed, the top few hits are for the famous John Smith, as well as the current UK politician John Smith. But you’ll also find that the seventh hit on the first page is for … John Smith, a British folk musician. This isn’t a guy with a big ad budget, or even, as far as I can tell, any advertising budget at all; he sells through CD Baby, which indicates to me that he’s totally indie. But clearly he’s amassed a lot of Google juice, and it’s probably because he’s made a few smart moves that are likely to attract links: He offers plenty of samples of his music, as well as several completely free MP3s, and has a guestbook for comments. I bet if he added a blog to his site he could kick himself up even higher in the Google rankings.
The problem with this solution to Google anonymity is precisely that it requires so much work. As people noted in my thread on Radical Transparency a few months ago, the Web and the blogosphere privilege the time-rich, which means they’re hardly meritocratic. I regularly get swamped with work and don’t blog for weeks at a time, which I personally hate — when I’m not blogging it feels like a juicy part of my brain has shut down, and I generate far fewer useful ideas; but it also terrifies me because I know that I’m probably losing Google juice. I’m rarely time-rich.
On the other hand, I prefer a world that gives some advantage to the time-rich over one that reserves all the advantages to the money-rich.
Check out this totally awesome video: An interview with the guy who recorded the theme music for the Doctor Who show! You get to watch him twiddling the knobs on his utterly gnarly 1970s synthesizer, reproducing the swoopy, buzzy opening tones — and chatting with the documentary host about the nuances of ring modulators and how they impacted the crafting of the Dalek’s vocal inflections.
I could sit around watching stuff like this until I die. It reminded me of the discussion that erupted back when I blogged about Coagula, a little app that translates images into synth-like sounds; when I translated a picture of my face into a noise, my friend Eric Weissengruber pointed out that “your face sounds like the beginning of the old Doctor Who theme.” It also reminds me that I’ve always wanted one of these suckers for my guitar …
(Thanks to Music Thing for this one!)
Given the increasing attention to obesity in America — which is either a major public-policy challenge or a moral panic, or both, depending on your point of view — I’ve been reading up on the science behind it all. So I was totally into this awesome piece in today’s Science section of the New York Times, in which Gina Kolata surveys the four-decade-long work of the research physicians Rudolph Leibel and Jules Hirsch (pictured above, in his Times snapshot).
They did some incredibly hard-core experiments to explore the reasons people got so fat. In one study, they took a bunch of obese people who agreed to live at Rockefeller University Hospital for eight months — dieting carefully to bring their weight down by about 100 pounds each. It worked, and when they examined the patients’ fat cells, the scientists saw that the cells had transformed: They used to be huge and stuffed with yellow fat, and now were normal in size. But they all regained their weight later on. As Kolata reports:
[That] led them to a surprising conclusion: fat people who lost large amounts of weight might look like someone who was never fat, but they were very different. In fact, by every metabolic measurement, they seemed like people who were starving.
Before the diet began, the fat subjects’ metabolism was normal — the number of calories burned per square meter of body surface was no different from that of people who had never been fat. But when they lost weight, they were burning as much as 24 percent fewer calories per square meter of their surface area than the calories consumed by those who were naturally thin.
The Rockefeller subjects also had a psychiatric syndrome, called semi-starvation neurosis, which had been noticed before in people of normal weight who had been starved. They dreamed of food, they fantasized about food or about breaking their diet. They were anxious and depressed; some had thoughts of suicide. They secreted food in their rooms. And they binged.
The Rockefeller researchers explained their observations in one of their papers: “It is entirely possible that weight reduction, instead of resulting in a normal state for obese patients, results in an abnormal state resembling that of starved nonobese individuals.”
In a reversal of this study, Ethan Sims of the University of Vermont took a bunch of naturally slender people and had them eat up to 10,000 calories a day, until they became obese. Afterward, they had no trouble losing the weight and keeping it off. In a later study, Hirsch and Leibel examined children who’d been adopted and found that their adult propensity for obesity — or thinness — closely tracked that of their biological parents. If their biological parents were obese, it didn’t matter if the kids grew up with skinny parents who taught them healthy eating patterns; they most often wound up obese too.
The public-policy implications, the scientists argue, are significant. If it’s true that the children of obese parents are the ones most likely to become obese — “80 percent of the offspring of two obese parents become obese, as compared with no more than 14 percent of the offspring of two parents of normal weight” — then it’s a waste of money to target the anti-obesity message at the children of skinny folk. One could do more to fight obesity by devoting resources to locating and supporting those most at risk.
Mind you, given that being fat carries such a powerful social stigma, one might wonder about the emotional ramifications of a nationwide effort to round up all the fat kids and target them with weight-control programs. Either way, this question of the genetic basis of obesity is really interesting, and I now want to read Kolata’s book Rethinking Thin: The New Science of Weight Loss — and the Myths and Realities of Dieting.
A while back, I blogged about how anesthesiologist Patrick Kochanek had created “zombie dogs” — canines that were officially brain-dead for three hours, then slowly brought back to life. What I didn’t know is that he’s part of an apparently booming new field of “resuscitation science,” medical research aimed at dramatically refashioning our ideas about how to bring nearly-dead people back from the brink.
Some of the discoveries are pretty wild. According to a piece in Newsweek, Lance Becker — another emergency-medicine expert — has recently made headway in grappling with one of the biggest mysteries: Why do we die when our oxygen flow is cut off? Traditionally, doctors have assumed it’s because our cells need oxygen to live, so they die when they’re deprived. But that theory was dealt a big blow when scientists finally started looking at oxygen-starved cells under a microscope, only to find that they survived just fine for up to several hours when cut off from blood flow (and thus oxygen).
Becker, in contrast, discovered something really nuts: That when you deprive cells of oxygen for more than five minutes, they die not because of an immediate lack of oxygen. They die when the oxygen supply is resumed.
Why? As Newsweek reports:
Biologists are still grappling with the implications of this new view of cell death — not passive extinguishment, like a candle flickering out when you cover it with a glass, but an active biochemical event triggered by “reperfusion,” the resumption of oxygen supply. The research takes them deep into the machinery of the cell, to the tiny membrane-enclosed structures known as mitochondria where cellular fuel is oxidized to provide energy. Mitochondria control the process known as apoptosis, the programmed death of abnormal cells that is the body’s primary defense against cancer. “It looks to us,” says Becker, “as if the cellular surveillance mechanism cannot tell the difference between a cancer cell and a cell being reperfused with oxygen. Something throws the switch that makes the cell die.”
The implications for ER medicine are huge. If Becker’s right, current emergency practices actually kill people — because when you’re brought into the ER and you’re not breathing, the first thing they do is flood you with oxygen. But all that does, Becker claims, is radically accelerate the speed at which your cells die. What ER doctors ought to do instead, he theorizes, is chill your body down and very, very slowly warm it up while gradually reintroducing oxygen.
Which sounds, now that I go back and look at it, rather like the zombie-dog experiment. Either way, this resuscitation area is bound to get curiouser and curiouser as time goes on.
(Thanks to Plastic for this one!)
Wired News just published my latest video-game column, and this one is about physical sports. It’s got an interesting pedigree. A while ago, I was chatting with Hasan Elahi, an art professor at Rutgers University (I was actually interviewing him for an upcoming story I’ve written about him for Wired magazine). I mentioned to Elahi that I’m fascinated by game design, and wondered idly why — in an age where video-game design is flowering — there’s almost no one designing new physical sports. Elahi informed me that one of his grad students, Tom Russotti, actually was designing new sports, and he put me in touch with him. Ta da: A few months later, I hooked up with Russotti in Prospect Park here in Brooklyn, where I got a chance to play one of his new sports! (That video above is a record of our spastic game: I’m the guy in the jeans who appears in the first scene.)
Interestingly, this column highlights the dirty secret about my Wired News gig, which is that it really isn’t about video games at all. It’s about ludology and the philosophy of play! It just turns out that video games are the best possible vehicle to discuss ludology these days — so they’re the natural subject matter.
Anyway, the column is online here, and a permanent copy is archived below:
The dawn of “Aesthletics”
Why don’t game designers create more real-world, physical sports? I talk to one guy who does
by Clive Thompson
I catch the Whiffle ball with one hand, spin around, and begin dribbling it off my bat as I drive for the goalposts. Damn: I’m swarmed by defensemen frantically waving their bats and trying to block my shot. Taking a dive for it, I spy an opening — then smash the shot past the goalie.
Woo hoo! I’ve just scored the first goal in a ferocious game of “Whiffle Hurling.”
Yes, Whiffle Hurling. I suspect you’ve never heard of it. Actually, I’m positive you’ve never heard of it — because the sport didn’t exist until two years ago.
Whiffle Hurling was invented in July 2005 by Tom Russotti, an MFA grad student at Rutgers University — and the sole practitioner of what he calls “aesthletics.” So far, only 10 games of Whiffle Hurling have ever been played. I can personally attest that it’s insanely fun and offers up a genuinely new blend of activity: The crazy intensity of Irish hurling mixed with the low-stress, low-injury appeal of Whiffle ball. It manages to be simultaneously casual and intense, which is perfect for nerds like me.
And it also poses an interesting question: Why don’t more people invent new sports?
That musical score above? It’s a piece of classical music based on the structures of the protein Thymidylate Synthase A. Some biologists at UCLA developed a set of nifty parameters for translating the structures into music, ran a couple of different proteins through them, and produced a pile of sheet music. You can check out the sheet music and listen to the MIDI files played via your browser’s built-in music module — a piano, in my case — here! (To listen to that specific bit of music pictured above, click here.)
Interestingly, the results sound eerily like the cheesy MIDI soundtracks to mid-80s side-scrolling arcade shoot-’em-up games like Gradius or Scramble. I’d love it if a casual-game designer used this stuff for a new Flash-based shooter!
There are also some possibly practical uses for this technique: Listening to proteins is a novel way of analyzing their structures and how they work. As the researchers write:
Huntington’s disease is an example of a triplet repeat disorder in which an expansion of a repeated glutamine sequence causes the protein to lose its proper function. Such an expansion leads to a late-onset neurological disorder. The LacY permease protein spans the membrane of Escherichia coli and has a distinct hydrophobic region of phenylalanines. This sequence facilitates the protein to move through the bacterial membrane. In the Huntingtin example, one can hear an obvious repeated pattern of glutamines and polyprolines, and this pattern can be compared to the less obvious repeated pattern of phenylalanines heard in the LacY permease.
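The general idea is simple enough to sketch in code. Here’s a toy version — my own illustrative residue-to-pitch mapping, not the UCLA group’s actual parameters — that turns an amino-acid sequence into MIDI note numbers:

```python
# Toy sketch: translate an amino-acid sequence into musical pitches.
# The mapping below (residues to a C-major scale) is arbitrary; the
# researchers used their own, more carefully chosen parameters.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI note numbers, middle C up

def protein_to_notes(sequence):
    """Return a list of MIDI note numbers, one per recognized residue."""
    notes = []
    for residue in sequence.upper():
        if residue not in AMINO_ACIDS:
            continue  # skip gaps and unknown characters
        idx = AMINO_ACIDS.index(residue)
        octave, degree = divmod(idx, len(C_MAJOR))
        notes.append(C_MAJOR[degree] + 12 * octave)
    return notes

# A glutamine (Q) repeat -- as in the huntingtin example -- becomes an
# audibly repeated pitch, which is the pattern the researchers describe.
print(protein_to_notes("QQQQ"))  # prints [83, 83, 83, 83]
```

Even in this crude form you can hear why the technique works as analysis: a repeated subsequence in the protein becomes a repeated motif in the music.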
I love the idea of using music and sound as a new vector for studying biology. It reminds me a bit of Jim Gimzewski’s work on “sonocytology” — listening to the vibrations of individual cells, which I blogged about two years ago.
Everyone knows that if you don’t want to screw up your back, you should lift heavy objects by bending your knees, not your back. But 20 per cent of all workplace back injuries in North America are caused not by lifting — but by pushing and pulling heavy weights. So what’s the correct way to pull or push?
For years, scientists assumed that the forces at play in your body when you push or pull never spin out of control the way they do with lifting. Whenever they measured these forces, they didn’t seem big enough to cause serious damage.
But recently, two biomechanists — Kevin Granata of Virginia Polytechnic Institute and Bradford Bennett of the University of Virginia — realized the previous experiments had left something out: The “cocontraction” of muscles, which is the body’s use of opposing muscle groups to stabilize the body. So they wired up a bunch of experimental subjects with goniometers to measure muscle activity, and had them push levers at various force levels: 15 per cent of their body mass, 30 per cent, or as hard as they could push. The results? As Cognitive Daily reports:
… when cocontraction was factored in, the force on the spine increased by as much as 400 percent, depending on the height of the handle and the amount of force applied. Cocontraction was greatest when participants bent lower to push on the handle, as they would when pushing the heaviest loads. In these cases, stress on the spine matches stress in lifting tasks and may be what leads to the most injuries.
They also note that as the amount of pushing force increases, the vertical component of the pushing force also increases, because the volunteers need the corresponding downward force on their feet to gain traction. This makes a pushing action more like a lifting action, again potentially increasing the chance of injury.
Ow. I don’t think their study formally concludes the best way to push or pull, but it’s a cool area to study. It also makes me feel sore just reading about it, because I’ve had a hair-trigger back ever since grade 13 in high school, when I worked at a lighting store and screwed myself up ferociously by lifting superheavy chandeliers with one hand while wiring them into the ceiling with the other. I had been an otherwise perfectly healthy 18-year-old, but a year of that sort of work left me so bent over with pain that I had to lie in bed for four days recovering. “If I’d known you were a cripple,” my boss told me as I limped home, “I wouldn’t have hired you.” Nice.
A really sad note: Granata, a co-author of this study, was one of the professors who died in the recent Virginia Tech shootings.
It’s real. In fact, it’s a lovely example of the noble Grimpoteuthis — the crazy-deep-water-dwelling “Dumbo Octopus”, so named for its big floppy ears (or whatever the heck those things are). Collision Detection reader Paul Gemperle sent me a couple of links to some amazing photos of Grimpoteuthis, as well as a short French documentary of the thing in action.
The video is hallucinogenically strange in the way that only films of benthic-depth sea creatures can be: Gauzy see-through animals lazily turn themselves inside out, ultracreepy writhing masses of collective-life-form tentacles lunge for prey, and Dumbo octopuses impassively regard the camera lens with what appears to be an intelligence probably not much lower than a member of Congress. In one shot, a huge-ass lidded eye attached to some snouted cephalopod opened up to stare at me and I was like, man — this stuff looks like a Ridley Scott f/x masterpiece. Or a really awesome video game.
All of which made me think: Deep-sea life is so aggressively odd-looking that it’s indistinguishable from Hollywood CGI creations. Sure, that Dumbo octopus is real; but if it weren’t, how could you tell? Someone ought to harness this blurriness as a pedagogical technique. They could make a short documentary aimed at grade-school kids that mixes fake CGI sea animals with real ones, and challenges them to figure out which is which. It’d be a nice way to hammer home the central fun of marine biology, and of science in general: Why bother making things up when reality outweirds you every day?
(Thanks to Paul for this one!)
When you’re trying to lose weight, portion control is a big deal, as everyone knows. But how well can we actually tell how big our portions really are? A bunch of elegant experiments by Cornell psychologist Brian Wansink have proved that we suck at judging the quantity of food we’re eating, because we’re regularly tricked by size illusions: If the container serving us is really huge, the amount of food seems smaller and we inadvertently binge.
Wansink’s experiments are incredibly clever at teasing out how easily we’re fooled. In 2005, he offered moviegoers two-week-old popcorn — “stale enough to squeak when it was eaten” — in medium and super-big buckets; despite the fact that the food tasted like crap, the ones served in super-big buckets ate 33.6% more. (A PDF of his paper is here.) In a 2003 experiment, Wansink served people soup in a bowl that secretly and slowly refilled itself as people ate. They ate 73% more than those served the same soup in regular bowls.
As a profile of Wansink in today’s New York Times points out:
The scariest part is that most of us think we are immune to these hidden persuaders. When the moviegoers were told about the popcorn experiment afterward, most of them scoffed at the idea that their bucket size had any effect on them. “Things like that don’t trick me,” one of the gorgers said.
And as Wansink notes in his soup paper — a PDF is here — the folks who consumed 73% more did not perceive themselves to have eaten any more than normal.
I have this problem with coffee all the time. Coffee shops keep on serving java in increasingly massive cups, so by noon I’ve generally consumed enough caffeine to exterminate a house pet. Indeed, if you really want to eat less, Wansink’s big suggestion is to buy antique plates off eBay, because plates from the 40s are way, way smaller than today’s plates — so you wind up eating less because your portions seem bigger. He uses that picture above to illustrate the perceptual trick: The black dot’s the same size in each case, but looks smaller in the second picture because the surrounding dots are bigger.
Apparently Wansink wrote up all his findings in a book last year called Mindless Eating, but somehow it slipped under the radar for me; I’m going to go buy it now!
See that object above, on the right? It’s the most perfectly spherical object ever made by hand. It’s only the size of a ping pong ball, but its surfaces are so smooth that were it blown up to the size of Earth, the tallest mountain would be only eight feet high. It’s one of four spheres that are currently floating in Gravity Probe B, which is possibly the coolest piece of space engineering evah.
Gravity Probe B is an audacious attempt by NASA and Stanford to confirm Einsteinian physics by measuring, with utterly berserk precision, how much Earth’s enormous mass curves space-time around it. The Probe is a satellite that contains four chambers of superfluid helium, chilled to a martini-like minus-271 degrees Celsius. A sphere — composed of fused quartz — floats inside each chamber and spins rapidly, forming a three-dimensional gyroscope. They’re so free of any physical disturbance that they form an almost perfect space-time reference system. The spheres can detect changes in their positioning as small as 0.5 milliarcseconds — roughly the width of a human hair viewed from 20 miles away. Heh.
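Both of those comparisons check out, roughly, with some back-of-the-envelope arithmetic. (The hair width and ping-pong-ball diameter below are my own assumed figures, not numbers from NASA.)

```python
import math

# 1) What angle does a human hair subtend at 20 miles?
hair_width_m = 1e-4           # assume a hair is ~0.1 mm wide
distance_m = 20 * 1609.34     # 20 miles in meters
arcsec_per_rad = (180 / math.pi) * 3600  # about 206,265
angle_mas = (hair_width_m / distance_m) * arcsec_per_rad * 1000
print(f"hair at 20 miles: {angle_mas:.2f} milliarcseconds")  # ~0.64

# 2) If the tallest bump on an Earth-sized version of the sphere is
#    an 8-foot mountain, how big is the bump on the real sphere?
scale = 12_742_000 / 0.040            # Earth diameter / ~40 mm ball
bump_nm = (8 * 0.3048) / scale * 1e9  # 8 feet, shrunk back down, in nm
print(f"actual surface bump: {bump_nm:.0f} nanometers")  # ~8 nm
```

So the hair comparison lands within a hair (sorry) of the quoted 0.5 milliarcseconds, and the spheres’ surfaces are smooth to within single-digit nanometers — a few dozen atoms.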
So here’s how it works: The Probe is aligned to a distant guide star, IM Pegasi. According to Newtonian physics, a gyroscope free of any interference ought to point in the same direction for eternity. So if the spheres inside Gravity Probe B drift away from their orientation to IM Pegasi, they’re being affected by the Earth’s space-time pull — and we’ll be able to measure it and see if it conforms to Einsteinian predictions. Specifically, the two effects we’ll be able to see are “frame dragging” and the “geodetic effect”. As one scientist put it:
“If experimental science is an art, then I would look at GP-B as a Renaissance masterpiece,” says Jeff Kolodziejczak, NASA’s Project Scientist for GP-B at the Marshall Space Flight Center.
The whole reason I’m blogging about the Probe now is that it’s been in orbit for three years, and NASA is finally getting preliminary data out of it. Apparently the gyroscopes indeed appear to be drifting, though it’ll take a while to separate the signal from the noise to fully confirm the frame-dragging effect. In the meantime, if you want to build a papercraft model of the Probe, there are plans here.
I’ve always wanted to write a humungous magazine feature about Gravity Probe B, because it’s such a freaky epitome of the wonderful craziness and bulldog tenacity of scientists. It was originally proposed 47 years ago (!!) — but was delayed for decades waiting for funding, waiting for the shuttles to be built to get it aloft, then discovering that, whoops, the shuttles couldn’t actually handle that sort of payload, then designing a rocket to finally get it aloft. They also had to wait for all manner of engineering breakthroughs to make those spheres. But what a metaphorically lovely finale: The most perfectly round objects ever made by humanity, flying through the void on one of the purest scientific quests ever.
So, I’m coming insanely late to this one — actually, to be precise, I’m coming insanely late to everything now, because I haven’t been blogging for six freaking weeks! (Slammed by work, crushed by deadlines, whine, whine, meow, meow, etc.) Anyway, regular readers of this blog — assuming any actually still, y’know, exist — will recall that back in January, I blogged about a Wired story I was researching on “radical transparency”: Why so many organizations were realizing that it made more sense to talk openly about their internal doings than to be secretive about them. I asked people for their input, and a huge and amazing thread ensued.
Last month Wired published the actual article, and it’s online here! The niftiest part is that in the margins of the print copy of the article, we ran several comments from the blog, with pointers indicating the part of my article they either inspired, commented upon, or disagreed with. It was, as one fact-checker noted to me while working on the piece, “like running letters to the editor before the piece has ever seen print”, heh. I think the comments are awesome — and thanks to everyone who posted originally! It was incredibly valuable being able to hear people’s thinking while I was hacking through this stuff myself.
The cover of the Wired issue took on a life of its own — because the magazine decided to run an acetate overlay of Jenna Fischer from The Office, and when you lift the cover she appears … naked, albeit covered coyly by a sign. When I found out about the concept, I remember thinking, christ, isn’t that precisely like those pens from the 1950s where you turn them upside down and the woman’s bikini vanishes? And as you’d imagine, the blog commentary was acerbic.
At any rate, the piece is here: let me know what you think, and thanks again!
The See-Through CEO
Fire the publicist. Go off message. Let all your employees blab and blog. In the new world of radical transparency, the path to business success is clear.
By Clive Thompson
Pretend for a second that you’re a CEO. Would you reveal your deepest, darkest secrets online? Would you confess that you’re an indecisive weakling, that your colleagues are inept, that you’re not really sure if you can meet payroll? Sounds crazy, right? After all, Coke doesn’t tell Pepsi what’s in the formula. Nobody sane strips down naked in front of their peers. But that’s exactly what Glenn Kelman did. And he thinks it saved his business.
I'm Clive Thompson, a writer on science, technology, and culture. This blog collects bits of offbeat research I'm running into, and musings thereon.
Currently, I'm a contributing writer for the New York Times Magazine and a columnist for Wired magazine. I also write for Fast Company and Wired magazine's web site, among other places. Email or AOL IM me (pomeranian99) to say hi or send in something strange!
May 20, 2011 » 02:28 PM
From Christopher Kennedy’s very droll book “Neitzsche’s Horse”.
July 06, 2010 » 10:05 AM
My Xbox broke, and I was trying to Google some possible technical solutions, when I noticed that Google appears to be encouraging me to make a typo. I suppose it’s possible that Google’s algorithms know that typing “wont” instead of “won’t” would produce better results.
June 29, 2010 » 05:00 PM
On the other hand, when I tried the test for multitasking, I was pretty abysmal. I performed worse than people who identify themselves as heavy multitaskers, and those who identify as low multitaskers.
June 29, 2010 » 04:58 PM
I finally got around to trying out the interactive “test your distractibility and multitasking” page at the New York Times, which they put up alongside their story earlier this month about how computer distractions are eroding our lives.
According to the test, I guess I have good focus — I’m not very distractable!
El Rey Del Art
Frankly, I'd Rather Not
The Shifted Librarian
Howard Sherman's Nuggets
Donut Rock City
The Antic Muse
Techdirt Wireless News
Corante Gaming blog
Corante Social Software blog
Arts and Letters Daily
Alan Reiter's Wireless Data Weblog
Viral Marketing Blog