A while ago, one of the hottest videos on YouTube was of a young guitarist, sitting on his bed, playing a speed-metal version of Johann Pachelbel’s “Canon in D”. It was totally mesmerizing; I watched it about five times in a row, as did several million other fans. (That’s a screenshot above.) Chuck Klosterman, the culture critic for Esquire, also got interested in the clip, and he realized that it presaged an intriguing cultural shift: YouTube is bringing back the lost art of guitar wankery.
Why? Because 1980s-style shredding faces a problematic paradox: “Very often,” as Klosterman pointed out, “profoundly exceptional guitar playing is boring to listen to.” YouTube, however, changes those stakes, because it offers us a new way to see the craft at hand. As Klosterman wrote:
It’s difficult for nonmusicians to appreciate world-class guitar playing through solely sonic means, mostly because a) the difference between great guitar playing and serviceable guitar playing is often subtle, and b) every modern listener assumes production tricks can manufacture greatness. (As a result, radio audiences are automatically skeptical of what they hear.) Guitar brilliance usually comes across as ponderous. But that changes dramatically when one adds the element of video; somehow, watching changes the experience of hearing. There are certain things that sound good only when (and if) you can see them. And YouTube lets you see them.
Whenever you enter the highest, stupidest, Bucketheadiest stratosphere of electrified insanity, one thing becomes clear: Guitar-godding is an athletic pursuit … and athleticism needs to be seen in order to be appreciated … Early in his career, Eddie Van Halen turned his back to the audience whenever he played solos, supposedly because he was afraid rivals would steal his techniques. Had he insisted on doing this forever, very few people would have cared about his music. (We would probably assume “Eruption” was performed on a German synthesizer built from the spare parts off a fire engine.) People needed to see how his fingers worked. Only then could they understand that Eddie Van Halen was doing something they could not understand. His guitar was not a primitive machine that made it easier to meet girls and get free drinks; his guitar was a futuristic machine that was fucking hard to fucking operate. You can fake being cool, but you can’t fake being good. That’s the musical potentiality of YouTube: It allows us to see elements of musicianship that are difficult to hear (even though hearing is supposed to be the whole idea). It could make a handful of people recognize (and care about) virtuosity in a way that hasn’t happened since the fall of King Crimson.
Testify, brother. I think Klosterman’s right, and I for one could not be happier. This is partly because I really enjoy speed metal, and indeed as a teenager attempted to play the stuff myself. (I was fast, but not that fast.) And I confess I’ve been increasingly dissatisfied with the direction of modern pop, which has more and more privileged screechy and/or whiny vocalists who are utterly unable to play any instrument themselves, and thus, usually, unable to actually write music or songs themselves.
It reminds me of that study that I blogged about a while ago, which showed that of all celebrities, musicians were the least narcissistic, because the mere act of acquiring and honing a quantifiable technical skill had the psychological effect of pulling your head out of your ass, at least slightly. I think the same is true of the pop audience: The more they’re schooled to respect actual musicians with actual skills, maybe the less they’ll be impressed by talentless whiny “singers”.
A while ago, New York magazine asked me to write a big feature on the future of New York’s weather. This plunged me hip-deep in all sorts of interesting stuff — from the many naturally-recurring climate cycles that govern our weather, to the predicted effects of global warming. The result was published a few weeks ago, and I’m just now getting around to blogging it!
The story — accompanied by that excellent illustration above, created by James Porto — is free at the magazine’s web site, and a permanent copy is also archived below!
The Five-Year Forecast
Unseasonably warm, with freakish snowfalls and chance of cyclone. This winter will be weird, and the weather will keep getting weirder.
by Clive Thompson
The ash tree vanishes.
The next time you’re in Central Park, go up to Harlem Meer on the north end, then wander westward on the pathway into the heart of the park. After the first sharp turn, look off to the west and you’ll see a thick stand of ash, its rough bark set off by delicate oval leaves. Long before New York existed, the ash thrived in this region, and the city’s settlers used the tree’s dense but springy wood to make everything from church pews to baseball bats. The ash has been here since the beginning.
But its time is about to come to an end. In recent years, foresters have quietly decided not to plant any new ash trees. Why? Because the city is becoming too warm and dry for them, and they’re dying off. Green and white ash, our local varieties, are classified as “hardiness zone three or four,” northern trees that prefer moist, well-drained soil. New York used to be like that, 200 years ago — but the temperature in the past century has risen over two degrees, and it’s getting drier every year. “Last year we had stretches without rain that were practically six weeks long,” says Neil Calvanese, vice-president of operations for the Central Park Conservancy, which maintains the park. And the warmer weather has introduced new wood-eating bugs that afflict the tree. Normally an ash will live 250 years, but this summer Calvanese had to chop down a majestic 130-footer when it stopped thriving. “Ash in the park,” he says, “I really don’t see as having much of a future.”
Here’s a lovely, intriguing — and, in keeping with the holiday spirit, totally depressing — bit of holiday research: It turns out that the more familiar you are with your spouse or partner, the more likely you are to buy gifts they hate!
That’s the result of an intriguing study recently done by Davy LeRouge and Luk Warlop (PDF link), two European professors of marketing. They took a few hundred couples, showed them pictures of furniture sets, and asked them to predict which ones their partners would like. Sometimes the people simply had to guess; in other situations, the experimenters would help them out by telling them which furniture settings their partners themselves had picked. And then the researchers also asked the subjects to predict which furniture settings would please a total stranger.
The result? People were pretty good at predicting the likes and dislikes of total strangers — yet astonishingly crappy at figuring out the preferences of their closest and dearest partners. And when they were given new information about which furniture their partners picked for themselves? It didn’t help. They were still better at picking gifts for total strangers than for their loved ones.
Why? Possibly, the professors theorized, because when we’re very familiar with our spouses it can be hard to separate our own preferences from theirs. We mistake things we’d like for things they’d like. Also, we tend to cherish hidebound ideas about what our partners are like, and we’re unable to step outside those assumptions — even when our partners themselves give us fresh, new information. When we face down strangers, we have none of those biases and thus are able to more clearly see them as they are.
Our findings reveal that being familiar with the person for whom you are predicting the product attitudes is a burden rather than an advantage, mainly because it prevents people from taking full advantage of newly provided information about that person’s product attitudes … Further evidence shows that it is the extensive amount of vivid information that the predictors hold about familiar others that prevents them from selecting more valid prediction cues.
Deck the halls. Mind you, maybe these sorts of errors are propping up the world economy. Think of it this way: What you really want is a new pair of jeans … but instead, your partner thoughtfully buys you a bunch of CDs of bands that he likes and you loathe. So you go out after the holidays and buy the jeans yourself. Essentially, there have been two rounds of gift-buying: Your partner’s addled, narcissistic purchase of crap you don’t want, and your own purchase of things you’d actually like. Double the spending — double the boost to the economy! And double the landfill!
Speaking of totally insane Xmas shopping binges — I’ve been intrigued to watch the evolution of the pricing of the Playstation 3 on eBay. Since Sony produced way fewer PS3s than people wanted, the auction market has been going crazy with the devices changing hands. And this means, of course, that we can accurately measure the outer limit of what people are willing to pay to get their hands on a PS3.
I can’t claim to be tracking this with any scientific rigor, but from occasionally dropping into eBay and checking the prices for PS3s in the weeks directly after the release of the device, this is the trend that I noticed:
- Nov 17, the day the PS3 was released: $3,000
- four days later: $1,000
- one month later: $950
What this means is the truly hard-core early-adopters were willing to pay a premium of $2,000 just so they could get their hands on a PS3 four days earlier than anyone else. Then the market quickly settled on $1,000 as a reasonable price to pay for a PS3, without having to wait in massive lineups at a game store.
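For fun, here’s that back-of-the-envelope math as a quick sketch. The auction figures are my eyeballed ones from above, and the $600 retail price is my rough recollection of the launch price, so treat all the numbers as illustrative:

```python
# Eyeballed eBay prices from the weeks after the PS3 launch (see above).
prices = {
    "launch day": 3000,
    "four days later": 1000,
    "one month later": 950,
}

retail = 600  # rough launch retail price; my recollection, not an official figure

# The premium paid to own the console on day one rather than four days later:
early_adopter_premium = prices["launch day"] - prices["four days later"]
print(early_adopter_premium)  # 2000

# The premium over retail that the market settled on once the frenzy cooled:
settled_premium = prices["one month later"] - retail
print(settled_premium)  # 350
```

So even a month out, skipping the lineup was worth a few hundred dollars to people.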
Only in America can shopping be considered a contact sport. And Christmas is the Super Bowl for competitive consumers. You’ve got a list of must-buy toys for your little toddler, and you’ll be damned if someone else gets those gifts before you! Use whatever means necessary (physical harm?) to snatch this season’s hottest toys before the other greedy shoppers get their hands on them!
Oh — and merry Christmas!!
It’s actually quite fun! I confess that I’ve always enjoyed the spectacle of toy crazes that produce Australian-rules, smashmouth holiday shopping. Back when Tickle Me Elmo first came out and kids went nonlinear over it, there was an incident in Fredericton, New Brunswick, in which a Wal-Mart employee was sent to the hospital after being stampeded by rabid parents. (Up in Canada, newscasters started referring to the doll as “Trample Me Elmo”.) If there actually were a “War on Christmas”, which there isn’t, one could scarcely ask for a better culprit than America’s annual, Caligulan orgy of consumption.
After breaking my blogging-fast last week, I awoke to discover that my commenting system was b0rked yet again. This time, apparently what happened was spambots pounded my comment script so ferociously — 13,000 calls in one day — that my hosting service, Pair.com, shut down all access to the script, so that not even I had the ability to restore the permissions. The long story is that I had to update to the latest version of Movable Type and install a plugin called AutoBan that attempts to help prevent flood attacks.
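I have no idea what AutoBan actually does under the hood, but the basic idea of flood protection is simple enough to sketch: count each client’s hits inside a sliding time window, and ban anyone who exceeds a threshold. (Everything below, names and thresholds included, is my own toy illustration, not Movable Type’s code.)

```python
import time
from collections import defaultdict, deque

class FloodGuard:
    """Toy flood protection: ban any client that hits the script more
    than `limit` times inside a sliding `window` of seconds."""

    def __init__(self, limit=30, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)   # ip -> timestamps of recent hits
        self.banned = set()

    def allow(self, ip, now=None):
        if ip in self.banned:
            return False
        now = time.time() if now is None else now
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.limit:
            self.banned.add(ip)
            return False
        return True

guard = FloodGuard(limit=5, window=10.0)
# A spambot hammering the script once a second gets cut off after five hits:
results = [guard.allow("198.51.100.7", now=float(t)) for t in range(8)]
print(results)  # first five allowed, the rest refused
```

Thirteen thousand calls in a day works out to roughly one every seven seconds, which a scheme like this would shut down almost immediately.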
When I was young, did my mother sit me on her knee and say, “Son, one day you’ll spend two entire workdays revamping your blog’s code so that you can thwart spambot flood attacks”? No, she did not. Yet here we are, sigh.
Anyway, comments are up and working again!
Can an interactive web site produce false memories?
Possibly so, according to a fascinating paper to be published this month in the Journal of Consumer Research by Ann Schlosser, a business professor at the University of Washington. Schlosser performed an intriguing experiment: She took two groups of people and had them check out two different web sites devoted to the same digital camera. One site included static pictures; the other was interactive, allowing users to play around with a virtual version of the product.
Later, she tested them on their ability to recall details about the camera. She intentionally included details that were false, but sufficiently plausible that they might have been true. The result? The people who viewed the interactive demo of the camera were much more likely than the folks who’d only viewed static images to “remember” the false details as being present. Or another way of putting it: The interactive demo was more likely to produce false memories of the product — potential buyers who thought the camera could do things it can’t.
Why? Schlosser theorizes that it’s partly because interactivity encourages more “certainty” in our memories, and thus increases the likelihood that we’ll believe suggestively false details to be true. And, as she concludes:
These findings suggest that marketing managers should test their campaigns for both true and false memories. Although it may seem advantageous for consumers to believe that a product has features that it actually does not have (e.g., by increasing store visits and purchases), it may ultimately lead to customer dissatisfaction. Because false memories reflect source-monitoring errors—or believing that absent attributes were actually presented in the marketing campaign—consumers who discover that the product does not have these attributes will likely feel misled by the company.
One interesting thing Schlosser points out is that market-research folks almost never study the false-memory effects of advertising. Sure, they test to see whether consumers who’ve looked at promotional material can recall true information about a product. But they rarely check to see whether the consumers also remember false information. An interesting — if telling — elision, eh?
This also makes me wonder about whether other virtual-reality environments, such as simulation video games, can create false pools of knowledge. This is potentially a big deal for the “serious games” folks, because many of them create brilliant little simulations as a way of educating people about complex situations. Cool enough! But what if these sims also unintentionally impart bogus knowledge — making the gamers feel so artificially sure of the complex system that they attribute properties to it that don’t exist?
Interesting stuff to think about, either way.
As I’ve written before, video games were the first place we learned how to interact with information on a digital screen. Icons? Controller movement? Screen-scrolling? Navigation of complex menus? All these concepts, now part of computer interface design, were first hacked out in games.
So now Steffen Walz, a PhD student at the Swiss Federal Institute of Technology, has closed this loop — by designing “Playce”, a web site that you navigate by playing it like a retro-80s video game.
Playce is divided into two panels vertically, and you begin by picking one of four different types of games based on your personality. As you play the game, you unlock different parts of Walz’s site (which is mostly a portfolio of his design work). Pick the “achiever” game and you’ll be playing a version of Breakout, where you have to destroy certain bricks to navigate to different site pages. Pick the “killer” game, and you play an old-skool shooter where you blast little tanks, soldiers or planes to go to pages. (That’s a snapshot of one section of the “killer” screen above.)
As Walz notes on his web site:
The art and craft of make-believe place-making challenges architects, urban planners, game and interaction designers, and it likely to (need to) take advantages not only of the game generation’s competencies … but also reflect the expectations of the Homo Ludens Digitalis, who has been trained to win not only in the gamespace, but in the gamespace that is everyday.
As Walz noted in an email to me, the interface isn’t exactly an efficient way to navigate, but it’s pretty thought-provoking. And it makes me wonder: Are there any examples out there of web sites that navigate in gamelike fashions, without directly referencing games? I.e. in a more subconscious way?
What would it be like to view your entire life in a few minutes? Last month, I wrote a Fast Company article that talks about Gordon Bell’s attempt to record everything that happens to him. One of the things he uses is a Microsoft SenseCam — an experimental, wearable camera that automatically snaps pictures of what you’re looking at, all day long. The question is, what do you do with all those zillions of pictures? Is there any way to use them to improve your memory or cognition?
Well, as I noted in the story, a couple of Irish and British scientists tried something interesting: At the end of each day, they’d download the day’s pictures and quickly scroll through them like a flipbook — viewing hundreds of snaps in a minute or so. They discovered that it would help “seal” the day’s events in your real, brain-based memory. (Indeed, it even drastically improved the everyday recall of a woman who suffers from ongoing amnesia.)
William Braine, a friend of mine, read my article and then had his own experience of this effect — inadvertently. As he wrote in an instant message to me:
This weekend I transferred the contents of two older computers to my new iMac. When I imported the 3000-or-so photos from 1998-2006 into the new machine, they flashed by at about a quarter-second each. I got to see shots of our honeymoon, our apartment, a fat me, an ultrasound, a thin me, a newborn, a new house, a baby, another new house — with vacations and friends and family all speeding through … amazing.
Cool, eh? Since so many people now snap tons of pictures of their daily activities, I’d imagine there’s a good market for simple screensaver-like apps that intelligently sort your pictures and then whizz through them in different ways, to produce this sort of cognitive priming. And the most interesting effects aren’t necessarily about remembering things in a utilitarian way; they’re probably more about, as Bill noted above, the emotional aspect — different ways of re-experiencing and assessing your life.
Imagine being 60 years old, and having one psychologically significant picture taken from each month of an entire life’s archive. That’s 720 photos. Scroll them by at the speed that Bill experienced — four per second — and your life would flash by in three minutes. What in god’s name would that feel like? I figure whatever version of Flickr that exists 50 years from now will have this sort of capability, so I guess I’ll eventually find out.
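If you want to check my math, it’s simple enough to sketch:

```python
# Back-of-the-envelope check of the life-flashback math above.
years = 60
photos_per_month = 1                    # one psychologically significant shot per month
photos = years * 12 * photos_per_month
playback_rate = 4                       # photos per second, the speed Bill saw

duration_seconds = photos / playback_rate

print(photos)                 # 720
print(duration_seconds / 60)  # 3.0 -- your whole life in three minutes
```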
People love to joke about how big-city life makes people talk faster, walk faster, and generally stress out. But according to a study in this month’s issue of Current Biology, the urban environment also turbocharges birds. A couple of Dutch scientists recorded the song of the Great Tit — Parus major, pictured above — in both forest settings and city settings, then compared them. The result? As they note in their abstract:
Urban songs were shorter and sung faster than songs in forests, and often concerned atypical song types. Furthermore, we found consistently higher minimum frequencies in ten out of ten city-forest comparisons from London to Prague and from Amsterdam to Paris.
Their paper is behind a paywall, alas, but for free they put online a dozen audio samples of the bird songs — so you can hear the difference for yourself! It’s pretty cool. Go to the bottom of the page and listen in particular to the comparison of the songs recorded in downtown Paris versus the rural area of Fontainebleau; it’s precisely the same call, except pitchshifted higher and faster in Paris.
Evolutionary scientists have theorized for decades that the noisy environment of cities would alter birdcall, but this is some of the most impressive evidence yet assembled. It’s also metaphorically lovely. When I spill my coffee while fumbling to answer a mobile-phone call while racing for the subway next week, I can take comfort: Even the songbirds are racing to keep up.
(Thanks to Eurekalert for this one!)
When is a gore-soaked video game like a Shakespearean sonnet? When it’s Gears of War, my friends. Or so I argue in my latest Wired News column. Check it out online free at the Wired News site, or archived below, and see if you agree!
Why Gears of War rocks
by Clive Thompson
Why is Gears of War so insanely awesome?
Technically, it’s just the same old, same old. You’ve seen stuff like this a bazillion times before. Gears of War is yet another run-and-gun shooter in which you blunder through the post-apocalyptic boneyard of civilization, repetitively slaughtering a bunch of hulking, gibbering aliens. Creepy things lurk in the dark; fresh ammo packs are scattered improbably in open sight; and as the guts paint the hallways red, your teammates curse like a bunch of Tarantino wannabes. Name every single war-weary cliche of the run-and-gun genre, and Gears of War dutifully ticks it off.
This one has already been all over Digg and Boing Boing, but I can’t resist. Behold: An octopus escaping from an underground box by squeezing its entire body through a teensy one-inch hole.
As I’ve been warning for years — as soon as these guys get tired of our crap, we are doomed. Why hasn’t someone made a movie about octopuses rising up from the briny deep, penetrating our trillion-dollar anti-cephalopod national defenses by squeezing under door-cracks, and just totally handing Western civilization its ass? Possibly because it would be too close to the terrifying truth.
Steven Johnson just published The Ghost Map, a superb book about the 1854 outbreak of cholera in London — and how its cause was uncovered by a clergyman and a doctor who used local maps to grok the topology of the outbreak. Cool enough, but Johnson used his thinking about neighborhoods and mapping to create a new website called outside.in.
It’s a pretty simple concept: You type in your zip code or address, and outside.in shows you any relevant info online — ranging from blogger reviews about nearby restaurants to tidbits in local papers. I popped in my zip code — 10011 — and got info about a new nearby Austrian restaurant, city-council tax reform, and nearby artists working on a Darfur project.
The really interesting thing here, though, is Johnson’s philosophy behind the project: The seemingly paradoxical proposition that while Internet technologies were originally touted as “making geography irrelevant”, in actual fact they excel at the opposite — giving you richer info about the stuff that’s going on near you. As Johnson told today’s New York Times Arts section:
“It really shows that the old idea that the Internet was going to make cities obsolete had it exactly wrong,” he said. “In fact the Internet enhances cities in all these different ways. I think it lets people have the kinds of conversations that we sentimentally always imagined that people were having.”
“When you combine that mix of the opportunity for discussion and debate between people who don’t necessarily know each other, when it’s all grounded in an actual physical place and it’s not just about going into a game world and arguing over dragons or something like that,” he continued, “then I think you have something that is a real enhancement of civic conversation and the kind of public space that’s so important in a great city.”
Amen. Much as I thrive in virtual worlds — from World of Warcraft to the blogosphere to ECHO — you can’t deny that meatspace is where it’s at. Mind you, if I didn’t already love the idea of being surrounded by millions of interesting strangers and having their lives collide with mine at unpredictable moments and with a wildly varying quality of results, I wouldn’t live in New York. Heh.
Yesterday, the New York Times Magazine published a piece I wrote about whether Web 2.0 technologies like wikis and blogs could help improve the dot-connecting ability of the US’ intelligence agencies. The story is online at the web site, linked above, but here’s a permanent copy too below!
(That image above, by the way, is from the excellent graphics done for the piece by Lisa Strausfeld and James Nick Sears, illustrating networked connections between various terrorism-related terms.)
Could blogs and wikis help stop the next 9/11?
by Clive Thompson
When Matthew Burton arrived at the Defense Intelligence Agency in January 2003, he was excited about getting to his computer. Burton, who was then 22, had long been interested in international relations: he had studied Russian politics and interned at the U.S. consulate in Ukraine, helping to speed refugee applications of politically persecuted Ukrainians. But he was also a big high-tech geek fluent in Web-page engineering, and he spent hours every day chatting online with friends and updating his own blog. When he was hired by the D.I.A., he told me recently, his mind boggled at the futuristic, secret spy technology he would get to play with: search engines that can read minds, he figured. Desktop video conferencing with colleagues around the world. If the everyday Internet was so awesome, just imagine how much better the spy tools would be.
But when he got to his cubicle, his high-tech dreams collapsed. ”The reality,” he later wrote ruefully, ”was a colossal letdown.”
I fixed the comments. It turns out the permissions on my commenting script got b0rked — not sure why, but they’re better now.
Thanks to Anthony, Ronnie, Adam, Jesse, Jemaleddin, Zac, and others for helping point me to the fix!
Sheesh. I finally start blogging after two months, and for some reason the comments seem to be busted! Several folks have emailed me to say they’ve tried to post and can’t.
Whenever anyone tries to post, they get this error message:
You don’t have permission to access /mt3/mt-comments.cgi on this server.
Anyone know what might have gone wrong here? If you’ve had this error on your install of Movable Type and think you know how I can fix it, please email me for my eternal thanks!
This is pretty excellent: Apparently, one of my magazine articles was part of the inspiration for YouTube!
Recently, Jawed Karim, one of the three cofounders of the site, gave a speech at the ACM conference at the University of Illinois at Urbana-Champaign. He talked about the many trends he was tracking in 2004 that led up to the “aha” moment where he envisioned YouTube. One such moment came when Karim read “The Bittorrent Effect,” a story I wrote for Wired magazine.
In the piece, I described the infamous episode of Crossfire in which Jon Stewart showed up and reamed out the two hosts for “hurting America” with their formulaic, gormless Punch-and-Judy approach to modern political debate. The clip of Stewart’s rant was quickly ripped, posted online, and passed around with such speed that — as best as I could calculate — over 2.5 million people saw it online. Then, as I wrote:
By contrast, CNN’s audience for Crossfire was only 867,000. Three times as many people saw Stewart’s appearance online as on CNN itself.
If enough people start getting their TV online, it will drastically change the nature of the medium. Normally, the buzz for a show builds gradually; it takes a few weeks or even a whole season for a loyal viewership to lock in. But in a BitTorrented broadcast world, things are more volatile. Once a show becomes slightly popular — or once it has a handful of well-connected proselytizers — multiplier effects will take over, and it could become insanely popular overnight. The pass-around effect of blogs, email, and RSS creates a roving, instant audience for a hot show or segment. The whole concept of must-see TV changes from being something you stop and watch every Thursday to something you gotta check out right now, dude. Just click here.
Karim says that when he read this, he immediately realized there was a huge market for a simple tool that unleashed “clip culture” and allowed people to easily post 3-minute video segments online. YouTube was born from his epiphany! If you watch the video of Karim’s speech — posted, naturally enough, on YouTube — you can see his discussion of my article begin at the 26-minute mark. That’s a screenshot of his PowerPoint presentation above.
I am, of course, thrilled to have been responsible in some small way for the extreme goodness that is YouTube. Though I’m probably not as thrilled as I’d be if — as my friends now joke — I’d actually had the idea for YouTube myself, heh. Then again, if I had developed YouTube and sold it to Google for, like, $380 trillion or whatever the heck it sold for, would I be sitting here blogging? Or would my personal army of nuclear-powered robots be sitting here blogging?
A friend of mine just gave me a pair of those way-kewl Nike shoes that include sensors which broadcast your footsteps to your iPod. As Apple and Nike proudly proclaim, their shoes are a revolution in fitness — because they allow an iPod to track precise information about how far you run and how many calories you burn. “Your shoes talk,” as Apple boasts. “Your iPod nano listens.”
And apparently, so does your creepy ex. A group of computer scientists at the University of Washington wondered if they could build a simple device to secretly track somebody by the signal emitted from their shoes. So they set up a laptop, and whaddya know: It turns out that each shoe broadcasts a unique identifier, and it took the scientists only a few hours to write computer code that would sniff it out and track it. They wrote a report summarizing the stalkertastic possibilities raised by the shoes, as their press release reports:
A jealous boyfriend could track a woman’s movements, or compare them with the movements of a suspected rival. And although a receiver only picks up the signal when a person is within range, a stalker could hide receivers near a home, a gym and a restaurant, for example, to closely monitor his or her target’s movements.
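The scheme the researchers describe is almost embarrassingly simple: each sensor broadcasts a fixed unique ID, so a receiver just has to match overheard packets against the ID it’s hunting for. Here’s a toy sketch of the idea, with made-up IDs and locations (nothing here is from the researchers’ actual code):

```python
from datetime import datetime

# The unique identifier a particular shoe sensor emits; entirely made up.
TARGET_ID = "0x4F3A91"

sightings = []

def on_packet(receiver_location, sensor_id, timestamp):
    """Called whenever a hidden receiver overhears a sensor broadcast."""
    if sensor_id == TARGET_ID:
        sightings.append((timestamp, receiver_location))

# Simulated packets overheard by receivers stashed at known spots:
on_packet("gym",        "0x4F3A91", datetime(2006, 12, 1, 7, 30))
on_packet("gym",        "0x77B2C0", datetime(2006, 12, 1, 7, 31))  # someone else's shoe
on_packet("restaurant", "0x4F3A91", datetime(2006, 12, 1, 12, 15))

for when, where in sightings:
    print(when, where)  # a tidy little movement log: gym, then restaurant
```

A few lines of code and a couple of cheap receivers, and somebody has your daily routine.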
Nice! Of course, this is only the tip of the iceberg. As more and more products are shipped with radio-frequency ID labels, it’ll be increasingly easy for people to track where you’re going based on the radio-ID being constantly squirted out by, oh, your cup of coffee.
Speaking of which, you know those new credit cards with “blink” superfast payment technology — where you just wave the card at the cash register and it deducts the money? That’s also RFID technology, and as Boing Boing reported a while ago, those are pretty easy to scan and swipe your information from too. Time to start carrying your credit cards in a radio-proof Altoids tin!
Here’s a truly gorgeous little Flash game: flOw. It’s a simple concept: You control a little amoeba-worm-like creature, and you use your mouse to move it around and eat smaller things floating around in the primordial soup. Each chomp makes your amoeba longer, and more “powerful”. There are also little blue and red thingies; eating a red one dives you down one layer deeper into the soup (if you think of “deeper” as “receding away from you, inwards towards your computer screen”), and blue ones make you rise back upwards. As you go deeper, you begin to face various freaky cephalopodic enemies that try to kill you, but they also drop power-ups that make you bigger and more powerful yet.
The interesting thing about flOw is that the designer, Jenova Chen, designed it based on Mihaly Csikszentmihalyi’s concept of flow. Csikszentmihalyi called “flow” the exhilarating sense of engagement we get when we’re wrapped up in a task that is perfectly matched to our skills. If it’s too easy, we get bored; too hard, we get frustrated. But hitting the precise mid-point puts us in “the zone” of flow.
Most video games, Chen argues, try to adjust their difficulty on the fly so they perfectly match the player’s aptitude. But games are also largely based on the emotional logic of the side-scroller — by which the game slowly ramps up in intensity as you go along, under the theory that it will tease you to slowly improve your skills. Games like this take their metaphoric cues from the relentless march of time; you can’t opt to scroll backwards if you’re getting freaked out. Chen designed flOw in a different way, as a recent piece in the Wall Street Journal notes:
Mr. Chen’s concept hinges on users unknowingly setting their own difficulty level. “Not with an option box that says easy, medium and hard,” he insists. “I want the player to control it subconsciously, based on what they’re doing.” In the face of a frustrating enemy, players are free to avoid the fight and search for more food, evolving into a more potent form. (Mr. Chen says the first squid-like enemy, encountered at level five, was made excessively difficult on purpose to see if players would instinctually flee from an unfair fight.)
On the other hand, if creature-on-creature combat is too easy, players may gravitate toward more fighting and less eating, and that self-imposed diet will make “flOw” tougher. Mr. Chen hopes players over time will self-select the correct difficulty — keeping the game engaging, but not frustrating — without ever really thinking about it.
An interesting idea! Not entirely revolutionary; one of the reasons people like World of Warcraft so much is that you can choose to plunge in and pick the most dangerous, hard-driving fights one after the other, or ease back and simply kill the same enemies over and over again to slowly gain experience.
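In code, Chen’s implicit-difficulty mechanic might look something like the sketch below. This is purely my own illustration, not Chen’s actual code — the class, method names, and numbers are all invented — but it captures the core idea: the player never picks “easy, medium or hard”; difficulty emerges from whether they dive toward enemies or hang back and eat first.

```python
# A toy sketch (my own illustration, not flOw's real code) of implicit
# difficulty: the player tunes the challenge through their own behavior --
# eating food to grow stronger, diving deeper toward tougher enemies, or
# rising back up to flee an unfair fight.

class FlowCreature:
    def __init__(self):
        self.depth = 1        # deeper layers hold tougher enemies
        self.size = 1         # eating food grows the creature

    def eat_food(self):
        """Avoiding a fight to feed makes the creature more potent."""
        self.size += 1

    def dive(self):
        """Eating a red orb drops the player one layer deeper."""
        self.depth += 1

    def rise(self):
        """Eating a blue orb lets a frustrated player retreat upward."""
        self.depth = max(1, self.depth - 1)

    def challenge(self):
        """Effective difficulty: enemy strength at this depth vs. player size."""
        enemy_strength = self.depth * 2
        return enemy_strength - self.size   # > 0 means the fight is unfair


# A cautious player eats before diving, keeping the challenge manageable:
c = FlowCreature()
c.eat_food()
c.eat_food()
c.dive()
print(c.challenge())   # enemy strength 4 vs. size 3 -> a challenge of 1
```

The point of the design is in that last method: no menu ever asks for a difficulty setting, yet the challenge the player faces is a direct function of the choices they’ve made.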
What’s far, far more revolutionary about flOw is its artistic beauty. The graceful, looping movements of the amoeboid life-forms, coupled with the trippy, arrival-of-the-mothership ambient music, give the game a totally meditative feel.
(By the way, Chen has previously created Cloud, an equally zen-like game where you fly through the air collecting clouds and drawing patterns with them. I wrote about it earlier this year for my column in Wired News.)
(Thanks to El Rey for this one!)
Last month, Fast Company published a big piece I wrote about Gordon Bell — the Microsoft Research scientist who’s trying to record every single experience he has, every day: Every phone call, email, conversation, web page, snapshot of everything he sees. It’s a pretty fascinating project, and it led me to speculate a bit about the future of memory. The piece is online here at the Fast Company site, and a copy is archived below!
A Head For Detail
Gordon Bell feeds every piece of his life into a surrogate brain, and soon the rest of us will be able to do the same. But does perfect memory make you smarter, or just drive you nuts?
by Clive Thompson
Gordon Bell will never forget what I look like. He’ll never forget what I sound like, either. Actually, he’ll never forget a single detail about me.
That’s because when I first met the affable 72-year-old computer scientist at the offices of Microsoft Research Labs, in Redmond, Washington, he was carefully recording my every move. He had a tiny bug-eyed camera around his neck, and a small audio recorder at his elbow. As we chatted about various topics — Australian jazz musicians, his futuristic cell phone, the Seattle area’s gorgeous weather — Bell’s gear quietly logged my every gesture and all my blathering small talk, snapping a picture every 60 seconds. Back at his office, his computer had carefully archived every document related to me: all the email I’d sent him, copies of my articles he’d read, pages he’d surfed on my blog.
“Oh, I’ve got everything,” Bell said cheerily. And when I saw him the next day, down in his cramped personal office in San Francisco, he offered to give me a glimpse of the memories he’d collected. He plunked down in front of his computer, pulled up a browser, typed in “Clive Fast Company,” and there they were: Hundreds of pictures of the meeting scrolled by on his screen, and the sound of our day-old conversation filled the room. It was a deeply strange feeling. My random chitchat is being preserved? For all eternity? He nodded, pointing to a mundane Dell computer parked beneath his desk. His massive store of data. His “surrogate brain.”
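What Bell demonstrated — type a few words, get back every picture, recording, and document connected to them — boils down to a timestamped archive with keyword search. The sketch below is my own illustration of that idea, not Microsoft’s MyLifeBits software; every name in it is invented.

```python
# A minimal sketch of a searchable "surrogate brain" (my own illustration,
# not Bell's actual system): each captured item -- a snapshot, an audio clip,
# an email -- is stored with a timestamp and a few descriptive keywords,
# then recalled with a plain keyword query.

from datetime import datetime

class SurrogateBrain:
    def __init__(self):
        self.items = []   # every captured memory, in arrival order

    def capture(self, timestamp, kind, keywords):
        """Log one memory: a photo, an audio clip, an email..."""
        self.items.append({"when": timestamp, "kind": kind,
                           "keywords": set(keywords)})

    def recall(self, query):
        """Return every item whose keywords contain all the query words."""
        wanted = set(query.lower().split())
        return [item for item in self.items
                if wanted <= item["keywords"]]


brain = SurrogateBrain()
brain.capture(datetime(2006, 10, 2, 14, 0), "photo",
              ["clive", "fast", "company"])
brain.capture(datetime(2006, 10, 2, 14, 1), "audio",
              ["clive", "jazz"])
print(len(brain.recall("clive fast company")))   # -> 1
```

The real engineering problem, of course, is that Bell’s archive isn’t hand-tagged: the keywords have to come from the documents, emails, and web pages themselves.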
Does your language affect how you perceive the rhythms in music?
Possibly so, according to some incredibly cool new research. A group of scientists recently got interested in longstanding assumptions about how people sort sounds into groups. Traditionally, researchers would test people by playing them a bunch of musical sounds that alternated in loudness (such as loud-soft-loud-soft-loud-soft-etc.) or duration (such as long-short-long-short-long-short-etc.). Then they’d ask people to group the sounds: Where did a grouping of sounds begin and end?
Historically, people would say that a louder sound marked the beginning of a group, and a lengthened sound indicated the end of a group. This makes sense to me. Think about pop music: A verse or chorus in a pop song will often begin with a loud, accented syllable, and conclude with a drawn-out note. These findings were so regularly confirmed in lab experiments that, eventually, they came to be regarded as “universal” laws of human perception — the acoustic lattices that organize the way we hear language and music.
Except for one thing: The studies were only conducted with Western people speaking Western languages. So this new group of scientists — who work in San Diego and Kyoto, Japan — wondered if the findings would still hold up with Eastern subjects. They decided to remount the experiment, comparing native speakers of Japanese with native speakers of American English.
Sure enough, differences emerged. While the Japanese speakers agreed that loud sounds marked the beginning of groups, they disagreed when it came to sound duration: They felt that a short sound was most likely to mark the end of a group.
This, the scientists theorize, may be because of how language trains our minds to perceive rhythm. In English, “function” words like “the” or “a” tend to come at the beginning of phrases and combine with longer, meaningful words like nouns or verbs. That means that linguistic chunks tend to start short and end long. But Japanese, as the researchers note in this press release, works differently:
Japanese, in contrast, places function words at the ends of phrases. Common function words in Japanese include “case markers,” or short sounds which can indicate whether a noun is a subject, direct object, indirect object, etc. For example, in the sentence “John-san-ga Mari-san-ni hon-wo agemashita,” (“John gave a book to Mari”) the suffixes “ga,” “ni” and “wo” are case markers indicating that John is the subject, Mari is the indirect object and “hon” (book) is the direct object. Placing function words at the ends of phrases creates frequent chunks that start with a long element and end with a short one, which is just the opposite of the rhythm of short phrases in English.
The scientists now think they could analyze the structure of a language and predict how its speakers would perceive rhythms in music.
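That prediction could be sketched in a few lines of code. The following is my own toy illustration of the logic, not the researchers’ actual model: given where a language puts its short function words, predict which duration listeners treat as a group boundary, then carve up an alternating long-short sequence accordingly.

```python
# A toy sketch (my own illustration, not the researchers' model) of the
# prediction: languages with phrase-initial function words (English "the",
# "a") make chunks that run short -> long, so listeners should hear a LONG
# sound as ending a group; languages with phrase-final function words
# (Japanese "ga", "ni", "wo") make long -> short chunks, so a SHORT sound
# should mark the end instead.

def predicted_group_ending(function_word_position):
    """Predict which duration listeners treat as a group's final sound."""
    if function_word_position == "initial":
        return "long"    # English-style chunks: short -> long
    elif function_word_position == "final":
        return "short"   # Japanese-style chunks: long -> short
    raise ValueError("expected 'initial' or 'final'")

def group_sounds(durations, function_word_position):
    """Split an alternating duration sequence into perceived groups."""
    boundary = predicted_group_ending(function_word_position)
    groups, current = [], []
    for d in durations:
        current.append(d)
        if d == boundary:       # this sound closes the current group
            groups.append(current)
            current = []
    if current:                 # any leftover sounds form a final group
        groups.append(current)
    return groups

seq = ["long", "short", "long", "short"]
print(group_sounds(seq, "initial"))  # English-style parse
print(group_sounds(seq, "final"))    # Japanese-style: [['long','short'], ['long','short']]
```

Feed both groups of listeners the same alternating sequence, and the sketch predicts they will carve it up differently — which is essentially what the experiment found.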
Since I haven’t blogged in two months, I have a big pile of stuff I’ve written — at Wired and other places — that I’ll slowly link to over the next few days. One of the things I wrote was this gaming column for Wired News, in which I played the Christian-Rapture game based on the gazillion-copy-selling Left Behind series. The column is online here, and a copy is archived below!
Going Into Godmode in Left Behind
by Clive Thompson
One thing you can’t deny about the Bible: It’s got an awfully thrilling plot. The Book of Revelation — the story of the end days of Earth — is treble-charged with Jerry Bruckheimer-style combat. Armies of darkness trample the earth; the ultimate villain ascends to power; then a final conflict rends the fabric of space and time. You could be forgiven for wondering: Why hasn’t someone made a game out of this?
Next week, your prayers will be answered — with the arrival of Left Behind: Eternal Forces, a game based on the Left Behind books. For those who just teleported in from the moon, the massively popular Left Behind series tells the story of the Rapture, in which millions of the world’s Christians are whisked off to heaven by Jesus. Those left behind form into two armies: The Tribulation Force of the newly repentant born-again, and sepulchral, one-world-government forces led by Nicolae Carpathian, a man who is charismatic, effeminate, European and thus quite obviously Satan.
Regular readers of this blog will have noticed the sepulchral silence around Collision Detection in the last, oh, two months. That’s because I was hit with a tsunami of work that made it impossible to do anything other than eat, sleep, type, and hang out with my infant son.
The storm has passed, and the blogging is starting again. Though I’ll be intrigued to check my log files and see if anyone is actually reading this thing any more. A while back, I was talking with Cory Doctorow about the need for blogs to keep updating — all! the! time! — to preserve their audiences. He agreed, though he also wondered whether the existence of RSS might help maintain a blog’s readership in the event of a hiatus in publishing. So long as your RSS subscribers don’t delete you from their readers, he suspected, they’ll probably return when you start blogging again.
Heh. I guess I’ll find out!
I'm Clive Thompson, a writer on science, technology, and culture. This blog collects bits of offbeat research I'm running into, and musings thereon.
Currently, I'm a contributing writer for the New York Times Magazine and a columnist for Wired magazine. I also write for Fast Company and Wired magazine's web site, among other places. Email or AOL IM me (pomeranian99) to say hi or send in something strange!
May 20, 2011 » 02:28 PM
From Christopher Kennedy’s very droll book “Nietzsche’s Horse”.
July 06, 2010 » 10:05 AM
My Xbox broke, and I was trying to Google some possible technical solutions, when I noticed that Google appears to be encouraging me to make a typo. I suppose it’s possible that Google’s algorithms know that typing “wont” instead of “won’t” would produce better results.
June 29, 2010 » 05:00 PM
On the other hand, when I tried the test for multitasking, I was pretty abysmal. I performed worse than both the people who identify themselves as heavy multitaskers and those who identify as low multitaskers.
June 29, 2010 » 04:58 PM
I finally got around to trying out the interactive “test your distractibility and multitasking” page at the New York Times, which they put up alongside their story earlier this month about how computer distractions are eroding our lives.
According to the test, I guess I have good focus — I’m not very distractible!
El Rey Del Art
Frankly, I'd Rather Not
The Shifted Librarian
Howard Sherman's Nuggets
Donut Rock City
The Antic Muse
Techdirt Wireless News
Corante Gaming blog
Corante Social Software blog
Arts and Letters Daily
Alan Reiter's Wireless Data Weblog
Viral Marketing Blog