Dig this: Brain scans appear to show that doctors can shut down the parts of their brains that let them empathize with other people's pain.
It makes sense, of course. Because doctors are often required to cause pain in patients — as part of treating them — they probably need to develop the ability to at least partly ignore the pain they're causing, or they'd never be able to deal with the stress. A group of neuroscientists decided to see whether there was anything in doctors' brain activity that actually reflected this ability. So they put a bunch of doctors and non-doctors into an fMRI machine, and had them look at randomly interspersed pictures of people being pricked with acupuncture needles and touched with Q-tips. The results? According to this write-up:
Among the control group, the scan showed that the pain circuit, which comprises the somatosensory cortex, anterior insula, periaqueductal gray and anterior cingulate cortex, was activated when members of that group saw someone touched with a needle but not activated when the person was touched with a Q-tip.
Physicians registered no increase in activity in the portion of the brain related to pain, whether they saw an image of someone stuck with a needle or touched with a Q-tip. However, the physicians, unlike the control group, did register an increase in activity in the frontal areas of the brain—the medial and superior prefrontal cortices and the right temporoparietal junction. That is the neural circuit that is related to emotion regulation and cognitive control.
They also asked the two groups to rate the level of pain they felt people were experiencing while being pricked with needles. The control group rated the pain at about 7 points on a 10-point scale, while the physicians said the pain was probably at 3 points on that scale.
Now that latter paragraph is super interesting. It would appear that some of the doctors' ability to ignore pain isn't wilful at all — it's involuntary. It's not just that they can turn off their empathy; they can't turn it back on even when specifically asked to do so. Thus, they're more likely than you or I to underestimate the amount of pain someone is experiencing.
Psychologists and patients-rights advocates have long argued that doctors don't take pain alleviation seriously enough; Jerome Groopman wrote an article for the New Yorker on this subject a while back. These findings might help illuminate one of the reasons why pain management goes on the back burner in medical practice: Perhaps the doctors simply aren't perceiving it.
“Never give in!” Winston Churchill famously intoned. “Never give in! Never, never, never, never — in nothing great or small, large or petty.”
Good advice for winning a world war, clearly! But apparently it’s unhealthy to behave this way in everyday life. According to a new study out of the University of British Columbia, sometimes it’s psychologically healthier to just give up.
The scientists who did the study ran three experiments where they gathered physiological data about a bunch of teenagers with various attitudes towards achieving hard goals. As a press release notes …
… The psychologists followed teenagers for a full year. Over that time, individuals who did not persist in obtaining hard-to-reach goals had much lower levels of a protein called CRP, an indicator of bodily inflammation. Inflammation has recently been linked to several serious diseases, including diabetes and heart disease, suggesting that healthy but overly tenacious teens may already be on the road toward chronic illness later in life.
Accordingly, Miller and Wrosch suggest it may be more prudent to cut one’s losses in the face of an insurmountable obstacle. “When people are faced with situations in which they cannot realize a key life goal, the most adaptive response for physical and mental health may be to disengage from this goal,” write the authors.
Apparently the healthiest teens of all were the ones who quickly figured out when a goal was going to be too hard to achieve, quit — but then immediately homed in on a new, more achievable goal. (A PDF of the study is here.)
In a sense, this is another case of “science confirms the obvious”, but it’s nice to actually have some hard data in defense of the fine art of throwing in the towel. There are way too many managers in this country who deploy Successories posters unironically, and they desperately need someone to hand them a copy of this research.
Halo 3 Balances Hot New Guns, Old-School Cool
by Clive Thompson
So, here I am again: Standing on a sandy beach, near the edge of a pine-tree forest, hunched behind a stone outcropping — while an army of bellowing aliens singe my armor with 3,000-degree bolts of plasma.
Does this feel familiar? Of course it does. I’m playing Halo 3, the final part of the 15-million-copy-selling trilogy. And the designers at Bungie Studios are trying to satisfy the same sort of paradoxical longing from their audience that pop bands wrestle with: We want them to do exactly the same thing they did on their first album — but, y’know, even better.
So when I got to that sandy-beach level, I had a jolt of deja vu, because it looked eerily similar to the sandy-beach-and-pine-trees level in … the first Halo. Then I realized this was probably intentional: The designers are giving me the architectural equivalent of a wink and a nod.
Do you work from home? When you answer the phone, do you try to pretend you’re at an “office” — or do you let your colleagues know where you are? Marci Alboher wrote an interesting trend story in today’s New York Times claiming that work-from-home entrepreneurs are increasingly candid about the fact that they work from home. If noise leaks in from the dog or kids in the background, it’s no longer a big deal, according to the entrepreneurs Alboher interviews, as well as this pundit:
“It is no longer a faux pas to have a life at the other end of the telephone line,” Ms. Jackson said. “It can make you feel like you’re dealing with a holistic person. And it is just another sign that we are moving away from the industrial age in that we no longer have two totally separate spheres called work and home.”
This trend is certainly true for me. When I first started working from home in 1994, I’d go to considerable pains to disguise the fact that I was working from my crummy bedroom, because a lot of interviewees were uncomfortable with it. It didn’t seem professional to them; they expected reporters to be working in some 1940s-style office tower that looked like The Daily Planet or something. But in the last few years, this bias has dropped entirely. Nobody really cares where I am, so long as I’m on the phone.
The article offers a few reasons explaining this shift, but neglects one that I think is the most powerful: The mobile phone.
In the last five years, mobile phones have transformed the acoustic and geographic environments in which people conduct business. They, almost more than the Internet, decoupled the relationship between “work” and an “office.” These days, my interviewees are half the time talking to me while they walk down the street or hang out in hotel conference hallways or ride in their diamond/titanium UFOs to their secret Arctic lairs — so why would they care, or even notice, that I’m not working from an office too?
(The picture above is from the article — the home-office of some Brooklyn dance company.)
Wired magazine this month published my latest column, and this one probes a troubling question: Why are people avidly willing to donate thousands of dollars to help a random stranger in trouble — but shrug their shoulders when millions of people are suffering from easily-curable diseases?
My attempt to answer this is online here at the Wired web site, and archived permanently below!
Why we should count on geeks to rescue the Earth
by Clive Thompson
Bill Gates is an improbable humanitarian. He built a reputation as a nightmare boss at Microsoft, a totalitarian who screeched at employees he thought were stupid. He bludgeoned competitors with his illegal monopoly. And he’s a nerd’s nerd — someone who seems perennially uncomfortable around people and only at ease dealing with the intricacies of software code.
And that is precisely why he’s now saving the world.
As you probably know, Gates is aggressively tackling third world diseases. He has targeted not only high-profile scourges like AIDS but also maladies like malaria, diarrhea, and parasitic infections. These latter illnesses are the really important ones to attack, because they kill millions a year and are entirely preventable. For decades, they flew under the radar of philanthropists in the West. So why did Gates become the first major humanitarian to take action?
The answer lies in the psychology of numeracy — how we understand numbers.
As most connoisseurs of whale trivia know, all “toothed” whales echolocate — they hunt underwater prey via reflected sound waves. In contrast, all baleen whales — which hunt krill by straining them through sieve-like plates of baleen — do not echolocate. Why the difference?
Marine evolutionary biologists have long assumed it’s because toothed whales dive down deep to hunt prey at night, whereas baleen whales stay near the surface. Since it’s so insanely pitch-black down there, the toothed whales had to evolve echolocation merely to be able to navigate.
For years, this explanation made perfect sense. Still, the biologists were puzzled by one unanswered question: How did the toothed whales know there was food down below?
Because they followed the squid. Many squid rise up to the ocean surface during the night, and plunge down deep during the day. In a recent paper, David Lindberg — the UC Berkeley professor of integrative biology — used fossil records of ancient whales and squid to argue that the toothed whales probably followed the squid down deeper and deeper until they realized there’s a massive buffet of squid kicking around in the ocean’s basement. At which point they were like, damn, we’re gonna need us some echolocation to fully harvest this squid bonanza.
Lindberg and Pyenson propose that whales first found it possible to track these hard-shelled creatures in surface waters at night by bouncing sounds off of them, an advantage over whales that relied only on moonlight or starlight.
This would have enabled whales to follow the cephalopods as they migrated downwards into the darkness during the day. Today, the largest number of squid hang out during the day at about 500 meters below the surface, though some go twice as deep. During the night, however, nearly half the squid are within 150 meters of the surface.
Squid: Noble pistons in the engine of marine evolution.
What’s the best way to memorize material? If you want to remember it for a few days, the best way is “cramming” — studying the material over and over again in one long sustained session. But if you want to recall material for years to come, don’t cram — because according to a new experiment, cramming hurts your long-term memory.
Educators have long known that “overlearning” and “massing” — studying material repeatedly in long, late-night sessions — work pretty well in the short term. Students who cram historically do better on tests than those who don’t. But scientists didn’t know whether overlearning helped you remember things years down the line.
So recently, the psychologists Doug Rohrer and Harold Pashler decided to figure it out. They took two groups of people and had one of them cram for a test, studying material 10 times in a row, while the other group studied the same material only five times in a row. When both groups took tests immediately after studying, sure enough, cramming worked: The “overlearners” averaged three perfect scores apiece, compared to only one for those who’d studied less often. But then Rohrer and Pashler gave the groups the same tests — four weeks later. This time there was no difference in performance. The advantage from cramming had evaporated.
Okay, so let’s say you actually do want to remember things for the long haul. How should you study? Rohrer and Pashler tested that too. In their next experiment, they gave people crazily different study regimens — some of the subjects studied material repeatedly every 5 minutes (a classic “cramming” regimen), while others reviewed material only every month or so. Then they tested everyone six months after their study sessions. Bingo: The ones who did best had studied the material merely once a month.
Assuming their findings — which are published here in a PDF paper — hold water, the implications for education are enormous. That’s because most high-school and college courses are designed to reward cramming. They’re setting up students to forget things. Even textbooks are designed with overlearning in mind, as the authors point out:
This apparent ineffectiveness of overlearning and massing is troubling because these two strategies are fostered by most mathematics textbooks. In these texts, each set of practice problems consists almost entirely of problems relating solely to the immediately preceding material. The concentration of all similar problems into the same practice set constitutes massing, and the sheer number of similar problems within each practice set guarantees overlearning.
So how would you re-engineer textbooks and classes to emphasize long-term recall? You could, they suggest, have teachers intersperse topics throughout the year, instead of forcing kids to focus on them pell-mell for a few days, never to return again. Textbooks could be rewritten so that lessons are also intermingled: “For example, a lesson on parabolas would be followed by a practice set with the usual number of problems, but only a few of these problems would relate to parabolas. Other parabola problems would be distributed throughout the remaining practice sets.”
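The interleaving scheme they propose is easy to mechanize: each lesson's practice set keeps only a few of its own problems, and the rest get scattered across later sets. Here's a minimal sketch of that redistribution — the lesson names and the "keep two" rule are invented for illustration, not taken from the paper:

```python
import random

def interleave(lessons, keep=2, seed=0):
    """Build practice sets in which each set retains only `keep` problems
    from its own lesson; the leftovers are scattered over later sets."""
    rng = random.Random(seed)            # seeded so the scattering is repeatable
    names = list(lessons)
    sets = {name: list(lessons[name][:keep]) for name in names}
    for i, name in enumerate(names):
        leftovers = lessons[name][keep:]
        later = names[i + 1:] or [name]  # the final lesson keeps its own leftovers
        for problem in leftovers:
            sets[rng.choice(later)].append(problem)
    return sets

lessons = {
    "parabolas": ["parab-%d" % k for k in range(6)],
    "circles":   ["circ-%d" % k for k in range(6)],
    "ellipses":  ["ell-%d" % k for k in range(6)],
}
for name, problems in interleave(lessons).items():
    print(name, problems)
```

Run it and the parabola problems turn up sprinkled through the circle and ellipse sets — which is exactly the "distributed throughout the remaining practice sets" idea.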
This is seriously fascinating stuff.
Here’s an interesting development: A couple of University of Cincinnati researchers have developed an artificial-intelligence program that can understand knock-knock jokes.
Knock-knock jokes, of course, frequently rely on a pun, as with: “Knock, Knock. Who is there? Wendy. Wendy who? Wendy last time you took a bath?” Humans intuitively understand when a pun is being made, because we’re able to notice that a crucial word in the punchline is being misused semantically — carrying the wrong meaning — and is riding along purely because it sounds like the correct word. But computers have huge, huge problems with grasping semantics and homonyms.
So to create their knock-knock joke ‘bot, the researchers programmed their AI with a big list of homonyms and their various meanings — including, significantly, a lot of proper names (like “Wendy”) because a lot of knock-knock jokes rely on proper names. Then whenever the AI reads a new knock-knock joke, it identifies the crucial “joke” word, then pings its database of homonyms to see if any of the words’ rival meanings “fits” the joke. If it does, the bot flags the joke as “funny.”
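The write-up doesn't include the bot's actual code, but the loop it describes (find the crucial joke word, look up similar-sounding words, see whether a rival meaning fits) can be sketched in a few lines of Python. The sound-alike table and the "known opener" check here are tiny, hypothetical stand-ins for the researchers' much larger homonym database:

```python
# Toy sketch of the pun-detection loop described above. The real bot
# consults a big database of homonyms and proper names; this
# hypothetical table covers only the "Wendy" joke.
SOUND_ALIKES = {
    "wendy": ["when'd ye"],    # proper name -> similar-sounding phrase
}

# Stand-in for real semantic checking: phrases the system "knows" make sense.
KNOWN_OPENERS = {"when'd ye last time"}

def is_pun(punchline):
    """Flag a knock-knock punchline as a pun if swapping the joke word
    for a sound-alike yields a phrase that fits the sentence."""
    words = punchline.lower().rstrip("?").split()
    joke_word, rest = words[0], words[1:]
    for phrase in SOUND_ALIKES.get(joke_word, []):
        candidate = " ".join([phrase] + rest[:2])
        if candidate in KNOWN_OPENERS:
            return True
    return False

print(is_pun("Wendy last time you took a bath?"))  # prints True
```

The hard part, of course, is everything this sketch waves away: a real system needs phonetic matching and a genuine model of which substitutions are semantically sensible.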
Why bother doing this? Because one of the reasons computer systems make mistakes is that they can’t perceive human intent: They can’t tell when we’re being sardonic, or joking, or wry. As one of the researchers, Larry Mazlack, said in a press release:
“Part of the difficulty lies with the formality that computers and people need to use to interact with each other,” says Mazlack. “A critical aspect in achieving sociable computing is being able to informally communicate in a human language with computers. Computationally handling humor is critical to being able to conduct an informal dialogue with a computer; Julia Taylor is making good progress in advancing knowledge in this area — other people in my lab are working on different aspects of less formal ways of using computers.”
I think their work is pretty interesting, but in a way, it’s also a testament to the towering difficulty of producing truly conversational AI. Back in the 70s, AI researchers figured they’d have this nut cracked in a couple of years … but now we have a much richer sense of how dense and nuanced human language and meaning can be. So now we’re seeing more and more brute-force approaches to encoding knowledge about human meaning. Much like Doug Lenat’s Cyc database, the argument is that the only way to truly teach a robot the nuances of language is to bootstrap them like children: Teach ‘em one thing after another for years on end.
Of course, keep in mind that these poor AI folks at Cincinnati are spending months of work just to get a robot to grasp really bad jokes. Imagine the toil necessary to communicate the meaning of subtle humor! “Even leaden puns,” as Taylor sighs, “are very difficult to understand.” It also reminds me of one of the reasons that knock-knock jokes are so widespread: Their very unsubtlety means they have the largest possible audience — i.e. the largest possible group of people who’ll understand them. The more nuanced and allusive a joke becomes, the fewer the people who’ll grok it.
This is why humor, in a way, is a sort of Turing test for humans. One of the surest ways to figure out that someone comes from a totally different background, culture, generation, whatever, is to make a joke … and then realize they’re staring at you with a completely blank expression. It’s as if you tried telling a joke to one of the Whites from Alpha Centauri, and they are now regarding you with interstellar incomprehension, unable to unpack cultural assumptions you figured were nigh-universal. Your parents find your uberironic, media-saturated in-jokes annoying and meaningless; you find their humor painfully obvious and Catskillian. No wonder we’re trying to develop robots who’ll listen patiently, and then laugh.
(Thanks to the Bioethics Blog for this one!)
Today, Wired News published my latest video-game column — and this one is about what I call “gamer regret”: A nagging sense of hollowness that plagues me every time I think about how many hundreds of hours I’ve spent playing games.
It’s online here at the Wired News site, and a copy is archived permanently below:
Battle With ‘Gamer Regret’ Never Ceases
by Clive Thompson
In retrospect, maybe I shouldn’t have looked.
I was 10 days into playing Dungeon Maker: Hunting Ground — a little RPG I reviewed here last month — and I was poking around the “settings” menu. I noticed that it had a “time played” option, which shows you how long you’ve been toiling away at the game. Curious, I clicked it.
Upon which my heart sank into a fathomless pit. Thirty-six hours? How in god’s name had I managed to spend almost four hours a day inside this game? I should point out that this was not the only game I’d been playing during that time. I’d also been hip-deep in BioShock and Space Giraffe, so I’d been planted like a weed in front of my consoles for hours more.
This is a missing-time experience so vast one would normally require a UFO abduction to achieve it.
Here’s a first in the history of policing: An officer used a Segway to pursue and catch a fleeing, gun-wielding suspect.
Apparently the Chicago police force has 29 Segways, and various officers use them to patrol the downtown core. Officer Thaddeus Martyka was riding one when he heard a gunshot. He saw the perps running away, and gave chase — at the Segway’s blistering top speed of 12.5 miles per hour. Eventually the suspect pooped out, and Martyka dismounted and cuffed him. As the Chicago Sun-Times reported:
“This is the first time I can recall that the officer used it to chase somebody down,” Central District Cmdr. Kevin Ryan said.
Heh. I’ll bet. It’s such a great mental picture: An officer, trundling leisurely down the street at a pace that would not disturb a cupholder full of coffee, just very … slowly … wearing … the suspect … down. No screaming sirens, no people diving out of the way of the pursuing vehicle: Just the gentle whine of an electric motor as implacable justice meanders down the street.
Even better is the fact that the Sun-Times used the full, official name of the device — the Segway Human Transporter. Human Transporter? I wasn’t aware of any Segways designed for other, y’know, species. Is there some sort of Segway for dogs or giraffes or something that I’m not aware of?
(Thanks to Fark for this one!)
I play a lot of “casual” web games, and the majority of them are pretty forgettable — either because they’re badly designed or because they’re unremarkable remakes of pre-existing arcade games that needed no further perfection.
So I was delighted to discover Bloxorz, pictured above. It’s my favorite type of simple game — something with a few simple rules that can be learned in minutes, yet yield endlessly complex and head-scratching puzzles. Basically, you have this block that you move along a grid, by flipping it upright on its tail, down on its side, or rolling it along on its side. Your goal is to make it to the end of each level, and this winds up requiring some crazy spatial reasoning that flummoxed me after the first ten levels or so.
I doubt this play mechanic is entirely new, but it’s certainly new to me! Really addictive.
(Thanks to Rock Paper Shotgun for this one!)
This is beyond awesome: Pablo Gleiser, a physicist, took 12,942 issues of various Marvel comics, traced all the connections between 6,486 different characters — and produced a massive social-network map of the Marvel Universe.
Some of his findings? Superheroes have way more connections than villains — which may help explain why they win so often. “Only heroes team up,” Gleiser notes, “while villains do not.” Superheroes are superconnectors; villains sit on the periphery of the social web. This, he theorizes, is probably due to the built-in rules that comic-book authors must follow. As he writes in his paper, the PDF of which you can download here:
We believe that the origin of this division is due to the fact that, although the Marvel Universe incorporates elements from fantasy and science-fiction the arguments of the stories were restricted by a set of rules established in the Comics Authority Code of the Comics Magazine Association of America. In particular, rule number five in part A of the code for editorial matter states that “Criminals shall not be presented so as to be rendered glamorous or to occupy a position which creates the desire for emulation …”
With great power comes great responsibility. Speaking of which, Gleiser found that the single strongest bond in the network — i.e. the tie most commonly reiterated in Marvel plots — is between Peter Parker and Mary Jane. (“A fact,” he writes, “that shows that although the [Marvel Universe] deals mainly with superheroes and villains the most popular plot is a love story.”) The next most-important superconnectors? The Thing, the Beast, Namor, and the Hulk.
That map above illustrates the sad plight of villains. It shows the strongest 300 links. The black dots are heroes — most of which are nicely and tightly interconnected. The white dots are villains, which tend to be connected only to a black dot — usually their sworn arch-enemy. (The grey dots are “other types of characters, such as people, gods or nodes with no classification.”)
But here’s a thought, fanboys. Network theory also predicts “the strength of weak ties”: I.e. the fact that while superconnectors have lots of strong connections, the most interesting, creative and unpredictable social effects come from weak links — connections between people only slightly joined. Does this hold true for the Marvel Universe? Are the stories in which those weak ties appear more intriguing, wacky, or unexpected than the ones characterized purely by strong ties? I want some grad student to tackle this one and get a PhD for it.
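For the curious, the superconnector ranking Gleiser describes falls out of a simple degree count over the co-appearance graph. A minimal sketch on a tiny made-up network (these edges are illustrative, not Gleiser's actual data):

```python
from collections import defaultdict

# A few illustrative co-appearance ties (not Gleiser's real dataset).
edges = [
    ("Spider-Man", "Mary Jane"), ("Spider-Man", "Human Torch"),
    ("Spider-Man", "Green Goblin"), ("Hulk", "Thing"),
    ("Thing", "Human Torch"), ("Thing", "Beast"),
    ("Hulk", "Abomination"), ("Beast", "Magneto"),
]

# Degree = number of ties each character has in the network.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Heroes rack up ties; each villain here links only to an arch-enemy.
for name, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(name, d)
```

Even on this toy graph, the heroes (Spider-Man, the Thing) come out as the hubs while the villains sit at degree one — the same pattern Gleiser's map shows at full scale.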
(Thanks to the New Scientist for this one!)
By now, you’ve probably heard about Idaho senator Larry Craig, who resigned last week after it was revealed that he’d been arrested for — and pled guilty to — disorderly conduct. And you’ve no doubt also heard about his actions: He was caught engaging “in behavior commonly used to solicit sex, such as tapping his foot in the bathroom stall and touching the arresting officer’s foot with his own.”
But why do the police bust this sort of activity? In part, to prevent guys who aren’t interested in having public-bathroom-stall-sex from being propositioned there. Yet this weekend, Laura MacDonald wrote a fascinating op-ed piece in the New York Times pointing out that the toe-tapping gay-sex codes were designed specifically to be impenetrable to outsiders. Indeed, it’s a decades-old language, and it was first studied in a 1970 paper by Washington University researcher Laud Humphreys, entitled “Tearoom Trade: Impersonal Sex in Public Places.” As MacDonald notes …
In minute, choreographic detail, Mr. Humphreys (who died in 1988) illustrated that various signals — the foot tapping, the hand waving and the body positioning — are all parts of a delicate ritual of call and answer, an elaborate series of codes that require the proper response for the initiator to continue. Put simply, a straight man would be left alone after that first tap or cough or look went unanswered.
Why? The initiator does not want to be beaten up or arrested or chased by teenagers, so he engages in safeguards to ensure that any physical advance will be reciprocated. As Mr. Humphreys put it, “because of cautions built into the strategies of these encounters, no man need fear being molested in such facilities.”
The way MacDonald sees it, Craig is a victim of a communications-theory paradox. He got arrested for being a nuisance to others — yet the whole reason anyone uses the toe-tapping code is precisely to avoid being a nuisance to others.
I have to admit, I’m quite charmed by this defense of Craig. The problem, of course, is that it obscures the obvious fact that while the toe-tapping code is certainly discreet, it’s still illegal to have sex in bathroom stalls in the first place, no matter how discreetly. (It also elides the fact that Craig has spent his entire career inveighing against homosexuality — precisely the sort of political agitation that drives gay men into the closet, into untenable marriages, and thus, eventually, out into public bathrooms in pursuit of sex. You’d imagine the irony of this moral calculus would have occurred to him over the years!)
But back to the science. I was particularly fascinated to learn about Humphreys’ techniques. Since he couldn’t get his subjects’ permission, apparently he “tracked down names and addresses through license plate numbers, [and] interviewed the men in their homes in disguise and under false pretenses”. This is why “‘Tearoom Trade’ is now taught as a primary example of unethical social research”, as MacDonald notes. It also underscores the fact that we owe some of our most interesting social-psychology insights to research that wouldn’t be allowed any more because it’s too creepy — like the Milgram shock experiment. I’m not arguing in favor of unethical research, but it gets ya thinking.
Here’s a cool study: Alberta biologists appear to have demonstrated that spatial memory in cats is cemented not when a cat sees something — but when it navigates around it. Merely perceiving something, for a cat, isn’t enough.
In their experiment — pictured above — they had a cat step halfway over a small barrier, at which point they distracted it with food. They lowered the barrier while it was eating, but when the cat moved on, it still raised its hind legs, believing the barrier to still be there. This is the sort of behavior you’d expect, of course.
But then they repeated the experiment with a twist. This time, the cat was stopped at the point when it had seen the barrier — but before it had stepped its front paws over it. Again, they lowered the barrier while the cat ate the food. But when the cat moved onwards, it didn’t raise its hind legs to clear the now-removed barrier. The cat had seen the barrier, but because it hadn’t been forced to navigate it with its front paws, the barrier hadn’t become part of its spatial memory.
“The movement of the forelegs does something unusual,” Mr. McVea said. “It cements the memory of the obstacle.” They had similar results using a barrier that the cat could feel but not see, demonstrating that visual cues were not necessary to create the memory.
Mr. McVea said most likely the brain’s motor cortex, which sends signals to the muscles to initiate movements, was also sending signals to another part of the brain involved in mapping the environment.
Sylvain Fiset, a professor of psychology at the University of Moncton in New Brunswick, said the work suggested that the cognitive processes of cats were adaptable to circumstances and that they “use diverse neural pathways to remember different events.”
I wonder to what extent our brains work the same way. Some experiments with athletes navigating obstacles, for example, might reveal some neat things about the way we learn sports, gymnastics, and the all-important skill of wandering through your house in the pitch-dark without smashing your shins on furniture.
Dig this: The city council of Haringey in the UK hired a spy plane to fly overhead and identify which households are wasting the most energy — to try and shame them into turning their heat down. As the Times reports:
An aircraft, fitted with a military-style thermal imager, flew over the borough 17 times to take pictures of almost every house in the area.
Footage of heat loss was converted into stills, then laid over a map of the area, before each house was given colour-coded ratings.
Homes that were losing the most heat were represented as bright red on the map. The least wasteful households were shown in deep blue. Shades of paler blues and reds were used to show grades of heat loss.
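That colour-coding is just a banding of each house's measured heat loss. A hypothetical sketch of the mapping (the band thresholds here are invented for illustration — the article doesn't give the real ones):

```python
def heat_colour(loss, min_loss, max_loss):
    """Map a house's heat-loss reading onto the map's colour bands,
    from deep blue (least wasteful) to bright red (most wasteful).
    The 25% band boundaries are invented for illustration."""
    frac = (loss - min_loss) / (max_loss - min_loss)
    if frac < 0.25:
        return "deep blue"
    if frac < 0.5:
        return "pale blue"
    if frac < 0.75:
        return "pale red"
    return "bright red"

print(heat_colour(90, 0, 100))  # prints bright red
```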
Then they put the map online, so that Haringey residents could see whether they lived next to an energy hog. Here’s the web site of the company that makes the heat surveys; check out the survey they did of the British Houses of Parliament, too.
I was fascinated by this because I mentioned this idea, quite by coincidence, at the end of my column for the July issue of Wired. I was writing about how “ambient information” can help us reduce our energy consumption by making visible the patterns of our personal energy usage. At the end of the column, I speculated on a fun idea: What would happen if everyone openly published their personal energy usage on their Facebook page, or in an RSS feed? I argued that what psychologists call the “sentinel effect” would take over — we tend to behave better when our peers are scrutinizing our behavior — and we’d all start conserving even more energy.
Mind you, I think this would only work if it was done voluntarily. I’m not sure people would be too keen about having their houses surveyed by a plane, heh.
(Thanks to Simon Winter for this one!)
I'm Clive Thompson, the author of Smarter Than You Think: How Technology is Changing Our Minds for the Better (Penguin Press). You can order the book now at Amazon, Barnes and Noble, Powells, Indiebound, or through your local bookstore! I'm also a contributing writer for the New York Times Magazine and a columnist for Wired magazine. Email is here or ping me via the antiquated form of AOL IM (pomeranian99).