Thursday, August 23, 2007

Is Kid Nation as Bad an Idea as It Sounds?


On the face of it, Kid Nation sounds a lot like the last season of Survivor (the one in which participants were separated according to their race): designed to court controversy. It's basically Lord of the Flies as a reality show - kids, left to their own devices, competing, forming alliances, and getting injured. Considering the knee-jerk, "well, I never" reaction that this show is sure to provoke, I thought I'd consider what the likely risks and rewards of such a show really are.

The NYTimes article seems to dwell on the worst-case scenarios outlined in the participant consent contract. So, why would a parent allow their child to risk life and limb for a shot at $25,000 and possible semi-fame? First off, I think that critics of the show's ethics will likely exaggerate the risk to the children involved. Though I haven't seen the show, I'd guess that the actual risk of bodily harm to the participants is very low, but is made to look much higher than it actually is. This is the case with most reality shows. The producers play up the danger element and play down the fact that there are highly trained medics just off camera. It wouldn't be to the producer/network's advantage to have a participant seriously hurt, even if they weren't at financial risk because of the air-tight contract. Really, they're all about trying to make situations seem far riskier than they really are, and they're quite good at that.

But why risk any chance of harm to your children? I think it's less about the monetary reward and more about the changing nature of celebrity and what it means to be on national television. Setting aside the questionable motivations of the stage-parent for a moment, we can safely say that, thanks to YouTube and reality TV, many more Americans can realistically aspire to be recognizable to people they do not know personally (my basic definition of fame/celebrity).

Celebrity before reality TV and YouTube was a rare commodity, synonymous with a dramatic increase in one's monetary and social capital. There was always a downside - your public identity would be predetermined. Before you met new people, they'd already have a fixed (often inaccurate) idea of who you were. Shifting our perceived identities according to context is something we do unconsciously all the time in order to communicate with others. To be deprived of that ability is likely to make a celebrity feel isolated. Of course, this was a small price to pay for all the money and adoration that old-school celebs received.

After almost a decade of popular network reality TV, it seems apparent that the notoriety achieved by the contestants comes with a different set of trade-offs. Advertisers and producers still recognize the value of minor celebrities - familiarity to an audience garners attention (and perhaps affection) for their product - but it's unclear how much that audience familiarity really boosts sales or viewership (I'm guessing that a cameo by Gervase from Survivor doesn't result in the ratings boost that a cameo by Bill Murray would). Also, more and more reality shows are niche marketed, so the loss of a mutable public persona would be limited to only a segment of the public.

If nothing else, we can say that the shrinking duration and extent of fame seems to have shrunk both its upside and its downside, though I'd suspect that people would still recognize you long after your value as a spokesperson or promoter diminished. There are good parts about being on TV and bad parts, and it's difficult for either the critic or the proponent to say which outweighs the other until we come up with some sort of unbiased longitudinal study of celebreality. Until then, you can't fall back on the old "they knew what they were getting into when they signed the contract" defense. If no one really knows the long-term effects of semi-fame on one's mental health, then the "informed" in "informed consent" doesn't really mean anything.

...
10/13/07
Upon further reflection, I've decided that the new fame (reality TV, online fame) is a much worse deal than old-fashioned fame, for this reason: the rewards (the positive reputation and the money that come with it) do not last very long, but the inability to be anything other than what you were at the moment fame struck lasts just as long as before. In other words, the perks are fewer but the downside is just as big.

Monday, August 20, 2007

Art vs. Commerce, Resolved


While wandering around the Art Institute of Chicago, I started thinking more about the difference between "classic" art that lasts hundreds of years and commercial "art" that is replaceable and disposable - at least in terms of the way each is created, received by the general public, and fits into the economy. Some of the art in that museum had been appreciated by billions of people and had generated hundreds of millions of dollars in revenue over the years. What accounted for that? Was it a cadre of critics who deemed the work "classic," or did those critics and curators recognize some intrinsic appeal that is not limited by time or culture?

I'm pretty sure that the factors separating art from commerce are longevity (lasting value) and universality. These can be achieved in two ways: by remaining in sync with the aesthetic and cultural values of society over time and space, OR by using one's station in life to acquire the markers of "classic" status, if not the substance.

The revelation I had while staring at "The Feast of Herod and the Beheading of Saint John the Baptist" was this: certain kinds of art critics are no different from advertisers. Both use their accrued capital (social, in the case of critics; monetary, in the case of ads) to artificially boost the value (the longevity and universality) of a work.

If you'll allow me the opportunity to completely "dork out," I've worked up this chart that better explains what I'm talking about.



I think this might be related to thoughts I've had on the debate over the wisdom of crowds vs. the wisdom of experts. You could replace "experts" with "critics." In order to determine the relative value of crowds vs. critics, we must first determine which critics simply use their station in life to subject the masses to their opinions, and which critics actually predict the popularity of a work across cultures and time. Then, you could compare the judgments of the predictor critics to those of the masses (i.e. crowds, the public, first-weekend box office, democracy). My guess is that the critics would fare pretty well, because as wise as crowds are, they can't see beyond their own micro-culture, and they tend not to base their collective decisions on thorough research of previous successes and failures the way an informed critic could.

Of course, predicting the future success of a work is pretty difficult (though not impossible). As I was driving back from O'Hare today, I heard Marky Mark Wahlberg's "Good Vibrations," which, thanks to Wahlberg's rising status as producer of a successful show and his Oscar-nominated turn in The Departed, probably gets more airplay than Brian Wilson's "Good Vibrations." I guess critics can analyze the intrinsic qualities of a work and say that, "all things being equal," it's likely or unlikely to withstand the test of time. But some of those "things" are the careers of the artists and the subsequent assignment of kitsch value. But that's a subject for another blog.

Wednesday, August 08, 2007

Terrorism and Contagious Media


There was a truly provocative post on the NYTimes' Freakonomics blog by Steven Levitt. The entry solicited ideas on which terrorist attack would wreak the most havoc. Predictably, it prompted scathing rebukes and praise for its openness in roughly equal measure.

It got me thinking about whether the scathing critiques had any merit. As I understand it, the naysayers' worry was that terrorists, or unhinged people looking for ways to lash out at the world, would get ideas from the blog and be more apt to carry them out. They might get a specific idea about how to cause the most fear, or merely talking about terrorism in this manner might put it in the forefront of their minds and prompt them to act out, without necessarily cribbing an actual idea from the website.

This is similar territory to what I covered in my post after the VA Tech shootings. In both cases, we imagine an unhinged individual with nothing to live for who wants some sort of revenge on the rest of the world. He has this nebulous rage built up, but it's unclear how it will be released. Maybe if he is presented with one set of stimuli (say, a lot of ultimate fighting videos and some death metal), he will train to become an ultimate fighter and beat the shit out of similarly frustrated young males. If he is presented with another set of stimuli (say, non-stop coverage of a mass murder, or extensive, detailed speculation about how to carry out a terrorist attack that would cause the most fear), then he might be more inclined to carry out such an act. A third set of stimuli might prompt him to merely kill himself, etc. With Virginia Tech, the worry seemed to be more emotional than logistical. The images of the gunman had a certain visceral power that offended people. In the case of today's NYTimes blog, it's just words.

Many comments on the blog that fall into the pro-openness, pro-Levitt category take a "cat's out of the bag" approach to the potential harmfulness of information. This assumes that all nodes on the information network are equal: if a bit of information is on some obscure message board, it's liable to have the same effect on people's behavior as if it were on a higher-profile webpage. The linked nature of the internet means that if a bit of information is interesting, funny, or dangerous enough to warrant attention, it will get that attention via digg, delicious, or the viral spread of blogs, vlogs, and emails.

Here's my problem with that reasoning as it applies here: what Levitt wrote isn't what might actually cause harm. He sketched out only one scenario. It's the aggregation of reader comments that could contain terrorist scenarios superior to any that have been thought of before.

I've been waiting for Wisdom of Crowds wiki-logic to hit the war on terror. By aggregating these scenarios, we seem to be doing the terrorists' work for them. It takes time, energy, and intellectual ability to think up plausible scenarios for terrorist attacks. One writer (e.g. Tom Clancy) could be pretty good at that, and a group of devoted terrorists could be just as good if not better, but a larger group of well-educated, creative people, working collectively, would certainly be better at it than either Clancy or the terrorists. So I think you'd be mistaken to say that any bright idea we can come up with for causing terror would already have been thought of. Even the most sophisticated think tank is probably no match for the collective wisdom of the NYTimes' readership (as I pat myself on the back).

Then there's this paradox: the people who think the information is harmful and comment accordingly are, in some sense, aiding and abetting the harmful information by making it more visible. In the inexorable logic of online popularity, if a post has many comments, it is more likely to be considered "important," to be forwarded, to be read. The virus spreads.

If indeed this discussion is followed by a large-scale terrorist attack (or a few of them), we shouldn't assume that it caused them, nor should we fall back on the well-worn truth that terrorism is extremely uncommon and therefore nothing to worry about. Personally, I have never been hit by a car even once in my life. Does this mean that I shouldn't look both ways before crossing a street? We have lived in a world where Tom Clancy and other writers have dreamed up scenarios for terrorism, and one where groups of terrorists have spent a lot of time and energy thinking up ways to disrupt societies, but I don't think we've ever had an instance of a large number of creative, intelligent people brainstorming about ways to cause mass fear. In the sense that this is unprecedented, I think it's impossible to say definitively whether this kind of openness is good.

But it could be good. The quicker we can think of potential problems, the quicker we can plan solutions. Dubner and Levitt have always been convinced that people worry about the wrong things (handguns instead of swimming pools, for instance). There are those who believe that the threat of terrorism is way overblown and think that Levitt's exercise proves it by showing the disparity between possible scenarios (lots) and actual events (very, very few). But I would say that by discussing terrorism in this way, we might learn what kinds of attacks are worth prepping for. If we talk about it openly, we might discover that we should spend the money we're spending on airline safety on protecting the food and water supply, or on developing a well-known, well-practiced quarantine protocol. Dubner and Levitt are all about correcting conventional wisdom when it's out of whack, and this would seem to be an instance where they're needed.

You could also argue that by familiarizing us with possible doomsday scenarios, the article and the discussion make eventual hysteria less likely. And really, that's what would cause a society to collapse: not the attack, but the ensuing hysteria. If we can convince potential attackers that we'll bounce right back from an attack (either because of our preparedness, our short attention span, or both), they'll be less likely to attack in the first place.

Hmm. Short attention spans...

So maybe it's good that our attention spans have been whittled away by advertisements. This way, we can't stay scared for very long.