Sunday, June 10, 2007

The Sopranos Final Moments


It's been said that David Chase didn't want to send a typical mob-story message with the conclusion of The Sopranos. If Tony was triumphant, that would essentially say that crime pays. If Tony died or was thrown in jail, that would say what most Hays Code-era mob stories said: crime doesn't pay. How, then, can you end The Sopranos (or any story, for that matter) without sending a message?

The Sopranos always distanced us from the main characters to some extent. We sympathized with them, but we also analyzed them and laughed at them. It was never about getting sucked into a narrative where you were more emotionally involved than you were intellectually involved. The immersion in the diegetic world you felt was a secondary pleasure. At most, it's a hybrid between family drama and aloof satire. Though it's been said that messages aren't David Chase's style, I'd argue that he still likes to use the show to comment (albeit in subtle and often ambiguous ways) on life, rather than as a way to get viewers emotionally involved in the lives of fictional characters. The show was, of course, marketed and talked about like any other show. We were supposed to care whether or not Tony died, and I suppose a lot of viewers did (the ones who are probably extremely pissed off after the conclusion of the series). But personally, I never got that involved in the story. The show was never really about the war between the two families. There weren't that many cliffhangers. So the ending of the show doesn't really piss me off. I'm still not sure that I like it, though.

At the very least, I admired it because it left me with a unique feeling, which is pretty hard to do after several millennia of storytelling. I imagine that a lot of people would write off the ending as lazy ambiguity, ending a show in medias res because Chase couldn't think of any satisfying way to end it - the ultimate cop out or, worse, a prank. Maybe, but I think that's oversimplifying. In the last scene, there's a very conscious build-up of tension through cross-cutting between Tony & Carmela, Meadow (safely?) outside the restaurant, and the shady dude who walked in with AJ. Clearly, we are meant to think that this shady dude will abruptly spoil the familial idyll at the diner table. The family is finally getting along for once, and now it will be shattered. I knew this trope, and yet my heart was still beating a mile a minute at the end.

There's some dramatic irony at the conclusion: the characters think they're fine, but because of what the narration shows us, and because we're well versed in the tension-building exercises that have littered Hollywood since the days of D.W. Griffith, we're convinced that they're not fine at all. That's the tension at the end: between the lives the characters know and what we know of their world. True ambiguity would've given us and the characters competing cues as to what was about to happen. If it had been a "choose-your-own-adventure" or a "viewer-supplies-the-ending," like many are mistaking it for, it would've presented us with competing cues of at least two possible (and equally likely) outcomes. While one could argue that it was equally possible that what Tony was looking at in the final frame was either his killer or Meadow, it's telling that one of those possibilities is extremely mundane. If you're 50% sure that a person coming through a door, any door, at any time, is coming to kill you, then that's not the same as when a character is in a temporary state of mortal danger (always the case with choose-your-own-adventures) and he/she may or may not make it out alive.

Instead of wondering whether he'll be fine or whether he'll die, we're nearly certain that he's going to die. The final emotion is one of dread and unrealized fear, not curiosity. The final violent act is unseen, and perhaps is all the more powerful because of that. Then again, maybe he's fucking with us and I'm overanalyzing. I've found that most people err on the side of "the writer is fucking with us" simply because they're afraid of being made to look like fools. At the same time, there are those blind acolytes who will find meaning in anything, regardless of whether the author intended it, or whether other people find that meaning. Another possibility that isn't considered by many: maybe he's doing both. Maybe he's telling the closure-junkies raised on serial TV to go fuck themselves, and he's leaving those willing to look deeper with a unique ending that was as unpredictable as the show always was. As with David Lynch films and TV shows, the ambiguity reveals cultural differences: what we find acceptable in our art, and whether we believe that TV is a place for art.

Speaking of Lynch, I kept thinking back to the most provocative conclusion of a story I'd ever seen, one that sparked my interest in the possibilities of TV, film, pop culture, and art: Twin Peaks. It's worth noting that David Chase claims to be a big fan of that series, and that Lynch catches plenty of shit for not concluding his stories in a classical, unambiguous manner. The Twin Peaks ending more obviously evoked feelings of dread, but the more I think about the conclusion of The Sopranos, the more I see parallels between the two endings. They're both deeply unsettling, partly because we're aware of some horrible thing that the characters are not aware of.

I think this will come out on subsequent viewings. There was so much build-up, so much general hype and momentum that it was hard, even for the distanced viewer, not to get caught up in expecting what they were about to see. As time passes, perhaps the feeling of being cheated will, too.

Footnote: I actually misremembered the last shot. I thought I saw Meadow coming through the door, when in fact I only saw Tony's reaction. There is no shot of Meadow entering. Such is the power of great editing. Also, a commenter named Bill on Time magazine's article caught a significant bit of editing I noticed but quickly forgot about: we see a shot of Tony glancing into the diner followed by an impossible POV shot of Tony sitting at a table. A witness to his own death?

Monday, June 04, 2007

Anti-Heroes and the Therapists who Love Them


After reading the analysis of last night's episode of The Sopranos on Slate, I buy what Jeff Goldberg said: that like Dr. Melfi, the viewers tricked themselves into believing that they were doing something good by analyzing this person that they knew was evil. But now, at the end, we finally wake up and realize that we need to stop, that if we're going to be good, reasonable people, we can't enable this person any longer.

There’s also something interesting going on in terms of the ambiguity of viewer identification with characters on The Sopranos. For me, Dr. Melfi has always been more sympathetic, if for no other reason than that she’s from a socio-economic class closer to my own than most of the other characters are. That’s probably true for a lot (but maybe not most) of the audience. We think of ourselves as superior to Tony, who isn’t terribly bright, but we think that he wants to be good. We feel his anger and his sorrow. We’re occasionally empathetic. But we also have known all along that he’s a murderer, that he’s a bad person.

In Scorsese on Scorsese, Marty talks about wanting to portray people normally thought of as bad or evil, particularly Jake LaMotta and Travis Bickle, as something more sympathetic, or perhaps more complex. This wouldn't excuse their actions (or would it?), but would somehow be more realistic, and therefore preferable to the black-and-white unrealistic morality of traditional Hollywood. While describing the sympathy that viewers might feel for the main characters in Goodfellas, Scorsese talks about it in terms of growing up: "It raises a moral question, like a kid getting older and realizing what these people have done, but still having those first feelings for them as people." At the conclusion of most of their narratives, Scorsese's anti-heroes are either praised by a backwards society (Taxi Driver, King of Comedy), thus indicting society, or they're leading undesirable, unglamorous lives (Goodfellas, Raging Bull), which, in a culture where glamour and fame are indistinguishable from respect, can only be considered a fate worse than death. None of the films really lets the immoral protagonist (or the viewer who identifies with him) off the hook, but they don't punish him the way the old Hays Code would've demanded. This moral gray area is defended on the grounds that it is more realistic than Hollywood's depiction of evil.

Dr. Melfi hasn't really played too prominently in the last few seasons of The Sopranos, so this second-to-last episode, in which Melfi recognizes that she's been mollycoddling a murderer, seems like a bit of an unsatisfying cheat, an escape hatch for those of us who always felt uncomfortable with the prospect of a murderous protagonist. I've spent so much more time with Tony that I'm more inclined to identify with him, but when I take a step back, I clearly identify with Melfi more, feeling that, as Scorsese put it, it's time to grow up and judge this person for his actions. What might've been praised by some as the realistic, gray-area depiction of evil is really just an attempt to excuse the voyeuristic pleasure of the therapist/viewer.

Because it has two points of viewer identification, the show can have it both ways. We can wash our hands of Tony and the show, giving up our vicarious addiction to the pleasures of acting selfishly, or we can believe, like Tony believes, that we're the last of a dying breed, and that there's a nobility in that.

Then there's the nascent terrorist AJ, but maybe it'll be better to write about that after the series concludes.

Is The Sopranos a Unified, Coherent Text?


As The Sopranos concludes next Sunday, I keep wondering whether it could be watched (or taught) in the same way that an epic novel could be read. The amount of time one would need to watch it roughly corresponds to the time it would take to read an epic novel - about a semester's worth of watching, at a leisurely pace of 6 hours per week.

The most obvious thing working against any TV show aspiring to be great, lasting art is the fact that it is written as it goes along. Everyone loves to point out that Dickens wrote classics in serial format, and that some great works of literature and film were parts of serials written as they went along, but generally, the most read, most cited, most resonant works tend to be conceived of all at once, with an overarching plot, OR they have such a structure imposed on them afterwards.

Even if David Chase was more secure than most showrunners, virtually guaranteed the right to end the show on his terms, there's still the issue of multiple writers and directors with separate visions. Naturally, these visions must pass muster with the showrunner, but there are still likely to be tangents that don't tie into the whole arc of the story. Of course, these tangents may develop the characters, or be thematically resonant with the rest of the show, but there seems to be the expectation on the part of critics and viewers that each episode and season provide a certain number of plot points related to a story that carries through from the start of the show to the end.

The critical discourse on Slate often describes ways in which themes and actions that occur in the final season tie into themes and actions that happened in earlier seasons. The mere fact that the show achieved some unity despite being on television (which is assumed to work against authors' efforts to create a unified text) is worthy of praise; so much the better if those themes resonate with the culture and all cultures in general.

But is this kind of unity unlike what Terence Winter laments about network TV: "where everything is wrapped up in neat little bows"? On one end of the spectrum, we have "reality," where there are no conclusions until each of our consciousnesses is snuffed out, where there are infinite vantage points instead of the lone POV of the narration. On the other end, we have the half-hour sitcom, where no actions have lasting consequences, and we're offered a single point of view and many "neat and tidy" conclusions. Somewhere in the middle are the great works of narrative art. But I think that their inner unity or cohesion isn't praiseworthy because it's more realistic than sitcoms and such (though that may be true), but more because of a long-standing rule of aesthetic judgment: unity = quality.

The last episode didn't quite cohere with the rest of the show to the degree that other episodes in the final season did. It featured a lot of Tony "checking in" with various characters (e.g. Uncle Junior, Janice, Paulie) who were no longer causally connected with the major plotlines that were still in play - will Tony be indicted or murdered by Phil, etc. AJ's multiple reversals were consistent with his confused adolescent character, but still felt a little haphazard. The show did, however, maintain a thematic unity, commenting on American entertainment (particularly TV), depression, and corruption from the very beginning to the very end (though the observations about AJ's depression seemed to run out of steam in the last episode). As for the last scene, I think that it was more consistent with the rest of the show than many are likely to realize.

Of course, unity isn't everything. There are a million reasons why The Sopranos is a great show, or why any story that isn't particularly well unified can be just as affecting as any other.

Tuesday, May 29, 2007

Thanks a Lot

The first ep of Fox's new reality series On The Lot started off virtually indistinguishable from pilots of other competition reality shows - "confessional" interviews, meltdowns, triumphs, and tragedies. Things got a bit more interesting in the second episode. The format is true to the American/Pop Idol template, but it's clear that there will be more controversial debates than the ones over song choice and pitchiness.

The first controversy: is this guy's film exploiting the mentally challenged? Hopefully the judges' comments (and the obvious discrepancy between their perception and the will of the audience) will ignite the honest conversation we need to have about laughing at freaks (explored in depth in this earlier post). It's too simplistic to say that the Hollywood judges are too politically correct for the masses, but I can't help but wonder what the discrepancy between the judges' and the audience's opinions (not to mention that the judges harp on the fact that there aren't enough female directors in Hollywood, and the voting audience doesn't seem to care) says about who is really offended by anything on television.

Isn't there something vaguely paternalistic about the news media determining what is offensive and what isn't offensive? How many careers would've bit the dust over the past year if it had been put to a vote? The new visibility of the lives of the famous makes every little slip-up a career-ender, but there are two things that might counter that: the ever-shrinking duration of the news cycle, and the democratizing effect on what is deemed offensive. There's an entire generation growing up thinking that they can get away with broadcasting racist rants online. We can keep them off TV, but when you can reach billions of people online, who needs TV? Is it any wonder that one of the most enduring TV comedies of the internet era is one of its most uniformly offensive? To quote Mr. Politically Correct himself (Kramer): "People, they want to watch freaks!"

Between the fact that the key demographic is an audience weaned on YouTube video shorts that tend to be more sensational and offensive than anything on TV, and the fact that a vicious "vote for the worst" campaign could do serious damage in light of the low ratings, we might see films that give new meaning to the term "lowest common denominator." Would Spielberg have to give the Fred Durst of filmmaking a development deal at Dreamworks? Considering the fact that the real Fred Durst already has such a deal, it doesn't seem too far-fetched.

Some things - television and music - seem ideally suited to the pseudo-democratic system that American Idol employs. Other endeavors - fashion and interior design - are not, and so we defer to a panel of experts. Where does film fit into this? On The Lot is taking place at a time when film, once a populist medium, is becoming a more and more elite medium, with the internet taking the bottom rung on the ladder and TV moving up a notch. Filmmakers, like fashion designers, are the tastemakers. It's an expensive, top-down medium.

Speaking of beloved mentally challenged characters, this guy always seemed a little slow to me. And he's a professor!

Thursday, May 24, 2007

Is Lost a Unified, Coherent Text?


As many have observed, there are two levels on which Lost is analyzed - what happens within the story, and how the story is being told. I'll start with the latter.

I just finished re-watching the 3rd season finale, which aired last night. As with The Sopranos, which is wrapping up in 2 weeks, it's exciting to be able to analyze these programs with a bunch of other people online as they unfold. It's significant that we're not analyzing it after it aired (as with a movie), but rather while the story is unfolding. The writers can observe our reactions, our speculations, and learn from those reactions and speculations. This allows for a new kind of storytelling, in which the authors become more and more adept at being able to guide the emotions and speculations of the audience by observing how they react to various twists and turns.

Though it's possible that Lost is written in this manner, it's also possible that the major, underlying story was written some time ago, and we're just being let in on elements of the story, pieces of the puzzle, in a drawn-out, non-linear fashion. To use an aquatic metaphor, the audience observes what appear to be unconnected islands on the surface of the sea only to learn that they are parts of one, pre-established, unified structure that lies underneath the sea. The writers had decided long ago what the ultimate reality of the show was. The only part that they are making up as they go is how they will reveal that reality.

Another possibility is that they are making it up as they go along, but doing it in a surprisingly clever fashion which makes it appear as though they had planned things all along. If you leave a story sufficiently open while writing it, including many mysteries and gaps that can be filled in later (and lord knows, Lost has plenty of those), you allow yourself the leeway to do this. A prototypical example of this method of storytelling is Mulholland Drive, which was made as an open-ended TV pilot in 1999, and then re-made, with some additional footage, into a standalone film in 2001. According to interviews with David Lynch (and by virtue of the fact that the original was intended to be an ongoing serial plot), the decision as to what the ultimate reality of the story was - that it was all the fever-dream of a suicidal wannabe starlet - was not conceived of until after the bulk of the story was written and filmed.

What is miraculous about Mulholland Drive is that it does not appear this way (at least to me; opinions vary, of course). It seems to possess a coherent unity, with foreshadowing and callbacks to elements or themes that pervade the text. This need for unity is most pronounced for mysteries, though unity of themes and intricately woven plots are the marks most commonly associated with quality by all critics of all texts. Even though it is certain that Lynch was "making it up as he went along," the text was open enough (and he was clever and careful enough) to make it appear as though he planned it all along. To me, this kind of retroactive unity is more impressive and pleasurable than the pre-established unity of a written-all-at-once text. It's almost like watching a magic trick.

As for the episode itself, the narration seemed unusually deceptive in its depiction of Jack's future life after he has escaped from the island. There were a few close-ups, and we heard the trademark whooshing sound that had accompanied flashbacks in previous episodes, and yet these were not flashbacks. I think audiences are cool with characters lying to us, as long as they are lying to other characters (though this has its limits, which Lost seems to be pushing), but to have the narration deceive us in this way is the kind of abuse of the internal rules of narration established by the show that drives viewers away.

As far as the content goes, here's what I can piece together (without having consulted other theories online): the apparently indigenous people - the "hostiles" - were being protective of the mysterious spirits (e.g. Jacob, Walt, possibly the big black cloud) that have always been on the island. Thus, they murdered all the members of the Dharma Initiative save for one - Ben - who decided to join up with them. Locke, having felt the healing powers of the island, is sympathetic towards this group, and thus wanted to keep it a secret from the corrupting forces of the outside world, namely whoever is on that boat waiting to rescue them. After Jack and Kate (and perhaps others) are brought back to civilization, they are told (by the military, the govt, the scientists) never to speak of the island to anyone. Perhaps the person in the coffin violated this non-disclosure agreement.

So the basic question of the show might be: if you discover some amazingly powerful force, do you keep it a secret, or do you allow scientists or governments to get a hold of it?

Monday, May 14, 2007

Seeing it Again, for the First Time


I watched Jacques Tati's Playtime for the first time last week. I wasn't as blown away as I had planned on being, given the stellar reviews I'd heard from friends and critics. At the outset, I understood that Playtime wasn't your average movie, that I would have to maintain some sort of critical distance, appreciating framing and visual gags, not worrying about getting immersed in a fictional world. Even though I knew these things, I couldn't help viewing it the way I view every movie – identifying characters, and trying to figure out what's coming next for them. Even if I know that's not how I should be viewing a film, it's extremely hard to discipline myself to not think this way, especially on the first viewing.

So then I listened to a critic’s commentary track on the DVD, and of course, one of the first things he says (directly quoting Jonathan Rosenbaum, I think) is: you have to watch this film multiple times to appreciate it. So there was confirmation that I wasn’t a complete square for not having adored the film right off the bat. At the same time, I somewhat resented having to spend another 2 hours watching the film again to “get” it. And what if I watched it twice and still didn’t like it? Should I keep watching it until I like it, until I “get” it?

What's most intriguing to me about the whole experience is that even though I knew what to look for the first time out, I couldn't help but be distracted by something. Perhaps it was not so much the fate of the characters (as is the case with most narratives) as the suspense over what the author/artist would do next. What will the whole film look like? What is the overall shape of the film? How will it all tie together? All the while, I think I was trying to mentally construct what "type" of art film Playtime is, what sort of game Tati was playing, whether or not the pace of the film would pick up or slow down. And I think it's the inconsistent pacing of a lot of so-called art films that throws me, that keeps me from sitting back and appreciating the film on its own terms during the first viewing. Playtime seemed internally inconsistent, flirting with becoming a traditionally-paced narrative, but never quite making it.

In contrast, Gus Van Sant's trilogy of languidly paced films (Gerry, Elephant, and Last Days) is perfectly internally consistent. Most viewers (myself included) almost certainly knew what they were getting themselves into when they watched the films, either by reading reviews or, in the cases of the latter two, knowing that the director had made extremely slow-paced barely-narratives. With Elephant and Last Days being based on historical events, you know what's going to happen in the end, so there's no surprise there, even on the first viewing. And, more importantly, there are no stylistic surprises during the course of the films. They unfold at a perfectly even pace, not really trying to hook viewers in any more than they are already to the paper-thin plot. I have no problem sitting back and appreciating those films as art films on the first viewing. My mind can settle into one reading strategy. It's as if I'm adopting the "multiple viewing" strategy on the first viewing. I simply couldn't do that with Playtime, or another film I watched last week (for the first time): Julien Donkey-Boy.

Why 1.5-3.5 hours?


On the two DVDs I watched last week – Julien Donkey Boy and Playtime – critics and/or writer/directors defend the work in question by saying that they were part of an effort to push motion picture making forward, to change the language of cinema. I appreciated those sentiments, and yet, I can’t say that I enjoyed either film very much, even though I knew before watching them that they were going to be “different.” Why couldn’t I break out of my standard viewing strategy of guessing what will happen to the characters, or guessing what kind of artistic statement either film would be, what specific developments in cinematic language they would bring about?

Part of it has to do with the fact that a movie unfolds over time. If you want to be non-linear, or non-narrative, fine, but you cannot change the fact that certain parts of the whole will be viewed before the whole can be evaluated. Thus, it's really hard to banish the thoughts of "what will the whole turn out to be" or "how will the parts fit together" from the viewer's mind.

With an abstract painting, you get it all at once, so there is no wondering what the whole will look like. You can try on different interpretive stances while looking at the painting. There is no penalty for adopting a stance which doesn't turn out to be fruitful. You can just start over. However, with film (or any mode of expression that unfolds over time), there is a penalty for adopting an "unproductive" stance towards the material. With each passing moment, if you haven't oriented yourself in a certain way towards the film/video, you'll have to do the work of revising your initial interpretations after the fact, or you'll have to watch it again. And many art films, Julien Donkey-Boy and Inland Empire among them, flirt with traditional narrative structure within scenes and between scenes to such a degree that it's very hard not to keep switching between various stances towards the work, and, in the end, feeling a bit lost.

Another thing that caused me to fall back into that traditional "what happens next" narrative viewing strategy is the fact that these films are almost always between 1.5 and 3.5 hours, roughly the same length as traditional narrative films. Really, the only thing I'm used to watching that is that long is a traditionally structured narrative. If Julien Donkey-Boy had been a 9-hour looped installation at a museum, or a sort of fictitious 9-hour webcam narrative available online, or a 10-minute short, I would've viewed it in a totally different way. Even if I know to try to watch either of these films as an "art film," it's really hard to overcome that habit of viewing 2-hour motion pictures in a certain way. Filmmakers like Harmony Korine and David Lynch have discovered how liberating digital video can be in terms of the footage shot, and yet they're still slaves to the time constraints of cinema, for no good reason that I can discern.

My favorite example of an "art film" that I was absolutely devastated by is Clu Gulager's A Day with the Boys – 10 minutes long. But had this been an hour and a half long, I'm almost sure it would've had very little impact on me. Really short videos get to you before you have time to wonder if you're viewing them "the right way," or if you need to keep track of characters. Really long videos/films (Empire, though I've never experienced it, seems to have this effect) wear you down, until you give up trying to interpret them and just let them happen.

Saturday, April 21, 2007

Levels of online discourse on message boards


After perusing two comments sections that relate to the Virginia Tech massacre - one section under the killer's video on YouTube and the other under a column written by a Dartmouth student on the NYTimes page - I've come to the conclusion that there are many different levels of discourse online. At first, it appears as though there is only one - the shrill, profanity-laden dialog you see on high-profile, heavily trafficked sites like YouTube. There's lots of talk of the disinhibiting effect of anonymity, and how it will result, inevitably, in mean-spirited discourse. But it's likely that the majority of online discourse takes place "below the surface" of this level, involving groups of people with similar values who are less likely to flame one another.

The more "public" the discussion is (that is, the more hits that website gets), the more likely the discussion is to be about pecking order. The more "private" it is, the more the discussion will be about sharing information. To say that online discourse is uniformly public just because it can all be accessed by anyone misses the point of how we actually use these spaces to interact with others. We use them according to our particular tastes and desires, which are largely pre-determined by our real-world circumstances (our upbringing, what neighborhood we're living in, our profession). The way sites are linked together continues this trend of linking like-minded people to one another, resulting in smaller, less angry conversations. YouTube is a place for looking at what everyone is looking at. Blogs (where smaller groups of people congregate with like-minded folks) are the place to discuss them.

Then there's the matter of moderation (the NYTimes blog comments board makes more of a point of letting people know that it's being moderated than the YouTube one does). Presumably, the smaller the site, the less moderation would be needed. But that whole process (what gets cut, why it gets cut, how many comments get cut) is never really clear. Not that we seem to mind.

Wednesday, April 18, 2007

Viral Violence and the End of the Powerful Image

The fear of contagious media violence was put forth most forcefully and articulately in Videodrome. So it's odd that I was set to screen that movie today for my History of Media class just as the whole thing seemed to unfold on the news in front of me.

On Keith Olbermann's MSNBC show, an FBI pundit advises MSNBC not to play the videos that the Virginia Tech killer sent to NBC News. He says that while describing the existence of these videos would inform us without causing us harm, the kind of visceral images depicted in the videos results in, at the very least, copycat attempts, and at the worst, copycat murderers. Brian Williams makes an allusion to the potential "negative social consequences" of showing the videos. But it's too late - all major media networks have played the videos. Let the experiment begin.

MSNBC is desperately showing totally unrelated clips of heroic acts performed by people over the past few years, including a man who selflessly risked his life to save someone from being hit by a subway in New York. Would this "balance things out," making up for the contagious visceral images? They play the videos, but seem to be desperately apologizing for playing the videos at the same time. The whole thing plays out as a self-loathing attempt to keep people informed.

At moments like this, I almost feel pity for TV news, and TV in general. Perhaps the logic behind airing the videos is that they will be on the Internet anyway, and TV news is competing with the Internet, so they need to show the videos. But the videos have a power on TV that they don't have on YouTube or elsewhere on the Internet. Hangings and beheadings have been on the Internet for years now, but there is no palpable fear around those videos. I think this has something to do with the fact that it's easier to resist clicking on a video or going to a website than to turn away from a news broadcast. And with a news broadcast, there is a human face, a trusted anchor, a personality. The Internet is just this anonymous, wide-open repository of human desires, while TV aspires to tell us what's important. But right now, TV is clearly going through an identity crisis.

There's this idea that by disseminating information about the violence, we are causing it to happen again. If we do not air the image, if we elect to show the heroes instead of the villains, then it will be less likely to happen in the future.

But we're back to this issue of looking at freaks (real or fake), of seeing an unpleasant side of humanity that we had been able to ignore. First, we gawk or laugh. Then we feel guilty about giving them our attention. But then we get over it, and we assimilate the unusual behavior. Given that the increased visibility of unusual behavior is only just starting, we're going to have a lot of assimilating to do.

In any case, these videos are likely to change the public understanding of online videos, to bridge the gap between the beheading snuff videos and vlogs. Perhaps a video conversation, with more vlogs than ever, will erupt. We'll normalize these new images by drowning them in our own. He'll be quoted, edited, remixed, parodied, bootlegged, and forgotten. With its unending torrent of vlogs, online video has reduced video from icon to conversation. At the end of last semester, my students and I speculated about the impact of the imminent video of Saddam's hanging. Sure enough, someone was there with a cellphone camera, and several iterations of the video ended up on YouTube. But what impact did it really have? Where is the Saddam hanging now? Buried under a mountain of Colbert Report clips and hockey fights. Out of a need to dis-empower the killer, we will continue this trend. This may be a huge step towards the total disappearance of the power of the video image. Maybe the power of the video image was always more about the exclusive ability to create and disseminate than about its verisimilitude.

I ended up not showing Videodrome in class, instead using the time to go over the history of the contagious media violence theory, as well as an informal discussion of the idea. As bizarrely apt as a screening of Videodrome would've been, I think that the decision not to show it was one of the best I've ever made. I suppose that by not screening it, I bought into its central premise: that videos can be deadly viruses, infecting us with murderous or suicidal inclinations. This time, at least, conversation seemed like the best way to communicate.

Neither Melodrama nor Satire



With whom do we identify when watching a TV show/Movie: the writer or the character?

In examining my own feelings for a fictional narrative, I find that I go back and forth from moment to moment between identifying with an imagined author (or audience) and a character. Maybe the most interesting stories, the stories that last, are the ones in which you seem to inhabit both positions. You feel distanced from the fiction, able to pass judgment on the actions of characters, and yet you're also part of those actions.

I think my experience of The Sopranos bears this out. Most times, I'm laughing at Tony, at his malapropisms, his child-like impatience, his lack of foresight. I feel as though the author is making a general point about society (or a certain type of person) and the audience is understanding that point. The point might be something like: the contemporary American pursuit of professional success and familial stability and/or attempts to live by an old, outdated code in a modern world can often lead to absurd situations, or a feeling of hollowness (my interpretation only).

At times like this, the show plays as satire, at which points I feel as though I'm identifying with the imagined author/audience, agreeing with their critique of society. Other times, I feel immersed in the fiction, just as happy or as upset as the characters are.

The moments from the series that stick with me, that pop into my mind periodically, are just that: moments. Not extended stories, not concepts, not even actions. But moments in which characters seem to reflect on their lives. How odd that this visual medium, which concentrates so much on action and spectacle, only really sticks with me when I'm watching characters who are thinking. And I suppose that they mean more to me because I only have a sense of what those characters are thinking and feeling. It's never spelled out in a voice-over.

Example: Season 6.5, Episode 1: Tony sitting at the edge of the lake after getting the shit kicked out of him by Bobby.

What makes this moment resonate with me? It's everything around the moment, the long history of the character leading up to that scene. I'm only able to feel that sense of weariness because I've seen Tony go through so much, and I know that he doesn't like to lose, but that he has a reflective side and is capable of seeing the hollowness of his pursuit of power.

The scene also sticks with me because of how it's presented. The fact that the sequence features a straight-on close-up of Tony, battered and bruised, and a POV shot of the lake, with no music, makes it mean something different than if it had been a single, slow dolly shot circling a character who expresses his thoughts through dialogue to another character, with poignant strings on the soundtrack. In fact, the scene (if I'm remembering it correctly - maybe it's a later scene) punctures the idea of non-diegetic music guiding our emotions by having the radio that's on in the background switch away from "This Magic Moment" to news coverage of the war in Iraq. This reveals the narration to be less manipulative than I thought it was, and pulls me (with Tony) out of my reverie.

It's all based on whether or not the character is aware of the absurdity of his/her situation to the degree that we, the audience, are. What I love about The Sopranos is that it goes back and forth, giving me a break from identifying with the characters, allowing me to step back and laugh at the entire situation. It doesn't allow me to settle into that glib, above-it-all point of view that most satires prompt, but it doesn't rely on cheap tricks to guide my emotions the way most televised melodrama does.

It's not that I think melodrama or satire is inherently inferior to this hybrid mode of storytelling/identification, but I do believe that the stories that allow us room to vacillate between identification positions - between author and character - are ultimately the stories that we keep coming back to, the ones that withstand the test of time, that become classics. Stories with a fixed audience identification position (melodrama, horror, satire) are, in a sense, disposable. We cycle through them at a faster rate. They're like amusement rides or non-fiction essays. That said, there's plenty of subtle satire within a lot of melodrama (Douglas Sirk comes to mind), and some melodramatic moments in your average satire. But I haven't come across many shows or movies that balance the two in the way that The Sopranos does.

It's likely that its conclusion will be filled with more earnest, reflective moments than distanced, satiric ones. Here's hoping we're allowed to have another laugh or two at Tony's expense. Or at least Little Carmine's.

Friday, April 06, 2007

Online Video Archive - Reconsidered


After seeing a talk by Richard Pedersen of the Arts Institute at Bournemouth, I've decided to reconsider my position on YouTube as a superior media archive. With a lot of Web 2.0 sites (wikipedia and YouTube), whether or not they are "good" comes down to how we use them, which is, in part, contingent on how we refer to them. The fact that Wikipedia is compared to encyclopedias (which happens in part b/c of the "pedia" in its name) is good, b/c it helps people to understand that wikipedia is a starting point for research, just as encyclopedias are. Problems occur when people use wikipedia in lieu of scholarly journals, published books, or more substantial forms of established knowledge (which is what a lot of people are doing, unfortunately).

With YouTube, if it is viewed as a replacement for film and video archives, that's a problem. As Pedersen said in his talk, YouTube isn't backed up anywhere, videos come and go, the quality is awful (though I find it frustrating that those who attack YouTube use the term "poor quality" as if it were an objective assessment), and there are plenty of chopped up, fraudulent versions of things floating around on it. Fine, it's not an archive, and it's completely inadequate for scholarly research. But so what? Does that mean it's not going to be a part of the way the public at large learns more about its mediated past? If we use YouTube (or the pay-per-view archive that may follow) the way we use Wikipedia, as a starting point, we're all going to know exponentially more about media than we do now. I can't help but suspect that part of the resistance towards collective archiving of media online is the same knee-jerk defensiveness that all experts feel towards Web 2.0. Let's define it as a different kind of knowledge, less perfect but more fluid, use it to benefit our cultures, and move on.

So what do we call this? It's not exactly an archive, and it's not exactly a library. Maybe there's never been a name for something like this: an imperfect, expansive, fluctuating catalog of our collective mediated past (or, in the case of Wikipedia, our present knowledge). I suppose we need to stop looking to the past for metaphors, b/c they just get us into trouble. Better to begin studying the ways in which people use the information they get from these sites. The longer they're around, the more opportunity there will be to study this.

Thursday, March 29, 2007

Online TV/Film Archiving - The Celestial Multiplex


This is a response to a blog entry by Kristin Thompson, which was a response to A.O. Scott's article in the NYTimes about the promise of online film distribution and archiving.

The stumbling blocks, for Thompson, seem to be issues of fidelity and/or accuracy. User-generated archives like the vast library of TV shows and ads floating around on YouTube are often (or always, depending on your standards) of low quality, frequently mislabeled, and may be just clips of a longer original or may have a logo imprinted on them (either by the network that broadcast it or by the uploader). If online TV/film archiving works like wikipedia, then you would start off with a partial, mislabeled, low-quality bootleg of a TV show/movie/ad, then it would be corrected or replaced by another person, and then that one would be replaced by an HD version, until you had a copy that would be, in some cases, more true to the original than any well-funded archivist could possibly produce. In the meantime, we, the viewers, would have to make do with inaccurate, partial, low-quality versions of these motion picture texts, but that's better than the alternative: nothing. And it certainly wouldn't be surprising if each text improved at a rate akin to that of wikipedia.

Many experts still have trouble understanding why wikipedia isn't filled with inaccurate information, just as I'm sure many motion picture archivists cannot imagine an open, online archive that won't be filled with incomplete, mediocre copies of films. The debates around the accuracy of wikipedia continue, but I think it's safe to say that wikipedia is better than nothing. That's the thing: wikipedia is "competing" with existing encyclopedias. What would the celestial multiplex be competing with? Netflix? Your local library? I'm not saying the online archive would be perfect, but it's not hard to imagine it being far more comprehensive than any motion picture library the average citizen has access to. Is being a purist about obscure, out-of-date cinema really worth depriving most people of access to millions of films?

Even the Google Books archiving project seems to miss the point of Web 2.0 (or 3.0, or whatever people are calling it). Experts have to let go of the idea of one person or a group of people being the arbiter(s) of the "truth" of an idea or, by extension, a book or a film. If there were a site like wikipedia for motion pictures, the experts would be free (and would perhaps have a duty) to upload their own pristine copies of films, and to correct any misinformation that people have provided along with them.

But how would this work with copyright? It's not unthinkable that once videos are uploaded by users to the celestial multiplex, they could be claimed by their original copyright owners, who, instead of taking them down, would charge money to let users view them. I think Google Video has set up some sort of pay-per-view archiving of TV shows along these lines. If studios/copyright holders refuse to go along with this centralized, monetized system, one will evolve anyway (see: Napster, Gnutella, BitTorrent, YouTube, etc). Music labels smartened up by working out deals with iTunes. If motion picture copyright holders won't, then BitTorrent and YouTube (and whatever's next) are likely to pick up the slack. Again, people imagine that we'll either be consuming media in the traditional way, offline, or we'll get it free, illegally, online. Compromise becomes inevitable. iTunes and the music labels have made it work. Why wouldn't this work for motion pictures?

These things are hard to predict, of course. I'm only saying that the celestial multiplex isn't as miraculous (nor as inevitable) as Thompson or Scott seems to think. Really, the two authors are writing at cross-purposes. Scott, like most of us, just wants to watch films to experience pleasurable emotions and learn more about life in general. Thompson is more concerned with cinematic artifacts. For scholars, sites like wikipedia or YouTube are insufficient. But that certainly doesn't mean that they aren't of some use to some people, and it doesn't mean (as Thompson suggests) that these sites won't continue to pop up, incorporating different types of information - words, music, films - in the future.

Tuesday, March 13, 2007

Freakish Behavior


This is an expansion of some musings from an earlier post on freaks and YouTube. I stumbled across the videos of DaxFlame. Like many other video bloggers, he exhibits what could be called unusual behavior in his videos. This unusual behavior is humorous to many and, at the very least, intriguing to the rest of us. His unusual behavior seems reminiscent of a mental/emotional disturbance, perhaps Tourette's or Asperger's syndrome. So, is he faking it, completely or partially?

If he is bluffing, then we shouldn't even watch his videos, because that will just make him more visible. He could potentially profit by acting as though he has a mental problem when he, in fact, does not, and by being so popular he tacitly encourages others to do the same. If he is not acting, then many would say that disturbed individuals shouldn't be hidden from public view, but should be out there in the public sphere, making videos, chatting with others - so more power to him.

There are certain cues or qualities we can look for in order to determine whether or not (or to what degree) his performance is a put-on. His apparent age would make us believe that it is less likely that he's acting (unless we have another Andy Milonakis on our hands). He's got about 44 videos, so that makes him less believable than someone like Beebee, who has hundreds. Interestingly, interaction with other people seems to reveal bad acting to a far greater degree than the direct mode of address that most vlogs employ.

It becomes clear that we may never know whether or not many of these video bloggers are imitating freakish behavior in order to mock it and draw attention to themselves, or whether they are just being themselves. The freakish behavior becomes unmoored from the individual. At that point, we have to judge ourselves, the audience, and not the person who created the video. Do we find any of this behavior to be amusing? Do we find it intriguing enough to watch? I suspect that many people who want to watch freaks (and the people who exploit this desire by imitating them in order to get attention and the $ that will come from getting attention once vlogging becomes commodified) don't feel any genuine contempt for them, but just find it worthy of their attention, like anything that is unusual in the world.

Ultimately, isn't it better to acknowledge the fact that we find unusual behavior and appearances (whether a person is responsible for them or not) to be worthy of our attention instead of pretending that we don't feel that way? Many of these videos seem to evoke sympathy, derision, and laughter. This combination of reactions, and the visibility of those reactions, seems like a step in the right direction, away from the total invisibility of freakish behavior and appearances and/or the politically correct "noble freak" images pushed by traditional, paternalistic media. Once we get past our initial shock and interest, maybe we'll begin to think about how people become freaks, why it's important to maintain behavioral diversity, and the relationship between neural health and "normal" behavior.

One also wonders whether people who guest star in video blogs like this one know whether or not the star of the show is affecting a personality. The more I think about it, Borat may be the most resonant film in this age of video blogging.

Friday, February 23, 2007

Lifelogging: How We Forget



Just read this fascinating article in the Chronicle of Higher Ed about life-logging: recording audio (and eventually, video) of every moment of your waking life. You can then search through that data the way you would search through your memory, only it would not decay the way your memory does. Would this be a good thing?

I have kept a personal journal for 11 years now. I've made an effort to record my thoughts and many events from my life, no matter how embarrassing or mundane. Now that I've got 2000 pages worth of data, I can search for a person's name or an emotion (e.g. hate, love, crush) and analyze my life, my behavior, and my consciousness in ways that my decaying memory doesn't allow me to do. So, how is this any different than lifelogging? Isn't it just a matter of degree?
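A minimal sketch of what that kind of search could look like, assuming the journal lives in plain text (the entries, years, and search terms below are invented stand-ins, not my actual journal):

```python
import re
from collections import Counter

# Hypothetical journal entries keyed by year -- invented stand-ins.
journal = {
    1996: "Met Anna today. I love the new apartment, though I hate the commute.",
    2001: "Anna moved away. Strange how little I remember of that old crush.",
    2007: "Reread the 1996 entries. I'd forgotten how much I hated that commute.",
}

def occurrences(term):
    """Count case-insensitive matches of `term` (plus suffixes) per year."""
    pattern = re.compile(rf"\b{re.escape(term)}\w*", re.IGNORECASE)
    return Counter({year: len(pattern.findall(text))
                    for year, text in journal.items()})
```

Searching for "hate" also picks up "hated" - the crude kind of stemming a plain-text journal allows. A search that worked more like memory would need something closer to the meta-data idea: a machine-readable sense of which entries matter.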

When I have a conversation with someone, even an especially private one, my memory is recording their every word. I then have the option of going back to my computer and recording those words. Those words can then end up on the Internet or who knows where (though I'm obviously very careful to guard them and not put them on the net). But we've had this option of recording private events...forever, right?

So then it is a matter of degree. But that doesn't make it any less significant in the ways in which it could potentially disrupt social life. In fact, this article finally convinced me of the worth of privacy in this age of surveillance. I was a longtime holdout only b/c the crux of most privacy advocates' arguments seemed to be invoking Orwell and leaving it at that. Indeed, it's an extremely hard argument to make b/c 1) the march of surveillance technology feels inexorable and 2) it's hard to point to many widespread instances of abuse or to make the chilling effect on behavior visible.

But that's why this lifelogging experiment that the people in the article engaged in was worth doing. By pushing it to an extreme, by making it personal rather than political, I could finally see the ways in which it would radically alter social behavior. We totally underestimate the role of forgetting and deception in our self-images and the images of others. We are designed to underestimate these things. Perhaps we each need to record our lives (or read about someone else who has done this, in my case) to understand how much we forget and how much we distort our memories.

This brought me back to a thought I had after my hard drive crashed a few weeks ago. I was watching 2001 on TCM, and considering the words of HAL, thinking about whether or not the fear of a sentient machine was still a fear of ours, 40 years after the film was made. The big mistake HAL's programmers made (and indeed a fault of most programmers) was to think that they could design an infallible computer. No matter what, a computer, like a human, can screw up. The reason why computers are inhuman (and perhaps why they strike fear in our hearts) is that they fail in different ways than we do. But if we recognized this and tried to design them to fail in ways more like the ways in which we fail (to design them to "forget" things gradually, to act erratically in certain situations, instead of aspiring to perfection), then computers and robots wouldn't be anything worth fearing. We should get to know the design of our minds, and then design computers in a similar but slightly less flawed fashion.

Really, what makes computers unlike humans is the way in which they decay. In one of my classes, I'd claimed that digital technology did not decay: it either worked or it did not. I was proven wrong the next week when we brought in several gaming consoles, including my old (roughly 18-year-old) Nintendo Entertainment System. One of my students played Mega Man 3 (I think it was 3, but I may be misremembering) and the game gradually became more "buggy," the screen increasingly clogged with glitchy graphics until finally, inevitably, it froze. So I realized that digital technology does gradually decay, but it decays in a different way than our minds do. And it is vulnerable in ways that we are not.

That is what defines us, or at least sets us apart from computers: the ways in which our minds and memories decay or become damaged. The ways we forget. Perhaps we shouldn't be working on computers that recall everything, but on computers that "forget" data in the same ways we do. I suppose that's the promise of meta-data: to get computers to recognize importance and meaning in the ways that our minds do.

As applied to my personal journal, I'd need some way to teach the computer that some information (my happiest memories) is more important than other information (whether it was Mega Man 3 or 4 that student was playing).
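To make that concrete, here's a toy model of a computer that "forgets" the way I'm imagining: an invented forgetting curve where importance slows decay. The constants, threshold, and sample memories are all made up for illustration, not any real system:

```python
import math

def retention(age_years, importance):
    """Toy forgetting curve: memories decay exponentially with age, but
    higher importance (0..1) means a longer half-life. Constants are arbitrary."""
    half_life = 1.0 + 9.0 * importance  # ~1 year for trivia, ~10 for what matters
    return math.exp(-age_years * math.log(2) / half_life)

def prune(memories, threshold=0.4):
    """A computer that 'forgets': drop memories whose retention fell too low."""
    return [m for m in memories
            if retention(m["age"], m["importance"]) >= threshold]

memories = [
    {"text": "my happiest memory", "age": 10, "importance": 0.9},
    {"text": "whether it was Mega Man 3 or 4", "age": 10, "importance": 0.1},
]
```

After ten years, the trivial detail decays past the threshold while the important memory survives - which is roughly the job meta-data would have to do: encode importance so the machine can decay gracefully instead of recalling everything.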


http://chronicle.com/free/v53/i23/23a03001.htm

Friday, February 16, 2007

Marketing and Terrorism, part 2: NIN's Year Zero ARG


I'm especially fond of stuff I find on the internet in indirect, multiple ways. First, I saw something on digg.com about the marketing of a new NIN album via strange websites. Then, a student of mine emailed me a story from mtv.com about the websites. At first glance, it appears to be an ARG (alternate reality game) dedicated to marketing the yet-to-be-released Nine Inch Nails album Year Zero. Nothing particularly remarkable there. But there are numerous references to bioterrorism. Will this get NIN and their marketing firm into hot water a la ATHF?

What's interesting is that once you get out on the web, authorship becomes so murky that it might be impossible to hold anyone responsible for possible confusion between marketing and terrorism. Who's to say which sites are officially sanctioned and which are not? I always thought of viral marketing as an insidious way of turning fans into unwitting marketers, which may seem harmless now, but in the long run may reduce us to a state of paranoia where we spend untold hours and cognitive energy trying to figure out what is reliable information about our environment and what is merely an attempt to sell us something. But this idea of including a controversial topic like terrorism (is it overhyped? is it underhyped?) seems like a new, clever twist. Both NIN and ATHF (purposely in the case of NIN, accidentally in the case of ATHF, I think) are allying their product with one side of this controversy - the leftist position that terrorism is overhyped. If you wear a t-shirt with Err giving the finger, or design your own Year Zero bioterrorism website, you're simultaneously saying that you think terrorism has been overhyped and saying that you like NIN and ATHF.

It's tough for me to be objective about this, since I'm a big fan of both ATHF and NIN, and I think that the current administration doesn't understand terrorism as well as it could. But if I take a step back, I realize that ARGs are becoming political, and are perhaps involving the authorities in a kind of theater that they're not even aware they're a part of (a la the "characters" in Borat). I wouldn't say that the motives of the company or the artist are either profit or politics. They're both. It's designed to get you to buy the album, but it's also designed to get young people to recognize the manipulative nature of military recruitment.

Since this is open to fan participation, if the fans don't like the music or the direction the story takes, perhaps they can take those in a different direction, though I'm simultaneously skeptical that either a mob-rule story or the stridently political story that Year Zero is shaping up to be (yes, yes, we get it. Bush sucks) will be anything revolutionary.

Again, the problem that this creates, the problem that all contemporary viral or stealth marketing adds to, is an erosion of our ability to communicate with one another about our environment effectively. ARGs are post-modern media's logical conclusion in the era of convergence, and they raise the same question: are they good, b/c they get people to see how most of our "reality" is a social construction primarily made up of media texts, or are they bad, b/c they encourage people to doubt everything, to not take anything seriously? While I mull this one over, it'll be fun to see what happens with Year Zero.

Oh, and BTW, doesn't wikipedia kinda spoil the surprise of ARGs? People just aggregate data on that site, so all you have to do is check up on it every so often. It undergoes a rigorous vetting process, much more rigorous than you and your friends, digg, cable news channels, or the New York Times could ever hope for. And it does it fast.

Sunday, February 11, 2007

Re-editing Foreign CG Animated Films for the Worse


I just casually stumbled upon a weird phenomenon I hadn't been aware of: CG animated films made overseas that get re-edited for American family audiences (either for comprehensibility or adult themes, such as the vaguest hint of sex). Both Doogal and Arthur & The Invisibles tanked in the US, but did OK overseas (something like 70/30 or 80/20 splits) and both were released by the Weinsteins. So, did they tank b/c they were badly re-edited and badly marketed, or b/c they just weren't American enough?

What's odd is that I hadn't known that these were European products. I guess I had assumed that CG animation was strictly the domain of Pixar, Dreamworks, and a few shitty knock-off production houses. Perhaps many who went (or didn't go) to see these films in the States assumed that they were from one of those budget mini-studios. Perhaps, since these films appeal primarily to kids, who care more about being able to talk about movies w/ their peers than older folks do (playground fodder as opposed to water-cooler talk), once these films were seen as "uncool," it was over for them.

If they present an alternative to the suffocating same-ness of American CG animated fare, I'm all for them. Even if they suck.

Saturday, February 10, 2007

Jimmy Smash, American Idol: Self Image, Vlogging, and the YouTube Freak Show


First off, there’s an archiving issue here. YouTube might go down someday, and researchers really need to be archiving a sample (not all, as that would be impossible) of the videos on YouTube, including some of the unpopular videos.

And that's why I find Beebee's videos interesting - b/c they're unpopular. I think what's revolutionary about YouTube is what's going on under the surface. It's also an illustration of how online motion pictures cannot be thought of as mere extensions of the entertainment motion picture industry as it stands. People are using motion pictures for other things, namely to connect with other people, to socialize, to work out issues of self-image, to make friends, to understand the culture or sub-culture that they are a part of. Should we be regulating, censoring, and advertising in the middle of people's social interactions?

For the sake of argument, I'm going to refer to beebee as a "freak." He is an unusual fellow. I mean no offense by this. I'm only pointing out that his behavior and appearance are unlike those of the majority of others many of us encounter in our lives and through the mainstream media. People are not inherently freakish, but are considered freaks b/c they happen to be different from the group of people they are surrounded by - which radically changes the minute you upload a video of yourself. Even though you may only want to be part of a community of a select, sympathetic few, the nature of the Internet (blogs, ebaum) dictates that your video/blog/whatever will be linked to by a group of people who are quite different from you, so different that they find you funny and worthy of derision. For the age group that is especially concerned with forming a self-image - early teens (coincidentally, this seems to be the group that uses vlogs and blogs the most) - the jump from the playground to the vlogosphere is significant.

Is it exploitative to watch or link to beebee's videos? Is it unethical to comment on the videos, to tell him to stop posting videos, to make fun of him, or just to laugh and forward them to friends? This dilemma of freak appeal is nothing new (see Howard Stern's Wack Pack and/or your average elementary school playground). I recently saw the episode of The Sopranos in which Tony is trying to join his neighbor's exclusive country club, but feels as though he's just being kept around for the WASPs' amusement. Tony's story about Jimmy Smash - the boy with the cleft palate they all used to make fun of in school, who would sing for the amusement of others but go home and cry himself to sleep - was a perfect articulation of the ambivalence many people feel towards freaks. They can't help but laugh at them, yet they feel guilty for doing so. It's not a question of whether or not you sympathize with them. Many people sympathize with them AND find them hilarious. Freak appeal drives the ratings of the most popular show in the US, American Idol.

The standard answer to these questions is: yes, it is exploitative to look at these videos, to talk about these videos, and certainly to laugh at them. Any other viewpoint is a justification for our sick, unethical desire to laugh at freaks. No matter which side of the argument you're on, it comes down to a question of normalization through censorship. Either you normalize the freaks by laughing at their behavior/appearance and by forwarding their videos on to others and encouraging them to do the same, OR you normalize the people who are laughing at a certain mode of behavior or appearance by discouraging them from doing so on ethical grounds.

But the comments on beebee's videos and many others suggest that the audience is split in two - the people who laugh at him, and the people who want to encourage him to be more visible, to ignore the haters and come out of his shell. The Internet has been a way for freaks to shed their abnormal physical characteristics and their social hang-ups and make friends in a new way. Vlogging seems to be the next step in this process - people are returning to their skins, accepting the fact that they stutter, that they look unattractive, that they have boring things to say, and they are willing to be judged by these characteristics. But instead of rejecting them outright (as would happen in RL), the online community is embracing them to some degree. Is this genuine? Is it an attempt to compensate for the meanness of others that may or may not work out? Not sure, but we'll find out.

There’s also the issue of whether or not Beebee890 is acting, which cannot help but be an issue post-lonelygirl. Now, we look for physical characteristics – the moles on his face – as some sort of marker of authenticity. He could be affecting the voice and the character of beebee. Again, this raises ethical issues. Is it immoral to even suggest that a disabled vlogger is “faking it?” Is there any way a person could fake such a thing and parlay it into ill-gotten financial or social capital? Could you build up a fanbase under these false pretenses, sell ad time, insert a product in the video, and make a profit? Granted, this is lower than low behavior, but it’s worth considering the possibility. In the end, the lesson of lonelygirl was to be critical of all video (in particular vloggers) purporting to be “real.” It is easier to be a fraud online than in person. That’s not a reason to doubt everything (which plenty of people do on YouTube, insisting that all videos are faked), but merely to keep the possibility in mind.

Monday, February 05, 2007

Aqua Teen Bomb Scare: The Other Story

There are really (at least) two stories to come out of last week's Aqua Teen Hunger Force marketing bomb scare: one has to do with the over-reaction of the police and the media, and with artists playing the mainstream press for the bunch of chumps that they are. The other is the conflict between the profit motive and the security motive. This second story may turn out to be the more lasting and significant one.
I find that story more compelling because it pits one side of the ideological split in the US against itself. Granted, I'm writing and reasoning broadly here, but it's not too much of a stretch to say that conservative America is for unrestrained capitalism as well as strong homeland security. This is a more compelling conflict than the ideological conflicts between conservative and liberal American cultures (say, the conflict between security and privacy). In those cases, there is no internal conflict. People are either one way or the other. The two groups essentially operate independently of one another, watching their own news, investing in their own funds or companies, socializing with like-minded people, and flirting with the idea of actual debate by watching a bit of FoxNews/The Daily Show to angry up their blood before returning to the safety of their own belief system. It's wrong to think of these two Americas as geographically separate. They're neighbors, co-workers, spouses, etc. But in terms of their philosophy, they feel neither the overwhelming need to convert the other side nor to be converted themselves.

But let's say you're of the conservative persuasion. You own shares in a multi-national media conglomerate, and you'll be damned if Uncle Sam tries to regulate any aspect of said company's attempts to maximize profits. If this company you invested in needs to put wacky signs underneath bridges and in subways to reach that coveted 18-34 male demo, well then you'd better not prevent them from doing so. At the same time, you can't believe that these weirdo artists have put us all in grave danger (and are laughing about it!) by putting bomb-like devices underneath bridges. But sooner or later, these two forces - the profit motive and the security motive - will come into conflict. And now they have. In this particular case, you could argue that Turner Broadcasting isn't as right-wing as most corps, but as a corp, I'm sure it has a lot of gung-ho Republicans sitting on its board.

This conflict between unrestrained capitalism and a government's attempts to keep the country on lockdown is bound to flare up again and again as long as there is a strong conservative bent to the leadership in this country. This isn't to say that other ideologies are without their own inner conflicts. But, since this one has to do with the limits of advertising, I thought it was worth pointing out here.

Thursday, February 01, 2007

24, the Terrorist Threat, and Err

First off, you have the image of Err giving the finger alongside stern-faced officials, and Shepard Smith saying, “their god is an Indian…that turns into a wolf.” There’s something so absurd, so satiric about these images and sounds that it's hard to move beyond them. But, for the sake of argument, let’s consider another facet of this event: the two people on trial at the center of it. They could be cast as tools of a multinational corporate advertising behemoth or as artists with ties to Boston’s beloved academic community.

Really, I can’t see the parent company (Turner Broadcasting, whose Cartoon Network airs the show) as an enemy in the court of public opinion. The two likely enemies are (if you’re on the right) the long-haired, smirking artists who don’t realize how serious the terrorist threat is and need to be taught a lesson or (if you’re on the left) the incompetent authority figures (police, government) that are trying to outlaw art via the war on terror.

As in most cases, people’s existing beliefs determine their reaction. But, for a moment, I won’t be so jaded, and I’ll believe that there are some people who haven’t made up their minds and would consider new information. What information might we consider?

To start with, you’ve got a box-like object, about 2’x1’, with wires and possibly duct tape on its exterior, OR you’ve got a lite-brite. The debate hinges on this visual, which cannot help but advertise ATHF, and which is also proof that even with visual evidence, those who are passionately disposed one way or the other will see two different objects: one sees a lite-brite, the other sees a box with wires and duct tape. One sees an object that has been there for 3 weeks and has appeared in many other cities in similar locations; another simply sees that it is under a major bridge where there may have been graffiti but no protruding objects or devices.

It's impossible to ask everyone at all times not to do anything that might look suspicious to someone else. You can reduce the odds of someone setting off a bomb in a city by asking people to report suspicious packages or behavior, but if people report too many false alarms, then the reporting itself makes the city less safe. So the crux of this argument becomes: what constitutes suspicious behavior or packaging? What could a bomb look like? Where would it be placed?
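The false-alarm problem above is, at bottom, a base-rate problem, and it can be sketched with Bayes' rule. All of the numbers below are invented for illustration; the point is only that when real devices are vanishingly rare, even a careful reporting public produces overwhelmingly false alarms.

```python
def positive_predictive_value(prior, sensitivity, false_alarm_rate):
    """P(real device | package reported), via Bayes' rule."""
    true_pos = prior * sensitivity            # real devices that get reported
    false_pos = (1 - prior) * false_alarm_rate  # harmless objects that get reported
    return true_pos / (true_pos + false_pos)

# Suppose 1 in a million odd-looking packages is a real device,
# citizens report 99% of real devices, and also report 1% of
# harmless oddities. Then a given report is almost never a bomb:
ppv = positive_predictive_value(prior=1e-6, sensitivity=0.99,
                                false_alarm_rate=0.01)
print(ppv)  # ~0.0001, i.e. roughly 10,000 false alarms per real device
```

Under these made-up numbers, responders would chase about ten thousand lite-brites for every actual bomb, which is exactly the resource drain the paragraph above is worried about.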

Each of our individual beliefs on this matter has to do with how great a threat we believe terrorism to be. The government and various corporations have an incentive to exaggerate the threat, while other groups (artists, terrorists, libertarians) have an incentive to see less of a threat than the one that truly exists. Some people may acknowledge the threat, but may feel that the actions taken in this case reveal our inability to screen out false positives, thereby revealing how vulnerable we are on a very public stage, thereby making us more vulnerable (especially when the authorities cannot admit they made a mistake because they don’t want to lose face).

It's interesting that people have been bringing up 24 on the blogs, suggesting that the show has some influence over our perception of the terrorist threat. It's good that people are beginning to acknowledge the effects of fictional media on our perception of reality. The next step might be for each of us to spend more time gathering information about the relationships between large states and those who fight them using violent tactics (Israel and Ireland seem like good places to start), the technology typically employed by these groups, basic human psychology, the nature of insurgent movements throughout history, and global politics, and then to encourage others to do the same. This holistic approach to gathering information seems like the only way to recalibrate our individual or collective perception of “suspicious” behavior or packaging.

Wednesday, January 31, 2007

Blogs and Comments: reacting to news (plus a few notes on controversial advertising)


It's just a few hours after a bomb scare here in Boston, one that happens to have a particularly absurd bent to it. Initially, I saw the images of traffic stopped and the bomb crews under bridges on TV, and reports that it was not a genuine threat but a hoax. Then a friend told me that he had heard on NPR that the "suspicious packages" causing all the hubbub were lite-brite images of Err. This sounded too ridiculous to be true, so I didn't really believe it until I got home and looked it up on the web. Most of the articles in online newspapers are of the same sort: simply reporting what happened, which is what I'd want from them. But I wanted more than that. I wanted opinions. I wasn't so much interested in the event as in the public reaction to it. Would it just blow over really quickly? Would most people side with the Mayor and the Governor against the network and the advertisers, or would most people side with Turner Broadcasting et al?

Obviously, the sites that people comment and blog on are not representative of the public reaction, but in just 30 minutes, I was able to find a wide range of reactions that brought up a lot of good points I never would have thought of and that TV news probably wouldn't have broadcast (generally, TV just parrots the same facts over and over, or presents one highly-opinionated viewpoint): that the placement of the signs (under bridges), combined with the fact that, unlit and viewed from a distance, they would look less like Err giving the finger than like a box with wires sticking out of the back, makes the alarmist claims somewhat understandable; that Shepard Smith said, "their god is an Indian...that turns into a wolf"; that Bill O'Reilly suggested that a cartoon character be arrested; that the signs had been there for 3 weeks; etc.

You have to sift through a lot of erroneous, opinionated information while trawling blogs, but when something like this happens, they're more satisfying than traditional media simply because they move faster. That's what I came to realize: the pace of information on blogs. This makes me all the more interested in blog or comment filtering (a la digg's system of making the most-recent, most-read, most-approved comments more visible than the others). Or perhaps the thing to do is find a community of bloggers or commenters that you've come to trust and go to those sites when big events happen. I'd guess that it's this sped-up information that is largely responsible for the shortening of the news cycle. Cable news and the legit press are just trying to keep up.
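For what it's worth, the kind of filtering digg does can be sketched in a few lines. This is a hypothetical scoring scheme, not digg's actual algorithm: the field names, weights, and decay constant are all invented, but it captures the idea of surfacing the most-recent, most-read, most-approved comments first.

```python
import time

def rank_comments(comments, now=None, half_life_hours=6.0):
    """Sort comments so recent, well-read, well-approved ones rise to the top."""
    now = now if now is not None else time.time()

    def score(c):
        age_hours = (now - c["posted_at"]) / 3600.0
        recency = 0.5 ** (age_hours / half_life_hours)  # exponential time decay
        # Approvals count more than raw reads; both fade as the comment ages.
        return (c["approvals"] * 2 + c["reads"] * 0.1) * recency

    return sorted(comments, key=score, reverse=True)

now = time.time()
comments = [
    {"id": "old-favorite", "approvals": 50, "reads": 900, "posted_at": now - 48 * 3600},
    {"id": "fresh-take",   "approvals": 8,  "reads": 120, "posted_at": now - 1 * 3600},
]
top = rank_comments(comments, now=now)
print(top[0]["id"])  # prints "fresh-take": decay lets it outrank the stale favorite
```

The design choice worth noting is the half-life: with a 6-hour decay, a two-day-old comment keeps less than 1% of its score, which is what makes this kind of feed feel "fast" during a breaking event like the bomb scare.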

This event could also be proof of the point I made in a previous post about the unintended effects of advertising. There's no doubt that this particular situation had more to do with poor judgment on the part of the police and government than with advertising, but it does bring up an interesting point about the limits of advertising. Although, if this does, in fact, create buzz for the film (which it almost certainly will), and thereby generate more revenue for the parent company, then this would be considered effective advertising, and therefore should be mimicked in the future, no? It will be incredibly difficult to draw correlations between the profitability of the Aqua Teen franchise and this controversy. But as much as this might drive up their revenue, it's an impossible thing to plan or duplicate. I'd argue that it's precisely the perceived unintended nature of the publicity that makes it appealing to consumers. They're drawn to the product only because they believe the ad agency and the network when they say "we had no idea this would happen." And really, buying something related to Aqua Teen at this point (say, an Err shirt) isn't so much a vote for headline-grabbing controversy and police-baiting as an assertion that the police, homeland security, and the FBI need to handle potential terrorist threats in a different manner.

Sunday, December 31, 2006

On Distracting Advertising

I'd originally intended this blog to be strictly about blogging, the idea being that focused writing is better writing. But I don't see the harm in making this blog about media in general, so here are some thoughts on advertising.

First, there are motives. Let's assume that some advertising annoys people every now and then, or at least isn't as entertaining as most other forms of entertainment. Maybe it isn't annoying enough to make you change the station/website, but given the choice to go without it, you would. The economic incentives that more and more websites are offering for ad-free versions of sites are evidence of this demand. Let's also assume that companies want to sell as many products as possible while spending as little as possible on advertising. Advertising companies, on the other hand, would rather have the companies that actually produce things pay for MORE advertising, so they will try to justify their existence and expand advertising in whatever ways they can.

So, wouldn't a minimalist ad campaign benefit the consumer and the company that produces the goods? Is a 30-second TV spot or a flashing banner the most effective way to get people to buy your goods? Why couldn't one company create a 10 second ad or a slender, unadorned banner ad that says to consumers, "if you want less distracting advertising and the better entertainment experience that would result from that, then buy our product. If you don't buy our product, you are, in effect, voting for more distracting advertising and shittier content." If this worked, then other companies would follow suit.

Getting the consumer's attention is good. Getting them annoyed is bad. There's no distinction being made between attention and distraction in a lot of web advertising. Perhaps the web allows companies that produce goods to do an end-run around the advertising companies and go directly to the consumers, much the same way the web rendered old-school record companies obsolete via social networking sites. Companies and the individuals who work for them will fight for their livelihoods by insisting that they're indispensable. But this refusal to evolve is to be expected, and has nothing to do with the effectiveness of ads or the economic necessity of a billion dollar ad industry.

As more and more websites tinker with how to generate revenue, the future of entertainment looks to be less like films and HBO and more like the shitty network TV everyone loves to dump on. There are, of course, plenty of exceptions to this rule, but still, I would argue that there is a correlation between the lasting entertainment value of any cultural object and whether or not it is created for ad-supported media.

Perhaps this will encourage more...insidious forms of advertising, namely viral marketing, and maybe that's not such a good thing. Maybe our future will involve trying to deduce whether or not any conversation we're having with anyone is an attempt to sell us something, and so the distractions of our entertainment experiences will invade our social lives (spam is just the beginning). I'm willing to debate this point. I honestly don't have a strong opinion about it right now. Maybe this is the future and maybe it's shittier than our present. But I just can't accept advertising as it is now.

It also might make for more entertaining ads, and one could argue that since the inception of TiVo, this has already started to happen. Here's the easiest way to tell: if you give people the technology to easily bypass ads, are they still watching them? I have no problem with entertaining ads, and there are plenty of them. The real problem, as I see it, is the distraction factor. If you have a half-hour lunch break to go to your favorite online video website, you'd rather not spend most of those 30 minutes trying to filter through information deciding what's an ad and what's not. You just want to relax and be entertained. Perhaps you'd like to be informed of products that you might want to buy, but that is NOT the same thing as spending time filtering through distracting information looking for useful nuggets, which is what many of us spend countless hours doing.

I'm actually trying to watch football while writing this, and I've seen about 30 car ads even though I have no intention of buying a new car in the next 5 years. Yes, I accept the premise that these ads have some subliminal effect on subsequent purchasing decisions. Because I've seen so many Chevy ads, I'll think of them instead of Saturn (who doesn't advertise as much on the programs/websites I go to) the next time I'm considering buying a car. Ads serve a purpose in our economy. They are not worthless. But their worth has to be weighed against the cost they exact on our ability to function otherwise.

Friday, November 10, 2006

What kind of a sucker do you take me for?

Lonelygirl15 doesn't seem to be a fluke, in the sense that many other vlogs are posted by ambiguously "real" personalities. It's impossible to tell to what degree people are putting on an act. Perhaps part of what keeps viewers coming back to such videos is the game of trying to figure out who the poster REALLY is - a bit like the pleasure of watching a mystery. They watch the videos, look for cracks in the facade, and try to figure out the motives of the poster. It's not a black-and-white "are they confessing or are they acting" question, but rather: what aspects of the poster's persona are real, which ones are fake, and WHY is the poster being fake in that way? Presumably, they are being fake to boost their ratings, or perhaps to feel superior to the viewers ("ha ha, I fooled you" type of thing). To try to figure out one of these could-be-real-could-be-fake vloggers is to try to understand how creators view their audiences. If they think they can fool us, how are they trying to fool us? It's a bit of a competition between creator and audience, and I don't see this game ending anytime soon.

Blogs as conversation and lasting art

After reading some of my students' blog entries about blogs they had read, I was reminded of something intriguing about blogs in general: their ambiguous status as personal conversation/confessional OR as a lasting statement about life, akin to a novel. Granted, most blogs are of the former kind - gut-spilling for the benefit of a select few (usually real-world friends) - but I like the fact that there could be profound writing hidden among these entries, and that they are not explicitly marked as "literature" or "art." It could be one particularly interesting, well-written entry in an otherwise self-indulgent confessional blog - great writing is great writing, and to me, it's almost "greater," or perhaps somehow more authentic, when it's not in a published anthology or a well-known novel. It's the ambiguous status of personal blogs that keeps the blogosphere interesting to me. For this reason, I hope that people don't just see blogs as a way to refer people to interesting news stories on CNN.