Friday, July 11, 2008

Absolute Privacy


In the increasingly strident and polarized popular discourse surrounding privacy, people tend to break down into two categories.

Absolutists: the right to privacy, like freedom and justice, is an inalienable right and an abstract concept that does not have to be defined in any specific way relating to the real world. What I choose to watch on TV or YouTube is just as private as my social security number or 24-hour surveillance footage of me or what kind of cereal I choose to buy. Any monitoring of these activities or collection of data pertaining to them constitutes a violation of my Constitutional rights. These rights may as well have been handed down by God; that is to say the morality, logic, and appropriateness of their application to any given situation cannot be questioned. In fact, their strength derives from their universality, from the fact that they are appropriate to all possible situations (and, I would argue, from their vagueness).

Situationalists: The value of privacy in a given context depends on what can be gained by the individual or the society when it is sacrificed.

Imagine two worlds: one in which no one has any weapon larger than a slingshot and another in which everyone has an atomic bomb. In order to use any weapon in either world, people must go through various preparatory acts which include talking on the phone, emailing others, and doing other things in the privacy of their own homes. An absolutist would say that we should not monitor phone calls, emails, or other private activities. We should work to make sure people don't use the bombs, and we should try to dismantle the bombs. But if we can't do those things, then it is worth running the risk of having a few angry people use the bombs and thereby destroy the world just so long as privacy is preserved. A situationalist would also advocate weapons control and anger management, but once those options have been exhausted, then they would say that the slingshot world requires one level of privacy (no wiretaps, etc) while the atomic bomb world requires another (24-hour surveillance).

We do not live in either of these worlds, but we've been moving steadily from the slingshot world to the atomic bomb world. It's increasingly easy for a small group of angry people to kill a lot of people. This has less to do with Bush or Bin Laden, Islam or Christianity, China or Palestine, and more to do with technology and our interconnectedness (side question: has our interconnection via the internet made us less vulnerable, as it was originally intended to do, or more vulnerable to attack?).

This steady move doesn't make the atomic bomb world an inevitability. We need arms control. We need to remember that two things made 9/11 as bad as it was: jet fuel and architecture. If jets were battery powered and people in buildings were more spread out, it would be harder to kill large numbers of people. We also need to figure out why people who launch attacks are so pissed off and try to do something to resolve those conflicts non-violently. Taking steps to reduce inequality, even if that means (god forbid) regulating a market or implementing a governmental program every now and then, would almost certainly help in this regard. Once we've done those things to the extent that we can, we need to decide whether we are absolutists or situationalists about the remaining risk and our privacy.

Resolutions: approximating the likelihood of and identifying the motivations for abuse of surveying power. Designing mechanisms to detect abuse and punishing abuse severely. Figuring out ways of assessing the risk of failing to survey the population properly and punishing the over-valuation or under-valuation of a threat severely. These are, of course, extremely difficult to even approximate. And yet, I would argue, this is what people in power are already doing, and this is what they must do. My hunch is that people who aren't in power (Joe Blow on the internet forum) tend to be absolutists because they do not have to face the negative consequences of being wrong. Those who are in power must face these consequences, and so they tend to be situationalists. If we were ever to have a more direct democracy and really give the power to the people, I'd wager that we'd make some pretty big mistakes b/c most people are not used to dealing with the consequences of very bad decisions made on a large scale. We'd learn, of course, but it would be pretty ugly.

The problems go beyond the rapid evolution of technology. I think that our culture (primarily on the left) is running into a conflict that we (primarily on the right) had when we tried to reconcile modernity with another document written in another era: the Bible. The way we think about the world - in particular language and the self - is not the way people thought about it when the Constitution or the Bible were written. The shift happened gradually, as the population became more educated and interconnected, more industrialized. I'm not saying that the way we perceive our world and our selves is superior or inferior to the way we perceived them before; only that it is different, so different that words written in one paradigm, while still maintaining some semblance of meaning that applies to our lives today, need to be translated and updated.

This sounds really dangerous, I'll admit. This way lies moral relativism. But I'd argue that there is no other way to deal with our inner conflict between the modern reliance on technology, logic, and interdependence and our obsession with the roots of our democracy.

To sum up, we need to shift the debate away from whether there is potential for abuse of surveillance power (there is) to the likelihood of it being abused (how easy does the technology and socioeconomic structure make it to abuse that power) and the motivations for abuse. We also need to take a longer view with surveillance and let go of the idea that we ever lived in a world where determining our levels of privacy was completely up to us. We need to be vigilant for abuse of surveying power, aware of structures that concentrate this power, and we should question the legitimacy of outside threats. But we cannot keep getting freaked out about a few cameras, invoking Orwell every time a new technology hits the market. We need to realize that cultures and even species have been playing this game of hide and seek for a while now. When a culture is surveyed, its mode of expression changes. A more intricate slang language develops. A subtle sense of irony develops, one that is indiscernible to outsiders. Pranksters demonstrate how sophisticated Photoshop and video editing technology is so that courts will no longer be able to use video surveillance as proof that an event occurred. Like any power struggle, it does not end. Both sides evolve more sophisticated ways of outsmarting the other side. Better eyes lead to better camouflage, ad infinitum.

Monday, July 07, 2008

YouTube: Public or Private?


One of the most interesting things about the case of Viacom ordering YouTube to hand over its information on users is that the act of watching videos is being referred to as a private act. Is it as private as our social security and credit card numbers? Half as private? Twice as private? Where did the idea of what you watch being private come from? Here's one answer to that last question:

"[The right to watch videos in private] is protected by the federal Video Privacy Protection Act," Mr. Opsahl added. Congress passed that law in 1988 to protect video rental records, after a newspaper disclosed the rental habits of Robert H. Bork, then a Supreme Court nominee.

One might have guessed that Bork's rental history was similar to that of Clarence Thomas. Actually, Bork, whose nomination was rejected, had fairly tame tastes while Thomas, who serves on the court today, claimed to have watched some less mainstream fare.

The Electronic Frontier Foundation argues that the act of watching a video on YouTube is similar enough to the act of watching a video cassette to qualify that act for protection under the Video Privacy Protection Act. I would argue that YouTube is not very similar to watching videos in the privacy of your own home. There are so many things about the site that make it feel public - the comments, the favorite lists open to the public, the usernames. It is precisely those attributes that set YouTube apart from existing online video sites and led to its massive success. YouTube became what it is today b/c it is not as private as watching TV. It is communal and public.

Watching a video on YouTube is NOT (and never was) a simple private act. Opsahl makes the classic mistake of thinking that just because YouTube has video, like VCR tapes, it is used in the same way and has the same relationship to publicness/privateness as its visual predecessors. He is not alone in the error of attribution. Viewers, lawyers, privacy advocates, and even users may mistake watching YouTube for a private act, but be assured, it is not.

Sunday, June 29, 2008

New Media Mood Rings


I've become more interested in how people's use of media relates to their moods. You could understand any moment of human experience by looking at moods: how they change, what changes them, why they change, why we're able to control them sometimes and can't control them at other times (if we could, wouldn't we always choose to be happy?).

All media consumption is related to moods, but with some, there seems to be more variety and fluctuation in moods. TV is interesting in that, at least for me, it seems to generate very few moods: the ritualistic up-down-up of a good drama; the introspective, aggro-but-still-interested-in-learning-about-human-behavior vibe I get from HBO dramas; the self-esteem that comes from watching "healthy" TV like Charlie Rose; the amusement brought on by comedy; the pleasant, trance-like numbing of the mind that sports and pretty much everything else induces; some combination of these. With film, it's mostly the same. But with music, things are totally different.

I tried in vain to label all of my music according to mood: drunk, funny, energetic, emo, emotional, happy, in love, rock out w/ cock out, blissed out, sad, druggy, melancholy, etc. (it would be interesting to explore color coding as a means of categorizing music according to mood. Maybe this would work better than words). I was thinking that when I was in a certain mood, I'd dial up those songs on my iPod, but it wasn't that simple. Sometimes, I knew what I wanted, what song, what artist, what genre. But most of the time, I'm not sure how I feel or what I want, but I'm not willing to give myself over to the randomness of the shuffle feature. That's the key difference between interactive and passive media, and I'm defining music, as we listen to it now in the post-album era (and perhaps TV in the era of remote controls and frequent commercial interruptions), as "interactive" in the sense that we expect to exert control, that we're not willing to give ourselves over to the mood-shifting narrative created by the artist.
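A mood-tagged library like the one I attempted is really just a many-to-many mapping from songs to moods. Here's a minimal sketch in Python, with invented titles and tags standing in for an actual library:

```python
import random

# Invented titles and mood tags, standing in for a real library.
LIBRARY = {
    "Song A": {"happy", "energetic"},
    "Song B": {"melancholy", "emo"},
    "Song C": {"happy", "in love"},
    "Song D": {"druggy", "blissed out"},
}

def playlist_for(mood, library=LIBRARY):
    """Every track tagged with the given mood, in a shuffled order."""
    tracks = [title for title, tags in library.items() if mood in tags]
    random.shuffle(tracks)
    return tracks

print(playlist_for("happy"))  # Song A and Song C, in some order
```

The catch, as I found, isn't building the mapping; it's that you have to already know your mood before you can query it.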

Music media can be a mood enhancer, mood manager, mood changer, or, most interestingly of all, a mood indicator (like a mood ring). It seems odd not to know whether you're happy or sad, but most of the time, according to my use of music, that's how I am. It's not until I don't hit the "skip" button when that Beatles song comes up on shuffle that I realize I'm really happy.

I'm still sketching out these ideas on mood and the level of interaction or control in media. I think w/ online video, we're treating video in the same way we treat music, but it might be overly-simplistic to say that we use both to enhance moods rather than experience a new mood. I doubt that self-reporting will get us the answer to these questions. Most of the time, we don't know exactly what we want, but maybe we're too afraid to admit that b/c it means that we're being controlled, which goes against every freedom-loving, individuality-promoting instinct in our minds.

When you look at the big picture of our media use, our use reflects our demographic, our psychographic, our values and beliefs and preferences. But when you start looking closely, you can see the ups and downs of our moods, which are far less predictable than other patterns. You could predict I'd like a new band b/c I like that genre of music, but you can't predict what I'm going to listen to tonight b/c you can't know whether the hundreds of interactions I have w/ people today are going to put me in a good or bad mood...unless you carefully monitored my brain activity and physiological states? Or just spied on me all the time? Maybe the next generation of iPods will measure that sort of thing, and finally I'll get the media I want but didn't even know I wanted.

Wednesday, June 25, 2008

Washington Week in Review


I feel obliged to write about my experience as an intern for the Media & Democracy Coalition here in Washington DC. This wasn't my first instinct, given that policy is not my area of expertise, and so I imagined that I'd have more to gain from and little to add to the online conversation about media policy. But since this is a fairly unique opportunity for a media scholar to see how the sausage gets made (is it me or is this metaphor becoming more and more popular?), I thought I'd write about it, albeit from the perspective of an under-informed outsider.

Today, I attended a meeting put on by the Media Access Project with two representatives of the presidential candidates, a representative from Skype, and one from AT&T. They talked about what the next administration would do regarding telecom policy in the next 4 years. For the most part, there were no surprises. I was reminded of everything that frustrates me about political discussion. You have two sides of an ideological divide stating and re-stating their beliefs on one central issue: what actions do legislators and businesses take in order to bring about the broadest benefits for all? One side privileges government regulation. The other privileges unfettered markets. Both of them speak in familiar generalities. When they do recognize that regulation might be appropriate for some circumstances and not for others, they don't talk about the attributes of that circumstance (say, the rate at which people adopt a technology, or the rate at which the cost of a product falls). Instead, they point to the fact that innovation occurred after a certain policy was enacted. One side interprets the innovation as an acceptable level while the other sees it as less than what could have been under a different policy. In the real world situations that they're comparing, they cannot make logical claims about causality between policy and innovation, but that doesn't stop them from doing so.

The whole discussion seems like posturing, just a way of publicly declaring, yet again, that the individuals in question (Obama, McCain, and their surrogates) stand where you would imagine them to stand on the issues. This kind of public posturing isn't without value. In a representative democracy, citizens need to be familiar with candidates so that they can make voting decisions based on something substantive. But is that all there is to such talks? Posturing in public, and decisions made to placate the wealthiest players behind closed doors? No.

I think that the answers to the questions being debated by these two sides can be found using behavioral media studies and behavioral economics. You could say, definitively, whether or not Skype was in direct competition with Verizon (and therefore deserving of the same kinds of regulation as Verizon) by finding out if a sample group of consumers acted as though the goods were substitutable given what they are bundled with (Skype with internet access, Verizon with cable TV and internet at a higher price). If they treat it as a substitutable good at a certain price, then it should be regulated like a service provider when sold below that price level.
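To make the idea concrete, here's a toy version of that test in Python. The numbers are invented: at each hypothetical monthly price for the Skype-style bundle, we record what fraction of the sample group chose it over the Verizon-style bundle, then find the highest price at which a majority still treats the two as substitutes:

```python
# Hypothetical monthly prices for the Skype-style bundle, mapped to the
# fraction of a sample group that chose it over the Verizon-style bundle.
# All numbers are invented for illustration.
choice_share = {0: 0.95, 5: 0.80, 10: 0.55, 15: 0.30, 20: 0.10}

def substitution_threshold(shares, cutoff=0.5):
    """Highest price at which at least `cutoff` of the sample still treats
    the two services as substitutes; None if no price qualifies."""
    eligible = [price for price, share in shares.items() if share >= cutoff]
    return max(eligible) if eligible else None

print(substitution_threshold(choice_share))  # 10
```

Below that threshold price, you'd regulate the service like a competing provider; above it, you wouldn't.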

You could see if regulation or deregulation of a communications market (be it at the application level or the equipment/backbone level) results in more innovation or less innovation by creating two parallel online environments where avatars are inclined to gain some resource (virtual $) and need to communicate with other avatars in order to compete or cooperate with them to achieve their goals. One of the environments has regulated communications competition and the other does not. Wait a few months, give the providers of the communications a chance to innovate or fail to innovate, and compare the results.

It wouldn't be prohibitively costly to run such studies. A few thousand dollars would get you properly motivated subjects and the environments needed to conduct the studies. You could say that parties with certain interests will fund certain studies that reinforce their stance on the matter (which has little to do with what is empirically true in terms of consumer behavior). This might have been true at one point. I won't dispute that, historically, behavioral economic studies had limited impact on policy and the everyday reality of businesses and consumers. But when the cost of conducting such experiments falls, along with the cost of disseminating transparent results and the cost of duplicating experiments, it becomes harder to propagate a lie about consumer behavior.

I'm not certain of this. It's just a hunch that came to me as I was listening to the candidates' proxies debate (it could've just as easily been telecom lobbyists on one side and NGOs and non-profits on the other). Information on behavior gets out there. People judge whether or not it is accurate, and if the results are really important to them (if they were trying to decide if a de-regulated biotech industry would come up w/ a cure for their cancer any faster than a regulated one), then they won't care about free market or socially conscious ideology. They'll care about accurate predictions and repeatable results. I'm placing my faith in the openness of information and lowering of barriers that the internet allows for. I'm assuming that this openness will somehow prevent politicians and businesspeople from propagating false information about behavior, that openness will make it so hard to maintain a lie that it will be easier to just work harder, honestly, to achieve your goals. People will do so not out of any moral inclination, but just b/c it'll be easier. Perhaps this faith is misplaced, but it seems like the best way to maintain my optimism in this town.

Wednesday, June 11, 2008

Unions & the Quality of Screenwriting


A thought came to me as I was watching the end of season 3 of The Wire while listening to the director and writer's commentary. The director talks about lens lengths, subtle camera movements, blocking. All of these have effects on the viewer, ones that viewers likely can't quite put their fingers on. I certainly wasn't conscious of some of the track-ins that were happening during conversations between two characters, but I bet that, subconsciously, it helped to boost the sense of tension and importance in the scene for me, all the more so b/c The Wire doesn't use the crutch of emotion-directing musical cues. But I couldn't help but think that the camera movements, the lighting, the music (if there were any), the performances, and everything else serve to augment the core of the text - the script. Even though I know that TV and film are collaborative arts, I cannot help but think that the script is the most important element of either medium.

I had a sudden pang of guilt about an earlier entry I wrote about why screenwriters will never get much pay or much respect. I've enjoyed The Wire so much and believe that it will go on to generate hundreds of millions of dollars in DVD, online, and cable syndication revenue; how could I say that its writers weren't entitled to most of that revenue when they created the core of that text?

I still believe what I wrote in that entry: that unlike directors, producers, financiers, cinematographers, agents, and actors, writers do not need a lot of capital to ply their trade. The reason why all the rest of those individuals get paid so much is not b/c they're equally or more responsible for what shows up on screen. It's b/c they need capital and, in the case of agents and producers, connections to practice their trade, to get better at it, and to get anything done in that industry. Capital is rare. Pens and paper (or laptops) are not. Because of that simple economic truth, writers will never get all that much money. There are always more out there.

Of course, there aren't that many George Pelecanoses and David Simons out there. Good writing is rare. But how do you know that one writer is better than another, that one script is better than another?

I'd argue that unions, because they encourage producers and studios to pay flat rates for scripts that have little to do with the number of people that will actually see the movie or the TV show, encourage mediocre writing. Producers would pay writers more if there were some way of knowing what they were paying for.

And what about the David Simons and George Pelecanoses that are just starting out, that could use the support of the unions to make it through the lean years so that they can get the time to write something as good as The Wire? I think that with the storytelling incubator that the internet allows for, either through making low-budget web-series or no-budget blog publication of scripts (hell, even fanfiction gives scouts an idea of who the good and the bad writers are), financiers will be able to spot the writers who can please large audiences without having to guess, offer them the contracts they deserve, and stop taking bad bets on bad writers. Good writing has always been rare, but b/c financiers had to guess as to what might be popular, they had no incentive to pay writers all that much, and unions didn't help by encouraging those financiers to compensate good and bad writers equally based on the misguided premise that today's hack could be tomorrow's Herman J. Mankiewicz. Not anymore.

Tuesday, June 03, 2008

The Architecture of Serendipity

After watching this provocatively titled vlog about blogs and whether or not they work to limit our viewpoints and fragment our society, I got to thinking about how we might get more evidence to support either of the opposing theories - blogs give us a much wider range of views and help us better understand our fellow humans vs. blogs prompt us to make strong bonds with like-minded individuals and do not prompt us to consider alternate viewpoints.

If we're comparing blogs to older media with its top-down editing and broad audiences, then one key difference between the two is what Cass Sunstein artfully refers to as "the architecture of serendipity." When you see or hear some bit of news or an opinion that you would not have sought out, either b/c its content did not interest you or its point of view conflicted with your own, then you are consuming media serendipitously. This choice of word is a bit misleading, as the arrangement of content in a newspaper or a broadcast is not haphazard. It is designed to be something that the consumer will like enough to keep tuning in, but also to be something that the creator either believes the consumer should know or an unconscious reflection of the creator or, in most mass media cases, the creators.

I think the question at the heart of this is: is there such a thing as too much choice?

And I'm not talking about the excess of choice that paralyzes people and forces them to make lousy decisions. I'm talking about choice that seems to be a prerequisite of individuality, identity, and self realization. We can become individuals because we have options as to how to think and how to act. The more options we have, the more our selves we can become.

This may be our future, but it certainly was never our past. Society pushed back against people's self interests for, well, most of the history of society. If one wants to get Freudian about it, one could say that society acted as our super-ego via mass media and, before that, networks of gossiping locals who subtly or unsubtly expressed disapprobation through information you really didn't want to hear.

But enough highfalutin speculation. How do we put this to the test?

Study 1: two groups provide a list of interests to the media creator. Group 1 is instructed to provide a very general list of topics while group 2 is asked to provide a very specific, exhaustive list that includes topics and opinions about those topics. They are then provided with news feeds that are tailored to their interests. Both groups feel as though they have exercised some choice in the process of consuming media (just as the TV viewer with the remote control and the blog reader both feel as though they choose what to view or read), but group 1 has less control over what they're consuming. In fact, you could have 5 or 10 groups with varying levels of specificity in their lists of interest.
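The mechanics of the tailoring are simple enough to sketch. Assume a pool of stories, each a topic-opinion pair; group 1 submits bare topics, group 2 submits full pairs, and the feed is whatever matches (the stories and interests below are invented):

```python
# Invented story pool: (topic, opinion) pairs.
STORIES = [
    ("economy", "pro-regulation"), ("economy", "pro-market"),
    ("privacy", "absolutist"), ("privacy", "situationalist"),
    ("media", "optimist"), ("media", "pessimist"),
]

def tailor_feed(interests, stories=STORIES):
    """A general interest (a bare topic) matches every story on that topic;
    a specific interest (a topic-opinion pair) matches only that story."""
    def matches(story):
        topic, _opinion = story
        return any(
            interest == story if isinstance(interest, tuple) else interest == topic
            for interest in interests
        )
    return [story for story in stories if matches(story)]

# Group 1's general list pulls in both sides of a topic;
# group 2's specific list filters the opposing side out.
print(tailor_feed(["privacy"]))
print(tailor_feed([("privacy", "absolutist")]))
```

The general list pulls in opposing opinions on the same topic; the specific list filters them out, which is exactly the difference in "choice" the study would manipulate.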

So then, what would you measure after this part of the study, and how would you measure it? What do we fear about blogs or mass media? We use words like "cocooning" and "fragmentation," but what are we really talking about, and how could you measure it? You could measure:

Happiness: do people with more information choice report being happier? Do they act happier? Yes, this would be tough to measure, but psychologists have done it before, I'm sure.

Knowledge: What would probably happen here is that the group with the most choice would have knowledge that was deep but not broad, the group with some choice would have fairly broad and fairly deep knowledge, and the group with little choice would have knowledge that was neither deep nor broad. Is in-depth knowledge better than a broad range of knowledge? No way to tell, but I think we could agree that having both is best.

Open-mindedness: If you had a debate with someone who held a belief opposite of your own, would you just get really angry at them and start yelling, would you adopt some of their beliefs while sticking with some of your own, or would you just roll over and let them convince you of anything?

Social Skills: How well are you able to communicate with someone who is not like you, in terms of what they believe? (It would be interesting to see if blogs were different than video blogs in this regard: maybe b/c you can't see who writes a blog, you're more apt to base your similarity judgment solely on stated opinions rather than visible characteristics like race, class, gender, and age.)

It seems silly to defend a paternalistic media or society, to think that our happiness, our range and depth of knowledge, our open-mindedness, and our social skills would be better if we submitted to a society that severely restricted our choices. Just because society has always been that way doesn't mean that it always should be that way. But I really think that people instinctively believe that more choice is better and that this notion should be questioned. We have never experienced true self realization. Maybe it's not what we think it would be.

I'd wager it's a matter of degree. A total reduction of choice would make you feel lousy and make you into an automaton. But is the other extreme any better? It might just make you less open-minded, less able to get along with a lot of other people, and less happy. I bet we could begin to find the answers to these questions with controlled studies of information consumption and blogs.

Saturday, May 24, 2008

Mood/situation & the Personal Blog


After reading this week's New York Times Magazine's cover story about blogging and online life in general, I began wondering how exactly "snark," as an attitude, as a writing style, came to take over so much of the blogosphere. Blogger Emily Gould defines it as "smart yet conversational, and often funny in a merciless way," though just how smart and funny such blogs are seems to be in the eye of the beholder. Perhaps her subsequent characterization of their style as "righteously indignant but comically defeated" rings more true. Did several blogs that were written in this style become popular, inspiring other bloggers who wanted to become popular to unconsciously imitate that style?

And what of the people reading and responding to personal diary blogs? What does that tell us about the function of personal blogs for readers?

"They were co-workers, sort of, giving me ideas for posts, rewriting my punch lines. They were creeps hitting on me at a bar. They were fans, sycophantically praising even my lamer efforts. They were enemies, articulating my worst fears about my limitations."

Keep in mind that Emily's blog on Gawker was only somewhat personal. Like a lot of blogs, it's a hybrid of news, commentary, and diary. This kind of hybridization may be an inevitability caused by the desire of bloggers and their advertisers for a broader audience, or so the NYTimes piece suggests.

So, co-workers contributing ideas: I think of them as co-authors, who then should get some of the revenue generated by the ads on the blog.

Creeps: people are out to pick other people up, on social networking sites, on adultfriendfinder, on blogs, in real life, pretty much everywhere. Rejection doesn't have much of a price in the anonymous world of blogging, so why not act creepy? Maybe she's into creeps.

Enemies: those working out identity issues or trying to affect society in some way. They find someone who represents values that they dislike and publicly express hatred towards them so as to discourage others from holding those views. We may dismiss this behavior as digital vandalism, but like real life vandalism, there are deeper social issues that motivate it that are worth exploring.

Fans: those looking for entertainment.

But maybe calling it "entertainment" or "voyeurism" is to cheapen and misunderstand it. So as to better understand what blog fans are after, I thought I'd engage in a little self-ethnography.

What do I get from reading personal blogs? When I watch TV or movies or read books or listen to music, I seem to want to identify with someone: a character, an author. Behind that desire to identify, I think there's a desire to escape but also to find someone who is in a similar situation, who is feeling what I'm feeling. I want this maybe to learn more about what to do in my mood/situation and to not feel alone in the way I'm feeling, or maybe to feel as though as foul as my mood/situation is, there are people who have it far worse than I do. This kind of "at least I'm not that awful" schadenfreude should be familiar to some fans of Jerry Springer, Flavor of Love, or other reality TV shows. Some fans watch, in part, to feel superior.

Here's the big difference between personal blogs and Big Media content in terms of how we relate to their characters or creators (either as our inferiors or sympathetic individuals whom we can identify with): considering how many millions of personal blog entries there are, I bet you that given the proper search technology, I could find someone who is in nearly the exact same mood/situation that I'm in instead of having to settle for someone whose mood/situation vaguely resembles mine. In fact, I have done this.

When we watch a TV show and read a book, we know that our mood/situation is not exactly like the character's or the author's. Maybe the character is a New York socialite or a farmer, but the story deals with themes that are universal - true love, loyalty, respect, mortality. We draw analogies to our own lives - the lack of respect afforded to Christopher Moltisanti is similar to my situation at work, or Sherman Klump's negative body image and lack of self-confidence is similar to my own. We read stories about people who lived hundreds of years ago in cultures totally dissimilar to our own, and yet somehow we can identify with the characters.

In the oral storytelling tradition before mass media, stories could be localized or tailored to fit the listeners' lives. Once stories became commodities, they were forced to be general and universal. We were forced to identify with people who were not like us, which probably had good consequences (we see a world that is more diverse than our particular corner of it) and bad ones (we disavow our heritage and our selves so as to try in vain to become more like the wealthy, beautiful, upbeat people on TV). With the proliferation of personal blogs, maybe stories are becoming more personal than they ever were. And don't think that it's the same thing as gossip. Personal blogs function as gossip only when we know the people who are writing them or are featured in them.

Imagine thinking of a phrase that described how you felt and being able to search for other people who felt the same way. Or imagine watching a clip from a movie or listening to a song that fits your mood and being able to read about the lives of the other people who were in that mood (fellow viewers and listeners). You can do all that now. And it's not always about actually interacting with these like-mooded people. Sometimes, it's enough to know they're out there.

Maybe finding someone in the same mood/situation as you will be a dead end. Maybe they'll have nothing new to add and their companionship, if you want to call it that, will feel hollow. But we keep searching, we keep trying to connect.

Saturday, May 17, 2008

First Taste for Free

A couple of weeks ago, Nine Inch Nails "released" an album online for free, prompting me to consider the viability of such an economic model.

First off, we need to note that NIN is (are?) a well established artist. People know their music, like their music, and would spend time listening to their music. And that's the key - consumers would spend time listening to the album. Forget money. Money spent for any particular text does not matter so much in the new media economy. Attention and time matter. Once you build up interest, you can charge for a subsequent album, a t-shirt, a concert, etc.

In today's media economy, if an artist can establish a fan base, they don't have to charge them for every album or every concert. As long as they can maintain some profit margin, as opposed to the huge profit margin they enjoyed before, the artist can make a living. Before, labels and studios had to charge consumers way more than a CD or a movie cost to produce b/c they had to pay for the duds that they churned out (Pareto's principle). They couldn't know which of their products would be duds and which would be smashes. They also had to cover the costs of developing new talent (equivalent to R&D costs). But I don't think that distributors need to pay for that anymore. Artists can record and film stuff on the cheap, put it out there for free, see if it's popular, cultivate interest in their unique output, and then charge $ for their subsequent output. You could also whore yourself out to advertisers and make $ off of modestly popular work, but I think the purest model for art/entertainment sales should be: give them a taste for free, then charge them for more once they're hooked (the drug-dealer economic model). Ads just add clutter, and I'm not sure they even work well enough to justify their existence.

Really, the only thing that agents, studios, and labels are useful for is visibility. They can artificially boost the visibility of an artist's output through promotion. But bloggers (gatekeepers, tastemakers) and the transparency of popularity on the web counteract such promotional efforts. Right now, a small percentage of consumers make their decisions as to what to spend their time listening to or watching based on bloggers or online popularity tracking (which may in fact be affected by promotion). That's right now. But what if the people who pay attention to bloggers and popularity trackers are just early adopters? What if, in the coming years, most people took their cues from these sources? Could it render advertising, promotion, labels, studios, and agents all obsolete? I think that it could.

Whenever I raise the possibility that promotion may be futile in the new media economy, I hear a voice saying that promotion is more insidious than I imagine it to be, that its influence is hard to track but still exists. This voice says that anyone questioning the viability of promotion and advertising is hopelessly naive. This strikes me as possibly true, but it doesn't seem like a strong argument; it sounds more like something said by those with a vested interest in defending the business model of advertising and promotion. It's going to get very hard to trace whether an artist or a work became popular due to promotion or due to organic word-of-mouth popularity based on the merit of the work. However, the internet makes these paths visible, so maybe we could begin to see how effective promotion is when compared to the aggregated tastes of millions of users. If you want to defend the role of promotion and advertisement in the art industry, fine, but back your argument up with some hard data. If you're really about making money, then do it efficiently. Spend your money figuring out what makes a work good or popular and then make it. Don't promote shit that you're uncertain about. To quote Martin Sheen in Wall Street: "Create, instead of living off the buying and selling of others."

Giving some of your work away for free is, in a sense, the ultimate promotion. Usually, ads and previews for movies are distorted pieces of a larger whole that are designed to make the whole more appealing than it actually is. But if you give away a whole album, a whole season, a whole movie for free, then there's nothing distorted about that.

And yet, it would seem unfair if, say, the creators of Lost gave away most of the narrative for "free" on TV and then concluded it with a movie that we have to pay to see. I see this with online video: it's very difficult to tell whether an individual video is functioning as a satisfying media product in and of itself, as a piece of a larger work that you either have to pay for or tolerate ads with (in which case it is just an "ad" for the larger work), or both.

Maybe promoters are necessary, or inevitable. But I certainly don't think that art can't be sold w/o them. Artists are engaging in large scale economic experiments, and the results suggest that we should start reconsidering the role of promoters in the art business.

Wednesday, April 30, 2008

How Election Coverage Can Decide Elections


I've been thinking about the press coverage of Reverend Wright's recent speeches, in particular the coverage of the major cable news networks and the NYTimes, though I'd suspect what holds true for these outlets holds true for most media. All agree that there are two priorities that, at times, conflict with one another: getting a certain candidate elected (Obama) and that candidate or other high-profile people linked (however vaguely) to the candidate being able to speak their minds. How much must one sacrifice in order to get elected? How many games does a candidate have to play?

To answer the question, you have to look at the poll numbers. It's a common complaint that the public is too focused on poll numbers and not focused enough on the issues, and that this is b/c the news media frames elections as "horse races." I think that heavy use of the "horse race" frame (which emphasizes poll #s over all else) leads to more frequent/larger shifts in those numbers. That is, the more self-aware a public becomes of its opinion, the more likely it is to shift. The reasons for the shift are essentially arbitrary. It might be Reverend Wright, it might be "bitter-gate." There will always be something that either the competing candidate, the news media, or bloggers who support the competing candidate will exploit, either to get their candidate elected, to raise their own stature as "opinion leader," and/or to boost their ratings and make a profit. Elisabeth Noelle-Neumann's Spiral of Silence sums this up brilliantly. It's a must-read for anyone who is genuinely trying to understand why the primary is going the way it is going.

Fluctuations in polls are not due to the larger public's reaction to an event (like Wright's "controversial" remarks on race), nor are they the inevitable result of increasingly visible poll numbers per se (hating the pollsters and the NYTimes for posting poll #s gets us nowhere). The larger public reacts to professional interpretation of minor fluctuations in public opinion. First, the media (main stream or bloggers, doesn't matter) select an event which they can interpret as "controversial" enough to plausibly affect voter opinion. Then they limit their polling to one small but purportedly influential segment of the general populace (undecideds, superdelegates, other bloggers, white working class females between 25-40 since last Tuesday). How long this time period is and what the event happens to be are of no consequence. Both main stream media and bloggers will dig until they find an event that can be spun as controversial and a small enough sliver of the public to show that there is some movement in the polls that is plausibly correlated to that event. In doing this, they justify their own existence. They are the source of information about public opinion, and our conception of public opinion is, for better or worse, what we base our voting decisions on (if you don't believe this, read Noelle-Neumann's book).

This creates a cycle: larger and larger segments of the population accept the premise that public opinion is being altered by the event, making the connection between the event and public opinion ever more plausible.

It becomes acceptable (perhaps laudable) to change one's position on a candidate. This is the "change" election in the sense that voters are expected to change their opinion on candidates several times over the course of the year.

Why does all this work against Obama? Maybe b/c he was ahead, and favorite-toppled-by-underdog makes for a more compelling story than underdog-can't-come-back, which is why both candidates were trying so desperately to frame themselves as underdogs. It doesn't help that most people who publicly rush to Obama's defense are perceived by many as elite (the digerati, the NYTimes).

I think that the new technology and the ways it allows information to spread changes how public opinion fluctuates and so it changes how our leaders are elected. The first step is to understand how it works, to give up, for a moment, our dreams of perfect democracy or a perfect candidate as well as our nightmares of a totalitarian mainstream media cabal. Just take a step back and try to see how it all works. Then you can make your value judgment and think about how you might change the system. Personally, I think that the way out of this bind is...another technological innovation. I've seen innovations on assessing user demand work on small scales, on YouTube or within online communities that introduced wiki-ratings or similar widgets. Different tools, all under the heading of "new media" or "internet," can change the flow of public opinion and could get us to recognize how we're all shaped by public opinion and yet all have the power to resist it and decide things based on judgments of the candidates and the issues.

Sunday, April 20, 2008

Movies and Data Visualization


Check out this amazing chart (or is it a graph? Maybe I'll just call it an interactive graphic) @ the New York Times. Box office earnings of every movie since 1986 represented graphically. At first glance, it just shows what one would suspect: summers and holidays are when the hit movies come out. But if you look closer, you can follow the paths of each film as its earnings rise and drop. You can quickly see which movies had "long tails." It's interesting to think about what those movies might have in common with one another.

Just looking at this, I start to see the flawed, delayed feedback system that movie creation and distribution is based on, all due to something that is becoming increasingly irrelevant: "shelf space" (or in this case, theater space). Studios release big movies at certain times b/c more people go to the movies then (summer/holidays), but people have started going to movies more at those times b/c that's when the big movies are out, not necessarily b/c that's the only time they want to go see movies. But b/c they're making huge, bloated-budget movies that have to compete with one another, studios have to make us starve for any half-way decent movies during the off-months (this April is pretty bad), not b/c it's what we actually want, but b/c of finite shelf-space and bloated budgets, both of which could be (and perhaps are being) eliminated. Here's hoping everyone stays home this summer and watches hulu and YouTube.

Promotional space is still finite, though, so it's still the movies with the biggest promo budgets that have the big numbers (they tend to peak faster and drop off faster than indy word-of-mouth hits). I really want to see how well films could do on their own merit. It would be easy enough to make a graph that corrected for promotional budgets, if only you could get that information, if only studios weren't so protective of that data. I'm starting some work with a professor here at Michigan on the amount of information that flows into our households each day. Perhaps it would be easier to just track the number of ads one can see instead of trying to get those stats from the people who put them out there. Advertising and promotion, by definition, are visible. They're trying to be seen and have nowhere to hide. You could just do a web-search for a movie, see how many hits you got, code explicit promos separately from mentions on blogs, etc. If one movie is more visible in its explicit and unofficial promotion than another movie that makes the same amount of money over a similar time period, then you might conclude that the less-promoted movie actually succeeded based on its own merits and not on extensive promotion.
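As a sketch of the idea, a promotion-corrected comparison could be as simple as dollars earned per unit of promotional visibility. All of the figures and titles below are invented for illustration; a real version would plug in actual grosses and coded web-hit counts.

```python
# Hypothetical promotion-corrected comparison: normalize gross by
# promotional "visibility" (e.g., a coded count of explicit promo
# hits from a web search). All numbers are invented.
films = {
    # title: (gross_in_millions, promo_hit_count)
    "Blockbuster A": (300, 90000),
    "Indie B":       (40,  2000),
}

def merit_ratio(gross, promo_hits):
    """Millions of dollars earned per promotional hit."""
    return gross / promo_hits

for title, (gross, hits) in films.items():
    print(f"{title}: {merit_ratio(gross, hits):.4f} $M per promo hit")
```

With these invented numbers, the indie earns far more per unit of visibility than the blockbuster, which is the kind of signal the paragraph above is after.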

Long-term sales of DVDs might also get around the promotion issue, getting us closer to what people actually like.

Specific stats (things like "smallest drop from 5th to 6th week") are available in numerical form on sites like Box Office Mojo. In some cases, you're looking at the graphic representation to find specific things, in which case you'd be better off with a list of numbers. But still, images give you a bunch of patterns to notice that you can't notice right away by looking at numbers.

Sunday, March 16, 2008

Mapping Narratives

I thought I'd post an abridged version of the paper I presented at this year's Society for Cinema and Media Studies conference here.

Here's what I had in mind: at any given moment in a film, information is being imparted to the viewer. At the same time, the viewer is aware (to varying degrees) of whether or not various characters are privy to this information. If you see a monster sneaking up behind the heroine, you are aware of the mortal threat to the heroine and you are aware of the fact that she is not aware of the monster. Thus, you feel that the heroine is quite vulnerable, more vulnerable than she would have been had she known that the monster was right behind her.

I'm positing that many (if not all) emotions a viewer experiences are dependent on fluctuations in these relative ranges of knowledge of events over the course of a narrative. For this project, I made some rather crude charts to illustrate my proposed mode of analysis. These charts show the course of movie narratives and the points during each narrative at which a key bit of information (an obvious threat to the protagonist or an obvious aid in helping him or her achieve a goal) is revealed. 1=high level of knowledge; 0=low level of knowledge. Click on slides for detail.
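The crude charts can be encoded very simply. Here's a sketch using the same 1/0 knowledge levels (the values are invented, not taken from any slide) that picks out moments of dramatic irony, where the viewer knows something the character doesn't:

```python
# Viewer vs. character knowledge of a threat, sampled at points in
# the narrative (1 = high knowledge, 0 = low). Values are invented.
viewer_knows    = [0, 0, 1, 1, 1, 1]   # viewer sees the monster at t=2
character_knows = [0, 0, 0, 0, 1, 1]   # heroine notices it at t=4

def dramatic_irony(viewer, character):
    """Time indices where the viewer knows more than the character."""
    return [t for t, (v, c) in enumerate(zip(viewer, character)) if v > c]

print(dramatic_irony(viewer_knows, character_knows))  # → [2, 3]
```

A richer model would replace the 1/0 levels with graded values and track one series per character per goal/threat, but the structure stays the same.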



Flaws: as someone was kind enough to point out at my presentation, I have left out so many subtle aspects of narration (information revelation) in this crude model. The slightest bit of music or glance from a character could be considered a bit of information that would affect our emotions and our perception of goals and threats. To this I would say: I'm working on it. As I said, this model is very basic and crude. While watching movies for this paper, I became aware of the almost infinite amount of information that narratives dole out. And yet, it is not truly infinite. Also, it can be categorized, into threats and goals, by character awareness, perhaps by level of ambiguity. With a more sophisticated data visualization tool, I think I could create a chart that incorporates every bit of information in a narrative that could possibly affect viewer emotion. You could zoom in on various sections of the narratives, or highlight certain types of information.

Then there's the question of motivation. This model works well enough when the goals and threats are well defined, and when viewer identification with the protagonist is relatively straightforward. But what about when it's not? When you look at them closely, many movies are bound to have moments of ambiguity concerning motivation, or the status of some bit of information vis-à-vis a given character. To that, I would say: this is one of the reasons why viewers do not have identical experiences of the same movie. Their experience and values will shape their identification and also their interpretation of some information as possibly helping or hindering a character. And yet, I do not think it is infinitely varied.

In any case, there is a way to set about proving the worth of this mode of analysis. I propose hooking a few viewers up to machines that measure their physiological states and comparing the readouts yielded from those sessions with some narrative maps. We may not be able to say that a moment on our viewer physiological reaction chart represents "guilt" or "suspense," but I think we would be able to see fluctuations in the viewer's mental and physiological states. If those fluctuations corresponded to fluctuations on the more sophisticated narrative charts I hope to make in the future, in terms of their spacing, duration, and degree (which I believe they would), then I think we could start to see how the machinery of narrative actually works on our emotions.
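The proposed comparison boils down to correlating two time series. Here's a sketch using the standard Pearson correlation; the narrative map and the sensor readings below are both invented stand-ins for the real data such a study would collect.

```python
# Correlating a (hypothetical) physiological readout with a
# narrative-map series sampled at the same time points.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

threat_level = [0, 0, 1, 1, 0, 1]        # from an invented narrative map
heart_rate   = [62, 63, 78, 80, 65, 76]  # from an invented sensor readout
print(round(pearson(threat_level, heart_rate), 2))
```

A coefficient near 1 would mean the viewer's physiological state rises and falls with the mapped threat level; near 0 would mean the map isn't tracking anything the body registers.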

I think we already know the basics of this, intuitively. We're all familiar with the tropes of suspense, with dramatic irony, with farce. This is extending (and exacting) that intuition.

Saturday, February 23, 2008

3 Ways to Define Good Cinema: Box Office, AFI, and IMDB


As the subjective coronation that is the Academy Award for Best Picture approaches, it's interesting to compare a few gauges of public and critical enjoyment: AFI's top 100 (critics), Box Office revenue adjusted for inflation (public), and IMDB's top 250 (a certain segment of the public). If nothing else, a comparison might give us some clue as to what kind of sample of the public IMDB users are. Also, it might give us some indication of differences between public and critical perception of films.

What the lists have in common
Amazingly, Star Wars is on all three lists (#2 B.O., 15 AFI, and 11 IMDB). My personal opinion on this film is that the acting and dialog are awful and the special effects haven't aged well. Still, the themes resonate across time and cultures, the story is well constructed, and it's a good mix of humor, romance, action, and philosophy. The Godfather just misses being in the top 20 on all lists (21 B.O., 2 AFI, 1 IMDB). Just as I'm surprised that so many critics like Star Wars, I'm shocked that The Godfather, with its European art cinema tendencies, is as popular as it is. I wonder if this film is a lot more popular with men than with women. Might that also be true of Star Wars? If that's so, then the male bias in movies isn't just coming from the critics or IMDB users. It's coming from paying customers. And yet, there are plenty of women who consume media. Historically, were they (and are they still) not making the decision as to what films to go see? Might this change in the future? Are TV and books more women's media than film and video games?

What they don't have in common
Gone with the Wind is #1 in Box Office, #6 with the critics, but it's nowhere to be found on the IMDB list. IMDB definitely has a bias towards newer films while AFI skews much older. The average age of a film in the IMDB top 20 is 30 years, while the average age of AFI's top 20 is 50(!). The average age of the top 20 box office hits is 38.
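For reference, those average ages are just the mean of (2008 minus release year) over each list's top 20. A sketch with a hypothetical handful of years rather than the actual lists:

```python
# Average film age for a list, measured against a reference year.
# The sample years below are a hypothetical handful, not a real top 20.
def average_age(release_years, as_of=2008):
    """Mean age in years of the given release years as of `as_of`."""
    return sum(as_of - y for y in release_years) / len(release_years)

sample_years = [1941, 1942, 1972, 1939, 1962]  # ages: 67, 66, 36, 69, 46
print(average_age(sample_years))
```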

Casablanca is 3 on the AFI, 9 on IMDB, and nowhere to be seen on box office. Could it be that the general public didn't or wouldn't really like this movie as much as the critics or the semi-elite users of IMDB? Possibly, or maybe Casablanca was never promoted properly in the theater or re-released widely. Perhaps its stature grew over time. Perhaps all of these things account for the discrepancy.

E.T. is the 4th highest grossing film of all time, and rates a reasonable 24 on the AFI list, but is nowhere to be found on IMDB's 250. The critic/public hybrid represented by IMDB seems to be more testosterone heavy than either the general public or the critics. IMDB's top 20 is packed with men and violence. E.T. is a bit schmaltzy and certainly isn't very high in testosterone. This male (presumably young male) skew is a big reason why IMDB should not be confused with public or critical praise, but more likely represents, on average, the desires of young male Americans.

The Empire Strikes Back is 12 on all-time BO, 8 on IMDB, and it's left off the AFI list. I've always felt that Empire is vastly superior to Star Wars in every way: acting, cinematography, character development, pacing. The Joseph Campbell mythic themes that made the first one great are still there, too. The only reason I can think that AFI found Star Wars so superior to Empire is the former's cultural and economic significance, which is weird b/c they have no problem including films that didn't exactly change cinema or the public consciousness (The Searchers, for example).

Critical godhead Citizen Kane (the Stairway to Heaven of movies, as it were) sits atop the AFI list. Only 3 films stayed in the same spot from the 1998 top 100 AFI list to the 2007 list, and this was one of them. IMDB rates it a respectable 24, and of course, it's not on the box office list. I wonder: if this were re-released and promoted heavily, would the public show up? Would they like it? Would they get what the big deal was about this movie? I kinda doubt it. Again, maybe critics are rating this movie highly for being innovative in terms of style (depth of field), story, and themes. Sure, it's still a good story, and the look of the film, the acting, and the dialog still hold up reasonably well, but I think the critics are mostly rating it so highly for its influence on subsequent movies (and the IMDB folks are probably acting like critics on this one). Ditto The Wizard of Oz (10 on AFI, 110 on IMDB, nothing on BO).

The Searchers, considered by many to be the apotheosis of a critics' movie, is number 12 on the AFI list, 241 on IMDB, and nowhere on box office. What's most interesting is that on the 1998 edition of AFI's top 100 list, this film was 96(!!!). What the hell happened in those 10 years? Other big jumpers include Vertigo (52 spots) and City Lights (65 spots to 11).

Many films (Psycho, Schindler's List, Vertigo, Dr. Strangelove, Lawrence of Arabia to name a few) are higher on AFI's list than IMDB's by 20-50 spots, but that's not really that much of a discrepancy. They both think the film is great. But none of these films grossed all that much $. Unsurprisingly, epics do better on the box office list, probably b/c they play well on big screens and are heavily promoted to recoup high costs of production (both of which critics and IMDB users don't seem to care about). Sequels and films based on existing properties do better on the box office list, too, for obvious reasons.

So, what lesson can we draw from this brief comparison? I think we should pay more attention to IMDB and AFI for a few reasons: big screens won't account for as much revenue in the future. Only 3 out of the top 20 films of all time were released during the home video (VHS/DVD) era. Back when theatrical release was the only revenue stream for film, they made films so that they would play well on the big screen. AFI and IMDB lists are made by people watching films from both that era and the era of home video, so I don't think they have the big-screen historical bias that the box office list has. Presumably, home video, with its smaller screens, isn't going away. It would help to have a list of the top 100 grossing films across all formats (theatrical, VHS, DVD, online, TV, etc). The same is true for repeated viewings: critics and IMDB users have time to pore over films again and again, so their lists might be a better indication of what people would really like, instead of what they've been told to like by marketers.

On a similar note, I think that in the future, promotion won't influence film revenue the way it has in the past. The long tail economy of the internet will allow good films to rise to the top over time. For these reasons, the top box office list might look more and more antiquated as the years go by. I think that the critics' lists are just as prone to cultural elitism bias as ever, but there also needs to be a way to track audience desire that can factor out the effects of marketing. Maybe that's what truly "good" criticism can do.

Box Office adjusted for inflation:

1 Gone with the Wind
2 Star Wars
3 The Sound of Music
4 E.T.: The Extra-Terrestrial
5 The Ten Commandments
6 Titanic
7 Jaws
8 Doctor Zhivago
9 The Exorcist
10 Snow White and the Seven Dwarfs
11 101 Dalmatians
12 The Empire Strikes Back
13 Ben-Hur
14 Return of the Jedi
15 The Sting
16 Raiders of the Lost Ark
17 Jurassic Park
18 The Graduate
19 Star Wars: Episode I - The Phantom Menace
20 Fantasia

AFI:

1 Citizen Kane 1941
2 Casablanca 1942
3 The Godfather 1972
4 Gone with the Wind 1939
5 Lawrence of Arabia 1962
6 The Wizard of Oz 1939
7 The Graduate 1967
8 On the Waterfront 1954
9 Schindler's List 1993
10 Singin' in the Rain 1952
11 It's a Wonderful Life 1946
12 Sunset Boulevard 1950
13 The Bridge on the River Kwai 1957
14 Some Like It Hot 1959
15 Star Wars 1977
16 All About Eve 1950
17 The African Queen 1951
18 Psycho 1960
19 Chinatown 1974
20 One Flew Over the Cuckoo's Nest 1975

IMDB
1. The Godfather (1972): 9.1 rating, 263,716 votes
2. The Shawshank Redemption (1994): 9.1 rating, 311,972 votes
3. The Godfather: Part II (1974): 9.0 rating, 151,051 votes
4. Buono, il brutto, il cattivo, Il (1966): 8.9 rating, 85,867 votes
5. Pulp Fiction (1994): 8.8 rating, 267,773 votes
6. Schindler's List (1993): 8.8 rating, 179,754 votes
7. One Flew Over the Cuckoo's Nest (1975): 8.8 rating, 134,305 votes
8. Star Wars: Episode V - The Empire Strikes Back (1980): 8.8 rating, 189,692 votes
9. Casablanca (1942): 8.8 rating, 117,857 votes
10. Shichinin no samurai (1954): 8.8 rating, 65,877 votes
11. Star Wars (1977): 8.8 rating, 229,423 votes
12. The Lord of the Rings: The Return of the King (2003): 8.8 rating, 239,837 votes
13. 12 Angry Men (1957): 8.7 rating, 63,122 votes
14. Rear Window (1954): 8.7 rating, 77,927 votes
15. Goodfellas (1990): 8.7 rating, 145,925 votes
16. Cidade de Deus (2002): 8.7 rating, 90,867 votes
17. Raiders of the Lost Ark (1981): 8.7 rating, 162,612 votes
18. The Lord of the Rings: The Fellowship of the Ring (2001): 8.7 rating, 273,337 votes
19. C'era una volta il West (1968): 8.7 rating, 43,490 votes
20. The Usual Suspects (1995): 8.7 rating, 184,812 votes

Are 'No Country for Old Men' and 'There Will Be Blood' good movies?


Right after seeing No Country for Old Men and There Will Be Blood (both of which I liked, kind of), I got the feeling that critics would love them and the public would wonder what the critics were smoking. The lackadaisical pacing, the meandering plots, the lack of distinct musical cues, the emotionally remote protagonists, and the oddball endings all seemed like things that people wouldn't like nearly as much as critics. I was right about the critics, but I'm not so sure about the second part.

One hint that this is the case can be found on Metacritic, which aggregates critical reception of a film and compares it to the opinions of users. The 18 point difference between critical praise and user praise of Blood and 17 points for No Country are larger than those of any other film out now, maybe bigger than any film on the site ever (I need to do more research there). Granted, the samples of users are small (329 & 401) and made up of people interested enough in critical reaction to come to a site dedicated to aggregating it, so they probably aren't very representative of general public opinion. IMDB users (larger samples at 30,000 and 60,000) rated both films very highly, so highly that they're both in the top 40 films of all time according to users.
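The gap calculation is straightforward once user scores (out of 10) are scaled to Metacritic's 0-100 range. The scores below are illustrative stand-ins that reproduce the 18 and 17 point gaps mentioned above, not actual site data, and "Comedy C" is an invented comparison film:

```python
# Critic/user gap: metascore (0-100) minus user score scaled from
# 0-10 to 0-100. The scores here are illustrative, not real site data.
films = {
    # title: (metascore, user_score_out_of_10)
    "There Will Be Blood":    (92, 7.4),
    "No Country for Old Men": (91, 7.4),
    "Comedy C":               (55, 6.8),  # invented comparison film
}

def critic_user_gap(metascore, user_score):
    """Positive when critics like a film more than users do."""
    return metascore - user_score * 10

gaps = {t: critic_user_gap(m, u) for t, (m, u) in films.items()}
# Films critics liked far more than users, largest gap first
print(sorted(gaps, key=gaps.get, reverse=True))
```

Running this kind of ranking over the whole site would answer the "bigger than any film ever" question empirically.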



For a moment, let's assume that the public didn't (or wouldn't) like either of these films as much as the critics. As I've always been interested in the relationship between critical praise and popular success, having rejected the reductionist platitude that critics like one type of film and the public another, I thought I'd take another look at why the discrepancy might exist.

One explanation would have it that there are certain static characteristics, both thematic and stylistic, that critics are drawn to more than the general public. Grandiose themes, like the criticism of American capitalist arrogance, might be a thematic thread common to many critically acclaimed films, while long-duration shots and the attributes listed above seem to fare better with critics than audiences.

In some cases, I think critical praise, of a film or the themes and stylistic traits of a film, precede the popular embrace of those things. Does the praise cause the public to like those things; that is, does the public or has the public ever looked to critics to tell them what's good and what isn't? Are the critics merely good at predicting what the public will eventually like? Are they just as likely to guess which movies will be popular if they were to pull a film's name out of a hat (the times they anticipate popularity are roughly equal to the odds that they would do so by chance)?

It would be fascinating to do an analysis of films that were loved by critics and hated by the public. Within that category there would be two sub-categories: the films (or attributes of films) that the public came to love (or at least accept) and films/attributes that the public still dislikes. The former would show us instances of critics anticipating popularity (critics doing their job well) while the other would show us what critics like that the public will never like (critics acting like cultural elites, trying in vain to force the public to like what they like).

Limited access might skew these numbers. Even if all critically praised films become available to everyone via online or in-person rental, some will be better promoted than others, and so their popularity would reflect their promotion budget more than any intrinsic characteristic of the film. But still, the internet allows people to see some prima facie indication of which films critics liked that the public wouldn't like if they were made to watch them instead of selecting them b/c they sought them out or b/c they were heavily marketed. Critics are made to watch every film, so for it to be a sensible comparison, the public would have to be made to watch every film too.

Tuesday, February 12, 2008

Mood Matters


Looking at my Netflix queue can be pretty depressing. By that, I mean that many of the movies I've put in the queue aren't exactly uplifting fare. The movie I watched tonight, L'Enfant, is a perfect example: critically acclaimed, but sad as hell. I have a really tough time getting in the mood for such films, so I tend to bump them down in my queue. Turns out I'm not alone in this kind of behavior, as this article from the Wall Street Journal indicates. Even better, someone did an empirical study on the phenomenon of thinking you'll be up for a high-brow film on a later date, and then once that date rolls around, you'd much rather see "low brow" fare (though I take issue with Read et al.'s categorization of Groundhog Day and The Breakfast Club as low-brow).

Though the study and the article pertained to the high-brow/low-brow distinction, there's an implicit assumption in both: critically acclaimed = downer. Here's the interesting thing that my Netflix queue, with its swelling number of TV shows, points to: this is NOT true of critically acclaimed TV. In particular, I'm thinking of several shows from HBO (The Wire, The Sopranos, Rome) but I think you could apply it to any of the most critically acclaimed TV shows ever. It's not that the aforementioned shows are lighthearted, exactly. In fact, all of them can be quite depressing at times. However, they've all got moments of humor (albeit dark humor) and it's those lighter touches that keep me coming back, and make me unafraid of diving into 13 hours of a show even though I'm in a pretty good mood these days and don't particularly want to be brought down. When I think about settling in for a season of any of those shows, I think it won't put me in a bad mood. I cannot say the same for many of the "better" films of the past 10 years.

Earlier, I wrote about the tone of The Sopranos, and that tone seems to pervade HBO dramas in general. Somehow, those shows manage to be deep and insightful without being horribly depressing. Is it the length of films that prevents them from being insightful and wry instead of insightful and leaden? Is it merely a convention, a habit? Whatever it is, it's keeping me away from more and more films and making me more enthusiastic about the future of serial motion picture narratives, on TV and online.