Monday, June 29, 2009

The Pitfalls of Hypothesizing about Film Success


Say we take a film like Transformers 2. The film, like most films, has a lot going on, both in its content and in the circumstances under which it was released: grand spectacle, a link to something established in a target audience's cultural memory, an extensive marketing campaign, the fact that it's a sequel, Shia LaBeouf, its director, its screenwriter, its late-June release opposite no other big action blockbusters, etc. Which of these elements is most responsible for the film's success? Let's throw in another element: Skids and Mudflap, two robots who (quoting the NYTimes piece) "talk in jive and are portrayed as illiterate; one has a gold tooth." Many have called these depictions racist. Are they part of the reason the film is successful, or is the film successful in spite of them?

The questions are essentially unanswerable, but it's not because film is Art and one cannot theorize about why some people like art and others don't, or why some art is profitable and other art is not. It is so hard to predict why films, as opposed to other art forms, succeed or fail because so few comparable films are made and because the circumstances of release (marketing, timing) play such a significant role in their success or failure. To determine which aspects of a product are responsible for its success, we need to make comparisons, and with films there are simply too few comparisons available. If you wanted to find out what elements of motion picture content make a given text successful (certain choices in pacing or plotting, certain bigoted depictions, certain actors, lighting, etc.), you would look away from film and toward online video. There are simply more comparable texts, and the circumstances under which each video is watched are so varied that the uniqueness of each viewing can be treated as random error that cancels out. What you're left with is a purer comparison and better insight into how motion pictures work on audiences than you would get by looking at a successful box office film like Transformers 2 and generalizing about which of its aspects resonated with the public. And yet film and cultural theorists have been doing just that, and continue to do just that: identifying certain characteristics of a film that possesses many characteristics, noting that the film was successful, and then making claims about a culture's preferences.
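To make the comparison concrete, here is a minimal sketch of the kind of analysis I have in mind, assuming you had a large table of online videos coded for content features along with their view counts. The file name and every column name below are invented for illustration.

```python
# A minimal sketch of the comparison described above, run over a hypothetical
# dataset of online videos coded for content features. The file and all
# column names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

videos = pd.read_csv("videos.csv")  # hypothetical: one row per video

# Regress (log) view counts on coded content characteristics. With thousands
# of comparable videos, the idiosyncrasies of any single viewing wash out
# as random error, leaving the contribution of each characteristic.
model = smf.ols(
    "log_views ~ cuts_per_minute + has_celebrity + runtime_seconds + C(genre)",
    data=videos,
).fit()
print(model.summary())
```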

It's interesting to consider recent advances in two unconventional domains of prediction, both linked to the work of Nate Silver: presidential politics and baseball. Frankly, I don't know much about Silver's prediction models, but I'd guess he takes discrete characteristics of each event (a race for office, a baseball game), assembles a data set of past events, and sees which characteristics, when all other characteristics are controlled for, exert the most influence on the outcome. You take those influences, assess the observable characteristics of the upcoming game or election, and predict the outcome. With presidential elections you have very few comparable events to use, while in baseball you have many. In the former, I would think you would have to start incorporating patterns in opinion polls (which fluctuate systematically based on various characteristics of world events and their coverage).
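I'm only guessing at what such a model looks like; a bare-bones version of the idea, with invented file and feature names, might be a logistic regression fit on past events and applied to upcoming ones.

```python
# A bare-bones guess at the kind of model described above: a logistic
# regression fit on past events, then applied to upcoming ones. The files
# and feature names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

past = pd.read_csv("past_events.csv")     # hypothetical: one row per past game/race
upcoming = pd.read_csv("upcoming.csv")    # hypothetical: events to predict

features = ["home_advantage", "poll_margin", "incumbent", "run_differential"]
clf = LogisticRegression().fit(past[features], past["won"])

# The coefficients tell you which characteristics, holding the others
# constant, move the predicted outcome the most.
print(dict(zip(features, clf.coef_[0])))
print(clf.predict_proba(upcoming[features])[:, 1])  # predicted win probabilities
```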

The trouble with film is that there is no equivalent of polls. Yes, there are test screenings, but those samples are small, and they're used for minor recuts rather than for learning why certain people like certain characteristics of films under certain circumstances. There's too little comparable data to work with. Perhaps the prevalence of remakes, reboots, and adaptations is an attempt by producers to use the "data" of those other properties having succeeded with their suite of characteristics, and to make a bet based on it. It's not very systematic, but in a way I trust it more than I trust anyone who guesses that a film resonated or failed to resonate with the public because it was or wasn't successful at the box office and possessed a certain characteristic. That's just guesswork on their part.

Monday, June 15, 2009

The Politics and Psychology of Academic Language

The liveliest panel I attended at this year's ICA conference was the "Keywords: Effects" panel. Four panelists were, each in their own way, demanding that we move beyond "effects," or, to put it another way, proclaiming the death of effects research. One problem with effects researchers: they claim to study actual real-world violence when what they actually study is people's tendencies to administer sound blasts to other people in a lab setting. Another problem is that effects studies measure an effect at a single point in time, when it is likely that true media effects take place gradually and continually over time (so it would be better to call them "processes" than "effects"). Afterward, Ron Tamborini made a great comment, basically saying that the argument over effects research is a semantic one, and that proclaiming "effects research is dead" is just a good way of scaring newcomers to the field into thinking that what they're doing is worthless. His comment drew applause from some of the audience, and I have to say I agreed with him.

I've seen and read about many disputes like this one. Basically, they take this form: one group of researchers does some research with obvious, acknowledged limitations. Despite the limitations, the research moves us beyond what we used to know about a phenomenon. They select a word for their work. Time passes and methods improve. Another group of researchers comes along and pokes holes in the work of the first group. At this point, two things can happen. The first group can take heed of the criticism and modify their methods while retaining the original term they used for the phenomenon or for their study of it, OR the second group can come up with a new name, declare the old names for the old methods and constructs obsolete, and move the field forward.

Note that in both cases, the actual research, the actual methods, and the actual bits of information we know about a phenomenon are exactly the same. The debate is not over what we know, but over how much and in what ways language shapes our ideas of what we know. For decades we've known that language matters, that word choice constricts and opens up ways of thinking. But this valuable observation has been used in only one way: to point out the ways in which existing power structures foreclose the possibility of new meanings. Unfortunately, it has not been applied to an equally common and equally problematic use of language: the invention of new words and terms to further one's career or to aid the progress of one's in-group. The side effects of this political use of language are that we get bogged down in semantic debates and the growth of our collective, public knowledge of actual phenomena is slowed (while smaller groups in the private sector accumulate vast amounts of knowledge about people). Though the intentions may be good, and the observation that "language determines and is determined by power structures" is a valuable one, those making it have released a cacophony of go-nowhere neologisms that sell a few books and are then lost to academic linguistic history.

What I see too often is the construction of straw men by up-and-coming researchers who, instead of helping to build knowledge for all, make careers for themselves (never mind whether they intend to do this; their motives don't determine the outcome). To do this, they deny the possibility (indeed, in many cases, as with "effects research," the plain reality) that researchers working within the existing paradigm can modify their methods to accommodate things we've learned about the phenomenon or new tools we have for studying it. Effects researchers have acknowledged the weak construct validity of earlier studies and adapted accordingly. They've tested various outcome variables at various points in time, developing data on trends rather than static outcomes. They've changed their ways. Do they need to change their name? To what end?

Along with "effects," the words "rational" and "cognitive" were hotly contested at ICA this year. The debate over whether people are "rational decision makers" fits this pattern. Just because people decide, at a certain moment, to weigh the importance of one thing (be it their marital bliss, their libidos, their faith in God or community or justice), and just because they had certain restrictions on the information available to them, doesn't mean that their brains worked in a fundamentally different way than they would have with more information or different priorities at that moment. To say that people are "rational" or "irrational" decision makers (or "cognitive" or "emotional" decision makers) implies, to me, that people's brains work in fundamentally different ways, which is not the case. The circumstances under which decisions are made change, but the mind still works the same way: take in information, consider how it changes our belief that an action will help bring about a state we desire at that moment, and act accordingly. This applies to every human action I can think of, no matter how supposedly "irrational" it is. Yes, I'll acknowledge that different parts of the brain are activated when making certain kinds of decisions (what we might call "emotional" decisions), but the brain is still taking in information and acting on a desire. The word "irrational" implies that the decision is either random or totally at the whim of some other guiding force. In reality, the thinking is just based on different temporary priorities and limited information. There is also confusion over whether "rationality" assumes that we are deliberate in our actions. Again, I don't think it does. We can be rational even if we make those decisions on an unconscious level.

Ann Gray's preference for the words "research material" in lieu of "data" provides another example. She claims that " 'data' has strong associations with 'evidence', 'information' and 'proof' as well as being associated with the products of more conventional sociological research methods." It's hard to deny this, and yet won't using the term "research material" confuse a lot of people? Those words are already loaded with their own meanings for every reader. How do you weigh the consequences of your word choice: causing confusion vs. freeing us from the restrictive nature of old paradigms?

In the giddiness of having seen the connections between language choice and power, we've gotten bogged down in debates over words, overestimating the power of an academic to alter language and misunderstanding the ways in which languages actually evolve over time (I have the feeling that language interventions are destined to fail, though I'd welcome any evidence that suggests otherwise). These debates serve to turn off anyone outside of a small number of in-group academics (as Tamborini noted, they even turn off and confuse undergrads and grad students) and, again, allow other people in other fields to advance knowledge of the world while we quibble over what words to use.

Those actively seeking to change language for political/ideological reasons have had some failures as well as some successes. Why do some words fail to catch on? I would argue that even if a word is loaded with meaning that hinders the population who must use it, the change in language will only take hold if the substitute is clear and concise enough.

Who uses the word matters (see Wanda Sykes's PSA against using the word "gay" as a pejorative). As a more and more diverse set of people uses a word or stops using it, the word carries more or less stigma, or its avoidance gradually stops being perceived as a rule that P.C. academics are trying to force us to adopt.

In the end, the internet has taught us that language spreads like a virus (though it's hard to say how long some of the neologisms will stay with us). Language is important and it affects how we behave and think, but changing it is quite difficult. It seems you cannot change it simply by presenting evidence that a word is somehow discriminatory. If you really want people to use your new word, or to stop saying "faggot" or "nigga," you can't just lecture them or point out that powerful people dictate language. There has to be a more nuanced understanding of how words spread through a population. I don't study linguistics, so for all I know such an understanding exists. I'd be interested to learn what it is.

Wednesday, June 10, 2009

Are Online Communities Sustainable? (or online relationships, for that matter)


I've been reading an interesting post by Trent Reznor regarding his departure from social media (in particular Twitter, but also his extensive participation in online fora with fans). He charts his progress from idealist (hoping he could make the relationship between artist and fan more intimate and unmediated, with no PR people, etc.) to cynic. His major problems with social media are trolls and anonymity. Essentially, it's the classic problem of anonymity leading to more purposefully disruptive hate speech. Reznor offers a little dime-store psychology based on his discovery of who was behind the trolling. It is more or less consistent with the findings of Mattathias Schwartz in his NYTimes article on trolling: people who troll are looking for a way to get back at the world for hurting them, marginalizing them, or rendering them powerless, and anonymous internet fora provide the easiest way to do this.

Reznor's relationship with fans is unlike most social media users' experiences. There's a significant real-world power imbalance between star and fan, one that attracts trolling. Trolling doesn't happen everywhere or in random places; usually it happens only in highly trafficked places or in communities that someone has something against. Reznor notes how moderators can use filters to reduce the effects of trolls (and places like Digg and YouTube do a good job of getting rid of spam and trolls by using collective downvoting to obscure them and render them ineffective), but it's still work to do this, and if the benefits don't outweigh the trouble, you stop doing it.
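The collective-downvoting approach can be approximated, very crudely, as a threshold rule: collapse anything whose net score drops low enough. A toy sketch, with every name and number invented:

```python
# A very crude sketch of vote-based filtering, as described above: comments
# whose net score falls below a threshold get collapsed. All names and
# numbers here are invented for illustration.
HIDE_BELOW = -4  # assumed threshold for collapsing a comment

def visible_comments(comments):
    """Keep the comments worth showing; drop the heavily downvoted ones."""
    return [c for c in comments if c["upvotes"] - c["downvotes"] >= HIDE_BELOW]

thread = [
    {"author": "fan1", "text": "great show last night", "upvotes": 12, "downvotes": 1},
    {"author": "troll42", "text": "(bile)", "upvotes": 0, "downvotes": 30},
]
print(visible_comments(thread))  # only fan1's comment survives
```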

Social media as a whole seems sustainable to me. People really want the ability to connect with others who share some of the same values, preferences, or beliefs, people who may not be available in the real-world social networks they inhabit. But individual online social networks or applications like Twitter and various message boards seem precarious. Some of their appeal might be novelty. Another problem might be the "tipping point" effect when several key members decide to leave or have some real-world commitment that draws them away. As with a real-life party, if a couple of key people leave, that tends to clear everyone else out, even people who hadn't planned on leaving that soon. It's just groupthink, and there are no negative repercussions for bailing on an online social scene.
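That party-clearing intuition can be made concrete with a toy threshold model (in the spirit of Granovetter-style threshold models; every number below is invented): each member leaves once the share of members already gone exceeds their personal tolerance.

```python
# Toy threshold model of the "party clears out" effect described above.
# Each member leaves once the fraction of members already gone exceeds
# their personal tolerance. All numbers are invented for illustration.
import random

random.seed(1)
n = 50
tolerance = [random.uniform(0.02, 0.6) for _ in range(n)]
gone = [False] * n
for i in range(3):          # a few key members leave for real-world reasons
    gone[i] = True

changed = True
while changed:
    changed = False
    share_gone = sum(gone) / n
    for i in range(n):
        if not gone[i] and tolerance[i] < share_gone:
            gone[i] = True
            changed = True

print(f"{sum(gone)} of {n} members end up leaving after the first 3 depart")
```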

It's possible that online social scenes develop at a point when their members have some downtime, in transition periods in their real-world lives. It's not that they're "losers" who can't make it in real-world social scenes (though that might still be the case for many). It's more that they have an appetite for sociability that is underserved at the time they join the scene. So really, members have two things in common: whatever the raison d'être of the scene is, and the fact that they're all in some sort of transition period (which could include a period of identity questioning, hence the popularity with teens). Anyway, these scenes don't last, because the law of averages says that each person's real life will eventually interfere with their participation and the group will splinter.

But perhaps that depends on whether the group is really about the people in it or about whatever the group happens to be "about" (e.g., Nine Inch Nails, funny online videos, hunting, etc.). I guess the latter are more informational exchanges, or opportunities to share amusement over a subject, while the former are something resembling (and perhaps standing in for) real-world social scenes. Real-world social scenes break up, too. People move away, get jobs, have kids, get divorced, etc. But I still suspect that because they are joined during times of real-world social transition and there are no negative repercussions for leaving, online social scenes are more apt to disintegrate (or at least cycle through members) than real-world social scenes. Really, they haven't been around long enough to say one way or the other.

Monday, June 08, 2009

Unoriginality at the multiplex: Franchises are the New "Genres"


Here are some broad trends. I've categorized the top 10 grossing films of several years.

1980: 6 of 10 originals, 2 sequels, 2 based on books
1985: 6 originals, 2 sequels, 2 based on books
1990: 7 originals, 1 sequel, 2 based on comic books
1995: 5 originals, 4 sequels, 1 based on a comic book/cartoon
2000: 7 originals, 1 sequel, 2 based on comic books/cartoons

No interesting trends there. People liked to talk about how Hollywood was infected with "sequelitis," but the numbers don't indicate any significant movement in that direction over those two decades. Then something happens in the last decade:

2001: 3 originals, 3 sequels, 2 based on books, 2 remakes
2002: 3 originals, 5 sequels, 1 based on a comic book, 1 based on a popular musical
2003: 3 originals, 6 sequels, 1 remake
2004: 3 originals, 5 sequels, 2 based on books
2005: 2 originals, 2 sequels, 1 based on a book, 4 remakes/reboots
2006: 3 originals, 3 sequels, 1 based on a book, 3 remakes/reboots
2007: 0 originals(!), 6 sequels, 3 based on comic books/cartoons, 1 remake
2008: 3 originals, 4 sequels, 2 based on comic books/cartoons, 1 based on a book

In many cases, the sequels were sequels of movies that were based on existing properties.
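Tallying just the "originals" column from the counts above makes the break visible; here's a quick sketch that charts the share of each year's top 10 that were originals.

```python
# Quick tally of the counts listed above: how many of each year's top 10
# grossers were originals.
originals = {
    1980: 6, 1985: 6, 1990: 7, 1995: 5, 2000: 7,
    2001: 3, 2002: 3, 2003: 3, 2004: 3, 2005: 2,
    2006: 3, 2007: 0, 2008: 3,
}
for year, n in originals.items():
    print(f"{year}: {'#' * n:<7} {n}/10 original")
```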

Of course, this is a reflection of what people want to see and of what they are presented with. Whether it's one or the other is, for the point I'm making, beside the point. I'm claiming that this is not a temporary trend. This is cinema (creators and consumers) obeying a fundamental law of economics. There are other media through which producers can distribute motion pictures to consumers (namely cable TV and the internet). They can also share stories via books, as always. Now, if you were a bank and you were going to fund a major motion picture, which costs tens of millions to create, distribute, and promote, you would want to be as sure as you could be that the movie would be a hit. An established star is one way to bolster your odds, as is a director or writer with a proven track record of hits. But what about the story or the premise? Ideally, you'd want to be able to test it out for a smaller sum of money. And that's what we're able to do now. When you make a film out of an existing property, be it a cartoon, a novel, or an older film, you attract an audience who believes the film will be similar to the existing property, and you have evidence that the story or the premise will resonate with people.

It's a little odd that it didn't happen sooner. Why weren't all movies tried as novels first? Maybe because some stories would only work on the big screen as spectacle. But now, with the internet and many more TV channels, you would have to be a bit daft to bankroll an unproven story as a film. Why not make it into a miniseries on TNT or a novel first, see how it does, and retain the motion picture rights?

I would suppose that many cinephiles lament the lack of originality in mainstream cinema (if they care anymore about anything "mainstream," that is). But are these remakes, reboots, and sequels really any less original? Do we judge originality by a title? Couldn't a non-sequel thriller be less original (that is, more similar to its predecessors) than a sequel? I think this is possible and has been the case in some instances. Really, franchises are the new genres: boundaries within which various artists work.

What to do with CGI films? Are they a genre? Two companies dominate: Pixar and DreamWorks. They employ many of the same creative people and use a lot of the same dramatic tropes. More importantly, I feel like audiences treat them more like a series of films and less like a genre. In terms of number and "quality," they are more like movies in a series than films in a genre: there are few of them, and they are of relatively uniform quality.

This is just the top slice of cinema, too. There are plenty of "original" stories lower down the charts, though again, I would question the idea of originality. A horribly formulaic strain of indie film has evolved that, I would argue, is, as a group, no more original by any definition of that term than the bulk of franchise films.

We needn't lament the fact that more hit films aren't fantastically original, the way they were in, say, the '70s. There are still great, original stories being told with moving pictures, but they aren't being told on the big screen. This is what should happen, economically speaking. Cinema no longer holds the place it did 30 or 40 years ago, when it was essentially the only place to go for amazing, engaging stories. Once the internet ramps up as a distribution platform for video, cinema will be even less like the cinema of yore. Get over it.

Friday, June 05, 2009

What kind of music do you like (right now)?


In keeping with my habit of making broad generalizations based on my personal experience with media...

As I was assembling a playlist for an upcoming road trip, I was thinking about the kinds of music I would want to listen to, but also, acknowledging the social nature of most media consumption, what kind of music the people I'll be traveling with would want to listen to. Naturally, I thought in terms of genre. I'm pretty sure these guys don't like metal much anymore (if they ever did), which is a shame, because I do. Then I thought about my answer to that classic get-to-know-you question, "What kind of music do you like?", and, of course, my answer would be the typical evasive one: "lots of kinds, pretty much everything."

If you looked at my music collection, you would find many different genres from different eras and different places around the world well represented. But that doesn't mean I'd want to listen to any of it at any given moment. Our media choices are governed by long-lasting preferences (I've liked metal since about 9th grade) as well as short-term moods (I'm not in the mood for metal right now). Here's my theory: as music collections expand due to the falling monetary value of songs (thanks to Napster, torrents, and all that shit), long-lasting preferences broaden and explain less and less of why anyone wants to listen to a given kind of music at a given time. As choices expand, mood and immediate context play a greater role in determining what you will choose.

But it's tougher to know what kind of music you're in the mood for than to know that you like rap or hate country. I've tried relabeling my music according to mood (so there are rap songs and metal songs that are both labeled "energetic," and classical and rock songs that are labeled "melancholy"), and occasionally that helps me find music that suits my mood and feels right, but most of the time I find myself cycling through my shuffle until something clicks.
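The relabeling amounts to keeping a second index over the library, keyed by mood instead of genre, and shuffling within it. A toy sketch (titles and tags invented):

```python
# Toy version of the mood relabeling described above: a second index over
# the library keyed by mood instead of genre. Titles and tags are invented.
import random

library = [
    {"title": "Track A", "genre": "metal",     "moods": {"energetic"}},
    {"title": "Track B", "genre": "rap",       "moods": {"energetic"}},
    {"title": "Track C", "genre": "classical", "moods": {"melancholy"}},
    {"title": "Track D", "genre": "rock",      "moods": {"melancholy", "energetic"}},
]

def playlist_for(mood, songs):
    """Shuffle only the songs tagged with the current mood."""
    picks = [s for s in songs if mood in s["moods"]]
    random.shuffle(picks)
    return [s["title"] for s in picks]

print(playlist_for("energetic", library))
```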

The way we engage with music changes when options become plentiful. Choice increases as production and distribution costs fall. It happened with music first, but the trends you see there will happen with all other media. When you have all of those options, you can't rely on your identity as much to determine what media will satisfy you. You can't just say to yourself, "I like this kind of music, or that kind of TV show, or that kind of news, so that's what I'll choose." Something happens to our decision-making process when we have abundant, diverse options. I'm not quite sure what it is (experiments to follow, I hope), but my hunch is that we want to cede control to something else. Shuffle is one thing. Search engines are another. We're wary of being controlled, but we experience so much uncertainty and regret after choosing something when there are too many other options that we want our choice to be restricted.

Sometimes, we do know what we're in the mood for, but those moods and those preferences become more diverse given more and more choices.

Wednesday, June 03, 2009

The Problem with False Consciousness (and false desire and false choice)

There is a strain, one might say a dominant strain, of cultural theory (derived from Marxist theory) that claims that individuals immersed in a culture, exposed to certain information via media while other information is kept from them, are unable to know the truth about how the world works and thus make decisions that are not in their collective or individual best interests but rather in the best interests of those in control of the information flow. On the face of it, the theory of false consciousness seems possible, even likely. But here's the rub: the theory itself is just another way of looking at the world, supplied and supported by individuals with interests of their own, some of which run counter to those of the people reading about the theory. It is possible that those who expose others as having pulled the wool over our collective eyes are, in fact, pulling a different kind of wool over our eyes. The new illusion of "seeing the world as it really is" is all the more convincing given the revelatory nature of the theory. How do we know it is not another illusion, one more pernicious than the last? We don't, and most cultural theory provides little evidence one way or the other as to whether it is just another biased view of human nature. Are we naturally competitive or naturally cooperative? Are corporations and advertisers in charge of telling us what to desire, or are charismatic leaders/writers/artists the ones pulling the strings?

I like to use two movies from 1999 as convenient illustrations of false consciousness and (if you'll pardon the unwieldy double negative) false false consciousness. The Matrix is a classic good-versus-evil story of false consciousness. A handful of good guys need to clue everyone else in to the fact that they are not acting in their own best interests, but are rather part of an elaborate illusion that serves the interests of a controlling "other." Fight Club issues a similar indictment of mainstream culture, albeit in a less metaphorical, more literal manner. However, the ragtag rebels who fight The Man inevitably coalesce around a charismatic leader who, as it turns out, is insane. Groupthink develops, critical thinking goes out the window, and the group of rebels is even more lost than when it began. I am heartened by the fact that popular cinema can still address (and prompt audience members to debate and think through) important socio-politico-philosophical issues of the day. Simply as teaching tools, these movies can liven up a dreary classroom discussion about free will and hegemony.

As valuable as fiction is in helping us understand our socio-political reality, it can only take us so far. To really understand things, we need evidence. Most of the crit/cult theory I've read cites cherry-picked instances of people deprived of infinite choice and freedom and/or kept at a subsistence level of wealth while those in power stay in power via hegemonic, patriarchal culture. The tacit assumptions are that information dissemination - in the form of popular culture, education, and other cultural institutions such as the church, the government, or news agencies - is part of the root cause of power imbalances, and that the world could be otherwise (i.e., equality is possible given human nature).

To further interrogate the line of reasoning behind false consciousness, let's take an ordinary claim. Let's say you think someone who just spent $5,000 on a new paint job for his car but lacks the money to pay for his child's health care, college education, or nutritious diet has somehow been conditioned by culture to value some goods (e.g., car paint jobs) over others (school, food, health). How can we, as theorists, step in and say that this person is no longer capable of making decisions for himself? I suppose the crit/cult theorist also assumes that in the long term, the individual and the group he or she is a part of will suffer. His child will be more likely to fall ill or to earn less money without health care, a healthy diet, or better schooling. As a group, they will have fewer opportunities. They will live shorter, harder lives - something (and this is crucial) we can all agree is undesirable. If only they saw the connection between their consumption of culture and the long-term undesirable consequences, they would alter their behavior, rise up against their oppressors, and alter culture.

For this to happen, you need to establish that some conditions are objectively undesirable. Is living a shorter life objectively undesirable? Not necessarily. Is being able to retire at an early age if one so chooses objectively desirable? Sure. Even if we reject the notion that a person's worth should be judged solely in monetary terms, we can accept that the systematic impoverishment of a people is undesirable. So how do we draw a connection between behavior that gives pleasure in the short term (getting that $5,000 paint job) and long-term displeasure (impoverishment) in a way that a) everyone can understand and b) does not elevate the theorist to the position of truth-teller?

You need to look for instances in which people who hold one opinion about how the world (or some small part of it) works revise that opinion based on information presented to them. We have the dual opposing influences of authority (e.g., the news media and the scientific community, both of which were grossly mistaken about human nature in 1930s Germany) and upstart revolutionaries (who are at least as likely to be corrupted by power and get things wrong - see the Great Leap Forward, an extension of what was, at the time, revolutionary thought). Charisma and authority go a long way toward swaying people on big issues like human behavior and the economy, but what about small, manageable issues like, say, the length of two lines?
Maybe this is a shitty example because we're all very familiar with the illusion. My point is that we initially see the lower line as longer than the top line. If, however, an authority were to come along and, before our very eyes, remove the diagonal lines at the ends of each line, we would see, with our own eyes, that the lines are of equal length. Now, is this proof that the lines are of equal length? Absolutely not. You could get out your micrometer and say, "Actually, the top line is shorter than the bottom one." But actual, physical reality isn't what concerns me (actually, I think it is indeterminate, but that's another blog entry). What I'm interested in are the patterns of people's behavior, specifically what precipitates a revision of worldview. In most cases, people will believe that the top line is shorter than the bottom one until you remove the diagonal lines, at which point they will think the two are of equal length.
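I never name the figure above, but it's presumably the Müller-Lyer illusion; here's a quick matplotlib sketch (my own reconstruction, not from the original post) that draws the two equal-length lines with their misleading fins.

```python
# A reconstruction of the illusion discussed above (presumably the Müller-Lyer
# figure): two lines of identical length, one with fins folded back over the
# line (it reads as shorter), one with fins flaring outward (it reads as longer).
import matplotlib.pyplot as plt

def line_with_fins(ax, y, length, fin_dir, size=0.4):
    """Draw a horizontal line at height y with diagonal fins at both ends.
    fin_dir=+1 folds the fins back over the line; fin_dir=-1 flares them outward."""
    ax.plot([0, length], [y, y], color="black")
    for x, d in [(0, fin_dir), (length, -fin_dir)]:
        ax.plot([x, x + d * size], [y, y + size], color="black")
        ax.plot([x, x + d * size], [y, y - size], color="black")

fig, ax = plt.subplots()
line_with_fins(ax, 2, 4, fin_dir=+1)   # top line: tends to look shorter
line_with_fins(ax, 0, 4, fin_dir=-1)   # bottom line: tends to look longer
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```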

This example reveals a different kind of false consciousness, a short-term false consciousness. It all happened in front of our eyes. We can acknowledge that we were mistaken. We thought our information about the situation was complete and accurate, but in retrospect, thanks to the revelation from the authority figure, we know that it was not.

Those of us studying culture, information, and media and how they relate to freedom, happiness, well-being, and choice need to make our claims about false consciousness more like this. We need to make the connections between short-term pleasure and long-term displeasure more obvious, more indubitable. Impossible, you say? Bollocks! We've got exponentially more data about people's shifting desires than we have ever had (and by "we" I mean the public, though if we're not careful, it might all end up in the hands of a fortunate few. That really would be hegemony).

It's a hard thing to acknowledge that we're not very good at predicting what will bring us long-term pleasure, as individuals and as groups. When we're wrong, we look for scapegoats (The Man, the government, the media, etc.), and sometimes we're right to, but other times we simply made bad decisions based on imperfect information about the connections between those decisions and long-term loss. How do you convince a person that his desire for a $5,000 paint job was the result of a culture intent on keeping him down? You lay bare the mechanisms of culture, not in some vague way, but in a concrete, indubitable way that shows how all people (not just a gullible few) are capable of being misled when presented with certain kinds of information.

To that end, I'm proposing a new research project (featuring testable hypotheses): Is the abundance or restriction of media choice associated with a greater discrepancy between gratification sought and gratification perceived? You can argue with someone else's definition of gratification (for you, it might be having a big house; for someone else, it might be having a car with a sweet paint job), but you'd be hard-pressed to find a person who would argue with their own definition of gratification.
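One way the hypothesis could be tested, sketched with entirely hypothetical data: collect per-person ratings of gratification sought and gratification perceived under an abundant-choice condition and a restricted-choice condition, then compare the discrepancies. The file and column names below are invented.

```python
# A sketch of one way to test the hypothesis above, using hypothetical data:
# per-person gratification-sought and gratification-perceived ratings under
# an "abundant choice" vs. a "restricted choice" condition.
import pandas as pd
from scipy import stats

df = pd.read_csv("gratification_study.csv")   # hypothetical file
df["discrepancy"] = df["gratification_sought"] - df["gratification_perceived"]

abundant = df.loc[df["condition"] == "abundant", "discrepancy"]
restricted = df.loc[df["condition"] == "restricted", "discrepancy"]

# H1: the discrepancy is larger when choice is abundant.
t, p = stats.ttest_ind(abundant, restricted, equal_var=False)
print(abundant.mean(), restricted.mean(), t, p)
```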