Tuesday, December 20, 2011
Over-sharing, time dilution, and the weaponization of words
What really concerns many digital immigrants are the consequences of over-sharing. Drunk status posts live forever in the digital archive of the internet, preserved word for word, undistorted by individual memory. Most conversations about this topic concern employment or elections: one's past indiscretions come back to haunt those seeking work or public office. It's interesting how conversations about reputation that once applied only to celebrities, politicians, and reality TV stars now apply to all of us. We can all be publicly disgraced and publicly redeemed.
The argument against sharing assumes that collective, disembodied digital cultural memory operates like pre-digital cultural memory and pre-digital individual memory. But I wonder if there might be some differences that aren't being accounted for. There might be a kind of dilution of the past going on. Perhaps each bit of information about the past matters less because there's simply so much of it. We judge gaffes the same way we judge everything else: by how rare they are. People also speculate that more information shrinks our attention span and with it the duration of the news cycle, so that people move from topic to topic more quickly. We might move through the cycles of admonishment and forgiveness more quickly as well.
But maybe the critics of over-sharing are right. Maybe each person who has tweeted something stupid or left a stupid status update is instantly and permanently discredited by some for doing so. In a market where so many people are equally accessible online, employers, voters, consumers of entertainment content, and even online daters can discard people for single infractions because there's always a comparable replacement who hasn't said anything stupid (yet). Dismiss everyone who has tweeted something stupid and you'll be dismissing a lot of people, but maybe that's not such a problem when you've got so many people to choose from.
There's another way in which over-sharing might be changing the nature of conflict. If you tweet something potentially embarrassing and it lives forever, does that mean that everyone's words can be used as weapons by their enemies against them in the future, the way it happens with politicians now? This might just be the next frontier of human hostility: searching people's pasts and using what they've said as a point in an argument against them, context be damned. Maybe we could only forgive because we could forget, because our memory of ugly, hostile behavior softened with time. Now that we can't forget, now that every ugly, hostile remark is retained, we'll stay mad at people and never give them another chance. We've started to get cynical about politicians because they were the first to have every one of their moves recorded and re-broadcast out of context. Soon, when this happens to many of us, we'll become cynical about human nature.
Here, I must make a point similar to one I've made about privacy: digital social media gives people ammunition to hurt others, but it doesn't create the will to use it, nor does it preclude rules, laws, or norms that prohibit using past or private information against people. The preservation of information doesn't necessarily mean that anyone will dig it up and broadcast it in the future. Someone has to care enough about finding the information to search for it and have some reason to defame another person. So maybe you can go on sharing as much as you'd like on Twitter and Facebook. Just don't get on anybody's bad side, lest they go to the trouble of digging up something stupid you said years ago and using it against you. And if they do, know how to fight back in the ways pioneered by the frontiersman of character assassination, the political operative: dig up something on your opponent or dismiss their efforts as "mud-slinging".
The outlandishly stupid tweets used by those making the over-sharing argument are rare and, so long as these "be careful what you post" stories are out there, liable to become rarer still. I just don't see people cutting back on sharing, despite the fact that the preservation of day-to-day sharing can make any heartfelt sentiment seem stupid when taken out of context. If people are predisposed to disliking a person, they'll take a questionable post from the past as evidence to support their dislike. If they are predisposed to liking the person, they'll shrug and say "so what if a person tweeted something embarrassing 5 years ago? Who cares?"
Many assume the words and pictures that preserve and transport our past acts have a certain power, but our view of just how powerful they can be is distorted by our experience with how they were used in the previous age. Before print, television, and film, these things were ethereal and hard to use against anyone or for advancing any agenda. During the broadcast era, they could be used to convince a great many people of a great many things. I imagine that many were initially inclined to the "sticks and stones" way of thinking about preserved, portable information. After some rough lessons, they came to see that the pen could be mightier than the sword. Underlying all of these shifts in the way words and images convey our pasts are conflicts and allegiances that revolve around scarce resources. The power of words to advance any individual or group's agenda depends on their permanence. The print era, with its centralized authority, taught us the power of words and images to shape our view of the past and of individuals and their reputations, but we shouldn't assume that words and images will be just as powerful now that the power to create and distribute them is no longer in the hands of the few.
Saturday, November 05, 2011
Using Anonymous
Thursday, September 29, 2011
Guilty Pleasures and Shameful Pleasures
My sense is that Facebook has a very high use-to-liking ratio. People spend more time on Facebook than on the next five most popular sites combined, and many people spend more time on it than on things they profess to love. People often say that they're "addicted" to it. Of course, this is an exaggeration, but the tendency of people to say that they're addicted to something, or to spend a lot of time on something they don't claim to like all that much, is interesting. What might be even more interesting is if Facebook use, or similar high-use/low-liking leisure activities, is displacing activities that people say they really like. Imagine that you are a fan of a TV show. You've got a lot of unwatched episodes on DVR, but instead of watching them, you spend more time on Facebook.
There might be many reasons for this. One is the quantity of content. Facebook, much like the internet itself, provides a seemingly unending stream of novel content, something TV shows can never keep up with. Second, you can always watch the shows later, but the news on Facebook loses its freshness and its value quickly. Also, you just don't want to be out of the loop. This brings up another argument about habitual Facebook use. No one says that they're fans of talking, or fans of phones, or parties. It's just something that people do. This is why it's important to think of Facebook use, or any other high-use/low-liking activity on the rise, in terms of displacement. Is it substituting for other kinds of talk, like face-to-face communication, or is it substituting for things that people say they like, that they say they want to do more of but can't find the time for? This is the difference between an enjoyable pastime and a compulsive time-suck. It also might be the difference between a guilty pleasure (something that you feel bad about because you don't want to be doing it) and a shameful pleasure (something that is frowned upon in your culture but that you really like doing).
In conducting my research on why and when people choose guilty media pleasures, I think this is a crucial distinction to make. You can be a fan of Real Housewives of New Jersey and people might refer to it as a guilty pleasure, but as long as you truly and honestly are a fan, then we're talking about something that violates society's values, not your own. It would be interesting if the guilty pleasure were displacing the shameful pleasure.
Thursday, September 22, 2011
A Perfectly Social World
It feels like a similar situation now with a supposed re-design of Facebook on the horizon. This article speculates that Facebook will put an emphasis on passive sharing: rather than signifying that you like something by clicking the "like" button or posting a link on your profile or on someone else's, you will just go about your web browsing and other people will see it (or, more likely, just the parts of it that you want seen). Let's just assume, for a moment, that Facebook does something like this.
The first gut reaction is that it's too much of an invasion of privacy, and I'm sure people will write tons about this angle if this ends up happening. But it's more interesting to think about why passive sharing might be appealing and what it might feel like to live in a world where more moments of every day are shared and social. So, start out by imagining there's a magical switch that is thrown each time you browse something that you don't want certain people to know about. It filters out exactly the people you want to keep from knowing what you're doing, and it does so without you needing to actually do anything. If this existed, what would be the appeal and effect of increased passive sharing?
As I read articles for my research, design experiments, read the NYTimes, watch ESPN, or go to reddit, I have an inner monologue, sometimes an inner dialogue, a kind of hypothetical conversation about what I'm reading or writing. Heavy posters on Facebook or Twitter (or even heavy texters) have taken to transcribing this inner mono/dialogue so that it can start a conversation, and they can do this at any place or time. But passive sharing doesn't necessarily initiate social interaction of any kind. It might act as a pretext for conversation ("I saw that you were reading that article I read yesterday. What did you think of it?") or we might just use it as a more finely tuned means of social comparison than seeing what people actively post about themselves. You run out of new actively posted items to look at on Facebook pretty quickly, but I doubt you'd ever run out of passively shared activity to look at. Plus it seems like a more "honest" look at how people really are, not just the happy, shiny selves they present in their pics.
Since I'm in dissertation mode right now, thinking about one theory and how everything fits (or doesn't fit) it, I'm thinking about this in terms of choice, value, and delaying gratification. You could always use Facebook and other social media (even a phone) to have a social experience, but for people to connect with you or even to see what you're doing, you had to take some initiative. The people taking initiative - the frequent posters, the tweeters, etc. - may not have been all that relevant to you, in terms of your mood or your pre-existing social connection to them. But what if you always had the option of having an interesting conversation with someone you wanted to converse with about something you wanted to converse about? If that option is sitting there in that little rectangle of light you're staring at right now, you would probably choose it over most other activities.
It's been said that we're social animals, that all humans need social interaction and that society grows from this. But all social interaction has been embedded in the rules and systems of culture and physical space. You were surrounded by people, but they were people whose personal lives you didn't care about all that much. It's interesting to think about a world in which you could always look over, see what your friend is doing, and strike up a conversation about it.
Wednesday, September 21, 2011
The Two Facebooks
As I posted before, Facebook's appeal depends to some extent on the "freshness" of the information presented in the feed. There are probably thousands of hours' worth of "content" available on most people's Facebook pages. If I considered all the updates from all of my friends as the total information available on Facebook, I (like most people, I would assume) have seen very little of it. The value of each little bit of information about my friends depends on two things: its recency (wouldn't I rather know what happened to my friends within the past few weeks than know what they were doing last year?) and its relevance to me. Facebook's privileging of "top stories" over "most recent" may be an attempt to steer users toward more relevant information.
They haven't taken away the "most recent" option. Instead, they've turned it into a ticker and put it on the side of the page. Really, they're just preventing users from opting out of seeing the information it deemed to be "top stories" by simply clicking on "most recent". It's interesting to consider the differences between the "most recent" and "top stories" experiences of Facebook. It's likely that people are more apt to merely read "most recent" news and not to actually post anything about it. Facebook has an interest in getting people to post and interact more and be less passive about the experience. That gets them more involved and attached to the application, more "embedded" in some sense. More interaction also gives Facebook more data on users. They can't track what you're looking at when you're scanning "most recent", but they can track posting patterns and use that data to make the "top stories" even more relevant, more satisfying, and better at keeping people on the site.
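To make the tradeoff between the two rankings concrete, here's a minimal sketch of the two logics in Python. The posts, the interaction counts, and the weights are all invented for illustration; this is a toy model of scoring by recency and relevance, not a claim about Facebook's actual algorithm:

```python
# Hypothetical posts. "interactions" is a stand-in for relevance:
# how often you engage with that friend.
posts = [
    {"author": "close_friend",  "age_hours": 30.0, "interactions": 120},
    {"author": "acquaintance",  "age_hours": 0.5,  "interactions": 2},
    {"author": "old_classmate", "age_hours": 4.0,  "interactions": 15},
]

def most_recent(post):
    # "Most recent" logic: newer is better, relevance is ignored.
    return -post["age_hours"]

def top_stories(post, w_recency=10.0, w_relevance=1.0):
    # "Top stories" logic: blend recency with relevance.
    # The weights here are made up for illustration.
    recency = 1.0 / (1.0 + post["age_hours"])  # decays as the post ages
    return w_recency * recency + w_relevance * post["interactions"]

print([p["author"] for p in sorted(posts, key=most_recent, reverse=True)])
# ['acquaintance', 'old_classmate', 'close_friend']
print([p["author"] for p in sorted(posts, key=top_stories, reverse=True)])
# ['close_friend', 'old_classmate', 'acquaintance']
```

The point of the sketch is just that the more interaction data the site collects, the better it can tune weights like these, which is exactly the incentive to get users posting rather than passively scanning.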
"Most recent" is really a way of using Facebook for social surveillance, not as a venue for interacting with close ones remotely (which is significantly more valuable a service). Some people may have become accustomed to using Facebook to see the news of people that they really wouldn't classify as close friends, even if Facebook gave them the chance to parse their friends into groups more easily. Maybe they were using it for downward social comparison or "stalking" people, and asking them to create a group of people they like to gawk at and not interact with breaks some sort of spell, makes people more aware of their inner voyeur. In this way, this particular user backlash might be about preserving the "mis-uses" of Facebook, not as a tool for better communication but as a way to look at people without being looked at.
Thursday, August 25, 2011
"It depends what you mean by sex": Google and Language
While researching the effects of stress on learning, I decided to look at differences between the ways male students and female students reacted to stress. I went to Google Scholar and typed in "gender learning college stress" or similar variants. While doing this, I recalled that there is a difference between gender and sex, or rather, that the definitions of the two words are contested. Ultimately, I think I'm interested in a social role rather than in the levels of estrogen and testosterone one possesses.
What if I were more concerned with the biological trait and violence, and I assumed (probably wrongly) that academic writers made a distinction between gender roles and biological sex? Then I'd have to google "sex violence", which, even in the relatively porn-free ecosystem of Google Scholar, would yield plenty of irrelevant results related to the act of sex, not the characteristic of sex. Really, it's just a homonym problem, but it's interesting to consider the role this problem plays in debates like the one over the definitions of "sex" and "gender".
Perhaps in the future, searching won't be entirely contingent on individual words free of any context. Maybe search will get smarter about what we want. But in the meantime, we're in a world in which words (or names, for that matter) are at a distinct disadvantage if they refer to too many different concepts or people. Of course, the people who started the debate over the meaning of the word "sex" or the word "womyn" or many other words weren't thinking about the impact of Google on the efficacy of their intervention. Again, it's hard to say how long we'll be living in the world of context-less word searches, but if you're banking on it being around for a while, it pays to use unique terms.
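A toy example makes the homonym problem concrete. This is a minimal, hypothetical sketch in Python (the "abstracts" are made up); it shows how a context-free keyword match lumps both senses of "sex" together while missing the paper that opted for "gender":

```python
# Made-up abstracts: the first uses "sex" as a biological characteristic,
# the second uses it to mean the act, the third uses "gender" instead.
abstracts = [
    "sex differences in violence among college students",
    "depictions of sex and violence on primetime television",
    "gender roles and violence under stress in college settings",
]

def keyword_search(query, docs):
    # Context-free matching: a document is a hit if it contains every
    # query term, regardless of what sense each word is used in.
    terms = query.lower().split()
    return [d for d in docs if all(t in d.lower().split() for t in terms)]

# Returns the first two abstracts (both senses, indistinguishable)
# and misses the third, which chose the unique term "gender".
print(keyword_search("sex violence", abstracts))
```

Nothing in the matcher can tell the two senses apart; only a unique, unambiguous term gets a document reliably found or reliably excluded.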
This leads to another problem, one that has frustrated me for years: the tendency of scholars to come up with yet another term for a concept that is subtly different from another concept we already have a term for, just so they can have an "original" concept to base their career and reputation on. The reverse happens, too: theorists hijack each other's terms and re-brand them, much to the confusion of students everywhere. There's a point at which this behavior leads to a breakdown in communication: are you talking about "priming" or "priming"? (or perhaps "priming"!)
The evolution of language has always been messy, and while Google does bring a kind of order to our information environment, it may not be doing wonders for our languages.
Sunday, August 21, 2011
Rechargeable Value
In trying to think through how I will have media users rate the immediate gratification value of their media selections in scheduled and un-scheduled choice environments, I've run across some artifacts that make it difficult to re-schedule certain media experiences in people's lives and still have them be enjoyable or valuable. For instance, if you insist that people only send text messages between 3 and 5 PM each day, they may not get much value out of text messaging at those times because the motivation to send those messages was time-sensitive. They needed to re-schedule an appointment or make arrangements for dinner that evening. Still, I think we tend to over-estimate the time sensitivity of the value of such communiques (including online chatting). Would you really feel deprived if you had to wait a few hours before learning that someone cared for you, or wanted to make plans for the next day, or before learning a bit about how someone's day went? Probably not, but we've gotten so used to being able to message whenever we'd like that we don't see a reason to change.
Wednesday, August 10, 2011
"The Thug Finding the Gutenberg Press"
What is the role of digital media in cases of social disorder, be it a riot or a revolution?
The answer on the tip of every social media guru's tongue is that social media - text messaging, group messaging, social networking sites - make organizing protests and mass looting easier and faster. Without these tools, the argument implies, the young men (and it is mostly young men, for what that's worth) would not overthrow the government or burn down the business. Another argument states that it is the way the rulers behave (austerity measures, police brutality, and a gap between rich fat cats and the proletariat) that inevitably leads to this kind of behavior.
Both of these arguments seem unsatisfying to me. Riots and revolutions happened without social media and there have been plenty of nations throughout history that had huge disparities between ruling classes and non-ruling classes that didn't erupt into violence for long periods of time. Perhaps both of these things contribute to the likelihood of these events, but perhaps there are other ways in which the new media landscape contributes to this likelihood.
Media content, be it what we see on television, what we read on our favorite site for news, or what we see in our Facebook feed, influences our ideas of what is normal in society or in the sub-segment of society to which we believe we belong, which in turn affects our actions. By framing a certain behavior as more or less normal, a message sender can affect the behavior of the message receivers. It's possible that various kinds of coverage of social unrest (both positive and negative) frame it as something that angry young men at a certain place and time do, reinforcing a kind of norm to engage in civil disobedience, violence, or destruction of property. Instead of relying on the depictions of protesters, freedom fighters, and rioters that the mainstream news gives us, we can get a first-hand look at them on social media sites. Even if only 5% of the population goes to these sources instead of MSM for coverage of the unrest, if it's the right 5% (i.e. the 5% inclined toward real-world action), it matters. Perhaps this gives readers the impression that these aren't just objects on a screen to be watched, but people who are similar to the readers, who could interact with the readers. Maybe that makes identifying with them easier.
The panoply of opinions and ideologies on the internet also makes it easier to find justification (and a group that makes your thoughts about behaving in a certain way seem more normal). I think this gets lost in the discussion of how easily social media facilitates the logistics of social unrest. You may start out with anger, but if that anger can't find justification, it's unlikely to manifest itself in action. Sure, a mind sufficiently detached from reality can find a justification pretty much anywhere, but even those with a firm grip on reality can now find reasons to act in ways they couldn't when the messages were manufactured by people with too much to lose to advocate civil disobedience, violence, or property destruction.
Perhaps the default sentiment toward authority in complex societies is anger, an instinct that we feel from perceiving that we do not have much control over our fate. But that anger gets channeled into avenues other than civil disobedience, violence, and property destruction when we can't find justification or a group performing these actions to make them seem more normal.
(quote from Mike Butcher, TechCrunch Europe)
Wednesday, July 20, 2011
Puppies & Iraq
I just saw Page One, a documentary about the New York Times, which raised some interesting (if oft-repeated) questions about journalism that come along with the financial instability of the industry: is there something about a traditional media outlet like the NYTimes that is superior to the various information-disseminating alternatives (news aggregation sites, Twitter, Facebook, Huffpo, Daily Kos, Gawker, etc.) and, if so, what is it? What is it about the New York Times (or the medium of newspapers in general) that would be missed if it were gone?
Bernard Berelson asked a similar question in a study of newspaper readers who were deprived of their daily paper by a workers' strike in 1945. The needs the paper met back then - social prestige, escape or diversion, a welcome routine or ritual, information about public affairs - are all met by various other websites and applications, some of which seem to be "better" - that is, more satisfying to the user - at one or all of these things than any newspaper is.
I want to pick apart this idea of what is "more satisfying" to the user, or what it means to say that they "want" something. The mantra of producers in the free market, no matter what they're selling, is that they must give the people what they want. Nick Denton of Gawker has a cameo in Page One in which he talks about his "big board", the one that provides Gawker writers with instant feedback about how many hits (and thus, how many dollars) their stories are generating. Sam Zell, owner of the Tribune media company, voiced a similar opinion: those in the information dissemination business should give people what they want. Ideally, you make enough money to do "puppies and Iraq" - something that people want and something that people should want. To do anything else is, to use Zell's phrase, "journalistic arrogance".
Certainly, a large number of people are "satisfied" with the information they get from people like Denton and Zell. But Denton and Zell, like any businessmen, can only measure satisfaction in certain ways: money, or eyeballs on ads. There are other costs, often long-term and social, paid when people get what they supposedly want. When news is market-driven, the public interest suffers. So goes the argument of many cultural theorists. But who are they to say what the public interest is? Why do we need ivory tower theorists to save the masses from themselves?
Maybe that elitist - the one who would rather read a story about Iraq than look at puppies - is not in an ivory tower but inside all of us, along with an inner hedonist (the one who would rather look at puppies all day). There are many ways to measure what people like, want, need, or prefer. I'm not talking about measuring happiness as opposed to money spent or earned. I'm considering what happens when we're asked to pay for certain things (bundled vs. individually sold goods) at certain times (well in advance of the moment of consumption vs. immediately before it). There is plenty of empirical evidence to suggest that those two variables, along with many other situational variables external to the individual, alter the selection patterns of individuals. Want, or need, or preference does not merely emanate from individuals. When we take this into account, we recognize that a shift in the times at which individuals access options, and in the way those options are bundled together, ends up altering what we choose.

We click on links to videos of adorable puppies instead of links to stories about Iraq because they're links (right in front of us, immediate) and because they've already been paid for (every internet site is bundled together, and usually bundled together with telephone and 200 channels of television). If it weren't like that, if we had to make a decision at the beginning of the year about whether we "wanted" to spend all year watching puppy videos or reading about Iraq...well, I guess not that many people would want to spend all year reading about Iraq. But I reckon that many people would choose some combination of puppies and Iraq if they had to choose ahead of time. The internet is a combination of what we want and what we should want, and so is the NYTimes, but they represent a different balance between those two things. The Times is 100 parts puppies, 400 parts Iraq. The internet is 10000000000 parts puppies, 100000000 parts Iraq (or something to that effect). When you change how things are sold, you may not change what people want, as many theorists claim, but you do change how we measure what people want.
Maybe we never have to defer to a theorist to tell us what we should be reading or watching in order to be a better citizen. Maybe we just need to tweak our media choice environment so that it gives the inner elitist a fighting chance against the inner hedonist.
Tuesday, July 05, 2011
Restricted Access
Monday, July 04, 2011
The Ethical Issues of Analyzing Time, Desire, and Self-Control
Monday, May 30, 2011
The other privacy setting on your Facebook account
After attending another stimulating International Communication Association conference, I've been thinking a bit more about the issue of privacy and everyday use of social media such as Facebook. It's one of those issues that seems to interest nearly everyone: not just theorists, but teachers, parents, even teenagers.
I have problems with the popular narrative that privacy is a human right and that it is eroding or otherwise disappearing in the age of networked selves (I articulate most of these in this earlier entry). To echo Ron Rice's argument in 'Artifacts and Paradoxes in New Media', I think we just assumed that the level of privacy we knew was the natural state of things. We only tended to think about privacy when it was obviously breached, not about how it was constructed, with the help of older technologies, to hide certain information from certain people. One example of many: having a private conversation required certain kinds of architecture, small rooms in big structures with sound-proof walls. I suppose these technologies preserved some of the privacy we'd grown accustomed to before we were living in tightly packed urban centers. Still, it seems likely that, as individuals and as groups, we gain a strategic advantage in achieving our ends and outwitting our competitors when we can trade information with allies without our enemies overhearing, and that this (along with the simultaneous drive to survey our environment for threats and resources) drives innovation in privacy-enhancing and privacy-destroying technologies. I'd agree that anyone totally robbed of privacy would be at a distinct disadvantage compared to those doing the robbing, and that this imbalance between those with and those without privacy must be avoided. But talking about privacy like it's a fundamental human right implies that it is somehow absolute, or that, once upon a time, we actually had the ability to communicate with many others and keep those communications hidden, or that such a world could or should exist. I don't buy that.
The most compelling way to talk and write about technology and privacy would be to reveal the ways in which technologies unintentionally and subtly enhance or erode personal privacy. This got me thinking about Facebook. Most of the public debate over Facebook and privacy has been about its settings: how customizable they are, how easy they are to use. The mere existence of buttons that you click to change your level of privacy stops many users from thinking about other ways the technology can alter your level of privacy. In particular, I'm thinking of the number of friends one chooses to have. As that number creeps up, the photos and status updates you post gradually become more public and less private. The more gradually this happens, the harder it is to notice. It would be interesting to know the rate at which people acquire (or get rid of) friends on Facebook, whether this rate is associated with how aware they are of their level of privacy, and whether that awareness affects their behavior in any way.
More generally, this is about technologies of self-performance that allow us to set our privacy or "reach" at one point in time, letting us grow accustomed to one imagined audience. After we choose the settings, we think "okay, I know who I'm performing to and I'll tailor my performance to that group", even though social media audiences are always changing in unpredictable ways. Even this blog and others like it are written with one imagined audience, but then that audience changes in ways that would be very difficult for the author to predict, coming to encompass people from various spheres of the performer's life.
As always, changing the technology can help us out of this bind. Maybe there could be a little graphic representation on every social media platform that lets the author know what kinds of people are in the audience and how that is changing over time. It would take into account the current occupation and place on a professional network of each audience member (a color-coded "status" marker) and their social distance from you, based on the frequency of interactions between that person and other people you know. You could probably find that information if you really dug around for it (Google Insights and the like offer good data on web audiences), but what you need is something that is up in the face of the performer, just as it would be in the real world when you enter a room and start speaking. Technology will always change our levels of privacy. Instead of stopping it or trying to reverse it, the best we can do is to make it explicit.
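As a rough illustration of how simple the computation behind such a widget could be, here's a hypothetical sketch in Python. The occupation categories, the closeness threshold, and the audience records are all invented; a real social distance measure would be far more involved:

```python
from collections import Counter

# Hypothetical audience records: an occupation (for the color-coded
# "status" marker) and a crude social-distance proxy - how many
# recent interactions this person has had with you.
audience = [
    {"name": "A", "occupation": "grad student", "interactions": 40},
    {"name": "B", "occupation": "employer",     "interactions": 1},
    {"name": "C", "occupation": "grad student", "interactions": 12},
    {"name": "D", "occupation": "relative",     "interactions": 25},
    {"name": "E", "occupation": "employer",     "interactions": 0},
]

def audience_summary(audience, close_threshold=10):
    # Tally who is "in the room", by occupation and by closeness,
    # so the performer sees the audience before speaking.
    by_occupation = Counter(p["occupation"] for p in audience)
    close = sum(1 for p in audience if p["interactions"] >= close_threshold)
    return {
        "by_occupation": dict(by_occupation),
        "close": close,
        "distant": len(audience) - close,
    }

print(audience_summary(audience))
# {'by_occupation': {'grad student': 2, 'employer': 2, 'relative': 1},
#  'close': 3, 'distant': 2}
```

Surfaced next to every compose box, even a summary this crude would tell the blogger when the imagined audience of friends has quietly come to include a couple of employers.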
Friday, April 29, 2011
Television is not a toaster with pictures, and Facebook circa 2005 is not Facebook circa 2015
Most people have likely formed an opinion about media effects: the degree or type of effect certain kinds of media use have on us. They likely talk about this in terms of media technologies (TV, radio, internet) or types of media texts (violent video games, Fox News). I think that we're using an outdated way of talking and thinking about the question of media effects.
We know that we spend more time using media - at home, at work, on the move - than ever before. And we know that there is a greater quantity of media options (in terms of content options and affordances of the technologies or applications) than ever before. I think that these two facts, by themselves, should prompt everyone (even skeptics) to reconsider the degree and type of media effects, and the ways in which we go about assessing these effects.
First, things have changed on the level of media technology. The question of media effects arose at a time when the types of media (identifiable by their affordances) were limited. Producing and distributing a widely used communication technology that was functionally different from another communication technology took a lot of capital. Once the vast broadband and mobile online networks were established, it became significantly easier to create and distribute applications that, functionally, differ from other communication technologies. Much the way the electrical network permitted growth in the variety of machines in our lives, and the highway and railway systems permitted growth in transportation, this network growth has increased diversity. Before the establishment of the electrical grid, you could make plenty of generalizations about machines and their effects on humanity because the number of widely used machines was quite limited. Afterward, those generalizations didn't make as much sense given the variety of machines that people used. Was the effect of a television the same as that of a toaster? Using pre-electrical-network logic, the answer was yes.
Lumping all the uses of the internet or all the uses of Facebook together creates a similar problem. We still want to use the frames established by scholars and researchers during the 20th century to understand media now. We talk about (and study) the effects of Facebook, of the internet, of texting, in the same way we talked about the effects of television 30 years ago: as if these things were discrete entities. But this approach doesn't make sense in a world in which the media forms to which we give names quickly change in fundamental, functional ways. The internet of 2011 is likely no more similar to the internet of 2000, in terms of its uses and effects on users, than radio (or a toaster, for that matter) is to the internet of either era. Yes, some useful things can be said about the effects of all online experiences, just as some useful things can be said about the effects of the use of all machines (studies of modernity), but most of these general ways of thinking are just reflections of a time when "machines" or "online experiences" were easier to generalize about, before the network made new technologies and texts easier to create and distribute.
There is a similar problem with the way we think about the effects of types of content. If we talk about the possible effects of a particular film or television show, we do so because we believe that the principles at work are generalizable beyond that one text and that one audience. A film scholar writes about their experience viewing a certain film, but implicit in their writing is the assumption that other people will experience the film in a certain way and/or that other people will experience other similar films in a certain way. When the number of texts explodes, as it has done for the past several years, the question of what one particular person (or type of person) does with one particular text reveals less about the overall experience of media consumption. This just wasn't a problem when the number of texts was lower, when readers could be expected to have seen "classic" texts.
This isn't to say that there won't be some media texts that are experienced by large numbers of people over long periods of time. Our thirst for common experiences will ensure the persistence of canons, but these canons will make up a smaller and smaller sliver of our overall media use. Figuring out the effect of the two hours I spend watching Citizen Kane is worthwhile, but what about the effects of the thousands of hours I spend online? How do we go about assessing that? Do we break the texts into genres, like we did before? Do we pretend that Facebook or social networking sites are media technologies in the way that radio or television are media technologies? What happens when Facebook's online movie-watching feature takes off? Will its effect on users be the same as it was 5 years ago? This seems ludicrous, and yet studies that purport to be about the effects of "Facebook" do not take it into account.
How do we adapt? We identify characteristics of media experiences that cannot be made obsolete by developments in technology and we base our theories on the presence or absence of those characteristics. How many people does a media application connect a user to? How often is it used? Is its use planned or spontaneous? What emotions is its use typically associated with? What parts of the brain light up when we're using it? What are the gratifications sought by the users? If you look at technology this way, you can make sense of the effects of television and toasters without having to conflate the two, and without having to re-invent your theories when the next contraption is invented. These approaches to media effects seem suited to an era in which there will be too many different texts and media technologies for any human to keep track of.
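To illustrate what basing theories on those characteristics might look like, here's a minimal sketch in Python. The field names and example values are my own invention, not an established coding scheme:

```python
from dataclasses import dataclass

@dataclass
class MediaExperience:
    # Technology-agnostic characteristics of a media experience,
    # chosen so they survive the obsolescence of any one platform.
    people_connected: int       # how many others the use connects you to
    uses_per_day: float         # how often it is used
    planned: bool               # planned use vs. spontaneous use
    typical_emotion: str        # emotion typically associated with use
    gratification_sought: str   # e.g. "diversion", "surveillance"

# The same schema describes uses of very different "technologies",
# so television and toasters (or Facebook circa 2005 and circa 2015)
# never have to be conflated or theorized from scratch.
feed_scanning = MediaExperience(300, 12.0, False, "mild interest", "surveillance")
movie_night = MediaExperience(2, 0.1, True, "absorption", "diversion")

print(feed_scanning)
print(movie_night)
```

A theory keyed to these fields applies to whatever contraption comes next, because the next contraption still gets coded on the same dimensions.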
Sunday, March 20, 2011
Ironic Liking vs. In-Spite-of Liking
In thinking more about Rebecca Black's popularity, I've started to consider how anyone gauges whether something is popular and/or well liked, and what that means.
At the most basic level, we have the number of plays or views. The fact that Rebecca Black's video has been played over 20 million times tells us that it is, in some way, popular. Of these, some could be curiosity seekers, some could be people who like the song listening to it repeatedly, some could be people who enjoy laughing at the song listening to it repeatedly. The story surrounding Black's song indicates that most viewers do not like the song in the way that most people earnestly "like" what they like.
It takes very little effort and no money to click on a link and watch a YouTube video. The same cannot be said for downloading a song on iTunes. So, which is more likely: that tens of thousands of people earnestly like a song that many people find horrible, OR that tens of thousands of people are willing to pay money to listen to something they do not earnestly like?
What do I mean by "earnest"? People talk a lot about liking something "ironically." But I feel like this is the wrong term. "Ironic" implies that they like something for the opposite reason than one would expect them to like it. But are those who grow "ironic" mustaches really growing them for the opposite reasons of those growing them earnestly? What would that even mean? The term "ironic" just doesn't seem to convey anything meaningful. It doesn't help us understand why people buy or do or wear things that are inconsistent with their apparent tastes. The term "parody" might be used to describe such actions, but I feel like that's inadequate, too. "Parody" implies that they are trying to get a person to laugh at that which is being parodied. It's commonly understood as a way to ridicule something. In some cases, I can understand how this might be true of a hipster growing a mustache or of someone who finds Rebecca Black's voice grating and her lyrics awful: they're growing/listening to these things to make fun of and feel superior to those who earnestly do/like them. For them to do this, there must be a receptive audience, someone to share in the joke. If they were in the exclusive company of earnest fans or mustache-growers, I can't imagine that they would keep up the parody for very long. But if they're surrounded by others who feel similarly toward people who grow earnest mustaches and earnestly like Rebecca Black's music (or that type of pop music), then parodic liking is a way of bonding, of signaling that you're part of a group.
I'd like to suggest that something else is going on: in-spite-of liking. This means that someone hates a certain aspect of a show, movie, song, piece of clothing, or famous person but likes another aspect of it. All of these things are composed of many elements, but most of the ones that are liked in this fashion are somehow un-self-conscious, nakedly attention-getting, unapologetic, and unsubtle. Perhaps people long for these characteristics, and if they happen to be packaged with something the user does not like - misogynist lyrics, a nasally voice, rampant consumerism, a lifestyle one cannot identify with - they're willing to overlook those things in favor of the characteristics they like. Maybe they also identify with the fact that these people are proud and hated by many. Unlike parody, this could take place in a vacuum. If I liked the beat of Rebecca Black's song and found it catchy, despite thinking that the lyrics were inane, that it was irresponsible to mock a 13-year-old, and that her voice was grating and nasally, I might still listen to (or even download) the song without having to do so in front of anyone.
I wouldn't expect anyone to be able to articulate these feelings, but that doesn't make them any less real. People might just say "I like it, the end." So if we want to understand what predicts liking, we might have to move away from self-report and identify patterns of liking that diverge from traditional models, instances we have usually called "ironic liking." If we isolate each characteristic and ask whether one likes something as a kind of performance for the benefit of others, we can better understand this phenomenon.
Post-script: Amazingly, 10-year-olds seem to grasp the concept of ironic liking. This focus group also indicates that the song skews young: people under 10 seem to like it more than teens do.
Saturday, March 19, 2011
The Fate of Rebecca Black
In just over a week(!), unknown teenager Rebecca Black's quasi-home-made music video "Friday" went from 3,000 views to 22 million views. Like many viral sensations, there was a catalyst in the form of online opinion leaders (i.e. bloggers with connections to mainstream media): the popular blog Tosh.0 got the ball rolling with a re-post of the video from The Daily What. It's hard for anything to "go viral" without these influential bloggers drawing the material to the attention of their audiences, which draws the attention of mainstream media, which draws the attention of your mother. But are there any characteristics of this text that set it apart from things that don't go viral and that it shares with other things that have become popular?
One possible precedent is the meteoric rise of YouTube pop sensation Justin Bieber. Commonalities: the two rises happened at roughly the same time, meaning that the relationship among small-time teen artists, YouTube, bloggers, the mainstream media, and the general audience was roughly the same during both phenomena. Both are producing pop music in the style that is popular right now. Here, one might engage in subjective judgment of the merit of their music, but that is where the conversation about what goes viral ceases to interest me. Maybe Bieber's talented, maybe he isn't. If we're interested in figuring out why both Bieber and Rebecca Black got popular quickly on YouTube, we have to look at characteristics of the video texts in relation to their reception, which - conveniently enough for anyone interested in understanding media, meaning, and effect - is captured on the YouTube video itself, on Twitter, and in the blogosphere.
Both Bieber and Black inspire diametrically opposed reactions: you either love them or you hate them. Though it's hard to be certain of the identity of anyone online, judging by the preferences of those who leave comments (which, in another boon to media researchers, are easily available, searchable, and analyzable), the loving/hating of both seems to be determined by gender, age, and some personality trait (anti-authority or anti-social, perhaps). The loving/hating of these artists seems more personal than the loving/hating of artists who rise through the traditional star-making machinery of Hollywood. Stars are either conditioned to speak and behave in a certain way or are selected based on their match to very specific popular archetypes. They feel remote and unrelatable in a way that overnight sensations do not.
But Black might actually have more in common with a certain brand of reality TV, of which Jersey Shore is the most salient example. These texts have at least two kinds of viewers: those that identify aspirationally with some element of the characters' behavior (usually their bravado in the face of haters, or their unchecked hedonism during a buzzkill economy) and those that love to hate them and/or laugh at them, feeling superior to them. Additionally, there are people who, paradoxically, occupy both ends of that spectrum, liking the characters "ironically." There's a genius to this kind of entertainment: by incorporating both lovers and haters of these people, they double their audience AND they get people talking online, which is necessary in the age of social media for marketing purposes. As one of my professors says, TV exists to give people something to talk about, so these texts are popular because they're ways for us to talk about taste, class, appearance, values, and pretty much any other element of human behavior.
There's also the question of authenticity. Black seems to be authentic in her performance (as opposed to being purposefully bad in a parody style), given her age and her earnest appearances on mainstream media outlets. But the authenticity of those millions of people who watch the video is in doubt. Judging by a lot of the comments on YouTube and the mainstream media story, most people are laughing at Black, but it's conceivable that there are other earnest 13-year-old girls who honestly like the song and, out of empathy, are un-ironic fans of hers. If you hate her, you're saddened and alarmed that so many people can have such horrible taste in music. If you love her, you're saddened and alarmed that people can be so cruel as to laugh at a 13-year-old. In any case, she seems to be making some money off it, money that, properly invested, will still be hers when we find someone else to laugh at. And she's making showbiz connections. The narrative will probably go something like this: mean internet beats up 13-year-old girl, Bieber or Usher or some famous dude jumps to her defense, voice coach tells her how to sing slightly better, young teen girls see her as an empowering figure, ??? = profit!
This still leaves me with one question, one that I hope many people are asking as more cewebrities pop up: does it matter how you get famous? I can't help but think of Star Wars Kid, who ended up in therapy after so many people made fun of him on the internet. Sometimes, the social mediasphere can be an extension of junior high. There's an endless supply of 12-to-15-year-olds doing something embarrassingly earnest and, mistakenly, recording it. These are the next Rebecca Blacks. And the conversation about the effects of Rebecca Black's rise and reception doesn't end with whether or not she lands on her feet. Maybe some girl, too young to understand exactly how people get famous online, will see that Rebecca Black is famous and met Justin Bieber, so she'll make a similar video, but it won't go viral. It'll just get passed around her middle school, she'll be made fun of, and she'll develop an eating disorder. Maybe this will be the 21st-century equivalent of the emotional wreckage of the would-be starlet turning tricks on Sunset Strip.
As audiences, we seem to always be hungry for someone to serve as a topic of debate. By tearing them down and building them up, we're helping create the rise-and-fall/fall-and-rise stories that have always kept audiences rapt. As performers, we seem to be stuck between killing ourselves when someone speaks ill of us or being way too proud of our shitty music or personality, impervious to criticism (haters gonna hate!). Those wondering how our culture will get past this period would do well to follow Rebecca's story as it unfolds.
Sunday, March 06, 2011
The Pecker at the Party: Immediate gratification, social media, and social situations
Imagine you are at a cocktail party. Most individuals are talking to one another while some are pecking at their smartphones. One of the guests at the party confronts a “phone-pecker”, explaining that she finds this behavior to be rude and cannot understand why the individual is engaging in what appears to be “anti-social” behavior at a party. The “phone pecker” retorts, claiming that he is actually being quite social, just with people who do not happen to be in the room at that moment. He is looking at photos of friends on Facebook, answering emails and sending text messages, reading tweets. How, he asks, is his socializing any different (or inferior) to the socializing that is happening at the cocktail party?
This situation - in which an apparently solitary individual stares at a screen and pecks at the screen or a keyboard or, in some cases, talks to the screen - is increasingly common. More social interaction is mediated - taking place via text messages, emails, and social media (e.g. Facebook, Twitter) - and more people are concerned about the implications of this for individuals and society. The cocktail party guest's identification of this behavior as "rude" suggests a breach of etiquette, but the perception of these acts as rude may only be a symptom of some people's failure to understand how the technology is being used. Such a failure might be a result of the rapid diffusion of the technology and the predictable failure of social mores to evolve at a similar rate. Older generations lamenting the corrupting influence of new technologies is likely as old as technology itself. Once a technology is sufficiently common, such complaints are seen as the mark of a curmudgeon rather than a legitimate qualm. So the fact that this behavior is seen as rude fails to distinguish between a public act that is simply new and ultimately harmless (and will eventually no longer be seen as rude) and an act that is, in some way, harmful to the social fabric and will always be seen as rude or aberrant.
Before proceeding, it is worth noting that the phone-pecker may not be being social at all. He may be scanning his Facebook feed (the equivalent of looking at other guests at a party while not conversing with them) or checking sports scores. Given the size and orientation of networked mobile devices and the expectation of privacy associated with their use, it is difficult for observers to tell the difference between social interaction on mobile devices and non-social behavior.
Let us assume the phone-pecker is interacting with others via the mobile device. Are there really no differences between the nature of heavily mediated person-to-person relationships (that is, those that take place chiefly via text message, email, Skype, and Facebook) and those that are not as heavily mediated?
Older individuals often marvel at the sheer number of texts that teenagers send in a day. According to a 2010 Pew study, half of all American teens send over 50 texts per day, and some send hundreds. This number is not shocking if one considers text messages to be similar in purpose and content to turn-taking conversations. Teens engaged in long turn-taking conversations on land-line telephones and, before that, in person. The length of statements in such conversations was and is likely to be short, no more than the 160 characters allowed by most SMS (i.e. texting) services. So the length of individual statements and the frequency of text messaging and social media use, while initially seeming alarmingly different, do not differ significantly from existing forms of communication, mediated or not.
What about the people participating in the interaction? There has been fear surrounding social media and one-to-one technology (dating back to the introduction of the telephone) that remote communication devices would facilitate relationships between vulnerable populations (e.g. children) and those seeking to take advantage of these populations (e.g. sexual predators, advertisers, etc.). Though there have been instances of such behavior, the bulk of mediated social interaction (on Facebook and text messaging, if not on Twitter, which, in any case, operates more like a micro-broadcast medium like blogging and less like a rapid interaction application) is still between parties that are acquainted with one another from non-mediated social worlds like school, work, and get-togethers in real world locations like bars and parties. In other words, the people on the other end of the phone-pecker’s pecks are likely to be similar, in terms of their connections to the phone-pecker, to the other cocktail party goers. So it seems, again, that what initially appeared to be different may not be that different at all.
But there's another possible area of difference: the nature of the relationship. The maintenance of relationships, like any other endeavor, is composed of acts that are immediately enjoyable but do not pay long-term or collective dividends (complaining about work to a friend, flirting with a married co-worker, playing a game of basketball) and acts that are not immediately enjoyable but do pay off in the long run (changing a diaper, attending a boring work meeting, discussing a difficult topic with a spouse). This isn't to say there aren't many (if not most) social interactions that are both immediately gratifying AND foster long-term gains for the self and others (great sex, great parties, enjoyable collaborative work), but some relationship maintenance, like many other things in life, is not immediately gratifying and pays off only in the long run. An overindulgence in immediately enjoyable social interactions and a failure to engage in any other kind may lead to shorter-duration relationships, as individuals more frequently (and accurately) accuse one another of being selfish. A failure to think about one's long-term goals in relationships goes hand in hand with being self-centered or selfish.
Throughout most of human history, our relationships were constrained: by the surveillance and judgment of others, by geography, by class, by time. The proliferation of networked communication devices allows us to (at least partially) remove these constraints. For some, this is a good thing: relationships hindered by repressive regimes are allowed to flourish online. For others, I think it is not. These constraints often kept immediately gratifying interactions at a distance, forcing us to talk with co-workers about work when we would rather be flirting with our partners, to talk with our parents when we would rather be talking with our long-time friends, to converse with our long-time friends when we'd rather be talking to someone new and exciting, to talk to our spouses when we'd rather be flirting with a co-worker. These constraints were shaped by the randomness of geographic distribution and by the history of institutions developed by those in power, and in those cases, circumventing them is, ultimately, a good thing. But either by accident or by design, the constraints kept us from indulging our every social whim.
Again, this isn’t to say that individuals without networked communication technology did not indulge in selfish interactions not in their or anyone else’s best long-term interest. However, when you make immediately gratifying options more easily accessible, you make them more likely to be chosen, particularly by those low in self-control. Why would the choice of whom to interact with and what to converse about be any different than any other decision (in which the temporal and spatial proximity of temptations make them more likely to be chosen)?
To return to the phone-pecker, it is possible that he simply prefers talking to someone who is not at the party. The pecker was invited to the party because of an assumed mutual interest in interacting with those at the party, but given his inattention to other party goers, he cannot count on this assumption lasting very long. He uses technology to develop and nurture several intense, mutually beneficial relationships with co-workers (frequently answering work emails), romantic partners, and close friends (frequently texting at parties), but ignores those very same people at other times so that he can communicate remotely with others. When spending time with his romantic partner, he frequently converses with friends through text and answers email, neglecting his relationship with his partner. When spending time at work, he frequently blows off work to flirt with his partner. It isn’t that he ignores certain people, but that, given the choice between one conversation and another, he chooses the more immediately interesting conversation.
By doing this, he may be training himself to become intolerant of social situations that are not immediately gratifying, growing accustomed to the instant gratification that networked communication devices provide. At a party, this is relatively harmless, though he might find himself invited to fewer parties in the future. But if he bows out of important discussions/arguments with his partner before they can be resolved, if he pays more attention to work emails than to his children, if he talks to his partner about work more than to his co-workers, or flirts with his sexy co-worker more than he talks to his partner about their finances, eventually his relationships with all parties will suffer from lack of attention.
Tuesday, February 22, 2011
Is vlogging a female medium?
Something in an interview with cewebrity Magibon on Know Your Meme got me thinking: are there more female vloggers than male ones, and if this is so, why might that be? Magibon says that in Japan, most males do not go on video and, if they do, they do not show their faces. In my casual perusal of home-made videos from Japan on YouTube, I've found this to be true, and it wouldn't surprise me if this were true in the US as well.
Why might a gender difference exist in online self-expression? First off, a disclaimer: any difference we might observe is as or more likely to be a product of cultural expectations about gender roles than of some inherent difference between the sexes. Having said that, it's possible that young females believe they can gain status by gaining attention, and one way to gain attention is to use their looks. Perhaps many males, here and abroad, do not enter this entertainment arena because, traditionally, males do not derive their cultural worth from showcasing their looks to the extent that females do. Perhaps males fear some permanent tarnishing of their professional image, that employers won't take them seriously once they find the rather silly video blog. Young females, not having as much to lose in the traditional professional world (or at least not anticipating that they will when they get older), jump right in and start vlogging.
The result is a medium dominated by female producers, but is this media created for a female audience? Probably not to the extent that, say, the female blogging community is. Take the looks out of the picture and, I would imagine, you take away a good-sized portion of the young male audience. It's worth rethinking how we identify authorship for YouTube and vlogging. Are females really empowered when they have to cater to a male audience (a young, hetero male audience fixated on looks)? Then there are those wildly popular make-up tutorial videos created by women for women. Even when both the audience and the creators are women, the content seems ultimately geared toward pleasing men (albeit indirectly). This just doesn't seem to be true of the female blogosphere, and I think most of the difference has to do with looks.
Magibon made it seem as though males "can't compete" with females in the user-generated video arena, that it would, in some sense, come to be dominated by women. But how dominant are these young women?
Thursday, February 03, 2011
Will the Revolution be Tweeted?
I just went to a terrific, timely talk by visiting professor Michael Dobbs on social media's role in revolutions. It was a rare treat to be in a room with well-informed experts on media and politics and discuss something that was going on right at that moment. Dobbs gave many examples of the popular press claiming that social media, Twitter in particular, had precipitated the successful revolution in Tunisia and the still-in-progress revolution in Egypt. He brought up examples of techno-utopian views on the subject (e.g. Clay Shirky's TED talk on the revolutionary power of Twitter) as well as rebuttals such as Malcolm Gladwell's piece for the New Yorker.
One question that came to mind during his talk was: when assessing whether or not Twitter and social media are capable of facilitating revolutions, to what are we comparing them? And by "we," I mean the users, the public, the press, the critics, anybody. Gladwell essentially compares Twitter and Facebook groups to real-world activism. I think he does this because he believes (as do I) that a fair number of social media activists think of their tweeting or Facebook-group-founding as more similar to participating in a protest than to sitting at home passively in front of a television screen and watching it all unfold. They think they're being active, but Gladwell points out that they ultimately have little real power because, unlike civil rights protesters or other activists who actually changed our world, they are not making any real sacrifice; they are not risking much; they are not forming lasting bonds with people for whom they would sacrifice something real (a large sum of money, their safety, their lives). That, Gladwell says, is the key difference between real-world protesters and virtual ones: shared sacrifice.
I basically think of online activism the way I think about online "friends": instead of just saying, "online _____ is no substitute for the real thing!", I think it presents us with an opportunity to pull apart the real-world phenomenon and ask which parts of it are duplicated by the online proxy, which parts aren't, and how those parts matter for the outcomes we care about. So, what is the point of protesting?
First, it's a way to devalue the ruling party's monopoly on physical force. If enough people get out there in the town square and don't back down after being physically threatened and assaulted, then the power to threaten and assault loses its meaning. There's also a "softer" power of protesting crowds: they can choose not to vote for someone; they can choose not to spend their money somewhere. Even if they don't live in a democracy, they can make it even more glaringly apparent to the outside world that they're living in a country that isn't even remotely democratic. Dobbs seemed skeptical that this alone could result in the overthrow of a repressive regime. It didn't work in Poland, it didn't work in Iran, and it probably wouldn't work here, not unless people make very real, significant sacrifices, ones that can't be made online.
Dobbs essentially said that Twitter is a way to share information. Like the pamphlets distributed in earlier revolutions, it is a necessary but not sufficient condition for revolution. Other things need to be in place: long-standing, easy-to-grasp grievances, for one. But I don't think social media is just a way to get information out. The difference has to do with social pressure. Television was and is quite good at presenting the spectacle of many people behaving a certain way. Combine that spectacle with many little nodes on a network behaving or expressing ideas in the same way and you've got the appearance of consensus, which is a powerful tool.
This gets me to the idea of a tipping point. There is a point at which the contagion of an idea rapidly speeds up, a point at which it seems like "everybody" is buying the same t-shirt or saying the same catchphrase or using Facebook. These are all fairly benign trends, which makes sense: buying a certain kind of t-shirt doesn't involve much sacrifice. They're all pretty much the same to you, and if this one is popular, even if it's a little more expensive, maybe you should buy it. Putting your life on the line, uprooting your family, or risking your livelihood (all things that may be called for if you're participating in a revolution) involves significant risk. But those arguing against the revolutionary power of social media miss a key point: under the right circumstances, social pressure can convince you to make great sacrifices.
Here's how I think it works. At the start, you need a group of people who are very similar to you, the Twitter user, and they have to be engaging in some activity that you were already close to doing yourself. Then you might be convinced to do what they're doing. As the number of people doing that thing grows, it starts to matter less and less how similar those people are to you and how predisposed you were to act that way in the first place. When it comes to social pressure, there is an effect of sheer numbers. It doesn't replace or cancel out those other effects, but as the number grows, the effects of similarity and predisposition lessen. If you give people who want a revolution a sense that they can pull it off, because there are so many other young, unemployed, pissed-off men ready to risk jail or a beating, then it's more likely to happen than if you didn't give them that sense. This is something that pamphlets or television alone cannot do.
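For the quantitatively inclined, here's a toy sketch of that sheer-numbers dynamic, again entirely my own construction and not anything Dobbs presented. It borrows Granovetter's classic threshold model of collective behavior: each person joins once they can see enough others already participating, with the thresholds standing in for similarity and predisposition. The only thing the medium changes is how much of the crowd each person perceives:

```python
# A toy sketch (my construction): Granovetter-style threshold cascade.
# Person i joins the protest once they can SEE at least i others already in
# it. The population is identical in both runs; only the visibility of the
# crowd differs between a partial-view medium and a full-count feed.

N = 1000
thresholds = list(range(N))  # one radical (threshold 0), then a staircase

def cascade(visibility):
    """Run rounds until no one new joins; `visibility` scales the crowd
    each person perceives (1.0 = a feed that broadcasts every participant)."""
    joined = 0
    while True:
        perceived = int(joined * visibility)
        new_total = sum(1 for t in thresholds if t <= perceived)
        if new_total == joined:  # nobody new crossed their threshold
            return joined
        joined = new_total

for vis, medium in ((0.3, "pamphlets/TV (partial view of the crowd)"),
                    (1.0, "networked feed (full count visible)")):
    print(f"{medium} -> {cascade(vis)} of {N} join")
```

The staircase of thresholds is deliberately knife-edged, which was Granovetter's original point, but it captures the claim: the same population, with the same predispositions, either fizzles at a handful of radicals or tips completely depending on how many of each other they can see.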
At the end of Dobbs' talk, another professor pointed out how many billions of tourist dollars Egypt is losing each day. China's economy can withstand a repressive government, but it doesn't look like Egypt's can. We'll just have to wait and see.