Tuesday, December 20, 2011

Over-sharing, time dilution, and the weaponization of words

If classes on "how to use social media" are ever taught in secondary school, it's likely that one of the topics will be the hazards of over-sharing. There's a common sentiment, typically expressed by digital immigrants like the New York Times' Timothy Egan, that young people share too much via social media. Many people seem baffled as to what the reward might be for telling people so much about your inner thoughts and feelings. Egan suggests, like many others, that this compulsion to share is driven by (and drives) the sharer's narcissism, though some research I'm working on right now with a colleague indicates that no such connection between narcissism and social media use exists. In fact, if there is any connection between tweeting and narcissism, it's in the opposite direction from the one suggested by Egan and Co.

What really concerns many digital immigrants are the consequences of over-sharing. Drunk status posts live forever in the digital archive of the internet, preserved word for word, undistorted by individual memory. Most conversations about this topic concern employment or election: one's past indiscretions come back to haunt those seeking work or public office. It's interesting how the conversations about reputation that used to apply only to celebrities, politicians, and reality TV stars now apply to all of us. We can all be publicly disgraced and publicly redeemed.

The argument against sharing assumes that collective, disembodied digital cultural memory operates like pre-digital cultural memory and pre-digital individual memories. But I wonder if there might be some differences that aren't being accounted for. There might be a kind of dilution of the past going on. Perhaps each bit of information about the past matters less because there's simply so much of it. We judge gaffes the same way we judge everything else: by how rare they are. People also speculate that more information shrinks our attention span and with it the duration of the news cycle, so that people move from topic to topic more quickly. We might move through the cycles of admonishment and forgiveness more quickly as well.

But maybe the critics of over-sharing are right. Maybe each person who has tweeted something stupid or left a stupid status update is instantly and permanently discredited by some for doing so. In a market where there are so many people out there who are equally accessible online, employers, voters, consumers of entertainment content, and even online daters can discard people for single infractions because there's always a comparable replacement who hasn't said anything stupid (yet). Dismiss anyone for tweeting something stupid and you'll be dismissing a lot of people, but maybe that's not such a problem when you've got so many people to choose from.

There's another way in which over-sharing might be changing the nature of conflict. If you tweet something potentially embarrassing and it lives forever, does that mean that everyone's words can be used as weapons by their enemies against them in the future, the way it happens with politicians now? This might just be the next frontier of human hostility: searching people's pasts and using what they've said as a point in an argument against them, context be damned. Maybe we could only forgive because we could forget, because our memory of ugly, hostile behavior softened with time. Now that we can't forget, now that every ugly, hostile remark is retained, we'll stay mad at people and never give them another chance. We've started to get cynical about politicians because they were the first to have every one of their moves recorded and re-broadcast out of context. Soon, when this happens to many of us, we'll become cynical about human nature.

Here, I must make a point similar to one I've made about privacy: digital social media gives people ammunition to hurt others, but it doesn't create the will to use it, nor does it mean that we can't make rules, laws, or norms that prohibit using past or private information against people. The preservation of information doesn't necessarily mean that anyone will dig it up and broadcast it in the future. Someone has to care enough about finding the information to search for it and have some reason to defame another person. So maybe you can go on sharing as much as you'd like on Twitter and Facebook. Just don't get on anybody's bad side, lest they go to the trouble of digging up something stupid you said years ago and using it against you. And if they do, know how to fight back in the ways pioneered by the frontiersman of character assassination, the political operative: dig up something on your opponent or dismiss their efforts as "mud-slinging".

The outlandishly stupid tweets used by those making the over-sharing argument are rare and, so long as these "be careful what you post" stories are out there, liable to become rarer still. I just don't see people cutting back on sharing, despite the fact that the preservation of day-to-day sharing can make any heartfelt sentiment seem stupid when taken out of context. If people are pre-disposed to disliking a person, they'll take a questionable post from the past as evidence to support their dislike. If they are pre-disposed to liking the person, they'll shrug and say "so what if a person tweeted something embarrassing 5 years ago? Who cares?"

Many assume the words and pictures that preserve and transport our past acts have a certain power, but our view of just how powerful they can be is distorted by our experience with how they were used in the previous age. Before print, television, and film, these things were ethereal and hard to use against anyone or for advancing any agenda. During the broadcast era, they could be used to convince a great many people of a great many things. I imagine that many were initially inclined to the "sticks and stones" way of thinking about preserved, portable information. After some rough lessons, they came to see that the pen could be mightier than the sword. Underlying all of these shifts in the way words and images convey our pasts are conflicts and allegiances that revolve around scarce resources. The power of words to advance any individual's or group's agenda depends on their permanence. The print era, with its centralized authority, taught us the power of words and images to shape our view of the past and of individuals and their reputations, but we shouldn't assume that words and images will be just as powerful now that the power to create and distribute them is no longer in the hands of the few.

Saturday, November 05, 2011

Using Anonymous

I saw E. Gabriella Coleman give a terrific talk Friday at the New Media/Social Change Symposium on Anonymous, the loose online collective bent on wreaking havoc and occasionally bringing about positive social change. Like Alan Smithee or Luther Blissett, Anonymous is an "open reputation" or an "improper name", an identity to be used by anyone. Coleman's talk prompted me to ask several questions about Anonymous beyond the obvious one of who uses the title: why do people use it, and under what circumstances?

There's something exhilarating, liberating, and perhaps frightening about an identity that gives anyone the freedom to do or say anything without fear of reprisal. You could think of the title "Anonymous" as less of an identity and more of a tool that could be used by the powerless to subvert the creeping surveillance state. But that's not quite what this is. In actuality, there are authorities trying to identify and prosecute members of Anonymous or, more intriguingly, trying to infiltrate it and discredit it by engaging in activities in its name that are antithetical to its stated modus operandi. In order to outwit their "competition", members of Anonymous require free time and technical know-how: two things many young, unemployed or under-employed men happen to have in spades. That helps explain the aesthetic and ideological tendencies that marked Anonymous's original endeavors, and it also helps explain its limits: why it can't be picked up by any group to do anything. Instead of a selfless Robin-Hood-type collective, we're left with a small, relatively homogeneous group that is the only one capable of using the tool well. They all have the same axes to grind, the same sense of humor, and very similar experiences: all markers of a relatively uncomplicated identity. Anonymous didn't have to be an identity, but there just aren't that many people who have the time and the skill to use it as a tool.

While one anonymous collective is good at disrupting and tearing things down, another relatively anonymous collective is good at building things up: Wikipedia. As an anonymite, you can create or you can destroy, but (to return to the topic of the symposium) can you really change anything, any existing power structure in the offline world? Offline world changes - changes in policy, changes in the flow of capital - usually require actors to maintain traceable identities. But maybe the disruptions caused by Anonymous are enough to spark offline changes by people who have traceable identities - voters, investors, consumers, and workers.

Finally, to get a sense of the possibilities of anonymous social action, it's worth considering two competing popular narratives that revolve around revolutionaries, justice, and open reputations. Anonymous borrowed its iconography from V for Vendetta, the 1980s graphic novel and mid-2000s film, which featured the open reputation of V, a freedom fighter (or anarchist) intent on bringing down a fascist regime. The story leaves open whether V was sane, but his actions - toppling a regime that is obviously fascist - are treated by author and audience as having a positive outcome. Conversely, in Fight Club, we see a collective like so many military or para-military revolutionaries, bent on washing away any trace of individuality in order to form a stronger coalition, only to re-discover the value of individual identity when one of them is martyred. But the real identity problem in this story is with the main character, who is either a downtrodden worker drone or the lead revolutionary. Instead of sloughing off his old, ineffectual identity for his gleaming new one, he's caught between the two, a stranger to himself. It remains to be seen whether the lead hackers behind Anonymous are less like Bruce Wayne/Batman and more like Ed Norton's character/Tyler Durden, and whether their minions are more like the group-thinking benign terrorists of Project Mayhem and less like the triumphant masses of the Arab Spring.


Thursday, September 29, 2011

Guilty Pleasures and Shameful Pleasures

Do many people really like Facebook?

My sense is that it has a very high use-to-liking ratio. People spend more time on Facebook than on the other 5 most popular sites combined. Many people spend more time on it than on other things they profess to love. People often say that they're "addicted" to it. Of course, this is an exaggeration, but the tendency of people to say that they're addicted to something, or the tendency to spend a lot of time on something that they don't claim to like all that much, is interesting. What might be even more interesting is if Facebook use, or similar high-use/low-liking leisure activities, are displacing activities that people say they really like. Imagine that you are a fan of a TV show. You've got a lot of unwatched episodes on the DVR, but instead of watching them, you spend more time on Facebook.

There might be many reasons for this. One is the quantity of content. Facebook, much like the internet itself, provides a seemingly unending stream of novel content, something TV shows can never keep up with. Second, you can always watch the shows later, but the news on Facebook loses its freshness and its value quickly. Also, you just don't want to be out of the loop. This brings up another argument about habitual Facebook use. No one says that they're fans of talking, or fans of phones, or parties. It's just something that people do. This is why it's important to think of Facebook use, or any other high-use/low-liking activity on the rise, in terms of displacement. Is it substituting for other kinds of talk, like face-to-face communication, or is it substituting for things that people say they like, that they say they want to do more of but can't find the time for? This is the difference between an enjoyable pastime and a compulsive time-suck. It also might be the difference between a guilty pleasure (something that you feel bad about because you don't want to be doing it) and a shameful pleasure (something that is frowned upon in your culture but that you really like doing).

In conducting my research on why and when people choose guilty media pleasures, I think this is a crucial distinction to make. You can be a fan of Real Housewives of New Jersey and people might refer to it as a guilty pleasure, but as long as you truly and honestly are a fan, then we're talking about something that violates society's values, not your own. It would be interesting if the guilty pleasure were displacing the shameful pleasure.

Thursday, September 22, 2011

A Perfectly Social World

There's an interesting window of time between when you've heard that something new and supposedly "game-changing" is about to happen and when you know what it is, the moment before the unveiling. Once you find out, it usually disappoints, but before you hear about the reality, knowing that something big might be on the horizon gets you thinking about what's possible. I remember that happening in the months leading up to the release of the Segway scooter. We knew that an inventor with an incredible track record and lots of resources had secured some very interesting looking patents, but we didn't know if this thing would fly, how it would be powered, or how fast it would go.

It feels like a similar situation now with a supposed re-design of Facebook on the horizon. This article speculates that Facebook will put an emphasis on passive sharing: rather than signifying that you like something by clicking the "like" button or posting a link on your profile or on someone else's, you will just go about your web browsing and other people will see it (or, more likely, just the parts of it that you want seen). Let's just assume, for a moment, that Facebook does something like this.

The first gut reaction is that it's too much of an invasion of privacy, and I'm sure people will write tons about this angle if this ends up happening. But it's more interesting to think about why passive sharing might be appealing and what it might feel like to live in a world where more moments of every day are shared and social. So, start out by imagining there's a magical switch that is thrown each time you browse something that you don't want certain people to know about. It filters out exactly the people you want to keep from knowing what you're doing, and it does so without you needing to actually do anything. If this existed, what would be the appeal and effect of increased passive sharing?

As I read articles for my research, design experiments, read the NYTimes, watch ESPN, go to reddit, I have an inner monologue, sometimes an inner dialogue, a kind of hypothetical conversation about what I'm reading or writing. Heavy posters on Facebook or Twitter (or even heavy texters) have taken to transcribing this inner mono/dialogue so that it can start a conversation, and they can do this at any place or time. But passive sharing doesn't necessarily initiate social interaction of any kind. It might act as a pretext for conversation ("I saw that you were reading that article I read yesterday. What did you think of it?") or we might just use it as a more finely tuned means of social comparison than seeing what people actively post about themselves. You run out of new actively posted items to look at on Facebook pretty quickly, but I doubt you'd ever run out of passively shared activity to look at. Plus it seems like a more "honest" look at how people really are, not just the happy, shiny selves they present in their pics.

Since I'm in dissertation mode right now, thinking about one theory and how everything fits (or doesn't fit) it, I'm thinking about this in terms of choice, value, and delaying gratification. You could always use Facebook and other social media (even a phone) to have a social experience, but for people to connect with you or even to see what you're doing, you had to take some initiative. The people taking initiative - the frequent posters, the tweeters, etc. - may not have been all that relevant to you, in terms of your mood or your pre-existing social connection to them. But what if you always had the option of having an interesting conversation with someone you wanted to converse with about something you wanted to converse about? If that option is sitting there in that little rectangle of light you're staring at right now, you would probably choose it over most other activities.

It's been said that we're social animals, that all humans need social interaction and that society grows from this. But all social interaction has been embedded in the rules and systems of culture and physical space. You were surrounded by people, but they were people whose personal lives you didn't care about all that much. It's interesting to think about a world in which you could always look over and see what your friend is doing and strike up a conversation about it.





Wednesday, September 21, 2011

The Two Facebooks

Facebook has changed the way that it presents updates of information about a user's friends, starting the familiar cycle of backlash and revision. Setting aside the inevitable grumpiness of many users who are averse to change, why did Facebook make these changes? How and why is it hoping to reshape the user experience?

As I posted before, Facebook's appeal depends to some extent on the "freshness" of the information presented in the feed. There are probably thousands of hours' worth of "content" available on most people's Facebook pages. If I consider all the updates from all of my friends to be the total information available to me on Facebook, I (like most people, I would assume) have seen very little of it. The value of each little bit of information about my friends depends on a few things: its recency (wouldn't I rather know what happened to my friends within the past few weeks than know what they were doing last year?) and its relevance to me. Facebook's privileging of the "top stories" over the "most recent" may be an attempt to steer users toward more relevant information.

They haven't taken away the "most recent" option. Instead, they've turned it into a ticker and put it on the side of the page. Really, they're just preventing users from opting out of seeing the information deemed to be "top stories" by simply clicking on "most recent". It's interesting to consider the differences between the "most recent" and "top stories" experiences of Facebook. It's likely that people are more apt to merely read "most recent" news and not to actually post anything about it. Facebook has an interest in getting people to post and interact more and be less passive about the experience. That gets them more involved and attached to the application, more "embedded" in some sense. More interaction also gives Facebook more data on users. It can't track what you're looking at when you're scanning "most recent", but it can track posting patterns and use that data to make the "top stories" even more relevant, more satisfying, and better at keeping people on the site.
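To make that "recency plus relevance" idea concrete, here is a minimal sketch of the kind of scoring it implies. This is my own toy model, not Facebook's actual algorithm; the half-life and the relevance estimate are invented for illustration. "Top stories" is just a re-sort on the combined score, while "most recent" sorts on the timestamp alone.

import math
import time
from dataclasses import dataclass

@dataclass
class Story:
    author: str
    text: str
    posted_at: float    # unix timestamp
    relevance: float    # 0..1, e.g. estimated from past interactions with the author (hypothetical)

def score(story, now=None, half_life_hours=24.0):
    # Combined score: relevance weighted by an exponential recency decay.
    now = time.time() if now is None else now
    age_hours = (now - story.posted_at) / 3600.0
    recency = math.exp(-math.log(2) * age_hours / half_life_hours)
    return story.relevance * recency

def top_stories(feed):
    return sorted(feed, key=score, reverse=True)

def most_recent(feed):
    return sorted(feed, key=lambda s: s.posted_at, reverse=True)

The point of the sketch is only that the two views differ by a single sort key, and that the "top stories" key has a free parameter (relevance) that gets better as Facebook collects more interaction data.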

"Most recent" is really a way of using Facebook for social surveillance, not as a venue for interacting with close ones remotely (which is significantly more valuable a service). Some people may have become accustomed to using Facebook to see the news of people that they really wouldn't classify as close friends, even if Facebook gave them the chance to parse their friends into groups more easily. Maybe they were using it for downward social comparison or "stalking" people, and asking them to create a group of people they like to gawk at and not interact with breaks some sort of spell, makes people more aware of their inner voyeur. In this way, this particular user backlash might be about preserving the "mis-uses" of Facebook, not as a tool for better communication but as a way to look at people without being looked at.

Thursday, August 25, 2011

"It depends what you mean by sex": Google and Language


While researching the effects of stress on learning, I decided to look at differences between the ways male students reacted to stress and the ways female students reacted to stress. I went to Google Scholar and typed in "gender learning college stress" or similar variants of that. While doing this, I recalled that there was a difference between gender and sex, or rather, the contested nature of the definitions of the words. Ultimately, I think I'm interested in a social role rather than the levels of estrogen and testosterone one possesses.

What if I were more concerned with the biological trait and violence, and I assumed (probably wrongly) that academic writers made a distinction between gender roles and biological sex? Then I'd have to google "sex violence", which, even in the relatively porn-free ecosystem of Google Scholar, would yield plenty of irrelevant results related to the act of sex, not the characteristic of sex. Really, it's just a homonym problem, but it's interesting to consider the role this problem plays in debates like the one over the definition of "sex" and "gender".

Perhaps in the future, searching won't be entirely contingent on individual words free of any context. Maybe search will get smarter about what we want. But in the meantime, we're in a world in which words (or names, for that matter) are at a distinct disadvantage if they refer to too many different concepts or people. Of course, the people who started the debate over the meaning of the word "sex" or the word "womyn" or many other words weren't thinking about the impact of Google on the efficacy of their intervention. Again, it's hard to say how long we'll be living in the world of context-less word searches, but if you're banking on it being around for a while, it pays to use unique terms.

This leads to another problem, one that has frustrated me for years: the tendency of scholars to come up with yet another term for a concept that is subtly different from a concept we already have a term for, just so they can have an "original" concept on which to base their career and reputation. The reverse happens, too: theorists hijack each other's terms and re-brand them, much to the confusion of students everywhere. There's a point at which this behavior leads to a breakdown in communication: are you talking about "priming" or "priming"? (or perhaps "priming"!)

The evolution of language has always been messy, and while Google does bring a kind of order to our information environment, it may not be doing wonders for our languages.

Sunday, August 21, 2011

Rechargeable Value


In trying to think through how I will have media users rate the immediate gratification value of their media selections in scheduled and un-scheduled choice environments, I've run across some factors that make it difficult to re-schedule certain media experiences in people's lives and still have them be enjoyable or valuable. For instance, if you insist that people only send text messages between 3 and 5 PM each day, they may not get much value out of text messaging at those times because the motivation to send those messages was time sensitive. They needed to re-schedule an appointment or make arrangements for dinner that evening. I think we tend to over-estimate the time sensitivity of the value of such communiques (including online chatting). Would you really feel deprived if you had to wait a few hours before learning that someone cared for you, before making plans for the next day, or before learning a bit about how someone's day went? Probably not, but we've gotten so used to having the ability to message whenever we'd like that we don't see a reason to change.

So, there's at least the perception that the value (in terms of enjoyment and utility) of messaging is affected by time in this way. Other mediated experiences can be time-shifted without losing enjoyment or utility. People are fine with planning to Skype at certain times and would be fine with switching the time if needed. People are fine with shifting a TV show to fit their schedules. They'd probably like to watch it as soon as possible because they're anxious to find out what happens next and they'd like a dose of the pleasure brought on by the show as soon as possible. If I had a choice between the next Batman film coming out tomorrow and having to wait a year to see it, I would choose to see it tomorrow. I might even pay a bit more to do so. But if I had to wait an extra few hours or even a few days to see it, it wouldn't change the value of the experience. I'm sure I would still enjoy it.

There are some interesting exceptions to this rule, some non-interactive media experiences whose value is somehow contingent on the timing of the experience. News and sports seem to decline in value over time after their live broadcast in a way that other types of content do not. For some reason, live-ness (or, to use a grocery metaphor, freshness) matters. Of course, a classic game might still be fun to watch on ESPN Classic years later and old news might be fascinating to some, but for the most part, "yesterday's news" isn't very enjoyable or useful.

I had an interesting experience when I deprived myself of a few kinds of media experiences that I partake in a lot: NPR, Facebook, and Reddit. I was away on vacation for a few days and just didn't have time to check any of these (that is what one does with these sources: they check on them). I then experienced more pleasure than usual when I checked these at the end of the vacation. It was as if their value had been recharged since the last time I checked on them. Many interesting bits of news about my friends and about the world had accrued since I had last checked in. When I'm in my normal media habit, I check on these things regularly, getting some utility and enjoyment, but then quickly exhaust their value, having used up the best parts, having to wait until people in my Facebook feed post interesting things, until Talk of the Nation has another media-related offering, or until enough interesting/funny things are posted on Reddit. When I check on them often, their values are quickly exhausted, but when I don't check on them for a bit, their value accrues and lasts longer.

This isn't quite how the value of traditional current events news works, I think. I wasn't adamant about going back and finding exactly what had happened in Washington or Libya while I was off the grid. An even more obvious example would be weather reports: I'm not going to go back and read weather reports for Ann Arbor for the days I was out of town. If there's some sort of commentary about the events, like I'd find in Slate, NYTimes, the New Yorker, or Grantland, then the pieces hold their value. The less commentary, the shorter the "expiration date" for the experience.

People check Facebook many times throughout the day because they can, because it's there. Each time you check it, you can only go so long before you've read all the good stuff and you're making do with the dregs. The same goes for any continuously updating site: blogs, Twitter, online newspapers, etc. If it weren't always there, if you had to check it between 3 and 5 each day, you might be able to read longer and enjoy it more, not having to make do with the dregs. You'd be dealing with a fully charged-up experience. You might even say that this is true for email accounts: if an account is one in which you typically get pleasant emails, then the longer you wait to check it, the better your chance of the pleasure of an inbox full of pleasant messages. If you only get messages that you consider unpleasant, then the longer you wait, the more unpleasant the moment of checking will be. The trouble is that it's too hard to resist the not-fully-recharged version because it's always accessible.
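To make the "recharging" idea concrete, here's a toy simulation, entirely my own sketch with made-up numbers: items of random quality accrue at a steady rate, each check only consumes the best handful of what has accrued, and the enjoyment of a single check therefore grows with the gap since the last check and eventually saturates.

import random

def simulate_checking(hours_between_checks, total_hours=168,
                      items_per_hour=3.0, attention_per_check=10, seed=0):
    # Toy model: items accrue between checks; each check consumes only the
    # best `attention_per_check` items, so a single check's value saturates.
    rng = random.Random(seed)
    per_check_values = []
    for _ in range(int(total_hours // hours_between_checks)):
        accrued = [rng.random() for _ in range(int(items_per_hour * hours_between_checks))]
        best = sorted(accrued, reverse=True)[:attention_per_check]
        per_check_values.append(sum(best))
    return sum(per_check_values) / max(len(per_check_values), 1)

# Checking hourly vs. twice a day vs. once a week over the same stretch:
for gap in (1, 12, 168):
    print(f"check every {gap:>3}h -> enjoyment per check: {simulate_checking(gap):.1f}")

Frequent checking yields many small, half-depleted doses; infrequent checking yields fewer, fuller ones. Whether the backlog holds its value at all is the separate question raised above, which is where Facebook differs from the weather report.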

Wednesday, August 10, 2011

"The Thug Finding the Gutenberg Press"


What is the role of digital media in cases of social disorder, be it a riot or a revolution?

The answer on the tip of every social media guru's tongue is that social media - text messaging, group messaging, social networking sites - make organizing protests and mass looting easier and faster. Without this tool, the argument implies, the young men (and it is mostly young men, for what that's worth) would not overthrow the government or burn down the business. Another argument states that it is the way the rulers are behaving (austerity measures, police brutality, and a gap between rich fat cats and the proletariat) that inevitably leads to this kind of behavior.

Both of these arguments seem unsatisfying to me. Riots and revolutions happened without social media and there have been plenty of nations throughout history that had huge disparities between ruling classes and non-ruling classes that didn't erupt into violence for long periods of time. Perhaps both of these things contribute to the likelihood of these events, but perhaps there are other ways in which the new media landscape contributes to this likelihood.

Media content, be it what we see on television, what we read on our favorite site for news, or what we see in our Facebook feed, influences our ideas of what is normal in society or in the sub-segment of society to which we believe we belong, which in turn affects our actions. By framing a certain behavior as more or less normal, a message sender can affect the behavior of the message receivers. It's possible that various kinds of coverage of social unrest (both positive and negative) frame it as something that angry young men at a certain place and time do, establishing a kind of norm of engaging in civil disobedience, violence, or destruction of property. Instead of relying on the depictions of protesters, freedom fighters, and rioters that the mainstream news gives us, we can get a first-hand look at them on social media sites. Even if only 5% of the population goes to these sources instead of the MSM for coverage of the unrest, if it's the right 5% (i.e. the 5% inclined toward real-world action), it matters. Perhaps this gives readers the impression that these aren't just objects on the screen to be watched, but people who are similar to the readers, who could interact with the readers. Maybe that makes identifying with them easier.

The panoply of opinions and ideologies on the internet makes finding justification (and a group that makes your thoughts about behaving in a certain way seem more normal) easier as well. I think this gets lost in the discussion of how easily social media facilitates the logistics of social unrest. You may start out with anger, but if that anger can't find justification, it's unlikely to manifest itself in action. Sure, a mind sufficiently detached from reality can find a justification pretty much anywhere, but even those with a firm grip on reality can now find reasons to act in ways that they couldn't when the messages were manufactured by people with too much to lose to advocate civil disobedience, violence, or property destruction.

Perhaps the default sentiment toward authority in complex societies is anger, an instinct that we feel from perceiving that we do not have much control over our fate. But that anger gets channeled into avenues other than civil disobedience, violence, and property destruction when we can't find justification or a group performing these actions to make them seem more normal.

(quote from Mike Butcher, TechCrunch Europe)

Wednesday, July 20, 2011

Puppies & Iraq


I just saw Page One, a documentary about the New York Times, which raised some interesting (if oft-repeated) questions about journalism that come along with the financial instability of the industry: is there something about a traditional media outlet like the NYTimes that is superior to the various information-disseminating alternatives (news aggregation sites, Twitter, Facebook, Huffpo, Daily Kos, Gawker, etc.) and, if so, what is it? What is it about the New York Times (or the medium of newspapers in general) that would be missed if it were gone?

Bernard Berelson asked a similar question in a study of newspaper readers who were deprived of their daily newspaper due to a workers' strike in 1945. The reasons people liked (or perhaps even needed) the paper back then - social prestige, as an escape or diversion, as a welcome routine or ritual, to gather information about public affairs - are all met by various other websites and applications, some of which seem to be "better" - that is, more satisfying to the user - at one or all of these things than any newspaper is.

I want to pick apart this idea of that which is "more satisfying" to the user, or what it means to say that they "want" something. The mantra of producers in the free market, no matter what they're selling, is that they must give the people what they want. Nick Denton of Gawker has a cameo in Page One in which he talks about his "big board", the one that provides Gawker writers with instant feedback about how many hits (and thus, how many dollars) their stories are generating. Sam Zell, owner of the Tribune media company, voiced a similar opinion: those in the information dissemination business should give people what they want. Ideally, you make enough money to do "puppies and Iraq" - something that people want and something that people should want. To do anything else is, to use Zell's phrase, "journalistic arrogance".

Certainly, a large number of people are "satisfied" with the information they get from people like Denton and Zell. But Denton and Zell, like any businessmen, can only measure satisfaction in certain ways: money, or eyeballs on ads. There are other, often long-term social, costs paid when people get what they supposedly want. When news is market driven, the public interest suffers. So goes the argument of many cultural theorists. But who are they to say what the public interest is? Why do we need ivory tower theorists to save the masses from themselves?

Maybe that elitist - the one who would rather read a story about Iraq than look at puppies - is not in an ivory tower but inside of all of us, along with an inner hedonist (that's the one that would rather look at puppies all day). There are many ways to measure what people like, want, need, or prefer. I'm not talking about measuring happiness as opposed to money spent/earned. I'm considering what happens when we're asked to pay for certain things (bundled vs. individually sold goods) at certain times (in advance of the moment of consumption vs. immediately before the moment of consumption). There is plenty of empirical evidence to suggest that those two variables, along with many other situational variables external to the individual, alter the selection patterns of individuals. Want, or need, or preference does not merely emanate from individuals. When we take this into account, we recognize that shifts in the times at which individuals access options and in the way those options are bundled together end up altering what we choose. We click on links to videos of adorable puppies instead of links to stories about Iraq because they're links (right in front of us, immediate) and because they've already been paid for (every internet site is bundled together, and usually bundled together with telephone and 200 channels of television). If it weren't like that, if we had to make a decision at the beginning of the year about whether we "wanted" to spend all year watching puppy videos or reading about Iraq...well, I guess not that many people would want to spend all year reading about Iraq. But I reckon that many people would want, would choose, some combination of puppies and Iraq if they had to choose ahead of time. The internet is a combination of what we want and what we should want, and so is the NYTimes, but they represent a different balance between those two things. The Times is 100 parts puppies, 400 parts Iraq. The internet is 10,000,000,000 parts puppies, 100,000,000 parts Iraq (or something to that effect). When you change how things are sold, you may not change what people want, as many theorists claim, but you do change how we measure what people want.
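As a toy illustration of that last point, here's a sketch of my own, with made-up numbers: give each option an immediate appeal and a longer-term payoff, let at-the-moment clicking weight immediacy heavily, and let a paid-in-advance commitment weight the longer-term payoff more. The same underlying preferences produce very different measured "wants".

def choose(options, immediacy_weight):
    # Pick the option with the highest weighted score.
    # Each option: (name, immediate_appeal, long_term_payoff), all on 0..1 scales.
    def utility(option):
        _, appeal, payoff = option
        return immediacy_weight * appeal + (1 - immediacy_weight) * payoff
    return max(options, key=utility)[0]

options = [
    ("puppy video", 0.9, 0.1),   # made-up numbers
    ("Iraq story",  0.3, 0.8),
]

# Clicking a free, bundled link right now vs. committing (and paying) a year in advance:
print(choose(options, immediacy_weight=0.9))   # -> puppy video
print(choose(options, immediacy_weight=0.3))   # -> Iraq story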

Maybe we never have to defer to a theorist to tell us what we should be reading or watching in order to be a better citizen. Maybe we just need to tweak our media choice environment so that it gives the inner elitist a fighting chance against the inner hedonist.

Tuesday, July 05, 2011

Restricted Access

As I passed by a University of Michigan librarian unlucky enough to have her computer screen visible to passers-by and saw that she was on Facebook, I thought about the rights of employers to restrict the internet use of their employees. I believe studies of whether allowing employees unfettered access to the internet hurts or helps productivity have produced mixed results. I can't recall the source, but I remember reading somewhere that workers who take short breaks every hour to do some leisure web browsing are more productive than those who do not take those breaks.

In any case, let's assume that businesses want to restrict their employees' use of the internet for leisure purposes in order to boost productivity. I'm sure many employers block ESPN, YouTube, Facebook, maybe anything that's classified as having adult content, using some sort of Net Nanny. But what if an employer wanted to really restrict their employees' internet use? What if they thought that it would be better for their employees to, say, read the complete works of William Shakespeare or learn about particle physics than to be on Farmville for an hour or two a day? Somewhat less benignly, what if they wanted their employees to only read or watch materials that showed their company and product in a positive light, or endorsed a particular kind of lifestyle? Could they restrict their employees' access to, say, one or two sites like this? Are they within their rights to restrict their employees in this manner?

I have little sympathy for employees demanding the right to surf the net at work. When you are at work, you're supposed to be working. Yes, there are the studies that say that these little breaks can boost productivity, but I don't think there's any research on whether certain kinds of restricted internet surfing are just as good at this. So the employee defense of "a bit of cyberslacking makes me more productive" wouldn't necessarily contradict an employer's right to limit their internet use however they see fit.

It's like having the ultimate captive audience. Sure, you could choose not to watch any of the content we make available to you, but then you'd have to do work (ugh!). Options that might have been unappealing at home suddenly seem interesting. Regarding the scenarios listed above, I'd have some faith that employees would forgo any ham-handed attempts to brainwash them into loving the company they work for (opting to actually work instead of watching or reading poorly made, pro-corporate content) but (assuming a certain kind of intellectual curiosity) might actually respond to reading Shakespeare or learning particle physics. It wouldn't have to be Shakespeare, of course. Whatever the employer thought would be enriching to know could be substituted.

Research on persuasion suggests that convincing someone to do or buy something they didn't already have some inclination to do or buy is extremely difficult, if not impossible. If my access at work were restricted to Fox News and nothing else, I wouldn't suddenly become a right-wing ideologue. I'd get back to work, or daydream, or talk to a coworker. But if it's something you've been meaning to do, perhaps the work setting is the proper restrictive environment, providing that unappealing alternative, that would finally get you to read that classic you've been meaning to read.

Monday, July 04, 2011

The Ethical Issues of Analyzing Time, Desire, and Self-Control

My tentative dissertation project (becoming less tentative as my defense date draws closer) has to do with time, desire, and self-control. One basic premise of the project is that each of us has short-term desires and long-term desires, and that these desires are often in conflict with one another. We might say that "part of us" wants to eat that chocolate cake or spend time on our favorite leisure website, and another "part of us" wants to eat less fat and carbs and spend more time working on projects, exercising, or volunteering. This, in and of itself, doesn't seem that controversial.

Through parents/caregivers and the education system, most people learn at an early age the consequences of too-frequently indulging their short-term desires. The more immediate, painful, and affective the negative consequence, the easier it is to convince yourself to refrain from future indulgence. Even before our parents/caregivers, evolution gave us in-born, visceral reactions to things that are good for us in the long run (eating nutritious berries = yummy!) and things that are bad for us in the long run (eating poisonous berries = vomit). But evolution doesn't provide the fine tuning, and in a fast-changing, complex environment, our consequence estimations need outside assistance. A different kind of convincing-of-the-short-term-self needs to happen when the feedback isn't visceral and immediate.

People have been smoking tobacco for roughly 5000-7000 years, but it wasn't until the last hundred years that large numbers of people knew that it hastened their death. Of course, most ancient smokers died from other ailments when the lifespan wasn't long enough for them to die of lung cancer. Once it became long enough, and once scientists had found a connection between smoking and cancer, a large number of people who would've enjoyed smoking in the short-term stopped or cut down (or at least felt guilty) because they had been informed by some trusted "other" that doing so would bring about long-term benefits. This isn't just self-control. It's informed self-control.

In some ways, this is the role of culture in general: to produce informed self-control (Freud's super-ego). We've all got the easy behavioral imperatives figured out: don't eat stuff that makes you puke; avoid situations that evoke terror. Rules exist because some of us (or all of us under some circumstances) may be inclined to behave in ways that are prohibited by those rules. Rules are not so much "made to be broken" as made to correct what was "broken" about our perceptions of consequences. For better or worse, this has become the domain of doctors: first physicians and perhaps now psychologists and psychiatrists. They make rules based on observations of seemingly disconnected actions and consequences. They are experts in consequences. Did psychologists, educators, or scientists aspire to the role of rule-maker? Probably not, but they're a necessary by-product of a complex world in which our finite senses can't keep track of the many connections between actions and consequences. To believe otherwise is to succumb to nostalgia for a by-gone world.

Things get messy when we get personal about our analysis of time, desire, and self-control: media use (my area of research) and, even more personal, marriage and sex. There have been some terrific articles and commentary about marriage and fidelity in the wake of Anthony Weiner's virtual infidelity and NY's passing of a gay marriage law. A defining characteristic of marriage is the pledge of individuals to stick together. It's an attempt by the long-term-thinking self to override the future short-term-thinking selves so that the long-term self can benefit. But who is informing that long-term-thinking self? What is their evidence? What is their agenda?

This leaves us with an uncomfortable reality: those who can demonstrate the negative long-term consequences of things you know are pleasurable in the short-term and believe are not harmful in the long-term are telling you what to do, and people tend not to like being told what to do. For good reasons, too. Those in positions of power abuse it for their own gain. If I own stock in a cookie company, I'll fund research and coverage of research suggesting that another indulgence is particularly harmful, leading people away from that indulgence and toward cookies. Similarly, certain relationship experts might promote a certain view of monogamy because they benefit from its success in the marketplace of ideas, not because it's any more accurate at predicting negative consequences than any other theory. The same might be said of a media effects researcher. Those who reject the findings of so-called experts analyzing this complex causal world can simply blame another aspect of that complex world that isn't under their control, freeing their short-term selves from blame. If people who aren't in long-term, monogamous relationships aren't happy, it's not because they couldn't exert the self-control recommended by experts; it's because they're being judged by an unfair, retrograde society intent on maintaining a certain kind of social order. If people who play lots of violent video games are more aggressive, then it's because you measured "aggression" wrong or it's due to some variable the researchers didn't control for. Basically, this leaves everybody believing what they want to believe, deferring to no one, and assessing consequences based on personal experience and the limited experience of those around them.

Since I don't want this entry (or anything else I write) to be an empty exercise in hand-wringing, I'll suggest some priorities for research and reporting on research.

We'll have to move from a proscription paradigm to an explanation paradigm, one that is supported by replicable empirical evidence. It is best to demonstrate how to find the links between short-term behavior and long-term consequences, to let people "see for themselves" as much as you can. Our society has become more complex, making it difficult to see the connections. Much of the study of the world, in science and the humanities, has become equally complex: full of impenetrable jargon and statistics. We've got to make explanations clearer, better educate ourselves so that we have some basic fluency in these languages, and support an education system that helps students understand how to find connections for themselves. Yes, we live in an extremely complex world, but the good news is that we've just scratched the surface of how technology can be used to explain concepts, patterns, and connections to large numbers of people in a customizable, individualized way, for free. Behavioral scientists and theorists might be at the forefront of finding patterns in behavior across time, but they can't maintain the trust of the public unless the public can see for themselves.

Not only can the public see for themselves, but maybe they can do the restricting themselves, too. We've all got a conscience. We just don't have the societal restrictions to assist it, and physically/temporally proximate temptation makes it harder to listen to that voice. We're not all tempted by the same thing, so the restrictions really shouldn't be one size fits all. If people can design their own restrictions, you avoid the possibilities of reactance and the totalitarian manipulation of taste that inspires it.

So, I'd like to run an outside-the-lab experiment to provide evidence that supports my dissertation hypothesis, but I'd also like people to be able to try the experiment on themselves, to plug in their own individual variables.

Monday, May 30, 2011

The other privacy setting on your Facebook account


After attending another stimulating International Communication Conference, I've been thinking a bit more about the issue of privacy and everyday use of social media such as Facebook. It's one of those issues that seems to interest nearly everyone: not just theorists, but teachers, parents, even teenagers.

I have problems with the popular narrative that privacy is a human right and that it is eroding or otherwise disappearing in the age of networked selves (I fully articulate most of these problems in an earlier entry). To echo Ron Rice's argument in 'Artifacts and Paradoxes in New Media', I think we just assumed that the level of privacy we knew was somehow the natural state of things, and that we only tended to think about privacy when it was obviously breached, not about how it was constructed with the help of older technologies in a certain way, to hide certain information from certain people. One example of many: having a private conversation required certain kinds of architecture: small rooms in big structures with sound-proof walls. I suppose these technologies preserved some of the privacy we'd grown accustomed to when we weren't living in tightly packed urban centers. Still, it seems likely that, as individuals and as groups, we would have a strategic advantage in achieving our own ends and outwitting our competitors if we were able to trade information with allies without our enemies being able to overhear, and that this (along with the simultaneous drive to survey our environment for threats or resources) drives innovation in privacy-enhancing and privacy-destroying technologies. While I'd agree that anyone totally robbed of privacy would be at a distinct disadvantage compared to those doing the robbing, and that this imbalance between those with and those without privacy must be avoided, talking about privacy as if it were a fundamental human right implies that it is somehow absolute, or that, once upon a time, we actually had the ability to communicate with many others and keep those communications hidden, or that such a world could or should exist, which I don't buy.

The most compelling way to talk and write about technology and privacy would be to reveal the ways in which technologies unintentionally and subtly enhance or erode personal privacy. This got me thinking about Facebook. Most of the public debate over Facebook and privacy has been about its settings: how customizable they are, how easy they are to use. The mere existence of buttons that you click to change your level of privacy stops many users from thinking about other ways the technology can be used to alter your level of privacy. In particular, I'm thinking of the number of friends one chooses to have. As that number creeps up, the photos and the status updates you post gradually become more public and less private. The more gradually this happens, the harder it is to notice. It would be interesting to know the rate at which people acquire (or get rid of) friends on Facebook, whether this rate is associated with how aware they are of their level of privacy, and whether that awareness affects their behavior in any way.

More generally, this is about technologies of self-performance that allow us to set our privacy or "reach" at one point, letting us grow accustomed to one imagined audience. After we adjust the settings, we think "okay, I know who I'm performing to and I'll tailor my performance to that group", even though social media audiences are always changing in unpredictable ways. Even this blog and others like it are written with one imagined audience in mind, but then that audience changes in ways that would be very difficult for the author to predict, encompassing people from various spheres of the performer's life.

As always, changing the technology can help us out of this bind. Maybe there could be a little graphic representation on every social media platform that lets the author know what kinds of people are in the audience and how that is changing over time. It would take into account each audience member's current occupation and place on a professional network (a color-coded "status" marker) and their social distance from you, based on the frequency of interactions between that person and other people you know. You could probably find that information if you really dug around for it (Google Insights and the like offer good data on web audiences), but what you need is something that is up in the face of the performer, just as it would be in the real world when you enter a room and start speaking. Technology will always change our levels of privacy. Instead of stopping it or trying to reverse it, the best we can do is to make it explicit.
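Here is a rough sketch of what such an indicator might compute under the hood. It is entirely hypothetical; the field names, spheres, and thresholds are invented for illustration. The idea is to bucket the current audience by sphere of life and by how often you actually interact with each person, and to surface how those proportions drift over time.

from collections import Counter
from dataclasses import dataclass

@dataclass
class AudienceMember:
    name: str
    sphere: str                    # e.g. "work", "family", "college friends" (hypothetical labels)
    interactions_per_month: float  # how often you actually exchange messages, comments, etc.

def audience_snapshot(members, close_threshold=4.0):
    # Summarize who is actually watching: counts per sphere of life,
    # plus the share of the audience you rarely interact with.
    by_sphere = Counter(m.sphere for m in members)
    distant = sum(1 for m in members if m.interactions_per_month < close_threshold)
    return {
        "size": len(members),
        "by_sphere": dict(by_sphere),
        "share_distant": distant / len(members) if members else 0.0,
    }

audience = [
    AudienceMember("A", "work", 1),
    AudienceMember("B", "family", 12),
    AudienceMember("C", "college friends", 0.5),
]
print(audience_snapshot(audience))

Comparing snapshots taken a few months apart would show the performer exactly the drift described above: an imagined audience of close friends slowly becoming an audience of distant acquaintances and colleagues.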

Friday, April 29, 2011

Television is not a toaster with pictures, and Facebook circa 2005 is not Facebook circa 2015


Most people have likely formed an opinion about media effects: the degree or type of effects that certain types of media use have on us. They likely talk about this in terms of media technologies (TV, radio, internet) or types of media texts (violent video games, Fox News). I think that we're using an outdated way of talking and thinking about the question of media effects.

We know that we spend more time using media - at home, at work, on the move - than ever before. And we know that there is a greater quantity of media options (in terms of content options and affordances of the technologies or applications) than ever before. I think that these two facts, by themselves, should prompt everyone (even skeptics) to reconsider the degree and type of media effects, and the ways in which we go about assessing these effects.

First, things have changed on the level of media technology. The question of media effects arose at a time when the types of media (identifiable by their affordances) were limited. Producing and distributing a widely used communication technology that was functionally different from another communication technology took a lot of capital. Once the vast broadband and mobile online networks were established, it became significantly easier to create and distribute applications that, functionally, differ from other communication technologies. Much the same way the establishment of the electrical network permitted growth in the variety of machines in our lives and the establishment of the highway and railway systems permitted the growth of transportation, this network growth has increased diversity. Before the establishment of the electrical grid, you could make plenty of generalizations about machines and their effects on humanity because the number of widely used machines was quite limited. Afterward, those generalizations didn't make as much sense given the variety of machines that people used. Was the effect of a television the same as that of a toaster? Using pre-electrical-network logic, the answer was yes.

Lumping all the uses of the internet or all the uses of Facebook together creates a similar problem. We still want to use the frames that were established by scholars and researchers during the 20th century for understanding media now. We talk about (and study) the effects of Facebook, of the internet, of texting, in the same way we talked about the effects of television 30 years ago: as if these things were discrete entities. But this approach doesn't make sense in a world in which the media forms to which we give names quickly change in fundamental, functional ways. The internet of 2011 is likely no more similar to the internet of 2000, in terms of its uses and effects on users, than radio (or a toaster, for that matter) is to the internet of either era. Yes, some useful things can be said about the effects of all online experiences, just as some useful things can be said about the effects of the use of all machines (studies of modernity), but most of these general ways of thinking are just reflections of a time when "machines" or "online experiences" were easier to generalize about, before the network made new technologies and texts easier to create and distribute.

There is a similar problem with the way we think about the effects of types of content. If we talk about the possible effects of a particular film or television show, we do so because we believe that the principles at work are generalizable beyond that one text and that one audience. A film scholar writes about their experience viewing a certain film, but implicit in their writing is the assumption that other people will experience the film in a certain way and/or that other people will experience other similar films in a certain way. When the number of texts explodes, as it has done for the past several years, the question of what one particular person (or type of person) does with one particular text reveals less about the overall experience of media consumption. This just wasn't a problem when the number of texts was lower, when readers could be expected to have seen "classic" texts.

This isn't to say that some media texts won't still be experienced by large numbers of people over long periods of time. Our thirst for common experiences will ensure the persistence of canons, but these canons will consist of a smaller and smaller sliver of our overall media use. Figuring out the effect of the two hours I spend watching Citizen Kane is worthwhile, but what about the effects of the thousands of hours I spend online? How do we go about assessing that? Do we break the texts into genres, like we did before? Do we pretend that Facebook or social networking sites are media technologies in the way that radio or television are media technologies? What happens when Facebook's online movie-watching feature takes off? Will its effect on users be the same as it was 5 years ago? Assuming so seems ludicrous, and yet studies that purport to be about the effects of "Facebook" do not take this into account.

How do we adapt? We identify characteristics of media experiences that cannot be made obsolete by developments in technology and we base our theories on the presence or absence of those characteristics. How many people does a media application connect a user to? How often is it used? Is its use planned or spontaneous? What emotions is its use typically associated with? What parts of the brain light up when we're using it? What are the gratifications sought by the users? If you look at technology this way, you can make sense of the effects of television and toasters without having to conflate the two, and without having to re-invent your theories when the next contraption is invented. These approaches to media effects seem suited to an era in which there will be too many different texts and media technologies for any human to keep track of.
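To make this concrete, here's a minimal sketch in Python (my own illustration, with invented fields, labels, and numbers) of what characteristic-based measurement could look like: each media session gets coded by durable properties rather than by the brand name of a platform.

```python
# A minimal sketch (not from the post itself) of "characteristic-based"
# measurement: each media session is coded by durable properties rather
# than by the name of the platform. All fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class MediaSession:
    label: str              # e.g. "feed scrolling" (illustrative only)
    audience_size: int      # how many people the use connects you to
    minutes: float          # duration of the session
    planned: bool           # planned vs. spontaneous use
    dominant_emotion: str   # self-reported, e.g. "amusement", "boredom"
    gratification: str      # gratification sought, e.g. "social connection"

sessions = [
    MediaSession("group video chat", 4, 45.0, True, "warmth", "social connection"),
    MediaSession("feed scrolling", 300, 20.0, False, "boredom", "passing time"),
    MediaSession("movie streaming", 1, 120.0, True, "absorption", "entertainment"),
]

# Analyses can then group by characteristics that survive platform churn,
# instead of treating "Facebook" or "the internet" as monolithic categories.
spontaneous_minutes = sum(s.minutes for s in sessions if not s.planned)
print(f"Unplanned use: {spontaneous_minutes} minutes")
```

The point of coding things this way is that the categories survive platform churn: when Facebook adds a movie-watching feature, the dataset gets new rows, not a new theory.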

Sunday, March 20, 2011

Ironic Liking vs. In-Spite-of Liking


In thinking more about Rebecca Black's popularity, I've started to wonder how anyone gauges whether something is popular and/or well liked, and what that means.

At the most basic level, we have the number of plays or views. The fact that Rebecca Black's video has been played over 20 million times tells us that it is, in some way, popular. Of those viewers, some could be curiosity seekers, some could be people who like the song listening to it repeatedly, and some could be people who enjoy laughing at the song listening to it repeatedly. The story surrounding Black's song indicates that most viewers do not like the song in the way that most people earnestly "like" what they like.

It takes very little effort and no money to click on a link and watch a YouTube video. The same cannot be said for downloading a song on iTunes. So, which is more likely: that tens of thousands of people earnestly like a song that many people find horrible, OR that tens of thousands of people are willing to pay money to listen to something that they do not earnestly like?

What do I mean by "earnest"? People talk a lot about liking something "ironically." But I feel like this is the wrong term. "Ironic" implies that they like something for the opposite reason than one would expect them to like it. But are those who grow "ironic" mustaches really growing them for reasons opposite to those of the earnest mustache-growers? What would that even mean? The term "ironic" just doesn't seem to convey anything meaningful. It doesn't help us understand why people buy or do or wear things that are inconsistent with their apparent tastes and likings. The term "parody" might be used to describe such actions, but I feel like that's inadequate, too. "Parody" implies that they are trying to get a person to laugh at that which is being parodied. It's commonly understood as a way to ridicule something. In some cases, I can understand how this might be true of a hipster growing a mustache or of someone who finds Rebecca Black's voice to be grating and her lyrics to be awful: they're growing/listening to these things to make fun of, and feel superior to, those who earnestly do/like them. For them to do this, there must be a receptive audience, someone to share in the joke. If they were in the exclusive company of earnest fans or mustache-growers, I can't imagine that they would keep up the parody for very long. But if they're surrounded by others who feel similarly toward people who grow earnest mustaches and earnestly like Rebecca Black's music (or that type of pop music), then parodic liking is a way of bonding, of signaling that you're part of a group.

I'd like to suggest that something else is going on: In-spite-of liking. This means that someone hates a certain aspect of a show, movie, song, piece of clothing, or famous person but likes another aspect of it. All of these things are composed of many elements, but most of the ones that are liked in this fashion are somehow un-self-conscious, nakedly attention-getting, unapologetic, and unsubtle. Perhaps people long for these characteristics, and if they happen to be packaged with something that the user does not like - misogynist lyrics, a nasal voice, rampant consumerism, a lifestyle that one cannot identify with - they're willing to overlook those things in favor of the characteristics they like. Maybe they also identify with the fact that these people are proud and hated by many. Unlike parody, this could take place in a vacuum. If I liked the beat of Rebecca Black's song and found it catchy, despite the fact that I thought the lyrics were inane, that it was irresponsible to mock a 13-year-old, and that her voice was grating and nasal, I might still listen to (or even download) the song without having to do this in front of anyone.

I wouldn't expect anyone to be able to articulate these feelings, but that doesn't make them any less real. People might just say "I like it, the end." So if we want to understand what predicts liking, we might have to move away from self-report and look for patterns of liking that diverge from traditional models, instances we have usually called "ironic liking." If we isolate each characteristic and ask whether someone is liking something as a kind of performance for the benefit of others, we can better understand this phenomenon.

Post-script: Amazingly, 10-year-olds seem to grasp the concept of ironic liking. This focus group also indicates that the song skews young: people under 10 seem to like it more than teens do.

Saturday, March 19, 2011

The Fate of Rebecca Black


In just over a week(!), unknown teenager Rebecca Black's quasi-home-made music video "Friday" went from 3,000 views to 22 million views. Like many viral sensations, there was a catalyst in the form of online opinion leaders (i.e. bloggers with connections to mainstream media): popular blogger Tosh.O got the ball rolling with a re-post of the video from The Daily What. It's hard for anything to "go viral" without these influential bloggers drawing the material to the attention of their audiences, which draws the attention of mainstream media, which draws the attention of your mother. But does this text have any characteristics that set it apart from things that don't go viral, characteristics it shares with other things that have become popular?

One possible precedent is the meteoric rise of YouTube pop sensation Justin Bieber. Commonalities: they happened at roughly the same time, meaning that the relationship among small-time teen artists, YouTube, bloggers, the mainstream media, and the general audience was roughly the same during both phenomena. They're both producing pop music in the style that is popular at this time. Here, one might engage in subjective judgment of the merit of their music, but that is where the conversation about what goes viral ceases to interest me. Maybe Bieber's talented, maybe he isn't. If we're interested in figuring out why both Bieber and Rebecca Black got popular quickly on YouTube, we have to look at characteristics of the video texts in relation to their reception, which - conveniently enough for anyone interested in understanding media, meaning, and effect - is captured on the YouTube video itself, on Twitter, and in the blogosphere.

Both Bieber and Black inspire diametrically opposed reactions: you either love them or you hate them. Though it's hard to be certain of the identity of anyone online, judging by the preferences of those who leave comments (which, in another boon to media researchers, are easily available, searchable, and analyzable), the loving/hating of both seems to be determined by gender, age, and some personality trait (anti-authority or anti-social, perhaps). The loving/hating of these artists seems more personal than the loving/hating of artists who rise through the traditional star-making machinery of Hollywood. Those stars are either conditioned to speak and behave in a certain way or are selected based on their match to very specific popular archetypes. They feel remote and unrelatable in a way that overnight sensations do not.
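As a rough illustration of how "analyzable" these reactions are, here's a toy Python sketch (my own, not a real study; the keyword lists and sample comments are invented) that buckets comments into love and hate reactions before one would relate them to commenter attributes like stated age or gender.

```python
# A toy sketch (illustrative only) of bucketing publicly visible comments
# into "love" and "hate" reactions. Keyword lists and sample comments are
# invented; a real analysis would use far better text classification.
LOVE_WORDS = {"love", "amazing", "catchy", "talented"}
HATE_WORDS = {"hate", "awful", "worst", "annoying"}

def classify(comment: str) -> str:
    words = set(comment.lower().split())
    love_hits = len(words & LOVE_WORDS)
    hate_hits = len(words & HATE_WORDS)
    if love_hits > hate_hits:
        return "love"
    if hate_hits > love_hits:
        return "hate"
    return "ambivalent"

sample_comments = [
    "this song is so catchy i love it",
    "worst song ever and her voice is annoying",
    "why is this on my feed",
]
for c in sample_comments:
    print(classify(c), "->", c)
```

Crude as it is, even this kind of sorting would let you ask whether the lovers and the haters look demographically different, which is the question the comment threads seem to raise.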

But Black might actually have more in common with a certain brand of reality TV, of which Jersey Shore is the most salient example. These texts have at least two kinds of viewers: those who identify aspirationally with some element of the characters' behavior (usually their bravado in the face of haters, or their unchecked hedonism during a buzzkill economy) and those who love to hate them and/or laugh at them, feeling superior to them. Additionally, there are people who, paradoxically, occupy both ends of that spectrum, liking the characters "ironically." There's a genius to this kind of entertainment: by incorporating both lovers and haters of these people, the producers double their audience AND they get people talking online, which is necessary for marketing purposes in the age of social media. As one of my professors says, TV exists to give people something to talk about, so these texts are popular because they're ways for us to talk about taste, class, appearance, values, and pretty much any other element of human behavior.

There's also the question of authenticity. Black seems to be authentic in her performance (as opposed to being purposefully bad in a parody style), given her age and her earnest appearances on mainstream media outlets. But the authenticity of those millions of people who watch the video is in doubt. Judging by a lot of the comments on YouTube and the mainstream media story, most people are laughing at Black, but it's conceivable that there could be some other earnest 13-year-old girls who honestly like the song and, out of empathy, are un-ironic fans of hers. If you hate her, you're saddened and alarmed that so many people can have such horrible taste in music. If you love her, you're saddened and alarmed that people can be so cruel as to laugh at a 13-year-old. In any case, she seems to be making some money off it, money that, properly invested, will still be hers when we find someone else to laugh at. And she's making showbiz connections. The narrative will probably go something like this: mean internet beats up 13-year-old girl, Bieber or Usher or some famous dude jumps to her defense, voice coach tells her how to sing slightly better, young teen girls see her as an empowering figure, ??? = profit!

This still leaves me with one question, one that I hope many people are asking as more cewebrities pop up: does it matter how you get famous? I can't help but think of Star Wars Kid, who ended up in therapy after so many people made fun of him on the internet. Sometimes, the social mediasphere can be an extension of junior high. There's an endless supply of 12-to-15-year-olds doing something embarrassingly earnest and, mistakenly, recording it. These are the next Rebecca Blacks. And the conversation about the effects of Rebecca Black's rise and reception doesn't end with whether or not she lands on her feet. Maybe some girl, too young to understand exactly how people get famous online, will see that Rebecca Black is famous and met Justin Bieber, so she'll make a similar video, but it won't go viral. It'll just get passed around her middle school, she'll be made fun of, and she'll develop an eating disorder. Maybe this will be the 21st-century equivalent of the emotional wreckage of the would-be starlet turning tricks on Sunset Strip.

As audiences, we seem to always be hungry for someone to serve as a topic of debate. By tearing them down and building them up, we're helping create the rise-and-fall/fall-and-rise stories that have always kept audiences rapt. As performers, we seem to be stuck between killing ourselves when someone speaks ill of us and being way too proud of our shitty music or personality, impervious to criticism (haters gonna hate!). Those wondering how our culture will get past this period would do well to follow Rebecca's story as it unfolds.

Sunday, March 06, 2011

The Pecker at the Party: Immediate gratification, social media, and social situations


Imagine you are at a cocktail party. Most individuals are talking to one another while some are pecking at their smartphones. One of the guests at the party confronts a “phone-pecker”, explaining that she finds this behavior to be rude and cannot understand why he is engaging in what appears to be “anti-social” behavior at a party. The “phone-pecker” retorts, claiming that he is actually being quite social, just with people who do not happen to be in the room at that moment. He is looking at photos of friends on Facebook, answering emails and sending text messages, reading tweets. How, he asks, is his socializing any different from (or inferior to) the socializing that is happening at the cocktail party?

This situation - in which an apparently solitary individual stares at a screen and pecks at the screen or a keyboard or, in some cases, talks to the screen - is increasingly common. More social interaction is mediated - taking place via text messages, emails, social media (e.g. Facebook, Twitter) - and more people are concerned about the implications of this for individuals and society. The cocktail party guest’s identification of this behavior as “rude” suggests a breach of etiquette, but the perception of these acts as rude may only be a symptom of some people’s failure to understand how the technology is being used. Such a failure might be a result of the rapid diffusion of the technology and the predictable failure of social mores to evolve at a similar rate. Older generations lamenting the corrupting influence of new technologies is a habit likely as old as technology itself. Once a technology is sufficiently common, such complaints are seen as the mark of a curmudgeon rather than a legitimate qualm. So the fact that the behavior is seen as rude fails to distinguish between a public act that is simply new and harmless, and will eventually stop seeming rude, and an act that is, in some way, harmful to the social fabric and will always be seen as rude or aberrant.

Before proceeding, it is worth noting that the phone-pecker may not be being social at all. He may be scanning his Facebook feed (the equivalent of looking at other guests at a party while not conversing with them) or checking sports scores. Given the size and orientation of networked mobile devices and the expectation of privacy associated with their use, it is difficult for observers to tell the difference between social interaction on mobile devices and non-social behavior.

Let us assume the phone-pecker is interacting with others via the mobile device. Are there really no differences between the nature of heavily mediated person-to-person relationships (that is, those that take place chiefly via text message, email, Skype, and Facebook) and those that are not as heavily mediated?

Older individuals often marvel at the sheer number of texts that teenagers send in a day. Half of all American teens send over 50 texts per day, according to a 2010 Pew report, and some teens send hundreds per day. This number is not shocking if one considers text messages to be similar in purpose and content to turn-taking conversations. Teens engaged in long turn-taking conversations on land-line telephones and, before that, in person. The length of statements in such conversations was and is likely to be short, no more than the 160 characters allowed by most SMS (i.e. texting) services. So the length of individual statements and the frequency of text messaging and social media use, while initially seeming alarmingly different, do not differ significantly from other existing forms of communication, mediated or not.
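Here is the back-of-envelope arithmetic, with illustrative assumptions of my own (apart from the SMS character limit, none of these numbers come from the Pew report):

```python
# Back-of-envelope arithmetic (illustrative assumptions, not data from Pew):
# how much "conversation" do 50 short texts amount to?
texts_per_day = 50
chars_per_text = 160          # typical SMS character limit (upper bound per text)
avg_chars_per_word = 5        # rough English average, including the space
speaking_words_per_min = 130  # conservative conversational speech rate

max_words_texted = texts_per_day * chars_per_text / avg_chars_per_word
equivalent_minutes_of_talk = max_words_texted / speaking_words_per_min

print(f"Upper bound: ~{max_words_texted:.0f} words per day by text")
print(f"Roughly {equivalent_minutes_of_talk:.0f} minutes of spoken conversation")
```

Even if every text maxed out the character limit, 50 texts works out to something like a dozen minutes of spoken conversation per day, which is hardly an alarming amount of chatter for a teenager.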

What about the people participating in the interaction? There has been fear surrounding social media and one-to-one technology (dating back to the introduction of the telephone) that remote communication devices would facilitate relationships between vulnerable populations (e.g. children) and those seeking to take advantage of these populations (e.g. sexual predators, advertisers, etc.). Though there have been instances of such behavior, the bulk of mediated social interaction (on Facebook and in text messaging, if not on Twitter, which, in any case, operates more like a micro-broadcast medium akin to blogging and less like a rapid-interaction application) is still between parties that are acquainted with one another from non-mediated social worlds like school, work, and get-togethers in real-world locations like bars and parties. In other words, the people on the other end of the phone-pecker’s pecks are likely to be similar, in terms of their connections to the phone-pecker, to the other guests at the cocktail party. So it seems, again, that what initially appeared to be different may not be that different at all.

But there's another possible area of difference: the nature of the relationship. The maintenance of relationships, like any other endeavor, is made up of acts that are immediately enjoyable but do not pay long-term or collective dividends (being able to complain about work to a friend, flirting with a married co-worker, playing a game of basketball with a friend) and acts that are not immediately enjoyable but do pay off in the long run (changing a diaper, attending a boring work meeting, discussing a difficult topic with a spouse). This isn’t to say that many (if not most) social interactions aren't both immediately gratifying AND helpful in fostering long-term gains for the self and others (great sex, great parties, enjoyable collaborative work), but some relationship maintenance, like many other things in life, is not immediately gratifying and only pays off in the long run. An overindulgence in immediately enjoyable social interactions and a failure to engage in any other kind of social interaction may lead to shorter-duration relationships as individuals more frequently (and accurately) accuse one another of being selfish. A failure to think about one’s long-term goals in relationships goes hand in hand with being self-centered or selfish.

Throughout most of human history, our relationships were constrained: by the surveillance and judgment of others, by geography, by class, by time. The proliferation of networked communication devices allows us to (at least partially) remove these constraints. For some, this is a good thing: relationships hindered by repressive regimes are allowed to flourish online. For others, I think it is not. These constraints often kept immediately gratifying interactions at a distance, forcing us to talk with co-workers about work when we would rather be flirting with our partners, to talk with our parents when we would rather be talking with our long-time friends, to converse with our long-time friends when we’d rather be talking to someone new and exciting, to talk to our spouses when we’d rather be flirting with a co-worker. The constraints were shaped by the randomness of geographic distribution and by the history of institutions developed by those in power, and in those cases, circumvention of such constraints is, ultimately, a good thing. But either by accident or by design, those constraints kept us from indulging our every social whim.

Again, this isn’t to say that individuals without networked communication technology did not indulge in selfish interactions that were in neither their nor anyone else’s best long-term interest. However, when you make immediately gratifying options more easily accessible, you make them more likely to be chosen, particularly by those low in self-control. Why would the choice of whom to interact with and what to converse about be any different from any other decision (in which the temporal and spatial proximity of temptations makes them more likely to be chosen)?
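One way to picture this, as a sketch only (my framing, with invented values rather than anything measured): treat each potential interaction as having an immediate appeal, an effort cost, and a long-term payoff, and assume that lowering the effort of the tempting option shifts choices toward it, especially when self-control is low.

```python
# A rough sketch (illustrative, not a model from the post) of how accessibility
# and self-control might shape which conversation gets chosen. The values,
# option names, and functional form are all invented for illustration.
import math

def choice_probabilities(options, self_control):
    # Effective value = immediate appeal discounted by effort,
    # plus long-term payoff weighted by self-control.
    scores = [
        (appeal / (1.0 + effort)) + self_control * long_term
        for appeal, effort, long_term in options
    ]
    total = sum(math.exp(s) for s in scores)
    return [math.exp(s) / total for s in scores]

# (immediate appeal, effort/accessibility cost, long-term payoff)
flirt_by_text = (8.0, 0.1, -2.0)   # one tap away
talk_finances = (2.0, 1.0,  5.0)   # effortful now, valuable later

for sc in (0.2, 1.0):
    p_flirt, p_talk = choice_probabilities([flirt_by_text, talk_finances], sc)
    print(f"self-control={sc}: P(flirt)={p_flirt:.2f}, P(talk finances)={p_talk:.2f}")
```

The exact functional form doesn't matter much; the point is that accessibility does some of the work we usually attribute to character.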

To return to the phone-pecker, it is possible that he simply prefers talking to someone who is not at the party. The pecker was invited to the party because of an assumed mutual interest in interacting with those at the party, but given his inattention to the other partygoers, he cannot count on this assumption lasting very long. He uses technology to develop and nurture several intense, mutually beneficial relationships with co-workers (frequently answering work emails), romantic partners, and close friends (frequently texting at parties), but ignores those very same people at other times so that he can communicate remotely with others. When spending time with his romantic partner, he frequently converses with friends through text and answers email, neglecting his relationship with his partner. When at work, he frequently blows off work to flirt with his partner. It isn’t that he ignores certain people, but that, given the choice between one conversation and another, he chooses the more immediately interesting conversation.

By doing this, he may be training himself to become intolerant of social situations that are not immediately gratifying, becoming accustomed to the immediately gratifying interactions provided by networked communication devices. At a party, this is relatively harmless, though he might find himself invited to fewer parties in the future. But if he bows out of important discussions/arguments with his partner before they can be resolved, if he pays more attention to work emails and less attention to his children, if he talks to his partner more than to his co-workers about work, or flirts with his sexy co-worker more than he talks to his partner about their finances, eventually his relationships with all parties will suffer from lack of attention.

Tuesday, February 22, 2011

Is vlogging a female medium?


Something in an interview with cewebrity Magibon on Know Your Meme got me thinking: are there more female vloggers than male ones, and if so, why might that be? Magibon says that in Japan, most males do not go on video and, if they do, they do not show their faces. In my casual perusal of home-made videos from Japan on YouTube, I've found this to be true, and it wouldn't surprise me if it were true in the US as well.

Why might a gender difference exist in online self-expression? First off, a disclaimer: any difference we might observe is as likely or more likely to be a product of cultural expectations of gender roles than of some inherent difference between the sexes. Having said that, it's possible that young females believe they can gain status by gaining attention, and one way to gain attention is to use their looks. Perhaps many males, here and abroad, do not enter this entertainment arena because, traditionally, males do not derive their cultural worth from showcasing their looks to the extent that females do. Perhaps males fear some sort of permanent tarnishing of their professional image. Perhaps they fear that employers won't take them seriously when they find their rather silly video blog. Young females, not having as much to lose in the traditional professional world (or at least not anticipating that they will when they get older), jump right in and start vlogging.

The result is a medium dominated by female producers, but is this content created for a female audience? Probably not as much as, say, the content of the female blogging community. Take the looks out of the picture and, I would imagine, you take away a good-sized portion of the young male audience. It's worth rethinking how we identify authorship for YouTube and vlogging. Are females really empowered when they have to cater to a male audience (a young, hetero male audience fixated on looks)? Then there are those wildly popular make-up tutorial videos created by women for women. Even when both the audience and the creators are women, the content seems to be ultimately geared toward pleasing men (albeit indirectly). This just doesn't seem to be true of the female blogosphere, and I think most of the difference has to do with looks.

Magibon made it seem as though males "can't compete" with females in the user-generated video arena, that it would, in some sense, come to be dominated by women. But how dominant are these young women?

Thursday, February 03, 2011

Will the Revolution be Tweeted?


I just went to a terrific, timely talk by visiting professor Michael Dobbs on social media's role in revolutions. It was a rare treat to be in a room with well-informed experts on media and politics and discuss something that was going on right at that moment. Dobbs gave many examples of the popular press claiming that social media, Twitter in particular, had precipitated the successful revolution in Tunisia and the still-in-progress revolution in Egypt. He brought up examples of techno-utopian views on the subject (e.g. Clay Shirky's TED talk on the revolutionary power of Twitter) as well as rebuttals such as Malcolm Gladwell's piece for the New Yorker.

One question that came to mind during his talk was: when assessing whether or not Twitter and social media are capable of facilitating revolutions, to what are we comparing them? And by "we," I mean the users, the public, the press, the critics, anybody. Gladwell essentially compares Twitter and Facebook groups to real-world activism. I think he does this because he believes (as do I) that a fair number of social media activists think of their tweeting or Facebook-group-founding as more similar to participation in a protest than to sitting at home passively in front of a television screen and watching it all unfold. They think they're being active, but Gladwell points out that they ultimately have little real power because, unlike civil rights protesters or other activists who actually changed our world, they are not making any real sacrifice; they are not risking much; they are not forming lasting bonds with people for whom they would make some real sacrifice (e.g. giving a large sum of money, risking one's life, etc.). That, Gladwell says, is the key difference between real-world protesters and virtual ones: shared sacrifice.

I basically think of online activism the way I think about online "friends": instead of just saying, "online _____ is no substitute for the real thing!", I think it presents us with an opportunity to pull apart the real-world phenomenon and ask which parts of it are duplicated by the online proxy, which parts aren't, and how those parts matter for outcomes of interest. So, what is the point of protesting?

First, it's a way to devalue the ruling party's monopoly on physical force. If enough people get out there in the town square and don't back down after being physically threatened and assaulted, then the power to threaten and assault loses its meaning. There's also a "softer" power of protesting crowds: they can choose not to vote for someone, they can choose not to spend their money somewhere. Even if they don't live in a democracy, they can make it even more glaringly apparent to the outside world that they're living in a country that isn't even remotely democratic. Dobbs seemed skeptical that this could result in the overthrow of a repressive regime. It didn't work in Poland, it didn't work in Iran, and it probably wouldn't work here, not unless people make very real, significant sacrifices, ones that can't be made online.

Dobbs essentially said that Twitter is a way to share information. Like pamphlets distributed in other revolutions, it is a necessary but not sufficient condition for revolution. Other things need to be in place: long-standing, easy-to-grasp grievances, for one. But I don't think social media is just a way to get information out. The difference has to do with social pressure. Television was and is quite good at presenting the spectacle of many people behaving a certain way. Combine this spectacle with many little nodes on a network behaving or expressing ideas in the same way and you've got the appearance of consensus, which is a powerful tool.

This gets me to the idea of a tipping point. There is a point at which the contagion of an idea rapidly speeds up, a point at which it seems like "everybody" is buying the same t-shirt or saying the same catch-phrase or using Facebook. These are all fairly benign trends, which makes sense. Buying a certain kind of t-shirt doesn't involve much sacrifice: they're all pretty much the same to you, and if this one is popular, even if it's a little more expensive, maybe you should buy it. Putting your life on the line, uprooting your family, or risking your livelihood (all things that may be called for if you're participating in a revolution) involves significant risk. But those arguing against the revolutionary power of social media miss a key point: social pressure can convince you to make great sacrifices under the right circumstances.

Here's how I think it works. At the start, you need a group of people who are very similar to you, the Twitter user, and they have to be engaging in some activity that you were pretty close to doing yourself. Then you might be convinced to do what they're doing. As the number of people doing that thing grows, it starts to matter less and less how similar those people are to you and how predisposed you were to act that way in the first place. When it comes to social pressure, there is an effect of sheer numbers. It doesn't replace or cancel out those other effects, but as the number grows, the effects of similarity and predisposition lessen. If you give people who want a revolution a sense that they can pull it off because there are so many other young, unemployed, pissed-off men who are ready to risk jail or a beating, then it's more likely to happen than if you didn't give them that sense. This is something that pamphlets or television alone cannot do.
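To make the sheer-numbers intuition concrete, here's a minimal simulation in the spirit of classic threshold models of collective behavior (my own illustration, not something from the talk; the population size, thresholds, and seed group are invented): each person joins once they can see enough others already participating.

```python
# A minimal threshold-model sketch (illustrative numbers only): a few highly
# predisposed people act first, and each additional person joins once the
# visible count of participants meets their personal threshold.
import random

random.seed(1)
POPULATION = 1000

# Ten people with threshold 0 get things started; everyone else needs to see
# somewhere between 1 and 400 participants before joining.
thresholds = [0] * 10 + [random.randint(1, 400) for _ in range(POPULATION - 10)]

participants = 0
while True:
    # Everyone whose threshold is met by the current visible count joins.
    now_in = sum(1 for t in thresholds if t <= participants)
    if now_in == participants:
        break
    participants = now_in
    print(f"Visible participants: {participants}")
```

The cascade only gets going because each round's count is visible to everyone with a slightly higher threshold; take away that visible count - the thing social media provides and pamphlets alone don't - and the high-threshold majority never moves.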

At the end of Dobbs's talk, another professor pointed out how many billions of tourist dollars Egypt is losing each day. China's economy can withstand a repressive government, but it doesn't look like this one can. We'll just have to wait and see.