Reading Jill Lepore's review of Michael Patrick Lynch's new book, The Internet of Us, reminded me to write something on the topic of truth. I haven't read Lynch's book yet, but even the sub-title ("Knowing more and understanding less in the age of big data") gave me that all-too-familiar twinge of jealousy, of feeling as though someone had written about an idea that had been gestating in my mind for years before I had the chance to write about it myself, of being scooped. So, during this brief lacuna between the time at which I learned of this book's existence and the time at which I actually read it, let me tell you what I think it should be about, given that title. That is: how is it possible that the Internet helps us to know more and to understand less? Or, to take Lepore's tack, what is the relationship between the Internet/Big Data and the truth/reality?
At this stage, I have only a semi-organized collection of ideas on the topic. I'll base each idea around a question.
To what extent has truth (or reality) become subjective in the age of the Internet/Big Data?
I think we vastly overestimate the extent to which the Internet has fragmented our sense of truth and/or reality. And by "we," I mean most people who think about the Internet, not just scholars or experts. My sense is that it is a commonly held belief that the Internet allows people access to many versions of the truth, and also that groups of people subscribe to the versions that fit their worldviews. This assumption is at the core of the "filter bubble" argument and undergirds the assertion that the Internet is driving the fragmentation and polarization of societies.
I contend that most people agree on the truth or reality of most things, but that we tend not to notice the things we agree on and instead focus on the things on which we do not agree. Imagine that we design a quiz about 100 randomly selected facets of reality, without cherry-picking controversial topics. The items could be as pedestrian as: "What color is the sky?"; "If I drop an object, will it fall to the ground, fly into the sky, or hover in the air?"; "What is 2 + 2?" I'd imagine that people would provide very similar answers to almost all of these questions, regardless of how much time they spend on the Internet. Even when we do not explicitly state that we agree on something, we act as though we believe a certain thing that other people believe as well. We all behave as if we agree on the solidity of the ground on which we walk, the color of the lines on the roadways and what they mean, and thousands of other aspects of reality in everyday life.
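To put rough numbers on that thought experiment, here is a minimal simulation sketch; the 95/5 split between mundane and contested items and the agreement rates are assumptions I've made up purely for illustration, not data.

```python
import random

random.seed(0)

N_PEOPLE = 1000
MUNDANE_QUESTIONS = 95    # "What color is the sky?"-type items (assumed count)
CONTESTED_QUESTIONS = 5   # hot-button items (assumed count)

def answer(contested: bool) -> str:
    """Simulate one person's answer: near-consensus on mundane items,
    a roughly even split on contested ones."""
    if contested:
        return random.choice(["A", "B"])            # ~50/50 split
    return "A" if random.random() < 0.98 else "B"   # ~98% agreement

def agreement(contested: bool, n_questions: int) -> float:
    """Average share of answers that match each question's majority answer."""
    matches = 0
    for _ in range(n_questions):
        answers = [answer(contested) for _ in range(N_PEOPLE)]
        majority = max(set(answers), key=answers.count)
        matches += answers.count(majority)
    return matches / (n_questions * N_PEOPLE)

overall = (agreement(False, MUNDANE_QUESTIONS) * MUNDANE_QUESTIONS +
           agreement(True, CONTESTED_QUESTIONS) * CONTESTED_QUESTIONS) / 100
print(f"agreement across the full 100-item quiz: {overall:.1%}")
# Even with five questions splitting people 50/50, overall agreement stays near 96%.
```

The handful of contested items barely dents the overall agreement rate, which is exactly why they stand out.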
The idea that reality or truth is becoming entirely subjective, fragmented, or polarized is likely the result of us becoming highly focused on the aspects on which we do not agree. That focus, in turn, is likely the result of us learning about the things on which we do not agree (that is, of us being exposed to people who perceive a handful of aspects of reality in a very different way than we perceive them) and of truth/reality relating to this handful of aspects genuinely becoming more fragmented. Certainly, it is alarming to think about what society would look like if we literally could not agree on anything, either explicitly or implicitly; so, there is understandable alarm about the trend toward subjectivity, regardless of how small and overestimated the trend may be.
So, I'm not saying that truth/reality isn't becoming more fragmented; I'm only saying that part of it is becoming that way, and that we tend to ignore the parts that are not.
It's also worth considering the way in which the Internet has unified people in terms of what they believe truth/reality to be. If we look at societies around the globe, many don't agree on aspects of world history, how things work, etc. Some people in those societies gained access to the Internet and then began to believe in a reality that many others around the globe believe in: that certain things happened in the past, that certain things work in certain ways. Reality and truth were never unified to begin with. The Internet has likely fragmented some aspects of reality and the truth for some, but it has also likely unified other aspects for others.
Maybe I'm just being pedantic or nit-picky, but I think any conversation about the effects of the Internet on our ability to perceive a shared truth/reality should start with an explicit acknowledgment that when people say that society's notion of truth/reality is fragmented, they actually mean that a small (but important) corner of our notion of truth/reality is fragmented. Aside from considering the net effects of the Internet on reality (has it fragmented more than it unified?), we might also consider this question:
What types of things do we agree on?
Are there any defining characteristics of the aspects of truth/reality on which we don't agree? When I try to think of these things (things like abortion, gun rights, affirmative action, racism, economic philosophy, immigration policy, climate change, evolution, the existence of god), the word "controversial" comes to mind, but identifying this category of things on which we don't agree as "controversial" is tautological: they're controversial because we don't agree on them; the controversy exists because we can't agree.
So how about this rule of thumb: we tend to agree on simple facts more than we agree on complex ones. When I think of the heated political discourse in the United States at this time, I think about passionate disagreements about economic policy (what policy will result in the greatest benefit for all?), immigration (ditto), gun rights (do the benefits of allowing more people to carry guns, e.g., preventing tyrannical government subjugation or stopping other armed people from killing, outweigh the drawbacks, e.g., increased likelihood of accidents and increased suicide rates?), and abortion (at what point in the gestational process does human life begin?). These are not simple issues, though many talk about them as if the answers to the questions associated with each issue were self-evident.
I can think of a few reasons why truth/reality around these issues is fragmenting. One is, essentially, the filter bubble problem: the Internet gives us greater access to other people, arguments, facts, and data that can all be used by the motivated individual as evidence that they are on the right side of the truth. In my research methods class, I talk about how the Internet has supplied us with vast amounts of data and anecdotes, and that both are commonly misused to support erroneous claims. One of these days, I'll get around to putting that class lecture online, but the basic gist of it is that unless you approach evidence with skepticism, with the willingness to reach a conclusion that contradicts the one you set out to find, you're doing it wrong. Dan Brooks has a terrific blog post about how Twitter increases our access to "straw men." So, not only does the Internet provide us with access to seemingly objective evidence that we are right; it also provides an infinite supply of straw men with which to argue.
In these aforementioned cases in which we disagree about complex issues, we tend not to disagree about whether or not something actually happened, whether an anecdote is actually true or whether data is or is not fabricated. Most disagreements stem from the omission of relevant true information or the inclusion of irrelevant true information. We don't really attack arguments for these sins; we tend not to even notice them, and instead talk past each other, grasping at more and more anecdotes and data (of which there will be an endless supply) that support our views.
If it is the complex issues on which we cannot agree, then perhaps the trend toward disagreement is a function of the increasing complexity and interdependency of modern societies. Take the economy. Many voters will vote for an elected official based on whether or not they believe that the policies implemented by that official will produce a robust economy. But when you stop and think about how complex the current global economy is, it is baffling how anyone could be certain that his or her policies would result in particular outcomes. Similarly, it is difficult to know what the long-term outcomes of bank regulations might be, or of military interventionism (or the lack thereof). Outcomes related to each issue involve the thoughts, feelings, and behaviors of billions of people, and while the situations we currently face and those we will face in the future resemble in some ways situations we've faced in the past (or situations that economists, psychologists, or other "ists" could simulate), they differ in many others that are difficult to predict (that's simply the nature of outcomes that involve billions of people over long periods of time). And yet we act with such certainty when we debate such topics! Why is that? This leads to my last question:
Why can't we arrive at a shared truth about these few-but-important topics?
First, there is the problem of falsifiability. Claims relating to these topics typically involve an outcome that can be deferred endlessly. For example, one might believe that capitalism will result in an inevitable worker revolution. If the revolution hasn't occurred yet, that is not evidence that it will never occur; only evidence that it hasn't occurred yet. There's also the problem of isolating variables. Perhaps you believe that something will come to pass at a certain time and then it doesn't, and you ascribe the fact that it doesn't to a particular cause, but unless you've made some effort to isolate the variable, you can't rule out the possibility that the cause you identified actually had nothing to do with the outcome.
There are falsifiable ways of pursuing answers to questions relating to these topics. And despite all the hand-wringing about the fragmentation of truth/reality on these topics, there are also plenty of folks interested in the honest pursuit of these answers; answers that, despite the growing complexity of the object of study (i.e., human behavior on a mass scale), are getting a bit easier to find with the growing number of observations to which we have access via the Internet.
The other problem is the lack of incentive to arrive at the truth. Oftentimes, we get an immediate payoff for supporting a claim that isn't true, in the form of positive affect (e.g., righteous anger, in contrast to the feeling of existential doubt that often comes with admitting you're wrong) and staying on good terms with those around you (admitting you're wrong is often inseparable from admitting that your friends, or family, or the vast majority of your race or gender or nationality are wrong). So, there are powerful incentives (affective and social) to arrive at certain conclusions regardless of whether or not these are in line with truth/reality. In contrast, the incentives to be right about such things seem diffuse. We would benefit as a society and a species if we were all right about everything, right?
I suppose some would argue that total agreement would be bad, that some diversity of opinions would be better. But we don't tolerate diversity of opinion on whether or not the law of gravity exists, or whether 2 + 2 = 4. Why would we tolerate it in the context of economic policy? Is it just because of how complex economies are, and that to think you have the right answer is folly? (I suppose that's a whole other blog entry right there, isn't it?) But certainly, even if you believe that, you'd agree that some ideas about economies are closer to or further from the truth and reality of economies. So, perhaps what I'm saying is that if we lived in a society where "less right" ideas were jettisoned in favor of "more right" ideas, we would all benefit greatly, but that the benefits would only come if a large number of us acted on a shared notion of the truth and that the benefit would be spread out among many (hence, "diffuse").
But what if there were an immediate incentive to be right about these complex issues, something to counter the immediate affective and social payoffs of being stubborn and "truth agnostic"? I love the idea of prediction markets, which essentially attach a monetary incentive to predictions about, well, anything. You could make a claim about economic policy, immigration policy, terrorism policy, etc., and if you were wrong, you would lose money.
Imagine you're a sports fan who loves a particular team. You have a strong emotional and social incentive to bet on your team. But if your team keeps losing and you keep betting on your favorite team, you're going to keep losing money. If you had to participate in a betting market, you'd learn pretty quickly how to arrive at more accurate predictions. You would learn how to divide your "passionate fan" self from your betting self. And if you compare the aggregate predictions of passionate fans to the aggregate predictions of bettors, I'd imagine that the latter would be far more accurate. I would assume it would work more or less the same way with other kinds of predictions. People would still feel strongly about issues and still be surrounded by people who gave them a strong incentive to believe incomplete truths or distorted realities. But they would have an incentive to cultivate alternate selves who made claims more in tune with a shared reality.
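To make the incentive mechanism concrete, here is a minimal sketch using the Brier score, a standard proper scoring rule that rewards calibrated probability estimates; the game outcomes and the "fan" and "bettor" probabilities below are invented for illustration, not data.

```python
def brier(pred_prob: float, outcome: int) -> float:
    """Brier score for a single prediction: lower is better."""
    return (pred_prob - outcome) ** 2

# Invented example: 6 games, 1 = the favorite team won, 0 = it lost.
outcomes = [0, 1, 0, 0, 1, 0]

# A passionate fan always gives their team a 90% chance.
fan_probs = [0.9] * len(outcomes)

# A bettor with money on the line reports something closer to the base rate.
bettor_probs = [0.4, 0.6, 0.35, 0.4, 0.55, 0.3]

fan_score = sum(brier(p, o) for p, o in zip(fan_probs, outcomes)) / len(outcomes)
bettor_score = sum(brier(p, o) for p, o in zip(bettor_probs, outcomes)) / len(outcomes)

print(f"fan's average Brier score:    {fan_score:.3f}")
print(f"bettor's average Brier score: {bettor_score:.3f}")
# The fan is penalized for letting loyalty inflate the probabilities;
# with stakes attached to a scoring rule, honesty about uncertainty pays.
```

Under a rule like this (or a real betting market), the loyalty-inflated probabilities cost the fan, while the bettor's more honest uncertainty pays off.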
Of course, not all issues lend themselves to being turned into bets (how would one bet on whether or not life begins after the first trimester?), but it still seems like, at least, a step in the right direction, and gives me hope for how we can understand the truth and our relationship to it in the Internet age, perhaps even better than we did before.
Tuesday, March 15, 2016
Tuesday, January 19, 2016
Perception Becoming Reality: The Effects of Framing Polls and Early Primary Election Results on Perceived Electability and Voting Behavior
National polls (and, in the coming weeks, the results of early primaries) present potentially misleading information about presidential primary candidates' chances of winning the eventual nomination. The actual likelihood depends on several facets of the primary electoral process: how many delegates are assigned by the voters of each state; whether or not a state is winner-take-all; "triggers" and "thresholds" that allocate delegates to particular candidates; and when a given state votes during the process. Add to that the effect of whether or not other candidates drop out of the race and whom those voters then decide to vote for.
A lot of this can, and has, been modeled. You can model how many people would vote for each candidate in each state (even if there isn't accurate polling data in some states) based on what you know about the relationship between, say, education and likelihood to support a particular candidate. You can know who each voter's second, third, or fourth choice would likely be (i.e., how things will shake out when candidates start dropping out of the race). You can know what the rules are for delegate allocation in each state and how many delegates are in each state. When you take all of this into account, at least for the Republican candidates right now, you end up with a disjuncture between what the polls show and what the early primary results will likely show (Trump and Cruz well ahead of Rubio) and who would actually get the most delegates if the primaries were all held today (Rubio, probably).
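As a rough illustration of that delegate arithmetic, here is a minimal sketch of proportional versus winner-take-all allocation with a qualifying threshold; the state profiles, vote shares, delegate counts, and the 20% threshold are placeholders I've made up, not the actual 2016 party rules.

```python
def allocate(vote_share: dict, delegates: int, winner_take_all: bool,
             threshold: float = 0.20) -> dict:
    """Allocate one state's delegates, either winner-take-all or
    proportionally among candidates above a qualifying threshold."""
    if winner_take_all:
        winner = max(vote_share, key=vote_share.get)
        return {winner: delegates}
    qualified = {c: s for c, s in vote_share.items() if s >= threshold}
    total = sum(qualified.values())
    return {c: round(delegates * s / total) for c, s in qualified.items()}

# Placeholder states: (vote shares, delegate count, winner-take-all?)
states = [
    ({"Trump": 0.35, "Cruz": 0.30, "Rubio": 0.25, "Other": 0.10}, 50, False),
    ({"Rubio": 0.32, "Trump": 0.31, "Cruz": 0.27, "Other": 0.10}, 99, True),
]

totals = {}
for shares, n_delegates, wta in states:
    for candidate, won in allocate(shares, n_delegates, wta).items():
        totals[candidate] = totals.get(candidate, 0) + won

print(totals)  # {'Trump': 19, 'Cruz': 17, 'Rubio': 113}
```

Even in this toy example, the candidate with the smaller share of the overall vote ends up with the most delegates once the state-by-state rules are applied.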
The crazy thing about this is that the emphasis on current national polls and early primary results in the media (which, as far as I'm concerned, paint a misleading picture of how people would vote if the primaries were all held today) might change later primary voters' perceptions of the electability of their favored candidate, causing them to abandon that candidate and switch to another one.
Surely, there will be some people voting in later primaries who will "stand their ground" and still vote for their favored candidates, regardless of what national polls or early primary elections say. Also, there are many reasons why those voting in later primaries may change their opinion over the coming months: for example, they may get more information about the candidates, or their favored candidate may say or do something they don't like. But I think at least one possible cause of switching candidates has to do with perceived electability, and that perceived electability could be based on the misleading information from national polls and early primary results.
So then, how will the misleading information sway voters?
My guess is that Trump and Sanders (and possibly Cruz) will keep referring to the polls and the early primary results, claiming them to be evidence of their electability. They would do this in hopes of a herding effect. For Republicans, people in late-voting states who would've voted for Rubio will see supporting Rubio as supporting a likely loser. Spending time and energy supporting him would be a waste, and possibly embarrassing. This would cause them to abandon Rubio and either fall in line with the herd developing around Trump and/or Cruz (likely due to an "anyone but Hillary" sentiment) or sit out the primary vote altogether. For the Democrats, Hillary supporters residing in late-voting states who were on the fence and perhaps supported Hillary only because they thought Bernie didn't have a shot would come to think that Bernie did have a shot, and switch over to Bernie.
However, this strategy of emphasizing national polls and early primaries might backfire for Trump. He'll keep saying he's winning and will successfully convince people he's likely to win the nomination, but this might freak other voters out ("oh my god, he could actually win!"). This might cause people who would have sat on the sidelines to vote against him. It might cause wealthy donors to throw more money at Cruz or Rubio. It might cause other candidates to drop out sooner and endorse Cruz or Rubio. Call this the "panic mode" reaction to the perception that Trump could win.
There are, of course, many X factors that could swing the election: the economy tanks, someone says something stupid, scandals, terrorist attacks, etc. But I think one factor is whether people think national polls and early primary results predict eventual electability. And whether people think this depends on what they hear both from the candidates themselves and from the news.
The news will likely present a "horse-race" framing of the election, not because they want Trump or Cruz or Sanders to win, but because they want a close race, because it's a simpler story, and because this will boost ratings. There is a chance that some news outlets (I'm looking at you, NPR and NYTimes) will try to convey the complex relationship between staggered primaries with various delegate allocation rules and public opinion. I think the likelihood of any of the above scenarios playing out depends on whether news outlets use the simple, misleading frame or the more nuanced one.
Saturday, January 02, 2016
The Awkwardness of Walking a High School Hallway (or, Digital Tribes: Gamers, Socialites, and Information Seekers)
This thought came to me while reading this New York Times article on app makers' attempts to understand how teens use smartphones and what they want out of the experience. In particular, I was struck by this sentence: "And when your phone is the default security blanket for enduring the awkwardness of walking a high school hallway, it feels nice to have a bunch of digital hellos ready with a swipe."
I thought of my own experience in high school. Indeed, it was awkward. I didn't have a phone as a security blanket. I suppose I just thought about the things that mattered to me as a way of escaping the awkwardness. I thought about the video games I'd play when I got home, or the movies or music I loved. Social media didn't exist. Maybe I thought about hanging out with my friends the following weekend.
Also while reading this sentence, I thought of my nephew (age 9) and niece (age 5). They're both too young for social media and smartphones, but I started thinking about what they'd be like when they are old enough to use these things. My nephew is already enamored with video games, in particular Minecraft. It seems unlikely that he'll be a heavy user of social media, and very likely that he'll spend a lot of time playing video games. My niece plays video games, and I honestly am not sure whether she'll stay interested in video games and/or develop an intense interest in social media, like many middle school and high school girls.
But as I read this article, and as I imagined how my nephew and niece would use media when they get to high school, a picture started to emerge in my head, a picture of at least two, maybe three, relatively distinct "tribes". One tribe spent most of their screen time using social media like Instagram or Snapchat. Another tribe spent most of their screen time playing video games. Of course, there would be some overlap: the gamers wouldn't totally forsake social media, and those who spent a lot of time with social media would also play some games. But they would differ in terms of how these media experiences fulfilled some fundamental needs or desires, how digital media provided a kind of default security blanket for them during the awkward teenage years.
For the gamers, video games would deliver a sense of challenge and accomplishment, and sometimes a sense of esteem (others see what you've accomplished and admire you). They also would provide camaraderie via the community of gamers.
For the social media users (let's call them "socialites"), social media would deliver a sense of social support and esteem, evidence that people are paying attention to you, that people like you, that you're not alone.
And perhaps there would be a third group: information seekers/entertainment consumers - people who use media primarily to consume rather than interact; consume news, consume educational material, consume movies, music, etc. I think I was one of these types of people in high school, and I think they still exist in high schools. Some kids aren't that into gaming or social media. They love movies, music, books, etc.
These are distinct groups driven by distinct desires. This brings me back to Uses & Gratifications theory, a theory that I'm not too fond of (because I don't think people are very good at reflecting on why they use media), but one that might be of some use in determining what the positive or negative effects of media might be.
So what? Why do these categories matter?
Well, for light-to-moderate users, all of these types of media use might help to keep young people happy and engaged with the world around them, give them a sense of belonging and fulfillment. The particular kind of media use that provides that sense of belonging and fulfillment won't be the same for everyone.
What about heavy media use? Well, heavy use is probably bad for all groups, but bad in different ways. For those in the social tribe, heavy use would be associated with a kind of fragile ego and need for validation from others, and preoccupation with this validation. For gamers, heavy use would be associated with not caring about accomplishments in the real, non-game world (i.e., not caring about grades, not caring about social connections with real-world peers, not caring about one's health, etc.), a kind of disappearing into the game world. For information seekers, heavy use might be associated with a kind of "filter bubble" problem: they get further and further into a particular view of the world without being forced to see messages from other perspectives or without interacting with people who, inevitably, will hold at least slightly different opinions.
If you just measure "internet use" or "smartphone use" as they relate to these outcomes, you might not find any effects, simply because the lack of effects in the other two groups "washes out" the significant effect in a single group. That doesn't mean the effects aren't there. By differentiating among these tribes (not necessarily by asking young people how they identify, but by measuring their actual use of video games, social media, and information/entertainment consumption), we would be able to see these different effects.
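A quick simulation can show this "washing out" at work; the tribe sizes, hours of use, and effect sizes below are assumptions chosen purely for illustration, not findings.

```python
import random

random.seed(1)

def simulate_tribe(n: int, effect: float) -> list:
    """Simulate (hours of daily use, negative outcome) pairs for one tribe;
    `effect` is the assumed slope linking use to the outcome."""
    data = []
    for _ in range(n):
        hours = random.uniform(0, 6)
        outcome = effect * hours + random.gauss(0, 1)
        data.append((hours, outcome))
    return data

def pearson_r(data: list) -> float:
    """Pearson correlation between the two columns of (x, y) pairs."""
    xs, ys = zip(*data)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in data)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

gamers = simulate_tribe(300, effect=0.8)      # assumed: use matters only here
socialites = simulate_tribe(300, effect=0.0)
seekers = simulate_tribe(300, effect=0.0)

print(f"gamers alone:      r = {pearson_r(gamers):.2f}")
print(f"all tribes pooled: r = {pearson_r(gamers + socialites + seekers):.2f}")
# The pooled "screen time" correlation is far weaker than the within-tribe one:
# an effect confined to one tribe gets diluted when everyone is lumped together.
```

The pooled correlation comes out at roughly a third of the within-tribe correlation, which is the sense in which measuring generic "screen time" can hide effects that are real for one tribe.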
I'm usually quite skeptical of metaphors used to describe media technologies. Such metaphors tend to highlight the ways in which media technologies are similar to something else while ignoring all the ways in which they are not, and they seem chosen chiefly to support the pre-existing beliefs of the metaphor user. Do you think that a new media technology is harmful? Liken it to cigarettes or crack cocaine. Think that it's benign, or even helpful? Liken it to chess or painting or family.
But the security blanket metaphor seems a bit less...deterministic than these other metaphors. Of course, if one were to take it literally, it does have a negative connotation: the image of teenagers clinging to blankies evokes a kind of pathological arrested development, kind of like a pacifier. But what I like about the metaphor is that as long as you don't take it too literally, it helps you think about what young media users get out of the experience - security, comfort - and, at least for me, it doesn't dictate that this gratification come from a particular type of media experience.
Monday, August 17, 2015
Going viral (by accident)
The video, which may or may not be available by the time you read this, was a recruitment video depicting the members of a University of Alabama sorority smiling and waving at the camera, dancing around, and doing what I would describe as frolicking. This is the first lesson: part of the difficulty of discussing any kind of media content is describing it. Each person will likely highlight some aspects of the content and leave out others. One could note that all of the sorority sisters were white, or good-looking, or thin, or that the cinematography and editing were very professional-looking for a student production, or that the young women don't seem to actually do much in the video. Coverage of the video in mainstream news and in blogs provides an excellent example of media framing: the aspects of the video that are mentioned are not chosen at random, but rather chosen so as to promote (or at least discuss) one of many possible interpretations.
The spread of the video also provides a good case study of audiences in the age of viral content. Many lessons on media creation start with the question: who is your audience? The answer to this question informs everything from aesthetic choices to the medium or venue through which you disseminate your message. In most cases, the Internet allows for a precise calibration of the relationship between content and audience, much more precise than the big-tent, shotgun approach more commonly used in old-school broadcast media. But in some cases, like this sorority video, content specifically tailored for a very small audience escapes into the wild. By now, there is a long list of media content, in particular YouTube videos, intended for a small, specific audience that, through no intention of its creators, found a much larger audience.
The prototypical example of this is Rebecca Black's "Friday." Part of the pleasure of this kind of viral content, part of what makes it unique, is a result of the disjuncture between intended audience and actual audience. We're so used to seeing content that is either tailored to us or intended for a homogeneous audience that it is novel to see content that was so obviously not designed for us. Seeing this kind of content raises the question: "what were they (the creators, the target audience) thinking?" Like some kinds of reality TV, this type of content gives us a window into a culture and mindset that is foreign to us.
Also, like the Rebecca Black video, what makes the UA sorority recruitment video worth watching for so many people is that there are different ways to hate it or enjoy it. Some viewers seem to have taken unironic pleasure in the attractiveness of the video's stars; others laugh at its apparent earnestness; others use it as evidence in an argument about the homogeneity of Greek organizations, and UA sororities in particular, or about how oblivious said sororities are to this fact, or about some aspect of a hegemonic culture that has inculcated the video's creators with this inability to see the underlying message the video is sending; and still others insist that there is nothing wrong nor remarkable about the video and that the reaction to it is indicative of political correctness run amok. The popularity of the video, the number of clicks and shares it gets, doesn't take into account which of these interpretations or reactions the user has. And, of course, the networked nature of social media and the way in which YouTube and similar sites highlight the number of clicks and shares facilitate the process: we watch it and talk about it because it is what everyone is watching and talking about.
I'm excited to find out what happens next, and to be a part of helping something positive come out of all this. It feels as though it could go either way. It could be yet another cultural object used to bludgeon people on the other side of the cultural divide, a prop in an escalating online shouting match. I imagine that such an experience would be thoroughly demoralizing to the content creators, prompting them to become deeply cynical about public discourse, causing them to "play it safe" by not sharing anything online or creating the most bland, benign content they can think of.
But I hope it doesn't turn out that way. I hope it leads to deeper reflection on what the video depicts, how it depicts its subjects, and how it is received by an audience with diverse, often diametrically opposed, viewpoints. If our students can get past the initial sting of intense scrutiny, I think they can learn a lot about the power of media. It certainly won't be hard to convince these students of the relevance of these lessons to their lives.
Monday, July 20, 2015
Reddit, Gawker, and The Freedom to Say Horrible Things
As a Redditor and someone who is interested in how online communities work, it’s been fascinating/sad to see what happened at Reddit over the past couple of weeks.
The initial decision by the company to fire a beloved employee touched off an angry user rebellion, which eventually led to the harassment of CEO Ellen Pao by a smaller group of users and to the CEO's eventual resignation. The whole series of incidents revealed to me that Reddit consists of two factions that can be defined by how upset they were at Pao. These factions always existed, but recent events make the differences more visible.

It's important to note that the size of these two factions is not as easy to measure as it initially seems. The highly vocal, negative anti-Pao sentiment (and, more generally, strong emotion about anything) is conspicuous, while the size of the other, less vocal group must be inferred from the fact that the vast majority of content on the site has nothing to do with Pao or the recent controversy. The first group is much more highly invested in the site than the second group: it likely consists of a greater proportion of moderators, heavy users of the site, and people who bother to up/downvote Pao-related posts. But the second group is likely larger. The first group consists of "strange bedfellows": those with legitimate gripes about the seemingly arbitrary and poorly communicated decisions of their leader and those who are simply predisposed to expressing hatred.
The small group of people with ill will is influencing the fate of the site, but not directly through the upvoting/downvoting of content or through posts or comments on the site. Instead, sites like Gawker and traditional news outlets focus their coverage on the small, vocal group with ill will and thereby drive public perception of the entire site, which in turn influences who participates or invests in the website.
The misperception of Reddit promulgated by news stories is so beguiling in part because people judge online communities in much the same way that they judge offline communities. But Reddit isn't a community in the ways that offline groupings of people like universities, neighborhoods, or even countries are communities. It isn't structured to be one shared experience or to reflect a single, shared set of values. Proof of this: my experience of using Reddit changed very little during this upheaval (it was still mostly pictures of delicious hamburgers, science AMAs, and gifs of hilarious failed attempts at handshakes). By creating self-organizing sub-communities, or "subreddits", the structure of Reddit (and perhaps the structure of other decentralized social media sites like Facebook, Twitter, and YouTube) facilitates distinct, individualized experiences. However, thanks to stories like the one in Gawker, this reality may ultimately matter less than public perception.
And so I couldn't help but relish the irony when Gawker, immediately after mocking Reddit for having a crisis over what to do with hurtful, hateful content, had a crisis of its own when editor Tommy Craggs resigned after the Founder and CEO pulled a hurtful, hateful piece of content without consulting him.
The cynical take on what happened at Reddit and Gawker is that these websites are getting popular and trying to make the next step toward profitability, making themselves appealing to advertisers by sanding off their rough edges and eliminating some types of content that the websites used to tolerate. In doing so, they are compromising the values of free speech and/or independent journalism. Gawker CEO Nick Denton states in an email to one of his employees that "These are the stories we used to do. But times have changed." Does this refer to the commercialization of Gawker and similar websites, or does it refer to the maturation of some of its leaders, a maturation which helped them realize that there are values other than free speech and getting web traffic, values like a consideration of the harm that words can do to others even when they are protected by law, and that sometimes these values come into conflict? Perhaps the phrase "times have changed" refers to both changes. Perhaps two forces, the commercial and the compassionate, are actually pushing in the same direction for once, against hurtful content, leaving libertarians on the other side, opposing both commerce and compassion (I don't like those odds).
In both the Reddit case and the Gawker case, the way in which the decisions to alter content were made (in a kind of sloppy, ad hoc way) left the companies open to criticism. Personally, I side with the upper management of Reddit in their decision to cleanse the site of hateful speech. With Gawker, it's trickier. I suppose I feel that they set themselves up by posting news stories that had so little value to begin with and were so obviously hurtful to others. Denton found himself, as he notes, in an impossible position: he had to either run a story that was "pure poison" to the reputation of the Gawker brand or know that some of his talent would resign in protest after he pulled the article.
But I think the major takeaway from this may be that the conflict at Gawker, like the conflict at Reddit, was kind of inevitable. You have hurtful content, and when you're small and the mainstream media doesn't draw attention to this content, you can get away with this. But once you get big and the eyes are on you, you either become associated with hurtful content or you change the brand's identity by restricting content, firing those who won't comply, and alienating part of your core users. Though I don't have that much sympathy for Denton, I find his remark about balancing the "calculus of cruelty and benefit" to be an encouraging sign for a purveyor of prurience (one that sounds oddly similar to Institutional Review Boards' policy regarding balancing risk and benefit in scientific studies).
To be sure, you can still say horrible, hurtful things on the Internet. Which raises the question: Where do Tommy Craggs and the libertarians leaving Reddit go? Do they all go to someplace like Vice Media? What does Vice do when all this happens to them? Is hate like energy: incapable of being created or destroyed, only redirected?
Labels: community, gawker, hate, news, online news, reddit, social media
Thursday, June 18, 2015
Anger Sells
First, a disclaimer about my disposition: I don't like anger. I guess there's nothing wrong with it per se, but I'm of the opinion that, like any strongly felt emotion, it can cloud people's judgment. Want to get angry at the Slayer concert? Knock yourself out (literally). But if you're trying to make some sort of judgment about the world around you, I think that being angry can only lead you astray.

And yet, I have to admit, right now, after reading about last night's mass murder, I feel angry, angry and fatigued, because I know the cycle: violence happens, then people want something to be done about the violence, but that typically involves restricting or monitoring others in some way (restricting gun access, making your mental health history available to authorities, monitoring your web use, censoring (or at least condemning) certain kinds of expression). Then a group of people will become angered by the effort to control or restrict them, citing their right to be free and how it is being infringed. When I think about the seeming inevitability of the cycle, I get tired. I also think, "What can I do about it?"

Maybe my small contribution as a researcher and media educator can be to further understanding of the role of anger in this whole cycle. Being angry at some group of "unfamiliar others" (e.g., not some ex-boss or some ex-girlfriend who pissed you off) seems like a necessary-but-not-sufficient criterion for committing mass murder. The violent acts seem to grow from expressions of anger and hatred, and they seem to inspire anger and hatred.

Maybe there's something about the way media allows us to stay "immersed" in an angry state that perpetuates this cycle. We all get angry from time to time, sometimes at individuals, but other times at groups of people we don't know personally (Republicans, Democrats, politicians in general, liberal media, Fox News, Comcast, etc.). How much time do you spend in that angry state, and does exposure to certain media messages keep you in that angry state? My guess is that in order to engage in an act of mass violence, you need to have been in a prolonged state of anger against unfamiliar others, and that media (mainstream media messages and interpersonal content via online communities) likely plays some role in helping to sustain that anger. In this respect, I suspect it is different than violence against people you know, which might be prompted by one incident and happen in the heat of the moment.

But here's why it's hard to just say that expressions of anger are uniformly bad. Sometimes, you have anger at an injustice, and the anger seems to be what motivates people to take action. Without the emotional "fire" of anger, people might not take action, and injustice would be allowed to persist. But, of course, that concept of "injustice" is subjective: the people perpetrating the initial act of aggression often see themselves as bringing justice to the world. So, too, do the angry people who respond.

Maybe it's what you do when you get angry: some people act aggressively while others take political action. While it is obvious that those who respond to mass violence do not (thank god) respond with acts of aggression that are of the same degree, I can see the responses as acts of aggression (albeit on the less harmful end of the continuum). Often, people don't respond to acts of mass violence by being motivated to vote, which is in many ways a slow, complex process involving numerous compromises and, as such, not exactly anger-sating. They are often verbally aggressive toward others that they feel are somehow part of the "other side". I see protests as somewhere in the middle: sometimes, they are acts of political and civic action; other times, they are not much different than coordinated verbal aggression directed at an ideological "other".

Moreover, any attempt to say "don't be angry" or "don't expose yourself to anger-inducing media" to anyone will likely meet with the response: "you're just trying to pacify and distract us from the injustice!" Who am I to say that anyone else shouldn't be pissed off at the state of the world, or should avoid news that makes them angry and watch more cat videos instead (or this damn dog getting licked by cows, which is pretty cute), or at least maintain some balance between the two?

But still, I do want to talk to people about how anger in media, at least potentially, at least sometimes, only appears to be about rectifying injustice and improving the world, when in actuality, it is just a kind of emotional button-pushing, just working on a vulnerability in our brains, this ancient instinct to be tribal, to find a threat, to identify some Other as the source of evil or injustice. I cannot tell anyone what kinds of messages are button-pushing and what kinds will help motivate people to take action that will make the world a better place. To do so would be imposing my concept of "justice" on them, and I just don't think people dig that.

But maybe they'd be receptive to this message: If you're in the business of producing or circulating media content, you'll never go broke overestimating consumers' appetite for content that appeals to their sense of righteous anger. Put simply: anger sells. And we all like to think it's not the media we consume, but that it's the media the other guy consumes. So, take a step back. Take a good, long, hard look at the media content that you consume that you know makes you angry. Be open to the possibility that the messages you choose might just be, at least in part, button-pushing. Even if it (and you) is/are on the right side of justice, maybe there's some part of it that just ends up keeping you angry and keeps you pushing the buttons for more. Keep asking, "what can I do to make the world a better place?" And if the answer is "shoot someone", think a little harder.
Tuesday, May 26, 2015
Affordance or Attribute?
Another May, another great International Communication Association conference. Each year, I get to know more and more scholars from all over the world, seeming to run into someone I know every time I walk from one panel to another. Of course, it's also an opportunity to get back together with old friends from graduate school and the professors who helped shape my research interests and teaching style, but I think the real value of ICA is its ability to foster new connections, between people and between ideas.
I was particularly excited to attend a panel on affordances as a framework for understanding computer mediated communication. The appeal of the affordance-based approach was wonderfully and entertainingly laid out by Andrew Schrock of USC, who has created The Journal of Toaster Studies. The journal isn't real; it's a kind of parody of a certain type of scholarship about technology, or really a way of thinking about technology. So many of us who study digital media or new media focus on (and start to anthropomorphize) the actual technology itself, which sounds appropriate until you start thinking about how silly this would sound if we were talking about, say, toasters.
It was easy for me to see the problems with an exclusively techno-centric approach when I started my dissertation on the new media choice environment in 2011. Since then, the problem with making declarations about rapidly changing media technologies has only gotten worse, and yet we, as scholars and laypeople, continue to make claims about specific technologies, like Facebook. I'm trying to think about (and write about) technologies in terms of characteristics or qualities rather than specific iterations of technologies. I try not to think about sharing and responding to information on Facebook, but rather sharing and responding to information using social media (even more specifically: social media that facilitates communication among a fairly large group of people who know each other in the offline world). Who the hell knows what'll happen to Facebook, but I assume there will be media technology that facilitates communication among fairly large groups of people who know each other in the offline world. And it would be even better if we could point to some quality or characteristic of social media (perhaps it's "public-ness" or perceived public-ness) that is most commonly associated with certain outcomes.
So this is what brought me to a panel on an affordance-based approach to studying media technologies. But as I heard from the panelists and then from members of the audience, I started to think that I had made a mistake. The affordance-based approach seemed to center on who got to decide how technologies were used and how that came to be. I thought back to where I had first encountered the idea of an affordance-based approach and realized, in the middle of the panel, that it was an article about an attribute-based approach. I'd just gotten the "A" words mixed up.
But that wasn't quite it. Multiple panelists had presented examples of affordances that were, to me, the same thing as attributes: searchability and persistence (e.g., the recording and archiving of expressions in online communication). Indeed, danah boyd refers to these two qualities as affordances. So maybe we were talking about the same thing. Confused, and in a room full of experts on the topic, I took the opportunity to ask the room, "what is the difference between attributes and affordances?"
I'll paraphrase Cliff Lampe's answer: attributes are uses that are designed by the media technology producer while affordances have more to do with how users perceive the designed object to be useful, or what it should or could be used for. This helped un-confuse me a bit, but I was still left wondering why I thought of something like "persistence" as an attribute.
From what I heard at the panel and what I've read in various articles, the affordance-based approach seems to be in direct opposition to technological determinism. In fact, I see it as defined by its opposition to this view of the relation between technology and people. In this respect, it is of a piece with the theory of Social Construction of Technology. These approaches make two assumptions: that people (who possess certain priorities and perspectives) shape technologies and not the other way around, and that this shaping process should be the central object of study. As much as I agree that technological determinism, in its purest form, is wrong-headed, I don't share these assumptions.
I agree that people often use technologies in ways that were not intended by designers, that future design is informed by these unintended uses, and that various social and economic factors influence the likelihood and prevalence of these unintended uses. But concentrating our attention so exclusively on when and why unintentional use takes place ignores a lot of important aspects of how certain, very common uses of media technologies result in certain outcomes right now. It is important to study these common uses and outcomes because the outcomes are important (e.g., wellbeing, depression, social support, health outcomes, sexist attitudes) and our initial, untested assumptions about how certain common uses relate to certain outcomes are often wrong.
If people typically use a one-to-one communication technology in which messages persist (like text-messaging), then a certain outcome may be more likely than if they typically use one in which messages do not persist (like Snapchat). Can someone use text-messaging in such a way that messages don't persist? Yes. Can someone take screenshots of Snapchat messages so that they do persist? Yes. Could both technologies be altered by hackers so that they don't do what they were intended to do? You bet. Is this relationship between hackers, users, technologies, and designers worth studying? Totally. I care deeply about why technologies are the way they are and why certain uses become common and why others do not, but I also want to further understanding of how certain uses, regardless of how they came to be, result in certain outcomes.
Sometimes, connections between affordances of media technologies and outcomes occur regardless of socioeconomic or cultural contexts; people with lower self-esteem may believe that fewer people actually read what they post on social media regardless of whether they live in Bangor, Bangkok, or Bangladesh. Other times, such connections only occur under certain specific conditions. By looking at how affordances and outcomes relate to one another, we are not ignoring context or individual differences. Assessing the influence of those factors is part of any robust inquiry. The fact that we often find the same effects in incredibly diverse socioeconomic and cultural circumstances should not be taken as a wholesale dismissal of the importance of such factors.
So, here's what I'm left with. You can concentrate on a few aspects of the intersection between media technologies and people. You can examine the relationships among designers, hackers, and users as articulated through different uses of the technologies (designed attribute vs. affordance). You can examine how uses come into being and become more or less popular (affordance as outcome). You can examine how common uses of media technology relate to outcomes (affordance as cause). Do we all use the word "affordance" to describe what we're studying? On a practical level, this doesn't work for me right now. When I search for the word in Google Scholar, most of the research concentrates on the first two parts of the technology/human intersection and not the third.
It's something I'm continuing to think about, and ICA, specifically that one panel, helped me zoom out and think about this issue as I continue my research.
Thursday, April 09, 2015
A paean to cassettes
While leafing through the book "mix tape: the art of cassette culture", which a good friend recently gave me as a gift, I got to thinking about how I came of age at a time when cassette tapes were the dominant mode of conveying music, and a time when VHS cassettes were the dominant medium through which video was disseminated. But the big innovation that came with cassettes - both audio and video - was that you could record on them. They were, to my knowledge, the first widespread medium for recording audio and video.
The other widespread recording medium that was already established was photography. But you typically took pictures of other things or other people, not pictures of existing artistic products (though, I suppose, plenty of people took pictures of paintings, causing some crisis regarding the value we place in an image). Of course, the influence of technologies that allow people to reproduce or copy art on the value of art has been thoroughly explored (probably beaten into the ground at this point). I suppose I'm less interested in the ways cassettes allowed people to copy music and video and more interested in the ways in which they facilitated the re-purposing of existing work.
It wasn't radical re-purposing in most cases. It wasn't like we were using Photoshop to create some sophisticated blending of images, or using some audio editing software to create a unique mash-up. We were just putting songs next to other songs on a mix tape, or an episode of Late Night with David Letterman next to an episode of Square One. The juxtaposition still creates something unique, but the way in which it changes the meaning or mood of the listening/viewing experience is more subtle than the total reconfiguration that digital tools facilitate.
The other defining characteristic of that era, to me, only becomes apparent in retrospect: we couldn't disseminate the thing we created very widely. That is the difference between the mix tape and the playlist on iTunes, Spotify, or YouTube. The mix tape was like a private joke, only intended to be relevant to certain people at a certain time and place, which makes it seem more intimate as I remember it. It makes me wonder about various kinds of hyper-local, hyper-personal social media like Yik Yak and Snapchat that have arisen in the wake of broadcasting social media like Twitter. Will the teens of today look back wistfully on the Snaps or Yaks they sent one another in the same way that I look back on mix tapes I made and received, and those cobbled-together VHS cassettes containing whatever I found funny in 1995?
Of course, the difference is that Snaps and Yaks are also intentionally ephemeral while cassettes were intended to preserve. Also, cassettes were intimate in that they were meant to be shared with one other person, or a small group, but they were composed of elements from popular culture, which kept them from being too intimate or personal (though when I think back to some of the mix tapes I made for others, they do seem as embarrassingly soul-baring as an ill-conceived Yak). There wasn't even the possibility that the cassette mix would ever leak out into the wider public and impress anyone other than its intended audience of one or a few intimates.
With cassettes, we had a kind of circumscribed freedom to play around with the music and video that informed who we were and who we were becoming. Obviously, home recordings are worth preserving - the home videos and, though there aren't very many, the home audio recordings we made when we were young. But the mix tapes and the VHS tapes of TV shows seem to me to be more indicative of that time, more unique because of their limitations.
Tuesday, March 17, 2015
Instant Gratification & Digital Media: An Assumed Connection

The panelists at this talk tended to fall into two camps: "hand-wringers" and "digital media apologists". The hand-wringers spent their time listing concerns about the ways in which overuse of digital technologies would lead to a society in which people could not delay gratification (which was assumed to be necessary to forge lasting, fulfilling relationships and for general social harmony). They relied on the growing body of evidence supporting the importance of gratification delay and grit (i.e., persistence in the face of multiple setbacks) in a variety of domains, including work and relationships. The apologists pointed out how the instantly gratifying digital media badmouthed by the hand-wringers (e.g., Twitter) connects and empowers formerly disenfranchised members of our society and gives rise to important social movements like #blacklivesmatter.
I kept waiting for a more nuanced discussion to break out, but it never happened. The experience did, however, make me think about how the conversation about this topic would benefit from some clarification of arguments and concepts. So, here would be some starting places:
1. Is there solid evidence of any kind of link between digital media use and any of the effects discussed (namely, reduced attention span and reduced ability to delay gratification)? The connection between these things is assumed to exist by almost everyone. Even many of the digital media apologists assume that it exists, but differ in that they think that in addition to these effects, there are positive effects as well. I've found there to be a link among American college students between self-control and social media use as well as digital video viewing, but I didn't find a connection between self-control and cell phone use. This data was gathered before smartphones became truly dominant, so I might find different connections if I replicated the study.
But what about this assumed connection between grit (or lack thereof) and digital media use? Has anybody even tested this yet?
2. Does this affect young people or every user? There is plenty of evidence to suggest that the habits we acquire as younger people affect our behavior later in life, and that the greater neuroplasticity of younger brains means that media affects young minds, habits, and other behaviors more profoundly. But it is possible that adults who start using digital media in adulthood may be affected by it (specifically, may experience reduced ability to delay gratification as a result of heavy use of digital media).
3. Lowered attention span vs. Inability to delay gratification. Many people seem to conflate these two. Some experimental designs would conflate the two (e.g., an experiment in which people had to choose between reading for homework, which often requires sustained attention AND an ability to forego something more immediately gratifying, and a video game, which provides greater engagement and novelty as well as an immediate sense of accomplishment and pleasure). But it is worth testing these two things separately. It could be that digital media presents us with short bursts of information, and so it hurts our ability to concentrate on or pay attention to anything for a sustained period of time, and/or it may hurt our abilities to forego more immediately gratifying options for less immediately gratifying ones.
4. Hedonic experiences vs. habitual "empty" experiences vs. social surveillance. As the hand-wringers were talking about how digital media provides us with so many opportunities for feelings of accomplishment and affirmation and stimulation, I thought, "what about email?" Email seems to be one of the hardest habits to break, and yet almost everyone I know hates using it. It may be "gratifying" to check one's email in the way that scratching an itch may be gratifying, but I wouldn't call it pleasurable or hedonic. I'd imagine many people feel similarly about social media use: they don't like it, and they don't want to be doing it, yet they feel compelled to do it.
This gets me thinking about distinctions among things we have to do (like work), things we want to do (like reading a book or climbing a mountain), and things we end up doing (like channel surfing or frittering away time online). It also gets me thinking about the use of the term "addiction" in the media context. When we say that we are addicted to some kind of media use, maybe this just means that it's something we do but neither have to do (like work) nor want to do because it gives us pleasure (like having a blast with friends). Its value isn't immediately apparent in the way that the value of work or the value of hanging out with friends is. And yet, it could present us with some value: the value of social surveillance, of knowing where we stand with those around us, our family, friends, and co-workers. Email and social media provide us with relevant information about where we stand with these folks.
At the same time, there may be a "purely habitual" component to email and social media use. That is, through repetition, one might do it without thinking about what value it holds. It just is what you do when you pick up your phone, when you sit down at your laptop, or when you aren't otherwise engaged. There is evidence to suggest that when we aren't otherwise engaged, our brains "default" to self-reflection. Perhaps our seeking out of information on where we stand with others (i.e., the standing of our social self) is a symptom or a consequence of this kind of thinking.
5. Do the effects of digital media use carry over to non-digital contexts (e.g., eating), or does the inability to delay gratification assume that digital media is available at all times? When we talk about the poor decisions made by people who have a reduced ability to delay gratification, is it because they are choosing some instantly gratifying digital option that happens to be nearby, or is it because the use of digital media has reduced their ability to delay gratification of any kind (not just digital kinds)? If it were the former, then simply taking the digital temptation out of the environment would immediately reduce the harm; but if the effects of digital media use manifest themselves in other domains, then changing one's ability to delay gratification would take a bit longer, and you would need to take the digital temptations out of the environment for a longer period of time to change the habits of the individual in all domains.
So, going to the talk made me want to think more carefully about how to test these connections. It also reminded me of how easily discussions of this topic can fall into something repetitious, resembling age-old battles between hand-wringing finger-waggers and apologists. Keeping an open mind going into the process of inquiry is essential, but so is greater specificity regarding the concepts and claims we are putting to the test.
Sunday, January 25, 2015
Sports, Controversy, and the Court of Public Opinion in the Digital Age
As a fan of the New England Patriots, I feel compelled to think about (if not to speak about) the current kerfuffle regarding improperly inflated footballs. From what I gather, it is the general consensus that the footballs that the Patriots were playing with were not properly inflated, and that this gives the team an unfair advantage (hence, the existence of official rules regarding the proper inflation of footballs). It is not known (or not agreed upon, anyway) who, if anyone, is responsible for the fact that the footballs were not properly inflated. If the coach or the quarterback were aware of this or caused it, then that would be a big problem for the team. If it was an equipment manager who was responsible for the misdeed, that would be a much smaller problem.
I am well aware of the way in which my fandom biases everything I might think or say on the matter. So it seems uninteresting to offer any opinion regarding the guilt of the parties involved. But the whole incident did cause me to reflect on the nature of controversies and how we, the public, judge whether or not someone is guilty based on information we receive through the media. I'll offer three factors that play a role in this process. Note that none of these three factors has anything to do with determining what actually happened. That is, they should not matter, but they do.
1. Having something that is easy to make silly jokes about changes the tenor of the conversation about the controversy. In this case, we have the word "balls" and sentences describing how "balls are perfect". Even if this incident did involve a breach of rules which compromises the integrity of the game (which I would take to be a relatively serious thing), the fact that people keep saying "balls" keeps it from being very serious. Comedians have a field day with it, as does meme culture on the Internet, which tends to silly-fy everything. This got me thinking about news events in general and how the presence of any potentially silly element can change public perception of an issue. Let's say someone tried to assassinate a head of state and the assassin shot him/her in the leg. Now imagine that the assassin shot him/her in the ass. In the first case, the public's discussion would contain little if any humor. In the second case, it would probably contain a lot of humor, leading people to take the whole thing a bit less seriously. I have to wonder how the discussion around something as deadly serious as Eric Garner's homicide would have been different if his name was Ha Ha Clinton-Dix.
2. Breaking the rules matters more when it may have affected the outcome of something. Most people (Pats fan or not) seem to agree that the improper inflation of the footballs did not cause the Patriots to win the game in question (which the Pats won 45-7). I seem to recall reading somewhere that the improperly inflated footballs were swapped out for properly inflated ones at halftime. After halftime, the Pats continued to dominate the other team. The circumstances under which cheating takes place shouldn't matter when judging guilt, but in the court of public opinion, they clearly do. And it certainly matters when you discuss proper punishments. To punish the Patriots by banning them from the Super Bowl would seem a bit much, given that hardly anyone argues that they would have lost had the footballs been properly inflated. But imagine if the same controversy had occurred in the other conference final playoff game, which went into overtime and hinged on a handful of key plays. Any minor change to the catchability of a football could have easily swayed the outcome of that game. The tenor of the discussion, again, would be more serious if the circumstances were different.
3. We live in an era of amateur forensic detectives. This, to me, is the most interesting thing to reflect on, and to consider how it may apply not only to this incident, but to all kinds of controversies in the era of digital media. My hunch is that the increase in the ease with which we can record things and spread them around the world instantly has given people the expectation that if something occurred, they should be able to see visual evidence of it. They should not have to rely on the word of others, or trust in larger organizations, to determine the truth. Consider other recent sports controversies: L.A. Clippers owner Donald Sterling was caught on tape saying racist things; Ray Rice was caught on tape punching his fiancée. The presence or absence of this kind of evidence does not determine whether or not someone did something wrong, nor does it necessarily determine whether the person will be punished either by their employer or by the law. It does, however, play a huge role in determining whether the public feels that you are guilty and, again, affects the tenor of the discussion. When visual evidence is absent, as is the case with the Patriots' purposely deflating footballs (at least as of 1/25/15), people are less willing to assume guilt. This expectation of visual evidence has troubling consequences for victims of domestic violence and sexual assault. By their very nature, these acts occur in private and are not recorded easily (while virtually everything that takes place in public is recorded, whether we like it or not). Our waning trust in authorities coincides with our need (and our ability) to "see for ourselves". We need to at least see the evidence, even if we can't agree on how to interpret it.
We do love a good controversy, and there is clearly an agenda-setting effect present in this case, whereby ESPN analysts spend lots of time discussing this aspect of the sport and the Internet follows (though I wonder about the backlash against ESPN's tendency to beat a dead horse, as seemed to happen when Tim Tebow became popular). So it is unlikely that we will stop talking about improperly inflated footballs until after the Super Bowl. But I'm interested to see how the tenor of the discussion plays out.
Thursday, December 11, 2014
Looking Back on the Start of our Lives Online
I've been blogging for about 10 years as of last month. I started the blog during my first semester of graduate school at the University of Texas. Now, ten years later, I'm in my first semester as an Assistant Professor. I've used the blog as a way to catalog ideas related to my work and passion: understanding media use, in particular new/digital media use. I like to think that I've been able to refine my thinking on this topic through this blog. If nothing else, the blog serves as a record of the evolution of my thinking. It allows me (or anyone else) to travel back in time and see how I thought.
While we're on the subject of travelling back in time (as a nostalgist, this is a subject to which I obsessively return), I'd like to go back even further, about 20 years ago, to the time when I first started using the Internet. Recently, I was prompted by a question: "what was it like to use the Internet in the 90's?" I took this to mean, "what did it feel like?" Here are some thoughts:
The big difference between the online experience then and the online experience now, one that many young people today wouldn't think about, is the way in which search engines changed things. In the mid-90's, you had to hear about a specific website from a friend, a magazine, or TV (though no one in mainstream media really cared about the internet, so it was mostly through friends), and then type the specific web address into the Netscape Navigator browser address bar. Good search engines (and the explosion of worthwhile websites in the late 90's) changed the online experience from hopping around a small series of content islands to something that feels like moving through one's everyday offline life. You went from hearing about a particular website (the way you would hear about a particular book or movie) to just thinking of something, anything, typing it into a search engine, and finding it.
Reflecting on the changes wrought by search engines made me think about a similar big change in media choice that affected what it felt like to use the medium: the remote control. Both search engines and remote controls came along at a time when the number of available options exploded (in websites, or in cable television channels). They made the explosion of options manageable. The feeling of the media use experience changed in both cases, from a consideration of several options (akin to being in a store or a library and making a selection) to moving through a landscape, observing things around you and reacting to them, and at the same time, conjuring or creating a world from thin air, thinking of something and having it appear in front of you.
We are different selves in those situations (this is an idea that I keep coming back to: the ways in which our environment brings out different selves). In the first, we are a chooser. But in the second, there are a few selves that could be brought out or summoned. We are potentially a react-er, but also a creator, an unrestricted curious, creative impulse. We are also, potentially, an unrestricted Id, acting on inner impulses for immediate gratification, reacting not to the outer landscape but to subtle shifts in our moods or thoughts.
One of my big questions: How do you foster curiosity and creativity and downplay reactivity and the impulse for immediate gratification? The answer, I think, lies in manipulating (perhaps a kinder word would be "customizing") the choice environment, and we've only begun to do this, and not in a systematic manner. And that is what I want to do with my research.
Sunday, October 12, 2014
The Gamified Life
Video games are a worthwhile leisure experience, something that can have positive effects on players. If gaming occupies a certain place in a person's life, it can enrich a player in many ways. In fact, I think we've only scratched the surface on the positive effects gaming can have on individuals and communities. Even if it's a means of relaxation that allows one to more fully engage with reality after playing, that's a plus.
On the other hand, if gaming occupies another place in a person's life, it can substitute for real-world experiences. Even if gaming doesn't cause people to kill other people in real life (or some such horrible real-world behavioral consequence), I still wonder about what gamers would have done with the time they spend gaming, and how the gaming experience shapes their non-gaming, real world interactions and experiences.
My key interest is in how gaming is disconnected (and disconnects the player) from reality. A good game creates a kind of alternate reality with challenges, goals, and hazards and, increasingly, a social structure. All of that, it would seem, makes it easy to feel immersed in the gaming experience, to at least temporarily forget about the world outside the game and to focus exclusively on achievement and survival within the game.
There is an attempt to bridge the gap between the substitute reality of the game world and the real world: gamification. The point of gamification seems to be to incentivize a certain real-world behavior (exercise, civic engagement, etc.) by linking it to a reward.
The motive is laudable: instead of just getting people to spend time getting a high score or achieving status in a world that is disconnected from reality, you get them to do something good in the real world: learn about the world, become more civically engaged, etc. So, let's assume that gamification works: it gets people to engage in whatever behavior you're trying to get them to engage in when they would not have done so otherwise. Great! But here's what I'm wondering about:
How challenging is the game, or the gamified aspect of real-life, relative to the un-gamified aspects of life? The rules of games are tweaked so that they are challenging but not too frustrating. If a game were too frustrating, the player would stop playing (and likely pick up a less-frustrating-but-still-challenging game). But real life doesn't adapt to the individual in this respect. Even a successful gamification of reality is not all encompassing. Sooner or later, gamers must confront the un-gamified world.
Here are a few possible consequences of this discontinuity.
Gamification could create the perception that life isn't fair. To gamers, the experience of un-gamified challenges that fail to adapt to their ability levels will seem increasingly unfair. To someone who has no experience with a gamified reality, the fact that day-to-day existence (work, interactions with loved ones, local politics) is very often frustrating is simply a part of life; whether or not it is "fair" isn't really an issue. This perception would be easy to measure: do you agree/disagree with the statement "often, life is not fair" (or some variation of this).
Gamification could create the desire to play more games, particularly the kind of games that adapt in some way when they become too frustrating for the user. Gamers would then disengage from any aspect of reality that does not adapt, including relationships and civic involvement. They go further and further into the adaptable world of the games.
But here's the weirdest possible consequence I've been thinking about.
What if the majority of certain people's realities become gamified? When they encounter an aspect of their realities that is frustrating or boring, they gamify it. They keep doing this until their entire realities (how often they exercise, what they eat, how they interact with their spouses, work, volunteering, local politics, etc.) are gamified, so that no aspect of their daily lives becomes too frustrating or insufficiently rewarding. I can imagine a small group of people doing this, but I can't imagine every human doing this (at least not for a while, but who knows?). The problems will occur when the gamified society meets the un-gamified society.
Life is a kind of game, with challenges and goals and a "score" (money, happiness, status, righteousness, or whichever metric you want to use). But the key difference is that life lacks a "user experience" designer. It is continuously and simultaneously created by things that are often indifferent to the needs and desires of the individual. One of the worst consequences for the gamer who becomes increasingly frustrated with un-gamified life might be a kind of despondence that, left unchecked, causes them to quit the game of life.
This, of course, wouldn't be the fate of most gamers. So what is it that causes some gamers to eschew reality? And when we gamify another aspect of our lives, how does this change the way we view the rough edges of the world outside the game?
Wednesday, September 24, 2014
Social Media and Fear: A Case Study
It has been an interesting couple of days at my new academic home, the University of Alabama. As is often the case in the initial period after incidents involving the safety of large numbers of people, the facts are a little unclear. According to school officials and local police, here is what happened: someone posted a threatening comment on a University of Alabama sorority's YouTube video. The threat was specific, referencing a time in the near future, a place on campus, and harm to large numbers of people. Additionally, at least one student reported, and later retracted, a statement about being attacked off-campus. Very soon after, school officials worked with police and determined that there was no threat to our students' safety beyond the YouTube comment, and that it appeared as though no actual person was planning on carrying out any attack. This information did not have the intended effect of calming and reassuring everyone that it was okay to go about our regular business. As misinformation spread, it prompted many of our students and their parents to be fearful, so fearful that the students did not feel comfortable coming to class, which disrupted what, ostensibly, we're here to do: learn stuff in class.
The fears were based in part on real-life events. There was, apparently, a rumor that someone was dressed as the Joker on sorority row, which relates to the shooting in an Aurora, Colorado movie theater in 2012. They were also based on something resembling an urban legend. A student of mine quoted a message he had seen that was circulating, citing it as part of the reason he was electing to stay home: "The name of the person who posted the comment on the sorority video, Arthur Pendragon, is the name of the main character in a book who went and killed a bunch of people on the fall equinox, which is tonight at 9:30...That is an actual person and he calls himself King Arthur because he believes he is the reincarnated King Arthur from hundreds of years ago and holds a celebration every fall equinox."
One question immediately occurred to me as all this was happening (or not happening): what was the role of social media in this occurrence of mass fear (or, if you like, fear contagion)?
Did social media cause the mass fear?
In trying to answer this question, I try to maintain the stance of a skeptic: just because social media was involved in the event at various stages does not mean that it caused the event, or necessarily caused it to occur in a certain way. Social media could simply be standing in for face-to-face or other existing forms of communication (phone, television, radio, etc.). The underlying psychological mechanisms associated with our tendency to believe scary information that may not be true clearly pre-date social media. Even the hint of a low-frequency/high-severity event is enough to set parts of our brains into overdrive (as it happens, those parts have been around since long before humans evolved). Tightly-knit groups of young people in particular were likely always especially susceptible to this kind of rumor. Thirty years ago, before social media, you could call in a bomb threat and freak out a campus. So maybe this isn't anything new.
On the other hand, perhaps some unique attributes of social media interact with those underlying psychological mechanisms in a way that brings about a new outcome that would not have existed without social media. So here is a consideration of how social media might have facilitated mass fear.
Social media as inciter
The initial threat was delivered via social media. Perhaps the person issuing the threat believed he could do so anonymously, achieving the desired goal of sowing fear in the community without getting caught. This is certainly true of lower-scale harm, such as bullying. As of now, you can (unfortunately) use the anonymity provided by social media to harass or bully another person and not get caught in the act. But once the stakes get high enough (e.g., terrorist threats), we suddenly see that even those posting anonymously can be tracked down. The person posting may not have known this to be the case. They may have had any number of motives (they hated the school, they were bored, they wanted to get out of an exam that day, who knows) and thought they could get away with it.
Social media also provides a kind of remove or abstraction that perhaps makes it easier to commit harmful or disruptive acts without considering the consequences. Even if the social media user can never be truly anonymous, not having to see the face, hear the voice, or be in physical proximity to the individuals they are harming or the lives they are disrupting likely makes it easier to do so.
Social media as effective propagator of misinformation
When you discuss a rumor with someone face-to-face, you spread misinformation to one other person. When you post about it, you are potentially spreading it to a much larger group. I say "potentially" because while the potential audience for any given post is all Internet users (and, if it's picked up by the mainstream media, all users of TVs or radios), in practice most posts are either ignored or seen by very few people. However, in events of great interest like this one, each post becomes part of a whole. When people search for a specific term (e.g., University of Alabama incident), each person's post about the topic gets added to the tally. Ten people posting about a topic doesn't make it seem worth paying attention to, but if a million people post about it, even with little evidence substantiating it, it gets more attention, which may lead it to spread more easily (a toy sketch of this dynamic follows below). When they are included in search counts and taken out of context, even reasonable social media posts (or de-bunking posts that attempt to correct misinformation) can inadvertently contribute to the stoking of mass fear.
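To make that volume-versus-evidence point concrete, here is a toy sketch in Python. It is not a description of any real platform's ranking algorithm; the counts and the "trending" cutoff are invented purely for illustration.

```python
# Toy illustration: attention tracks raw post volume, not evidence.
# The claims, counts, and threshold below are hypothetical.

post_counts = {
    "rumor: attack planned on campus tonight": 1_000_000,  # widely repeated, unverified
    "police statement: no credible threat": 50,            # accurate, rarely shared
}

TRENDING_THRESHOLD = 10_000  # arbitrary cutoff for "seems worth paying attention to"

for claim, count in sorted(post_counts.items(), key=lambda kv: -kv[1]):
    status = "trending" if count >= TRENDING_THRESHOLD else "barely visible"
    print(f"{count:>9,} posts  [{status}]  {claim}")
```

In this toy model, the unverified rumor dwarfs the correction simply because more people repeated it; nothing in the tally rewards accuracy.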
Even if relatively few people post misinformation, if those people make up a substantial number of a given community (e.g., most people in your sorority, or most people in your Facebook feed), you're likely to believe that whatever they are talking about is worth paying attention to, regardless of how true it is.
It is also very easy to post misinformation on social media; easier, I would argue, than talking to another person face-to-face. For most of our students, social media is always available. Many of them are in the habit of posting their thoughts and feelings frequently.
It is also easier for misinformation to spread this way. Linking to other information sources is an integral part of the affordances and norms involved in all Internet use. Unfortunately, citing the sources of information (and the sources of those sources, until you get to a primary source) is not. One wonders whether this lack of substantiation will persist after we go through more and more of these episodes. It certainly doesn't have to. Theoretically, I'd imagine you could design a quick and simple way to track the flow of each bit of information from link to link to link, back to an original source. Those bibliographies that were so annoying to format may be more useful than we thought!
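As a thought experiment, here is a minimal sketch, in Python, of what that kind of link-to-link provenance tracking might look like. Everything in it (the Post class, the trace_to_source helper, the example chain) is hypothetical and invented for illustration; it is not any existing platform's feature or API.

```python
# Hypothetical sketch: each post records the post it got its information from,
# so walking the chain backwards recovers the original source, much like
# following a bibliography back to a primary source.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Post:
    author: str
    claim: str
    source: Optional["Post"] = None  # the post this one is repeating or linking to


def trace_to_source(post: Post) -> List[Post]:
    """Follow the chain of sources back to the original post."""
    chain = [post]
    while post.source is not None:
        post = post.source
        chain.append(post)
    return chain  # the last element is the primary source


# Example: a rumor passes through two retellings before reaching a reader.
original = Post("anonymous_commenter", "threatening comment on a YouTube video")
retelling = Post("student_a", "someone threatened campus", source=original)
embellished = Post("student_b", "the poster calls himself King Arthur", source=retelling)

for hop in trace_to_source(embellished):
    print(f"{hop.author}: {hop.claim}")
```

Of course, the hard part isn't the data structure; it's getting platforms and posters to record the source link in the first place.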
Social media as abettor of flawed reasoning
This came to mind as I tried to make sense of the fact that there were multiple alarming events reported across a period of 48 hours. People tend to see patterns, connections, and stories in human behavior even when they are not really there. This is likely especially true of people who are in a state of fear. At some point, people's flawed reasoning would run into counter-evidence that would suggest that the perceived connection among these events isn't really there. It occurs to me that in most cases, that counter-evidence comes to us from authorities: police or officials. Often, the messages from those authorities are mediated in some way, coming to us through mainstream media channels (e.g., television news). Which brings me to the next consideration of the role of social media in stoking mass fear.
Social media as a perceived unfiltered alternative to mass media narratives
I've been thinking more and more about the importance of trust in modern life. Our fates are connected to one another in a million ways we tend to ignore. Most of us assume that our packaged food is safe to eat, that our cars are safe to drive, that our votes will be counted when electing a political candidate. But do we trust authorities to tell us about threats to our safety? Some events have given us reason not to.
Under oppressive regimes, the authorities tell one story through government-controlled mass media while social media tells another story, a story "from the ground". We know that the authorities have an incentive and an ability to deceive, so we might be more likely to believe the social media narrative. The social media narrative also comes not from one source but from many individuals, so it would seem less likely to be corrupted or biased. This ignores the fact that those posting on social media may have their own agendas; though they may be posting uncensored pictures or facts about what is going on in the world, they may be purposely (or unconsciously) ignoring other images or facts. Thus, social media may only appear to be a less biased place to get unvarnished information about the world.
Social media narratives have the advantage of being more difficult to repress or control, but they have the disadvantage of not having a reputation at stake. When the social media mob gets things wrong, the reputations of the individuals posting and forwarding messages do not suffer the way the reputations of mainstream news organizations (or authorities like the police) do when they get things wrong. Social media can respond more quickly to events and spread rumors because there is less of a price to pay if the information turns out not to be true.
This, of course, doesn't matter much if you don't trust information from authorities or the mainstream media. There was likely always some distrust of such official narratives, but there weren't many alternatives besides the odd underground newsletter or the person ranting and raving on the street corner. Social media fills this void. It presents people already predisposed to distrust official narratives with a seemingly trustworthy ("unvarnished, unbiased") alternative.
In the case of the events (or non-events) at the University of Alabama, I wonder about students' trust in messages from authorities and their trust in messages from social media. Their trust in social media messages may not reflect an ignorance or unawareness of information from reliable sources, but a cynicism about just how reliable those mainstream sources are.
There are many important questions that will be answered in different ways by mainstream and social media sources: How many people really are dying from Ebola? How big a threat is ISIS, really? Who we believe is, in part, a reflection of who we trust.
Teachable moments
In the end, I hope things settle down quickly so our students, all of our students, will come back to class. I'm already thinking about how we can turn this into a productive discussion about where we get information from, what sources we trust, and how we all might do things differently next time.