Thursday, December 11, 2014

Looking Back on the Start of our Lives Online

I've been blogging for about 10 years as of last month. I started the blog during my first semester of graduate school at the University of Texas. Now, ten years later, I'm in my first semester as an Assistant Professor. I've used the blog as a way to catalog ideas related to my work and passion: understanding media use, in particular new/digital media use. I like to think that I've been able to refine my thinking on this topic through this blog. If nothing else, the blog serves as a record of the evolution of my thinking. It allows me (or anyone else) to travel back in time and see how I thought.

While we're on the subject of travelling back in time (as a nostalgist, this is a subject to which I obsessively return), I'd like to go back even further, about 20 years ago, to the time when I first started using the Internet. Recently, I was prompted by a question: "what was it like to use the Internet in the 90's?" I took this to mean, "what did it feel like?" Here are some thoughts:

The big difference between the online experience then and the online experience now, one that many young people today wouldn't think about, is the way in which search engines changed things. In the mid-90's, you had to hear about a specific website from a friend, a magazine, or TV (though no one in mainstream media really cared about the internet, so it was mostly through friends). Then you had to type the specific web address into the Netscape Navigator browser address bar. Good search engines (and the explosion of worthwhile websites in the late 90's) changed the online experience from hopping around a small series of content islands to something that feels like moving through one's everyday offline life. You went from hearing about a particular website (the way you would hear about a particular book or movie) to just thinking of something, anything, typing it into a search engine, and finding it.

Reflecting on the changes wrought by search engines made me think about a similar big change in media choice that affected what it felt like to use the medium: the remote control. Both search engines and remote controls came along at a time when the number of available options exploded (in websites, or in cable television channels). They made the explosion of options manageable. The feeling of the media use experience changed in both cases, from a consideration of several options (akin to being in a store or a library and making a selection) to moving through a landscape, observing things around you and reacting to them, and at the same time, conjuring or creating a world from thin air, thinking of something and having it appear in front of you.

We are different selves in those situations (this is an idea that I keep coming back to: the ways in which our environment brings out different selves). In the first, we are choosers. In the second situation, there are a few selves that could be brought out or summoned: we are potentially a react-er, but also a creator, an unrestricted curious, creative impulse. We are also, potentially, an unrestricted Id, acting on inner impulses for immediate gratification, reacting not to the outer landscape but to subtle shifts in our moods or thoughts.

One of my big questions: How do you foster curiosity and creativity and downplay reactivity and the impulse for immediate gratification? The answer, I think, lies in manipulating (perhaps a kinder word would be "customizing") the choice environment, and we've only begun to do this, and not in a systematic manner. And that is what I want to do with my research. 


Sunday, October 12, 2014

The Gamified Life

Video games are a worthwhile leisure experience, something that can have positive effects on the players. If gaming occupies a certain place in a person's life, it can enrich the player in many ways. In fact, I think we've only scratched the surface on the positive effects gaming can have on individuals and communities. Even if it's a means of relaxation that allows one to more fully engage with reality after playing the game, that's a plus.

On the other hand, if gaming occupies another place in a person's life, it can substitute for real-world experiences. Even if gaming doesn't cause people to kill other people in real life (or some such horrible real-world behavioral consequence), I still wonder about what gamers would have done with the time they spend gaming, and how the gaming experience shapes their non-gaming, real world interactions and experiences.

My key interest is in how gaming is disconnected (and disconnects the player) from reality. A good game creates a kind of alternate reality with challenges, goals, and hazards and, increasingly, a social structure. All of that, it would seem, makes it easy to feel immersed in the gaming experience, to at least temporarily forget about the world outside the game and to focus exclusively on achievement and survival within the game.

There is an attempt to bridge the gap between the substitute reality of the game world and the real world: gamification. The point of gamification seems to be to incentivize a certain real-world behavior (exercise, civic engagement, etc.) by linking it to a reward.

The motive is laudable: instead of just getting people to spend time getting a high score or achieving status in a world that is disconnected from reality, you get them to do something good in the real world: learn about the world, become more civically engaged, etc. So, let's assume that gamification works: it gets people to engage in whatever behavior you're trying to get them to engage in when they would not have done so otherwise. Great! But here's what I'm wondering about:

How challenging is the game, or the gamified aspect of real-life, relative to the un-gamified aspects of life? The rules of games are tweaked so that they are challenging but not too frustrating. If a game were too frustrating, the player would stop playing (and likely pick up a less-frustrating-but-still-challenging game). But real life doesn't adapt to the individual in this respect. Even a successful gamification of reality is not all encompassing. Sooner or later, gamers must confront the un-gamified world.

Here are a few possible consequences of this discontinuity.

Gamification could create the perception that life isn't fair. To gamers, the failure of the un-gamified challenges in their lives to adapt to their ability levels will seem increasingly unfair. To someone who has no experience with a gamified reality, the fact that day-to-day existence (work, interactions with loved ones, local politics) is very often frustrating is simply a part of life; whether or not it is "fair" isn't really an issue. This perception would be easy to measure: do you agree/disagree with the statement "often, life is not fair" (or some variation of this).

Gamification could create the desire to play more games, particularly the kind of games that adapt in some way when they become too frustrating for the user. Gamers would then disengage from any aspect of reality that does not adapt, including relationships and civic involvement. They would go further and further into the adaptable world of the games.

But here's the weirdest possible consequence I've been thinking about.

What if the majority of certain people's realities become gamified? When they encounter an aspect of their realities that is frustrating or boring, they gamify it. They keep doing this until their entire realities (how often they exercise, what they eat, how they interact with their spouses, work, volunteering, local politics, etc.) are gamified, so that no aspect of their daily lives becomes too frustrating or not rewarding enough. I can imagine a small group of people doing this, but I can't imagine every human doing this (at least not for a while, but who knows?). The problems will occur when the gamified society meets the un-gamified society.

Life is a kind of game, with challenges and goals and a "score" (money, happiness, status, righteousness, or whichever metric you want to use). But the key difference is that life lacks a "user experience" designer. It is continuously and simultaneously created by things that are often indifferent to the needs and desires of the individual. One of the worst consequences for the gamer who becomes increasingly frustrated with un-gamified life might be a kind of despondence that, left unchecked, causes them to quit the game of life.

This, of course, wouldn't be the fate of most gamers. So what is it that causes some gamers to eschew reality? And when we gamify another aspect of our lives, how does this change the way we view the rough edges of the world outside the game?

Wednesday, September 24, 2014

Social Media and Fear: A Case Study

It has been an interesting couple of days at my new academic home, the University of Alabama. As is often the case in the initial period around incidents involving the safety of large numbers of people, the facts are a little unclear. According to school officials and local police, here is what happened: someone posted a threatening comment on a University of Alabama sorority's YouTube video. The threat was specific, referencing a time in the near future, a place on campus, and harm to large numbers of people. Additionally, at least one student reported, and later retracted, a statement about being attacked off-campus. Very soon after, school officials worked with police and determined that there was no threat to our students' safety beyond the YouTube comment, and that it appeared as though no actual person was planning on carrying out any attack. This information did not have the intended effect of calming and reassuring everyone that it was okay to go about our regular business. As misinformation spread, it prompted many of our students and their parents to become fearful, so fearful that the students did not feel comfortable coming to class, which disrupted what, ostensibly, we're here to do: learn stuff in class.

The fears were based in part on real-life events. There was, apparently, a rumor that someone was dressed as the Joker on sorority row, which relates to the shooting in an Aurora, Colorado movie theater in 2012. They were also based on something resembling an urban legend. A student of mine quoted a message he had seen that was circulating, citing it as part of the reason he was electing to stay home: "The name of the person who posted the comment on the sorority video, Arthur Pendragon, is the name of the main character in a book who went and killed a bunch of people on the fall equinox, which is tonight at 9:30...That is an actual person and he calls himself King Arthur because he believes he is the reincarnated King Arthur from hundreds of years ago and holds a celebration every fall equinox."

One question immediately occurred to me as all this was happening (or not happening): what was the role of social media in this occurrence of mass fear (or, if you like, fear contagion)?

Did social media cause the mass fear?

In trying to answer this question, I try to maintain the stance of a skeptic: just because social media was involved in the event at various stages does not mean that it caused the event, or necessarily caused it to occur in a certain way. Social media could simply be standing in for face-to-face or other existing forms of communication (phone, television, radio, etc.). The underlying psychological mechanisms associated with our tendency to believe scary information that may not be true clearly pre-date social media. Even the hint of the occurrence of a low-frequency/high-severity event is enough to set parts of our brains into overdrive (as it happens, those parts have been around for a very long time, since before humans evolved). Tightly-knit groups of young people in particular were likely always especially susceptible to this kind of rumor. Thirty years ago, before social media, you could call in a bomb threat and freak out a campus. So maybe this isn't anything new.

On the other hand, perhaps some unique attributes of social media interact with those underlying psychological mechanisms in a way that brings about a new outcome that would not have existed without social media. So here is a consideration of how social media might have facilitated mass fear.

Social media as inciter

The initial threat was delivered via social media. Perhaps the person issuing the threat believed he could do so anonymously, achieving the desired goal of sowing fear in the community without getting caught. This is certainly true of lower-scale harm, such as bullying. As of now, you can (unfortunately) use the anonymity provided by social media to harass or bully another person and not get caught in the act. But once the stakes get high enough (e.g., terrorist threats), we suddenly see that even those posting anonymously can be tracked down. The person posting may not have known this to be the case. They may have believed that they could do this (because they hated the school, because they were bored, because they wanted to get out of an exam that day, who knows) and get away with it.

Social media also provides a kind of remove or abstraction that perhaps makes it easier to commit harmful or disruptive acts without considering the consequences. Even if the social media user can never be truly anonymous, not having to see the face, hear the voice, or be in physical proximity to the individuals they are harming or the lives they are disrupting likely makes it easier to do so.

Social media as effective propagator of misinformation

When you discuss a rumor with someone face-to-face, you can spread misinformation to one other person. When you post about it, you are spreading it to a larger group of people, potentially. I say "potentially" because while the potential audience for any given post is all Internet users (and, if it's picked up by the mainstream media, all users of TVs or radios), in practice, most posts are either ignored or seen by very few people. However, in events of great interest like this, each post becomes part of a whole. When people search for a specific term (e.g., University of Alabama incident), each person's post about the topic gets added to the tally. Ten people posting about a topic doesn't make it seem worth paying attention to, but if a million people post about it, even if it doesn't have much evidence substantiating it, it gets more attention which may lead it to spread more easily. When they are included in search counts and taken out of context, even reasonable social media posts (or de-bunking social media posts that attempt to correct misinformation) can inadvertently contribute to the stoking of mass fear.

Even if relatively few people post misinformation, if those people make up a substantial number of a given community (e.g., most people in your sorority, or most people in your Facebook feed), you're likely to believe that whatever they are talking about is worth paying attention to, regardless of how true it is.

It is also very easy to post misinformation on social media; easier, I would argue, than talking to another person face-to-face. For most of our students, social media is always available. Many of them are in the habit of posting their thoughts and feelings frequently.

It is also easier for misinformation to spread this way. Linking to other information sources is an integral part of the affordances and norms involved in all Internet use. Unfortunately, citing the sources of information (and the sources of those sources, until you get to a primary source) is not. One wonders whether this lack of substantiation will persist after we go through more and more of these episodes. It certainly doesn't have to. Theoretically, I'd imagine you could design a quick and simple way to track the flow of each bit of information from link to link to link, back to an original source. Those bibliographies that were so annoying to format may be more useful than we thought!
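To make that idea a bit more concrete, here is a minimal sketch (in Python, with entirely hypothetical class and field names) of what link-to-link provenance tracking could look like: each post keeps a reference to the post it drew from, so a claim can be walked back to its origin. This is only an illustration of the idea, not a description of how any existing platform works.

class Post:
    def __init__(self, author, claim, source=None):
        self.author = author    # who posted it
        self.claim = claim      # the assertion being passed along
        self.source = source    # the Post it was drawn from, or None if original

    def provenance(self):
        # Walk the chain of sources back to the original post.
        chain, node = [], self
        while node is not None:
            chain.append(node)
            node = node.source
        return chain

# Example: a rumor reposted twice, traced back to its origin.
origin = Post("anonymous commenter", "threatening comment on the video")
repost = Post("student A", "heard there's a credible threat", source=origin)
share = Post("student B", "everyone stay home tonight", source=repost)

for post in share.provenance():
    print(post.author, "->", post.claim)

A reader (or a platform) running something like this could at least see who added what along the way, which is all a bibliography really does.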

Social media as abettor of flawed reasoning

This came to mind as I tried to make sense of the fact that there were multiple alarming events reported across a period of 48 hours. People tend to see patterns, connections, and stories in human behavior even when they are not really there. This is likely especially true of people who are in a state of fear. At some point, people's flawed reasoning would run into counter-evidence that would suggest that the perceived connection among these events isn't really there. It occurs to me that in most cases, that counter-evidence comes to us from authorities: police or officials. Often, the messages from those authorities are mediated in some way, coming to us through mainstream media channels (e.g., television news). Which brings me to the next consideration of the role of social media in stoking mass fear.

Social media as a perceived unfiltered alternative to mass media narratives

I've been thinking more and more about the importance of trust in modern life. Our fates are connected to one another in a million ways we tend to ignore. Most of us assume that our packaged food is safe to eat, that our cars are safe to drive, that our votes will be counted when electing a political candidate. But do we trust authorities to tell us about threats to our safety? Some events have given us reason not to.

Under oppressive regimes, the authorities tell one story through government-controlled mass media while social media tells another story, a story "from the ground". We know that the authorities have an incentive and an ability to deceive, so we might be more likely to believe the social media narrative. The social media narrative also comes not just from one source but from many individuals, so it would seem that it would be less likely to be corrupted or biased. This ignores the fact that those posting on social media may have their own agendas, and though they may be posting uncensored pictures or facts about what is going on in the world, they may be purposely (or unconsciously) ignoring other images or facts. Thus, social media may only appear to be a less biased place to get unvarnished information about the world.

Social media narratives have the advantage of being more difficult to repress or control, but they have the disadvantage of not having a reputation. When the social media mob gets things wrong, the reputations of each individual posting and forwarding messages do not suffer in the way that the reputations of mainstream news organizations (or authorities like the police) do if they get things wrong. Social media can respond more quickly to events and spread rumors because there is less of a price to pay if the information turns out not to be true.

This, of course, doesn't matter much if you don't trust information from authorities or the mainstream media. There was likely always some distrust of such official narratives, but there weren't many alternatives besides the odd underground newsletter or the person ranting and raving on the street corner. Social media fills this void. It presents people already predisposed to distrust official narratives with a seemingly trustworthy ("unvarnished, unbiased") alternative.

In the case of the events (or non-events) at the University of Alabama, I wonder about students' trust in messages from authorities and their trust in messages from social media. Their trust in social media messages may not reflect an ignorance or unawareness of information from reliable sources, but a cynicism about just how reliable those mainstream sources are.

There are many important questions which will be answered in different ways by mainstream and social media sources: How many people really are dying from Ebola? How big of a threat is ISIS, really? Who we believe is, in part, a reflection of who we trust.

Teachable moments

In the end, I hope things settle down quickly so our students, all of our students, will come back to class. I'm already thinking about how we can turn this into a productive discussion about where we get information from, what sources we trust, and how we all might do things differently next time.

Tuesday, August 26, 2014

Survival of the Angriest

My basic sense of the discourse about politics (articles, comments on online news websites, blogs, TV shows, radio shows) is that a certain rhetorical mode or style unifies those on the far right and far left. It is characterized by anger at authorities on the opposite end of an ideological or demographic spectrum. The "bad guy" in this mode could be those who support a patriarchal society, those who support a racist society, Obama, Bush, Congresspeople, university administration, corporations such as Comcast or Walmart, "Wall Street", unions, mainstream media, etc. I'll call this, for lack of a better term, the "anger-laden" rhetorical style. It succeeds by invoking strong, negative emotions in information consumers (arguably, you could include discourse that invokes or reflects anxiety, but prior research suggests that anger and anxiety operate differently when it comes to news). We could contrast this to a drier, scientific, inquiry-based style (a la fivethirtyeight.com), or a style that attempts to present both sides of an issue in a relatively affect-less way (a la The New York Times or the BBC). I recognize, of course, that no sources are totally bereft of affect-invoking content or style, but when compared to, say, Fox News or the Huffington Post, it becomes clear that inquiry-based or relatively affect-less sources employ a style that depends less on pushing the anger buttons of the audience.

I sense that in the arena of news or discourse about public issues, the ratio of anger-laden style information to other styles of information has changed, and that this change is associated with increasing partisanship among political leaders (even while the partisanship of the overall US population changes very little).

It is entirely possible that the angry, negative-affect-laden rhetorical style was always dominant (see the rhetoric of some popular newspapers' editorials, muckrakers, etc.), but I would counter that by noting that the decline in consumption of mainstream news sources (which are more apt to favor an affect-less style) and the ascendance of non-mainstream sources (which are more apt to favor an anger-laden style) are undeniable.

Perhaps the ascendance of the anger-laden rhetorical style of conveying political/public opinion information draws in a segment of the population that didn't spend much time reading or viewing such information at all. In other words, there were always anger-laden, affect-less, and inquiry-based styles of conveying this kind of information, but fewer people overall were spending time with any kind of political information than is the case now. Now, we have more consumption of such information, which sounds great, but actually the people who are new to the conversation are driving the demand for (and the supply of, in a market-based news environment) anger-laden information. So yes, very few people ever consumed affect-less information and they were probably always outweighed by consumers of affect-laden information, but not to the extent that they are outnumbered now that there has been an influx of new news consumers and new news sources to meet the increased demand. The ratio of affect-less to affect-laden information shrank in response to the demands of an influx of new consumers.

This change in the ratio could push moderates out of the online conversation about politics and away from offline political participation. The moderates would still exist, and any public opinion poll would detect their presence and reflect an overall population that didn't go through the increase in partisanship that our leadership has undergone. There might be a spiral-of-silence effect happening with the moderates: they see that their voice isn't represented in the articles themselves or the comments sections and so they spend their time on other things: their own families, their own hobbies and interests.

Of course, any such argument lamenting this shift toward affect-laden information leaves one open to the accusation that one is pining away for "the good old days" when sensible, logical elites dominated public discourse and the political arena while the hot-headed rabble kept to the fringes. It is impossible for me to deny the historical reality that elites often dismissed arguments they didn't agree with simply by dismissing rhetorical styles that privileged emotional, subjective experiences over scientific or quasi-scientific approaches. I certainly recognize the value (and inevitability) of emotion in public discourse, and the fact that all arguments contain some combination of objectivity and subjectivity, of "hot" emotion and "cold" logic.

But I just don't see the current use of an anger-laden rhetorical style as the reflection of pre-existing public opinions and rhetorical styles that were heretofore excluded from public discourse. I see it as driving a shift in political thought and behavior, one that excludes anything other than anger (including positive-affect messages favoring compassion, affection, or understanding). And even though it is problematic to assert (either explicitly or implicitly) the superiority of cold rationality, I think its total exclusion (or even a significant reduction) from public discourse would only lead to an increased inability to see the world through anyone else's eyes, and that this wouldn't be a good thing.

But I'm in idle speculation mode here. I seek evidence of this, and I don't have much to point to right now that would suggest I'm on to something or that I'm dead wrong. Does the ascendance of anger-laden information (if it even is truly ascendant) reflect increased political participation and/or does it increase the polarization of elected officials and the disenfranchisement of moderates, lessening any chance we have at mutual understanding?


Thursday, June 12, 2014

Are School Shootings Becoming the New Norm?

It's a familiar story: a string of low-probability/high-consequence events captures the front pages and becomes the subject of public discourse, including plenty of editorials. Once it was airplane hijackings or serial killings. Now, it's school shootings. We need to do something to stop this trend of horrible events. In most cases, it's beyond debate that the events are horrible. But are they indicative of trends (i.e., an increase over the prior frequency level of similar horrible events)?

Recently, a statistic has been making the rounds on blogs, social media, and in the mainstream news: 74 school shootings have occurred since the shootings at Sandy Hook in late 2012. The statistic comes from a group that has a rather explicit agenda: Everytown for Gun Safety. While it does give us some information about an important social phenomenon, my initial feeling about this statistic is that when taken out of context (and it is almost always shown without any context), it is apt to mislead.

First, there's the question of how we define "similar events". Do we count incidents in which guns were discharged but didn't kill anyone, or injure anyone? Do we count homicides related to drug deals? Do we count suicides? Do we count accidental gun discharges? Do we count colleges as well as elementary and secondary schools? In the case of the above statistic, Everytown for Gun Safety has counted all of these as "school shootings".

Why does it matter how inclusive our definition of "school shootings" is? Aren't all of these shootings horrible events that we should seek to avoid? In my opinion, yes, absolutely. However, if we're trying to understand a certain social phenomenon and whether a string of recent events is part of a trend, broad definitions only muddy the water. The social and psychological processes involved in suicide-by-gun at or near schools are likely to be different in important ways from those processes as they apply to individuals like Adam Lanza and Elliot Rodger. Obviously the common denominators are "school" and "gun", and if you're of the mind that gun control is the ONLY solution to the problem of any kind of school shooting, then you may not care about the differences. But you should. Even if you're in favor of gun control and you think it will cut down on the number of people who die each year in school shootings, it won't help to ignore other factors, and I think overly-broad definitions of "school shootings" encourage this kind of ignorance. Is the increase in school shootings due to an increase in angry-young-males lashing out at the world, an increase in drug/gang-related shootings, or both? In order to address the issue, it's important to know.

Is it really a trend? If you just tell someone how many incidents occurred in one year, that doesn't really give them a good idea of whether or not it is part of a potentially alarming trend. This is the real reason I'm writing about this. I was genuinely curious. I wanted to know if the headlines were part of a familiar kind of hysteria about a random rash of low-probability/high-consequence events or if they were on to something. It wasn't easy to tell just by reading the news or the blogs. But there IS data on this. It ain't perfect data, but I think it may help me get an answer. This is one of the things I absolutely love about today's Internet - you have access to data and can see for yourself whether there is evidence to support a conclusion.

I used this list from Wikipedia of school shootings in the U.S., defined as "incidents in which a firearm was discharged at a school infrastructure, including incidents of shootings on a school bus." It included K-12 schools as well as colleges and universities, so the definition is pretty similar to that used by Everytown for Gun Safety. The list draws from newspaper archives. By virtue of their newsworthiness, school shootings seem unlikely to have ever gone unreported, so I think we can safely assume that this list is a fairly accurate record of school shootings and doesn't contain much in the way of systematic measurement error. 
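For anyone curious about how simple the counting step is, here's a minimal sketch in Python of the kind of tally involved, assuming the Wikipedia list has been copied into a CSV with a date column. The file name and column names are made up for illustration; this isn't the exact script I used.

import csv
from collections import Counter

incidents_per_year = Counter()
with open("us_school_shootings.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Assumes ISO-style dates such as "2013-01-15"; adjust the parsing to the actual format.
        year = int(row["date"][:4])
        incidents_per_year[year] += 1

for year in sorted(incidents_per_year):
    print(year, incidents_per_year[year])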

Before I show you what I found, take a moment to guess: how many school shootings do you think there were in 1880? How about 1904? 1970?

(scroll down for the answers!)














Here's what I found:



So yeah, it does seem like there's something crazy going on in the last 18 months if you define school shooting this way. Why do I designate the last 18 months as the time period to pay attention to? It's about the time that the Adam Lanza killings occurred, and it's also near the beginning of the year, but these aren't good reasons to use this date as a cut-off. But if you look at the number of shootings per month, they really seem to go up in January of 2013.

There is plenty of seemingly random year-to-year variation across the whole 164-year period, but the number of incidents tends to fluctuate between 0 and 5 per year. It's worth noting that the last year there were no school shootings was 1981, so you could use that as the beginning of the trend if you really wanted to.

It was weird for me to read about children shooting themselves at school in the early 1900's. It doesn't fit with my conception of the culture at the time. So school shootings aren't unprecedented, but after looking at this data, it seems hard to deny that the problem has gotten much worse in the past few years (or possibly since 1995 or 2005). If there were, say, 10 school shootings in 2013, I think you could possibly say that it was due to random variation. But there have been 31 in the first half of 2014!

What if we dig a bit deeper and look at the circumstances around each shooting? Do we see differences between the shootings from, say, the early 1900's and the shootings of today? In my search through the school shootings since the 1850's, I tried to isolate ones that I thought were similar to Sandy Hook: not suicide; not accidental; not directed at one particular individual for reasons of, say, revenge or rejection; and not gang-related. I call this kind of shooting "mass school shootings". I include cases in which an individual clearly attempted a mass school shooting but was thwarted (though there are only a few of these, so they don't make much of a difference). Again, I must emphasize: I'm not saying that those other kinds of shootings aren't problems that need to be solved, only that I want to isolate Sandy-Hook-ish shootings to see if there is indeed a trend. Here's what I found:




Note that the Y axis is different from the Y axis in the first graph. If I had used the same Y axis in both graphs (or included both lines in the same graph), it would've been hard to see the mass shootings line, which is why I used two different Y axes.
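For those curious how that kind of two-scale graph is made, here's a minimal matplotlib sketch using twinx(); the numbers below are arbitrary placeholders, not the counts from my data.

import matplotlib.pyplot as plt

years = [2010, 2011, 2012, 2013, 2014]
all_shootings = [2, 3, 4, 10, 12]     # placeholder values only
mass_shootings = [0, 1, 0, 1, 2]      # placeholder values only

fig, ax_all = plt.subplots()
ax_mass = ax_all.twinx()              # second y-axis sharing the same x-axis

ax_all.plot(years, all_shootings, color="tab:blue")
ax_mass.plot(years, mass_shootings, color="tab:red")

ax_all.set_xlabel("Year")
ax_all.set_ylabel("All school shootings per year")
ax_mass.set_ylabel("Mass school shootings per year")
plt.show()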

If you look at most of the school shootings pre-1966, they're directed at a particular individual and the motives are typically revenge (for a bad grade or being rebuffed by a would-be lover). Again, it was weird to read about mass school shootings similar to Sandy Hook that took place in the 1800's (though there were just two, so they really were anomalies).

As you can see, the pattern is similar to that of overall school shootings: not much happens until recently. But what do we mean by "recently"? You could easily put the starting point of the trend at 1985, the first year there were more than two separate incidents of mass school shootings. What about since Sandy Hook? Let's take a closer look at the data, starting in 1985:



So, 1985 was a bad year, and so were 1988, 1998 (the 90's in general were pretty bad for this type of thing) and 2006. 2013 (the only post-Sandy-Hook year in our data set) is bad, but not much worse than these other years. In the first half of 2014, there have been three mass school shootings, but it would be dangerous to extrapolate from that and estimate that there will be six for the year. Extrapolation with such small sample sizes rarely maps on to reality.

In the end, here's what I get from this little exercise. The truth, such as I can determine it from available evidence, is that school shootings in general have become a lot more common in the last 18 months, but it seems unwarranted to say that Sandy-Hook-style mass school shootings have become a lot more common in that same time period. If you wanted to identify a starting point for the cultural phenomenon of mass school shootings, you'd be better off going back to 1985. If I'm looking for a culprit or cause of this phenomenon, I wouldn't look at things that are going on right now in our culture and in our laws. I would look at things that have been around and haven't changed much since 1985.

As would be the case with any phenomenon, my attitude would change if new evidence warranted such a change. If there were no more mass school shootings this year, I'd stick to my interpretation: nothing new in the world of mass school shootings since '85. BUT if there were 3 or more, then I'd reconsider my outlook.

From looking at this data, I also get the impression that there is an urgent problem with shootings at schools, but that this problem isn't akin to the problem of Sandy Hook or Columbine. Like any compassionate person, I'm appalled at the rapid increase in school shootings you see in the first graph, and I want to know more about why it's occurring and what we can do to stop it. A lot of these school shootings are committed by sane people who have a deep disagreement with another person and (importantly) access to guns. If anything, I think this analysis weakens the case of those who choose to pin the problem of school shootings on a lack of proper mental health care and not on gun availability (I'm looking at you, Wayne LaPierre!). School shootings are skyrocketing, according to the evidence, and most incidents don't involve mental health issues as such. So addressing the issues of gun availability, conflict resolution, and, perhaps, a cultural component may be an effective way to lessen the number of overall school shootings.

This analysis does NOT make for an easily conveyed, pithy soundbite. Because they need something pithy, the news outlets and bloggers and others have latched on to a pithy-but-misleading alternative - the "74 since Sandy Hook" stat. On the one hand, this stat may grab more people's attention, get them to click, and get them to post. On the other hand, based on all the evidence I can see, it IS misleading. Its use opens up those trying to convince others of the severity of the issue to attacks based on their use of misleading statistics.

This is, I would say, a familiar story in the use of statistics related to emotionally-loaded, low-probability/high-consequence events. It's a story that's worth returning to when discussing media literacy.

One final note: in my search for information about this topic, I found this article on CNN that actually DID dig a little deeper and found results similar to my own. They found a few more "Sandy-Hook-like" shootings because their definition differed slightly from mine, and (importantly) they had no historical comparison, so they couldn't really talk about trends, but still, it gave me hope for more context when reporting stats in news. Big ups to CNN (though you really need to stop with the auto-playing videos).

Data source: http://en.wikipedia.org/wiki/List_of_school_shootings_in_the_United_States#cite_ref-47

Friday, June 06, 2014

Hardwiring and Software

At last week's symposium on media choice here at Drexel, the term "hardwired" came up a few times. This term pops up a lot these days in discussions of cognition and I wonder whether its use obfuscates as much as it explains.

The particular context in which it was used last week related to news: how and why certain people attend to certain news sources. People, so the argument went, are hardwired to seek out arguments and evidence with which they agree, and when they do happen to encounter counter-attitudinal arguments and evidence (i.e., stuff with which they don't agree), they are hardwired to interpret it as biased. When used in this context, what does "hardwired" really mean?

Hardwired reactions to stimuli in our environment are automatic and, as the metaphor would suggest, more-or-less permanent. Hardwired cognition/behavior are the products of adaptive processes that occurred over thousands of generations. We are born with these reactions; we don't need to be taught. Our startle reflexes, our orientation to faces, and our fear of spiders are all hardwired. Then there are reactions that are learned and, through repetition and conditioning, become automatic. It's easy to just lump both of these kinds of reactions together because they both involve automatic, quick processing of information without any voluntary control. But the hardwired reactions, I think, are harder to change than the automatic learned reactions because they have been around longer.

So let's assume that the aforementioned tendencies relating to news are, indeed, truly hardwired. None of us had to learn to seek out evidence we agree with and to view evidence that we don't agree with as biased. These information processing strategies evolved over many generations. Perhaps they're extensions of our need to preserve a stable sense of self or a coherent picture of our environment and where the rewards and threats lie or our social standing within a group of allies. This leaves us with a few important (and oft ignored) questions about our hardwired reactions to our information environment.

Can hardwired reactions to stimuli be "rewired"? Not easily, if at all. I don't think you could condition folks to seek out counter-attitudinal news and expect them to pass this tendency on to their offspring through their genes (which is what I take "rewired" to mean).

Can hardwired reactions to stimuli be overridden? Yes. Actually, I think this happens all the time. All of civilization, it's been said, is a kind of imposition on many of our hardwired reactions to stimuli, an attempt to override and otherwise control (i.e., repress) reactions that would be destructive in the long run to large numbers of people trying to live together (i.e., society). The repression of impulses is so ordinary that we forget the many ways the contours of society keep them in check, through laws, rules, social mores, and restricted availability.

How strong is a hardwired predisposition? Put another way: how hard would it be to override a hardwired reaction to stimuli? This is another case in which the question of magnitude (what I like to call the "how much" question) is ignored, and I think it's ignored strategically. If you ignore the fact that some hardwired reactions are far easier to overcome than others, you could say that an inborn impulse that is not that hard to control is just like another inborn impulse that is almost impossible to control - they're both "hardwired". If it's somehow to your advantage to say that some behavior is unchangeable (say, if you're defending ideologically polarizing news coverage because it caters to our hardwired orientation toward information), you'll note how it's hardwired and leave it at that.

Just how hard it is to change behavior that is hardwired depends on the behavior, maybe on when it evolved and what's at stake (i.e., how great the reward or punishment is for behaving in ways inconsistent with the predisposition). You can get most people to overcome their hardwired desire for sweets more easily than you can get them to overcome their hardwired desire for sexual gratification or desire for novel stimuli or inclination toward competition or cooperation. Hardwired reactions or behaviors vary in the extent to which they are capable of being overridden. It is possible to know how hard it is to change a particular hardwired behavior through experimentation.

What the heck does this have to do with software?

The design of software affects what types of information are available to us in certain places at certain times. There are various reasons we're averse to the idea of restricting availability of information in any way. Information is speech, and speech should be free. More information and more freedom to choose just seem like inherently good things. But really, our access to information is restricted, in some sense, all the time. We see a fraction of all available information, if only because we can only process so much information and the amount to which we have access via the Internet has grown exponentially. The fraction of the total news information to which we pay attention is determined primarily by the aforementioned hardwired reactions to stimuli and software that makes certain information available to us based on our prior impulsive behavior (i.e., what we click on without thinking too deeply about it). Perhaps our hardwired reactions to stimuli determine our news consumption behavior not because they're all that hard to change, but because nothing bothered to stand in their way.

Here's some good news: experimenting with information environments is actually much easier than experimenting with our environment in general (i.e., the one we walk around in everyday). Simply installing browser extensions can remove ads from our information environment or restrict access to whatever websites we choose. Algorithms can recommend counter-attitudinal news stories. Installing a "Respect" button on comments sections, next to the "Like" and "Recommend" buttons, can increase the likelihood of exposure to counter-attitudinal messages.
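As a small illustration of the algorithmic idea, here is a toy sketch in Python of a recommender that reserves a share of each batch for stories whose leaning differs from the reader's. The story data and the "leaning" label are made up; this is not a description of how any real news site works.

import random

def recommend(stories, reader_leaning, batch_size=10, counter_share=0.3):
    # Reserve a portion of each batch for counter-attitudinal stories.
    agreeable = [s for s in stories if s["leaning"] == reader_leaning]
    counter = [s for s in stories if s["leaning"] != reader_leaning]

    n_counter = min(int(batch_size * counter_share), len(counter))
    picks = random.sample(counter, n_counter)
    picks += random.sample(agreeable, min(batch_size - n_counter, len(agreeable)))
    random.shuffle(picks)
    return picks

stories = [
    {"title": "Story A", "leaning": "left"},
    {"title": "Story B", "leaning": "right"},
    {"title": "Story C", "leaning": "left"},
    {"title": "Story D", "leaning": "right"},
]
print(recommend(stories, reader_leaning="left", batch_size=4))

Even a crude rule like this changes the choice environment without removing anyone's freedom to ignore the counter-attitudinal stories it surfaces.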

You can't get rid of those hardwired impulses to seek out information we agree with. But can you override them? I don't think we've explored that question fully yet.

The discussion of hardwiring gets at an important underlying issue: how fixed are any of our characteristics? It's an emotionally loaded question. There's a lot at stake. But it's exciting to be able to explore the possibilities of changing ourselves and our hardwired impulses by changing our software.

Saturday, May 31, 2014

The Pros and Cons (mostly cons) of Satire

Last week in our news literacy class, we had a great, thought-provoking guest lecture by Dannagal Young. Young’s work is part of a growing body of research providing evidence of the benefits of watching satirical news programs such as The Daily Show.

So far, we have evidence suggesting that Daily Show viewers vote more than non-viewers, participate more in political campaigns, score higher on political knowledge, and consume more news from other news sources. This seems to answer the critics who worry that satirical news programs might lead to cynicism. But perhaps it doesn’t.

There is an assumption that cynicism would manifest itself as (or co-occur with) apathy. If The Daily Show made viewers cynical, then we wouldn’t expect them to be more civically engaged than non-viewers.

Cynicism or Being Principled?

I’d critique the assumption that cynicism’s connection to apathy is the real worry. I’d define and measure cynicism as negativity about something combined with an inability to respond to evidence. So, would a liberal person acknowledge evidence that, say, unions make countries or states less competitive, or would a conservative acknowledge evidence that gun control laws dramatically reduce injury and deaths. Would they just say "you're misinterpreting the evidence" or "you're not looking at the evidence in proper context" or "you're ignoring this other important piece of evidence". An inability to change one's view based on new evidence is, to me, a problem.

But other people (highly partisan people) might refer to this orientation toward new evidence as "principled": you think that unions are good no matter what, military intervention is bad no matter what, corporate mergers are bad no matter what, government regulation is bad no matter what. So, are you being cynical or are you being principled? In either case, I suspect that exposure to certain kinds of satire news (The Daily Show, or Rush Limbaugh, who could be understood or defended as a satirist) increases the frequency and intensity of this orientation. Meanwhile, websites that present information on the same topics by presenting evidence in a more-or-less straight-faced manner, like fivethirtyeight, might decrease the frequency or intensity of this orientation.

Dannagal Young pointed out, rightly, that getting people to really consider evidence that contradicted existing beliefs would be pretty hard. In some sense, we are “hard-wired” not to do such a thing. There’s plenty of evidence to suggest that the resistance to certain kinds of evidence is a matter of identity preservation. But it seems worth considering the extent to which media can reinforce this instinct to preserve one’s identity.

Angrier Candidates

Consider the difference between the rhetoric and reputation of two candidates: Bill Clinton vs. Elizabeth Warren. Or, if you’d prefer to compare conservatives: George W. Bush as a presidential candidate in 1999 vs. Ron Paul. In both cases, the former candidate runs a campaign based on a kind of compassionate rhetoric while the latter candidates run based on distrust and anger at established authorities. Which of these candidates’ rhetoric resonates with voters? I’d argue that the satire of Michael Moore and perhaps that of Jon Stewart cause viewers to favor rhetoric that is based on anger against the status quo rather than rhetoric of compassion. Is this necessarily bad? I’m not sure. But it certainly doesn’t move us any closer to a world in which an informed electorate is selecting a candidate based on evidence as to who might be the most effective candidate. It’s just survival of the angriest.

Increasing skepticism, but of whom?

As Young pointed out during her guest lecture, humor and satire have been used to prompt audiences to question authorities since the days of ancient Greece. But the humor also stops the audience from questioning the arguments made by the comedian (or from viewing the evidence differently). Humor prompts audiences to let their guard down and cultivate a kind of affinity for the comedian and his/her views. Humor is an effective way to stop people from questioning the authority of the comedian. It reduces people's ability to counter-argue. Again, will this help people select the best candidate? I think it just makes people less likely to consider evidence that is inconsistent with the worldview implicitly endorsed by the satirist.

Hard-wiring is not fate

So, can media challenge our existing worldviews by cultivating mindful consideration of the issues of the day? Last week, I hosted a symposium on media choice at Drexel University. One of our guest speakers, Talia Stroud, made an argument similar to Young's about how we are "hard-wired" to defend our ideological in-group or "tribe". We are predisposed to political or ideological polarization. The instinct to read other viewpoints and arguments as somehow biased and flawed reflects a deep-seated, automatic way of acting and thinking. Stroud presented the results of years of research that, I thought, reflected a terrific persistence and a refusal to accept hard-wired predispositions as unchangeable. She presented the results of study after study in which she and her colleagues tried to reduce ideological polarization when people are exposed to viewpoints that differ from their own. Time and again, the interventions not only didn't work, but often made people more polarized! Finally, she had seen evidence of reduced polarization when she presented news website users with a "respect" button alongside the ubiquitous "like" button. When people are given the option to "respect" a viewpoint other than their own, alongside the ability to endorse it through a comment or a "like", some of them will do so. This subtle change to websites can affect discourse, and I think this change in discourse could potentially reduce ideological polarization.

Even if a large number of people weren't ever oriented toward objectively considering evidence for or against a politically-charged topic (e.g., man-made climate change, effects of reducing unemployment benefits) in the past, are we to let this dictate what we're capable of? Are we mistaking predispositions for unchangeable fate? Are we not to even explore the possibility of change? To do so would seem, well, cynical. And Stroud's research already provides evidence that simple tweaks to the media choice environment cause changes in behavior and, perhaps, thinking, despite whatever hard-wiring we have.

Saturday, May 24, 2014

who wants to know

It's been another thought-provoking International Communication Association annual conference. Among other effects, it's caused me to consider writing half-formed thoughts in this blog. Basically, these thoughts would be tweets but for their lack of brevity. Perhaps they'll take the form of rants, provocations, or polemics.

So, privacy. About six years ago, I blogged about the topic. My opinion on the matter (that is, my opposition to privacy absolutism) has not changed, nor (frustratingly) has the public "debate" about privacy. Mostly, I get the sense that privacy situationalists are an increasingly rare breed. Privacy seems to be an issue on which many on the left and right agree: very few people want other people spying on them. In the aforementioned entry, I raised one possible benefit of living in a world where we occasionally ceded our privacy to trusted authorities: a reduced threat of being attacked, through judicious use of surveillance. Another benefit of relinquishing privacy I wrote about might be an increased tendency to select options with delayed payoffs: when no one is watching, you're more inclined to indulge in immediate gratification (fine in moderation, but not so good if it's all you're choosing).

Here's another possible downside to privacy absolutism. Privacy, carried to an extreme, negates our knowledge of one another. If we become distrustful of one another, no pollster, researcher, policy maker, etc. will be able to know anything about human behavior or produce empirical evidence of how we think, feel, and behave. In some sense, privacy is the enemy of knowledge of human behavior.

I suppose there's an alternative though, one I've been exploring recently: the quantified self. It's amazing what we can learn just by tracking our own thoughts, feelings, and behaviors. Still, I sense that there are limits to what this approach can produce. We can understand ourselves through these means, but can we understand each other? If privacy concerns limit our approach to understanding human behavior to self-knowledge, I sense that we'll lose something important.

Tuesday, April 08, 2014

Stopping Media Addictions

Here's a question about addiction: to what degree does it depend on the environment, as opposed to the brain of the individual?

We tend to think of addiction as something that lives in the brain of an individual. Addiction is, in part, inherited. We can see it when we look at images of addicts' brains. But as I understand it, genes only predispose someone to becoming more easily addicted than another person. And yes, you can see differences between addicted brains and non-addicted brains, but these images don't necessarily tell you to what degree the behavior and its neuro-chemical correlates are products of genes, of habit (i.e., repeated behavior in the past), or of the environment (specifically, the array of options and stimuli one has in front of oneself). So neither of these bits of information really tells us all that much about the relationship of the addict to the choice environment.

Perhaps addiction is the name for a behavior that is less and less responsive to the environment. Instead of responding to the negative consequences of choosing to behave in a certain way (hangovers, social disapproval, damaged relationships, loss of professional status, etc.), the addict continues to repeat the behavior. The more addicted one is, the less it matters what goes on around them. All that matters is the repetition of the behavior.

But maybe we overestimate addicts' immunity to characteristics of their environment. Many approaches to stopping the compulsive behavior associated with addiction attempt to alter the way an individual responds to their environment. This is done through therapy, drug treatment, or other means. But in other cases, we try to alter the behavior of addicts by changing the environment itself: making them go cold-turkey, or removing certain cues in the environment that trigger the behavior. Many times, these approaches don't work. Addicts are able to find the substance to which they are addicted or engage in the behavior again, and it's hard to remove ALL triggers in an environment.

But when we think about the environmental-manipulation approach to altering addictive behaviors, maybe we're not thinking big enough. What if we had an infinite amount of control over the environment? We could populate the environment with many other appealing options instead of merely removing the one that is preferred by the addict. Whether or not the addict relapses after being deprived of whatever it is they're addicted to would depend not only on how long they're deprived of it but also on what their other options are. What if we could take an addict who was down and out and plunk them down in a world in which they have many other opportunities for challenging, fulfilling accomplishment, nurturing, nourishing relationships, and spiritual and emotional support? I think that in many cases, the behavior would change, permanently. So addiction really depends on things outside of the individual's brain, but it's hard to see this when our attempts to assess the efficacy of such approaches have been so modest.

Of course, it is difficult if not impossible to just plunk someone down in that perfectly challenging, supportive world. But the amount of control an individual has over the stimuli in their environment has changed. In particular, our media environments can be fine-tuned in many different ways, though at present they just end up being tuned to suit our need for immediate gratification. We could, in theory, fine-tune that environment to gradually wean someone off an addictive stimulus such as a video game, a social networking site, or the novel, relevant information provided by the Internet in general, and replace it with something that satisfies the individual in some way. This would be much harder to do with other addictions, like alcohol. You can't re-configure the world to eliminate all advertisements for alcohol, all liquor stores, all depictions of the joys of being drunk. It's simply harder to alter those aspects of the environment. Because it was so difficult to even try these environmental manipulation approaches to altering behavior, we haven't fully realized their potential.

By fine-tuning media environments (rather than just demanding that media addicts go cold turkey, or relying on other "blunt instrument" approaches to media addiction), I think we'll realize that media addicts are more responsive to their environments than previously thought. I'm not saying that media environments are infinitely manipulable; only that we haven't realized the full potential (or even really scratched the surface) of this approach to halting media addictions.

Monday, April 07, 2014

(Mis)Understanding Studies

Nate Silver and his merry band of data journalists recently re-launched fivethirtyeight.com, a fantastic site that tries to communicate original analyses of data relating to science, politics, health, lifestyle, the Oscars, sports, and pretty much everything else. It's unsurprising that articles on the site receive a fair amount of criticism. In reading the comments on the articles, I was heartened to see people debate the proper way to explain the purpose of a t-test (we're a long way from the typical YouTube comments section), but a bit saddened that the tone of the comments made them seem more like carping and less like constructive criticism. Instead of saying someone is "dead wrong", why not make a suggestion as to how their work might be improved?

One article on the site got me thinking about a topic I've already been thinking about as I begin teaching classes on news literacy and information literacy: how news articles about research misrepresent findings and what to do about this phenomenon. The 538 piece is wonderfully specific and constructive about what to do. It provides a checklist that readers can quickly apply to the abstract of a scientific article, and advises readers to take into account this information, along with their initial gut reaction to the claims, when deciding whether or not to believe the claims, act on them, or share the information. It applies to health news articles in the popular press, but I think it could be applied to articles about media effects.
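I won't reproduce the 538 checklist here, but to show how a reader (or a news literacy class) might turn a checklist like that into a quick, rough score, here is a small sketch with hypothetical stand-in criteria; the items below are my own placeholders, not the actual list.

```python
# Hypothetical checklist items; the real 538 list may differ.
CHECKLIST = [
    "study_in_humans",        # Was the study conducted in humans (not mice)?
    "randomized_design",      # Were participants randomly assigned?
    "large_sample",           # Was the sample reasonably large?
    "effect_size_reported",   # Does the abstract report the size of the effect?
    "claims_match_design",    # Do causal claims match a causal design?
]

def score_abstract(answers):
    """Return the share of checklist items an abstract satisfies.

    answers -- dict mapping checklist item -> True/False, filled in by
               the reader after skimming the abstract.
    """
    met = sum(bool(answers.get(item)) for item in CHECKLIST)
    return met / len(CHECKLIST)

# Example: a reader's quick pass over one health-news abstract.
reader_answers = {
    "study_in_humans": True,
    "randomized_design": False,
    "large_sample": True,
    "effect_size_reported": False,
    "claims_match_design": True,
}
print(f"Checklist score: {score_abstract(reader_answers):.0%}")
```

A score like this isn't a verdict on the research; it's just a way of making the gut-check systematic before deciding whether to believe, act on, or share a claim.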

Now, the list might not be exhaustive, and there might be totally valid findings that don't possess any of the criteria on the list, but I think this is a good start. And really, that's what I love about 538. I recognize it has flaws, but it is a much needed step away from groundless speculations based on anecdotes that are geared toward confirming the biases of their niche audience (i.e., lots of news articles and commentary). And they appear to be open to criticism. Through that, I hope, they will refine their pieces to develop something that will really help improve the information literacy of the public.

The piece got me thinking about the systematic nature of the ways in which the popular press misleads the public about scientific findings. The coverage tends to follow a particular script: researchers account for the most likely contributors to an outcome in their studies and test these hypotheses in a more-or-less rigorous fashion. The popular press, with limited space and a need to attract a large, general audience, does not mention that the researchers accounted for those possible contributing factors. When people read the news article about the research study, they think, "well, there's clearly another explanation for the finding!" But in most (not all, but most) cases, the researchers have already accounted for whatever variable you imagine is affecting the outcome.

In other cases, the popular press simply overstates either the certainty that we should have about a finding or the magnitude of the effect of one thing on another thing. Again, if we look at a few things from the original research article (like the abstract and the discussion section), we should be able to know whether or not the popular press article was being misleading, and we wouldn't even have to know any stats to do this. 

The popular press benefits from articles and headlines that catch our eyes and confirm our biases. That's just the nature of the beast. Instead of just throwing out the abundant information around us, it's worth developing a system for quickly vetting it, and taking what we can from it. 

Thursday, March 13, 2014

What are you doing on that phone?

I wrote a sort of stray observation in the conclusion of my dissertation about a certain attribute of new media technologies: the way in which screens on portable technologies are typically oriented to face toward the user and remain obscured from others. The concerns over privacy vis-à-vis these media relate primarily to strangers (e.g., governments, corporations) knowing very private things about us and possibly using them against us. But in another way, these devices give us more privacy: privacy in our immediate physical surroundings. The screen orientation makes it harder to see what someone is watching, reading, or doing on their device, and what's more, there is an expectation that others not snoop and try to see what the user is watching/reading/doing.

I've been thinking about this, and about how it relates to the fact that we consider it rude when others pay attention to their devices instead of to us. I'm not talking about the moments when interaction is clearly expected (e.g., a job interview, a first date). Most of our lives are made up of moments when we are in the presence of others with whom we could converse, but where conversation is not explicitly expected: hanging out with old friends, a lazy Sunday afternoon with a spouse. I think that part of the reason it can feel so alienating or discomforting to be in the presence of someone who is using a device is precisely because we do not know what he/she is reading/viewing/doing.

Here's a counter-example: someone you're hanging out with is reading a book. They're not talking to you, and you can't see precisely what they see (as you would if you were watching TV together), but you know the type of experience he or she is having. With a book, we know where we stand. We know what we're competing with. There is a sense of the psychic distance between ourselves and the person with whom we share physical space. With a phone or a laptop? The person could be doing work, seeking out a better-looking partner than you, reading that website you hate, or the one that you love. I suppose that part of the anxiety comes from the fact that even if they're doing something you don't like, you can't do anything about it because you must respect their privacy.

There are ways to test this effect. It would probably work best with significant others, who have a long-term relationship, an incentive to stay on each other's good sides, and probably some existing feelings regarding the other person's behavior. Put one person in a room with a partner who is reading a book or a magazine, or with a partner who is on a phone or laptop. You could even have them in rooms with people using tablet PCs in one of two orientations: in the first condition, the screen is visible to the other person in the room; in the second, it is not. Do levels of anxiety or social exclusion (and the feelings of pain that come with it) increase when someone shares a room with a person using media technologies and cannot see what that person is doing with them?
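For what it's worth, here is a rough sketch of how the assignment and analysis for a study like that might be mocked up, using made-up anxiety scores and a simple two-group comparison. It assumes the scipy library is available; the condition names, the fake effect, and the use of Welch's t-test are my own choices for illustration, not a registered design.

```python
import random
from scipy import stats

CONDITIONS = ["screen_visible", "screen_hidden"]

def assign_condition():
    """Randomly assign a participant dyad to one tablet-orientation condition."""
    return random.choice(CONDITIONS)

# Made-up exclusion-anxiety scores (roughly a 1-7 scale) for illustration only.
random.seed(1)
scores = {c: [] for c in CONDITIONS}
for _ in range(40):
    cond = assign_condition()
    # Fake data: assume slightly higher anxiety when the screen is hidden.
    base = 4.2 if cond == "screen_hidden" else 3.6
    scores[cond].append(base + random.gauss(0, 1))

# Welch's t-test comparing the two conditions.
t, p = stats.ttest_ind(scores["screen_hidden"], scores["screen_visible"],
                       equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```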

If this turned out to be the case, then maybe we need to refine our media use etiquette rules. It's not necessarily a problem when people use digital devices around one another, and even if it is, it's probably unrealistic to expect people who live with one another to abstain from using digital devices around one another for long periods of time. Instead, perhaps we need more shared screen time. This is likely the idea behind technologies like Wii U or Chromecast: trying to give us the shared experience that television gave us while maintaining the interactivity and expanded options of the Internet. The only issue might be that we're left with an age-old problem: who holds the remote? Who decides on the shape of the shared experience?

Maybe that's the trade-off. In order to avoid the social exclusion that comes with absent presence, we must relinquish some degree of privacy and control over our experiences.

Sunday, March 09, 2014

Facebook Photo Album as Everyday Storytelling Device

Let's say you go on vacation for a week. You travel to some interesting place. How do you tell people about your vacation?

Social media is, among other things, the way that we tell people we know (and, sometimes, people we don't know) about things that happened to us. It's a kind of everyday storytelling medium, a way in which we can relate to the choices made by a filmmaker or novelist. As with any form of narrative, a Facebook album does not include every event that happened from every point of view. There are two "editorial moments" at which people decide what gets left in the story and what gets cut out: the moment at which they decide to take a picture and the moment at which they decide to post it. In addition to determining what gets left in or out of the story, the person assembling a Facebook photo album can re-arrange the order in which pictures are presented.

I can think of several reasons why this kind of temporal re-arranging may take place. You may want to "set the stage" for people looking through your photo album (assuming they encounter it in a linear fashion, starting at the beginning and clicking the "next" button as they go through the pictures). You may wait until the end of your vacation to take pictures of, say, the hotel in which you stayed. But you may put those pictures at the beginning of your album, to orient the viewer. You may also save a picture that you took of an amazing sunset on the second day of your vacation for the last or next-to-last picture of the album, using it as a kind of climax to the story of your vacation (although, maybe it's just me who does this sort of thing).

Do you group all of the pictures of food together? There's a kind of organizational logic to this from the standpoint of the picture arranger. And yet, I would argue, it leads to an inferior experience for the picture viewer. There's a kind of tedium to seeing picture after picture of food. But if the pictures are placed among other events and things - trips to the beach, happy people, monuments, etc. - then it might feel more like you are experiencing the vacation as the individual experienced it.

But one wonders how much time people spend on telling stories through Facebook. Would people care about having an abrupt ending to an album? The difference between telling a story and simply putting things up is something a viewer might pick up on, at least subconsciously.

ABRUPT ENDING



Thursday, February 27, 2014

The problem of giving consumers what they want

After about a year and a half of intermittent reading, I've finished the terrific Thinking, Fast and Slow. At the same time, I've been co-teaching a class on the future of television. I'm in the process of preparing a final lecture for the class, and I plan on drawing from Kahneman to talk about supply and demand in the world of TV (or whatever TV will become, i.e., some system for the distribution and consumption of video online).

The rhetoric of giving viewers what they want, when they want it is a big part of how new TV technologies and content are being sold. As we move from an environment of restricted choice to one of expanded choice, it would seem that consumers of media are more likely to get what they want. How could it be otherwise?

First, let's think about "what they want": how to define that, how to measure it. There have always been feedback mechanisms built into commercial systems, ways in which producers determine demand so as to know what and how much to supply. In the days of early radio and TV, producers heard from audiences via letters. Then came Nielsen, with ever-improving sampling techniques that more closely reflect what people are choosing, though the Nielsen system and the TV choice environment it worked with had limitations: there were only so many options from which to choose; there were some people (e.g., college students) who were difficult to track and so they didn't show up in Nielsen's picture of audience demand. What if we could eliminate these limitations? Wouldn't we have a more pure picture of audience demand to which we could suit our supply?

Technology improved, costs fell, and these trends helped eliminate flaws and overcome limitations in the feedback system. I'd like to raise two issues related to this improving image of audience demand in the TV marketplace. The first is that there are certain attributes of the TV system that aren't overcome by simply monitoring audience behavior more closely and/or removing restrictions on the viewers, giving them more "control" (which is the general progression of improvements in TV technology). The second is that a market in which supply and demand were more perfectly matched may not be desirable.

No matter how good audience behavior monitoring gets and no matter how cheap it is to implement, it will run up against the privacy concerns of the audience. As choice expands, we may get a clearer picture of how people behave in any situation without any limitations, and it's bound to reveal some ugly truths about individual and group behavior. As much as the audience wants to have its desires understood, it wants to be selective about what it shares about those desires, especially in a world in which desires are so closely associated with identity and potential.

Also, even if we could know exactly how people behaved in a choice environment with few restrictions, are we then permanently locked into what they will desire? There's a wonderful scene in Mad Men where Don Draper reacts to the in-house psychologist who tells him what her focus group observation revealed about audience preference. Don's objection may be to the difference between what people say they want and how they behave, but I think the more fundamental objection is that people's future desires can't be predicted by their past desires. Advertisers and content producers are in the business of telling people what they will want, and as much as that sentiment rubs people the wrong way, it explains something about audience behavior that no amount of data or market research can. Of course, these are the words of a defensive ad exec asserting his value in an age where empiricism is creeping into an artistic realm. But I think there's some truth to it.

Don is wrong about not being able to predict future behavior from past behavior if he's talking about certain kinds of individual behavior. But the next trend, the next popular TV show, may not reveal itself no matter how hard you stare at people's current or past behavior. Psychologists may know that people pass on certain ideas to other people in certain ways under various circumstances, but the actual ideas they pass on can be set in motion by anyone with the budget to get in front of enough opinion leaders. Advertisers and show runners are just such people. They survive by developing new tastes, new markets for new stuff, and they can still do it in an expanded TV marketplace. So there is a force that will keep introducing ideas beyond whatever audiences currently want, no matter how accurate the measurements of audience demand get.

Then there's a point made by Jonathan Franzen at a discussion during last year's New Yorker Festival. He discussed how the number of tweets or mentions on Twitter was now being used as a metric of how worthwhile an up-and-coming writer was to a publisher. That is, a publisher would sign a writer who had 100,000 followers or mentions on Twitter and not one who had 100, just because social media mentions and followers are pretty good indicators of present audience demand. This is another instance of the tightening of the feedback loop between creators and audiences. It could force writers to cater to the audience in ways that they did not before, and that idea makes Franzen and other content creators uncomfortable: writers would spend more time on self-promotion and on homogenizing their work, and less on developing their voices as creators. Better work is produced, so the thinking goes, when creators are not so beholden to current audience preferences (at least the ones that audiences are capable of articulating). The work is "better" not just by some elitist, subjective judgement of its worth but from a market standpoint: if an author or a show runner takes their cues from the Twittersphere, the product will be less pleasing to that very audience in the long run than if the author or show runner had listened to their inner muses.

Finally, Kahneman's book brings to mind the idea that the definition of what an individual wants (and the way that the individual ultimately acts on their desires) depends on various characteristics of the choice environment, namely the timing of the choice, the number of choices, and the arrangement of available options. For example, whether or not there are thumbnails for similar videos on the side of the screen, as there are now on YouTube, may influence what people click on. Recommended videos on one's Netflix screen are another example, as are the songs that come up on Pandora. Users could've searched for whatever their hearts desired in the search box, and yet what they viewed was influenced by the arrangement of options, not solely the product of a pre-existing, internal set of preferences.
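Here is a toy simulation of that last point: the same five options and the same chooser, but a simple position bias (options nearer the top of the list are more likely to be clicked) changes what actually gets chosen depending on the arrangement. The decay parameter is arbitrary and purely illustrative, not an estimate of any real platform's effect.

```python
import random
from collections import Counter

random.seed(0)

OPTIONS = ["documentary", "action flick", "cat video", "news clip", "tutorial"]

def choose(arrangement, decay=0.6):
    """Pick an option with probability that decays with list position,
    regardless of the chooser's 'true' preferences."""
    weights = [decay ** i for i in range(len(arrangement))]
    return random.choices(arrangement, weights=weights, k=1)[0]

def simulate(arrangement, n=10_000):
    """Count what gets chosen over many visits to the same menu."""
    return Counter(choose(arrangement) for _ in range(n))

# Same five options, two different arrangements of the recommendation list.
print(simulate(OPTIONS))
print(simulate(list(reversed(OPTIONS))))
```

Nothing about the chooser changes between the two runs; only the arrangement does, and yet the distribution of what gets "demanded" shifts.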

Two questions popped into my head toward the end of Thinking, Fast and Slow, related to Kahneman's conceptualization of two different kinds of thought processes: System 1, which is intuitive, automatic, fast, instinctual, and emotional, and System 2, which is deliberative, slow, and rational. Oftentimes, these "systems" or ways of thinking reflect conflicting desires: System 1 wants to eat a burger while System 2 wants a salad. System 1 wants to watch an action flick while System 2 wants a documentary. So how we define consumer desire depends on whether we appeal to System 1 or System 2. Here are my questions: What would a world look like that was entirely geared toward System 1, without restrictions? Are we now living in that world?

The inconsistency between what we say we want and how we act under various choice conditions is not infinitely large. Depending on what you compare it with, it could be considered insignificant. It doesn't make much sense to throw out the entire system of valuation because it is flawed. Better to detect the flaws and correct for them in a way that brings about positive individual and collective outcomes. The first step is one many people haven't taken: seeing the inconsistencies between what we say we want and what we choose in different circumstances. Then, think about the world or the life that we want and about how to design a choice environment to bring those about.

The way supply and demand work in a marketplace of cultural objects (e.g., TV shows) has a level of complexity that is not accounted for in the rhetoric surrounding improving TV technologies such as Netflix and YouTube, both in how they are sold and in how they are celebrated by the press and by consumers. The relationship isn't infinitely complex, and work like Thinking, Fast and Slow lays out some rules for how people behave in certain choice environments.


Thursday, February 13, 2014

Remote Controls

This moment keeps nagging at me, demanding that I think about it and write about it. First, I must acknowledge the ways in which metaphors, or the likening of one moment in history to the present moment, can hinder understanding. By cherry-picking the ways in which the two moments are alike based on our preconceived notions of the fundamental nature of the present moment, while ignoring all the ways that the two moments are not alike (or the ways in which the present moment is similar to another moment in history), we don't move any closer to understanding our current moment. But here I use the past moment not as a means of comparison or metaphor, but as a way of identifying how certain trends in media use got started.

I'm speaking of the invention and popularization of the television remote control. The remote, along with the increase in the number of channels, marked a crucial lowering of the barrier to toggling among choices. It was possible to browse entertainment options before, but not quite as easy, and that shift toward easy browsing marked a change from comparing several options to one another to what I call entertainment foraging. Our experiences of using media in an impulsive manner, and the attendant feelings of guilt, grow out of this moment. The internet and mobile devices have merely extended the logic of the remote control to more moments and areas of our lives. Even when we stay on a single website like Facebook or Buzzfeed, we are often hunting or foraging for some unknown thing. We tend to think of media use as content consumption or connection with another person, as individual experiences: skyping with a friend, watching a video, spending time on Facebook. But I'm interested in the moments in between: the time spent looking for something, the time spent choosing, the proliferation of what you might call "choice points". It's the glue that holds together the other moments, but it takes up a lot of time, perhaps as much time as the moments themselves.

When I started thinking about media choice, I thought that the change from the traditional media choice environment to the new media choice environment was the change from deliberative choice (System 2, in Kahneman's terms) to impulsive choice (System 1). But eventually I came to believe that even if the options are few, when it's a matter of how you spend your leisure time, the stakes are very low, and so you make a quick choice. There isn't much at stake, so why deliberate? Even when the choices were few, we probably still chose impulsively or ritualistically, without much careful consideration. So perhaps our media choices were always mostly impulsive, but they were impulsive within many borders or restrictions, different borders and restrictions than the ones we have now. The options from which we chose leisure media experiences were limited by bandwidth and shelf space. The times at which we chose such experiences were limited by synced schedules and clear demarcations between work and leisure times and places. Without the borders, without the restrictions, the options have changed. When the options change (and this is highly counter-intuitive, but supported by a ton of empirical evidence), our choice patterns change. Increasingly, our impulsive choices, collectively or individually, feed back into the system that generates the option menus. Our options, and our selections, are dictated by the impulsive self with less interference from the outside world. This doesn't bode well for our long-term self, our ability to achieve long-term goals.
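Here is a toy version of that feedback loop: an option menu regenerated from past clicks, and an impulsive chooser who picks from whatever the menu surfaces. The update rule and weights are assumptions made purely for illustration, but they show how small early differences get amplified until the menu narrows around a few items.

```python
import random
from collections import Counter

random.seed(42)

items = list("ABCDEFGH")
clicks = Counter({item: 1 for item in items})  # start roughly even

def build_menu(clicks, size=4):
    """Crude menu generator: surface the most-clicked items so far."""
    return [item for item, _ in clicks.most_common(size)]

def impulsive_click(menu):
    """Impulsive choice: pick from whatever the menu surfaces,
    weighted slightly toward the top slot."""
    weights = [2.0] + [1.0] * (len(menu) - 1)
    return random.choices(menu, weights=weights, k=1)[0]

for step in range(5_000):
    menu = build_menu(clicks)
    clicks[impulsive_click(menu)] += 1

# A handful of items absorb nearly all the clicks; the rest never resurface.
print(clicks.most_common())
```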

What can we do about it? What are we doing about it? There are new technologies that form a middle layer between the media applications that offer us options and our impulsive choosing selves. I call these software applications, like Freedom or Self Control, choice prostheses. Are they effective? That depends. In some ways, the use of choice prostheses resembles dieting, and most diets do not work in the long term. In other ways, they resemble choice architecture or nudges, which are more effective in changing behavior over the long term. This is the next step in my research on media choice: to better understand how choice prostheses work and how they might best be used to change our choices for the better.
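To give a sense of the general shape of the idea (and not of how Freedom or Self Control actually work under the hood), here is a minimal sketch of the logic of a choice prosthesis: a user-defined configuration of blocked sites and hours, and a function that decides whether a request should go through right now. The rules below are hypothetical examples.

```python
from datetime import datetime

# Hypothetical user configuration: which sites to block, and during which hours.
BLOCK_RULES = {
    "facebook.com": [(9, 17)],               # blocked during the workday
    "youtube.com": [(9, 12), (13, 17)],      # blocked except over lunch
}

def is_allowed(domain, now=None):
    """Return True if a request to `domain` should be allowed right now."""
    now = now or datetime.now()
    for start, end in BLOCK_RULES.get(domain, []):
        if start <= now.hour < end:
            return False
    return True

if __name__ == "__main__":
    for site in ("facebook.com", "news.ycombinator.com"):
        print(site, "allowed" if is_allowed(site) else "blocked")
```

The interesting research questions start where this sketch ends: whether people keep the rules they set for themselves, and whether the freed-up attention goes toward the long-term goals the rules were meant to protect.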