Sunday, May 19, 2019

The Now Audience and the Later Audience

Game of Thrones is wrapping up its run on HBO, and watching the show and the reaction to the show has rekindled my interest in storytelling and audiences. I'd like to avoid discussing the actual qualities of the show, as other online voices seem to have that pretty well covered. I'd like to focus on the abundance of voices and the qualities of the conversation about the show, and how it resembles conversations about the endings of other popular TV stories.

Twitter, Reddit, and the blogosphere (i.e., the 'now' audience) give us a segment of the public's immediate reaction to a story. As such, it seems to come packaged with the story itself: we get the show, and we get the reaction of this part of the public along with it. For a long time, we'd gotten the judgments of professional critics alongside the release of cultural products like TV shows, books, and movies. These judgments might help audiences sift through the large quantity of content and decide what was worth their time, or perhaps open a door to a new way of interpreting those works. And surely, many audience members thought of critics as elitist, their opinions being worth no more than the opinions of anyone else, and thus ignored them.

I doubt that many people use the immediate judgments of the audience that is vocal on social media as a guide to help determine what they should watch and what they should ignore (perhaps the mere fact that people are talking about a show, even if they're trashing it, serves as the signal that the show is worth watching). Instead, I think it's mostly of interest for people who are already watching. Most of the conversation seems to be commiseration: finding the voices that echo the way you, as a viewer, feel but perhaps could not articulate in that way, and then amplifying those voices through upvotes, likes, shares, and retweets.

It is tempting to see the reaction to an ongoing story like Game of Thrones as the reaction to the show. However, the segment of people who talk about the show online is just a small portion of the overall audience for the show, and certainly not a representative sample of that larger group. It's hard to know what the rest of the audience makes of the show, and, in the absence of any information about that larger segment of the audience, it's easy for our brains to just fill it in with adjacent, semi-relevant information (see the availability heuristic): the reaction on social media.

Beyond that, there's also a widely held assumption that the reaction on social media drives subsequent reaction or opinion. This view acknowledges that the voices on social media are but a sliver of the overall public, and that they're not a representative sample, but assumes that they wield an influence over the larger public. This may not occur through some conscious subservience of the public to the opinion-leaders on Twitter, but, again, maybe because of an unconscious mental heuristic. The social media reaction is the initial impression, and initial impressions greatly influence subsequent impressions, as people tend to ignore other relevant information (in this case, actual qualities of the show).

There's no doubt that news media plays a role in this process as well. Whereas news outlets in the past would have noted the number of people tuning in to a broadcast and used that as a kind of index of cultural importance, the news media now can talk about the reaction to the show on social media. This coverage amplifies those social media voices. I'm betting that more of the news articles about the ending of Game of Thrones are about the online reaction to the show than was the case with the endings of previous TV shows, and so the social media reaction to shows is likely 'louder' than it once was, making it more likely that the general public sees that reaction as the reaction to the show.

This topic - the relationship between voices on social media, news outlets, and public opinion - is much larger than Game of Thrones or popular TV in general. How much do people who post on Twitter or, more broadly, on social media really influence the culture at large? The answer, of course, depends on a variety of factors, though I think that many just assume it drives the interest and opinions of the rest of the masses for the above reasons.

When might this not be the case? When might the social media voices not reflect or influence the reactions of the larger public in subsequent years? I've been thinking about the importance of a mechanism to make visible, or monetize (same thing?), the non-immediate reaction of the larger public (i.e., the 'later' audience). In the case of news events, perhaps historical documentaries are the mechanism. Years after an event, we tell a story about the event and often contextualize the immediate reaction of the public at the time (it's hard to think of a better example of this than Ezra Edelman's O.J.: Made in America). The perspective of the historical documentary is a corrective one. But most news events don't get this treatment. Most news events are of such great importance to the people living through them and of such lesser importance to later audiences that there's not enough incentive to create a mechanism for registering anything other than the immediate social media reaction.

In the case of popular stories like Game of Thrones, the mechanism for capturing the non-immediate reactions of the larger public is international streaming video sites like HBO GO, Netflix, Amazon Prime, Hulu, and Disney+. Years from now, these shows will be available in media markets to audiences who have likely forgotten, if they were aware of it in the first place, the initial reaction to the show on social media. The reaction, which was an overwhelming part of the context in which the show was viewed at the time it was first released, won't be any part of the context in which they watch the show. In terms of financial incentives, the importance of the 'later' audience is far greater than that of the 'now' audience: there is far more money to be made off a show in, say, the Chinese media market from 2025-2035 than from English-speaking media markets in 2019 (i.e., the media markets that are most likely influenced by or reflected in the online reaction to a show).

For a show that is specifically tied to a cultural moment in time and may not have much of a life after its initial run, the subsequent streaming market in other countries may not be as important (although the ongoing success of Friends, what I think of as a pretty dated product of the 1990's, suggests that we may overestimate the importance of universality in determining a show's subsequent success). But Game of Thrones strikes me as the quintessential example of a show that isn't built for a particular time or place, and was probably sold to HBO as such. Of course, the show reflects some of the values of its creators who are embedded in 2010's United States culture, but it is not nearly as embedded in that cultural context as the social media reaction to the show. The online reaction to the show is certainly about elements of the show itself - character arcs, pacing, etc. - but it also reflects the experiences, emotions, and politics of the people writing about the show in 2019. When people watch the show 25 years from now in another country, I think they'll mostly just see a show about dragons and deceit (which are pretty timeless motifs).

To an extent, I've already seen this disjuncture between the 'now' audience and the 'later' audience with The Sopranos. That show ended just as the 'now' audience was finding its voice online. Critics had already noted the way the show had lost its footing, and then came an ending that seemed, at the time (and perhaps even now), designed to piss off the 'now' audience.

But how is the show experienced by the 'later' audience in 2019? It's likely that much of the 'later' audience still finds fault with the final seasons of the show, but they don't watch single episodes or even single seasons in isolation, and they don't have time to dwell on whether or not the episode or season disappointed them before moving on to the next episode or season. Often, they consume the show as a whole. More important to this discussion, they likely don't go searching for the social media reaction to the show as it was aired 20 years ago. Some of that reaction is still there on the web, and, in some way, the reaction to the ending feels more dated than the show itself. The Sopranos, like many other shows that occasionally ran afoul of their most ardent and vocal fans, has a long shelf life. Even Lost, a show I thought of as the best example of a TV show with a disappointing ending, is apparently being reappraised.

It's impossible to know for sure what the 'later' audience will think of a show, but watching the long-term success of shows like Friends and The Office on streaming platforms, and watching the amount of money streaming services like Netflix will pay to keep them, has me thinking about the ways in which it is easy to overestimate the market power, and perhaps the cultural power, of the 'now' audience.

Thursday, April 25, 2019

Why do some people believe smartphones and social media are harmful?

I try to read and listen to critiques of smartphones and social media with an open mind. Typically, these critiques note the addictive qualities of these technologies and lament their effects on our abilities to engage in deep, sustained contemplation and substantive social interaction. A decade of experience conducting research on the uses and effects of social media has taught me to be skeptical of these claims. The first thing we would expect to see, if these claims were true, would be vast differences between those who use smartphones/social media a lot and those who use them a little or not at all. Among younger folks, we'd expect differences in academic achievement. Among everyone, we'd expect differences in happiness, well-being, depression, and/or the nature of their relationships with others. Even if such differences existed, this would not necessarily suggest that these technologies caused changes in individuals. Perhaps individuals possess some existing characteristics (such as a lack of a clear sense of purpose in their lives) that, ten years ago, would have caused them to watch a lot of television and be unhappy, and that now causes them to spend a lot of time on Facebook and be unhappy. When I look at the sum total of what studies have found over the past ten years, I don't even see much evidence of the correlations you would expect to see if the claims were true. To the extent that there are correlations between use and negative outcomes, the magnitudes of the effects tend to be small (far smaller than is suggested by the level of concern they've prompted among many).

This doesn't rule out the possibility that within the more recent past (say, within the last year or two), smartphones and social media may have started to have the kinds of negative consequences about which many people worry. Research moves at a slow pace, and the technologies and the ways people use them are changing. Personally, it's hard to look at some of the freshmen I teach in large lecture classes, who are seemingly unable to stop using their phones, and not think, 'Maybe the worriers are right after all. Maybe we had to wait until a generation of young people had a chance to develop technology use habits at a young age to really see the effects of the technologies on people.' I'm passionate about doing whatever we can to establish solid evidence of the effects of smartphones/social media use on mental health, academic achievement, and relationships. I'm passionate about improving our measures. I don't want to set the bar for evidence of pernicious effects unrealistically high. I just want to see solid evidence before I believe the worriers.

At the same time, I think it's worthwhile for those who worry about the effects of smartphones/social media to consider alternative explanations for what they see. If there turns out not to be evidence of smartphones and social media's pernicious effects, why might it seem as though they have those effects?

I think part of it has to do with the people that worriers observe. Often, they observe young people: children and adolescents. The adults they observe, including themselves, often have the types of jobs (academic jobs; journalism; social media managers) that allow them flexibility in how they spend their time. This is what children have in common with academics and journalists: they have flexible schedules. There are no clear boundaries between work, play, and social time. Under these circumstances, social media, it would seem, can be pernicious. It can be habit-forming to an extent that feels addictive, crowd out deep thought with shallow distractions, and put you on a hedonic treadmill that feels impossible to get off.

But these are not the circumstances of the average person.

Most people, in the United States and in the rest of the world, have more structured lives. They spend large periods of the day working or care-taking family members. Do they use smartphones/social media while they work and care-take? I'm sure some of them do. In the absence of smartphones/social media, would some of those folks have engaged in other non-work/non-care-taking activities such as socializing with co-workers face-to-face or watching television? Probably. In any case, it's clear that research on leisure use of smartphones and social media at work ('cyber-loafing' is the humorously antiquated name for this) and use while care-taking is important, but also clear that we should not assume that social media plays the same role in the lives of individuals with more structured lives as it does in the lives of those with less structure.

What I suspect is that many people, perhaps most people, spend most of their days engaged in work that is not especially stimulating or fulfilling, but that pays the bills. At the end of the day, they want to relax. Ten or twenty years ago, they might sit down in front of the television and unwind. Today, they might break out their smartphones (perhaps while also watching TV) and sift through social media posts, exchange messages with friends, and catch up on the day's events. This kind of use would serve roughly the same function as TV use ten or twenty years ago - relaxing after a hard day's work - while helping them to feel connected to far-flung friends and family (an especially valuable perk to those who work at home and may feel isolated).

Does their smartphone/social media use come with some bad side effects, such as exposure to an aggressively polarized online culture war, or exposure to plenty of misleading or untrue information? Sure, but when was it otherwise? Didn't exposure to TV come with plenty of bad side effects, such as exposure to a skewed version of reality, one that often trafficked in violence and ugly stereotypes? I don't mean to rely on the TV comparison as a way of excusing the many problems with online life. But I think it's worthwhile to bring up so as to correct the belief that our leisure time, in the past, tended to be especially productive and pro-social.

Perhaps the lives of certain people (those with flexible schedules) have changed a great deal due to the popularization of smartphones and social media. Perhaps they are suffering and they're angry at the creators and supporters of smartphones and social media, and in their anger, overreach in their claims about the effects of smartphones and social media. Perhaps not! But I hope to make the case that to believe that smartphones and social media are not problematic for the majority of their users is not something that only industry shills do. There is a sensible reason, if an untested one, to believe this.

Sunday, December 09, 2018

White Flight Part 2: When the upper class disconnects

It's been a while since I've published anything in this blog. Mostly, it's been the incentive to publish in peer-reviewed journals in order to attain tenure that's to blame for the lack of blog productivity. I continue to have stray, undeveloped thoughts about media uses and effects, and there are a few drafts of blog posts waiting to be finished, but in the meantime, here's one that's been percolating of late.

There are reports from the past year that many highly educated, upper-class or upper-middle-class parents are raising their kids with minimal or no screen time, primarily out of fear of its addictive qualities. It seems to start with lay theories of people who work in the tech industry and/or people who live in Silicon Valley - people with a very idiosyncratic perspective on media technologies. One could look at this particular group of parents as experts, given their unusual access to the ways in which these technologies are developed, marketed, and used. But it's also possible that their experience skews their perception of the extent to which these technologies actually are addictive or otherwise pernicious.

One possibility is that the parents are right, that the technologies are pernicious, at least when used in what they deem to be excess. Another possibility is that they're mistaken, that the technologies are merely a new form of communication, like books or telephones, not bad in and of themselves. For now, I want to set aside that issue and focus on the repercussions of a certain type of young person - an upper-class young person - dropping out of the social media universe (or never participating in it in the first place).

There might be a new kind of digital divide, one in which upper-class young people are not participating in or contributing to online social spaces. Those young people will, of course, communicate with one another, through face-to-face social networks if not through technologies that upper-class parents look at with less fear (the good ol' fashioned phone; FaceTime/Skype; maybe even texting). They'll use the internet, of course, but primarily for work, or the consumption of curated content with high production values.

Meanwhile, the hurly-burly social media universe - the YouTubers, the memes, the bullying and the overnight fame, the narcissism, confessions, and anonymous social support, all overcrowded with ads - will continue to exist. If the hostility and hate speech get worse and worse, and if other people become helplessly addicted to its pleasures? Well, that's their problem. It's hard not to think of the 'white flight' of privileged families from the sites of fear and anger of yesteryear - urban centers. Privileged young people's image of the unruly social media universe will be akin to the caricature of urban life that children of the 80's grew up with: they will see the most sensational worst-of-the-worst stories, and have no personal experience with it to temper these simplistic, negative depictions. When they get to college, whether or not they grew up on the internet could be as important as whether they grew up in a one-stoplight Midwestern hamlet or Brooklyn. The social distance between a lower-middle class child who spent hours on social media from age 9 and an upper class child who read books and played Dungeons and Dragons at his friend's house, even if those two kids grew up across the street from one another, might be immense.

Among the many fears that social media evoke is the fear of the filter bubble: that subtle social media algorithms and quirks of human behavior will work to balkanize societies. Ten years after the popularization of social media, evidence seems to suggest that the opposite has happened, that we vastly overestimated the power of those algorithms, underestimated the extent to which offline social networks of old were already balkanized, and underestimated the serendipity and unpredictability of evolving online social networks. If balkanization occurred, and if it is occurring again, it may be between those who were/are socializing online and those who were not/are not.

Tuesday, October 03, 2017

If you can't stay here, where do you go? The sustainability of refuges for digital exiles

This semester, our research team has waded into some of the murkier waters of the internet in search of the conditions under which online hostility flourishes. We're still developing our tools and getting a sense of the work that is being done in this area.

Among the most pertinent and recent studies was one by Eshwar Chandrasekharan and colleagues about the effects of banning toxic, hate-filled subreddits. I've always been curious as to whether banning (i.e., eliminating) entire communities (in this case, subreddits on Reddit) had the intended effect of curbing hate speech, or whether users merely expressed their hostility in another community. The study suggests that banning communities is an effective way to curb hate speech on Reddit: 'migrants' or 'exiles' of the banned communities either stopped posting on Reddit altogether, or posted in other subreddits but not in a hateful manner. The authors are quick to point out that these exiles might have just taken their hostility to another website. Given the fact that Reddit users cannot be tracked beyond Reddit, it's hard to determine whether or not that happened, but there is some evidence to suggest that websites like Voat acted as a kind of refuge or safe harbor for Reddit's exiles: many of the same usernames that were used in Reddit's banned communities surfaced on Voat. To quote the authors of the study, banning a community might just have made hate speech "someone else's problem."

I'm intrigued by this possibility. It fits with a hunch of mine; what you might call the homeostatic hatred hypothesis, or the law of conservation of hatred: there is a stable amount of hatred in the world. It cannot be created or destroyed, but merely transformed, relocated, or redirected.

Refuges like Voat are like cesspools or septic tanks: they isolate elements that are considered toxic by most members of the general community. In the context of waste disposal, cesspools and septic tanks are great, but I wonder if the same is true in social contexts. On the one hand, they might prevent contagion: fewer non-hateful people are exposed to hateful ideas and behavior and thus are less likely to become hateful. On the other hand, by creating highly concentrated hateful communities, you may reduce the possibility that hateful folks would be kept in check by anyone else. You're creating a self-reinforcing echo chamber, a community that supports its members' hateful ideologies, behavior, and speech.

Whether or not these online refuges are good or bad may be moot if they are not sustainable. In searching for more information about Voat, I was surprised to find that Voat isn't doing so well. Reports of its demise seem to be premature (it is up and running as of this moment), but it seems clear that it faces challenges. The foremost of these challenges is revenue.

I get the sense that people often underestimate how much time and money is involved in creating and hosting a large (or even moderately sized) online community, or community-of-communities. Someone needs to pay for the labor and server space. Advertisers and funders, in general, don't seem to be wild about being associated with these types of online communities. If there were a greater number of people who were willing to inhabit these refuges, people who had a lot of money and could buy a lot of things, then it might be worth it to advertise there and to host these communities. If the users had a lot of disposable income, they could use a crowdfunding model. But it doesn't seem to be the case that there are enough users with enough money to keep a large community running for very long.

Such sites could end up as bare-bones communities with fewer bells and whistles that are easier and cheaper to maintain, but they seem to encounter other problems. I get the sense that people also underestimate the difficulty of creating a community that produces frequently updated, novel, interesting content. Content quickly becomes repetitive, or boring, or filled with spam, or subject to malicious attacks. This is a real problem when the value of the site is content that is generated by users: bored users leave, creating a smaller pool of potential content suppliers. The smaller the conversation gets, the less alluring it is. These refuges will continue to be bare-bones while other online communities, video games, TV shows, VR experiences, and other ways to spend your free time add more and more bells and whistles. Why bother spending time in a small, repetitive conversation when there are more alluring ways to spend your free time?

Of course, defining 'hostility' and 'hate speech' is tricky, and the obvious objection to studies like this is that 'hate speech' is being defined in the wrong way. You get criticism from both sides: either you're defining it too narrowly and not including robust, sustainable communities like commenters on far right wing or left wing blogs, or you're defining it too broadly, categorizing legitimate criticism of others as hateful and hostile. It's clear to me that you can't please everyone when you're doing research like this. In fact, it's pretty clear that you can please very, very few people. I suppose my interests have less to do with whether or not we classify one speech or the other as 'hateful' or 'hostile,' and more to do with user migratory patterns, in particular those of users expressing widely unpopular beliefs (or expressing beliefs in a widely unacceptable way). It seems that people have their minds made up when it comes to the question of whether techniques such as banning communities are restricting speech or making the internet/society a safer, more tolerant space. But both sides are assuming that the technique actually works.

While some would lament the existence of refuges and others are likely willing to sacrifice a great deal to see that they persist, it's worth asking 'what forces constrain them? Why aren't they bigger? How long can they persist?'

Friday, June 30, 2017

Anonymity: Expectation or Right?

Somewhat recently, a public official was linked to remarks he allegedly posted online while using a pseudonym. The official had done nothing illegal, but his reputation suffered greatly after being linked to the remarks. That got me thinking about people's expectations of being able to express themselves anonymously online.

Let's assume, for the moment, that the official in question really did post remarks that, once linked to him, resulted in public disgrace. Anyone posting online using a pseudonym or posting anonymously likely has some expectation that his or her remarks won't be linked to his/her "real world," offline identity. At the very least, having remarks you made anonymously or pseudonymously linked back to you is a violation of your expectations. I'd expect it to feel as though your privacy had been violated; anonymity gives you a kind of privacy. In fact, that's how I originally processed the story of the official: as a case in which an individual's privacy was violated. People generally regard privacy (however fuzzily defined) as a right (though people also have a way of justifying such violations if they feel that the uncovered sin is great enough).

On further reflection, I'm not so sure linking someone to comments they made anonymously is analogous to other violations of privacy (i.e., someone installing a camera in your bathroom). Perhaps we've come to conflate anonymity with privacy. When I say things to a friend in a private setting, I expect those things not to be recorded and played back in some other context. This kind of privacy of self-expression in a particular limited context (i.e., secrets) has been a part of many societies for a long time (though I'd stop short of calling it natural and/or a basic human right). But the ability to express oneself to a large number of people anonymously hasn't been around for more than a decade or so. Of course, there have been anonymous sources for a long time, and the protection of witnesses through the assignment of new identities has been a common protocol for a long time. But in terms of the frequency and ease with which the average person can express themselves anonymously on an everyday basis, I think it's a relatively new phenomenon. Additionally, things said in private and things said anonymously differ radically in terms of their impact. Whispering secrets among a small group of friends likely has one impact on the attitudes and beliefs of others while writing something anonymously online likely has another (typically larger) impact.

I can understand a society that wants to enshrine the first kind of privacy (whispering in private, off the record) as a basic right, but to lump anonymous self-expression (a relatively recent widespread phenomenon) in with this strikes me as rash. Certainly, many of us have come to take for granted the ability to say things anonymously that will not be associated with our "real world" identities, and it feels bad when our expectations are violated, but that doesn't make it a right.

When considering whether or not something should be treated as a right, we tend to look backward, for precedent. I wonder about the limits of this approach. It demands that we make forced analogies that don't really fit. We select the analogy to the past that suits us ("posting anonymously is like publishing controversial political tracts under an assumed name," or, if you're on the other side, "posting anonymously is like the hoods that members of the Ku Klux Klan wore"). Instead, it seems to me to be worthwhile to consider the aggregate effects on society, now and for the foreseeable future, of enshrining something as a right. Would a world in which we had to live in fear of being associated with everything we say and do anonymously online be a better or worse world?

Reasons why anonymity is good: it makes it easier for folks who are seeking help for a stigmatized condition to receive help. It facilitates "whistle-blowing" and ensures confidentiality of sources, making it easier to hold powerful institutions accountable. Anonymity is also a kind of bulwark against surveillance and the permanence of online memory and the ease with which messages are taken out of context, widely disseminated, framed in misleading ways, and used against the speaker. This last one seems like a biggie. The tactic of using one's past words against one's future self was once a technique used by the press on politicians, but now it seems to be used by anyone on anyone. And so we cling to anonymous self-expression as a way to retain some freedom of speech.

Reasons why anonymity is bad: it permits hostility without consequences, on a massive scale and, thus, normalizes hostile thinking and behavior. Hostile people aren't as isolated as they were before; they can easily find one another and, together, justify their hostility as a defense of their rights, freedom, or as an act of justice.

So, if we lose trust in the ability of any communication tool to provide us with true anonymity (as would likely happen if a few more high-profile un-maskings were to occur), we're probably going to lose some good things and some bad things. Any attempt to determine whether anonymity should be defended as a right should consider the weight of those things. I think that gets lost in debates about the merits of, well, a lot of things these days. It isn't enough to link a particular course of action to bad consequences. You must consider all of the consequences as well as all of the consequences of the other plausible courses of action, to the extent that such things are possible, before arriving at a decision.

It could be that younger people who've grown up with the ability to express themselves anonymously may simply dislike the prospect of losing this ability so much that it may not matter whether we officially enshrine anonymous speech as part and parcel of the right to privacy. The demand for it might be so high that economically and politically (rather than ethically), it will be treated as a necessity. Conversely, the decay of true anonymity (and the fear of being "outed") may be an inevitable consequence of a highly networked world in which sufficiently motivated people can unmask whomever they want, regardless of how badly the majority of folks wish that anonymity were a protected right.

Tuesday, March 28, 2017

Recording each other for justice

Each Saturday, a recurring conflict takes place outside of an abortion clinic, a conflict that sheds light on how media technologies raise questions about the ethics of surveillance and privacy. It's a good example of how these issues are not just about how governments and corporations monitor citizens and consumers, but how these questions arise from interactions among citizens. My understanding of precisely what occurs around the abortion clinic is based on anecdotal evidence, so take everything in this entry with a grain of salt. This is one of those phenomena that I'd love to devote more time to understanding, if only I had it. There is a compelling ethnography waiting to be written about it, one that would be particularly relevant to communication law scholars.

The space outside the clinic is occupied by two groups: protesters seeking to persuade individuals entering the clinic not to get abortions, and a group of people (hereafter referred to as "defenders") seeking to protect or buffer individuals entering the clinic from harassment from the protesters. This particular arrangement of individuals could have occurred long before the advent of digital networked technologies. What I'm interested in is what happens when you add those tools to the mix.

Protesters are often seen using their phones to take pictures of defenders and the license plates of defenders' cars. Protesters are, by law (at least as far as I know), permitted to do this. Both the protesters and defenders occupy a public space, or at least a space that is not "private" in the sense that one's home is private. Protesters are not, as I understand it, allowed to take pictures of individuals entering the clinic, as this would violate their rights to privacy as patients of the clinic. Even though they are not yet inside the clinic, if they are on the clinic's property and they are patients, their rights to privacy extend to the area around the clinic. If defenders believe that protesters are engaging in unlawful picture-taking, the defenders will use their phones to video record the protesters taking pictures.

Predictably, tempers occasionally flare, voices are raised, people get in each others' faces, and when behavior that approaches the legal definition of harassment or assault occurs, everyone starts recording everything with their phones. The image of extremely worked-up people wielding cameras as one would wield a weapon, recording each other recording each other in a kind of infinite regress of surveillance, strikes me as ludicrous, partly because the act of recording someone with your phone makes you appear passive, somewhat nerdy, and almost...dainty.

What is the point of all this recording? In most cases, the intention seems to be to catch others in an act of law-breaking, to create a record of evidence to turn over to the police that could be admissible in a court of law. But in other cases (e.g., the protester's pictures of defenders' license plates), the intent seems to have little to do with the actual law. The police would have little interest in the license plate numbers of law-abiding citizens. So, why are they doing this, and what happens to these pictures?

Enter social networking sites (SNS). The pictures, as I understand it, are subsequently uploaded to an SNS group page that contains a collection of pictures of defenders and their license plates. It is possible for SNS users to make such groups "secret" and/or invitation-only, so that the groups cannot be found by those in the pictures. My understanding is that this leads those who are in the pictures to disguise their identities online so as to infiltrate the secret groups, acting as moles.

But what is the point of developing these online inventories of people who are defenders or protesters or, for that matter, publicly state any particular belief? Is all of this just an intimidation technique? And if so, is it effective? Is there a kind of panoptic logic at work here, in which the fear comes from not knowing precisely who will see those pictures and in what context they will be seen (e.g., by a would-be employer 20 years from now)? Are they using the pictures as part of a concrete plan to take action against the individuals in the pictures, or is it not that well thought-out? Do people taking pictures and amassing inventories like this do so because they imagine that someday, the law will change, or collective sentiment will change, at which point it will become damning evidence that one was affiliated with a group that is then seen as abhorrent? Is it akin to a German taking a picture of a Nazi sympathizer in 1939, banking on the fact that while being a Nazi at that time was socially acceptable, it would not be so for long, and that when it became unacceptable, the picture could be used to discredit or blackmail the person?

I don't think this phenomenon is relegated to protesters or defenders of abortion. I often think of it in a much more benign context: traffic violations. Let's say I were to record individuals making illegal U-turns (or not using their fricking blinkers). The police may not be interested in my small-stakes vigilantism, but what if I were to amass an online inventory of lousy drivers caught in the act, one that included their license plates and pictures of their faces? The judgment here isn't taking place in a court of law. It's a kind of public shaming via peer-to-peer surveillance.

Aside from questions of motivation, the phenomenon raises questions of legality and ethics. It's my understanding that it is okay to take pictures of other individuals in public places, and/or their cars and license plates, and post those pictures online as long as the individuals and cars were in public. Perhaps at some point, were someone to take thousands of pictures of someone leaving their office every day, it becomes an illegal activity (stalking), but I imagine the line to be blurry (are two pictures okay? Is it only not okay when you've been told to stop?). But what if you only take a single picture of an individual in public, and that individual isn't breaking the law, but you put that picture together with pictures of many other people engaged in a similar activity in an effort to publicly shame them or intimidate them? That seems illegal, but how do you prove what the effort or intention really was? Would it qualify as defamation?

Maybe it hinges on whether or not someone is a public figure. If a protester was quoted in a newspaper or their picture was on the front page of the New York Times, then this seems like fair game. The individual in the paper might suffer negative consequences as a result of being widely known for his quote and his behavior, but what are we supposed to do, not allow the press to depict protesters? What if a blog quoted a protester and featured a picture of him? Functionally, the blog isn't much different than the newspaper, and the lines between the two are blurry. The case of the protester quoted in the New York Times is realistic but rare, whereas there will likely be a great many quotes and pictures on SNS of people stating all sorts of beliefs and doing all sorts of things that may be worthy of judgment to some people at some time.

Even if we decide it fits into a certain legal or ethical category, this may not matter if the behavior remains secret, if technology enables it but doesn't bring it to the surface (i.e., the problem of policing secret inventories).

One possibility is that the law has simply been outstripped by technology. In place of the law, people respond to unethical uses of technology with, you guessed it, more technology. In this case, technology developers make it harder to take surreptitious pictures of people by making it difficult to turn off the camera shutter sound on camera phones. Other developers have engineered clothing that reflects light back at cameras in such a way that the wearer's image is rendered unrecognizable. It could be argued that much of what the justice system once handled now plays out through technology, outside of traditional channels of justice administration, and that the best thing we can do is acknowledge that reality and consider how best to create an ethical world given the imperfect state of things, at least for the foreseeable future.

Wednesday, December 14, 2016

Thoughts on Post-2016 Election America: Re-examining the "Fringe Fighting" Hypothesis

In my conversations with people (both online and face-to-face conversations) about the post-election media environment, I'm finding it increasingly difficult to maintain my position as a dispassionate optimist. Is this because the world itself is contradicting that position, or is it because I'm being met with more resistance from those around me? That's what I'm still trying to sort out.

Many of the conversations come back to the premise that America is somehow more hostile than it used to be (not just that our leaders are objectionable and/or dangerous, but that the increasing danger resides in our populace). There are also conversations about what particular politicians are doing, will do, or can do, but I want to set those aside for a moment and focus on premises relating to the American population and the extent and intensity of its hostility toward one another. Previously, I've argued that the impression that we're a nation divided is largely an illusion, that the true conflict is mainly at the fringes, but that was before the election. So, I'd like to revisit that argument in light of discussions of public opinion, fake news, and a general sense of threat.

Essentially, my argument was that the strong disagreement we see in our culture is relegated to small groups of individuals on either end of an ideological spectrum, groups that manifest themselves in highly visible ways. Although it can appear as though our entire culture is in a state of unrest (and is getting worse in this respect), this may be an illusion. To paraphrase myself:

This illusion occurs when we mistake uncommon, extreme online behaviors for exemplars. We implicitly or explicitly link hundreds or thousands of people actually stating a belief online (or, in this case, acting hostile toward other Americans online) with the behaviors and beliefs of a larger mainstream group that, while not actually stating the belief, has stated or acted in such ways that makes it clear that they believe in some of the same things as the group that actually states the belief online. In the U.S. right now, the large groups to which we most often extrapolate are "liberals/Democrats" and "conservatives/Republicans." Dissimilarities between ideas held by the small group actually stating the belief (or actually being openly hostile) online and members of the large group who are not stating the belief (and are not actually being openly hostile online) are ignored in favor of whatever they have in common. This is justified on the grounds that what the small group and the large group have in common is thought to represent a shared, coherent ideological framework (see "Arguing with the Fringes" for further details).

In retrospect, I shouldn't have used the word "fringe" to describe these small groups. The word feels dismissive and judgmental, which is not what I intended. Really, I just want to make a statement about the size of the groups that are in strong disagreement with (and are hostile toward) other Americans. Still, the term "fringe fighting" has a certain ring to it, and I can't think of a suitable alternative word for these groups at the moment, so for the purposes of this post, I'll stick with "fringe."

Arguments/evidence for the Fringe Fighting hypothesis

Though there is more talk about social unrest than there was when I wrote "Arguing with the Fringes," this talk fits a "moral panic" narrative in which people become extremely alarmed over novel behavior that is rapidly becoming popular (often involving media use) and extrapolate to a future world in which the novel behavior radically changes our world for the worse. There are, of course, concerns about rapidly spreading novel behaviors that turn out to be justified, and the dismissal of such concerns as hysterical can have dire consequences. But there are also dire consequences to succumbing to overblown fears, namely rapid declines in interpersonal and institutional trust that are essential to functioning societies, in addition to the "boy who cried 'wolf'" problem (if one's concerns are found by others to be overblown, one loses credibility, forfeiting the ability to call others' attention to future threats). Given the similarities between the talk of social unrest and previous instances of moral panics, it at least seems worthwhile to consider the possibility that concern about Americans' hostility toward one another is a moral panic.

It is also important to ask, "What are we using as indicators of how 320 million or so Americans think or feel?" How Americans voted and what they say on the internet seem to be commonly used indicators. The majority of Americans did not vote in the last election, so it would be difficult to use 2016 voting behavior to assume anything about how "America" feels about anything. For those who did vote, whom they voted for is a pretty weak signal of any particular belief, as these candidates, in effect, bundle together various disparate beliefs, and some votes are not intended as endorsements of anything the candidate stated or believed but instead are merely "protest votes."

What people say on the internet is also a weak signal of overall public opinion. For one thing, comparatively few people post about politics and social issues (roughly one-third of social media users, according to Pew). And many of those are posting information that is visible only to those in their immediate social circles (e.g., posting on Facebook). Such information is highly salient to individuals consciously or unconsciously forming beliefs about what other Americans believe, but it is hardly representative of Americans as a whole.

We may also question assumptions about the impact the hostility we're able to see is having. The extreme voices may have been largely filtered out because most of their friends unfollowed or hid them. The only people who don't filter out the extreme voices are the ones who already would have believed whatever the poster is trying to convince them of. What good is sharing a news story if very few people follow you, and those few people already knew about the news you're sharing? As a side note, it would be nice to have some information about actual audience and the practice of unfollowing to go along with the information about sharing and 'liking' information online.

Better evidence about what America, as a whole, believes can be found in the General Social Survey, which attempts to look at what ALL Americans believe rather than the few who contribute content to the internet. Data from the survey suggest that American public opinion on a variety of social issues has been relatively stable over the past few decades; an abrupt shift, though not impossible, would seem unlikely.

Finally, there is evidence that the growth of political animosity in the U.S. is a trend that pre-dates social media, so perhaps social media is just making visible what was already there. It should be noted that animosity (an attitude) is not the same thing as hostility (a behavior).

Arguments/evidence against the Fringe Fighting hypothesis

There is some evidence of growing distrust of the media. If you're not getting your information from the media (or whatever you define "mainstream media" as), where are you getting it from? You could either get it directly from alternative news sources or get it via social media, which carries stories from those alternative news sources. Many existing measures of exposure to news have yet to catch up with the way we consume news. It is entirely possible, given the growing distrust in mass/mainstream media and the lack of good indicators about where Americans get their information, that Americans have quickly shifted toward consuming news stories that frame current events chiefly in terms of conflict between groups of Americans.

What does this have to do with hostility? Well, those intra-American-conflict news stories could play to whatever various groups are inclined to believe about various other groups, play to one's fears (climate change on the Left; undesirable social change on the Right; an unfair, "rigged" economy and government on both the Left and Right), decrease trust and empathy, increase fear and cynicism, and sow dissent. Very few people may be initially fighting with one another, but if those people can disseminate information so as to convince disinterested others that the people on the Other Side of the fight are a growing threat to everyone, they can effectively "enlist" those disinterested others in the fight. What starts as a fringe fight could quickly grow into something larger.

So What?

For a moment, let's assume that there is a problem, that a significant number of people in America disagree with one another to a significant degree. Is this necessarily bad? Disagreement isn't bad in and of itself; arguably, it's desirable so as to avoid the pitfalls of "groupthink." But strong disagreement could lead to incivility (which, some would say, is antithetical to empathy and compromise, and compromise seems like a prerequisite to having a functioning democracy and economy) and censorship (which is antithetical to democracy and the progress of science and education). Incivility could lead to violent attacks (though I've heard at least one scholar argue that arguments can be uncivil and not be bad in these senses). In so far as we see evidence of strong disagreement growing and/or leading to incivility, censorship, and violent attacks, then yes, it's bad.

Assuming we have a problem, what do we do about it? It's possible that the traditional channels by which we sought to address social unrest would no longer work within a decentralized, non-hierarchical information system like today's (or tomorrow's) internet. This is the "everyone finds their own facts" problem (a topic for an upcoming blog entry). Even if you engage in a dispassionate analysis of evidence and find support for the fringe fighting hypothesis, or evidence that people are consuming more and more biased information and wandering further from any objective truth, what are you going to do with that information? You might teach it in a class, but if people don't want to hear it, then people might just start distrusting teachers more. You might publish in an academic journal, but what good is that when the journal loses its sense of authority and credibility? If you publish in that academic journal and your research is covered by The New York Times, what good is that if fewer and fewer people trust The New York Times?

I'm left with a desire (quixotic as it may be) to try to step outside the problem. I know that many are fond of casting intra-American conflict (online and offline) as part of a global phenomenon, but here again, I think we're making a facile analogy, choosing to see the similarities and ignoring the many differences. Surely, not every country is experiencing precisely the same kind of online conflict problem that Americans are experiencing. I was reminded of this point while attending this year's Association of Internet Researchers annual conference in Berlin, where I was fortunate enough to present on a panel with researchers from Israel, Denmark, and the U.K. I was left with the notion that not all online discussion forums are the same with regard to conflict, that intra-group conflict is not inevitable in the digital era.

We might also step outside of the internet for a moment. Anecdotally, I've observed a few folks taking a break from social media and news media because the emotional pitch of online discourse became so shrill as to be unbearable. I'm reminded of an idea put forth in Joshua Rothman's book review in the New Yorker (as well as the book he was reviewing, I assume) that our face-to-face interactions with individuals and our feelings about the political groups to which those individuals belong are often in conflict. Short version: we love (or at least tolerate) our neighbors, but we hate the political groups to which they belong. The basic idea is that it is harder to hate a person in person. It will be important to see how our face-to-face interactions at work, at school, with family, and in public places progress alongside our perceptions of behavior online.

So, where to go from here? For starters, it seems worthwhile to examine the framing of news about current events: do we really see an uptick in exposure to intra-American conflict framing, or are our filter bubbles fooling us into thinking this? It's also important to understand more about the contexts in which online hostility occurs (a goal of my current research project examining hostile behavior on Reddit) and when and where this is associated with offline hostility.