Thursday, August 29, 2019

What's Wrong with News?

Since Jimmy Wales came to speak at the University of Alabama last weekend, I've been thinking more about WikiTribune, his new-ish news curation/creation venture. This seems like a rare opportunity to contribute to the building of a tool that has the backing of the creator of one of the most popular, influential, important websites in the world. Of course the venture might fail, because most new ventures fail and because news is a tricky thing to get right, maybe trickier in a lot of ways than building something akin to a library or encyclopedia. WikiTribune seems to be functioning right now as a collectively curated news aggregation website; it might evolve into something else in the future, but I'll assume that's what it is for now.

What might be the starting points for a platform like this? If news is broken (which seems a fairly uncontroversial take in 2019), what part of it could be fixed by a Wiki-type platform?

First, there's the issue of factual accuracy. A decent way to build a newsfeed containing only factually accurate news is simply to link to stories from sources that can be held accountable, that have a reputation to protect, and that typically follow standard journalistic practices. You weed out the parody stories, the polarizing disinformation, the deliberate attempts to poison discourse with fake news. This seems like something a dedicated group of volunteers could do.
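
In its crudest form, that kind of source-level curation is just an allowlist. Here's a toy sketch; the domains and function name are hypothetical placeholders, not a judgment about which outlets qualify:

```python
# A toy sketch of the source-based filter described above: only pass along
# stories from an allowlist of accountable, reputable sources. The domains
# here are placeholders, not a claim about which outlets belong on the list.
from urllib.parse import urlparse

TRUSTED_SOURCES = {"apnews.com", "reuters.com", "bbc.co.uk"}

def passes_source_check(url: str) -> bool:
    """Return True if the story's domain is on the allowlist."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    return domain in TRUSTED_SOURCES

print(passes_source_check("https://www.reuters.com/some-story"))       # True
print(passes_source_check("https://totally-real-news.example/story"))  # False
```

The code is trivial; the human judgment about what belongs on the list is the actual work, and that's what the volunteers would supply.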

I feel like the factual accuracy problem isn't as widespread as some believe, that networks of upvoting and sharing bots only make it seem as though a handful of untrue stories are being very widely read and believed. Of course, there are most likely relatively small pockets of people who actually believe false stories and act on those beliefs, and despite their relatively small numbers, they can be very harmful to society. And various characteristics of popular social media platforms such as Twitter, YouTube, and Facebook (such as algorithms that don't vet information, the ability for anyone to post and share anything, being too big and too fast to moderate) increase the footprint and influence of false stories beyond what they were in the pre-internet days.

I doubt WikiTribune would lure the type of folks who seek out false stories away from their favorite hyperpartisan sites, but who knows what might happen if it eventually became even a tenth as popular as Wikipedia. Maybe it would end up as a kind of standard system for current events information vetting, a better middle layer between journalism and audiences than social media currently is. Most people would then recognize the false news stories and sources in the way that most people did before the internet: as a kind of inevitable fringe to the information ecosphere, relegated to a recognizable outskirt rather than popping up in the midst of a feed of journalism vetted in the traditional way (as tends to happen with news via social media). It's important to know whether factually inaccurate news exists, how much of it exists, and who is sharing and reading it, but it's also important to think about where, in our information environment, it resides. Is it concentrated or distributed? In the center or on the periphery?

During his talk, Wales brought up the problem of clickbait headlines: headlines that mislead audiences, play on our emotional, tribal, impulsive tendencies, and exploit a kind of shallow curiosity. So, maybe WikiTribune curators use a rough guiding principle: avoid posting clickbait headlines (or maybe just rewrite the headlines. Often, the stories are fine, but the headlines seem like they were written by search engine optimizers). Obviously, clickbait is a term with a fuzzy definition, but that's nothing new, and it certainly doesn't stop websites, publishers, etc. from enforcing various kinds of vaguely defined content standards. Get multiple experienced coders to rate each headline's clickbait-iness, and when they agree that a headline is clickbait, don't post it or rewrite the headline.
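
That rating-and-agreement process is simple enough to sketch. Here's a minimal illustration, in which the 1-5 scale, the threshold, and the decision labels are all my own assumptions rather than anything WikiTribune actually does:

```python
# A minimal sketch of the rater-agreement rule described above. Several
# trained coders score each headline's clickbait-iness on a 1-5 scale;
# the scale, threshold, and decision labels are illustrative assumptions.
from statistics import mean

def moderate_headline(ratings: list[int], threshold: float = 3.0) -> str:
    """Decide what to do with a headline given its clickbait ratings."""
    if max(ratings) - min(ratings) > 2:
        return "escalate"  # raters disagree sharply; collect more ratings
    if mean(ratings) > threshold:
        return "rewrite"   # raters agree it's clickbait; rewrite the headline
    return "post"          # raters agree it's fine; post as-is

# Three raters all flag a headline as highly clickbait-y:
print(moderate_headline([5, 4, 5]))  # -> "rewrite"
```

The hard part isn't the code, of course; it's training raters so their judgments of clickbait-iness actually converge.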

Wales also brought up ads and their effects on content, one of which is to essentially incentivize the creation of clickbait headlines. But I guess I'm a bit unclear on how to account for the fact that many of the sources to which WikiTribune links are ad-supported. Related to this, how does WikiTribune work with paywalled news sites? Wales seemed to be pro-paywall, to endorse the idea that if more news sites were subscription-funded rather than ad-funded, the quality of information would improve. Would WikiTribune just give you a taste of the article - a kind of abstract or summary, something a bit more than a headline - some kind of compromise that would give the reader some value while not totally substituting for the story itself, perhaps pushing users to the full article in the way that Wikipedia might push users to the source material? That seems reasonable to me.

He also brought up algorithms. Perhaps algorithms also, in some way, guide news consumers and creators toward more clickbait-y headlines. If you have humans in the loop, maybe it would be easier to slow or stop the spread of clickbait and false news.

He also brought up the fact that Wikipedia is not totally open and it's not a democracy, and I think that's a way of setting this apart from the primary news aggregator of our time, Reddit. Reddit is ostensibly open, and is a kind of democracy: registered users submit and vote on content, thereby increasing or decreasing its visibility. Over time, this has led to certain kinds of news stories ending up on the front page. You might call it a product of 'hivemind.' It has a certain narrow tonal range, and a certain focus on particular topics that reflects the interests and values of the voting public.
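
To make that mechanism concrete, here's a rough sketch of vote-driven ranking, loosely based on the 'hot' formula Reddit open-sourced years ago (simplified here, and not necessarily what runs in production today):

```python
# A rough sketch of vote-driven ranking, loosely based on Reddit's
# open-sourced 'hot' formula (simplified; not the current production code).
from math import log10
from datetime import datetime, timezone

EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)  # roughly Reddit's epoch

def hot(ups: int, downs: int, posted: datetime) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))      # each 10x more net votes adds +1
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds()
    return order + sign * seconds / 45000  # newer posts outrank older ones

# 150 upvotes, 20 downvotes, posted just now:
print(hot(150, 20, datetime.now(timezone.utc)))
```

Nothing in that formula asks whether a story is true, important, or representative; it only registers what the voting public rewarded, and how recently. Hence the hivemind.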

There is recent talk of moderators playing a bigger role in directing what gets posted in subreddits, and the inevitable push-back by those who value unfettered speech and a democratic public sphere above all else. I doubt that push-back will fully subside on Reddit because voting for the noteworthiness of content is a defining characteristic of the platform. It's not ancillary in the way that 'likes' are to Instagram. But if you start a new site that simply doesn't have that as one of its defining characteristics, you don't necessarily have that problem. Sure, a lot of people believe strongly in a flat hierarchy, an open democratic sphere, but I think that enough people have been repulsed by what that yielded to want something else.

Somewhat more controversially, we might consider the question of tone and emotion in news, and whether that needs 'fixing,' or could be fixed. Personally, I'm turned off by the abundance of outrage and fear that many news stories in 2019 evoke, but I recognize that an argument can be (and has been) made for the value of anger and fear in this sphere. Here, I think it's more a matter of personal preference, and not everyone would agree, but some of us would sometimes like to see a newsfeed that features less fear and anger. I feel like that's where a lot of popular democratic news feeds, like r/news on Reddit, end up: a kind of distilled outrage and fear.

Here, we need to think about the purpose of news. Is it a kind of 'immune system' for societies whose sole purpose is to detect threats and alert us to them? Well then, it seems entirely appropriate that news would invoke anger and fear. Should news be broader, emotionally, than that? Should it invoke wonder, curiosity, gratitude? You have subreddits like r/upliftingnews that cater to another point on the emotional spectrum, so it's not as if there isn't already a place for that in many people's information diets.

Then there's the matter of filter bubbles/echo chambers, and I don't know that there's much that WikiTribune could do about that. I think worries about filter bubbles and echo chambers are somewhat overstated and/or that we'll never fully solve that problem, and while we wait around for the perfect solution, we're making do with a pretty lousy news ecosystem that's run, by default, by impulsive clicks and a lack of accountability. I think pure democracy was one potential solution - giving the power of vetting and curation to anyone and everyone - but we've seen how well that went.

Wikipedia never solved the filter bubble problem when it came to creating an encyclopedia. The editors don't remotely reflect the readership, in terms of race, gender, education level, ideology, etc. Wikipedia isn't flawless, but it seems to be working well enough, and I gather that there is a sense within the organization that they should try to broaden the diversity of people who edit it to include more women and people of color. Should it also try to include more people who identify as politically conservative? Does it make sense to pursue intellectual diversity among information curators as well?

It looks like the current version of WikiTribune features a way to follow particular feeds curated by particular Wiki-editors. If that's the case, then what's to stop a couple of ultra-liberal or ultra-conservative editors from setting up feeds full of clickbait and partisan vitriol? Is there some overseer who decides when editors have gone too far, similar to the way that admins on Reddit ultimately have control over volunteer moderators? Might that decision as to what goes too far be motivated by one's ideology?

Yes. But I get the sense that Wikipedia has already dealt with similarly motivated people who have tried to turn Wikipedia into a more partisan information environment, and it has some sort of mechanism for dealing with them that seems to be largely effective. Is the mechanism entirely democratic and open? Probably not, but now might be the time that some of the public revisits the relationship between direct democracy and news curation and distribution.

Even if something similar to WikiTribune existed in the past (and I get the sense that that's the case) and ultimately failed, that does not determine whether WikiTribune will fail. There are many, many cases in which a creation didn't succeed because it was timed poorly. Maybe we had to wait to see how poorly open, democratic, free-for-all, algorithmic, impulsive curation of news would go before there would be enough demand for something like WikiTribune to be sustainable. I'm just happy to see someone trying something like this right now.


Saturday, August 24, 2019

A World Without Frames

I had the privilege of seeing Jimmy Wales, founder of Wikipedia, speak at the University of Alabama, courtesy of The Blackburn Institute. In anticipation of seeing him speak, I'd been reflecting on the value of Wikipedia in a post-2016 world. Since the 2016 U.S. Presidential election, it seems as though the discourse on social media has become more toxic and less fact-based, that traditional news outlets don't quite know what to do with Trump and his international equivalents, and that non-traditional news outlets are creating and disseminating biased and false information about our world. All the while, Wikipedia continues to churn away, largely free from the toxicity and contentiousness that has gripped the rest of the internet and, seemingly, the rest of the world. How did they do it?

I went into the talk with a slightly more specific version of that question: is it the particular approach/model that Wikipedia uses that is responsible for its relative success vis-à-vis the Truth, or is it the particular domain in which it operates - that of the encyclopedia? Wales didn't quite speak to this question, but he did talk about his relatively new passion project: WikiTribune. In a way, the fate of that venture will answer the question, as it applies the model and logic of Wikipedia to the world of news and current events. Wales' working theory seemed to be that ads were largely to blame for the degradation of news: the way the online ad economy works puts all websites on a level playing field, all competing against one another for attention. News sites do not just compete with other news sites; they compete with parody news sites, entertainment sites, gossip sites, etc. He praised the recent move toward the subscription model, noting that the New York Times has seen financial success pursuing it. Subscriptions prompt users to think, 'what is the overall value of this product, in the long term?' That's a key shift in thinking, from what you click on in an impulsive manner to what you value, a shift from short-term thinking to long-term thinking. So, if we get rid of the ads, do we improve the quality of news and discourse around news?

My suspicion is that there is another factor at play: whether the content pertains exclusively to current events. News must privilege certain stories over others in a way that an encyclopedia or a library does not, assuming it has a front page (can we conceive of a news site without a front page, regardless of whether said front page is personalized?). Here, the decades of research on framing and agenda setting are relevant: by virtue of editorial decisions about what to cover and what not to cover, news gets us to think about certain issues or events (or certain aspects of those issues/events) and ignore others. Encyclopedias do not direct attention in quite the same way. Sure, it could be argued that within a given entry, an encyclopedia like Wikipedia chooses to emphasize certain aspects of the subject while ignoring or downplaying others, and thus frames the subject in a way that shapes perception. But I'd argue that the way in which Wikipedia has incorporated different perspectives on contentious topics into entries reduces this effect.

Can you do the same thing with news? Is that what WikiTribune will be?

I haven't a clue. But it was kind of thrilling to be in the room with someone who was taking a stab at solving a problem of this scope, someone who was uniquely positioned to stand a decent chance of succeeding.

Sunday, May 19, 2019

The Now Audience and the Later Audience

Game of Thrones is wrapping up its run on HBO, and watching the show and the reaction to the show has rekindled my interest in storytelling and audiences. I'd like to avoid discussing the actual qualities of the show, as other online voices seem to have that pretty well covered. I'd like to focus on the abundance of voices and the qualities of the conversation about the show, and how it resembles conversations about the endings of other popular TV stories.

Twitter, Reddit, and the blogosphere (i.e., the 'now' audience) give us a segment of the public's immediate reaction to a story. As such, it seems to come packaged with the story itself: we get the show, and we get the reaction of this part of the public along with it. For a long time, we'd gotten the judgments of professional critics alongside the release of cultural products like TV shows, books, and movies. These judgments would maybe help audiences to sift through the large quantity of content and decide what was worth their time, or perhaps open a door to a new way of interpreting those works. And surely, many audience members thought of critics as elitist, their opinions being worth no more than the opinions of anyone else, and thus ignored them.

I doubt that many people use the immediate judgments of the audience that is vocal on social media as a guide to help determine what they should watch and what they should ignore (perhaps the mere fact that people are talking about a show, even if they're trashing it, serves as the signal that the show is worth watching). Instead, I think it's mostly of interest for people who are already watching. Most of the conversation seems to be commiseration: finding the voices that echo the way you, as a viewer, feel but perhaps could not articulate in that way, and then amplifying those voices through upvotes, likes, shares, and retweets.

It is tempting to see the reaction to an ongoing story like Game of Thrones as the reaction to the show. However, the segment of people who talk about the show online is just a small portion of the overall audience for the show, and certainly not a representative sample of that larger group. It's hard to know what the rest of the audience makes of the show, and, in the absence of any information about that larger segment of the audience, it's easy for our brains to just fill it in with adjacent, semi-relevant information (see the availability heuristic): the reaction on social media.

Beyond that, there's also a widely held assumption that the reaction on social media drives subsequent reaction or opinion. This view acknowledges that the voices on social media are but a sliver of the overall public, and that they're not a representative sample, but assumes that they wield an influence over the larger public. This may not occur through some conscious subservience of the public to the opinion-leaders on Twitter, but, again, maybe because of an unconscious mental heuristic. The social media reaction is the initial impression, and initial impressions greatly influence subsequent impressions, as people tend to ignore other relevant information (in this case, actual qualities of the show).

There's no doubt that news media plays a role in this process as well. Whereas news outlets in the past would have noted the number of people tuning in to a broadcast and used that as a kind of index of cultural importance, the news media now can talk about the reaction to the show on social media. This coverage amplifies those social media voices. I'm betting that more of the news articles about the ending of Game of Thrones are about the online reaction to the show than was the case with the endings of previous TV shows, and so the social media reaction to shows is likely 'louder' than it once was, making it more likely that the general public sees that reaction as the reaction to the show.

This topic - the relationship between voices on social media, news outlets, and public opinion - is much larger than Game of Thrones or popular TV in general. How much do the people who post on Twitter or, more broadly, on social media really influence the culture at large? The answer, of course, depends on a variety of factors, though I think many just assume that those voices drive the interests and opinions of the rest of the masses, for the reasons above.

When might this not be the case? When might the social media voices not reflect or influence the reactions of the larger public in subsequent years? I've been thinking about the importance of a mechanism to make visible, or monetize (same thing?), the non-immediate reaction of the larger public (i.e., the 'later' audience). In the case of news events, perhaps historical documentaries are the mechanism. Years after an event, we tell a story about the event and often contextualize the immediate reaction of the public at the time (it's hard to think of a better example of this than Ezra Edelman's O.J.: Made in America). The perspective of the historical documentary is a corrective one. But most news events don't get this treatment. Most news events are of such great importance to the people living through them and of such lesser importance to later audiences that there's not enough incentive to create a mechanism for registering anything other than the immediate social media reaction.

In the case of popular stories like Game of Thrones, the mechanisms for capturing the non-immediate reactions of the larger public are international streaming video sites like HBO GO, Netflix, Amazon Prime, Hulu, and Disney+. Years from now, these shows will be available in media markets to audiences who have likely forgotten, if they were aware of it in the first place, the initial reaction to the show on social media. The reaction, which was an overwhelming part of the context in which the show was viewed at the time it was first released, won't be any part of the context in which they watch the show. In terms of financial incentives, the importance of the 'later' audience is far greater than that of the 'now' audience: there is far more money to be made off a show in, say, the Chinese media market from 2025-2035 than from English-speaking media markets in 2019 (i.e., the media markets that are most likely influenced by or reflected in the online reaction to a show).

For a show that is specifically tied to a cultural moment in time and may not have much of a life after its initial run, the subsequent streaming market in other countries may not be as important (although the ongoing success of Friends, what I think of as a pretty dated product of the 1990's, suggests that we may overestimate the importance of universality in determining a show's subsequent success). But Game of Thrones strikes me as the quintessential example of a show that isn't built for a particular time or place, and was probably sold to HBO as such. Of course, the show reflects some of the values of its creators who are embedded in 2010's United States culture, but it is not nearly as embedded in that cultural context as the social media reaction to the show. The online reaction to the show is certainly about elements of the show itself - character arcs, pacing, etc. - but it also reflects the experiences, emotions, and politics of the people writing about the show in 2019. When people watch the show 25 years from now in another country, I think they'll mostly just see a show about dragons and deceit (which are pretty timeless motifs).

To an extent, I've already seen this disjuncture between the 'now' audience and the 'later' audience with The Sopranos. That show ended just as the 'now' audience was finding its voice online. Critics had already noted the way the show had lost its footing, and then came an ending that seemed, at the time (and perhaps even now), designed to piss off the 'now' audience.

But how is the show experienced by the 'later' audience in 2019? It's likely that much of the 'later' audience still finds fault with the final seasons of the show, but they don't watch single episodes or even single seasons in isolation, and they don't have time to dwell on whether or not the episode or season disappointed them before moving on to the next episode or season. Often, they consume the show as a whole. More important to this discussion, they likely don't go searching for the social media reaction to the show as it was aired 20 years ago. Some of that reaction is still there on the web, and, in some way, the reaction to the ending feels more dated than the show itself. The Sopranos, like many other shows that occasionally ran afoul of their most ardent and vocal fans, has a long shelf life. Even Lost, a show I thought of as the best example of a TV show with a disappointing ending, is apparently being reappraised.

It's impossible to know for sure what the 'later' audience will think of a show, but watching the long-term success of shows like Friends and The Office on streaming platforms, and watching the amount of money streaming services like Netflix will pay to keep them, has me thinking about the ways in which it is easy to overestimate the market power, and perhaps the cultural power, of the 'now' audience.



Thursday, April 25, 2019

Why do some people believe smartphones and social media are harmful?

I try to read and listen to critiques of smartphones and social media with an open mind. Typically, these critiques note the addictive qualities of these technologies and lament their effects on our abilities to engage in deep, sustained contemplation and substantive social interaction. A decade of experience conducting research on the uses and effects of social media has taught me to be skeptical of these claims. The first thing we would expect to see, if these claims were true, would be vast differences between those who use smartphones/social media a lot and those who use them a little or not at all. Among younger folks, we'd expect differences in academic achievement. Among everyone, we'd expect differences in happiness, well-being, depression, and/or the nature of their relationships with others. Even if such differences existed, this would not necessarily suggest that these technologies caused changes in individuals. Perhaps individuals possess some existing characteristics (such as a lack of a clear sense of purpose in their lives) that, ten years ago, would have caused them to watch a lot of television and be unhappy, and that now cause them to spend a lot of time on Facebook and be unhappy. When I look at the sum total of what studies have found over the past ten years, I don't even see much evidence of the correlations you would expect to see if the claims were true. To the extent that there are correlations between use and negative outcomes, the magnitudes of the effects tend to be small (far smaller than is suggested by the level of concern they've prompted among many).
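
To make 'small' concrete, consider the usual back-of-the-envelope arithmetic. The r value here is illustrative, in the ballpark of commonly reported correlations, not a figure from any one study:

```python
# How much of the variation in well-being does a 'small' correlation explain?
# r = 0.10 is an illustrative value, not a result from a specific study.
r = 0.10
print(f"r = {r} explains {r ** 2:.1%} of the variance in the outcome")  # -> 1.0%
```

When 99% of the variation in an outcome is left unexplained, it's hard to square the data with the scale of the alarm.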

This doesn't rule out the possibility that within the more recent past (say, within the last year or two), smartphones and social media may have started to have the kinds of negative consequences about which many people worry. Research moves at a slow pace, and the technologies and the ways people use them are changing. Personally, it's hard to look at some of the freshmen I teach in large lecture classes, who are seemingly unable to stop using their phones, and not think, 'Maybe the worriers are right after all. Maybe we had to wait until a generation of young people had a chance to develop technology use habits at a young age to really see the effects of the technologies on people.' I'm passionate about doing whatever we can to establish solid evidence of the effects of smartphones/social media use on mental health, academic achievement, and relationships. I'm passionate about improving our measures. I don't want to set the bar for evidence of pernicious effects unrealistically high. I just want to see solid evidence before I believe the worriers.

At the same time, I think it's worthwhile for those who worry about the effects of smartphones/social media to consider alternative explanations for what they see. If there turns out not to be evidence of smartphones and social media's pernicious effects, why might it seem as though they have those effects?

I think part of it has to do with the people that worriers observe. Often, they observe young people: children and adolescents. The adults they observe, including themselves, often have the types of jobs (academic jobs; journalism; social media managers) that allow them flexibility in how they spend their time. This is what children and journalists have in common with one another: they have flexible schedules. There are no clear boundaries between work, play, and social time. Under these circumstances, social media, it would seem, can be pernicious. It can be habit-forming to an extent that feels addictive, crowd out deep thought with shallow distractions, put you on a hedonic treadmill that feels impossible to get off.

But these are not the circumstances of the average person.

Most people, in the United States and in the rest of the world, have more structured lives. They spend large periods of the day working or care-taking family members. Do they use smartphones/social media while they work and care-take? I'm sure some of them do. In the absence of smartphones/social media, would some of those folks have engaged in other non-work/non-care-taking activities such as socializing with co-workers face-to-face or watching television? Probably. In any case, it's clear that research on leisure use of smartphones and social media at work ('cyber-loafing' is the humorously antiquated name for this) and use while care-taking is important, but also clear that we should not assume that social media plays the same role in the lives of individuals with more structured lives as it does in the lives of those with less structure.

What I suspect is that many people, perhaps most people, spend most of their days engaged in work that is not especially stimulating or fulfilling, but that pays the bills. At the end of the day, they want to relax. Ten or twenty years ago, they might sit down in front of the television and unwind. Today, they might break out their smartphones (perhaps while also watching TV) and sift through social media posts, exchange messages with friends, and catch up on the day's events. This kind of use would serve roughly the same function as TV use ten or twenty years ago - relaxing after a hard day's work - while helping them to feel connected to far-flung friends and family (an especially valuable perk to those who work at home and may feel isolated).

Does their smartphone/social media use come with some bad side effects, such as exposure to an aggressively polarized online culture war, or exposure to plenty of misleading or untrue information? Sure, but when was it otherwise? Didn't exposure to TV come with plenty of bad side effects, such as exposure to a skewed version of reality, one that often trafficked in violence and ugly stereotypes? I don't mean to rely on the TV comparison as a way of excusing the many problems with online life. But I think it's worthwhile to bring up so as to correct the belief that our leisure time, in the past, tended to be especially productive and pro-social.

Perhaps the lives of certain people (those with flexible schedules) have changed a great deal due to the popularization of smartphones and social media. Perhaps they are suffering and they're angry at the creators and supporters of smartphones and social media, and in their anger, overreach in their claims about the effects of smartphones and social media. Perhaps not! But I hope to make the case that to believe that smartphones and social media are not problematic for the majority of their users is not something that only industry shills do. There is a sensible reason, if an untested one, to believe this.

Sunday, December 09, 2018

White Flight Part 2: When the upper class disconnects

It's been a while since I've published anything in this blog. Mostly, it's been the incentive to publish in peer-reviewed journals in order to attain tenure that's to blame for the lack of blog productivity. I continue to have stray, undeveloped thoughts about media uses and effects, and there are a few drafts of posts waiting to be finished, but in the meantime, here's one that's been percolating of late.

There are reports from the past year that many highly-educated, upper-class or upper-middle-class parents are raising their kids with minimal or no screen time, primarily out of fear of its addictive qualities. It seems to start with lay theories of people who work in the tech industry and/or people who live in Silicon Valley - people with a very idiosyncratic perspective on media technologies. One could look at this particular group of parents as experts, given their unusual access to the ways in which these technologies are developed, marketed, and used. But it's also possible that their experience skews their perception of the extent to which these technologies actually are addictive or otherwise pernicious.

One possibility is that the parents are right, that the technologies are pernicious, at least when used in what they deem to be excess. Another possibility is that they're mistaken, that the technologies are merely a new form of communication, like books or telephones, not bad in and of themselves. For now, I want to set aside that issue and focus on the repercussions of a certain type of young person - an upper-class young person - dropping out of the social media universe (or never participating in it in the first place).

There might be a new kind of digital divide, one in which upper-class young people are not participating in or contributing to online social spaces. Those young people will, of course, communicate with one another, through face-to-face social networks if not through technologies that upper-class parents look at with less fear (the good ol' fashioned phone; FaceTime/Skype; maybe even texting). They'll use the internet, of course, but primarily for work, or the consumption of curated content with high production values.

Meanwhile, the hurly-burly social media universe - the YouTubers, the memes, the bullying and the overnight fame, the narcissism, confessions, and anonymous social support, all overcrowded with ads - will continue to exist. If the hostility and hate speech get worse and worse, and if other people become helplessly addicted to its pleasures? Well, that's their problem. It's hard not to think of the 'white flight' of privileged families from the sites of fear and anger of yesteryear - urban centers. Privileged young people's image of the unruly social media universe will be akin to the caricature of urban life that children of the 80's grew up with: they will see the most sensational worst-of-the-worst stories, and have no personal experience with it to temper these simplistic, negative depictions. When they get to college, whether or not they grew up on the internet could be as important as whether they grew up in a one-stoplight Midwestern hamlet or Brooklyn. The social distance between a lower-middle class child who spent hours on social media from age 9 and an upper class child who read books and played Dungeons and Dragons at his friend's house, even if those two kids grew up across the street from one another, might be immense.

Among the many fears that social media evoke is the fear of the filter bubble: that subtle social media algorithms and quirks of human behavior will work to balkanize societies. Ten years after the popularization of social media, evidence seems to suggest that the opposite has happened, that we vastly overestimated the power of those algorithms, underestimated the extent to which offline social networks of old were already balkanized, and underestimated the serendipity and unpredictability of evolving online social networks. If balkanization occurred, and if it is occurring again, it may be between those who were/are socializing online and those who were not/are not.


Tuesday, October 03, 2017

If you can't stay here, where do you go? The sustainability of refuges for digital exiles

This semester, our research team has waded into some of the murkier waters of the internet in search of the conditions under which online hostility flourishes. We're still developing our tools and getting a sense of the work that is being done in this area.

Among the most pertinent and recent is a study by Eshwar Chandrasekharan and colleagues about the effects of banning toxic, hate-filled subreddits. I've always been curious as to whether banning (i.e., eliminating) entire communities (in this case, subreddits on Reddit) had the intended effect of curbing hate speech, or whether users merely expressed their hostility in another community. The study suggests that banning communities is an effective way to curb hate speech on Reddit: 'migrants' or 'exiles' of the banned communities either stopped posting on Reddit altogether, or posted in other subreddits but not in a hateful manner. The authors are quick to point out that these exiles might have just taken their hostility to another website. Given that Reddit users cannot be tracked beyond Reddit, it's hard to determine whether that happened, but there is some evidence to suggest that websites like voat.co acted as a kind of refuge or safe harbor for Reddit's exiles: many of the same usernames that were used in Reddit's banned communities surfaced on voat.co. To quote the authors of the study, banning a community might just have made hate speech "someone else's problem."
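
The username evidence deserves a closer look, because the underlying method is so simple: in spirit, it's a set intersection. A toy version with invented names (the actual study compared large corpora of accounts):

```python
# A toy version of the username-overlap evidence described above.
# All usernames here are invented for illustration.
banned_subreddit_users = {"user_a", "user_b", "user_c", "user_d"}
voat_users = {"user_b", "user_d", "user_e"}

overlap = banned_subreddit_users & voat_users
print(f"{len(overlap)} of {len(banned_subreddit_users)} banned-community "
      f"names reappeared elsewhere: {sorted(overlap)}")
# -> 2 of 4 banned-community names reappeared elsewhere: ['user_b', 'user_d']
```

Shared usernames are only suggestive, of course; the same handle can belong to different people on different sites.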

I'm intrigued by this possibility. It fits with a hunch of mine, what you might call the homeostatic hatred hypothesis, or the law of conservation of hatred: there is a stable amount of hatred in the world. It cannot be created or destroyed, but merely transformed, relocated, or redirected.

Refuges like Voat.co are like cesspools or septic tanks: they isolate elements that are considered toxic by most members of the general community. In the context of waste disposal, cesspools and septic tanks are great, but I wonder if the same is true in social contexts. On the one hand, they might prevent contagion: fewer non-hateful people are exposed to hateful ideas and behavior and thus are less likely to become hateful. On the other hand, by creating highly concentrated hateful communities, you may reduce the possibility that hateful folks would be kept in check by anyone else. You're creating a self-reinforcing echo chamber, a community that supports its members' hateful ideologies, behavior, and speech.

Whether these online refuges are good or bad may be moot if they are not sustainable. In searching for more information about Voat, I was surprised to find that Voat isn't doing so well. Reports of its demise seem to be premature (it is up and running as of this moment), but it seems clear that it faces challenges. The foremost of these is revenue.

I get the sense that people often underestimate how much time and money is involved in creating and hosting a large (or even moderately sized) online community, or community-of-communities. Someone needs to pay for the labor and server space. Advertisers and funders, in general, don't seem to be wild about being associated with these types of online communities. If there were a greater number of people who were willing to inhabit these refuges, people who had a lot of money and could buy a lot of things, then it might be worth it to advertise there and to host these communities. If the users had a lot of disposable income, they could use a crowdfunded model. But it doesn't seem to be the case that there are enough users with enough money to keep a large community running for very long.

Such sites could end up as bare-bones communities with fewer bells and whistles that are easier and cheaper to maintain, but they seem to encounter other problems. I get the sense that people also underestimate the difficulty of creating a community that produces frequently updated, novel, interesting content. Content quickly becomes repetitive, or boring, or filled with spam, or subject to malicious attacks. This is a real problem when the value of the site is content that is generated by users: bored users leave, creating a smaller pool of potential content suppliers. The smaller the conversation gets, the less alluring it is. These refuges will continue to be bare-bones while other online communities, video games, TV shows, VR experiences, and other ways to spend your free time add more and more bells and whistles. Why bother spending time in a small, repetitive conversation when there are more alluring ways to spend your free time?

Of course, defining 'hostility' and 'hate speech' is tricky, and the obvious objection to studies like this is that 'hate speech' is being defined in the wrong way. You get criticism from both sides: either you're defining it too narrowly and not including robust, sustainable communities like the commenters on far right-wing or left-wing blogs, or you're defining it too broadly, categorizing legitimate criticism of others as hateful and hostile. It's clear to me that you can't please everyone when you're doing research like this. In fact, it's pretty clear that you can please very, very few people. I suppose my interests have less to do with whether we classify this or that speech as 'hateful' or 'hostile,' and more to do with user migratory patterns, in particular those of users expressing widely unpopular beliefs (or expressing beliefs in a widely unacceptable way). It seems that people have their minds made up when it comes to the question of whether techniques such as banning communities are restricting speech or making the internet/society a safer, more tolerant space. But both sides are assuming that the technique actually works.

While some would lament the existence of refuges and others are likely willing to sacrifice a great deal to see that they persist, it's worth asking 'what forces constrain them? Why aren't they bigger? How long can they persist?'

Friday, June 30, 2017

Anonymity: Expectation or Right?

Somewhat recently, a public official was linked to remarks he allegedly posted online while using a pseudonym. The official had done nothing illegal, but his reputation suffered greatly after being linked to the remarks. That got me thinking about people's expectations of being able to express themselves anonymously online.

Let's assume, for the moment, that the official in question really did post remarks that, once linked to him, resulted in public disgrace. Anyone posting online using a pseudonym or posting anonymously likely has some expectation that his or her remarks won't be linked to his/her "real world," offline identity. At the very least, having remarks you made anonymously or pseudonymously linked back to you is a violation of your expectations. I'd expect it to feel as though your privacy had been violated; anonymity gives you a kind of privacy. In fact, that's how I originally processed the story of the official: as a case in which an individual's privacy was violated. People generally regard privacy (however fuzzily defined) as a right (though people also have a way of justifying such violations if they feel that the uncovered sin is great enough).

On further reflection, I'm not so sure linking someone to comments they made anonymously is analogous to other violations of privacy (e.g., someone installing a camera in your bathroom). Perhaps we've come to conflate anonymity with privacy. When I say things to a friend in a private setting, I expect those things not to be recorded and played back in some other context. This kind of privacy of self-expression in a particular limited context (i.e., secrets) has been a part of many societies for a long time (though I'd stop short of calling it natural and/or a basic human right). But the ability to express oneself to a large number of people anonymously hasn't been around for more than a decade or so. Of course, there have been anonymous sources for a long time, and the protection of witnesses through the assignment of new identities has been a common protocol for a long time. But in terms of the frequency and ease with which the average person can express themselves anonymously on an everyday basis, I think it's a relatively new phenomenon. Additionally, things said in private and things said anonymously differ radically in terms of their impact. Whispering secrets among a small group of friends likely has one impact on the attitudes and beliefs of others, while writing something anonymously online likely has another (typically larger) impact.

I can understand a society that wants to enshrine the first kind of privacy (whispering in private, off the record) as a basic right, but to lump anonymous self-expression (a relatively recent widespread phenomenon) in with this strikes me as rash. Certainly, many of us have come to take for granted the ability to say things anonymously that will not be associated with our "real world" identities, and it feels bad when our expectations are violated, but that doesn't make it a right.

When considering whether or not something should be treated as a right, we tend to look backward, for precedent. I wonder about the limits of this approach. It demands that we make forced analogies that don't really fit. We select the analogy to the past that suits us ("posting anonymously is like publishing controversial political tracts under an assumed name," or, if you're on the other side, "posting anonymously is like the hoods that members of the Ku Klux Klan wore"). Instead, it seems to me to be worthwhile to consider the aggregate effects on society, now and for the foreseeable future, of enshrining something as a right. Would a world in which we had to live in fear of being associated with everything we say and do anonymously online be a better or worse world?

Reasons why anonymity is good: it makes it easier for folks who are seeking help for a stigmatized condition to receive help. It facilitates "whistle-blowing" and ensures confidentiality of sources, making it easier to hold powerful institutions accountable. Anonymity is also a kind of bulwark against surveillance and the permanence of online memory and the ease with which messages are taken out of context, widely disseminated, framed in misleading ways, and used against the speaker. This last one seems like a biggie. The tactic of using one's past words against one's future self was once a technique used by the press on politicians, but now it seems to be used by anyone on anyone. And so we cling to anonymous self-expression as a way to retain some freedom of speech.

Reasons why anonymity is bad: it permits hostility without consequences, on a massive scale and, thus, normalizes hostile thinking and behavior. Hostile people aren't as isolated as they were before; they can easily find one another and, together, justify their hostility as a defense of their rights, freedom, or as an act of justice.

So, if we lose trust in the ability of any communication tool to provide us with true anonymity (as would likely happen if a few more high-profile un-maskings were to occur), we're probably going to lose some good things and some bad things. Any attempt to determine whether anonymity should be defended as a right should consider the weight of those things. I think that gets lost in debates about the merits of, well, a lot of things these days. It isn't enough to link a particular course of action to bad consequences. You must consider all of the consequences as well as all of the consequences of the other plausible courses of action, to the extent that such things are possible, before arriving at a decision.

It could be that younger people who've grown up with the ability to express themselves anonymously may simply dislike the prospect of losing this ability so much that it may not matter whether we officially enshrine anonymous speech as part and parcel of the right to privacy. The demand for it might be so high that, economically and politically (rather than ethically), it will be treated as a necessity. Conversely, the decay of true anonymity (and the fear of being "outed") may be an inevitable consequence of a highly networked world in which sufficiently motivated people can unmask whomever they want, regardless of how badly the majority of folks wish that anonymity were a protected right.