Tuesday, October 03, 2017

If you can't stay here, where do you go? The sustainability of refuges for digital exiles

This semester, our research team has waded into some of the murkier waters of the internet in search of the conditions under which online hostility flourishes. We're still developing our tools and getting a sense of the work that is being done in this area.

Among the most pertinent recent studies is one by Eshwar Chandrasekharan and colleagues about the effects of banning toxic, hate-filled subreddits. I've always been curious as to whether banning (i.e., eliminating) entire communities (in this case, subreddits on Reddit) had the intended effect of curbing hate speech, or whether users merely expressed their hostility in another community. The study suggests that banning communities is an effective way to curb hate speech on Reddit: 'migrants' or 'exiles' from the banned communities either stopped posting on Reddit altogether, or posted in other subreddits but not in a hateful manner. The authors are quick to point out that these exiles might simply have taken their hostility to another website. Since Reddit users can't be tracked beyond Reddit, it's hard to determine whether that happened, but there is some evidence that websites like voat.co acted as a kind of refuge or safe harbor for Reddit's exiles: many of the same usernames that appeared in Reddit's banned communities resurfaced on voat.co. To quote the authors of the study, banning a community might just have made hate speech "someone else's problem."
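As a rough sketch of the kind of cross-platform check the authors describe, here's how one might estimate username overlap between a banned subreddit and Voat. The file names and data are hypothetical, and a shared username is only suggestive, not proof that it belongs to the same person:

```python
# Hypothetical sketch: estimate how many usernames from a banned
# subreddit also appear on Voat. File names and data are illustrative.

def load_usernames(path):
    """Read one username per line, lowercased for comparison."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

reddit_users = load_usernames("banned_subreddit_users.txt")
voat_users = load_usernames("voat_users.txt")

overlap = reddit_users & voat_users
print(f"{len(overlap)} of {len(reddit_users)} banned-subreddit usernames "
      f"also appear on Voat ({len(overlap) / len(reddit_users):.1%})")
```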

I'm intrigued by this possibility. It fits with a hunch of mine, what you might call the homeostatic hatred hypothesis, or the law of conservation of hatred: there is a stable amount of hatred in the world. It cannot be created or destroyed, but merely transformed, relocated, or redirected.
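If you wanted to dress the hunch up in notation (tongue firmly in cheek), it would look like a conservation law. Let H_i(t) be the amount of hostility in venue i at time t; the hypothesis says the total is fixed, so a ban only redistributes it:

```latex
\sum_i H_i(t) = C \quad \text{for all } t,
\qquad
\text{banning venue } j:\; H_j \to 0,\;
\sum_{i \neq j} H_i \to C.
```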

Refuges like Voat are like cesspools or septic tanks: they isolate elements that most members of the general community consider toxic. In the context of waste disposal, cesspools and septic tanks are great, but I wonder whether the same is true in social contexts. On the one hand, they might prevent contagion: fewer non-hateful people are exposed to hateful ideas and behavior, and thus fewer are likely to become hateful. On the other hand, by creating highly concentrated hateful communities, you may reduce the chance that hateful folks are kept in check by anyone else. You're creating a self-reinforcing echo chamber, a community that supports its members' hateful ideologies, behavior, and speech.

Whether these online refuges are good or bad may be moot if they are not sustainable. In searching for more information about Voat, I was surprised to find that it isn't doing so well. Reports of its demise seem to be premature (it is up and running as of this moment), but it clearly faces challenges, the foremost of which is revenue.

I get the sense that people often underestimate how much time and money it takes to create and host a large (or even moderately sized) online community, or community of communities. Someone needs to pay for the labor and the server space. Advertisers and funders, in general, don't seem wild about being associated with these types of communities. If more people were willing to inhabit these refuges, people with plenty of money to spend, it might be worth advertising there and hosting these communities. If the users had a lot of disposable income, the site could run on a crowdfunding model. But there don't seem to be enough users with enough money to keep a large community running for very long.

Such sites could end up as bare-bones communities with fewer bells and whistles, easier and cheaper to maintain, but they seem to run into other problems. I get the sense that people also underestimate the difficulty of creating a community that produces frequently updated, novel, interesting content. Content quickly becomes repetitive, or boring, or filled with spam, or subject to malicious attacks. This is a real problem when the value of the site is user-generated content: bored users leave, shrinking the pool of potential content suppliers, and the smaller the conversation gets, the less alluring it is. These refuges will stay bare-bones while other online communities, video games, TV shows, VR experiences, and other pastimes add more and more bells and whistles. Why spend time in a small, repetitive conversation when there are more alluring alternatives?
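To make that feedback loop concrete, here's a toy simulation (with entirely invented numbers, not fitted to any real community): each period a fraction of users post, and churn rises as the volume of fresh content falls below what keeps people engaged.

```python
# Toy model of the churn/content feedback loop described above.
# Every parameter here is invented purely for illustration.

users = 10_000          # starting community size
posts_per_user = 0.2    # fraction of users who post each period
base_leave = 0.02       # churn when content feels plentiful
content_needed = 2_000  # posts per period that keep users satisfied

for period in range(1, 13):
    posts = users * posts_per_user
    # Churn grows as content falls short of what users expect.
    shortfall = max(0.0, 1.0 - posts / content_needed)
    leave_rate = base_leave + 0.15 * shortfall
    users = int(users * (1.0 - leave_rate))
    print(f"period {period:2d}: {posts:5.0f} posts, "
          f"{leave_rate:5.1%} churn, {users:5d} users left")
```

Even in this crude model the decline accelerates: less content drives more churn, which means less content the next period.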

Of course, defining 'hostility' and 'hate speech' is tricky, and the obvious objection to studies like this is that 'hate speech' is being defined in the wrong way. You get criticism from both sides: either you're defining it too narrowly, excluding robust, sustainable communities like the commenters on far-right-wing or left-wing blogs, or you're defining it too broadly, categorizing legitimate criticism of others as hateful and hostile. It's clear to me that you can't please everyone when you're doing research like this. In fact, it's pretty clear that you can please very, very few people. My interests, though, have less to do with whether we classify this or that speech as 'hateful' or 'hostile,' and more to do with user migratory patterns, particularly those of users expressing widely unpopular beliefs (or expressing beliefs in a widely unacceptable way). People seem to have their minds made up on the question of whether techniques such as banning communities restrict speech or make the internet (and society) a safer, more tolerant place. But both sides are assuming that the technique actually works.

While some lament the existence of refuges and others would likely sacrifice a great deal to see them persist, it's worth asking: what forces constrain them? Why aren't they bigger? How long can they persist?