Sunday, December 09, 2018

White Flight Part 2: When the upper class disconnects

It's been a while since I've published anything in this blog. Mostly, the incentive to publish in peer-reviewed journals in order to attain tenure is to blame for the lack of blog productivity. I continue to have stray, undeveloped thoughts about media uses and effects, and there are a few blog drafts waiting to be finished, but in the meantime, here's one that's been percolating of late.

There are reports from the past year that many highly educated, upper-class or upper-middle-class parents are raising their kids with minimal or no screen time, primarily out of fear of its addictive qualities. The trend seems to start with the lay theories of people who work in the tech industry and/or live in Silicon Valley - people with a very idiosyncratic perspective on media technologies. One could look at this particular group of parents as experts, given their unusual access to the ways in which these technologies are developed, marketed, and used. But it's also possible that their experience skews their perception of the extent to which these technologies actually are addictive or otherwise pernicious.

One possibility is that the parents are right, that the technologies are pernicious, at least when used in what they deem to be excess. Another possibility is that they're mistaken, that the technologies are merely a new form of communication, like books or telephones, not bad in and of themselves. For now, I want to set aside that issue and focus on the repercussions of a certain type of young person - an upper-class young person - dropping out of the social media universe (or never participating in it in the first place).

There might be a new kind of digital divide, one in which upper-class young people are not participating in or contributing to online social spaces. Those young people will, of course, communicate with one another, through face-to-face social networks if not through technologies that upper-class parents look at with less fear (the good ol' fashioned phone; FaceTime/Skype; maybe even texting). They'll use the internet, of course, but primarily for work, or for the consumption of curated content with high production values.

Meanwhile, the hurly-burly social media universe - the YouTubers, the memes, the bullying and the overnight fame, the narcissism, confessions, and anonymous social support, all overcrowded with ads - will continue to exist. If the hostility and hate speech get worse and worse, and if other people become helplessly addicted to its pleasures? Well, that's their problem. It's hard not to think of the 'white flight' of privileged families from the sites of fear and anger of yesteryear - urban centers. Privileged young people's image of the unruly social media universe will be akin to the caricature of urban life that children of the '80s grew up with: they will see the most sensational worst-of-the-worst stories and have no personal experience with it to temper these simplistic, negative depictions. When they get to college, whether or not they grew up on the internet could be as important as whether they grew up in a one-stoplight Midwestern hamlet or in Brooklyn. The social distance between a lower-middle-class child who spent hours on social media from age 9 and an upper-class child who read books and played Dungeons and Dragons at his friend's house, even if those two kids grew up across the street from one another, might be immense.

Among the many fears that social media evoke is the fear of the filter bubble: that subtle social media algorithms and quirks of human behavior will work to balkanize societies. Ten years after the popularization of social media, evidence seems to suggest that the opposite has happened, that we vastly overestimated the power of those algorithms, underestimated the extent to which offline social networks of old were already balkanized, and underestimated the serendipity and unpredictability of evolving online social networks. If balkanization occurred, and if it is occurring again, it may be between those who were/are socializing online and those who were not/are not.


Tuesday, October 03, 2017

If you can't stay here, where do you go? The sustainability of refuges for digital exiles

This semester, our research team has waded into some of the murkier waters of the internet in search of the conditions under which online hostility flourishes. We're still developing our tools and getting a sense of the work that is being done in this area.

Among the most pertinent recent studies is one by Eshwar Chandrasekharan and colleagues on the effects of banning toxic, hate-filled subreddits. I've always been curious as to whether banning (i.e., eliminating) entire communities (in this case, subreddits on Reddit) had the intended effect of curbing hate speech, or whether users merely expressed their hostility in another community. The study suggests that banning communities is an effective way to curb hate speech on Reddit: 'migrants' or 'exiles' of the banned communities either stopped posting on Reddit altogether, or posted in other subreddits but not in a hateful manner. The authors are quick to point out that these exiles might have just taken their hostility to another website. Given that Reddit users cannot be tracked beyond Reddit, it's hard to determine whether that happened, but there is some evidence to suggest that websites like voat.co acted as a kind of refuge or safe harbor for Reddit's exiles: many of the same usernames that were used in Reddit's banned communities surfaced on voat.co. To quote the authors of the study, banning a community might just have made hate speech "someone else's problem."
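For readers curious what that kind of username-overlap evidence looks like in practice, here's a minimal sketch in Python. The file names, the normalization, and the Jaccard measure are my own illustrative choices, not the study's actual methods:

```python
# A minimal sketch of a username-overlap check between two communities.
# Assumes one username per line in each file; the file names and the
# lowercasing rule are illustrative assumptions, not the study's methods.

def load_usernames(path):
    """Read usernames, one per line, lowercased for rough matching."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

reddit_users = load_usernames("banned_subreddit_users.txt")
voat_users = load_usernames("voat_users.txt")

shared = reddit_users & voat_users
jaccard = len(shared) / len(reddit_users | voat_users)

print(f"{len(shared)} usernames appear in both communities")
print(f"Jaccard overlap: {jaccard:.3f}")
```

Of course, identical usernames on two sites don't guarantee identical people, which is one more reason to treat cross-platform migration evidence cautiously.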

I'm intrigued by this possibility. It fits with a hunch of mine, what you might call the homeostatic hatred hypothesis, or the law of conservation of hatred: there is a stable amount of hatred in the world. It cannot be created or destroyed, merely transformed, relocated, or redirected.

Refuges like Voat.co are like cesspools or septic tanks: they isolate elements that are considered toxic by most members of the general community. In the context of waste disposal, cesspools and septic tanks are great, but I wonder if the same is true in social contexts. On the one hand, they might prevent contagion: fewer non-hateful people are exposed to hateful ideas and behavior and thus are less likely to become hateful. On the other hand, by creating highly concentrated hateful communities, you may reduce the possibility that hateful folks will be kept in check by anyone else. You're creating a self-reinforcing echo chamber, a community that supports its members' hateful ideologies, behavior, and speech.

Whether these online refuges are good or bad may be moot if they are not sustainable. In searching for more information about Voat, I was surprised to find that Voat isn't doing so well. Reports of its demise seem to be premature (it is up and running as of this moment), but it seems clear that it faces challenges, foremost among them revenue.

I get the sense that people often underestimate how much time and money are involved in creating and hosting a large (or even moderately sized) online community, or community-of-communities. Someone needs to pay for the labor and server space. Advertisers and funders, in general, don't seem to be wild about being associated with these types of online communities. If there were more people willing to inhabit these refuges, people with money to spend, then it might be worth it to advertise there and to host these communities; if the users had a lot of disposable income, a crowdfunded model could work. But it doesn't seem to be the case that there are enough users with enough money to keep a large community running for very long.

Such sites could end up as bare-bones communities with fewer bells and whistles that are easier and cheaper to maintain, but they seem to encounter other problems. I get the sense that people also underestimate the difficulty of creating a community that produces frequently updated, novel, interesting content. Content quickly becomes repetitive, or boring, or filled with spam, or subject to malicious attacks. This is a real problem when the value of the site is content that is generated by users: bored users leave, creating a smaller pool of potential content suppliers. The smaller the conversation gets, the less alluring it is. These refuges will continue to be bare-bones while other online communities, video games, TV shows, VR experiences, and other ways to spend your free time add more and more bells and whistles. Why bother spending time in a small, repetitive conversation when there are more alluring ways to spend your free time?

Of course, defining 'hostility' and 'hate speech' is tricky, and the obvious objection to studies like this is that 'hate speech' is being defined in the wrong way. You get criticism from both sides: either you're defining it too narrowly, excluding robust, sustainable communities like the commenters on far-right or far-left blogs, or you're defining it too broadly, categorizing legitimate criticism of others as hateful and hostile. It's clear to me that you can't please everyone when you're doing research like this. In fact, it's pretty clear that you can please very, very few people. I suppose my interests have less to do with whether we classify this or that speech as 'hateful' or 'hostile,' and more to do with user migratory patterns, in particular those of users expressing widely unpopular beliefs (or expressing beliefs in a widely unacceptable way). It seems that people have their minds made up when it comes to the question of whether techniques such as banning communities are restricting speech or making the internet/society a safer, more tolerant space. But both sides are assuming that the technique actually works.

While some would lament the existence of refuges and others are likely willing to sacrifice a great deal to see that they persist, it's worth asking: what forces constrain them? Why aren't they bigger? How long can they persist?

Friday, June 30, 2017

Anonymity: Expectation or Right?

Somewhat recently, a public official was linked to remarks he allegedly posted online while using a pseudonym. The official had done nothing illegal, but his reputation suffered greatly after being linked to the remarks. That got me thinking about people's expectations of being able to express themselves anonymously online.

Let's assume, for the moment, that the official in question really did post remarks that, once linked to him, resulted in public disgrace. Anyone posting online using a pseudonym or posting anonymously likely has some expectation that his or her remarks won't be linked to his/her "real world," offline identity. At the very least, having remarks you made anonymously or pseudonymously linked back to you is a violation of your expectations. I'd expect it to feel as though your privacy had been violated; anonymity gives you a kind of privacy. In fact, that's how I originally processed the story of the official: as a case in which an individual's privacy was violated. People generally regard privacy (however fuzzily defined) as a right (though people also have a way of justifying such violations if they feel that the uncovered sin is great enough).

On further reflection, I'm not so sure linking someone to comments they made anonymously is analogous to other violations of privacy (e.g., someone installing a camera in your bathroom). Perhaps we've come to conflate anonymity with privacy. When I say things to a friend in a private setting, I expect those things not to be recorded and played back in some other context. This kind of privacy of self-expression in a particular limited context (i.e., secrets) has been a part of many societies for a long time (though I'd stop short of calling it natural and/or a basic human right). But the ability to express oneself to a large number of people anonymously hasn't been around for more than a decade or so. Of course, there have been anonymous sources for a long time, and the protection of witnesses through the assignment of new identities has been a common protocol for a long time. But in terms of the frequency and ease with which the average person can express themselves anonymously on an everyday basis, I think it's a relatively new phenomenon. Additionally, things said in private and things said anonymously differ radically in terms of their impact. Whispering secrets among a small group of friends likely has one impact on the attitudes and beliefs of others, while writing something anonymously online likely has another (typically larger) impact.

I can understand a society that wants to enshrine the first kind of privacy (whispering in private, off the record) as a basic right, but to lump anonymous self-expression (a relatively recent widespread phenomenon) in with this strikes me as rash. Certainly, many of us have come to take for granted the ability to say things anonymously that will not be associated with our "real world" identities, and it feels bad when our expectations are violated, but that doesn't make it a right.

When considering whether or not something should be treated as a right, we tend to look backward, for precedent. I wonder about the limits of this approach. It demands that we make forced analogies that don't really fit. We select the analogy to the past that suits us ("posting anonymously is like publishing controversial political tracts under an assumed name," or, if you're on the other side, "posting anonymously is like the hoods that members of the Ku Klux Klan wore"). Instead, it seems to me to be worthwhile to consider the aggregate effects on society, now and for the foreseeable future, of enshrining something as a right. Would a world in which we had to live in fear of being associated with everything we say and do anonymously online be a better or worse world?

Reasons why anonymity is good: it makes it easier for folks who are seeking help for a stigmatized condition to receive help. It facilitates "whistle-blowing" and ensures confidentiality of sources, making it easier to hold powerful institutions accountable. Anonymity is also a kind of bulwark against surveillance and the permanence of online memory and the ease with which messages are taken out of context, widely disseminated, framed in misleading ways, and used against the speaker. This last one seems like a biggie. The tactic of using one's past words against one's future self was once a technique used by the press on politicians, but now it seems to be used by anyone on anyone. And so we cling to anonymous self-expression as a way to retain some freedom of speech.

Reasons why anonymity is bad: it permits hostility without consequences, on a massive scale and, thus, normalizes hostile thinking and behavior. Hostile people aren't as isolated as they were before; they can easily find one another and, together, justify their hostility as a defense of their rights, freedom, or as an act of justice.

So, if we lose trust in the ability of any communication tool to provide us with true anonymity (as would likely happen if a few more high-profile un-maskings were to occur), we're probably going to lose some good things and some bad things. Any attempt to determine whether anonymity should be defended as a right should consider the weight of those things. I think that gets lost in debates about the merits of, well, a lot of things these days. It isn't enough to link a particular course of action to bad consequences. You must consider all of the consequences as well as all of the consequences of the other plausible courses of action, to the extent that such things are possible, before arriving at a decision.

It could be that younger people who've grown up with the ability to express themselves anonymously will simply dislike the prospect of losing this ability so much that it may not matter whether we officially enshrine anonymous speech as part and parcel of the right to privacy. The demand for it might be so high that economically and politically (rather than ethically), it will be treated as a necessity. Conversely, the decay of true anonymity (and the fear of being "outed") may be an inevitable consequence of a highly networked world in which sufficiently motivated people can unmask whomever they want, regardless of how badly the majority of folks wish that anonymity were a protected right.


Tuesday, March 28, 2017

Recording each other for justice

Each Saturday, a recurring conflict takes place outside of an abortion clinic, a conflict that sheds light on how media technologies raise questions about the ethics of surveillance and privacy. It's a good example of how these issues are not just about how governments and corporations monitor citizens and consumers, but how these questions arise from interactions among citizens. My understanding of precisely what occurs around the abortion clinic is based on anecdotal evidence, so take everything in this entry with a grain of salt. This is one of those phenomena that I'd love to devote more time to understanding, if only I had it. There is a compelling ethnography waiting to be written about it, one that would be particularly relevant to communication law scholars.

The space outside the clinic is occupied by two groups: protesters seeking to persuade individuals entering the clinic not to get abortions, and a group of people (hereafter referred to as "defenders") seeking to protect or buffer individuals entering the clinic from harassment from the protesters. This particular arrangement of individuals could have occurred long before the advent of digital networked technologies. What I'm interested in is what happens when you add those tools to the mix.

Protesters are often seen using their phones to take pictures of defenders and the license plates of defenders' cars. Protesters are, by law (at least as far as I know), permitted to do this. Both the protesters and defenders occupy a public space, or at least a space that is not "private" in the sense that one's home is private. Protesters are not, as I understand it, allowed to take pictures of individuals entering the clinic, as this would violate their rights to privacy as patients of the clinic. Even though they are not yet inside the clinic, if they are on the clinic's property and they are patients, their rights to privacy extend to the area around the clinic. If defenders believe that protesters are engaging in unlawful picture-taking, the defenders will use their phones to video record the protesters taking pictures.

Predictably, tempers occasionally flare, voices are raised, people get in each others' faces, and when behavior that approaches the legal definition of harassment or assault occurs, everyone starts recording everything with their phones. The image of extremely worked-up people wielding cameras as one would wield a weapon, recording each other recording each other in a kind of infinite regress of surveillance, strikes me as ludicrous, partly because the act of recording someone with your phone makes you appear passive, somewhat nerdy, and almost...dainty.

What is the point of all this recording? In most cases, the intention seems to be to catch others in an act of law-breaking, to create a record of evidence to turn over to the police that could be admissible in a court of law. But in other cases (e.g., the protester's pictures of defenders' license plates), the intent seems to have little to do with the actual law. The police would have little interest in the license plate numbers of law-abiding citizens. So, why are they doing this, and what happens to these pictures?

Enter social networking sites (SNS). The pictures, as I understand it, are subsequently uploaded to an SNS group page that contains a collection of pictures of defenders and their license plates. It is possible for SNS users to make such groups "secret" and/or invitation-only, so that the groups cannot be found by those in the pictures. My understanding is that this leads those who are in the pictures to disguise their identities online so as to infiltrate the secret groups, acting as moles.

But what is the point of developing these online inventories of people who are defenders or protesters or who, for that matter, publicly state any particular belief? Is all of this just an intimidation technique? And if so, is it effective? Is there a kind of panoptic logic at work here, in which the fear comes from not knowing precisely who will see those pictures and in what context they will be seen (e.g., by a would-be employer 20 years from now)? Are they using the pictures as part of a concrete plan to take action against the individuals in the pictures, or is it not that well thought out? Do people taking pictures and amassing inventories like this do so because they imagine that someday the law will change, or collective sentiment will change, at which point it will become damning evidence that one was affiliated with a group that is by then seen as abhorrent? Is it akin to a German taking a picture of a Nazi sympathizer in 1939, banking on the fact that while being a Nazi at that time was socially acceptable, it would not be so for long, and that when it became unacceptable, the picture could be used to discredit or blackmail the person?

I don't think this phenomenon is relegated to protesters or defenders of abortion. I often think of it in a much more benign context: traffic violations. Let's say I were to record individuals making illegal U-turns (or not using their fricking blinkers). The police may not be interested in my small-stakes vigilantism, but what if I were to start amassing an online inventory of lousy drivers caught in the act, complete with their license plates and pictures of their faces? The judgment here isn't taking place in a court of law. It's a kind of public shaming via peer-to-peer surveillance.

Aside from questions of motivation, the phenomenon raises questions of legality and ethics. It's my understanding that it is okay to take pictures of other individuals in public places, and/or their cars and license plates, and post those pictures online as long as the individuals and cars were in public. Perhaps at some point, were someone to take thousands of pictures of someone leaving their office every day, it becomes an illegal activity (stalking), but I imagine the line to be blurry (are two pictures okay? Is it only not okay once you've been told to stop?). But what if you take only a single picture of an individual in public, and that individual isn't breaking the law, but you put that picture together with pictures of many other people engaged in a similar activity in an effort to publicly shame or intimidate them? That seems illegal, but how do you prove what the effort or intention really was? Would it qualify as defamation?

Maybe it hinges on whether or not someone is a public figure. If a protester was quoted in a newspaper or their picture was on the front page of the New York Times, then this seems like fair game. The individual in the paper might suffer negative consequences as a result of being widely known for his quote and his behavior, but what are we supposed to do, not allow the press to depict protesters? What if a blog quoted a protester and featured a picture of him? Functionally, the blog isn't much different than the newspaper, and the lines between the two are blurry. The case of the protester quoted in the New York Times is realistic but rare, whereas there will likely be a great many quotes and pictures on SNS of people stating all sorts of beliefs and doing all sorts of things that may be worthy of judgment to some people at some time.

Even if we decide it fits into a certain legal or ethical category, this may not matter if the behavior remains secret, if technology enables it but doesn't bring it to the surface (i.e., the problem of policing secret inventories).

One possibility is that the law has simply been outstripped by technology. In place of the law, people respond to unethical uses of technology with, you guessed it, more technology. In this case, technology developers make it harder to take surreptitious pictures of people by making it difficult to turn off the camera shutter sound on camera phones. Other tech developers have engineered clothing that reflects light back at cameras in a way that ruins the resulting images. It could be argued that every step of the justice system can now function through technology, outside of the traditional channels of justice administration, and that the best thing we can do is acknowledge that reality and consider how best to create an ethical world given the imperfect state of things, at least for the foreseeable future.

Wednesday, December 14, 2016

Thoughts on Post-2016 Election America: Re-examining the "Fringe Fighting" Hypothesis

In my conversations with people (both online and face-to-face conversations) about the post-election media environment, I'm finding it increasingly difficult to maintain my position as a dispassionate optimist. Is this because the world itself is contradicting that position, or is it because I'm being met with more resistance from those around me? That's what I'm still trying to sort out.

Many of the conversations come back to the premise that America is somehow more hostile than it used to be (not just that our leaders are objectionable and/or dangerous, but that the increasing danger resides in our populace). There are also conversations about what particular politicians are doing, will do, or can do, but I want to set those aside for a moment and focus on premises relating to the American population and the extent and intensity of Americans' hostility toward one another. Previously, I've argued that the impression that we're a nation divided is largely an illusion, that the true conflict is mainly at the fringes, but that was before the election. So, I'd like to revisit that argument in light of discussions of public opinion, fake news, and a general sense of threat.

Essentially, my argument was that the strong disagreement we see in our culture is confined to small groups of individuals on either end of an ideological spectrum, groups that manifest themselves in highly visible ways. Although it can appear as though our entire culture is in a state of unrest (and is getting worse in this respect), this may be an illusion. To paraphrase myself:

This illusion occurs when we mistake uncommon, extreme online behaviors for exemplars. We implicitly or explicitly link hundreds or thousands of people actually stating a belief online (or, in this case, acting hostile toward other Americans online) with the behaviors and beliefs of a larger mainstream group that, while not actually stating the belief, has stated or acted in ways that make it clear that it believes some of the same things as the group that actually states the belief online. In the U.S. right now, the large groups to which we most often extrapolate are "liberals/Democrats" and "conservatives/Republicans." Dissimilarities between ideas held by the small group actually stating the belief (or actually being openly hostile) online and members of the large group who are not stating the belief (and are not actually being openly hostile online) are ignored in favor of whatever they have in common. This is justified on the grounds that what the small group and the large group have in common is thought to represent a shared, coherent ideological framework (see "Arguing with the Fringes" for further details).

In retrospect, I shouldn't have used the word "fringe" to describe these small groups. The word feels dismissive and judgmental, which is not what I intended. Really, I just want to make a statement about the size of the groups that are in strong disagreement with (and are hostile toward) other Americans. Still, the term "fringe fighting" has a certain ring to it, and I can't think of a suitable alternative word for these groups at the moment, so for the purposes of this post, I'll stick with "fringe."

Arguments/evidence for the Fringe Fighting hypothesis

Though there is more talk about social unrest than there was when I wrote "Arguing with the Fringes," this talk fits a "moral panic" narrative in which people become extremely alarmed over novel behavior that is rapidly becoming popular (often involving media use) and extrapolate to a future world in which the novel behavior radically changes our world for the worse. There are, of course, concerns about rapidly spreading novel behaviors that turn out to be justified, and the dismissal of such concerns as hysterical can have dire consequences. But there are also dire consequences to succumbing to overblown fears, namely rapid declines in interpersonal and institutional trust that are essential to functioning societies, in addition to the "boy who cried 'wolf'" problem (if one's concerns are found by others to be overblown, one loses credibility, forfeiting the ability to call others' attention to future threats). Given the similarities between the talk of social unrest and previous instances of moral panics, it at least seems worthwhile to consider the possibility that concern about Americans' hostility toward one another is a moral panic.

It is also important to ask, "What are we using as indicators of how 320 million or so Americans think or feel?" How Americans voted and what they say on the internet seem to be commonly used indicators. A large share of Americans did not vote in the last election, so it would be difficult to use 2016 voting behavior to assume anything about how "America" feels about anything. For those who did vote, whom they voted for is a pretty weak signal of any particular belief, as candidates, in effect, bundle together various disparate beliefs, and some votes are not intended as endorsements of anything the candidate stated or believed but are merely "protest votes."

What people say on the internet is also a weak signal of overall public opinion. For one thing, comparatively few people post about politics and social issues (roughly one-third of social media users, according to Pew). And many of those are posting information that is visible only to those in their immediate social circles (e.g., posting on Facebook). Such information is highly salient to individuals consciously or unconsciously forming beliefs about what other Americans believe, but it is hardly representative of Americans as a whole.

We may also question assumptions about the impact the hostility we're able to see is having. The extreme voices may have been largely filtered out because most of their friends unfollowed or hid them. The only people who don't filter out the extreme voices are the ones who already would have believed whatever the poster is trying to convince them of. What good is sharing a news story if very few people follow you, and those few people already knew about the news you're sharing? As a side note, it would be nice to have some information about actual audience and the practice of unfollowing to go along with the information about sharing and 'liking' information online.

Better evidence about what America, as a whole, believes can be found in the General Social Survey, which attempts to look at what ALL Americans believe rather than the few who contribute content to the internet. Data from the survey suggest that American public opinion on a variety of social issues has been relatively stable over the past few decades; an abrupt shift, though not impossible, would seem unlikely.

Finally, there is evidence that the growth of political animosity in the U.S. is a trend that pre-dates social media, so perhaps social media is just making visible what was already there. It should be noted that animosity (an attitude) is not the same thing as hostility (a behavior).

Arguments/evidence against the Fringe Fighting hypothesis

There is some evidence of growing distrust of the media. If you're not getting your information from the media (or whatever you define "mainstream media" as), where are you getting it? You could either get it directly from alternative news sources or get it via social media, which carries stories from those alternative news sources. Many existing measures of exposure to news have yet to catch up with the way we consume news. It is entirely possible, given the growing distrust in mass/mainstream media and the lack of good indicators of where Americans get their information, that Americans have quickly shifted toward consuming news stories that frame current events chiefly in terms of conflict between groups of Americans.

What does this have to do with hostility? Well, those intra-American-conflict news stories could play to whatever various groups are inclined to believe about various other groups, play to one's fears (climate change on the Left; undesirable social change on the Right; an unfair, "rigged" economy and government on both the Left and Right), decrease trust and empathy, increase fear and cynicism, and sow dissent. Very few people may be initially fighting with one another, but if those people can disseminate information so as to convince disinterested others that the people on the Other Side of the fight are a growing threat to everyone, they can effectively "enlist" those disinterested others in the fight. What starts as a fringe fight could quickly grow into something larger.

So What?

For a moment, let's assume that there is a problem, that a significant number of people in America disagree with one another to a significant degree. Is this necessarily bad? Disagreement isn't bad in and of itself; arguably, it's desirable so as to avoid the pitfalls of "groupthink." But strong disagreement could lead to incivility (which, some would say, is antithetical to empathy and compromise, and compromise seems like a prerequisite for a functioning democracy and economy) and censorship (which is antithetical to democracy and the progress of science and education). Incivility could lead to violent attacks (though I've heard at least one scholar argue that arguments can be uncivil and not be bad in these senses). Insofar as we see evidence of strong disagreement growing and/or leading to incivility, censorship, and violent attacks, then yes, it's bad.

Assuming we have a problem, what do we do about it? It's possible that the traditional channels by which we sought to address social unrest would no longer work within a decentralized, non-hierarchical information system like today's (or tomorrow's) internet. This is the "everyone finds their own facts" problem (a topic for an upcoming blog entry). Even if you engage in a dispassionate analysis of evidence and find support for the fringe fighting hypothesis, or evidence that people are consuming more and more biased information and wandering further from any objective truth, what are you going to do with that information? You might teach it in a class, but if people don't want to hear it, then people might just start distrusting teachers more. You might publish in an academic journal, but what good is that when the journal loses its sense of authority and credibility? If you publish in that academic journal and your research is covered by The New York Times, what good is that if fewer and fewer people trust The New York Times?

I'm left with a desire (quixotic as it may be) to try to step outside the problem. I know that many are fond of casting intra-American conflict (online and offline) as part of a global phenomenon, but here again, I think we're making a facile analogy, choosing to see the similarities and ignoring the many differences. Surely, not every country is experiencing precisely the same kind of online conflict problem that Americans are experiencing. I was reminded of this point while attending this year's Association of Internet Researchers annual conference in Berlin, where I was fortunate enough to present on a panel with researchers from Israel, Denmark, and the U.K. I was left with the notion that not all online discussion forums are the same with regard to conflict, that intra-group conflict is not inevitable in the digital era.

We might also step outside of the internet for a moment. Anecdotally, I've observed a few folks taking a break from social media and news media because the emotional pitch of online discourse became so shrill as to be unbearable. I'm reminded of an idea put forth in Joshua Rothman's book review in the New Yorker (as well as the book he was reviewing, I assume) that our face-to-face interactions with individuals and our feelings about the political groups to which those individuals belong are often in conflict. Short version: we love (or at least tolerate) our neighbors, but we hate the political groups to which they belong. The basic idea is that it is harder to hate a person in person. It will be important to see how our face-to-face interactions at work, at school, in families, and in public places progress alongside our perceptions of behavior online.

So, where to go from here? For starters, it seems worthwhile to examine the framing of news about current events: do we really see an uptick in exposure to intra-American conflict framing, or are our filter bubbles fooling us into thinking this? It's also important to understand more about the contexts in which online hostility occurs (a goal of my current research project examining hostile behavior on Reddit) and when and where it is associated with offline hostility.

Sunday, November 06, 2016

Nostalgorithms

Nostalgia is a feeling, to start with. We have songs and photographs we happen upon that conjure nostalgia. We have articulations of nostalgia, in poetry, in the lyrics of songs, in films, TV shows, novels. And now, we have algorithms that serve up content (songs, photographs, news) that make us feel nostalgic.

The “Your memories on Facebook” function fascinates me as a potential mechanism for conjuring nostalgia. It's hard to know precisely how the algorithm works - how and when it decides to bring a photo up from the past and ask you whether or not you'd like to share it again on your timeline - but it appears to bring up pictures from the same day of the year in previous years, most likely at least two years back. It also seems likely that photos/memories are chosen based on the number of "likes" or comments they received at the time.
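To make that guess concrete, here's a minimal sketch of the heuristic as I'm imagining it: photos from this day of the year, at least two years back, ranked by engagement. The field names, the two-year floor, and the weighting are all my assumptions, not Facebook's actual logic:

```python
from datetime import date

def candidate_memories(photos, today=None, min_years_back=2):
    """Return past photos from this day of the year, ranked by engagement.

    `photos` is a list of dicts with 'posted_on' (a date), 'likes', and
    'comments' keys -- a stand-in schema, not Facebook's real one.
    """
    today = today or date.today()
    candidates = [
        p for p in photos
        if p["posted_on"].month == today.month
        and p["posted_on"].day == today.day
        and today.year - p["posted_on"].year >= min_years_back
    ]
    # Weight comments more heavily than likes, on the guess that they
    # signal a memory people actually talked about.
    return sorted(candidates,
                  key=lambda p: p["likes"] + 2 * p["comments"],
                  reverse=True)
```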

That’s certainly one of the simplest approaches, and it works well enough, but perhaps not as well as it could. I’ve only really heard people talk about this aspect of the Facebook experience (as is the case with many aspects of any kind of technology) when it doesn’t work. People make note of the times when Facebook served up a picture of an ex or, worse, a deceased loved one. It’s clear that it doesn’t work perfectly, and yet it works well enough to persist.

Does that algorithm learn from the times it presents unpleasant memories to users? Probably. Perhaps it starts by serving up memories, allowing a certain period for the memories to "steep," and, after a period of trial and error, identifying certain types of memories that people elected to share. These types would be defined by objective qualities the shared memories had in common, qualities that set them apart from the non-shared memories. The algorithm is "dumb" in the sense that it doesn't know anything about the concept of nostalgia, or the individual users' lives, or about human emotion in general. But if you give it enough data, enough pictures, enough memories, it will probably get better at serving up pictures that you want to share, pictures that tap into something that we would call nostalgia. It learns not to serve up those pictures of your ex.
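A toy version of that trial-and-error loop might treat "did the user re-share the resurfaced memory?" as the label and a few objective qualities of the memory as features. The features below are invented for illustration; the point is just that a dumb classifier can learn "don't show the ex" from behavior alone:

```python
from sklearn.linear_model import LogisticRegression

# Each row: [years_since_photo, original_likes, tagged_person_still_interacts,
#            tagged_person_relationship_ended] -- all hypothetical features.
X = [
    [3, 40, 1, 0],
    [5, 12, 1, 0],
    [2, 55, 0, 1],   # photo with an ex the user no longer interacts with
    [4, 30, 1, 0],
    [6,  8, 0, 1],
]
y = [1, 1, 0, 1, 0]  # 1 = user re-shared the memory, 0 = dismissed it

model = LogisticRegression().fit(X, y)

# Score a new candidate memory before deciding whether to resurface it.
print(model.predict_proba([[3, 25, 0, 1]])[0][1])  # probability of a share
```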

Perhaps there's an unseen pattern or signature to nostalgia that could be revealed by the algorithm. It's not just a matter of how much time has passed that makes us nostalgic for something. It has to do with the specific contours of social relations and feelings, all of which leave an imperfect imprint in our social media archives (less and less imperfect as more and more of our social/emotional lives are channeled through social media).

Here's an example pattern using data that a social media company like Facebook could collect: optimal nostalgia resides in pictures of that person you appeared with in other pictures and exchanged frequent IMs with for a period of three years, after which there were fewer and fewer pictures of the two of you together and fewer IMs until the trail went cold, though you were still "liking" and occasionally commenting on their posts without reciprocation, suggesting a kind of unreciprocated longing for re-connection. Or maybe it takes into account the time of day at which a picture was posted (maybe people are more nostalgic about things that happened at night) or the place (maybe nostalgia clings to certain places more than others, or it requires a certain physical distance from our current location, at least 1,000 miles). Maybe it's all there, residing in the metadata.
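As a toy version of that signature, here's what such a scoring function might look like over interaction metadata. Every field, weight, and cutoff below is a guess meant to illustrate the idea, not a tuned value:

```python
def nostalgia_score(rel, photo):
    """Score a photo's nostalgia potential from relationship metadata.

    `rel` and `photo` are dicts with invented fields; the weights and
    cutoffs are guesses meant to illustrate the idea, not tuned values.
    """
    score = 0.0
    # A long run of shared photos and IMs that eventually tapered off.
    if rel["years_of_frequent_contact"] >= 3 and rel["contact_trail_went_cold"]:
        score += 2.0
    # One-sided likes/comments afterward: unreciprocated longing.
    if rel["your_likes_after_cooling"] > 0 and rel["their_likes_after_cooling"] == 0:
        score += 1.5
    # Night photos and far-away places might cling to nostalgia more.
    if photo["posted_hour"] >= 21:
        score += 0.5
    if photo["miles_from_current_location"] >= 1000:
        score += 1.0
    return score
```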

I think about nostalgia in terms of music, too. Pop music (and the movies/TV shows that use it) has worked with a crude version of the nostalgia principle for decades, if not centuries. Artists arrange a song in a familiar way, or include a certain familiar phrase or melody, so as to strike a particular emotional chord in the listener. Genres are revived in part out of nostalgia. But algorithms could give us something much more fine-grained, more personalized. Imagine that your entire music listening history was archived (as will be the case for people who start listening to music in the age of streaming services like Spotify, Pandora, or YouTube). The program would know that you really loved a particular song (you played it 100 times that one week in 2010) but then seem to have forgotten about it (you haven't played it since). One of life's great pleasures is hearing a song you loved but have not heard in years. Part of you knows the rhythm and the lyrics, but another part of you has forgotten them. Your ability to sing along with the first verse feels instinctual, but you can't remember exactly what the chorus was until it comes crashing in, and you think, "how could I have forgotten this?"
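With a full listening archive, that "forgotten favorite" is almost trivially detectable. A sketch, assuming a simple play log and inventing the thresholds (100 lifetime plays as a crude proxy for the one-intense-week pattern, five years of silence since):

```python
from collections import Counter
from datetime import date, timedelta

def forgotten_favorites(play_log, today=None,
                        min_burst_plays=100, min_silent_years=5):
    """Find songs once played obsessively, then not played for years.

    `play_log` is a list of (song_id, play_date) tuples -- a stand-in for
    a streaming service's archive; both thresholds are invented.
    """
    today = today or date.today()
    plays = Counter(song for song, _ in play_log)
    last_played = {}
    for song, d in play_log:
        last_played[song] = max(last_played.get(song, d), d)
    cutoff = today - timedelta(days=365 * min_silent_years)
    return [song for song, n in plays.items()
            if n >= min_burst_plays and last_played[song] < cutoff]
```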

Maybe the music program is integrated with your preferred social media app. The social media app has a rough indication of your mood and what's going on in your life. It can make a pretty good guess as to when you're ready for an upbeat song and when you're ready for something more introspective. Maybe it knows that you found a love song when you weren't in love and seemed to like it but couldn't listen to it too frequently because you weren't in love. And now that it knows you're in love, you're ready to hear it again. Maybe it knows that the lyrics to another song will be more poignant to you now that you're 40. 

There is a visceral revulsion at technology colonizing overly personal or artistic realms of human experience. All well and good if the algorithms make shopping more efficient, but nostalgia? Memories, and our experiences of them, are tied to identity. This may account for our need for nostalgia triggers to feel serendipitous. The idea of an algorithm writing poetry is a bit unsettling, but what about an algorithm that can conjure the feeling that inspires poets in the first place?


Saturday, August 27, 2016

The Power of the (online) Court of Public Opinion

I've been trying to keep track of instances in which some individual or organization makes a decision or takes an action that others consider to be unjust, leading those others to decry the decision or action online and to take some sort of action that, ultimately, constitutes a kind of judgment and/or punishment of the individual or organization. Here are a few examples:

1) A judge made a decision that certain members of the public believed to be unjust. Those members mobilized online and demanded the removal of the judge from his position. In one sense, the campaign didn't achieve its intended effect (the judge wasn't removed), but in another sense, it did: the judge decided not to preside over cases involving sexual assault and then decided not to preside over any kind of criminal case. The cause of his decision seems to be the "distraction" created by the online protesters. If the case hadn't risen to a certain level of prominence, if the protesters hadn't been so vocal, then it seems reasonable to conclude that the judge would have gone on presiding over criminal cases, including cases involving sexual assault.

2) A pharmaceutical company made a decision that certain members of the public felt was unjust. Those members decried the decision online and, in response, members of the U.S. Congress are discussing ways in which they might change regulation of the pharmaceutical industry. If those members of the public had not been as vocal, it seems unlikely that Congress would have taken up the issue. It's too early to say whether this attempt to change the ways in which companies are regulated will succeed, but it seems closer to succeeding than it was before the online outcry.

3) On my Facebook feed, I came across a post from a friend which featured a picture of an individual riding the subway and a description that explained the ways in which he sexually assaulted and/or harassed other individuals on the subway. The friend was re-posting it: she had no first-hand experience with the individual that I know of and it was unclear (as it is so many times with re-posts or "shares" on Facebook) whether the experience was second-hand, third-hand, fourth-hand, etc. Again, it's hard to know what the impact of this post will be: will people who would have otherwise been victims recognize this man and avoid him? Will people recognize him and report him, or shun him or verbally assault him? All outcomes seem plausible.

These instances seem to be getting more numerous and they prompt me to think about how justice is meted out in the online court of public opinion and how the way that happens changes the relationship of power and justice.

How do people respond to someone they feel has done something unjust but is not being punished by the traditional means of meting out justice (law enforcement; the legal system)?

1) A group of people on the internet harasses someone and/or makes it possible for others to harass the person online and offline.

2) A group of people on the internet ruins someone's (either a person's or a corporation's) reputation. It's hard to say how permanent the reputation-ruining really is. I think it probably varies a lot from individual to individual, but we treat it as a given that from now until the end of time, now that you've been defamed online, you will be unable to get a job, will be unable to date, will have to move, will have to change your name, etc. It's easy to think of examples of permanent reputation damage (The New York Times Magazine had an excellent piece last year about how this happens to folks who post hurtful tweets), but I'm quite certain there are instances of temporary reputation damage that we're forgetting. And that's my point: we can't just rely on memory to evaluate the impact of online reputation-ruining because we won't remember the instances in which reputation damage was temporary.

It's also hard to say at what point a person's reputation really becomes ruined. What is the difference between a large number of people expressing their displeasure at your actions and reputation-ruining? Consider a case in which a person has done something that ticked off many people online. Those people have written a lot of bad things about the person. Those bad things only have an impact if the people reading them take them at face value. But this must vary. The impact of those bad things likely depends on how many good things may counter-balance the bad things. It also depends on how reputable the sources of the bad things are (are we talking The New York Times or someone's blog, a blog that pretty obviously has a bias to it or is the work of someone with a personal grudge?). It also depends on the reader's goal (are they thinking of hiring a person? Considering dating them? Just randomly curious about them?).

Then there's another factor which really interests me: a kind of general savvy-ness on the part of the reader about what he/she/they read about anyone online. It seems likely that in the early history of Google, blogs, and social media, the average internet user would be inclined to believe what they read online regardless of its source (maybe this habit carried over from the era of mainstream information, in which readers assumed some baseline level of veracity because of the gate-keeping function of mainstream sources and the extent to which they were accountable for publishing untruths, given that they were trying to protect their public reputations). My bet is that as time goes on and people run into more and more inaccurate, biased, or misleading information from non-reputable sources like social media posts and blogs, they will learn to discount information from these sources. If this is the case, a disparaging social media post in 2007 (assuming a non-savvy reading public) would have a far greater impact on the subject's reputation than a disparaging social media post in 2016 (assuming a somewhat more savvy, skeptical reading public). The savvy-ness and skepticism of the reading public must be taken into account when considering the actual impact of online disparagement on one's reputation.

3) A group of people on the internet provides enough pressure to make the person (or corporation) change their behavior. You don't have to engage in any kind of harassment or reputation-ruining to have this effect. Also, I wouldn't really call it a punishment, but it is a kind of judgment. The person or company at the center of it may just make a kind of calculation: "would I rather persist in my unpopular behavior now that it is known to so many people and unpopular among so many people, or should I change my behavior?" They often make the perfectly understandable decision to alter their behavior just so that they can get on with their lives. They don't have a moral high ground from which they can claim that they were being harassed or defamed. A lot of people didn't like what they were doing and publicly expressed this displeasure (which they are entitled to do), and this made life tough for them, so they changed their behavior.

The traditional justice system has flaws: it's slow, and sometimes it gets things wrong. The court of public opinion has flaws too: people's emotions, and the extent to which an opinion is shared by others similar to them, shape their reasoning. The court of public opinion also gets things wrong, but in a different way. Whereas traditional, established power structures and hierarchies often bias traditional justice systems, emotion and ingroup/outgroup tribalism bias the court of public opinion.

The court of public opinion has a certain appeal to it. It feels more democratic than the justice system. It feels like the people have the power while the justice system feels like (often un-elected) elites have the power. There's the sense that the court of public opinion compensates for the failings of the traditional justice system.

When thinking about any online phenomenon, I always like to try to answer the question, "is this really all that new?" or "what is it, specifically, that is new about it?"

I'm no historian of justice, but I'm pretty sure that the court of public opinion is not new at all. There has always been this kind of shadow justice system. You could do real violence to someone's reputation in a small village if you and some other folks disapproved of what they were saying or doing. My sense is that the group of people doing the judging and punishing is different online than it would be offline. In the offline court of public opinion, you're tried by members of your community. In the online court of public opinion, you're tried by groups of people who a) have internet access, and b) have the time and motivation to read about and post about matters of justice.

This leads me to ask a couple of questions (maybe this is the beginning of yet another research agenda!). First, who are those people posting about matters of justice? How many of them are there? What are their beliefs? Where do they come from? My hunch is that public opinion relating to matters of justice, as it manifests itself online, is really the opinion of a relatively small (10%?) chunk of the public that posts about events happening all over the world (or at least in their country), and that it's, on average, younger and wealthier than the average citizen. It'll be tough to know much about these folks because so much posting is anonymous or pseudonymous, but who knows, we might be able to at least start to put together some answers.

Second, I'm really curious about that "reader savvy-ness" variable. We tend to focus on those posting online, but what about those reading those posts? There might be a certain understanding that develops on the part of the reader, a certain heuristic for identifying sources of more biased, more emotional information (Twitter) and less biased, less emotional information (Wikipedia). Information is curated on Twitter and Wikipedia in different ways: it's not just one big homogeneous internet, and it's hard to believe people treat it as such. Maybe lots of people already use heuristics like this, maybe not. That's why we do the research.